Oil Reservoir Production Optimization using Single Shooting and ESDIRK Methods
DEFF Research Database (Denmark)
Capolei, Andrea; Völcker, Carsten; Frydendall, Jan
2012-01-01
Conventional recovery techniques enable recovery of 10-50% of the oil in an oil field. Advances in smart well technology and enhanced oil recovery techniques enable significantly larger recovery. To realize this potential, feedback model-based optimal control technologies are needed to manipulate the injections and oil production such that flow is uniform in a given geological structure. Even in the case of conventional water flooding, feedback-based optimal control technologies may enable higher oil recovery than conventional operational strategies. The optimal control problems that must be solved are large-scale problems and require specialized numerical algorithms. In this paper, we combine a single shooting optimization algorithm based on sequential quadratic programming (SQP) with explicit singly diagonally implicit Runge-Kutta (ESDIRK) integration methods and the continuous adjoint method...
Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine
Directory of Open Access Journals (Sweden)
Bambang Wahono
2014-01-01
This research presents a method to construct a combustion model and a method to optimize some control parameters of a diesel engine in order to develop a model-based control system. The purpose of the model is to appropriately manage some control parameters to obtain the values of fuel consumption and emissions as the engine output objectives. A stepwise method considering multicollinearity was applied to construct the combustion model with a polynomial model. Using the experimental data of a single cylinder diesel engine, the model of power, BSFC, NOx, and soot on multiple-injection diesel engines was built. The proposed method successfully developed a model that describes the control parameters in relation to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is required to utilize this method in finding optimal engine operating conditions efficiently, besides the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate control parameters that optimize fuel consumption and emissions based on the model. The proposed method is able to calculate control parameters efficiently to optimize the evaluation items based on the model. Finally, the model combined with PSO was compiled on a microcontroller.
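The PSO step described above is straightforward to sketch. The snippet below is a minimal, illustrative implementation; the quadratic `model` standing in for the fitted combustion model, its optimum at (3.0, 120.0), and all parameter names and bounds are hypothetical, not taken from the paper.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over the box `bounds` with a basic PSO loop."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize particle positions inside the bounds, velocities at zero.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia plus attraction to personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical surrogate: a quadratic "BSFC" model of two control parameters
# (say injection timing and injection pressure), minimized at (3.0, 120.0).
model = lambda x: (x[0] - 3.0) ** 2 + 0.01 * (x[1] - 120.0) ** 2
best, val = pso(model, [(0.0, 10.0), (50.0, 200.0)])
```

A loop this small compiles readily to a microcontroller target once the surrogate model's coefficients are fixed.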
Directory of Open Access Journals (Sweden)
Deep Pooja
2016-03-01
This data article contains the data related to the research article "Characterization, biorecognitive activity and stability of WGA grafted lipid nanostructures for the controlled delivery of rifampicin" (Pooja et al., 2015) [1]. In the present study, SLN were prepared by a single emulsification-solvent evaporation method, and the various steps of SLN preparation are shown in a flow chart. The preparation of SLN was optimized for various formulation variables, including the type and quantity of lipid, surfactant, amount of co-surfactant, and volume of organic phase. Similarly, the effect of variables related to the homogenization, sonication, and stirring processes on the size and surface potential of SLN was determined and optimized.
van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.
2003-01-01
Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method
Directory of Open Access Journals (Sweden)
Olimpia PECINGINA
2016-05-01
Throughout the ages, man has continuously been involved with the process of optimization. In its earliest form, optimization consisted of unscientific rituals and prejudices like pouring libations and sacrificing animals to the gods, consulting the oracles, observing the positions of the stars, and watching the flight of birds. When the circumstances were appropriate, the timing was thought to be auspicious (or optimum) for planting the crops or embarking on a war.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
International Nuclear Information System (INIS)
Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; Tooren, Michel van
2013-01-01
In engineering, there exist both aleatory uncertainties, due to the inherent variation of the physical system and its operational environment, and epistemic uncertainties, due to lack of knowledge, which can be reduced with the collection of more data. To analyze the uncertain distribution of the system performance under both aleatory and epistemic uncertainties, combined probability and evidence theory can be employed to quantify the compound effects of the mixed uncertainties. The existing First Order Reliability Method (FORM) based Unified Uncertainty Analysis (UUA) approach nests the optimization-based interval analysis within the improved Hasofer–Lind–Rackwitz–Fiessler (iHLRF) algorithm based Most Probable Point (MPP) searching procedure, which is computationally prohibitive for complex systems and may encounter convergence problems as well. Therefore, in this paper it is proposed to use general optimization solvers to search for the MPP in the outer loop and then reformulate the double-loop optimization problem into an equivalent single-level optimization (SLO) problem, so as to simplify the uncertainty analysis process, improve the robustness of the algorithm, and alleviate the computational complexity. The effectiveness and efficiency of the proposed method are demonstrated with two numerical examples and one practical satellite conceptual design problem. -- Highlights: ► Uncertainty analysis under mixed aleatory and epistemic uncertainties is studied. ► A unified uncertainty analysis method is proposed with combined probability and evidence theory. ► The traditional nested analysis method is converted to single-level optimization for efficiency. ► The effectiveness and efficiency of the proposed method are demonstrated with three examples.
Benson, Lauren C; Clermont, Christian A; Osis, Sean T; Kobsar, Dylan; Ferber, Reed
2018-04-11
Accelerometers have been used to classify running patterns, but classification accuracy and computational load depend on signal segmentation and feature extraction. Stride-based segmentation relies on identifying gait events, a step avoided by using window-based segmentation. For each segment, discrete points can be extracted from the accelerometer signal, or advanced features can be computed. Therefore, the purpose of this study was to examine how different segmentation and feature extraction methods influence the accuracy and computational load of classifying running conditions. Forty-four runners ran at their preferred speed and 25% faster than preferred while an accelerometer at the lower back recorded 3D accelerations. Computational load was determined as the accelerometer signal was segmented into single-stride and five-stride segments, and corresponding small and large windows, with discrete points extracted from the single-stride segments and advanced features computed from all four segment types. Each feature set was used to classify speed conditions, and classification accuracy was recorded. Computational load and classification accuracy were compared across all feature sets using a repeated-measures MANOVA, with follow-up t-tests to compare feature type (discrete vs. advanced), segmentation method (stride- vs. window-based), and segment size (small vs. large), using a Bonferroni-adjusted α = 0.003. The five-stride (97.49 (±4.57)%) and large-window advanced (97.23 (±5.51)%) feature sets produced the greatest classification accuracy, but the large-window advanced feature set had a lower computational load (0.0041 (±0.0002) s) than the stride-based feature sets. Therefore, using a few advanced features and large overlapping window sizes yields the best performance in both classification accuracy and computational load.
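Window-based segmentation with simple per-window "advanced" features can be sketched as follows; the window length, 50% overlap, and the synthetic sine "acceleration" trace are illustrative assumptions, not the study's actual feature set.

```python
import math

def window_features(signal, win, overlap=0.5):
    """Segment `signal` into overlapping fixed-length windows (no gait-event
    detection needed) and compute mean, standard deviation, and RMS per window."""
    step = max(1, int(win * (1 - overlap)))
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        mean = sum(seg) / win
        var = sum((x - mean) ** 2 for x in seg) / win
        rms = math.sqrt(sum(x * x for x in seg) / win)
        feats.append((mean, math.sqrt(var), rms))
    return feats

# Hypothetical 1-D acceleration trace of 1000 samples; 200-sample windows
# with 50% overlap yield 9 feature vectors for a downstream classifier.
sig = [math.sin(0.1 * i) for i in range(1000)]
f = window_features(sig, win=200)
```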
Chen, Nan; Lee, J. Jack
2013-01-01
Simon’s two-stage design is commonly used in phase II single-arm clinical trials because of its simplicity and smaller sample size under the null hypothesis compared to the one-stage design. Some studies extend this design to accommodate more interim analyses (i.e., three-stage or four-stage designs). However, most of these studies, together with the original Simon’s two-stage design, are based on the exhaustive search method, which is difficult to extend to high-dimensional, general multi-stage designs. In this study, we propose a simulated annealing (SA)-based design to optimize the early stopping boundaries and minimize the expected sample size for multi-stage or continuous monitoring single-arm trials. We compare the results of the SA method, the decision-theoretic method, the predictive probability method, and the posterior probability method. The SA method can reach the smallest expected sample sizes in all scenarios under the constraints of the same type I and type II errors. The expected sample sizes from the SA method are generally 10–20% smaller than those from the posterior probability method or the predictive probability method, and are slightly smaller than those from the decision-theoretic method in almost all scenarios. The SA method offers an excellent alternative in designing phase II trials with continuous monitoring. PMID:23545075
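The simulated-annealing search the abstract describes can be sketched generically: accept worse candidates with probability exp(-Δ/T) and cool the temperature geometrically. The toy integer `cost` function standing in for an expected-sample-size surface, the `±1` neighbor move, and all tuning constants are hypothetical; the paper's actual objective involves stopping boundaries under type I/II error constraints.

```python
import math, random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000, seed=1):
    """Generic simulated-annealing loop: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), cool geometrically."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling  # geometric cooling schedule
    return best, best_c

# Toy stand-in for an expected-sample-size surface over one design variable;
# the minimum (value 3) sits at x = 7.
cost = lambda x: (x - 7) ** 2 + 3
step = lambda x, rng: x + rng.choice([-1, 1])
b, bc = anneal(cost, step, x0=0)
```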
Single Shooting and ESDIRK Methods for adjoint-based optimization of an oil reservoir
DEFF Research Database (Denmark)
Capolei, Andrea; Völcker, Carsten; Frydendall, Jan
2012-01-01
Conventional recovery techniques enable recovery of 10-50% of the oil in an oil field. Advances in smart well technology and enhanced oil recovery techniques enable significantly larger recovery. To realize this potential, feedback model-based optimal control technologies are needed to manipulate...
Le, Hieu The
This thesis develops a new method to detect delaminations in composite laminates using a combination of the finite element method, artificial neural networks, and genetic algorithms. Next, this newly developed method is applied to successfully solve delamination detection problems. Delaminations in a composite laminate with various sizes and locations are considered in the present study. The improved layerwise shear deformation theory is implemented into the finite element method and used to calculate responses of laminates with single and multiple delaminations. Mappings between the natural frequencies and delamination characteristics are first determined from the developed models. These data are then used to train artificial neural networks of the multilayer perceptron type using back-propagation. These trained artificial neural networks are in turn used as an approximate tool to calculate the responses of the delaminated laminates and to feed data to the delamination detection process. Two different approaches for handling the neural network models are applied in the work and are presented for comparison. The delamination detection problem is formulated as an optimization problem with mixed-type design variables. A genetic algorithm, which is a guided probabilistic search technique based on the simulation of Darwin's principle of evolution and natural selection, is developed to solve this optimization problem. Single through-the-width delamination, single internal delamination, and multiple through-the-width delaminations are separately considered for detection study. Finally, the application is extended to the most challenging problem, which is the detection of general delamination. Various factors affecting the detection process, such as the finite element convergence factor and the laminate geometry factor, are also examined. Case studies are made and the findings are summarized in detail in each chapter of the dissertation. It is found that the newly developed
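A minimal genetic algorithm over a mixed integer/continuous chromosome, in the spirit of the detection formulation above, might look like the following. The `target` residual (standing in for the mismatch between measured and network-predicted natural frequencies), the gene ranges (interface index, normalized delamination length), and all GA settings are illustrative assumptions, not the thesis's actual operators.

```python
import random

def ga(fitness, int_range, float_range, pop_size=40, gens=60, pmut=0.3, seed=2):
    """Tiny genetic algorithm over a mixed (integer, float) chromosome:
    elitist truncation selection, uniform crossover, per-gene mutation."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.randint(*int_range), rng.uniform(*float_range)]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]      # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # Uniform crossover: each gene inherited from either parent.
            child = [a[0] if rng.random() < 0.5 else b[0],
                     a[1] if rng.random() < 0.5 else b[1]]
            if rng.random() < pmut:          # mutate the integer gene
                child[0] = rng.randint(*int_range)
            if rng.random() < pmut:          # mutate the float gene
                child[1] += rng.gauss(0, 0.05 * (float_range[1] - float_range[0]))
                child[1] = min(max(child[1], float_range[0]), float_range[1])
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Hypothetical residual: true delamination at interface 3, normalized length 0.25.
target = lambda x: abs(x[0] - 3) + (x[1] - 0.25) ** 2
best = ga(target, int_range=(1, 8), float_range=(0.0, 1.0))
```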
Tiwari, S. P.; Singh, S.; Kumar, A.; Kumar, K.
2016-05-01
In the present work, an optimized solvothermal method has been chosen to synthesize Er3+ activator ions singly doped into a La2O3 host matrix. The sample is annealed at 500 °C in order to remove moisture and other organic impurities. The sample is characterized using XRD and FESEM to determine the phase and surface morphology. The observed particle size is almost 80 nm, with a spherical, agglomerated shape. Upconversion spectra are recorded at room temperature using a 976 nm diode laser excitation source, and emission peaks in the green and red regions are consequently observed. The color coordinate diagram shows that the present material may be applicable in different light emitting sources.
Gentilini, Fabio; Turba, Maria E
2014-01-01
A novel technique, called Divergent, for single-tube real-time PCR genotyping of point mutations without the use of fluorescently labeled probes has recently been reported. This novel PCR technique utilizes a set of four primers and a particular denaturation temperature for simultaneously amplifying two different amplicons which extend in opposite directions from the point mutation. The two amplicons can readily be detected using melt curve analysis downstream of a closed-tube real-time PCR. In the present study, some critical aspects of the original method were specifically addressed to further develop the technique for genotyping the DNM1 c.G767T mutation responsible for exercise-induced collapse in Labrador retriever dogs. The improved Divergent assay was easily set up using a standard two-step real-time PCR protocol. The melting temperature difference between the mutated and the wild-type amplicons was approximately 5°C, which could be promptly detected by all the thermal cyclers. The upgraded assay yielded accurate results with 157 pg of genomic DNA per reaction. This optimized technique represents a flexible and inexpensive alternative to the minor groove binder fluorescently labeled method and to high resolution melt analysis for high-throughput, robust and cheap genotyping of single nucleotide variations.
Methods of mathematical optimization
Vanderplaats, G. N.
The fundamental principles of numerical optimization methods are reviewed, with an emphasis on potential engineering applications. The basic optimization process is described; unconstrained and constrained minimization problems are defined; a general approach to the design of optimization software programs is outlined; and drawings and diagrams are shown for examples involving (1) the conceptual design of an aircraft, (2) the aerodynamic optimization of an airfoil, (3) the design of an automotive-engine connecting rod, and (4) the optimization of a 'ski-jump' to assist aircraft in taking off from a very short ship deck.
Stochastic optimization methods
Marti, Kurt
2005-01-01
Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
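One of the listed techniques, stochastic approximation, can be illustrated with a short Robbins-Monro iteration; the noisy quadratic objective and the step-size constant below are assumptions made for the sketch, not taken from the book.

```python
import random

def robbins_monro(noisy_grad, x0, steps=20000, a=1.0, seed=3):
    """Stochastic approximation: x_{n+1} = x_n - a_n * G(x_n) with
    step sizes a_n = a / n, which satisfy the Robbins-Monro conditions
    (sum a_n diverges, sum a_n^2 converges)."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, steps + 1):
        x -= (a / n) * noisy_grad(x, rng)
    return x

# Minimize E[(x - 5)^2], observed only through a noisy gradient 2(x - 5) + noise.
g = lambda x, rng: 2 * (x - 5) + rng.gauss(0, 1)
x_hat = robbins_monro(g, x0=0.0)
```

Despite never seeing an exact gradient, the iterate converges to the true minimizer as the step sizes decay.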
Moezi, Seyed Alireza; Zakeri, Ehsan; Zare, Amin
2018-01-01
In this study, the number, location, and depth of cracks created in several Euler-Bernoulli beams, such as a simple beam and a more complex multi-step beam, are investigated. The location and depth of the created cracks are determined using the hybrid Cuckoo-Nelder-Mead Optimization Algorithm (COA-NM) with high accuracy. The natural frequencies of the cracked beams are determined by solving frequency response equations and by performing modal test experiments. Results of COA-NM show higher accuracy and convergence speed compared with other methods such as GA-NM, PSO-NM, GA, PSO, COA, and several previous studies. The amount of computation performed by COA-NM to achieve this accuracy is much less than that of the other methods.
Practical methods of optimization
Fletcher, R
2013-01-01
Fully describes optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. Also provides theoretical background which provides insights into how methods are derived. This edition offers rev
Directory of Open Access Journals (Sweden)
Elisabeth Andre-Garnier
The objective was to develop a method of HCV genome sequencing that allows simultaneous genotyping and NS5A inhibitor resistance profiling. In order to validate the use of a unique RT-PCR for genotypes 1-5, 142 plasma samples from patients infected with HCV were analysed. The NS4B-NS5A partial region was successfully amplified and sequenced in all samples. In parallel, partial NS3 sequences were obtained and analyzed for genotyping. Phylogenetic analysis showed concordance of genotypes and subtypes with a bootstrap >95% for each type cluster. NS5A resistance mutations were analyzed using the Geno2pheno [hcv] v0.92 tool and compared to the recently published list of known Resistance-Associated Substitutions. In conclusion, this tool allows determination of HCV genotypes and subtypes and identification of NS5A resistance mutations. This single method can be used to detect pre-existing resistance mutations in NS5A before treatment and to monitor the emergence of resistant viruses during treatment in the major HCV genotypes (G1-5) in the EU and the US.
Hoffman, Tim
Hexagonal boron nitride (hBN) is a wide-bandgap III-V semiconductor that has seen new interest due to the development of other III-V LED devices and the advent of graphene and other 2-D materials. For device applications, high quality, low defect density materials are needed. Several applications for hBN crystals are being investigated, including as a neutron detector and as an interference-less infrared-absorbing material. Isotopically enriched crystals were utilized for enhanced propagation of phonon modes. These applications exploit the unique physical, electronic, and nanophotonic properties of bulk hBN crystals. In this study, bulk hBN crystals were grown by the flux method using a molten Ni-Cr solvent at high temperature (1500°C) and atmospheric pressure. The effects of growth parameters, source materials, and gas environment on the crystals' size, morphology, and purity were established and controlled, and the reliability of the process was greatly improved. Single-crystal domains exceeding 1 mm in width and 200 μm in thickness were produced and transferred to handle substrates for analysis. Grain size dependence with respect to dwell temperature, cooling rate, and cooling temperature was analyzed and modeled using response surface methodology. Most significantly, crystal grain width was predicted to increase linearly with dwell temperature, with single-crystal domains exceeding 2 mm at 1700°C. Isotopically enriched 10B and 11B hBN crystals were produced using a Ni-Cr-B flux method, and their properties investigated. 10B concentration was evaluated using SIMS and correlated to the shift in the Raman peak of the E2g mode. Crystals with enrichment of 99% 10B and >99% 11B were achieved, with corresponding Raman peaks at 1392.0 cm-1 and 1356.6 cm-1, respectively. Peak FWHM also decreased as isotopic enrichment approached 100%, with widths as low as 3.5 cm-1 achieved, compared to 8.0 cm-1 for natural-abundance samples. Defect selective etching was
Directory of Open Access Journals (Sweden)
Sanehiro Wada
2012-01-01
This paper presents a new estimation method to determine the optimal number of transducers using an Ultrasonic Velocity Profile (UVP) instrument for accurate flow rate measurement downstream of a single elbow. Since UVP can measure velocity profiles over a pipe diameter and calculate the flow rate by integrating these velocity profiles, it is also expected to obtain an accurate flow rate using multiple transducers under the non-developed flow conditions formed downstream of an elbow. The new estimation method employs the wave number of velocity profile fluctuations along a circle on a pipe cross-section, obtained using the Fast Fourier Transform (FFT). The optimal number of transducers is estimated based on the sampling theorem. To evaluate this method, a preliminary experiment and numerical simulations using Computational Fluid Dynamics (CFD) are conducted. The evaluating regions of velocity profiles are located at 3 times the pipe diameter downstream of the elbow for the experiment, and at 1 and … pipe diameters for the simulations, respectively. Reynolds numbers for the experiment and simulations are set at … and …, respectively. These results indicate the efficiency of this new method.
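The idea of choosing the transducer count from the dominant circumferential wavenumber via the sampling theorem can be sketched as follows. The plain DFT, the synthetic 3-cycle velocity profile, and the `2k + 1` rule are an illustrative reading of the method, not the paper's exact procedure.

```python
import math

def dominant_wavenumber(samples):
    """Return the dominant nonzero wavenumber of a periodic profile
    sampled at equal angles around the pipe circumference (plain DFT)."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k

def min_transducers(samples):
    """Sampling theorem: resolving wavenumber k around the circumference
    requires more than 2k equally spaced samples, i.e. at least 2k + 1."""
    return 2 * dominant_wavenumber(samples) + 1

# Hypothetical profile with a 3-cycle circumferential fluctuation
# superimposed on a uniform mean velocity.
profile = [1.0 + 0.2 * math.cos(3 * 2 * math.pi * i / 64) for i in range(64)]
n_needed = min_transducers(profile)
```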
Energy Technology Data Exchange (ETDEWEB)
Alani, S; Honig, N; Schlocker, A; Corn, B [Tel Aviv Medical Center, Tel Aviv (Israel)
2016-06-15
Purpose: This study utilizes the Taguchi Method to evaluate the VMAT planning parameters of single-isocenter treatment plans for multiple brain metastases. An optimization model based on Taguchi and the utility concept is employed to optimize the planning parameters, including arc arrangement, calculation grid size, calculation model, and beam energy, with respect to multiple performance characteristics, namely conformity index and dose to normal brain. Methods: Treatment plans, each with 4 metastatic brain lesions, were planned using the single-isocenter technique. The collimator angles were optimized to avoid open areas. In this analysis four planning parameters (a–d) were considered: (a) Arc arrangements: set1: gantry 181cw179, couch 0; gantry 179ccw0, couch 315; and gantry 0ccw181, couch 45; set2: set1 plus an additional arc: gantry 0cw179, couch 270. (b) Energy: 6 MV; 6 MV-FFF. (c) Calculation grid size: 1 mm; 1.5 mm. (d) Calculation models: AAA; Acuros. Treatment planning was performed in Varian Eclipse (ver. 11.0.30). A suitable orthogonal array was selected (L8) to perform the experiments. After conducting the experiments with the combinations of planning parameters, the conformity index (CI) and the normal-brain-dose S/N ratio for each parameter were calculated. Optimum levels for the multiple response optimizations were determined. Results: We determined that the factors most affecting the conformity index are arc arrangement and beam energy. These tests were also used to evaluate dose to normal brain. In these evaluations, the significant parameters were grid size and calculation model. Using the utility concept, we determined the combination of the four factors tested in this study that most significantly influences the quality of the resulting treatment plans: (a) arc arrangement set2, (b) 6 MV, (c) 1 mm calculation grid, (d) Acuros algorithm. Overall, the dominant influences on plan quality are (a) arc arrangement and (b) beam energy. Conclusion: Results were analyzed using ANOVA and
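The S/N ratios used to rank the factors can be computed with the standard Taguchi formulas for the two response types named above; the two-run dose and conformity-index values below are made-up numbers for illustration only.

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio when a lower response is better (e.g. normal-brain dose):
    -10 * log10(mean of y^2)."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

def sn_larger_the_better(values):
    """Taguchi S/N ratio when a higher response is better (e.g. conformity index):
    -10 * log10(mean of 1/y^2)."""
    return -10 * math.log10(sum(1 / (v * v) for v in values) / len(values))

# Hypothetical responses from two runs of an orthogonal array sharing a factor level:
dose = [4.1, 3.8]    # Gy to normal brain, smaller is better
ci = [0.92, 0.95]    # conformity index, larger is better
sn_dose = sn_smaller_the_better(dose)
sn_ci = sn_larger_the_better(ci)
```

Averaging such S/N values per factor level, then picking the level with the highest S/N, is the standard Taguchi ranking step the abstract refers to.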
International Nuclear Information System (INIS)
Gil, Sandra; Loos-Vollebregt, Margaretha T.C. de; Bendicho, Carlos
2009-01-01
A headspace single-drop microextraction (HS-SDME) method has been developed in combination with electrothermal vaporization inductively coupled plasma mass spectrometry (ETV-ICP-MS) for the simultaneous determination of As, Sb, Bi, Pb, Sn and Hg in aqueous solutions. Vapor generation is carried out in a 40 mL volume closed-vial containing a solution with the target analytes in hydrochloric acid and potassium ferricyanide medium. Hydrides (As, Sb, Bi, Pb, Sn) and Hg vapor are trapped onto an aqueous single drop (3 μL volume) containing Pd(II), followed by the subsequent injection in the ETV. Experimental variables such as medium composition, sodium tetrahydroborate (III) volume and concentration, stirring rate, extraction time, sample volume, ascorbic acid concentration and palladium amount in the drop were fully optimized. The limits of detection (LOD) (3σ criterion) of the proposed method for As, Sb, Bi, Pb, Sn and Hg were 0.2, 0.04, 0.01, 0.07, 0.09 and 0.8 μg/L, respectively. Enrichment factors of 9, 85, 138, 130, 37 and 72 for As, Sb, Bi, Pb, Sn and Hg, respectively, were achieved in 210 s. The relative standard deviations (N = 5) ranged from 4 to 8%. The proposed HS-SDME-ETV-ICP-MS method has been applied for the determination of As, Sb, Bi, Pb, Sn and Hg in NWRI TM-28.3 certified reference material.
Stochastic optimization methods
Marti, Kurt
2008-01-01
Optimization problems arising in practice involve random model parameters. This book features many illustrations, several examples, and applications to concrete problems from engineering and operations research.
Optimization of genetic analysis for single cell
Directory of Open Access Journals (Sweden)
hussein mouawia
2012-03-01
Molecular genetic analysis of cells microdissected by laser is a method for obtaining pure, uncontaminated DNA or RNA as starting material. Our study focuses on pre-PCR (polymerase chain reaction) techniques for the amplification of DNA from a single cell (a leukocyte isolated from human blood after laser microdissection) and aims to optimize the yield of DNA extracted from this cell so that it can be amplified without errors and provide reliable genetic analyses. This study has allowed us to shorten the duration of cell lysis in order to perform the genomic amplification step of PEP (primer extension preamplification) directly after lysis on the same day, to improve the quality of genomic amplification, and to eliminate the purification step of the PEP product, a step that carries a risk of contamination and of loss of genetic material through manipulation. This approach has shown that combining at least 3 STR (short tandem repeat) markers for genetic analysis of a single cell improves the efficiency and accuracy of PCR and minimizes allele loss (allele drop-out; ADO). This protocol can be applied on a large scale and is an effective means suitable for genetic testing and molecular diagnosis from an isolated single cell (cancerous or fetal).
Analytical methods of optimization
Lawden, D F
2006-01-01
Suitable for advanced undergraduates and graduate students, this text surveys the classical theory of the calculus of variations. It takes the approach most appropriate for applications to problems of optimizing the behavior of engineering systems. Two of these problem areas have strongly influenced this presentation: the design of the control systems and the choice of rocket trajectories to be followed by terrestrial and extraterrestrial vehicles.Topics include static systems, control systems, additional constraints, the Hamilton-Jacobi equation, and the accessory optimization problem. Prereq
Optimization methods for logical inference
Chandru, Vijay
2011-01-01
Merging logic and mathematics in deductive inference: an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks... it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in
Optimization methods in structural design
Rothwell, Alan
2017-01-01
This book offers an introduction to numerical optimization methods in structural design. Employing a readily accessible and compact format, the book presents an overview of optimization methods, and equips readers to properly set up optimization problems and interpret the results. A ‘how-to-do-it’ approach is followed throughout, with less emphasis at this stage on mathematical derivations. The book features spreadsheet programs provided in Microsoft Excel, which allow readers to experience optimization ‘hands-on.’ Examples covered include truss structures, columns, beams, reinforced shell structures, stiffened panels and composite laminates. For the last three, a review of relevant analysis methods is included. Exercises, with solutions where appropriate, are also included with each chapter. The book offers a valuable resource for engineering students at the upper undergraduate and postgraduate level, as well as others in the industry and elsewhere who are new to these highly practical techniques.Whi...
An Efficient Algorithm for Solving Single Variable Optimization ...
African Journals Online (AJOL)
Many methods are available for finding x* ∈ Rn which minimizes the real-valued function f(x), some of which are the Fibonacci Search Algorithm, Quadratic Search Algorithm, Convergence Algorithm, and Cubic Search Algorithm. In this research work, existing algorithms used in single variable optimization problems are critically ...
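A representative single-variable method of this family is golden-section search, sketched below under the usual assumption that f is unimodal on the bracket; the example function is arbitrary.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal single-variable function on [a, b] by
    golden-section search; needs only function values, no derivatives."""
    inv_phi = (math.sqrt(5) - 1) / 2  # ≈ 0.618, the golden ratio conjugate
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]: shrink from the right.
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            # Minimum lies in [c, b]: shrink from the left.
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: the minimum of (x - 2)^2 + 1 on [0, 5] is at x = 2.
x_star = golden_section(lambda x: (x - 2) ** 2 + 1, 0.0, 5.0)
```

Like the Fibonacci search it approximates, each iteration reuses one interior point, so only one new function evaluation is needed per step.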
Optimal control of a single qubit by direct inversion
International Nuclear Information System (INIS)
Wenin, M.; Poetz, W.
2006-01-01
Optimal control of a driven single dissipative qubit is formulated as an inverse problem. We show that direct inversion is possible which allows an analytic construction of optimal control fields. Exact inversion is shown to be possible for dissipative qubits which can be described by a Lindblad equation. It is shown that optimal solutions are not unique. For a qubit with weak coupling to phonons we choose, among the set of exact solutions for the dissipationless qubit, one which minimizes the dissipative contribution in the kinetic equations. Examples are given for state trapping and Z-gate operation. Using analytic expressions for optimal control fields, favorable domains for dynamic stabilization in the Bloch sphere are identified. In the case of approximate inversion, the identified approximate solution may be used as a starting point for further optimization following standard methods
Optimization of Medical Teaching Methods
Directory of Open Access Journals (Sweden)
Wang Fei
2015-12-01
Full Text Available In order to achieve the goals of medical education and adapt to changes in the way doctors work, medical teaching methods must be reformed in step with the rapid development of modern science and technology. Based on the current status of teaching methods in medical colleges, this paper analyzes the formation, development, and characteristics of medical teaching methods, and discusses how to achieve optimal medical teaching methods, providing a theoretical basis for medical education teachers and administrators to comprehensively and thoroughly change their teaching ideas and concepts.
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Optimal control linear quadratic methods
Anderson, Brian D O
2007-01-01
This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the
Single image defogging based on particle swarm optimization
Guo, Fan; Zhou, Cong; Liu, Li-jue; Tang, Jin
2017-11-01
Due to the lack of enough information to solve the equation of the image degradation model, existing defogging methods generally introduce some parameters and set these values fixed. Inappropriate parameter setting leads to difficulty in obtaining the best defogging results for different input foggy images. Therefore, a single image defogging algorithm based on particle swarm optimization (PSO) is proposed in this letter to adaptively and automatically select optimal parameter values for image defogging algorithms. The proposed method is applied to two representative defogging algorithms by selecting the two main parameters and optimizing them using the PSO algorithm. Comparative study and qualitative evaluation demonstrate that better-quality results are obtained by using the proposed parameter selection method.
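The abstract does not give the update equations, but the canonical PSO velocity/position rule it relies on can be sketched as follows. The parameter values (w, c1, c2), the population size, and the quadratic toy objective standing in for a defogging-quality score are illustrative assumptions, not the paper's actual implementation:

```python
import random

def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with canonical PSO."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical stand-in for a two-parameter defogging-quality score:
random.seed(0)
best, score = pso(lambda p: (p[0] - 0.8) ** 2 + (p[1] - 0.5) ** 2,
                  bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the paper's setting, `objective` would evaluate the defogging result for a candidate pair of algorithm parameters rather than this toy quadratic.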
Brown, Paula N; Chan, Michael; Betz, Joseph M
2010-07-01
Three species of Echinacea (Echinacea purpurea, Echinacea angustifolia, and Echinacea pallida) are commonly used for medicinal purposes. The phenolic compounds caftaric acid, cichoric acid, echinacoside, cynarin, and chlorogenic acid are among the phytochemical constituents that may be responsible for the purported beneficial effects of the herb. Although methods for the analysis of these compounds have been published, documentation of their validity was inadequate, as the accuracy and precision for the detection and quantification of these phenolics was not systematically determined and/or reported. To address this issue, the high-performance liquid chromatography method, originally developed by the Institute for Nutraceutical Advancement (INA), was reviewed, optimized, and validated for the detection and quantification of these phenolic compounds in Echinacea roots and aerial parts.
Replica Analysis for Portfolio Optimization with Single-Factor Model
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
Macintosh, Daniel C T; Sutherland, Mark
2004-03-01
This article describes a method for creating an improved emergence profile with single-tooth, implant-supported restorations. An easily trimmed silicone gingival substitute is used to allow polymerization of acrylic resin provisional restorations to achieve control of the emergence profile. Gingival trauma is minimized by eliminating intraoral use of monomer and minimizing surgical procedures. Provisional restorations can be assessed to ensure the contour is acceptable and the trimmed gingival substitute can be used to fabricate a similar profile in the definitive prosthesis. The provisional restorations may be used instead of standard prefabricated healing abutments to guide the healing contours of the peri-implant gingival tissue.
Optimization of Single-Layer Braced Domes
Directory of Open Access Journals (Sweden)
Grzywiński Maksym
2017-06-01
Full Text Available The paper discusses an optimization problem in the design of civil engineering space structures. Minimization of mass should satisfy the limit state capacity and serviceability conditions. The cross-sectional areas of bars and the structural dimensions are taken as design variables, in both continuous and discrete form. The analysis is done using the Structural and Design of Experiments modules of Ansys Workbench v17.2. As a result of the method, a mass reduction of 46.6% is achieved.
Optimized multiple linear mappings for single image super-resolution
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of m-nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
Optimization of the Single Staggered Wire and Tube Heat Exchanger
Directory of Open Access Journals (Sweden)
Arsana I Made
2016-01-01
Full Text Available A wire and tube heat exchanger consists of a coiled tube with wire welded on its two sides, normal to the tube. Generally, wire and tube heat exchangers use an inline wire arrangement between the two sides, whereas this study used a staggered wire arrangement that reduces the restriction of convection heat transfer. This study performed the optimization of a single staggered wire and tube heat exchanger to increase the capacity and reduce the mass of the heat exchanger. Optimization was conducted with the Hooke-Jeeves method, which aims to optimize the geometry of the heat exchanger, especially the wire diameter (dw) and the distance between wires (pw). The model developed to present heat transfer correlations on the single staggered wire and tube heat exchanger was valid. The maximum optimization factor was obtained when the wire diameter was 0.9 mm and the distance between wires (pw) was 11 mm, with fref = 1.5837. This means that the optimized design uses only 59.10% of the mass while transferring about 98.5% of the heat of the baseline design.
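The Hooke-Jeeves method named above is a derivative-free pattern search: an exploratory move probes each variable in turn, and a successful move is extrapolated into a pattern move. A minimal sketch follows; the smooth two-variable objective standing in for heat-exchanger performance in (dw, pw) is a hypothetical placeholder, not the study's actual model:

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Minimize f by Hooke-Jeeves pattern search (derivative-free)."""
    def explore(base, s):
        # Probe +s/-s along each coordinate, keeping any improvement.
        x = base[:]
        for d in range(len(x)):
            for delta in (s, -s):
                trial = x[:]
                trial[d] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = x0[:]
    while step > tol:
        new = explore(base, step)
        if f(new) < f(base):
            # Pattern move: extrapolate along the successful direction.
            pattern = [2 * n - b for n, b in zip(new, base)]
            base = new
            cand = explore(pattern, step)
            if f(cand) < f(base):
                base = cand
        else:
            step *= shrink          # no improvement: refine the mesh
    return base

# Hypothetical smooth stand-in with optimum at dw = 0.9 mm, pw = 11 mm:
opt = hooke_jeeves(lambda x: (x[0] - 0.9) ** 2 + (x[1] - 11.0) ** 2,
                   [1.5, 8.0])
```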
Kinsel, Richard P; Capoferri, Daniele
2008-05-01
Prosthetic replacement of the missing single maxillary central incisor with an implant-supported crown represents a profound aesthetic challenge for the restorative dentist, laboratory technician, and surgeon. In addition to the visual fidelity of color, translucency, contour, and surface texture, the proper soft tissue outline is sacrosanct to the illusion of a natural tooth. The contrast between the uniformly round shoulder of the implant and the tooth's curvilinear cementoenamel junction is particularly problematic. This clinical report demonstrates a simplified method that precisely controls the facial gingival and proximal soft tissue contours for implant-supported, metal-ceramic crowns in the aesthetic zone, using the cervical anatomy of the maxillary incisor tooth as a guide. A new role for the provisional crown that is intended to maximize the volume of keratinized tissue is also described.
Jacchia, Sara; Nardini, Elena; Savini, Christian; Petrillo, Mauro; Angers-Loustau, Alexandre; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco
2015-02-18
In this study, we developed, optimized, and in-house validated a real-time PCR method for the event-specific detection and quantification of Golden Rice 2, a genetically modified rice with provitamin A in the grain. We optimized and evaluated the performance of the taxon (targeting rice Phospholipase D α2 gene)- and event (targeting the 3' insert-to-plant DNA junction)-specific assays that compose the method as independent modules, using haploid genome equivalents as unit of measurement. We verified the specificity of the two real-time PCR assays and determined their dynamic range, limit of quantification, limit of detection, and robustness. We also confirmed that the taxon-specific DNA sequence is present in single copy in the rice genome and verified its stability of amplification across 132 rice varieties. A relative quantification experiment evidenced the correct performance of the two assays when used in combination.
Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y
2016-01-01
It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters, using current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using GA in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and hill and valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The result obtained from the case study showed that the cluster centers and hill and valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces.
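The core idea of the split-optimization approach, running an independent GA in each sub-search space so that several equally good parameter combinations are recovered, can be sketched on a one-dimensional toy problem. The tiny mutation-only GA, the periodic test function, and the equal splitting used here are illustrative assumptions; the paper's cluster-center and hill-and-valley splitting strategies are more elaborate:

```python
import math
import random

def ga_in_subspace(f, lo, hi, pop=30, gens=60, mut=0.1):
    """Tiny real-coded GA restricted to the sub-search space [lo, hi]."""
    xs = [random.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        xs.sort(key=f)                      # best (lowest f) first
        parents = xs[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            c = 0.5 * (a + b) + random.gauss(0.0, mut * (hi - lo))
            children.append(min(max(c, lo), hi))
        xs = parents + children
    return min(xs, key=f)

# sin(x) has one equally good minimum in each period, so splitting the
# search space returns multiple distinct solutions with the same objective.
random.seed(1)
f = math.sin
splits = [(k * 2 * math.pi, (k + 1) * 2 * math.pi) for k in range(3)]
solutions = [ga_in_subspace(f, lo, hi) for lo, hi in splits]
```

A single GA over the whole interval would return only one of these minima; the split keeps all three.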
Optimal mechanisms for single machine scheduling
Heydenreich, B.; Mishra, D.; Müller, R.; Uetz, Marc Jochen; Papadimitriou, C.; Zhang, S.
2008-01-01
We study the design of optimal mechanisms in a setting where job-agents compete for being processed by a service provider that can handle one job at a time. Each job has a processing time and incurs a waiting cost. Jobs need to be compensated for waiting. We consider two models, one where only the
A topological derivative method for topology optimization
DEFF Research Database (Denmark)
Norato, J.; Bendsøe, Martin P.; Haber, RB
2007-01-01
We propose a fictitious domain method for topology optimization in which a level set of the topological derivative field for the cost function identifies the boundary of the optimal design. We describe a fixed-point iteration scheme that implements this optimality criterion subject to a volumetric...
Biologically inspired optimization methods an introduction
Wahde, M
2008-01-01
The advent of rapid, reliable and cheap computing power over the last decades has transformed many, if not most, fields of science and engineering. The multidisciplinary field of optimization is no exception. First of all, with fast computers, researchers and engineers can apply classical optimization methods to problems of larger and larger size. In addition, however, researchers have developed a host of new optimization algorithms that operate in a rather different way than the classical ones, and that allow practitioners to attack optimization problems where the classical methods are either not applicable or simply too costly (in terms of time and other resources) to apply. This book is intended as a course book for introductory courses in stochastic optimization algorithms (in this book, the terms optimization method and optimization algorithm will be used interchangeably), and it has grown from a set of lecture notes used in courses, taught by the author, at the international master programme Complex Ada...
Improved Chicken Swarm Optimization Method for Reentry Trajectory Optimization
Directory of Open Access Journals (Sweden)
Yu Wu
2018-01-01
Full Text Available Reentry trajectory optimization has been researched as a popular topic because of its wide applications in both military and civilian use. It is a challenging problem owing to the strong nonlinearity in its motion equations and constraints. Besides, it is a high-dimensional optimization problem. In this paper, an improved chicken swarm optimization (ICSO) method is proposed, considering that the chicken swarm optimization (CSO) method easily falls into local optima when solving high-dimensional optimization problems. Firstly, the model used in this study is described, including its characteristics, the nonlinear constraints, and the cost function. Then, by introducing the crossover operator, the principles and the advantages of the ICSO algorithm are explained. Finally, the ICSO method for solving the reentry trajectory optimization problem is proposed. The control variables are discretized at a set of Chebyshev collocation points, and the angle of attack is set to fit the flight velocity to make the optimization efficient. Based on those operations, the process of the ICSO method is depicted. Experiments are conducted using five algorithms under different indexes, and the superiority of the proposed ICSO algorithm is demonstrated. Another group of experiments is carried out to investigate the influence of hen percentage on the algorithm’s performance.
Extremal Optimization: Methods Derived from Co-Evolution
Energy Technology Data Exchange (ETDEWEB)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach yields an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
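The mechanism described above, ranking the components of a single candidate solution by local fitness and forcing the power-law-selected worst one to change, can be demonstrated on a toy bit-matching problem. The problem, the step budget, and the tau value are illustrative assumptions; the paper applies the method to graph partitioning and the TSP instead:

```python
import random

def extremal_optimization(target, steps=2000, tau=1.4):
    """tau-EO on a toy problem: each bit is a 'species' whose local fitness
    is 1 if it matches the target bit, 0 otherwise. Every step, a component
    chosen by power-law rank (worst-first) is unconditionally randomized."""
    n = len(target)
    state = [random.randint(0, 1) for _ in range(n)]
    best = state[:]
    weights = [(k + 1) ** -tau for k in range(n)]   # P(rank k) ~ k^-tau
    for _ in range(steps):
        # Rank components worst-first by local fitness (mismatches first).
        ranked = sorted(range(n), key=lambda i: state[i] == target[i])
        i = random.choices(ranked, weights=weights)[0]
        state[i] = 1 - state[i]                     # move even if it's worse
        if sum(s == t for s, t in zip(state, target)) > \
           sum(s == t for s, t in zip(best, target)):
            best = state[:]
    return best

random.seed(3)
target = [1, 0, 1, 1, 0, 0, 1, 0]
sol = extremal_optimization(target)
```

Note that, as the abstract says, there is no acceptance test: the move is always taken, and only the best-so-far solution is recorded.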
Intelligent structural optimization: Concept, Model and Methods
International Nuclear Information System (INIS)
Lu, Dagang; Wang, Guangyuan; Peng, Zhang
2002-01-01
Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward in which each design sub-process model is discussed. Finally, the design methods of ISO are presented
Time relative single-photon (photoelectron) method
International Nuclear Information System (INIS)
Luo Binqiao
1988-01-01
A single-photon (photoelectron) measuring system is designed, and various problems in the single-photon (photoelectron) method are investigated. The electronic resolving time is less than 25 ps. The resolving time of the single-photon (photoelectron) measuring system is 25 to 65 ps
OPTIMIZATION METHODS IN TRANSPORTATION OF FOREST PRODUCTS
Directory of Open Access Journals (Sweden)
Selçuk Gümüş
2008-04-01
Full Text Available Turkey has a total of 21.2 million ha of forest land (27% of its area). In this area, an average of 9 million m3 of logs and 5 million stere of fuel wood are produced annually by the government forest enterprises, for a total annual production of approximately 13 million m3. Considering that the cost of transporting forest products was about 160 million TL in 2006, the importance of optimizing total transportation costs can be better understood. Today, there is no single optimization method used for all transportation problems; rather, decision makers select the most appropriate method according to their aims. Understanding the features and capabilities of optimization methods is important for selecting the most appropriate one. This study aims to evaluate the optimization methods that can be used in the transportation of forest products.
Engineering applications of heuristic multilevel optimization methods
Barthelemy, Jean-Francois M.
1989-01-01
Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.
Single-Frame Attitude Determination Methods for Nanosatellites
Directory of Open Access Journals (Sweden)
Guler Demet Cilden
2017-06-01
Full Text Available Single-frame methods of determining the attitude of a nanosatellite are compared in this study. The methods selected for comparison are: Singular Value Decomposition (SVD), the q-method, the QUaternion ESTimator (QUEST), and the Fast Optimal Attitude Matrix (FOAM), all of which solve Wahba's problem optimally, and the algebraic method using only two vector measurements. For proper comparison, two sensors are chosen for the vector observations on-board: a magnetometer and Sun sensors. Covariance results obtained from these methods have a critical importance for a non-traditional attitude estimation approach; therefore, the variance calculations are also presented. The examined methods are compared with respect to their root mean square (RMS) error and variance results. Also, some recommendations are given.
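Of the methods listed, the algebraic two-vector (TRIAD) solution is the simplest to sketch: it builds matching orthonormal triads from the two observations in the body and reference frames and composes them into the attitude matrix. The pure-Python implementation and the noiseless Sun-vector/magnetic-field example below are illustrative, not the paper's code:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def triad(b1, b2, r1, r2):
    """Algebraic (TRIAD) attitude from two vector observations: returns the
    rotation matrix R such that b ≈ R r, trusting the first vector exactly."""
    def frame(v1, v2):
        t1 = normalize(v1)
        t2 = normalize(cross(v1, v2))
        t3 = cross(t1, t2)
        return [t1, t2, t3]                 # rows are the triad vectors
    B, Rr = frame(b1, b2), frame(r1, r2)
    # R = sum_k outer(B[k], Rr[k])
    return [[sum(B[k][i] * Rr[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Recover a known rotation (0.3 rad about z) from, say, a Sun vector and a
# magnetic-field direction, both noiseless in this toy example.
c, s = math.cos(0.3), math.sin(0.3)
R_true = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
r1, r2 = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
R_est = triad(mv(R_true, r1), mv(R_true, r2), r1, r2)
```

Unlike the SVD, q-method, QUEST, and FOAM solutions, TRIAD cannot weight more than two observations, which is why the study treats it separately from the optimal Wahba solvers.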
Universal Method for Stochastic Composite Optimization Problems
Gasnikov, A. V.; Nesterov, Yu. E.
2018-01-01
A fast gradient method requiring only one projection is proposed for smooth convex optimization problems. The method has a visual geometric interpretation, so it is called the method of similar triangles (MST). Composite, adaptive, and universal versions of MST are suggested. Based on MST, a universal method is proposed for the first time for strongly convex problems (this method is continuous with respect to the strong convexity parameter of the smooth part of the functional). It is shown how the universal version of MST can be applied to stochastic optimization problems.
Topology optimization theory, methods, and applications
Bendsøe, Martin P
2004-01-01
The topology optimization method solves the basic engineering problem of distributing a limited amount of material in a design space. The first edition of this book has become the standard text on optimal design which is concerned with the optimization of structural topology, shape and material. This edition has been substantially revised and updated to reflect progress made in modelling and computational procedures. It also encompasses a comprehensive and unified description of the state-of-the-art of the so-called material distribution method, based on the use of mathematical programming and finite elements. Applications treated include not only structures but also MEMS and materials.
Evolutionary optimization methods for accelerator design
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe the GATool evolutionary algorithm and software package, used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained
Topology optimization using the finite volume method
DEFF Research Database (Denmark)
Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole
2005-01-01
Computational procedures for topology optimization of continuum problems using a material distribution method are typically based on the application of the finite element method (FEM) (see, e.g. [1]). In the present work we study a computational framework based on the finite volume method (FVM, see, e.g. [2]) in order to develop methods for topology design for applications where conservation laws are critical, such that element-wise conservation in the discretized models has a high priority. This encompasses problems involving, for example, mass and heat transport. The work described in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design...
The optimal homotopy asymptotic method engineering applications
Marinca, Vasile
2015-01-01
This book emphasizes in detail the applicability of the Optimal Homotopy Asymptotic Method to various engineering problems. It is a continuation of the book “Nonlinear Dynamical Systems in Engineering: Some Approximate Approaches”, published by Springer in 2011, and it contains a great number of practical models from various fields of engineering such as classical and fluid mechanics, thermodynamics, nonlinear oscillations, electrical machines, and so on. The main structure of the book consists of 5 chapters. The first chapter is introductory while the second chapter is devoted to a short history of the development of homotopy methods, including the basic ideas of the Optimal Homotopy Asymptotic Method. The last three chapters, from Chapter 3 to Chapter 5, are introducing three distinct alternatives of the Optimal Homotopy Asymptotic Method with illustrative applications to nonlinear dynamical systems. The third chapter deals with the first alternative of our approach with two iterations. Five application...
Parameter Optimization of Single-Diode Model of Photovoltaic Cell Using Memetic Algorithm
Directory of Open Access Journals (Sweden)
Yourim Yoon
2015-01-01
Full Text Available This study proposes a memetic approach for optimally determining the parameter values of the single-diode equivalent solar cell model. The memetic algorithm, which combines metaheuristic and gradient-based techniques, has the merit of good performance in both global and local searches. First, 10 single algorithms were considered, including genetic algorithm, simulated annealing, particle swarm optimization, harmony search, differential evolution, cuckoo search, least squares method, and pattern search; their final solutions were then used as initial vectors for the generalized reduced gradient technique. With this memetic approach, we could further improve the accuracy of the estimated solar cell parameters compared with single-algorithm approaches.
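The two-phase structure described above, a global metaheuristic whose final solution seeds a gradient-based local refinement, can be sketched as follows. The crude mutation-only evolutionary phase, the plain gradient-descent polish (standing in for the generalized reduced gradient step), and the two-parameter toy objective are all illustrative assumptions, not the study's diode model:

```python
import random

def memetic_minimize(f, grad, bounds, pop=15, gens=40, lr=0.01,
                     polish_steps=200):
    """Memetic sketch: global evolutionary phase, then local gradient polish."""
    # Global phase: mutation-only evolution of a small population.
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    n_elite = max(2, pop // 3)
    for _ in range(gens):
        xs.sort(key=f)                              # lowest f first
        elites = xs[:n_elite]
        xs = elites + [
            [min(max(x + random.gauss(0.0, 0.1), lo), hi)
             for x, (lo, hi) in zip(random.choice(elites), bounds)]
            for _ in range(pop - n_elite)
        ]
    x = min(xs, key=f)
    # Local phase: gradient descent from the metaheuristic's answer.
    for _ in range(polish_steps):
        g = grad(x)
        x = [min(max(xi - lr * gi, lo), hi)
             for xi, gi, (lo, hi) in zip(x, g, bounds)]
    return x

# Hypothetical two-parameter fit standing in for diode-parameter estimation:
random.seed(4)
f = lambda x: (x[0] - 1.2) ** 2 + 4 * (x[1] - 0.7) ** 2
grad = lambda x: [2 * (x[0] - 1.2), 8 * (x[1] - 0.7)]
best = memetic_minimize(f, grad, bounds=[(0.0, 2.0), (0.0, 2.0)])
```

The design point is the division of labor: the global phase only needs to land in the right basin, after which the gradient phase converges quickly and precisely.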
Adam: A Method for Stochastic Optimization
Kingma, D.P.; Ba, L.J.
2015-01-01
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has little memory
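The update rule summarized in the abstract, exponential moving averages of the gradient and its square with bias correction, can be written compactly as below. The hyperparameter defaults follow common usage; the x^2 toy objective and step count are illustrative assumptions:

```python
import math

def adam_step(params, grads, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first/second moment estimates."""
    state["t"] += 1
    t = state["t"]
    new = []
    for i, (p, g) in enumerate(zip(params, grads)):
        state["m"][i] = b1 * state["m"][i] + (1 - b1) * g        # 1st moment
        state["v"][i] = b2 * state["v"][i] + (1 - b2) * g * g    # 2nd moment
        m_hat = state["m"][i] / (1 - b1 ** t)                    # bias fix
        v_hat = state["v"][i] / (1 - b2 ** t)
        new.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
    return new

# Minimize f(x) = x^2 from x = 1.0 (gradient is 2x).
x = [1.0]
state = {"t": 0, "m": [0.0], "v": [0.0]}
for _ in range(5000):
    x = adam_step(x, [2 * x[0]], state, lr=0.01)
```

Because the step is normalized by the second-moment estimate, the effective step size is roughly bounded by the learning rate, which is part of what makes the method easy to tune.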
Optimization of breeding methods when introducing multiple ...
African Journals Online (AJOL)
Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...
Optimizing How We Teach Research Methods
Cvancara, Kristen E.
2017-01-01
Courses: Research Methods (undergraduate or graduate level). Objective: The aim of this exercise is to optimize the ability for students to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each method, a…
A method optimization study for atomic absorption ...
African Journals Online (AJOL)
A sensitive, reliable and relatively fast method has been developed for the determination of total zinc in insulin with an atomic absorption spectrophotometer. This study was designed to optimize the procedures of the existing methods. Spectrograms of both standard and sample solutions of zinc were recorded by measuring ...
Continuation methods in multiobjective optimization for combined structure control design
Milman, M.; Salama, M.; Scheid, R.; Bruno, R.; Gibson, J. S.
1990-01-01
A homotopy approach involving multiobjective functions is developed to outline the methods that have evolved for the combined control-structure optimization of physical systems encountered in the technology of large space structures. A method to effect a timely consideration of the control performance prior to the finalization of the structural design involves integrating the control and structural design processes into a unified design methodology that combines the two optimization problems into a single formulation. This study uses the combined optimization problem as a family of weighted structural and control costs. Connections with vector optimizations are described; an analysis of the zero-set of required conditions is made, and a numerical example is given.
International Nuclear Information System (INIS)
Lee, Sae Il; Lee, Dong Ho; Kim, Kyu Hong; Park, Tae Choon; Lim, Byeung Jun; Kang, Young Seok
2013-01-01
The multidisciplinary design optimization method, which integrates aerodynamic performance and structural stability, was utilized in the development of a single-stage transonic axial compressor. An approximation model was created using an artificial neural network for global optimization within given ranges of variables and several design constraints. The genetic algorithm was used for the exploration of the Pareto front to find the maximum objective function value. The final design was chosen after a second-stage gradient-based optimization process to improve the accuracy of the optimization. To validate the design procedure, numerical simulations and compressor tests were carried out to evaluate the aerodynamic performance and safety factor of the optimized compressor. The numerical optimal results and experimental data are well matched. The optimum shape of the compressor blade is obtained and compared to the baseline design. The proposed optimization framework improves the aerodynamic efficiency and the safety factor.
Optimal estimation of diffusion coefficients from single-particle trajectories
DEFF Research Database (Denmark)
Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik
2014-01-01
How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far...... substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate...
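The covariance-based estimator described in this abstract has a simple closed form, D̂ = ⟨Δxₙ²⟩/(2Δt) + ⟨ΔxₙΔxₙ₊₁⟩/Δt. A minimal Python sketch of it, checked on simulated noise-free Brownian motion (the simulation parameters are illustrative, not taken from the paper):

```python
import random

def cve_diffusion(x, dt):
    # Covariance-based estimator:
    #   D_hat = <dx_n^2> / (2*dt) + <dx_n * dx_{n+1}> / dt
    # The covariance term cancels the bias from localization noise and
    # motion blur (for noise-free data it has zero mean).
    dx = [b - a for a, b in zip(x, x[1:])]
    msd = sum(d * d for d in dx) / len(dx)
    cov = sum(u * v for u, v in zip(dx, dx[1:])) / (len(dx) - 1)
    return msd / (2 * dt) + cov / dt

# Toy check on simulated Brownian motion with D = 1 and no measurement noise.
random.seed(2)
D_true, dt, n = 1.0, 0.01, 100_000
pos, x = 0.0, [0.0]
for _ in range(n):
    pos += random.gauss(0.0, (2 * D_true * dt) ** 0.5)
    x.append(pos)
est = cve_diffusion(x, dt)
print(round(est, 2))
```

On a trajectory this long the estimate lands within about one percent of the true D.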
State space Newton's method for topology optimization
DEFF Research Database (Denmark)
Evgrafov, Anton
2014-01-01
We introduce a new algorithm for solving certain classes of topology optimization problems, which enjoys fast local convergence normally achieved by the full space methods while working in a smaller reduced space. The computational complexity of Newton’s direction finding subproblem in the algori...
An introduction to harmony search optimization method
Wang, Xiaolei; Zenger, Kai
2014-01-01
This brief provides a detailed introduction, discussion and bibliographic review of the nature-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants and also their drawbacks. The authors show how weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researche
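For orientation, a minimal sketch of the basic Harmony Search loop such reviews describe: improvise a new harmony from memory, occasionally pitch-adjust, occasionally draw at random, and replace the worst memory member when improved. All rates and the bandwidth below are illustrative defaults, not the book's recommendations.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
    # Harmony memory: hms random solutions within the bounds.
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:          # memory consideration
                v = hm[random.randrange(hms)][d]
                if random.random() < par:       # pitch adjustment
                    v += random.uniform(-1.0, 1.0) * 0.01 * (hi - lo)
            else:                               # random selection
                v = random.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                   # replace the worst harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return hm[best], scores[best]

# Minimize the 2-D sphere function (global minimum 0 at the origin).
random.seed(3)
x, val = harmony_search(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 2)
print(val < 0.1)
```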
Optimal boarding method for airline passengers
Energy Technology Data Exchange (ETDEWEB)
Steffen, Jason H.; /Fermilab
2008-02-01
Using a Markov Chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four and possibly more depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimal method. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.
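The record's approach, Markov chain Monte Carlo search over passenger orderings, can be sketched as a Metropolis-style swap search. The toy cost function below stands in for the paper's full cabin and luggage simulation; it simply rewards back-to-front orderings.

```python
import math
import random

def optimize_order(cost, n, steps=20000, t0=1.0):
    # Metropolis search over orderings: propose swapping two passengers,
    # accept if the boarding cost drops, or with a Boltzmann probability
    # at a temperature that anneals toward zero.
    order = list(range(n))
    cur = cost(order)
    best, best_c = order[:], cur
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9
        i, j = random.sample(range(n), 2)
        order[i], order[j] = order[j], order[i]
        c = cost(order)
        if c <= cur or random.random() < math.exp((cur - c) / t):
            cur = c
            if c < best_c:
                best, best_c = order[:], c
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return best, best_c

# Toy stand-in cost: penalize adjacent pairs boarded front-to-back,
# so the optimum is a strict back-to-front ordering (cost 0).
random.seed(0)
def toy_cost(order):
    return sum(1 for a, b in zip(order, order[1:]) if a < b)

best, best_c = optimize_order(toy_cost, 12)
print(best_c)
```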
Path optimization method for the sign problem
Directory of Open Access Journals (Sweden)
Ohnishi Akira
2018-01-01
Full Text Available We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or stochastically sampled. When we have singular points of the action or multiple critical points near the original integral surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One of the ways to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f(t) ∈ ℝ) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
Adaptive finite element method for shape optimization
Morin, Pedro
2012-01-16
We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.
Predictive optimized adaptive PSS in a single machine infinite bus.
Milla, Freddy; Duarte-Mermoud, Manuel A
2016-07-01
Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators for reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using an optimization predictive algorithm, which adapts to changes in the inputs of the system. This approach is part of small signal stability analysis, which uses equations in an incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach has better performance than the classical PSS. In addition, the effort in the control action of the POA-PSS is much less than that of other approaches considered for comparison. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Method for optimizing harvesting of crops
DEFF Research Database (Denmark)
2010-01-01
In order, e.g., to optimize the harvesting of crops of the kind which may be self-dried on a field prior to a harvesting step (116, 118), there is disclosed a method of providing a mobile unit (102) for working (114, 116, 118) the field with crops, equipping the mobile unit (102) with crop biomass measuring...
A Gradient Taguchi Method for Engineering Optimization
Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song
2017-10-01
To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining numerical methods and vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, the proposed algorithm has good robustness and fast convergence compared to some hybrid genetic algorithms.
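A schematic of the hybrid idea (coarse level-based search followed by steepest descent) might look like the following. The full factorial grid stands in for a true orthogonal array, and the objective is an illustrative bowl rather than the paper's elastic-constant problem.

```python
from itertools import product

def hybrid_minimize(f, levels, lr=0.1, iters=200, h=1e-5):
    # Stage 1: coarse level search (a full factorial here, standing in for
    # a Taguchi orthogonal array) picks the best starting point.
    start = min(product(*levels), key=lambda p: f(list(p)))
    # Stage 2: steepest descent with forward-difference gradients.
    x = list(start)
    for _ in range(iters):
        fx = f(x)
        g = []
        for i in range(len(x)):
            xp = x[:]
            xp[i] += h
            g.append((f(xp) - fx) / h)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Illustrative smooth objective with minimum at (1, 2).
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
x = hybrid_minimize(f, [[-2.0, 0.0, 2.0], [-2.0, 0.0, 2.0]])
print([round(xi, 2) for xi in x])  # close to [1.0, 2.0]
```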
Global Optimization Ensemble Model for Classification Methods
Directory of Open Access Journals (Sweden)
Hina Anwar
2014-01-01
Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. Nor is there any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.
STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION
Directory of Open Access Journals (Sweden)
Nataša Krejić
2014-12-01
Full Text Available This paper presents an overview of gradient-based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of a stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
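The classical Robbins-Monro stochastic approximation scheme surveyed here can be sketched in a few lines; the noisy quadratic gradient below is an illustrative stand-in for a simulation-based objective.

```python
import random

def robbins_monro(noisy_grad, x0, steps=5000, a=1.0):
    # Stochastic approximation with step sizes a/k, which satisfy
    # sum(a_k) = inf and sum(a_k^2) < inf, so the gradient noise
    # averages out while the iterate still reaches the minimizer.
    x = x0
    for k in range(1, steps + 1):
        x -= (a / k) * noisy_grad(x)
    return x

# Noisy gradient of f(x) = (x - 3)^2 / 2: the true gradient plus N(0, 1) noise.
random.seed(1)
g = lambda x: (x - 3.0) + random.gauss(0.0, 1.0)
x_hat = robbins_monro(g, 0.0)
print(round(x_hat, 1))
```

With a fixed step size instead of a/k the iterate would keep jittering around the minimizer; the decreasing schedule is what makes the noise wash out.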
Method for manufacturing a single crystal nanowire
van den Berg, Albert; Bomer, Johan G.; Carlen, Edwin; Chen, S.; Kraaijenhagen, Roderik Adriaan; Pinedo, Herbert Michael
2013-01-01
A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a <100> structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the <110> direction of the device layer; selectively removing
Method for manufacturing a single crystal nanowire
van den Berg, Albert; Bomer, Johan G.; Carlen, Edwin; Chen, S.; Kraaijenhagen, R.A.; Pinedo, Herbert Michael
2010-01-01
A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a <100> structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the <110> direction of the device layer; selectively removing
Zhang, Li; Wu, Kexin; Liu, Yang
2017-12-01
A multi-objective performance optimization method is proposed to resolve the trade-off, governed by a small fan's structural parameters, between its static characteristics and its aerodynamic noise. In this method, three structural parameters are selected as the optimization variables, and the static pressure efficiency and the aerodynamic noise of the fan are regarded as the multi-objective performance. Furthermore, the response surface method and the entropy method are used to establish the optimization function between the optimization variables and the multi-objective performances. Finally, the optimized model is found where the optimization function reaches its maximum value. Experimental data show that the optimized model not only enhances the static characteristics of the fan but also markedly reduces the noise. The results of the study will provide some reference for the multi-objective performance optimization of other types of rotating machinery.
Design and analysis of sensorless torque optimization for single phase induction motors
International Nuclear Information System (INIS)
Vaez-Zadeh, S.; Payman, A.
2006-01-01
Single phase induction motors are traditionally used in constant speed applications and suffer from unsymmetrical performance. A reliable speed signal can improve their performance and extend their applications as variable speed drives. In this paper, a speed estimation method for these motors is proposed based on a machine model in the stator flux reference frame. The method is examined in a sensorless torque optimization system over a wide operating range. Extensive simulation results prove the validity of the proposed method. The motor performance under the torque optimization system is also analyzed.
Bio Inspired Algorithms in Single and Multiobjective Reliability Optimization
DEFF Research Database (Denmark)
Madsen, Henrik; Albeanu, Grigore; Burtschy, Bernard
2014-01-01
Non-traditional search and optimization methods based on natural phenomena have been proposed recently in order to avoid local or unstable behavior when running towards an optimum state. This paper describes the principles of bio-inspired algorithms and reports on Migration Algorithms and Bees...
Layout optimization with algebraic multigrid methods
Regler, Hans; Ruede, Ulrich
1993-01-01
Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods, based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
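The CG baseline the record compares AMG against is compact enough to sketch. A dense 3x3 Laplacian stands in here for the large sparse, positive definite placement matrix; real relative-placement problems would use sparse storage.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Plain CG for a symmetric positive definite system A x = b.
    n = len(b)
    x = [0.0] * n
    r = b[:]                # residual b - A x, with x = 0 initially
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1-D Laplacian, a tiny SPD stand-in for the placement system matrix.
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = conjugate_gradient(A, b)
print([round(v, 6) for v in x])
```

On an n x n SPD system, CG terminates in at most n iterations in exact arithmetic, which is why it is the standard iterative baseline for quadratic placement functionals.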
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1- and L2-normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Layout optimization using the homogenization method
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.
Lifecycle-Based Swarm Optimization Method for Numerical Optimization
Directory of Open Access Journals (Sweden)
Hai Shen
2014-01-01
Full Text Available Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though an individual organism dies, the species does not perish; rather, the species gains a stronger ability to adapt to its environment. LSO simulates the biological lifecycle through six optimization operators: chemotaxis, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmarks include both unimodal and multimodal cases to demonstrate optimal performance and stability, while the mechanical design problem tests the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.
portfolio optimization based on nonparametric estimation methods
Directory of Open Access Journals (Sweden)
mahsa ghandehari
2017-03-01
Full Text Available One of the major issues investors face in capital markets is deciding on appropriate stocks for investment and selecting an optimal portfolio. This process is done through risk and expected return assessment. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. But the expected returns on assets are not necessarily normal and sometimes deviate dramatically from the normal distribution. This paper, introducing conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in Tehran Stock Exchange during the winter of 1392, considered from April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method; the nonparametric method is also much faster than the linear programming method.
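The historical (nonparametric) CVaR used as the risk measure can be estimated directly from a return sample; the returns below are made up for illustration.

```python
def historical_cvar(returns, alpha=0.05):
    # Nonparametric CVaR: the mean loss over the worst alpha-fraction of
    # observed returns (losses reported as positive numbers).
    losses = sorted((-r for r in returns), reverse=True)
    k = max(1, int(len(losses) * alpha))
    return sum(losses[:k]) / k

# Ten made-up monthly returns; alpha = 0.2 averages the worst 2 of 10.
rets = [0.02, -0.05, 0.01, 0.03, -0.10, 0.04, 0.00, -0.02, 0.05, 0.01]
c = historical_cvar(rets, alpha=0.2)
print(round(c, 3))  # (0.10 + 0.05) / 2 = 0.075
```

Unlike variance, this estimate makes no normality assumption, which is exactly the paper's motivation for preferring it when returns have heavy tails.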
DEFF Research Database (Denmark)
Vestergaard, Christian Lyngby
2012-01-01
Optimal Estimation of Diffusion Coefficients from Noisy Time-Lapse Measurements of Single-Particle Trajectories. Single-particle tracking techniques allow quantitative measurements of diffusion at the single-molecule level. Recorded time-series are mostly short and contain considerable measurement noise....... The standard method for estimating diffusion coefficients from single-particle trajectories is based on least-squares fitting to the experimentally measured mean square displacements. This method is highly inefficient, since it ignores the high correlations inherent in these. We derive the exact maximum likelihood...... parameter values. We extend the methods to particles diffusing on a fluctuating substrate, e.g., flexible or semiflexible polymers such as DNA, and show that fluctuations induce an important bias in the estimates of diffusion coefficients if they are not accounted for. We apply the methods to obtain precise estimates......
A Survey of Methods for Gas-Lift Optimization
Directory of Open Access Journals (Sweden)
Kashif Rashid
2012-01-01
Full Text Available This paper presents a survey of methods and techniques developed for the solution of the continuous gas-lift optimization problem over the last two decades. These range from isolated single-well analysis all the way to real-time multivariate optimization schemes encompassing all wells in a field. While some methods are clearly limited by their neglect of the effects of interdependent wells with common flow lines, others are limited by the efficacy and quality of the solution obtained when dealing with large-scale networks comprising hundreds of difficult-to-produce wells. The aim of this paper is to provide an insight into the approaches developed and to highlight the challenges that remain.
The construction of optimal stated choice experiments theory and methods
Street, Deborah J
2007-01-01
The most comprehensive and applied discussion of stated choice experiment constructions available The Construction of Optimal Stated Choice Experiments provides an accessible introduction to the construction methods needed to create the best possible designs for use in modeling decision-making. Many aspects of the design of a generic stated choice experiment are independent of its area of application, and until now there has been no single book describing these constructions. This book begins with a brief description of the various areas where stated choice experiments are applicable, including marketing and health economics, transportation, environmental resource economics, and public welfare analysis. The authors focus on recent research results on the construction of optimal and near-optimal choice experiments and conclude with guidelines and insight on how to properly implement these results. Features of the book include: Construction of generic stated choice experiments for the estimation of main effects...
MIND. Optimization method for industrial energy systems
Energy Technology Data Exchange (ETDEWEB)
Nilsson, Katarina.
1990-04-01
It is of great importance to encourage the consciousness of energy demand and energy conservation issues in industrial applications as the potential for savings in many cases is very good. The MIND optimization method is a tool for life cycle cost minimization of a flexible range of industrial energy systems. It can be used in analyses of energy systems in response to changes within the systems, changes of the boundary conditions and synthesis of industrial energy systems. The aim is to find an optimal structure in the energy system where several alternative process routes and kinds of energy are available. Equipment alternatives may concern choices of recondition, exchange, new technology, time of investment and size considerations. Energy can be supplied to the industrial energy system as electricity, steam and with various kinds of fuel. Energy and material flows are represented in the optimization as well as non-linearities in energy demand functions and investment cost functions. Boundary conditions and process variations can be represented with a time division where the length of each time step and the number of time steps can be chosen. Two applications are presented to show the flexibility of the MIND method, heat treating processes in the engineering industry and milk processing in a dairy. (36 refs.).
Optimal correction and design parameter search by modern methods of rigorous global optimization
International Nuclear Information System (INIS)
Makino, K.; Berz, M.
2011-01-01
optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as part of the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.
METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS
Directory of Open Access Journals (Sweden)
A. Lasher
2013-09-01
example, this research proved the sustainability of the proposed integrated optimization parameters of transport systems. This approach could be applied not only to MTS, but also to other transport systems. Originality. The basis of the complex optimization of transport presented here is a new system of universal scientific methods and approaches that ensures high accuracy and reliability of calculations when simulating transport systems and transport networks, taking into account the dynamics of their development. Practical value. The development of the theoretical and technological bases for the complex optimization of transport makes it possible to create a scientific tool that enables automated simulation and calculation of the technical and economic structure and operating technology of different transport objects, including infrastructure.
Methods for Distributed Optimal Energy Management
DEFF Research Database (Denmark)
Brehm, Robert
micro-grids by prevention of meteorologic power flows into high voltage grids. A method, based on mathematical optimisation and a consensus algorithm is introduced and evaluated to coordinate charge/discharge scheduling for batteries between a number of buildings in order to improve self......The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast......-consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads, a supervisory control instance can dictate the level of autarchy from the utility grid. Further it is shown that the problem of optimal energy flow management...
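The consensus idea underlying the coordination scheme can be sketched as plain averaging dynamics between networked nodes. The ring of four "buildings" and their net-load imbalances below are illustrative, not from the thesis.

```python
def consensus_step(x, neighbors, eps=0.2):
    # Each node moves toward its neighbors' states; on a connected graph
    # with a small enough step, all states converge to the average of the
    # initial values (a stand-in for agreeing on battery schedules).
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Ring of four "buildings" with initial net-load imbalances.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [4.0, 0.0, -2.0, 2.0]
for _ in range(100):
    x = consensus_step(x, neighbors)
print([round(v, 3) for v in x])  # every node approaches the average, 1.0
```

Because the update weights are symmetric, the sum (and hence the average) of the states is conserved at every step, which is what makes the common limit equal to the initial average.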
Parametric Method For Evaluating Optimal Ship Deadweight
Directory of Open Access Journals (Sweden)
Michalski Jan P.
2014-04-01
Full Text Available The paper presents a method of choosing the optimal value of a cargo ship's deadweight. The method may be useful at the stage of establishing the main owner's requirements concerning the ship design parameters, as well as for choosing a proper ship for a given transportation task. The deadweight is determined on the basis of a selected economic measure of the ship's transport effectiveness: the Required Freight Rate (RFR). The mathematical model of the problem is of a deterministic character, and the simplifying assumptions are justified for ships operating in the liner trade. The assumptions are so selected that a solution of the problem is obtained in analytical closed form. The presented method can be useful in pre-investment simulation of ship design parameters or in transportation task studies.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
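The finite state, finite action, infinite discrete time horizon Markov decision process algorithm the abstract mentions is standard value iteration. The two-state model below is a generic stand-in (states and rewards are invented for illustration), not the shrub production model.

```python
def value_iteration(P, R, gamma=0.95, tol=1e-8):
    # Value iteration for a finite-state, finite-action, infinite-horizon
    # discounted MDP: repeatedly apply the Bellman optimality update until
    # the value function stops changing.
    n_s = len(R)
    V = [0.0] * n_s
    while True:
        V_new = [
            max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(n_s))
                for a in range(len(R[s])))
            for s in range(n_s)
        ]
        if max(abs(u - v) for u, v in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Two states ("vigorous"/"depleted"), two actions ("defoliate"/"rest").
P = [  # P[state][action][next_state], rows sum to 1
    [[0.6, 0.4], [0.9, 0.1]],
    [[0.3, 0.7], [0.7, 0.3]],
]
R = [  # R[state][action]: immediate harvest yield
    [5.0, 0.0],
    [1.0, 0.0],
]
V = value_iteration(P, R)
print([round(v, 1) for v in V])
```

The discount factor gamma plays the role of the discounting of future biomass yields discussed in the study; setting it near 1 approximates time-averaged returns.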
Global optimization methods for engineering design
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution; this way the computational burden is reduced somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.
PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS
Directory of Open Access Journals (Sweden)
Constantin D. STANESCU
2016-05-01
Full Text Available The paper presents an original method of optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, an optimal product or material can be obtained easily and quickly.
Optimal Variational Method for Truly Nonlinear Oscillators
Directory of Open Access Journals (Sweden)
Vasile Marinca
2013-01-01
Full Text Available The Optimal Variational Method (OVM) is introduced and applied for calculating approximate periodic solutions of “truly nonlinear oscillators”. The main advantage of this procedure is that it provides a convenient way to control the convergence of approximate solutions in a very rigorous way and allows adjustment of convergence regions where necessary. This approach does not depend upon any small or large parameters. Very good agreement was found between the approximate and numerical solutions, which proves that OVM is very efficient and accurate.
Hybrid intelligent optimization methods for engineering problems
Pehlivanoglu, Yasin Volkan
The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct patterns. Also, mathematical modeling of a natural phenomenon is mostly based on differentials. Differential equations are constructed with relative increments among the factors related to yield. Therefore, the gradients of these increments are essential to search the yield space. However, the landscape of the yield is not a simple one and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear, and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) algorithms are popular, non-gradient-based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based method is the nature of the search methodology: for example, randomness is essential for the search in GA or PSO. Hence, they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, reduced accuracy, or large computational time. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators to provide the necessary variety within a population. After having a close scrutiny of the diversity concept based on qualification and
Large pyramid shaped single crystals of BiFeO{sub 3} by solvothermal synthesis method
Energy Technology Data Exchange (ETDEWEB)
Sornadurai, D.; Ravindran, T. R.; Paul, V. Thomas; Sastry, V. Sankara [Condensed Matter Physics Division, Materials Science Group, Physical Metallurgy Division, Metallurgy and Materials Group, Indira Gandhi Centre for Atomic Research, Kalpakkam, Tamil Nadu (India); Condensed Matter Physics Division, Materials Science Group (India)
2012-06-05
Synthesis parameters were optimized in order to grow single crystals of multiferroic BiFeO{sub 3}. Pyramid (tetrahedron) shaped single crystals 2 to 3 mm in size were successfully obtained by the solvothermal method. Scanning electron microscopy with EDAX confirmed the phase formation. Raman scattering spectra of the bulk BiFeO{sub 3} single crystals were measured and match well with the reported spectra.
Firefly Optimization and Mathematical Modeling of a Vehicle Crash Test Based on Single-Mass
Directory of Open Access Journals (Sweden)
Andreas Klausen
2014-01-01
Full Text Available In this paper, mathematical modeling of a vehicle crash test based on a single mass is studied. The model under consideration consists of a single mass coupled with a spring and/or a damper. The parameters of the spring and damper are obtained by analyzing the acceleration measured at the vehicle's center of gravity during a crash. A model with a nonlinear spring and damper is also proposed, and its parameters are optimized with different damper and spring characteristics and optimization algorithms. The optimization algorithms used are the interior-point method and the firefly algorithm. The objective of this paper is to compare different methods of establishing a simple model of a car crash and to validate the results against real crash data.
Comparing methods for single paragraph similarity analysis.
Stone, Benjamin; Dennis, Simon; Kwantes, Peter J
2011-01-01
The focus of this paper is two-fold. First, similarities generated from six semantic models were compared to human ratings of paragraph similarity on two datasets: 23 World Entertainment News Network paragraphs and 50 ABC newswire paragraphs. Contrary to findings on smaller textual units such as word associations (Griffiths, Tenenbaum, & Steyvers, 2007), our results suggest that when single paragraphs are compared, simple nonreductive models (word overlap and vector space) can provide better similarity estimates than more complex models (LSA, Topic Model, SpNMF, and CSM). Second, various methods of corpus creation were explored to facilitate the semantic models' similarity estimates. Removing numeric and single characters, and also truncating document length, improved performance. Automated construction of smaller Wikipedia-based corpora proved to be very effective, even improving upon the performance of corpora that had been chosen for the domain. Model performance was further improved by augmenting corpora with dataset paragraphs. Copyright © 2010 Cognitive Science Society, Inc.
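The two "simple nonreductive" measures that the study found competitive for single-paragraph comparison can be sketched as below. This is a minimal illustration, not the authors' implementation; the whitespace tokenizer is an assumption and their exact preprocessing (character removal, truncation) is not reproduced.

```python
# Word overlap (Jaccard) and a cosine over raw term-frequency vectors:
# the two nonreductive paragraph-similarity baselines discussed above.
from collections import Counter
from math import sqrt

def tokenize(text):
    # Hypothetical minimal tokenizer: lowercase, split on whitespace.
    return text.lower().split()

def jaccard_overlap(a, b):
    sa, sb = set(tokenize(a)), set(tokenize(b))
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def tf_cosine(a, b):
    ca, cb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Both return values in [0, 1], so similarity rankings of paragraph pairs can be compared directly against human ratings.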
The methods and applications of optimization of radiation protection
International Nuclear Information System (INIS)
Liu Hua
2007-01-01
Optimization is the most important principle in radiation protection. This article outlines the concept and recent progress of optimization of protection, introduces some methods used in current optimization analysis, and presents various applications of optimization of protection. The author emphasizes that optimization of protection is a forward-looking iterative process aimed at preventing exposures before they occur. (author)
Circular SAR Optimization Imaging Method of Buildings
Directory of Open Access Journals (Sweden)
Wang Jian-feng
2015-12-01
Full Text Available The Circular Synthetic Aperture Radar (CSAR) can obtain the entire scattering properties of targets owing to its 360° observation capability. In this study, an optimal-orientation CSAR imaging algorithm for buildings is proposed by applying a combination of coherent and incoherent processing techniques. FEKO software is used to construct the electromagnetic scattering models and simulate the radar echo. The FEKO imaging results are compared with the isotropic scattering results, and from this comparison the optimal azimuth coherent accumulation angle for CSAR imaging of buildings is obtained. In practice, the scattering directions of buildings are unknown; therefore, we divide the 360° CSAR echo into many overlapping small-angle echoes corresponding to sub-apertures and then perform an imaging procedure on each sub-aperture. The sub-aperture imaging results are fused incoherently to obtain an all-around image. A polarimetric decomposition method is used to decompose the all-around image and successfully retrieve the edge information of the buildings. The proposed method is validated with P-band airborne CSAR data from Sichuan, China.
Optimization methods for activities selection problems
Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia
2017-08-01
Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits: by joining them, students learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight of each activity based on three chosen criteria: soft skills, interesting activities, and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0, with two priorities considered. The first priority, minimizing the budget for the activities, is achieved since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.
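The AHP step described above derives criterion weights from a pairwise-comparison matrix; its principal eigenvector gives the weights. A minimal sketch follows, using power iteration and a hypothetical, perfectly consistent judgment matrix (the paper's questionnaire data are not reproduced here).

```python
# AHP priority weights = normalized principal eigenvector of the
# pairwise-comparison matrix, computed by power iteration.
def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # renormalize so the weights sum to 1
    return w

# Hypothetical consistent judgments over (soft skills, interesting
# activities, performance): soft skills dominates, as in the paper's result.
judgments = [
    [1.0,     2.0,     6.0],
    [0.5,     1.0,     3.0],
    [1.0 / 6, 1.0 / 3, 1.0],
]
```

For this consistent matrix the weights come out as 0.6, 0.3 and 0.1, ranking soft skills first; real questionnaire matrices are rarely consistent, which is why AHP also defines a consistency ratio check.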
Computational methods applied to wind tunnel optimization
Lindsay, David
This report describes computational methods developed for optimizing the nozzle of a three-dimensional subsonic wind tunnel. This requires determination of a shape that delivers flow to the test section, typically with a speed increase by a factor of 7 or more and a velocity uniformity of 0.25% or better, in a compact length without introducing boundary layer separation. The need for high precision, smooth solutions, and three-dimensional modeling required the development of special computational techniques. These include: (1) alternative formulations to Neumann and Dirichlet boundary conditions, to deal with overspecified, ill-posed, or cyclic problems, and to reduce the discrepancy between numerical solutions and boundary conditions; (2) modification of the Finite Element Method to obtain solutions with numerically exact conservation properties; (3) a Matlab implementation of general degree Finite Element solvers for various element designs in two and three dimensions, exploiting vector indexing to obtain optimal efficiency; (4) derivation of optimal quadrature formulas for integration over simplexes in two and three dimensions, and development of a program for semi-automated generation of formulas for any degree and dimension; (5) a modification of a two-dimensional boundary layer formulation to provide accurate flow conservation in three dimensions, and modification of the algorithm to improve stability; (6) development of multi-dimensional spline functions to achieve smoother solutions in three dimensions by post-processing, new three-dimensional elements for C1 basis functions, and a program to assist in the design of elements with higher continuity; and (7) development of ellipsoidal harmonics and Lamé's equation, with generalization to any dimension and a demonstration that Cartesian, cylindrical, spherical, spheroidal, and sphero-conical harmonics are all limiting cases. The report includes a description of the Finite Difference, Finite Volume, and domain remapping
Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns
Directory of Open Access Journals (Sweden)
Jinwei Gu
2014-01-01
Full Text Available This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if they are preempted by a breakdown. The breakdown process of the machine is independent of the jobs processed on it. The processing times required to complete the jobs are constants if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and follow a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
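The two priority rules in the result above are simple sequencing policies. The sketch below only illustrates the orderings and the deterministic total completion time; the paper's breakdown model (uniform uptimes with restarts) and the expected-makespan analysis are not simulated here.

```python
# LPT (longest processing time first) and SPT (shortest processing time
# first) as sequencing functions, plus the no-breakdown total completion
# time that SPT classically minimizes.
def lpt_order(times):
    return sorted(times, reverse=True)

def spt_order(times):
    return sorted(times)

def total_completion_time(sequence):
    # Sum of job completion times when jobs run back to back.
    total, clock = 0, 0
    for t in sequence:
        clock += t
        total += clock
    return total
```

For processing times [3, 1, 2], SPT yields the sequence [1, 2, 3] with total completion time 1 + 3 + 6 = 10, whereas the reversed (LPT) sequence gives 3 + 5 + 6 = 14; the paper's contribution is showing which rule remains optimal in expectation once stochastic breakdowns with restarts are introduced.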
Optimized variational analysis scheme of single Doppler radar wind data
Sasaki, Yoshi K.; Allen, Steve; Mizuno, Koki; Whitehead, Victor; Wilk, Kenneth E.
1989-01-01
A computer scheme for extracting singularities has been developed and applied to single Doppler radar wind data. The scheme is planned for use in real-time wind and singularity analysis and forecasting. The method, known as Doppler Operational Variational Extraction of Singularities, is outlined, focusing on the principle of local symmetry. Results are presented from the application of the scheme to a storm-generated gust front in Oklahoma on May 28, 1987.
Morovati, Amirhosein; Ghaffari, Alireza; Erfani Jabarian, Lale; Mehramizi, Ali
2017-01-01
Guaifenesin, a highly water-soluble active (50 mg/mL), is classified as a BCS class I drug. Owing to its poor flowability and compressibility, formulating tablets, especially high-dose ones, may be a challenge, and direct compression may not be feasible. The bilayer tablet technology applied to Mucinex® endures challenges in delivering a robust formulation. To overcome the challenges involved in bilayer-tablet manufacturing and powder compressibility, an optimized single-layer tablet prepared from a binary mixture (two-in-one), mimicking the dual drug-release character of Mucinex®, was proposed. A 3-factor, 3-level Box-Behnken design was applied to optimize seven dependent variables (release "%" at 1, 2, 4, 6, 8, 10 and 12 h) with respect to different levels of the independent variables (X1: cetyl alcohol, X2: Starch 1500®, X3: HPMC K100M amounts). Two granule portions were prepared using melt and wet granulation and blended together prior to compression. An optimum formulation was obtained (X1: 37.10, X2: 2, X3: 42.49 mg), with a desirability function of 0.616. The f2 and f1 factors between the release profiles of Mucinex® and the optimum formulation were 74 and 3, respectively. An n-value of about 0.5 for both the optimum and Mucinex® formulations indicated a diffusion-controlled (Fickian) release mechanism; however, raising HPMC K100M to 70 mg together with cetyl alcohol at 60 mg led to first-order kinetics (n = 0.6962). K values of 1.56 indicated identical burst drug release. Cetyl alcohol and Starch 1500® modulated guaifenesin release from the HPMC K100M matrices while, owing to their binding properties, also improving its poor flowability and compressibility.
On Best Practice Optimization Methods in R
Directory of Open Access Journals (Sweden)
John C. Nash
2014-09-01
Full Text Available R (R Core Team 2014) provides a powerful and flexible system for statistical computations. It has a default-install set of functionality that can be expanded by the use of several thousand add-in packages as well as user-written scripts. While R is itself a programming language, it has proven relatively easy to incorporate programs in other languages, particularly Fortran and C. Success, however, can lead to its own costs:
• Users face a confusion of choice when trying to select packages in approaching a problem.
• A need to maintain workable examples using early methods may mean some tools offered as a default may be dated.
• In an open-source project like R, how to decide what tools offer "best practice" choices, and how to implement such a policy, present a serious challenge.
We discuss these issues with reference to the tools in R for nonlinear parameter estimation (NLPE) and optimization, though for the present article 'optimization' will be limited to function minimization of essentially smooth functions with at most bounds constraints on the parameters. We abbreviate this class of problems as NLPE. We believe that the concepts proposed are transferable to other classes of problems seen by R users.
Optimization of single-step tapering amplitude and energy detuning for high-gain FELs
Li, He-Ting; Jia, Qi-Ka
2015-01-01
We put forward a method to optimize the single-step tapering amplitude of the undulator strength and the initial energy detuning of the electron beam to maximize the saturation power of high-gain free-electron lasers (FELs), based on the physics of the longitudinal electron beam phase space. Using the FEL simulation code GENESIS, we numerically demonstrate the accuracy of the estimations for parameters corresponding to the Linac Coherent Light Source and the TESLA Test Facility.
Improvement of Source Number Estimation Method for Single Channel Signal.
Directory of Open Access Journals (Sweden)
Zhi Dong
Full Text Available Source number estimation methods for single-channel signals are investigated and improvements to each method are suggested in this work. First, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), performs better than GDE at low SNR but cannot handle signals containing colored noise; conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of both original methods is greatly improved.
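For orientation, the baseline MDL criterion that the paper improves upon can be sketched as follows. This is the textbook eigenvalue-based criterion (minimizing a log-likelihood fit term plus a description-length penalty over candidate source counts), not the authors' diagonally loaded variant, and the eigenvalues below are hypothetical.

```python
# Baseline MDL source-number estimation from the eigenvalues of the
# (delay-embedded) data covariance matrix: for each candidate count k,
# score the spread of the smallest p-k eigenvalues (arithmetic vs
# geometric mean) plus a complexity penalty, and pick the minimizer.
from math import log

def mdl_source_count(eigvals, n_snapshots):
    lam = sorted(eigvals, reverse=True)
    p = len(lam)
    best_k, best_val = 0, float("inf")
    for k in range(p):
        tail = lam[k:]                      # noise-subspace eigenvalues
        arith = sum(tail) / len(tail)
        geom = 1.0
        for x in tail:
            geom *= x ** (1.0 / len(tail))
        fit = n_snapshots * len(tail) * log(arith / geom)
        penalty = 0.5 * k * (2 * p - k) * log(n_snapshots)
        val = fit + penalty
        if val < best_val:
            best_k, best_val = k, val
    return best_k
```

With eigenvalues [10, 5, 1, 1, 1] and 1000 snapshots, the criterion selects two sources, since the three trailing eigenvalues are equal (zero fit cost) and larger k only adds penalty.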
Eijlander, Robyn T.; Kuipers, Oscar P.
Single-cell methods are a powerful application in microbial research to study the molecular mechanism underlying phenotypic heterogeneity and cell-to-cell variability. Here, we describe the optimization and application of single-cell time-lapse fluorescence microscopy for the food spoilage bacterium
Evolutionary optimization of classifiers and features for single-trial EEG Discrimination
Directory of Open Access Journals (Sweden)
Wessberg Johan
2007-08-01
Full Text Available Abstract Background State-of-the-art signal processing methods are known to detect information in single-trial event-related EEG data, a crucial aspect in the development of real-time applications such as brain-computer interfaces. This paper investigates one such novel approach, evaluating how individual classifier and feature subset tailoring affects classification of single-trial EEG finger movements. The discrete wavelet transform was used to extract signal features that were classified using linear regression and non-linear neural network models, which were trained and architecturally optimized with evolutionary algorithms. The input feature subsets were also allowed to evolve, thus performing feature selection in a wrapper fashion. Filter approaches were implemented as well by limiting the degree of optimization. Results Using only 10 features and 100 patterns, the non-linear wrapper approach achieved the highest validation classification accuracy (subject mean 75%), closely followed by the linear wrapper method (73.5%). The optimal features differed greatly between subjects, yet some physiologically plausible patterns were observed. Conclusion High degrees of classifier parameter, structure and feature subset tailoring at the individual level substantially increase single-trial EEG classification rates, an important consideration in areas where highly accurate detection rates are essential. The presented method also provides insight into the spatial characteristics of finger movement EEG patterns.
Numerical methods and optimization a consumer guide
Walter, Éric
2014-01-01
Initial training in pure and applied sciences tends to present problem-solving as the process of elaborating explicit closed-form solutions from basic principles, and then using these solutions in numerical applications. This approach is only applicable to very limited classes of problems that are simple enough for such closed-form solutions to exist. Unfortunately, most real-life problems are too complex to be amenable to this type of treatment. Numerical Methods and Optimization – A Consumer Guide presents methods for dealing with them. Shifting the paradigm from formal calculus to numerical computation, the text makes it possible for the reader to
· discover how to escape the dictatorship of those particular cases that are simple enough to receive a closed-form solution, and thus gain the ability to solve complex, real-life problems;
· understand the principles behind recognized algorithms used in state-of-the-art numerical software;
· learn the advantag...
Optimization of radiation monitoring methods of environment
International Nuclear Information System (INIS)
Bondarkov, M.D.
2012-01-01
Full text: The report substantiates ways to optimize the methods of radioecological monitoring (RM) in Ukraine. For this purpose, the design features of RM at different levels, an analysis of modern requirements for RM, and the methods of ensuring RM were considered, and the use of the new simplified methods developed in this work for the instrumentation of laboratories was proposed. The work proposes strengthening the radiobiological component of monitoring, and the advantages and disadvantages of the proposed methods are analyzed. The spatial and vertical distribution of radionuclides in soils of the most polluted part of the Chernobyl zone was studied using the proposed methods. For the first time, the parameters of vertical migration of the isotopes 154Eu, 238-240Pu and 241Am in soil profiles of the ChNPP close zone were calculated, and the parameters of vertical migration of 90Sr and 137Cs were refined. Effective ecological half-lives of the above-mentioned isotopes were calculated for different soil types, dose rates to biota were estimated, and a radioecological characterization of the test sites of the cooling pond was conducted. The radioecology of birds, rodents and shrews, bats and amphibians was studied; the dose rates for these species were assessed, together with their compliance with ICRP Publication 103 guidance. Species differences in the contamination of wild rodents, insectivores, passerine birds, amphibians and bats were estimated on a large amount of factual material. The features of radioecological contamination of an urbanized landscape were investigated using the city of Pripyat as an example. The practical significance of the work is that the developed methods of non-radiochemical determination of the activity of radiostrontium and alpha-emitting isotopes of plutonium can significantly hasten and facilitate the evaluation of the
Method for optimizing harvesting of crops
DEFF Research Database (Denmark)
2008-01-01
In order, e.g., to optimize the harvesting of crops of the kind which may be self-dried on a field prior to a harvesting step (116, 118), there is disclosed a method of providing a mobile unit (102) for working (114, 116, 118) the field with crops, equipping the mobile unit (102) with crop biomass measuring means (108) and with crop moisture content measurement means (106), measuring crop biomass (107a, 107b) and crop moisture content (109a, 109b) of the crop, providing a spatial crop biomass and crop moisture content characteristics map of the field based on the biomass data (107a, 107b) provided from moving the mobile unit on the field and the moisture content (109a, 109b), and determining an optimised drying time (104a, 104b) prior to the following harvesting step (116, 118) in response to the spatial crop biomass and crop moisture content characteristics map and in response to a weather...
Bellman–Ford Method for Solving the Optimal Route Problem
Directory of Open Access Journals (Sweden)
Laima Greičiūnė
2014-12-01
Full Text Available The article aims to adapt the dynamic programming method to optimal route determination using real-time data from ITS equipment. For this purpose, VBA code implementing the Bellman–Ford method has been applied to find an optimal route with respect to the optimality criteria of time, distance, and the amount of emissions.
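The Bellman–Ford relaxation the article implements in VBA can be sketched in Python as below. The small graph and the single scalar weight per edge are illustrative assumptions; in the article's setting the weight would encode time, distance, or emissions from real-time ITS data.

```python
# Bellman-Ford single-source shortest paths: relax every edge n-1 times,
# then run one extra pass to detect negative cycles.
def bellman_ford(n_nodes, edges, source):
    """edges: list of (u, v, weight) tuples.
    Returns the distance list, or None if a negative cycle is reachable."""
    INF = float("inf")
    dist = [INF] * n_nodes
    dist[source] = 0
    for _ in range(n_nodes - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:  # any further improvement implies a negative cycle
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist
```

Unlike Dijkstra's algorithm, Bellman–Ford tolerates negative edge weights, which can matter if a composite route cost (e.g. time minus an incentive term) is used.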
Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks
Rai, Man Mohan
2006-01-01
Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design, where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding the global optima of multi-modal functions and of searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources, since multiple function evaluations (flow simulations in aerodynamic design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture will focus on a relatively new member of the general class of evolutionary methods called differential evolution (DE). This method is easy to use and program, and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems; fine-tuning them will of course yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single- and multiple-objective optimization will be included in the presentation and lecture notes. A method for aerodynamic design optimization based on neural networks will also be included as a part of this lecture. The method offers advantages over traditional optimization methods. It is more
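A minimal sketch of the classic DE/rand/1/bin scheme described above is given below, minimizing a simple sphere function. The population size and the F and CR constants are illustrative defaults, not recommendations from the lecture, and no bound handling is applied.

```python
# DE/rand/1/bin: for each target vector, build a mutant from three
# distinct random population members, crossover component-wise, and keep
# the trial only if it is no worse (greedy one-to-one selection).
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutant gene
            trial = [
                pop[a][d] + F * (pop[b][d] - pop[c][d])
                if (rng.random() < CR or d == j_rand) else pop[i][d]
                for d in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Because every trial evaluation is independent within a generation, the inner loop parallelizes naturally, which is the property the lecture highlights for expensive flow simulations.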
Boroumand, Samira; Chamjangali, Mansour Arab; Bagherian, Ghadamali
2017-03-01
A simple and sensitive double-injection/single-detector flow injection analysis (FIA) method is proposed for the simultaneous kinetic determination of ascorbic acid (AA) and uric acid (UA). The method is based upon the difference between the rates of the AA and UA reactions with Fe3+ in the presence of 1,10-phenanthroline (phen). The absorbance of the Fe2+/1,10-phenanthroline (Fe-phen) complex obtained as the product was measured spectrophotometrically at 510 nm. To reach good accuracy in the differential kinetic determination via mathematical manipulation of the transient signals, different criteria were considered in the selection of the optimum conditions. The multi-criteria decision making (MCDM) approach was applied for the selection of the optimum conditions. The importance weights of the evaluation criteria were determined using the analytic hierarchy process, the entropy method, and compromised weighting (CW). The experimental conditions (alternatives) were ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Under the selected optimum conditions, the obtained analytical signals were linear in the ranges of 0.50-5.00 and 0.50-4.00 mg L-1 for AA and UA, respectively. The 3σ detection limits were 0.07 mg L-1 for AA and 0.12 mg L-1 for UA. The relative standard deviations for four replicate determinations of AA and UA were 2.03% and 3.30%, respectively. The method was also applied to the analysis of the analytes in blood serum, Vitamin C tablets, and tap water, with satisfactory results.
A Review of Design Optimization Methods for Electrical Machines
Directory of Open Access Journals (Sweden)
Gang Lei
2017-11-01
Full Text Available Electrical machines are the hearts of many appliances, industrial equipment and systems. In the context of global sustainability, they must fulfill various requirements, not only physically and technologically but also environmentally. Therefore, their design optimization process becomes more and more complex as more engineering disciplines/domains and constraints are involved, such as electromagnetics, structural mechanics and heat transfer. This paper aims to present a review of the design optimization methods for electrical machines, including design analysis methods and models, optimization models, algorithms and methods/strategies. Several efficient optimization methods/strategies are highlighted with comments, including surrogate-model based and multi-level optimization methods. In addition, two promising and challenging topics in both academic and industrial communities are discussed, and two novel optimization methods are introduced for advanced design optimization of electrical machines. First, a system-level design optimization method is introduced for the development of advanced electric drive systems. Second, a robust design optimization method based on the design for six-sigma technique is introduced for high-quality manufacturing of electrical machines in production. Meanwhile, a proposal is presented for the development of a robust design optimization service based on industrial big data and cloud computing services. Finally, five future directions are proposed, including smart design optimization method for future intelligent design and production of electrical machines.
Design optimization methods for genomic DNA tiling arrays.
Bertone, Paul; Trifonov, Valery; Rozowsky, Joel S; Schubert, Falk; Emanuelsson, Olof; Karro, John; Kao, Ming-Yang; Snyder, Michael; Gerstein, Mark
2006-02-01
A recent development in microarray research entails the unbiased coverage, or tiling, of genomic DNA for the large-scale identification of transcribed sequences and regulatory elements. A central issue in designing tiling arrays is that of arriving at a single-copy tile path, as significant sequence cross-hybridization can result from the presence of non-unique probes on the array. Due to the fragmentation of genomic DNA caused by the widespread distribution of repetitive elements, the problem of obtaining adequate sequence coverage increases with the sizes of subsequence tiles that are to be included in the design. This becomes increasingly problematic when considering complex eukaryotic genomes that contain many thousands of interspersed repeats. The general problem of sequence tiling can be framed as finding an optimal partitioning of non-repetitive subsequences over a prescribed range of tile sizes, on a DNA sequence comprising repetitive and non-repetitive regions. Exact solutions to the tiling problem become computationally infeasible when applied to large genomes, but successive optimizations are developed that allow their practical implementation. These include an efficient method for determining the degree of similarity of many oligonucleotide sequences over large genomes, and two algorithms for finding an optimal tile path composed of longer sequence tiles. The first algorithm, a dynamic programming approach, finds an optimal tiling in linear time and space; the second applies a heuristic search to reduce the space complexity to a constant requirement. A Web resource has also been developed, accessible at http://tiling.gersteinlab.org, to generate optimal tile paths from user-provided DNA sequences.
Topology Optimization Methods for Acoustic-Mechanical Coupling Problems
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard; Dilgen, Cetin Batur; Dilgen, Sümer Bartug
2017-01-01
A comparative overview of methods for topology optimization of acoustic-mechanical coupling problems is provided. The goal is to pave the road for developing efficient optimization schemes for the design of complex acoustic devices such as hearing aids.
A Review of Deterministic Optimization Methods in Engineering and Management
Directory of Open Access Journals (Sweden)
Ming-Hua Lin
2012-01-01
Full Text Available With the increasing reliance on modeling optimization problems in practical applications, a number of theoretical and algorithmic contributions of optimization have been proposed. The approaches developed for treating optimization problems can be classified into deterministic and heuristic. This paper aims to introduce recent advances in deterministic methods for solving signomial programming problems and mixed-integer nonlinear programming problems. A number of important applications in engineering and management are also reviewed to reveal the usefulness of the optimization methods.
Method of optimization onboard communication network
Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.
2018-02-01
In this article, optimization levels for an onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for modeling the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure, based on the principles and ideas of binary programming. It is shown that the binary programming technique yields an inherently optimal solution for avionics tasks. An example of applying the proposed approach to the problem of device assignment in an OCN is considered.
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
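For readers unfamiliar with the method, a sketch of the OGM iteration as commonly stated (gradient step, momentum coefficients driven by a theta sequence, with a modified theta on the final iteration) is given below, applied to a toy ill-conditioned quadratic with known Lipschitz constant L. This is an illustrative transcription under those assumptions, not the paper's analysis.

```python
# Optimized gradient method (OGM) sketch: y-sequence takes plain gradient
# steps of size 1/L; the x-sequence adds two momentum terms weighted by
# successive theta values. The final step uses sqrt(1 + 8*theta^2).
from math import sqrt

def ogm(grad, x0, L, iters):
    x = list(x0)
    y = list(x0)
    theta = 1.0
    for k in range(iters):
        g = grad(x)
        y_new = [xi - gi / L for xi, gi in zip(x, g)]  # gradient step
        last = (k == iters - 1)
        theta_new = (1 + sqrt(1 + (8 if last else 4) * theta ** 2)) / 2
        x = [yn + ((theta - 1) / theta_new) * (yn - yo)
                + (theta / theta_new) * (yn - xi)
             for yn, yo, xi in zip(y_new, y, x)]
        y, theta = y_new, theta_new
    return x

# Toy problem: f(x) = 0.5*(x1^2 + 100*x2^2), so L = 100.
def grad_quad(x):
    return [x[0], 100.0 * x[1]]
```

On this quadratic the theta sequence grows roughly like k/2, so the O(1/k^2) cost bound discussed in the abstract shrinks the objective rapidly even for condition number 100.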
Single-cell qPCR on dispersed primary pituitary cells - an optimized protocol
Directory of Open Access Journals (Sweden)
Haug Trude M
2010-11-01
Full Text Available Abstract Background The incidence of false positives is a potential problem in single-cell PCR experiments. This paper describes an optimized protocol for single-cell qPCR measurements in primary pituitary cell cultures following patch-clamp recordings. Two different cell harvesting methods were assessed using both the GH4 prolactin-producing cell line from rat and primary cell culture from fish pituitaries. Results Harvesting whole cells followed by cell lysis and qPCR performed satisfactorily on the GH4 cell line. However, harvesting of whole cells from primary pituitary cultures regularly produced false positives, probably due to RNA leakage from cells ruptured during the dispersion of the pituitary cells. To reduce RNA contamination affecting the results, we optimized the conditions by harvesting only the cytosol through a patch pipette, subsequent to electrophysiological experiments. Two factors proved crucial for reliable harvesting. First, silanizing the patch pipette glass prevented foreign extracellular RNA from attaching to charged residues on the glass surface. Second, substituting the commonly used perforating antibiotic amphotericin B with β-escin allowed efficient cytosol harvest without losing the gigaseal. Importantly, the two harvesting protocols revealed no difference in RNA isolation efficiency. Conclusion Depending on the cell type and preparation, validation of the harvesting technique is extremely important, as contaminations may give false positives. Here we present an optimized protocol allowing secure harvesting of RNA from single cells in primary pituitary cell culture following perforated whole-cell patch-clamp experiments.
AC signal characterization for optimization of a CMOS single-electron pump
Murray, Roy; Perron, Justin K.; Stewart, M. D., Jr.; Zimmerman, Neil M.
2018-02-01
Pumping single electrons at a set rate is being widely pursued as an electrical current standard. Semiconductor charge pumps have been pursued in a variety of modes, including single gate ratchet, a variety of 2-gate ratchet pumps, and 2-gate turnstiles. Whether pumping with one or two AC signals, lower error rates can result from better knowledge of the properties of the AC signal at the device. In this work, we operated a CMOS single-electron pump with a 2-gate ratchet style measurement and used the results to characterize and optimize our two AC signals. Fitting this data at various frequencies revealed both a difference in signal path length and attenuation between our two AC lines. Using this data, we corrected for the difference in signal path length and attenuation by applying an offset in both the phase and the amplitude at the signal generator. Operating the device as a turnstile while using the optimized parameters determined from the 2-gate ratchet measurement led to much flatter, more robust charge pumping plateaus. This method was useful in tuning our device up for optimal charge pumping, and may prove useful to the semiconductor quantum dot community to determine signal attenuation and path differences at the device.
A simple method to optimize HMC performance
Bussone, Andrea; Drach, Vincent; Hansen, Martin; Hietanen, Ari; Rantaharju, Jarno; Pica, Claudio
2016-01-01
We present a practical strategy to optimize a set of Hybrid Monte Carlo parameters in simulations of QCD and QCD-like theories. We specialize to the case of mass-preconditioning, with multiple time-step Omelyan integrators. Starting from properties of the shadow Hamiltonian we show how the optimal setup for the integrator can be chosen once the forces and their variances are measured, assuming that those only depend on the mass-preconditioning parameter.
A simple method to optimize HMC performance
Bussone, A.; Della Morte, M.; Drach, V.; Hansen, M.; Hietanen, A.; Rantaharju, J.; Pica, C.
We present a practical strategy to optimize a set of Hybrid Monte Carlo parameters in simulations of QCD and QCD-like theories. We specialize to the case of mass-preconditioning, with multiple time-step Omelyan integrators. Starting from properties of the shadow Hamiltonian we show how the optimal setup for the integrator can be chosen once the forces and their variances are measured, assuming that those only depend on the mass-preconditioning parameter.
Topology optimization based on the harmony search method
International Nuclear Information System (INIS)
Lee, Seung-Min; Han, Seog-Young
2017-01-01
A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
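As a hedged illustration of the HMCR/PAR/BW mechanics that the parametric study refers to, here is a minimal harmony search on a generic cost function. This is function minimization only; the paper's topology-optimization variables, filtering scheme, and harmony rate update rule are not reproduced, and all parameter defaults are our own choices.

```python
import random

def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimal harmony search sketch minimizing f over a box.

    hmcr, par, bw correspond to the HMCR, PAR, and bandwidth parameters
    studied in the paper; hms is the harmony memory size.
    """
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    memory.sort(key=f)
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:            # draw from harmony memory
                x = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    x += random.uniform(-bw, bw)
            else:                                 # random consideration
                x = random.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        if f(new) < f(memory[-1]):                # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]
```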
Topology optimization based on the harmony search method
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)
2017-06-15
A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
Review of design optimization methods for turbomachinery aerodynamics
Li, Zhihui; Zheng, Xinqian
2017-08-01
In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and the design of experiment methods, (3) gradient-based optimization methods for compressors and turbines and (4) data mining techniques for Pareto fronts. We also present our own insights regarding the current research trends and the future optimization of turbomachinery designs.
Directory of Open Access Journals (Sweden)
R. Subramanian
2011-01-01
Full Text Available Purpose – The aim of this paper is to optimize the capacitor value of a single-phase open-well submersible motor operating under extreme voltage conditions using a fuzzy logic optimization technique, compared with the no-load volt-ampere method. This is done by keeping the displacement angle (α) between the main winding and auxiliary winding near 90° and the phase angle (φ) between the supply voltage and line current near 0°. The optimization work is carried out using the Fuzzy Logic Toolbox software built on the MATLAB technical computing environment with Simulink software. Findings – The optimum capacitor value obtained is used with a motor and tested for different supply voltage conditions. The vector diagrams obtained from the experimental test results indicate that the performance is improved from the existing value. Originality/value – This method will be highly useful for practicing design engineers in selecting the optimum capacitance value for single-phase induction motors to achieve the best performance when operating at extreme supply voltage conditions.
A Method for Determining Optimal Residential Energy Efficiency Packages
Energy Technology Data Exchange (ETDEWEB)
Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2011-04-01
This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.
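The "equivalent annual cost" criterion mentioned above can be illustrated with the standard annuity formula. The package data, discount rate, and energy price below are invented for illustration and are not from the report, whose actual cost model is more detailed.

```python
def equivalent_annual_cost(capital, rate, years):
    """Spread an upfront retrofit cost over its lifetime via the
    standard annuity formula: EAC = C * r / (1 - (1 + r)^-n)."""
    return capital * rate / (1 - (1 + rate) ** -years)

def best_package(packages, energy_price):
    """Pick the package minimizing annualized capital plus annual energy cost.

    `packages` maps name -> (capital_cost, lifetime_years, annual_kwh);
    a 5% discount rate is assumed purely for illustration.
    """
    def total(p):
        capital, life, kwh = p
        return equivalent_annual_cost(capital, 0.05, life) + kwh * energy_price
    return min(packages, key=lambda name: total(packages[name]))
```

For example, a hypothetical $2000 insulation package over 20 years at 5% annualizes to about $160/yr, which is then traded off against the simulated energy savings.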
Energy Technology Data Exchange (ETDEWEB)
Schulze-Riegert, R.; Krosche, M.; Stekolschikov, K. [Scandpower Petroleum Technology GmbH, Hamburg (Germany); Fahimuddin, A. [Technische Univ. Braunschweig (Germany)
2007-09-13
History matching in reservoir simulation, well-location optimization, production optimization, etc., are generally multi-objective optimization problems. The problem statement of history matching for a realistic field case includes many field and well measurements in time and type, e.g. pressure measurements, fluid rates, events such as water and gas break-throughs, etc. Uncertainty parameters modified as part of the history matching process have varying impact on the improvement of the match criteria. Competing match criteria often reduce the likelihood of finding an acceptable history match. It is an engineering challenge in manual history matching processes to identify competing objectives and to implement the changes required in the simulation model. In production optimization or scenario optimization, the focus on one key optimization criterion such as NPV limits the identification of alternatives and potential opportunities, since multiple objectives are summarized in a predefined global objective formulation. Previous works primarily focus on a specific optimization method. Few works actually concentrate on the objective formulation, and multi-objective optimization schemes have not yet been applied to reservoir simulations. This paper presents a multi-objective optimization approach applicable to reservoir simulation. It addresses the problem of multi-objective criteria in a history matching study and presents analysis techniques identifying competing match criteria. A Pareto optimizer is discussed, and the implementation of that multi-objective optimization scheme is applied to a case study. Results are compared to a single-objective optimization method. (orig.)
Comparison of Nonlinear Dynamics Optimization Methods for APS-U
Energy Technology Data Exchange (ETDEWEB)
Sun, Y.; Borland, Michael
2017-06-25
Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.
Augmented Lagrangian Method For Discretized Optimal Control ...
African Journals Online (AJOL)
In this paper, we are concerned with a one-dimensional, time-invariant optimal control problem, whose objective function is quadratic and whose dynamical system is a differential equation with an initial condition. Since most real-life problems are nonlinear and their analytical solutions are not readily available, we resolve to ...
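A minimal sketch of the augmented Lagrangian idea for such a discretized problem, using plain gradient descent on each subproblem. The function names, step sizes, and the toy equality constraint in the test are our own illustrations, not the paper's formulation.

```python
import numpy as np

def augmented_lagrangian(f_grad, c, c_grad, x0, mu=10.0,
                         outer=20, inner=200, lr=0.01):
    """Augmented Lagrangian sketch for min f(x) subject to c(x) = 0.

    Each outer step minimizes L_A(x) = f(x) + lam*c(x) + (mu/2)*c(x)^2
    approximately by gradient descent, then updates the multiplier
    lam <- lam + mu*c(x). Illustrative only.
    """
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer):
        for _ in range(inner):
            # gradient of the augmented Lagrangian in x
            g = f_grad(x) + (lam + mu * c(x)) * c_grad(x)
            x = x - lr * g
        lam += mu * c(x)          # first-order multiplier update
    return x, lam
```

For example, minimizing ||x||² subject to x₀ + x₁ = 1 converges to x = (0.5, 0.5) with multiplier λ = −1.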
METHOD FOR OPTIMIZING THE ENERGY OF PUMPS
Skovmose Kallesøe, Carsten; De Persis, Claudio
2013-01-01
The device for energy optimization in the operation of several speed-controlled centrifugal pumps in a hydraulic installation begins by determining which pumps, as pilot pumps, are assigned directly to a consumer and which pumps are hydraulically connected in series upstream of
Parallel optimization methods for agile manufacturing
Energy Technology Data Exchange (ETDEWEB)
Meza, J.C.; Moen, C.D.; Plantenga, T.D.; Spence, P.A.; Tong, C.H. [Sandia National Labs., Livermore, CA (United States); Hendrickson, B.A.; Leland, R.W.; Reese, G.M. [Sandia National Labs., Albuquerque, NM (United States)
1997-08-01
The rapid and optimal design of new goods is essential for meeting national objectives in advanced manufacturing. Currently almost all manufacturing procedures involve the determination of some optimal design parameters. This process is iterative in nature and because it is usually done manually it can be expensive and time consuming. This report describes the results of an LDRD, the goal of which was to develop optimization algorithms and software tools that will enable automated design thereby allowing for agile manufacturing. Although the design processes vary across industries, many of the mathematical characteristics of the problems are the same, including large-scale, noisy, and non-differentiable functions with nonlinear constraints. This report describes the development of a common set of optimization tools using object-oriented programming techniques that can be applied to these types of problems. The authors give examples of several applications that are representative of design problems including an inverse scattering problem, a vibration isolation problem, a system identification problem for the correlation of finite element models with test data and the control of a chemical vapor deposition reactor furnace. Because the function evaluations are computationally expensive, they emphasize algorithms that can be adapted to parallel computers.
Li, Min; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Li, Zishen; Liu, Xifeng; Zhang, Xiao
2018-02-01
The ionosphere effective height (IEH) is a very important parameter in total electron content (TEC) measurements under the widely used single-layer model assumption. To overcome the requirement of a large amount of simultaneous vertical and slant ionospheric observations or dense "coinciding" pierce points data, a new approach comparing the converted vertical TEC (VTEC) value using mapping function based on a given IEH with the "ground truth" VTEC value provided by the combined International GNSS Service Global Ionospheric Maps is proposed for the determination of the optimal IEH. The optimal IEH in the Chinese region is determined using three different methods based on GNSS data. Based on the ionosonde data from three different locations in China, the altitude variation of the peak electron density (hmF2) is found to have clear diurnal, seasonal and latitudinal dependences, and the diurnal variation of hmF2 varies from approximately 210 to 520 km in Hainan. The determination of the optimal IEH employing the inverse method suggested by Birch et al. (Radio Sci 37, 2002. doi: 10.1029/2000rs002601) did not yield a consistent altitude in the Chinese region. Tests of the method minimizing the mapping function errors suggested by Nava et al. (Adv Space Res 39:1292-1297, 2007) indicate that the optimal IEH ranges from 400 to 600 km, and the height of 450 km is the most frequent IEH at both high and low solar activities. It is also confirmed that the IEH of 450-550 km is preferred for the Chinese region instead of the commonly adopted 350-450 km using the determination method of the optimal IEH proposed in this paper.
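Under the single-layer model assumption named above, the mapping function that converts slant TEC to vertical TEC for a given IEH can be sketched with the standard thin-shell geometry. The paper's GIM comparison machinery is not reproduced; the Earth radius value and the 450 km default are only the commonly used figures echoed in the abstract.

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius (assumed value)

def slant_factor(elev_deg, ieh_km):
    """Single-layer-model mapping function M(z), with STEC = M * VTEC.

    `ieh_km` is the ionosphere effective height (IEH); z' is the zenith
    angle at the ionospheric pierce point: sin z' = R/(R+H) * sin z.
    """
    z = math.radians(90.0 - elev_deg)                 # zenith angle at receiver
    sin_zp = R_EARTH_KM / (R_EARTH_KM + ieh_km) * math.sin(z)
    return 1.0 / math.sqrt(1.0 - sin_zp ** 2)

def vtec_from_stec(stec_tecu, elev_deg, ieh_km=450.0):
    """Convert slant TEC to vertical TEC under a given IEH; 450 km is the
    most frequent optimal height the study reports for the Chinese region."""
    return stec_tecu / slant_factor(elev_deg, ieh_km)
```

Changing `ieh_km` changes M(z) at low elevations, which is exactly why the choice of optimal IEH matters for TEC conversion.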
Advanced Topology Optimization Methods for Conceptual Architectural Design
DEFF Research Database (Denmark)
Aage, Niels; Amir, Oded; Clausen, Anders
2015-01-01
in topological optimization: Interactive control and continuous visualization; embedding flexible voids within the design space; consideration of distinct tension / compression properties; and optimization of dual material systems. In extension, optimization procedures for skeletal structures such as trusses......This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...
Logic-based methods for optimization combining optimization and constraint satisfaction
Hooker, John
2011-01-01
A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO) by adding a Lévy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
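The Lévy flight component can be sketched with Mantegna's algorithm, a common way to draw Lévy-stable step lengths. The exponent β = 1.5 and the step scale below are our assumptions for illustration, not necessarily the paper's settings.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step length via Mantegna's algorithm.

    Draws u ~ N(0, sigma^2) and v ~ N(0, 1) and returns u / |v|^(1/beta),
    giving a heavy-tailed step distribution.
    """
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_move(position, scale=0.01):
    """Perturb each coordinate of a candidate solution by a scaled Lévy step,
    as one might do for the queen's mating flight in an IMBO-style scheme."""
    return [x + scale * levy_step() for x in position]
```

The occasional very long steps are what let the queen escape local optima while most moves stay small.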
Development of novel growth methods for halide single crystals
Yokota, Yuui; Kurosawa, Shunsuke; Shoji, Yasuhiro; Ohashi, Yuji; Kamada, Kei; Yoshikawa, Akira
2017-03-01
We developed novel growth methods for halide scintillator single crystals with hygroscopic nature: the halide micro-pulling-down (H-μ-PD) method and the halide vertical Bridgman (H-VB) method. The H-μ-PD method with a removable chamber system can grow a single crystal of hygroscopic halide scintillator material at a faster growth rate than the conventional methods. On the other hand, the H-VB method can grow a large bulk single crystal of halide scintillator without a quartz ampule. CeCl3, LaBr3, Ce:LaBr3 and Eu:SrI2 fiber single crystals could be grown by the H-μ-PD method, and Eu:SrI2 bulk single crystals of 1 and 1.5 inches in diameter could be grown by the H-VB method. The grown fiber and bulk single crystals showed scintillation properties comparable to previous reports using the conventional methods.
Trajectory Optimization Based on Multi-Interval Mesh Refinement Method
Directory of Open Access Journals (Sweden)
Ningbo Li
2017-01-01
Full Text Available In order to improve the optimization accuracy and convergence rate for trajectory optimization of an air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall number of collocation points to improve the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover from midcourse to terminal guidance, and the optimization model was then built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h-method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.
Optimal scheduling of micro grids based on single objective programming
Chen, Yue
2018-04-01
Faced with the growing demand for electricity and the shortage of fossil fuels, optimal scheduling of the micro-grid has become an important research topic for maximizing its economic, technological and environmental benefits. This paper considers the role of the battery under the precondition that the power exchanged between the micro-grid and the main grid does not exceed 150 kW. Taking minimization of the electricity cost (including wind curtailment) of the economic load as the goal, an optimization model is established and solved by a genetic algorithm. The optimal scheduling scheme is obtained, and the utilization of renewable energy and the impact of battery regulation are analyzed.
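A minimal sketch of the scheduling setup: a penalized cost function enforcing the 150 kW exchange limit, minimized over a battery dispatch schedule by a bare-bones genetic algorithm. All numbers, penalty weights, and operator choices are illustrative, not the paper's.

```python
import random

GRID_LIMIT_KW = 150.0  # max power exchange with the main grid (from the paper)

def cost(schedule, load, renewable, price):
    """Hourly electricity cost when the grid covers load - renewable - battery.

    Battery discharge is positive; exchanges beyond 150 kW incur a large
    penalty so the GA steers away from infeasible schedules."""
    total = 0.0
    for batt, l, r, p in zip(schedule, load, renewable, price):
        grid = l - r - batt
        total += max(grid, 0.0) * p
        if abs(grid) > GRID_LIMIT_KW:
            total += 1e3 * (abs(grid) - GRID_LIMIT_KW)
    return total

def genetic_minimize(f, dim, lo, hi, pop=40, gens=200):
    """Bare-bones real-coded GA: elitist survival, blend crossover,
    Gaussian mutation. A sketch, not the paper's implementation."""
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        children = P[: pop // 2]                     # keep the best half
        while len(children) < pop:
            a, b = random.sample(P[: pop // 2], 2)
            w = random.random()
            child = [w * x + (1 - w) * y + random.gauss(0, 0.5)
                     for x, y in zip(a, b)]
            children.append([min(hi, max(lo, x)) for x in child])
        P = children
    return min(P, key=f)
```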
Practical optimization of Steiner trees via the cavity method
Braunstein, Alfredo; Muntoni, Anna
2016-07-01
The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself limit substantially the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound, that affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations to be able to cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration allows Max-Sum to be transformed into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance on the challenge of the proposed approach was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.
Numerical methods of mathematical optimization with Algol and Fortran programs
Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner
1971-01-01
Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program were developed for each one of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods. Comprised of four chapters, this volume begins with a discussion on the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition
Primal Interior-Point Method for Large Sparse Minimax Optimization
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2009-01-01
Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034
Chuang, Li-Yeh; Moi, Sin-Hua; Lin, Yu-Da; Yang, Cheng-Hong
2016-10-01
Evolutionary algorithms could overcome the computational limitations for the statistical evaluation of large datasets for high-order single nucleotide polymorphism (SNP) barcodes. Previous studies have proposed several chaotic particle swarm optimization (CPSO) methods to detect SNP barcodes for disease analysis (e.g., for breast cancer and chronic diseases). This work evaluated additional chaotic maps combined with the particle swarm optimization (PSO) method to detect SNP barcodes using a high-dimensional dataset. Nine chaotic maps were used to improve the PSO results, and the searching ability of all CPSO methods was compared. The XOR and ZZ disease models were used to compare all chaotic maps combined with the PSO method. Efficacy evaluations of the CPSO methods were based on statistical values from the chi-square test (χ²). The results showed that chaotic maps could improve the searching ability of the PSO method when the population is trapped in a local optimum. The minor allele frequency (MAF) indicated that, amongst all CPSO methods and for all numbers of SNPs, sample sizes, and datasets, the highest χ² values were found with the Sinai chaotic map combined with the PSO method. We used simple linear regression on the gbest values over all generations to compare the methods. The Sinai chaotic map combined with the PSO method provided the highest β values (β≥0.32 in the XOR disease model and β≥0.04 in the ZZ disease model) and significant p-values (p<0.001 in both the XOR and ZZ disease models). The Sinai chaotic map was found to effectively enhance the fitness values (χ²) of the PSO method, indicating that the Sinai chaotic map combined with the PSO method is more effective at detecting potential SNP barcodes in both the XOR and ZZ disease models. Copyright © 2016 Elsevier B.V. All rights reserved.
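A hedged sketch of the CPSO idea: replace the uniform random coefficients in a standard PSO velocity update with a Sinai-map chaotic sequence. The map's form and its parameter a, as well as the PSO constants (inertia 0.7, acceleration 1.5), are our assumptions based on common usage, not the paper's exact settings.

```python
import math
import random

def sinai(x, y, a=0.1):
    """One step of the two-dimensional Sinai map; both values stay in [0, 1)."""
    return ((x + y + a * math.cos(2 * math.pi * y)) % 1.0,
            (x + 2 * y) % 1.0)

def chaotic_pso(f, dim, lo, hi, n=30, iters=300):
    """PSO sketch where the usual uniform r1, r2 draws are replaced by a
    Sinai-map chaotic sequence (our minimal reading of a CPSO scheme)."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    cx, cy = 0.3, 0.7                         # chaotic state (arbitrary seed)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                cx, cy = sinai(cx, cy)        # chaotic coefficients r1, r2
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * cx * (pbest[i][d] - pos[i][d])
                             + 1.5 * cy * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest
```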
Review of dynamic optimization methods in renewable natural resource management
Williams, B.K.
1989-01-01
In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems are compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.
Optimization and control methods in industrial engineering and construction
Wang, Xiangyu
2014-01-01
This book presents recent advances in optimization and control methods with applications to industrial engineering and construction management. It consists of 15 chapters authored by recognized experts in a variety of fields including control and operation research, industrial engineering, and project management. Topics include numerical methods in unconstrained optimization, robust optimal control problems, set splitting problems, optimum confidence interval analysis, a monitoring networks optimization survey, distributed fault detection, nonferrous industrial optimization approaches, neural networks in traffic flows, economic scheduling of CCHP systems, a project scheduling optimization survey, lean and agile construction project management, practical construction projects in Hong Kong, dynamic project management, production control in PC4P, and target contracts optimization. The book offers a valuable reference work for scientists, engineers, researchers and practitioners in industrial engineering and c...
Full-step interior-point methods for symmetric optimization
Gu, G.
2009-01-01
In [SIAM J. Optim., 16(4):1110--1136 (electronic), 2006] Roos proposed a full-Newton step Infeasible Interior-Point Method (IIPM) for Linear Optimization (LO). It is a primal-dual homotopy method; it differs from the classical IIPMs in that it uses only full steps. This means that no line searches
Stochastic optimal control of single neuron spike trains
DEFF Research Database (Denmark)
Iolov, Alexandre; Ditlevsen, Susanne; Longtin, Andrë
2014-01-01
stimulation of a neuron to achieve a target spike train under the physiological constraint to not damage tissue. Approach. We pose a stochastic optimal control problem to precisely specify the spike times in a leaky integrate-and-fire (LIF) model of a neuron with noise assumed to be of intrinsic or synaptic...... to the spike times (open-loop control). Main results. We have developed a stochastic optimal control algorithm to obtain precise spike times. It is applicable in both the supra-threshold and sub-threshold regimes, under open-loop and closed-loop conditions and with an arbitrary noise intensity; the accuracy...... into account physiological constraints on the control. A precise and robust targeting of neural activity based on stochastic optimal control has great potential for regulating neural activity in e.g. prosthetic applications and to improve our understanding of the basic mechanisms by which neuronal firing...
Gradient-based methods for production optimization of oil reservoirs
Energy Technology Data Exchange (ETDEWEB)
Suwartadi, Eka
2012-07-01
Production optimization for water flooding in the secondary phase of oil recovery is the main topic of this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output constraint problems. These kinds of constraints are natural in production optimization. Limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraint gradients (Jacobian), which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of output constraints. Two kinds of model order reduction techniques are applied to build surrogate models: proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).
Toward solving the sign problem with path optimization method
Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira
2017-12-01
We propose a new approach to circumvent the sign problem, in which the integration path is optimized to control it. We give a trial function specifying the integration path in the complex plane and tune it to optimize a cost function representing the seriousness of the sign problem. We call this the path optimization method. In this method, we do not need to solve the gradient flow required in the Lefschetz-thimble method, and the construction of the integration-path contour becomes an optimization problem to which several efficient methods can be applied. In a simple model with a serious sign problem, the path optimization method is demonstrated to work well; the residual sign problem is resolved and precise results can be obtained even in the region where the global sign problem is serious.
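The idea can be illustrated on a toy one-dimensional integral with a complex action. The contour parameterization (a constant imaginary shift), the cost function, and the value b = 1.5 below are illustrative assumptions, not details from the paper; they only sketch how tuning a contour to minimize a "seriousness" measure can remove phase oscillations:

```python
import numpy as np
from scipy.optimize import minimize_scalar

b = 1.5                          # coupling that makes the integrand oscillate
x = np.linspace(-5.0, 5.0, 201)  # sample points along the deformed contour

def seriousness(c):
    # integrand exp(-z**2/2 + 1j*b*z) on the shifted contour z = x + 1j*c
    # (constant Jacobian); the cost is 1 - |average phase factor|, which
    # vanishes when the sign problem disappears
    z = x + 1j * c
    logf = -z**2 / 2 + 1j * b * z
    w = np.exp(np.real(logf))            # magnitude of the integrand
    phase = np.exp(1j * np.imag(logf))   # its fluctuating sign
    return 1.0 - abs(np.sum(w * phase) / np.sum(w))

res = minimize_scalar(seriousness, bounds=(-5.0, 5.0), method="bounded")
```

For this Gaussian toy model the optimal shift is known analytically (c = b), so the optimizer's answer can be checked directly; the paper's method instead uses a general trial function for the path and a cost quantifying the seriousness of the sign problem.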
Probabilistic methods for maintenance program optimization
International Nuclear Information System (INIS)
Liming, J.K.; Smith, M.J.; Gekler, W.C.
1989-01-01
In today's regulatory and economic environments, it is more important than ever that managers, engineers, and plant staff join together in developing and implementing effective management plans for safety and economic risk. This need applies to both power generating stations and other process facilities. One of the most critical parts of these management plans is the development and continuous enhancement of a maintenance program that optimizes plant or facility safety and profitability. The ultimate objective is to maximize the potential for station or facility success, usually measured in terms of projected financial profitability, while meeting or exceeding meaningful and reasonable safety goals, usually measured in terms of projected damage or consequence frequencies. This paper describes the use of the latest concepts in developing and evaluating maintenance programs to achieve maintenance program optimization (MPO). These concepts are based on significant field experience gained through the integration and application of fundamentals developed for industry and Electric Power Research Institute (EPRI)-sponsored projects on preventive maintenance (PM) program development and reliability-centered maintenance (RCM)
Mixed methods for viscoelastodynamics and topology optimization
Directory of Open Access Journals (Sweden)
Giacomo Maurelli
2014-07-01
Full Text Available A truly-mixed approach for the analysis of viscoelastic structures and continua is presented. An additive decomposition of the stress state into a viscoelastic part and a purely elastic one is introduced, along with a Hellinger-Reissner variational principle wherein the stress represents the main variable of the formulation, whereas the kinematic descriptor (which in the case at hand is the velocity field) acts as a Lagrange multiplier. The resulting problem is a Differential Algebraic Equation (DAE) because of the need to introduce static Lagrange multipliers to comply with the Cauchy boundary condition on the stress. The associated eigenvalue problem is known in the literature as the constrained eigenvalue problem and poses several difficulties for its solution, which are addressed in the paper. The second part of the paper proposes a topology optimization approach for the rational design of viscoelastic structures and continua. Details concerning density interpolation, compliance problems and eigenvalue-based objectives are given. Worked numerical examples are presented concerning both the dynamic analysis of viscoelastic structures and their topology optimization.
Directory of Open Access Journals (Sweden)
Yuksel Celik
2013-01-01
Full Text Available Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm developed by inspiration from the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose improved marriage in honey bees optimization (IMBO), which adds the Levy flight algorithm for the queen's mating flight and a neighboring search for worker-drone improvement. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
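As background to the Levy-flight ingredient of IMBO, a common way to generate Levy-distributed steps in metaheuristics is Mantegna's algorithm. This is a generic sketch, not the authors' code; the exponent beta = 1.5 is a conventional default assumption:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta=1.5, size=1, rng=None):
    # Mantegna's algorithm: step lengths follow a heavy-tailed, Levy-like
    # distribution, giving mostly small moves with occasional long jumps
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```

In an MBO-style search, such steps would perturb the queen's position during the mating flight, balancing local refinement against occasional long exploratory jumps.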
Handbook of statistical methods single subject design
Satake, Eiki; Maxwell, David L
2008-01-01
This book is a practical guide to the most commonly used approaches for analyzing and interpreting single-subject data. It arranges the methodologies in a logical sequence, using an array of research studies from the existing published literature to illustrate specific applications. The book provides a brief discussion of each approach, such as the visual, inferential, and probabilistic models, the applications for which it is intended, and a step-by-step illustration of the test as used in an actual research study.
Optimization of Single-Sensor Two-State Hot-Wire Anemometer Transmission Bandwidth.
Ligęza, Paweł
2008-10-28
Hot-wire anemometric measurements of non-isothermal flows require the use of thermal compensation or correction circuitry. One possible solution is a two-state hot-wire anemometer that uses the cyclically changing heating level of a single sensor. The area in which flow velocity and fluid temperature can be measured is limited by the dimensions of the sensor's active element. The system is designed to measure flows characterized by high velocity and temperature gradients, although its transmission bandwidth is very limited. In this study, we propose a method to optimize the two-state hot-wire anemometer transmission bandwidth. The method is based on the use of a specialized constant-temperature system together with variable dynamic parameters. It is also based on a suitable measurement cycle paradigm. Analysis of the method was undertaken using model testing. Our results reveal a possible significant broadening of the two-state hot-wire anemometer's transmission bandwidth.
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
Present-day Problems and Methods of Optimization in Mechatronics
Directory of Open Access Journals (Sweden)
Tarnowski Wojciech
2017-06-01
Full Text Available It is justified that design is an inverse problem, and that optimization is a paradigm. Classes of design problems are proposed and typical obstacles are recognized. Peculiarities of mechatronic design are specified as proof of the particular importance of optimization in mechatronic design. Two main obstacles to optimization are discussed: the complexity of mathematical models and the uncertainty of the value system in a concrete case. Then a set of non-standard approaches and methods is presented and discussed, illustrated by examples: fuzzy description, constraint-based iterative optimization, the AHP ranking method and a few MADM functions in Matlab.
Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems
DEFF Research Database (Denmark)
Larsen, L.S; Thybo, C.; Stoustrup, Jakob
2003-01-01
The potential energy savings in refrigeration systems using energy-optimal control have been proved to be substantial. This, however, requires an intelligent control that drives the refrigeration system towards the energy-optimal state. This paper proposes an approach for a control which drives...... the condenser pressure towards an optimal state. The objective of this is to present a feasible method that can be used for energy optimizing control. A simulation model of a simple refrigeration system will be used as basis for testing the control method....
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation
Directory of Open Access Journals (Sweden)
Eric Frick
2018-04-01
Full Text Available The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA. This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO. First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82 with the true, time-varying joint center solution.
A new secant method for unconstrained optimization
Vavasis, Stephen A.
2008-01-01
We present a gradient-based algorithm for unconstrained minimization derived from iterated linear change of basis. The new method is equivalent to linear conjugate gradient in the case of a quadratic objective function. In the case of exact line search it is a secant method. In practice, it performs comparably to BFGS and DFP and is sometimes more robust.
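The paper's algorithm works in n dimensions via iterated changes of basis; the underlying secant idea is easiest to see in one dimension, where Newton's second derivative is replaced by a finite-difference approximation from the last two iterates. The following is a generic illustrative sketch (function and parameter names are assumptions, not the paper's notation):

```python
def secant_minimize(dfdx, x0, x1, tol=1e-10, max_iter=100):
    # Find a stationary point of f by secant iteration on its derivative:
    # each step replaces f'' in Newton's method with the finite-difference
    # slope (g1 - g0) / (x1 - x0) built from the last two iterates.
    g0, g1 = dfdx(x0), dfdx(x1)
    for _ in range(max_iter):
        if g1 == g0:                      # flat secant: cannot proceed
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, dfdx(x1)
        if abs(g1) < tol:
            break
    return x1

# quadratic objective f(x) = (x - 3)**2 + 1 has derivative 2*(x - 3);
# for a quadratic the secant step lands on the minimizer immediately
x_min = secant_minimize(lambda x: 2.0 * (x - 3.0), 0.0, 1.0)
```

For a quadratic objective the secant slope equals the true second derivative, which is consistent with the abstract's remark that the method reduces to conjugate-gradient-like behavior in the quadratic case.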
A method optimization study for atomic absorption ...
African Journals Online (AJOL)
Sadia Ata
2014-04-24
A sensitive, reliable and relatively fast method has been developed for the determination of total zinc in insulin by atomic absorption spectrophotometry. This designed study was used to optimize the procedures for the existing methods. Spectrograms of both standard and sample solutions.
Optimizing Usability Studies by Complementary Evaluation Methods
Schmettow, Martin; Bach, Cedric; Scapin, Dominique
2014-01-01
This paper examines combinations of complementary evaluation methods as a strategy for efficient usability problem discovery. A data set from an earlier study is re-analyzed, involving three evaluation methods applied to two virtual environment applications. Results of a mixed-effects logistic
Optimization of time-correlated single photon counting spectrometer
International Nuclear Information System (INIS)
Zhang Xiufeng; Du Haiying; Sun Jinsheng
2011-01-01
The paper proposes a performance-improving scheme for the conventional time-correlated single photon counting spectrometer and develops a high-speed data acquisition card based on PCI bus and FPGA technologies. The card is used to replace the multi-channel analyzer, improving the capability and decreasing the volume of the spectrometer. The process of operation is introduced along with the integration of the spectrometer system. Many standard samples were measured. The experimental results show that the sensitivity of the spectrometer reaches the single-photon-counting level, and the time resolution of fluorescence lifetime measurement can reach the picosecond level. The instrument can measure time-resolved spectroscopy. (authors)
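The core measurement behind such a spectrometer is a histogram of photon arrival delays, from which a fluorescence lifetime is extracted. The synthetic data, bin count, and lifetime below are illustrative assumptions, not values from the instrument; the sketch only shows the standard histogram-and-fit step:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 2.5                                   # ns; assumed lifetime
arrivals = rng.exponential(tau_true, 100_000)    # synthetic photon delays

# multi-channel-analyzer step: bin the delays into a decay histogram
counts, edges = np.histogram(arrivals, bins=256, range=(0.0, 20.0))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0

# weighted log-linear fit: the slope of log-counts is -1/tau
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1,
                      w=np.sqrt(counts[mask]))
tau_est = -1.0 / slope
```

A real instrument additionally deconvolves the instrument response function; the sketch assumes an ideal, delta-like response.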
Electrochemical Single-Molecule Transistors with Optimized Gate Coupling
DEFF Research Database (Denmark)
Osorio, Henrry M.; Catarelli, Samantha; Cea, Pilar
2015-01-01
. These data are rationalized in terms of a two-step electrochemical model for charge transport across the redox bridge. In this model the gate coupling in the ionic liquid is found to be fully effective with a modeled gate coupling parameter, ξ, of unity. This compares to a much lower gate coupling parameter......Electrochemical gating at the single molecule level of viologen molecular bridges in ionic liquids is examined. Contrary to previous data recorded in aqueous electrolytes, a clear and sharp peak in the single molecule conductance versus electrochemical potential data is obtained in ionic liquids...
Models and Methods for Free Material Optimization
DEFF Research Database (Denmark)
Weldeyesus, Alemseged Gebrehiwot
conditions for physical attainability, in the context that, it has to be symmetric and positive semidefinite. FMO problems have been studied for the last two decades in many articles that led to the development of a wide range of models, methods, and theories. As the design variables in FMO are the local...... programs. The method has successfully obtained solutions to large-scale classical FMO problems of simultaneous analysis and design, nested and dual formulations. The second goal is to extend the method and the FMO problem formulations to general laminated shell structures. The thesis additionally addresses...
Flexible and generalized uncertainty optimization theory and methods
Lodwick, Weldon A
2017-01-01
This book presents the theory and methods of flexible and generalized uncertainty optimization. In particular, it describes the theory of generalized uncertainty in the context of optimization modeling. The book starts with an overview of flexible and generalized uncertainty optimization. It covers uncertainties that are associated with a lack of information and that are more general than in stochastic theory, where well-defined distributions are assumed. Starting from families of distributions that are enclosed by upper and lower functions, the book presents construction methods for obtaining flexible and generalized uncertainty input data that can be used in a flexible and generalized uncertainty optimization model. It then describes the development of such a model in detail. All in all, the book provides readers with the necessary background to understand flexible and generalized uncertainty optimization and develop their own optimization models.
Support method for solving an optimal xenon shutdown problem
International Nuclear Information System (INIS)
Dung, L.C.
1992-01-01
Since the discovery of the maximum principle by Pontryagin in 1956, methods for solving optimal control problems have developed rapidly. There have been efforts to solve optimal problems for transient processes in a nuclear reactor using its ideas. However, the classical maximum principle does not show how to construct an optimal control, or a suboptimal control with a given exactness. In the present work we mainly exploit the ideas of the support method, proposed by Gabasov and Kirillova for linear systems, in order to solve an optimal control problem for non-linear systems. The constructive maximum principle for non-linear dynamic systems with controllable structure obtained in this paper is a new result. The ε-maximum principle is used to obtain a 7-phase ε-optimal control for the optimal xenon shutdown problem. (author)
A brief introduction to single-molecule fluorescence methods
Wildenberg, S.M.J.L.; Prevo, B.; Peterman, E.J.G.; Peterman, EJG; Wuite, GJL
2011-01-01
One of the more popular single-molecule approaches in biological science is single-molecule fluorescence microscopy, which is the subject of the following section of this volume. Fluorescence methods provide the sensitivity required to study biology on the single-molecule level, but they also allow
A brief introduction to single-molecule fluorescence methods
van den Wildenberg, Siet M.J.L.; Prevo, Bram; Peterman, Erwin J.G.
2018-01-01
One of the more popular single-molecule approaches in biological science is single-molecule fluorescence microscopy, which will be the subject of the following section of this volume. Fluorescence methods provide the sensitivity required to study biology on the single-molecule level, but they also
An efficient multilevel optimization method for engineering design
Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.
1988-01-01
An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.
Instrument design optimization with computational methods
Energy Technology Data Exchange (ETDEWEB)
Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)
2017-08-01
Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) are studied with Computational Fluid Dynamics (CFD) and the simulation results compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.
Honey Bees Inspired Optimization Method: The Bees Algorithm
Directory of Open Access Journals (Sweden)
Ernesto Mastrocinque
2013-11-01
Full Text Available Optimization algorithms are search methods whose goal is to find an optimal solution to a problem, in order to satisfy one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms the collective behavior is usually very complex. The collective behavior of a swarm of social organisms emerges from the behaviors of the individuals of that swarm. Researchers have developed computational optimization methods based on biology, such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees, to find the optimal solution. The algorithm performs an exploitative neighborhood search combined with a random explorative search. In this paper, after an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and are implemented in order to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers some advantages over other optimization methods, depending on the nature of the problem.
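A minimal sketch of the basic Bees Algorithm described above, applied to the sphere benchmark. The parameter names and values (scout count, elite sites, recruits, shrinking neighborhood) are illustrative choices, not the paper's settings:

```python
import numpy as np

def bees_algorithm(f, bounds, n_scouts=30, n_best=10, n_recruits=5,
                   radius=0.1, iters=200, rng=None):
    # Basic Bees Algorithm: scouts search at random; the best sites are
    # exploited by recruited bees in a gradually shrinking neighborhood.
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    sites = rng.uniform(lo, hi, (n_scouts, dim))
    for _ in range(iters):
        sites = sites[np.argsort([f(s) for s in sites])]
        new_sites = []
        for site in sites[:n_best]:        # exploitative neighborhood search
            cand = site + rng.uniform(-radius, radius, (n_recruits, dim)) * (hi - lo)
            cand = np.clip(cand, lo, hi)
            new_sites.append(min(np.vstack([cand, site[None]]), key=f))
        # remaining scouts keep exploring at random
        new_sites.extend(rng.uniform(lo, hi, (n_scouts - n_best, dim)))
        sites = np.array(new_sites)
        radius *= 0.98                     # shrink the neighborhood over time
    return sites[np.argmin([f(s) for s in sites])]

best = bees_algorithm(lambda x: np.sum(x**2), ([-5, -5], [5, 5]),
                      rng=np.random.default_rng(1))
```

Keeping each elite site among its own candidates makes the per-site best value monotonically non-increasing, while the random scouts preserve global exploration.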
Plane-based optimization for 3D object reconstruction from single line drawings.
Liu, Jianzhuang; Cao, Liangliang; Li, Zhenguo; Tang, Xiaoou
2008-02-01
In previous optimization-based methods of 3D planar-faced object reconstruction from single 2D line drawings, the missing depths of the vertices of a line drawing (and other parameters in some methods) are used as the variables of the objective functions. A 3D object with planar faces is derived by finding values for these variables that minimize the objective functions. These methods work well for simple objects with a small number N of variables. As N grows, however, it is very difficult for them to find expected objects. This is because with the nonlinear objective functions in a space of large dimension N, the search for optimal solutions can easily get trapped into local minima. In this paper, we use the parameters of the planes that pass through the planar faces of an object as the variables of the objective function. This leads to a set of linear constraints on the planes of the object, resulting in a much lower dimensional nullspace where optimization is easier to achieve. We prove that the dimension of this nullspace is exactly equal to the minimum number of vertex depths which define the 3D object. Since a practical line drawing is usually not an exact projection of a 3D object, we expand the nullspace to a larger space based on the singular value decomposition of the projection matrix of the line drawing. In this space, robust 3D reconstruction can be achieved. Compared with two most related methods, our method not only can reconstruct more complex 3D objects from 2D line drawings, but also is computationally more efficient.
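A key ingredient of the approach is extracting the nullspace (and, for inexact drawings, a near-nullspace) of a constraint matrix via singular value decomposition. The helper below is a generic sketch of that step, with an assumed tolerance for deciding numerical rank; it is not the paper's implementation:

```python
import numpy as np

def nullspace(A, tol=1e-10):
    # orthonormal basis for the nullspace of A via SVD; singular values
    # below tol are treated as numerically zero, so loosening tol expands
    # the basis toward a "near-nullspace"
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T
```

Optimizing over coordinates in this basis automatically satisfies the linear plane constraints, which is what reduces the search to the low-dimensional space the abstract describes.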
A hybrid optimization method for biplanar transverse gradient coil design
International Nuclear Information System (INIS)
Qi Feng; Tang Xin; Jin Zhe; Jiang Zhongde; Shen Yifei; Meng Bin; Zu Donglin; Wang Weimin
2007-01-01
The optimization of transverse gradient coils is one of the fundamental problems in designing magnetic resonance imaging gradient systems. A new approach is presented in this paper to optimize the transverse gradient coils' performance. First, in the traditional spherical harmonic target field method, high order coefficients, which are commonly ignored, are used in the first stage of the optimization process to give better homogeneity. Then, some cosine terms are introduced into the series expansion of the stream function. These new terms provide simulated annealing optimization with new freedoms. Comparison between the traditional method and the optimized method shows that the inhomogeneity in the region of interest can be reduced from 5.03% to 1.39%, the coil efficiency increased from 3.83 to 6.31 mT m^-1 A^-1 and the minimum distance of these discrete coils raised from 1.54 to 3.16 mm.
Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method
Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin
2017-12-01
Edge detection serves to identify the boundaries of an object against a background with which it overlaps. One of the classic methods for edge detection is the Robinson operator, which produces thin, faint, gray edge lines. To overcome these deficiencies, an improved edge detection method is proposed that takes a graph-based approach using the Ant Colony Optimization algorithm. The repairs that can be performed are thickening the edges and connecting edges that have been cut off. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and infer the extent to which Ant Colony Optimization can improve the result of unoptimized edge detection and the accuracy of Robinson edge detection. The parameters used in the performance measurement of edge detection are the morphology of the resulting edge lines, MSE and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with thicker, more assertive edges. Ant Colony Optimization can thus be used as a method for optimizing the Robinson operator, improving the image resulting from Robinson detection by 16.77% on average compared with the classic Robinson result.
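For reference, the classic Robinson operator convolves the image with eight compass masks, obtained by rotating a base mask in 45-degree steps, and keeps the strongest response at each pixel; the ACO stage in the paper then post-processes the resulting edge map. This is a generic sketch of the operator itself, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def rotate45(mask):
    # shift the outer ring of a 3x3 mask by one position (a 45-degree turn)
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = mask.copy()
    for (r0, c0), (r1, c1) in zip(ring, ring[1:] + ring[:1]):
        out[r1, c1] = mask[r0, c0]
    return out

def robinson_edges(img):
    # convolve with the 8 compass masks and keep the strongest absolute
    # response at each pixel
    base = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    masks, m = [base], base
    for _ in range(7):
        m = rotate45(m)
        masks.append(m)
    return np.max([np.abs(convolve(img.astype(float), k)) for k in masks],
                  axis=0)
```

On a vertical step image the maximum response is the sum of one column of mask weights (here 4), which is the thin, single-pixel-wide edge line that the ACO stage would then thicken and reconnect.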
Process control and optimization with simple interval calculation method
DEFF Research Database (Denmark)
Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar
2006-01-01
for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocate approach is allied to the conventional method of multivariate statistical process control (MSPC) as it also employs the historical process......Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ...... the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions...
Optimizing Cognitive Rehabilitation: Effective Instructional Methods
Sohlberg, McKay Moore; Turkstra, Lyn S.
2011-01-01
Rehabilitation professionals face a key challenge when working with clients with acquired cognitive impairments: how to teach new skills to individuals who have difficulty learning. Unique in its focus, this book presents evidence-based instructional methods specifically designed to help this population learn more efficiently. The expert authors…
Methods for Large-Scale Nonlinear Optimization.
1980-05-01
...behaves more like an iterative method since it has the potential of converging in fewer than, or more than, ns - t iterations. Recently, Dembo (1979) has...
Exact and useful optimization methods for microeconomics
Balder, E.J.
2011-01-01
This paper points out that the treatment of utility maximization in current textbooks on microeconomic theory is deficient in at least three respects: breadth of coverage, completeness-cum-coherence of solution methods and mathematical correctness. Improvements are suggested in the form of a
Methods for Gas Sensing with Single-Walled Carbon Nanotubes
Kaul, Anupama B. (Inventor)
2013-01-01
Methods for gas sensing with single-walled carbon nanotubes are described. The methods comprise biasing at least one carbon nanotube and exposing it to a gas environment to detect variation in temperature as an electrical response.
Chai, Runqi; Savvaris, Al; Tsourdos, Antonios
2016-06-01
In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established considering the fuzzy factors of the system, such that a proper compromise trajectory can be acquired. In addition, NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in dealing with multi-objective skip trajectory optimization for the SMV.
Directory of Open Access Journals (Sweden)
Yuguan Hou
2015-01-01
Full Text Available For the case of a single snapshot, the integrated SNR gain available with multiple snapshots cannot be obtained, which degrades mutual coupling correction performance at low SNR. In this paper, a Convex Chain MUSIC (CC-MUSIC) algorithm is proposed for mutual coupling correction of an L-shaped nonuniform array with a single snapshot. It is an online self-calibration algorithm and requires neither prior knowledge for initializing the correction matrix nor a calibration source at a known position. An optimization is derived for the approximation between the mutual-coupling-free covariance matrix without the interpolated transformation and the covariance matrix with mutual coupling and the interpolated transformation. A global optimization problem is formed for mutual coupling correction and spatial spectrum estimation. Furthermore, this nonconvex global optimization problem is transformed into a chain of convex optimizations, which is essentially an alternating optimization routine. Simulation results demonstrate the effectiveness of the proposed method, which improves the resolution and estimation accuracy for multiple sources with a single snapshot.
Verma, Aekaansh; Shang, Jessica; Esmaily-Moghadam, Mahdi; Wong, Kwai; Marsden, Alison
2016-11-01
Babies born with a single functional ventricle typically undergo three open-heart surgeries starting as neonates. The first of these stages (BT shunt or Norwood) has the highest mortality rate of the three, approaching 30%. Proceeding directly to a stage-2 Glenn surgery has historically demonstrated inadequate pulmonary flow (PF) and high mortality. Recently, the Assisted Bi-directional Glenn (ABG) was proposed as a promising means to achieve a stable physiology by assisting the PF via an 'ejector pump' from the systemic circulation. We present preliminary parametrization and optimization results for the ABG geometry, with the goal of increasing PF. To limit excessive pressure increases in the Superior Vena Cava (SVC), the SVC pressure is included as a constraint. We use 3-D finite element flow simulations coupled with a single-ventricle lumped parameter network to evaluate PF and the pressure constraint. We employ a derivative-free optimization method, the Surrogate Management Framework, in conjunction with the OpenDIEL framework to run multiple simultaneous evaluations. Results show that nozzle diameter is the most important design parameter affecting ABG performance. The application of these results to patient-specific situations will be discussed. This work was supported by an NSF CAREER award (OCI1150184) and by the XSEDE National Computing Resource.
Cost Optimal Design of a Single-Phase Dry Power Transformer
Directory of Open Access Journals (Sweden)
Raju Basak
2015-08-01
Full Text Available Dry-type transformers are preferred to their oil-immersed counterparts for various reasons, particularly because their operation is hazard-free. The application of dry transformers was limited to small ratings in earlier days, but they are now used at considerably higher ratings, so their cost-optimal design has gained importance. This paper deals with the design procedure for achieving a cost-optimal design of a dry-type single-phase power transformer of small rating, subject to the usual design constraints on efficiency and voltage regulation. The selling cost of the transformer has been taken as the objective function. Only two key variables have been chosen, the turns per volt and the height-to-width ratio of the window, which affect the cost function to a high degree. Other variables have been chosen on the basis of designers' experience. Copper has been used as the conductor material and CRGOS as the core material to achieve higher efficiency, lower running cost and a compact design. The electrical and magnetic loadings have been kept at their maximum values without violating the design constraints. The optimal solution has been obtained by exhaustive search using nested loops.
OPTIMAL SIGNAL PROCESSING METHODS IN GPR
Directory of Open Access Journals (Sweden)
Saeid Karamzadeh
2014-01-01
Full Text Available Over the past three decades, Ground Penetrating Radar (GPR) has found a wide variety of real-life applications, and the radar faces important challenges in both civil and military use. In this paper, the fundamentals of GPR systems are covered and three important signal processing methods (Wavelet Transform, Matched Filter and Hilbert-Huang) are compared in order to obtain the most accurate information about objects located in the subsurface or behind a wall.
Optimal Pilot and Payload Power Control in Single-Cell Massive MIMO Systems
Cheng, Hei Victor; Björnson, Emil; Larsson, Erik G.
2016-01-01
This paper considers the jointly optimal pilot and data power allocation in single-cell uplink massive multiple-input multiple-output (MIMO) systems. Using the spectral efficiency (SE) as performance metric and setting a total energy budget per coherence interval, the power control is formulated as optimization problems for two different objective functions: the weighted minimum SE among the users and the weighted sum SE. A closed-form solution for the optimal length of the pilot sequence is derive...
A Method for Robust Strategic Railway Dispatch Applied to a Single Track Line
DEFF Research Database (Denmark)
Harrod, Steven
2013-01-01
A method is presented for global optimization of a dispatch plan assuming perfect information over a given time horizon. An example problem is solved for the North American case of a single dominant high-speed train sharing a network with a majority flow of slower trains. Initial dispatch priority...
Optimal PMU Placement with Uncertainty Using Pareto Method
Directory of Open Access Journals (Sweden)
A. Ketabi
2012-01-01
Full Text Available This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation is first turned into an optimization exercise in which the objective function is the number of unobservable buses, determined by Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is then formulated. To solve it, a DE algorithm based on the Pareto optimum method is proposed. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
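The differential evolution step used in the abstract above can be illustrated with a minimal DE/rand/1/bin sketch. This is an assumed continuous-variable toy (sphere objective, illustrative population size and F/CR settings), not the paper's binary PMU placement formulation:

```python
import random

random.seed(1)

def sphere(x):
    # toy continuous objective (assumed for illustration)
    return sum(v * v for v in x)

DIM, NP, GENS, F, CR = 2, 15, 80, 0.8, 0.9
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]
fit = [sphere(x) for x in pop]

for _ in range(GENS):
    for i in range(NP):
        # DE/rand/1/bin: three distinct donors, none equal to i
        a, b, c = random.sample([j for j in range(NP) if j != i], 3)
        j_rand = random.randrange(DIM)       # guarantees at least one mutated gene
        trial = []
        for j in range(DIM):
            if random.random() < CR or j == j_rand:
                trial.append(pop[a][j] + F * (pop[b][j] - pop[c][j]))
            else:
                trial.append(pop[i][j])
        f_trial = sphere(trial)
        if f_trial <= fit[i]:                # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial

best_f = min(fit)
```

A Pareto variant, as in the paper, would replace the greedy scalar selection with a non-dominated comparison over the two objectives.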
Advanced Topology Optimization Methods for Conceptual Architectural Design
DEFF Research Database (Denmark)
Aage, Niels; Amir, Oded; Clausen, Anders
2014-01-01
This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities in topolo......This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...
Advanced Topology Optimization Methods for Conceptual Architectural Design
DEFF Research Database (Denmark)
Aage, Niels; Amir, Oded; Clausen, Anders
2015-01-01
This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities in topolo......This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...
Evaluation of a proposed optimization method for discrete-event simulation models
Directory of Open Access Journals (Sweden)
Alexandre Ferreira de Pinho
2012-12-01
Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, in terms of current technology, these methods exhibit low performance and are only able to manipulate a single decision variable at a time. The objective of this article is therefore to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in computational time when compared to software packages on the market. It should be emphasized that the quality of the response variable is not altered; that is, the proposed method maintains the solutions' effectiveness. The study draws a comparison between the proposed method and a simulation tool already available on the market and examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.
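The genetic-algorithm machinery behind such simulation optimizers can be sketched minimally. The objective, operators and parameter values below are assumed for illustration (a sphere toy with tournament selection, blend crossover, Gaussian mutation and elitism), not the article's simulation-coupled implementation:

```python
import random

random.seed(2)

def fitness(x):
    # toy objective (assumed): minimize the sphere function,
    # so the GA maximizes its negative
    return -sum(v * v for v in x)

POP, DIM, GENS, MUT_RATE = 30, 2, 80, 0.2
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]

def tournament(population):
    # binary tournament selection
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

for _ in range(GENS):
    elite = max(pop, key=fitness)            # elitism: carry over the best
    children = [elite[:]]
    while len(children) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        alpha = random.random()
        child = [alpha * p1[d] + (1 - alpha) * p2[d] for d in range(DIM)]  # blend crossover
        if random.random() < MUT_RATE:
            child[random.randrange(DIM)] += random.gauss(0, 0.5)           # Gaussian mutation
        children.append(child)
    pop = children

best = max(pop, key=fitness)
```

In the simulation-optimization setting, `fitness` would call the discrete-event model instead of an algebraic function, which is why evaluation count dominates run time.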
Modifying nodal pricing method considering market participants optimality and reliability
Directory of Open Access Journals (Sweden)
A. R. Soofiabadi
2015-06-01
Full Text Available This paper develops a method for nodal pricing and a market clearing mechanism that consider the reliability of the system. The effects of component reliability on electricity price, market participants' profit and system social welfare are considered. Reliability is considered both for evaluating market participants' optimality and for fair pricing and market clearing. To achieve fair pricing, the nodal price is obtained through a two-stage optimization problem, and to achieve a fair market clearing mechanism, comprehensive criteria are introduced for evaluating the optimality of market participants. The social welfare and efficiency of the system are increased under the proposed modified nodal pricing method.
Malliavin method for optimal investment in financial markets with memory
Directory of Open Access Journals (Sweden)
An Qiguang
2016-01-01
Full Text Available We consider a financial market with memory effects in which wealth processes are driven by mean-field stochastic Volterra equations. In this financial market, the classical dynamic programming method cannot be used to study the optimal investment problem, because the solution of a mean-field stochastic Volterra equation is not a Markov process. In this paper, a new method through Malliavin calculus, introduced in [1], is used to obtain the optimal investment in a Volterra-type financial market. We show a sufficient and necessary condition for the optimal investment in this financial market with memory by means of a mean-field stochastic maximum principle.
Method and system for SCR optimization
Lefebvre, Wesley Curt [Boston, MA; Kohn, Daniel W [Cambridge, MA
2009-03-10
Methods and systems are provided for controlling SCR performance in a boiler. The boiler includes one or more generally cross sectional areas. Each cross sectional area can be characterized by one or more profiles of one or more conditions affecting SCR performance and be associated with one or more adjustable desired profiles of the one or more conditions during the operation of the boiler. The performance of the boiler can be characterized by boiler performance parameters. A system in accordance with one or more embodiments of the invention can include a controller input for receiving a performance goal for the boiler corresponding to at least one of the boiler performance parameters and for receiving data values corresponding to boiler control variables and to the boiler performance parameters. The boiler control variables include one or more current profiles of the one or more conditions. The system also includes a system model that relates one or more profiles of the one or more conditions in the boiler to the boiler performance parameters. The system also includes an indirect controller that determines one or more desired profiles of the one or more conditions to satisfy the performance goal for the boiler. The indirect controller uses the system model, the received data values and the received performance goal to determine the one or more desired profiles of the one or more conditions. The system model also includes a controller output that outputs the one or more desired profiles of the one or more conditions.
Hybrid DFP-CG method for solving unconstrained optimization problems
Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa
2017-09-01
The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method by combining the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
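The DFP inverse-Hessian update at the heart of the hybrid above can be sketched on a toy problem. Everything here is an assumed illustration (a 2-D quadratic with an exact line search), not the paper's hybrid DFP-CG direction:

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def matvec(M, v): return [dot(row, v) for row in M]
def outer(u, v): return [[a * b for b in v] for a in u]
def madd(M, N, c): return [[M[i][j] + c * N[i][j] for j in range(len(M))] for i in range(len(M))]

# toy quadratic (assumed for illustration): f(x) = x0^2 + 2*x1^2
def grad(x): return [2 * x[0], 4 * x[1]]

x = [3.0, 2.0]
H = [[1.0, 0.0], [0.0, 1.0]]     # inverse-Hessian approximation, start from identity
for _ in range(20):
    g = grad(x)
    if dot(g, g) < 1e-12:
        break
    d = [-v for v in matvec(H, g)]           # quasi-Newton search direction
    Ad = [2 * d[0], 4 * d[1]]                # exact line search is available for this quadratic
    t = -dot(g, d) / dot(d, Ad)
    x_new = [x[i] + t * d[i] for i in range(2)]
    s = [x_new[i] - x[i] for i in range(2)]          # step
    y = [grad(x_new)[i] - g[i] for i in range(2)]    # gradient change
    Hy = matvec(H, y)
    # DFP update: H <- H + s s^T / (s.y) - (H y)(H y)^T / (y.H y)
    H = madd(H, outer(s, s), 1.0 / dot(s, y))
    H = madd(H, outer(Hy, Hy), -1.0 / dot(y, Hy))
    x = x_new
```

With exact line searches on a quadratic, DFP terminates in at most n steps; the hybrid in the paper instead blends this direction with a CG direction.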
Numerical optimization methods for controlled systems with parameters
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters appearing on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
Method for Determining Optimal Residential Energy Efficiency Retrofit Packages
Energy Technology Data Exchange (ETDEWEB)
Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.
2011-04-01
Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.
Closed Loop Optimal Control of a Stewart Platform Using an Optimal Feedback Linearization Method
Directory of Open Access Journals (Sweden)
Hami Tourajizadeh
2016-06-01
Full Text Available Optimal control of a Stewart robot is performed in this paper using a sequential optimal feedback linearization method considering the jack dynamics. One of the most important applications of a Stewart platform is tracking a machine along a specific path or from one defined point to another. However, the control of these robots is more challenging than that of serial robots, since their dynamics are extremely complicated and nonlinear. In addition, saving energy while achieving the desired accuracy is one of the most desirable objectives. In this paper, a proper nonlinear optimal control is employed to gain the maximum accuracy by applying the minimum force distribution to the jacks. The dynamics of the jacks are included to achieve more accurate results. Optimal control is performed for a six-DOF hexapod robot and its accuracy is increased using a sequential feedback linearization method, while its energy consumption is optimized using the LQR method for the linearized system. The efficiency of the proposed optimal control is verified by simulating a six-DOF hexapod robot in MATLAB, and the results are analysed. The actual position of the end-effector, its velocity, the initial and final forces of the jacks, and the length and velocity of the jacks are obtained and then compared with open-loop and non-optimized systems; analytical comparisons show the efficiency of the proposed methods.
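The LQR step applied to the feedback-linearized system can be illustrated in its simplest scalar form. The plant and weights below are assumed toy values, not the hexapod model:

```python
import math

# scalar LQR sketch (assumed values): plant dx/dt = a*x + b*u,
# cost J = integral of (q*x^2 + r*u^2) dt
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# algebraic Riccati equation 2*a*p - (b*b/r)*p*p + q = 0, take the positive root
p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
k = b * p / r                 # optimal state-feedback gain, u = -k*x
closed_loop_pole = a - b * k  # negative => stable closed loop
```

For the multi-DOF linearized robot the same idea applies with matrix Riccati equations solved numerically.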
Czochralski method of growing single crystals. State-of-art
International Nuclear Information System (INIS)
Bukowski, A.; Zabierowski, P.
1999-01-01
The modern Czochralski method of single crystal growing is described and an example of the Czochralski process is given. The advantages that caused the rapid progress of the method are presented, as are the limitations that motivated further research and new solutions. As examples, two different directions of the technique's development are described: silicon single crystal growth in a magnetic field, and continuous liquid feed growth of silicon crystals. (author)
On some other preferred method for optimizing the welded joint
Directory of Open Access Journals (Sweden)
Pejović Branko B.
2016-01-01
Full Text Available The paper shows an example of size optimization with respect to welding costs in a characteristic loaded welded joint. In the first stage, the variables and constant parameters are defined, and the mathematical form of the optimization function is determined. The next stage defines and imposes the most important constraint functions limiting the design of the structure, which the technologist and the designer should take into account. Subsequently, a mathematical optimization model of the problem is derived, which is solved efficiently by the proposed method of geometric programming. A thorough, mathematically based optimization algorithm is then developed for the proposed method, with a main set of equations defining the problem that are valid under certain conditions. The primal optimization task is thus reduced, through a corresponding function, to the dual task, which is easier to solve than the primal task of minimizing the objective function; the main reason is the resulting set of linear equations. A correlation is used between the optimal primal vector that minimizes the objective function and the dual vector that maximizes the dual function. The method is illustrated on a practical computational example with different numbers of constraint functions. It is shown that, for a case of lower complexity, a solution is reached through an appropriate maximization of the dual function by mathematical analysis and differential calculus.
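The primal-dual mechanism of geometric programming can be shown on the smallest possible example. The posynomial below is an assumed illustration with zero degrees of difficulty (so the dual weights are fixed by the linear normality and orthogonality conditions alone), not the paper's welded-joint model:

```python
# illustrative posynomial (assumed): minimize f(x) = x + 4/x over x > 0
# dual weights solve the linear system:
#   d1 + d2 = 1          (normality)
#   1*d1 - 1*d2 = 0      (orthogonality for the exponent of x)
d1 = d2 = 0.5
dual_value = (1.0 / d1) ** d1 * (4.0 / d2) ** d2   # maximum of the dual function
x_opt = d1 * dual_value / 1.0                      # recover x from d1 * f* = c1 * x
primal_value = x_opt + 4.0 / x_opt
```

Strong duality makes the primal minimum equal the dual maximum (here both equal 4 at x = 2), which is why the dual's linear equations are "easier to solve" than the primal.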
A method for optimizing the performance of buildings
DEFF Research Database (Denmark)
Pedersen, Frank
2007-01-01
This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects......, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building...... is calculated using unit prices for construction jobs, which can be found in price catalogues. Simple algebraic expressions are used as models for these prices. The model parameters are found by data-fitting. In order to solve the optimization problem formulated earlier, a gradient-free sequential...
ROTAX: a nonlinear optimization program by axes rotation method
International Nuclear Information System (INIS)
Suzuki, Tadakazu
1977-09-01
A nonlinear optimization program employing the axes rotation method has been developed for solving nonlinear problems subject to nonlinear inequality constraints, and its stability and convergence efficiency were examined. The axes rotation method is a direct search for the optimum point, rotating the orthogonal coordinate system in the direction giving the minimum objective. The search direction is rotated freely in multi-dimensional space, so the method is effective for problems whose contours have deep curved valleys. In applying the axes rotation method to optimization problems subject to nonlinear inequality constraints, an improved version of R.R. Allran and S.E.J. Johnsen's method is used, which deals with a new objective function composed of the original objective and a penalty term accounting for the inequality constraints. The program is incorporated in the optimization code system SCOOP. (auth.)
Directory of Open Access Journals (Sweden)
Ruisheng Sun
2016-01-01
Full Text Available This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is described as the formulation of a minimum working number of impulses and minimum control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These result in difficulties in finding the global optimum solution by directly applying other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, the PSO mechanism is employed for the optimal setting of impulsive control by taking the time intervals between two neighbouring lateral impulses as design variables, which keeps the optimization process brief. A modification of the basic PSO algorithm is developed to improve the convergence speed of the optimization by linearly decreasing the inertia weight. In addition, a suboptimal control and guidance law based on the PSO technique is put forward for real-time online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamic model is used to validate the modified PSO control algorithm. The results of the comparative study illustrate that the proposed optimal control algorithm obtains the optimal control efficiently and accurately and provides a reference approach to handling such impulsive-correction problems.
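The linearly-decreasing-inertia modification described above can be sketched with a minimal PSO. The objective and all parameter values are assumed toy choices for illustration, not the paper's projectile model:

```python
import random

random.seed(0)

def cost(x):
    # toy objective (assumed): sphere function, minimum 0 at the origin
    return sum(v * v for v in x)

DIM, SWARM, ITERS = 2, 20, 60
W_MAX, W_MIN, C1, C2 = 0.9, 0.4, 2.0, 2.0

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
pbest_f = [cost(p) for p in pos]
g = min(range(SWARM), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g][:], pbest_f[g]

for t in range(ITERS):
    # the abstract's modification: inertia weight decreases linearly
    w = W_MAX - (W_MAX - W_MIN) * t / (ITERS - 1)
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        f = cost(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], f
            if f < gbest_f:
                gbest, gbest_f = pos[i][:], f
```

The large early inertia favours exploration; the small late inertia favours local refinement, which is why the schedule speeds convergence.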
SOLVING ENGINEERING OPTIMIZATION PROBLEMS WITH THE SWARM INTELLIGENCE METHODS
Directory of Open Access Journals (Sweden)
V. Panteleev Andrei
2017-01-01
Full Text Available An important stage in the design of aerospace vehicles and aerostructures is the optimization of their main characteristics. The results of four constrained optimization problems related to the design of various technical systems are presented: determining the best parameters of welded beams, a pressure vessel, a gear, and a spring. The purpose of each task is to minimize the cost and weight of the construction. The objective functions in these practical optimization problems are nonlinear functions of many variables with complex, heavily indented level surfaces, so classical approaches to extremum seeking are not efficient. Hence the need for optimization methods that find a near-optimal solution in an acceptable amount of time with minimal use of computing power. Such methods include the methods of Swarm Intelligence: the spiral dynamics algorithm, stochastic diffusion search, and the hybrid seeker optimization algorithm. Swarm Intelligence methods are designed so that a swarm of agents carries out the search for the extremum. In searching for the point of extremum, the particles exchange information and draw on their own experience as well as the experience of the population leader and of neighbours in some area. To solve the listed problems, a program complex has been designed, whose efficiency is illustrated by the solutions of the four applied problems. Each of the considered applied optimization problems is solved with all three chosen methods, and the obtained numerical results can be compared with those found by the particle swarm method. The author gives recommendations on how to choose method parameters and penalty function values accounting for inequality constraints.
A class of trust-region methods for parallel optimization
Energy Technology Data Exchange (ETDEWEB)
P. D. Hough; J. C. Meza
1999-03-01
The authors present a new class of optimization methods that incorporates a Parallel Direct Search (PDS) method within a trust-region Newton framework. This approach combines the inherent parallelism of PDS with the rapid and robust convergence properties of Newton methods. Numerical tests have yielded favorable results for both standard test problems and engineering applications. In addition, the new method appears to be more robust in the presence of noisy functions that are inherent in many engineering simulations.
RELATIVE CAMERA POSE ESTIMATION METHOD USING OPTIMIZATION ON THE MANIFOLD
Directory of Open Access Journals (Sweden)
C. Cheng
2017-05-01
Full Text Available To solve the problem of relative camera pose estimation, a method using optimization on the manifold is proposed. First, going from the maximum-a-posteriori (MAP) model to the nonlinear least squares (NLS) model, a general state estimation model using optimization is derived. Then the camera pose estimation model is applied to the general state estimation model, with the rigid body transformation parameterized by a Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail and the optimization model of the rigid body transformation is thus established. Experimental results show that, compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler-angle parameterization of rotation. Thus the proposed method can estimate relative camera pose with high accuracy and robustness.
A Finite Element Removal Method for 3D Topology Optimization
Directory of Open Access Journals (Sweden)
M. Akif Kütük
2013-01-01
Full Text Available Topology optimization provides great convenience to designers during the design stage in many industrial applications. With this method, designers can obtain a rough model of any part at the beginning of the design stage by defining loading and boundary conditions. At the same time, the optimization can be used for the modification of a product already in use. Lengthy solution time is a disadvantage of this method, which has therefore not become widespread. In order to eliminate this disadvantage, an element removal algorithm has been developed for topology optimization. In this study, the element removal algorithm is applied to 3-dimensional parts, and the results are compared with those available in the related literature. In addition, the effects of the method on solution times are investigated.
Numerical methods for optimal control problems with state constraints
Pytlak, Radosław
1999-01-01
While optimality conditions for optimal control problems with state constraints have been extensively investigated in the literature, the results pertaining to numerical methods are relatively scarce. This book fills the gap by providing a family of new methods. Among others, a novel convergence analysis of optimal control algorithms is introduced. The analysis refers to the topology of relaxed controls only to a limited degree and makes little use of Lagrange multipliers corresponding to state constraints. This approach enables the author to provide global convergence analysis of first-order and superlinearly convergent second-order methods. Further, the implementation aspects of the methods developed in the book are presented and discussed. The results concerning ordinary differential equations are then extended to control problems described by differential-algebraic equations in a comprehensive way for the first time in the literature.
A QFD-based optimization method for a scalable product platform
Luo, Xinggang; Tang, Jiafu; Kwong, C. K.
2010-02-01
In order to incorporate the customer into the early phase of the product development cycle and to better satisfy customers' requirements, this article adopts quality function deployment (QFD) for optimal design of a scalable product platform. A five-step QFD-based method is proposed to determine the optimal values for platform engineering characteristics (ECs) and non-platform ECs of the products within a product family. First of all, the houses of quality (HoQs) for all product variants are developed and a QFD-based optimization approach is used to determine the optimal ECs for each product variant. Sensitivity analysis is performed for each EC with respect to overall customer satisfaction (OCS). Based on the obtained sensitivity indices of ECs, a mathematical model is established to simultaneously optimize the values of the platform and the non-platform ECs. Finally, by comparing and analysing the optimal solutions with different numbers of platform ECs, the ECs with which the worst OCS loss can be avoided are selected as platform ECs. An illustrative example is used to demonstrate the feasibility of this method. A comparison between the proposed method and a two-step approach is conducted on the example. The comparison shows that, as a kind of single-stage approach, the proposed method yields a better average degree of customer satisfaction due to the simultaneous optimization of platform and non-platform ECs.
Optimal Combination of Aircraft Maintenance Tasks by a Novel Simplex Optimization Method
Directory of Open Access Journals (Sweden)
Huaiyuan Li
2015-01-01
Full Text Available Combining maintenance tasks into work packages is not only necessary for arranging maintenance activities but also critical for the reduction of maintenance cost. In order to optimize the combination of maintenance tasks with a fuzzy C-means clustering algorithm, an improved fuzzy C-means clustering model is introduced in this paper. To reduce the dimension, variables representing cluster centers are eliminated in the improved model, so that it can be solved directly by an optimization method. To optimize the clustering model, a novel nonlinear simplex optimization method is also proposed. The novel method searches along all rays emitting from the center to each vertex; these n+1 search directions form a positive basis. The algorithm has both theoretical convergence and good experimental performance. Taking the optimal combination of some maintenance tasks of a certain aircraft as an instance, the novel simplex optimization method and the clustering model both exhibit excellent performance.
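The positive-basis idea underlying such direct searches can be illustrated with a related sketch: compass search, which uses the 2n coordinate directions (a positive basis, though not the paper's n+1 simplex rays). The objective below is an assumed toy:

```python
# toy objective (assumed): minimum at (1, -2)
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

x, step = [0.0, 0.0], 1.0
# the 2n compass directions form a positive basis of R^n:
# every direction has a positive component along at least one of them
dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]]
for _ in range(200):
    improved = False
    for d in dirs:
        y = [x[i] + step * d[i] for i in range(2)]
        if f(y) < f(x):
            x, improved = y, True
            break
    if not improved:
        step *= 0.5            # shrink the pattern when no direction improves
        if step < 1e-8:
            break
```

A positive basis guarantees that, at a non-stationary point, at least one search direction is a descent direction, which is what gives these methods their convergence theory.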
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research
Optimization method for quantitative calculation of clay minerals in soil
Indian Academy of Sciences (India)
Therefore, we recommend employing Matlab to solve equations of the kind discussed here. In conclusion, using optimization methods to calculate the clay mineral contents in soil is viable based on the chemical analysis data. Further studies combining this method with X-ray diffraction, differential thermal, and infrared ...
Primal-Dual Interior Point Multigrid Method for Topology Optimization
Czech Academy of Sciences Publication Activity Database
Kočvara, Michal; Mohammed, S.
2016-01-01
Roč. 38, č. 5 (2016), B685-B709 ISSN 1064-8275 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : topology optimization * multigrid methods * interior point method Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf
Optimal layout of radiological environment monitoring based on TOPSIS method
International Nuclear Information System (INIS)
Li Sufen; Zhou Chunlin
2006-01-01
TOPSIS is a method for multi-objective decision-making that can be applied to the comprehensive assessment of environmental quality. This paper adopts it to obtain the optimal layout of radiological environment monitoring. The method is shown to be correct, simple, convenient and practical, and it helps supervision departments to lay out radiological environment monitoring sites scientifically and reasonably. (authors)
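For readers unfamiliar with the technique, the core TOPSIS calculation (vector normalization, weighting, ideal and anti-ideal points, and relative closeness) is short enough to sketch. The candidate-layout scores, weights, and benefit/cost labels below are hypothetical illustrations, not data from the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: m alternatives x n criteria; weights sum to 1;
    benefit[j] is True if criterion j is 'larger is better'."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # vector-normalize each criterion column, then apply weights
    V = w * X / np.linalg.norm(X, axis=0)
    # ideal and anti-ideal solutions, criterion by criterion
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)      # closeness: 1 = best possible

# hypothetical monitoring-site layouts: coverage (benefit), cost (cost),
# accessibility (benefit)
scores = topsis([[8, 2, 5],
                 [6, 4, 9],
                 [9, 1, 7]],
                weights=[0.5, 0.2, 0.3],
                benefit=[True, False, True])
best = int(np.argmax(scores))
```

The alternative with the largest closeness coefficient is selected; here that is the third layout.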
Optimization method for quantitative calculation of clay minerals in soil
Indian Academy of Sciences (India)
In this study, an attempt was made to propose an optimization method for the quantitative determination of clay minerals in soil based on bulk chemical composition data. The fundamental principles and processes of the calculation are elucidated. Some samples were used for reliability verification of the method and the ...
Cost optimal river dike design using probabilistic methods
Bischiniotis, K.; Kanning, W.; Jonkman, S.N.
2014-01-01
This research focuses on the optimization of river dikes using probabilistic methods. Its aim is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and
A new method of preparing single-walled carbon nanotubes
Indian Academy of Sciences (India)
A novel method of purification for single-walled carbon nanotubes, prepared by an arc-discharge method, is described. The method involves a combination of acid washing followed by high-temperature hydrogen treatment to remove the metal nanoparticles and amorphous carbon present in the as-synthesized single-walled ...
Directory of Open Access Journals (Sweden)
Mehdi Neshat
2015-11-01
Full Text Available In this article, the objective is to present effective and optimal strategies for improving the Swallow Swarm Optimization (SSO) method. The SSO is a swarm intelligence optimization method inspired by the intelligent behaviors of swallows, and it offers a relatively strong approach to solving optimization problems. Despite its many advantages, however, the SSO suffers from two shortcomings. First, particle movement speed is not controlled satisfactorily during the search because of the lack of an inertia weight. Second, the acceleration coefficients cannot strike a balance between local and global search because they are not sufficiently flexible in complex environments. As a result, the SSO algorithm performs inadequately on functions such as the Step or Quadric function. The fuzzy adaptive Swallow Swarm Optimization (FASSO) method is therefore introduced to deal with these problems. It achieves highly accurate results by using an adaptive inertia weight and by combining two fuzzy logic systems to accurately calculate the acceleration coefficients. High convergence speed, avoidance of local extrema, and a high level of error tolerance are the advantages of the proposed method. The FASSO was compared with eleven of the best PSO methods and with SSO on 18 benchmark functions, and significant results were obtained.
A solution quality assessment method for swarm intelligence optimization algorithms.
Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua
2014-01-01
Nowadays, swarm intelligence optimization has become an important optimization tool that is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solutions an algorithm obtains for practical problems; this greatly limits its application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion. The feasible solutions are clustered according to distance in order to divide the solution samples into several parts; the solution space and the "good enough" set can then be decomposed based on the clustering results. Finally, using statistical methods, the evaluation result is obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
A Combined Method in Parameters Optimization of Hydrocyclone
Directory of Open Access Journals (Sweden)
Jing-an Feng
2016-01-01
Full Text Available To achieve efficient separation of calcium hydroxide and impurities in carbide slag using a hydrocyclone, the physical granularity of the carbide slag and the hydrocyclone operating parameters, slurry concentration and inlet slurry velocity, are optimized. The optimization combines the Design of Experiments (DOE) method with the Computational Fluid Dynamics (CFD) method. Using Design Expert software, a central composite design (CCD) with three factors and five levels, amounting to 20 test runs, was constructed, and the experiments were performed with the numerical simulation software FLUENT. Through analysis of variance of the simulation results, regression equations were obtained for the pressure drop, overflow concentration, purity, and the separation efficiencies of the two solid phases, and the influence of each factor on these responses was analyzed. Finally, optimized results were obtained by the multiobjective optimization method in the Design Expert software. Under the optimized conditions, a validation test by numerical simulation and a separation experiment were carried out separately. The results proved that the combined method can be used efficiently to study the hydrocyclone and that it performs well in engineering applications.
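The central composite design mentioned above (three factors, five levels, 20 runs) is the standard CCD layout: 8 factorial corners, 6 axial (star) points, and 6 center replicates. A generic sketch in coded units follows; Design Expert's actual defaults (axial distance, centering) may differ, so treat the `alpha` choice as an assumption.

```python
import itertools
import numpy as np

def central_composite_design(k, n_center=6, alpha=None):
    """Coded CCD points for k factors: 2^k factorial corners,
    2k axial points at +/-alpha, and n_center center runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable-design convention
    corners = list(itertools.product([-1.0, 1.0], repeat=k))
    axial = []
    for j in range(k):
        for s in (-alpha, alpha):
            p = [0.0] * k
            p[j] = s                      # one factor at its star level
            axial.append(tuple(p))
    center = [(0.0,) * k] * n_center
    return np.array(corners + axial + center)

design = central_composite_design(3)      # 8 + 6 + 6 = 20 runs, 5 coded levels
```

Each factor takes five coded levels (-alpha, -1, 0, +1, +alpha), which is exactly the "three factors and five levels" structure of the abstract.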
Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie
2014-01-01
An optimization method for condition-based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. The optimization problem of fleet CBM with lower maintenance cost and dispatch risk is then translated into a combinatorial optimization problem over single-aircraft strategies. The remaining useful life (RUL) distribution of each key line-replaceable module (LRM) is transformed into a failure probability for the aircraft, and a fleet health status matrix is established. A method is given for calculating mission costs and risks from the health status matrix and the maintenance matrix. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that it can realize optimization and control of an aircraft fleet oriented to mission success.
A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization
International Nuclear Information System (INIS)
Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.
2016-01-01
Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing Keff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging problems of nuclear engineering. In recent decades, several meta-heuristic algorithms and computational intelligence methods have been developed to optimize the reactor core loading pattern. This paper presents a new method that uses the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions: following Newtonian physics, the searcher agents are a collection of masses. In this work, the GSA is first compared with other meta-heuristic algorithms on the Shekel's Foxholes problem. In the second step, to find the best core, the GSA is applied to three PWR test cases, including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimization with the following goals is considered: increasing the multiplication factor (Keff), decreasing the power peaking factor (PPF), and flattening the power density. For the neutronic calculations, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
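The GSA update rules (fitness-derived masses, a decaying gravitational constant, and force-based accelerations) can be sketched in a few lines. This is a generic continuous-minimization sketch on a toy function, not the paper's ICFMO setup; the parameter values and objective are assumptions.

```python
import numpy as np

def gsa(f, bounds, n_agents=30, iters=200, g0=100.0, beta=20.0, seed=0):
    """Minimal Gravitational Search Algorithm sketch (minimization).
    Agents attract one another with 'gravity' scaled by fitness-derived mass."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[np.argmin(fit)].copy()
        # normalized masses: best agent -> 1, worst -> 0
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-beta * t / iters)       # decaying gravitational constant
        A = np.zeros_like(X)
        for a in range(n_agents):
            for b in range(n_agents):
                if a == b:
                    continue
                diff = X[b] - X[a]
                r = np.linalg.norm(diff) + 1e-12
                A[a] += rng.random() * G * M[b] * diff / r  # F/M_a; own mass cancels
        V = rng.random(X.shape) * V + A          # randomized velocity update
        X = np.clip(X + V, lo, hi)
    return best_x, best_f

x, fx = gsa(lambda v: np.sum(v ** 2),
            (np.array([-5., -5.]), np.array([5., 5.])))
```

As G decays, the swarm's motion damps out, so exploration early in the run gives way to exploitation around the heavier (fitter) agents.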
A novel optimal coordinated control strategy for the updated robot system for single port surgery.
Bai, Weibang; Cao, Qixin; Leng, Chuntao; Cao, Yang; Fujie, Masakatsu G; Pan, Tiewen
2017-09-01
Research into robotic systems for single port surgery (SPS) has become widespread around the world in recent years. A new robot arm system for SPS was developed, but its positioning platform and other hardware components were not efficient, and special features of the surgical robot system made safe and efficient teleoperation difficult. A robot arm is combined and used as the new positioning platform, and remote center motion is realized by a new method using active motion control. A new mapping strategy based on kinematics computation is developed, together with a novel optimal coordinated control strategy in which the system approaches, in real time, a defined anthropopathic criterion configuration; this configuration refers to the customary ease state of human arms and, in particular, to the habitual preparation posture of boxers. The hardware components, control architecture, control system, and mapping strategy of the robotic system have been updated, and the novel optimal coordinated control strategy has been proposed and tested. The new robot system is more dexterous, intelligent, convenient, and safer for preoperative positioning and intraoperative adjustment. The mapping strategy achieves good following and representation for the slave manipulator arms, and the proposed control strategy enables them to complete tasks with higher maneuverability and a lower possibility of self-interference, while remaining singularity-free during teleoperation. Copyright © 2017 John Wiley & Sons, Ltd.
Reverse optimization reconstruction method in non-null aspheric interferometry
Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Shen, Yibing; Bai, Jian
2015-10-01
Aspheric non-null testing achieves more flexible measurements than the null test, but precise calibration of the retrace error has always been difficult. A reverse optimization reconstruction (ROR) method is proposed for retrace error calibration as well as for extraction of the aspheric figure error, based on system modeling. An optimization function is set up on the system model, in which the wavefront data from the experiment are inserted as the optimization objective and the figure error under test in the model is the optimization variable. The optimization is executed by reverse ray tracing in the system model until the test wavefront in the model is consistent with the experimental one; at this point, the surface figure error in the model is considered to be consistent with the experimental one. With Zernike fitting, the aspheric surface figure error is then reconstructed in the form of Zernike polynomials. Numerical simulations with error considerations verify the high accuracy of the ROR method, and a set of experiments demonstrates its validity and repeatability. Compared with the results of a Zygo interferometer (null test), the measurement error of the ROR method is better than 1/10λ.
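The reverse-optimization loop (adjust the modelled figure error until the modelled wavefront reproduces the measured one) can be illustrated with a toy forward model standing in for the real ray-traced interferometer model. Everything below, the basis, the fixed retrace term, and the coefficient values, is a hypothetical construction, not the paper's system model.

```python
import numpy as np

def reconstruct_figure_error(measured_wf, forward_model, n_coef,
                             lr=0.05, iters=2000):
    """Reverse-optimization sketch: adjust figure-error coefficients in a
    system model until the modelled wavefront matches the measured one."""
    c = np.zeros(n_coef)
    for _ in range(iters):
        r = forward_model(c) - measured_wf        # wavefront residual
        # finite-difference gradient of 0.5*||r||^2 w.r.t. coefficients
        g = np.zeros(n_coef)
        for j in range(n_coef):
            dc = np.zeros(n_coef)
            dc[j] = 1e-6
            g[j] = np.dot(forward_model(c + dc) - forward_model(c), r) / 1e-6
        c -= lr * g
        if np.linalg.norm(g) < 1e-10:             # wavefronts agree: stop
            break
    return c

# toy forward model: wavefront = double-pass figure error on three modes
# plus a fixed retrace term (all values hypothetical)
basis = np.array([[1., 0., 0., 1.], [0., 1., 1., 0.], [1., 1., 0., 0.]])
retrace = np.array([0.1, -0.2, 0.05, 0.0])
model = lambda c: 2.0 * c @ basis + retrace
true_c = np.array([0.03, -0.01, 0.02])
measured = model(true_c)
rec = reconstruct_figure_error(measured, model, 3)
```

Because the retrace term is part of the forward model rather than subtracted from the data, the retrace error is calibrated implicitly, which is the essential point of the ROR scheme.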
Structural Topology Optimization Based on the Smoothed Finite Element Method
Directory of Open Access Journals (Sweden)
Vahid Shobeiri
Full Text Available Abstract In this paper, the smoothed finite element method, incorporated with the level set method, is employed to carry out the topology optimization of continuum structures. The structural compliance is minimized subject to a constraint on the weight of the material used. The cell-based smoothed finite element method is employed to improve the accuracy and stability of the standard finite element method. Several numerical examples are presented to prove the validity and utility of the proposed method. The obtained results are compared with those of several standard finite element-based examples in order to assess the applicability and effectiveness of the proposed method. Common numerical instabilities of structural topology optimization problems, such as checkerboard patterns and mesh dependency, are also studied in the examples.
Optimization method for electron beam melting and refining of metals
Donchev, Veliko; Vutova, Katia
2014-03-01
Pure metals and special alloys obtained by electron beam melting and refining (EBMR) in vacuum, using an electron beam as the heating source, have many applications in the nuclear and aerospace industries, electronics, medicine, etc. An analytical optimization problem for the EBMR process based on a mathematical heat model is proposed. The criterion used is the minimization of an integral functional of a partial derivative of the temperature in the metal sample. The investigated technological parameters are the electron beam power, the beam radius, the metal casting velocity, etc. The optimization problem is discretized using a non-stationary heat model with a corresponding adapted Peaceman-Rachford numerical scheme developed by us, together with a multidimensional trapezoidal rule. A discrete optimization problem is thus built in which the criterion is a function of the technological process parameters. The discrete problem is solved heuristically by a cluster optimization method, and corresponding software for the optimization task has been developed. The proposed optimization scheme can be applied to improve the quality of pure metals (Ta, Ti, Cu, etc.) produced by the modern and ecologically friendly EBMR process.
Pixel-based learning method for an optimized photomask in optical lithography
Jeong, Moongyu; Hahn, Jae W.
2017-10-01
Circuit design is driven to the physical limit, and patterns on a wafer therefore suffer serious distortion due to the optical proximity effect. Advanced computational methods have been recommended for photomask optimization to solve this problem, but they entail extremely high computational costs, leading to lengthy run times and complex set-up processes. This study proposes a pixel-based learning method that can be used as a predictor of the optimized photomask. Optimized masks are prepared by a commercial tool, and feature vectors and target label values are extracted from them. The feature vectors are composed of the partial signals that are also used in simulation, observed at the centers of the pixels; the target label values are determined by the presence of mask polygons at the pixel locations. A single-hidden-layer artificial neural network (ANN) is trained to learn the optimized masks, using a stochastic gradient method to handle about 2 million training samples. The masks predicted by the ANN show an average edge placement error of 1.3 nm, exceeding that of the optimized mask by 1.0 nm, and an average process variation band of 4.8 nm, which is lower than that of the optimized mask by 0.1 nm.
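The learning setup described above, a single-hidden-layer network trained by plain stochastic gradient descent to map per-pixel features to a binary mask label, can be sketched in numpy. The toy features and labels below are hypothetical stand-ins for the partial-signal vectors and polygon labels; the architecture and update rule are generic, not the paper's exact configuration.

```python
import numpy as np

def train_mask_predictor(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """Single-hidden-layer ANN trained by per-sample SGD with
    cross-entropy loss: features -> probability that the mask pixel is on."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, hidden)
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):        # stochastic: one sample at a time
            h = np.tanh(X[i] @ W1 + b1)
            p = sig(h @ W2 + b2)
            d2 = p - y[i]                        # dLoss/dlogit for cross-entropy
            d1 = d2 * W2 * (1 - h ** 2)          # backprop through tanh layer
            W2 -= lr * d2 * h
            b2 -= lr * d2
            W1 -= lr * np.outer(X[i], d1)
            b1 -= lr * d1
    return lambda Z: (sig(np.tanh(Z @ W1 + b1) @ W2 + b2) > 0.5).astype(int)

# toy 'partial signal' features with an XOR-like decision (hypothetical data)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
predict = train_mask_predictor(X, y)
```

The hidden layer is what lets the predictor capture label patterns that are not linearly separable in the feature space, as in this XOR-like toy target.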
Directory of Open Access Journals (Sweden)
Fujisawa Hironori
2010-05-01
Full Text Available Abstract Background High-density oligonucleotide arrays are effective tools for genotyping numerous loci simultaneously. In small genome species (genome size: Results We compared the single feature polymorphism (SFP) detection performance of whole-genome and transcript hybridizations using the Affymetrix GeneChip® Rice Genome Array, using rice cultivars with fully sequenced genomes, the japonica cultivar Nipponbare and the indica cultivar 93-11. Both genomes were surveyed for all probe target sequences. Only completely matched 25-mer single-copy probes of the Nipponbare genome were extracted, and SFPs between them and 93-11 sequences were predicted. We investigated optimum conditions for SFP detection in both whole-genome and transcript hybridization using the differences between perfect-match and mismatch probe intensities of non-polymorphic targets, assuming that these differences are representative of those between mismatch and perfect targets. Several statistical methods of SFP detection by whole-genome hybridization were compared under the optimized conditions, and the causes of false positives and negatives in SFP detection in both types of hybridization were investigated. Conclusions The optimizations allowed a more than 20% increase in true SFP detection in whole-genome hybridization and a large improvement of SFP detection performance in transcript hybridization. Significance analysis of the microarray for log-transformed raw intensities of PM probes gave the best performance in whole-genome hybridization, and 22,936 true SFPs were detected with 23.58% false positives. For transcript hybridization, stable SFP detection was achieved for highly expressed genes, and about 3,500 SFPs were detected at high sensitivity (> 50%) in both shoot and young panicle transcripts. The high SFP detection performance of both genome and transcript hybridizations indicated that microarrays of a complex genome (e.g., of Oryza sativa) can be
METHOD OF CALCULATING THE OPTIMAL HEAT EMISSION GEOTHERMAL WELLS
Directory of Open Access Journals (Sweden)
A. I. Akaev
2015-01-01
Full Text Available This paper presents a simplified method for calculating the optimal regimes of fountain and pumping exploitation of geothermal wells, reducing scaling and corrosion during operation. Comparative characteristics are given to quantify the heat extracted from the formation by these operating methods under the same wellhead pressure. The problem is solved by a graphic-analytical method based on the pressure balance in the well with a heat pump.
Variable Metric Methods for Unconstrained Optimization and Nonlinear Least Squares
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Spedicato, E.
2000-01-01
Roč. 124, č. 1-2 (2000), s. 61-95 ISSN 0377-0427 R&D Projects: GA ČR GA201/00/0080 Institutional research plan: AV0Z1030915 Keywords : quasi-Newton methods * variable metric methods * unconstrained optimization * nonlinear least squares * sparse problems * partially separable problems * limited-memory methods Subject RIV: BA - General Mathematics Impact factor: 0.455, year: 2000
Optimization of Inventories for Multiple Companies by Fuzzy Control Method
Kawase, Koichi; Konishi, Masami; Imai, Jun
2008-01-01
In this research, fuzzy control theory is applied to inventory control in the supply chain between multiple companies. The proposed control method deals with the amount of inventories in the supply chain between the companies. Referring to past demand and tardiness, the inventory amounts of raw materials are determined by fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain with the simulated annealing (SA) method. The variation of ...
Optimization methods and silicon solar cell numerical models
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
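The key concern in this abstract, keeping the number of expensive model evaluations small, is well illustrated by a derivative-free line search that reuses one of its two bracket evaluations at every step. The quadratic `efficiency` surrogate below is a hypothetical stand-in for a SCAP1D run over one design variable (cell thickness), not the actual coupled optimizer.

```python
def golden_section_maximize(f, lo, hi, tol=1e-6):
    """Derivative-free 1-D maximizer needing only one new model evaluation
    per iteration -- a cheap strategy when each call is a costly simulation."""
    invphi = (5 ** 0.5 - 1) / 2            # 1/golden ratio ~ 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:                        # maximum lies in [a, d]
            b, d, fd = d, c, fc            # reuse old c as the new d
            c = b - invphi * (b - a)
            fc = f(c)
        else:                              # maximum lies in [c, b]
            a, c, fc = c, d, fd            # reuse old d as the new c
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

calls = 0
def efficiency(thickness):                 # hypothetical cell-efficiency surrogate
    global calls
    calls += 1
    return 20.0 - (thickness - 180.0) ** 2 / 500.0

best_t = golden_section_maximize(efficiency, 100.0, 300.0)
```

Shrinking a 200-unit interval to 1e-6 takes about 40 evaluations, far fewer than a naive grid sweep at the same resolution, which is the kind of call-count economy the coupled optimizer needs.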
Investigation of Optimal Integrated Circuit Raster Image Vectorization Method
Directory of Open Access Journals (Sweden)
Leonas Jasevičius
2011-03-01
Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract the layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to achieve the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial raster image skeleton filter selection was assessed. Article in Lithuanian
Optimal mesh hierarchies in Multilevel Monte Carlo methods
Von Schwerin, Erik
2016-01-08
I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on the parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N. Collier, A.-L. Haji-Ali, F. Nobile, and R. Tempone.
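A minimal MLMC estimator with a geometric mesh hierarchy makes the setting concrete: fine and coarse Euler paths of a geometric Brownian motion are coupled by sharing Brownian increments, and each level contributes a correction term. The SDE, parameters, and per-level sample allocation below are illustrative choices, not taken from the talk.

```python
import numpy as np

def mlmc_gbm(L, n_samples, T=1.0, mu=0.05, sigma=0.2, x0=1.0, seed=0):
    """Multilevel Monte Carlo sketch for E[X_T] of dX = mu*X dt + sigma*X dW,
    with geometric mesh hierarchy h_l = T / 2^l and coupled Euler paths."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        n = n_samples[l]
        nf = 2 ** l                            # fine steps on level l
        dW = rng.normal(0.0, np.sqrt(T / nf), size=(n, nf))
        xf = np.full(n, x0)
        for k in range(nf):                    # fine Euler path
            xf = xf + mu * xf * (T / nf) + sigma * xf * dW[:, k]
        if l == 0:
            est += xf.mean()                   # plain MC on the coarsest level
        else:
            nc = nf // 2
            xc = np.full(n, x0)
            for k in range(nc):                # coarse path, same Brownian motion
                dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
                xc = xc + mu * xc * (T / nc) + sigma * xc * dWc
            est += (xf - xc).mean()            # level-l correction E[P_l - P_{l-1}]
    return est

# exact answer for comparison: E[X_T] = x0 * exp(mu * T)
est = mlmc_gbm(L=4, n_samples=[20000, 10000, 5000, 2500, 1250])
```

The decaying sample counts per level reflect the usual MLMC allocation: correction variances shrink with the mesh size, so the expensive fine levels need far fewer samples.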
Coordinated Optimal Operation Method of the Regional Energy Internet
Directory of Open Access Journals (Sweden)
Rishang Long
2017-05-01
Full Text Available The development of the energy internet has become one of the key ways to address the energy crisis. This paper studies the system architecture, energy flow characteristics and coordinated optimization method of the regional energy internet. Considering the heat-to-electric ratio of a combined cooling, heating and power unit, energy storage life and real-time electricity prices, a double-layer optimal scheduling model is proposed, which includes economic and environmental benefit in the upper layer and energy efficiency in the lower layer. A particle swarm optimizer with individual-variation ant colony optimization is used to achieve good computational efficiency and accuracy. Through calculation and simulation of the test system, energy savings, environmental protection and an economically optimal dispatching scheme are realized.
Directory of Open Access Journals (Sweden)
Bin He
2014-01-01
Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing of a platoon should be shortened when crossing the street. The best way to deal with this problem is automatic control of the vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement, and a systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the model is complex and the parameters are coupled with each other. The particle swarm optimization method is therefore introduced to optimize the platoon parameters effectively. The proposed method finds the optimal parameters based on simulations and makes the platoon spacing shorter.
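A basic particle swarm optimizer with an inertia weight, of the kind used for such parameter calibration, can be sketched as follows. The two "gains" and the spacing-error objective are hypothetical placeholders for the actual coupled platoon parameters and the simulation-based cost.

```python
import numpy as np

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic PSO (minimization): inertia weight w plus cognitive (c1)
    and social (c2) attraction toward personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    gval = pval.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pval                    # update personal bests
        pbest[improved], pval[improved] = x[improved], fx[improved]
        if pval.min() < gval:                   # update global best
            gval = pval.min()
            g = pbest[np.argmin(pval)].copy()
    return g, gval

# calibrate two hypothetical gains by minimizing a toy spacing-error measure
err = lambda p: (p[0] - 1.2) ** 2 + (p[1] - 0.4) ** 2
gains, best = pso(err, np.array([0., 0.]), np.array([5., 5.]))
```

In the paper's setting, `err` would be replaced by a platoon simulation returning a spacing/tracking cost for a candidate parameter set.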
Directory of Open Access Journals (Sweden)
Guo-Qiang Zeng
2014-01-01
Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems, but its applications to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO are the generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and unconditional acceptance of the new population. Experimental results on 10 benchmark test functions of dimension N=30 show that IRPEO is competitive with, or even better than, recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by experimental results on some benchmark functions.
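The key EO operations listed in the abstract, power-law selection of a bad element, uniform random mutation, and unconditional acceptance, can be sketched as follows. This is a generic illustration of the extremal-optimization principle; the objective, population size, and tau value are assumptions, not the paper's IRPEO configuration.

```python
import numpy as np

def eo_sketch(f, lo, hi, pop=20, iters=3000, tau=1.5, seed=0):
    """Population-based extremal-optimization sketch (minimization): rank the
    population from worst to best, pick an individual by a power law over
    ranks P(k) ~ k^-tau (rank 1 = worst), and replace it by uniform random
    mutation, accepting the new population unconditionally."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, len(lo)))
    best_x, best_f = None, np.inf
    p = np.arange(1, pop + 1, dtype=float) ** (-tau)   # rank probabilities
    p /= p.sum()
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:                 # elitist memory of the best seen
            best_f = fit.min()
            best_x = X[np.argmin(fit)].copy()
        order = np.argsort(fit)[::-1]          # worst individual first
        k = rng.choice(pop, p=p)               # mostly (not always) a bad rank
        X[order[k]] = rng.uniform(lo, hi, len(lo))     # uniform random mutation
    return best_x, best_f

x, fx = eo_sketch(lambda v: np.sum(v ** 2),
                  np.array([-5., -5.]), np.array([5., 5.]))
```

The power law is the distinctive EO ingredient: the worst elements are replaced most often, but good elements occasionally mutate too, which keeps the search from stagnating.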
Method: a single nucleotide polymorphism genotyping method for Wheat streak mosaic virus
2012-01-01
Background The September 11, 2001 attacks on the World Trade Center and the Pentagon increased concern about the potential for terrorist attacks on many vulnerable sectors of the US, including agriculture. The concentrated nature of crops, easily obtainable biological agents, and highly detrimental impacts make agroterrorism a potential threat. Although procedures for an effective criminal investigation and attribution following such an attack are available, important enhancements are still needed, one of which is the capability for fine discrimination among pathogen strains. The purpose of this study was to develop a molecular typing assay for use in a forensic investigation, using Wheat streak mosaic virus (WSMV) as a model plant virus. Method This genotyping technique utilizes single base primer extension to generate a genetic fingerprint. Fifteen single nucleotide polymorphisms (SNPs) within the coat protein and helper component-protease genes were selected as the genetic markers for this assay. Assay optimization and sensitivity testing were conducted using synthetic targets. WSMV strains and field isolates were collected from regions around the world and used to evaluate the discriminating power of the assay. Assay specificity was tested against a panel of genetic and environmental near-neighbors. Result Each WSMV strain or field isolate tested produced a unique SNP fingerprint, with the exception of three isolates collected from the same geographic location, which produced indistinguishable fingerprints. The results were consistent among replicates, demonstrating the reproducibility of the assay. No SNP fingerprints were generated from organisms in the near-neighbor panel, suggesting that the assay is specific for WSMV. Using synthetic targets, a complete profile could be generated from as little as 7.15 fmoles of cDNA. Conclusion The molecular typing method presented is one tool that could be incorporated into the forensic
Topology optimization of hyperelastic structures using a level set method
Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.
2017-12-01
Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.
Panorama parking assistant system with improved particle swarm optimization method
Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong
2013-10-01
A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
An Asymmetrical Space Vector Method for Single Phase Induction Motor
DEFF Research Database (Denmark)
Cui, Yuanhai; Blaabjerg, Frede; Andersen, Gert Karmisholt
2002-01-01
Single phase induction motors are the workhorses of low-power applications worldwide, and variable speed operation is often required. Normally it is achieved either by mechanical methods or by controlling the capacitor connected to the auxiliary winding. Each of these methods has drawbacks, which...
Cheap arbitrary high order methods for single integrand SDEs
DEFF Research Database (Denmark)
Debrabant, Kristian; Kværnø, Anne
2017-01-01
For a particular class of Stratonovich SDE problems, here denoted as single integrand SDEs, we prove that by applying a deterministic Runge-Kutta method of order $p_d$ we obtain methods converging in the mean-square and weak sense with order $\lfloor p_d/2\rfloor$. The reason is that the B-series...
Control and Optimization Methods for Electric Smart Grids
Ilić, Marija
2012-01-01
Control and Optimization Methods for Electric Smart Grids brings together leading experts in power, control, and communication systems, and consolidates some of the most promising recent research in smart grid modeling, control, and optimization in hopes of laying the foundation for future advances in this critical field of study. The contents comprise eighteen essays addressing a wide variety of control-theoretic problems for tomorrow’s power grid. Topics covered include: control architectures for power system networks with large-scale penetration of renewable energy and plug-in vehicles; optimal demand response; new modeling methods for electricity markets; control strategies for data centers; cyber-security; and wide-area monitoring and control using synchronized phasor measurements. The authors present theoretical results supported by illustrative examples and practical case studies, making the material comprehensible to a wide audience. The results reflect the exponential transformation that today’s grid is going...
Optimization of MIMO Systems Capacity Using Large Random Matrix Methods
Directory of Open Access Journals (Sweden)
Philippe Loubaton
2012-11-01
Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
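The iterative water-filling idea referenced in this abstract can be illustrated on a simplified diagonal channel. The sketch below allocates power across parallel sub-channels by bisecting on the water level; the setup, function names, and numbers are illustrative assumptions, not the paper's large-matrix formulation.

```python
import numpy as np

def water_filling(gains, total_power):
    """Allocate power across parallel channels with gains g_i to maximize
    sum log2(1 + g_i * p_i) subject to sum p_i = total_power, p_i >= 0.
    Bisects on the water level mu, with p_i = max(mu - 1/g_i, 0)."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()  # mu* lies in [lo, hi]
    for _ in range(100):  # bisection on the water level
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

# three sub-channels; the weakest one receives no power
p = water_filling([2.0, 1.0, 0.1], total_power=3.0)
```

Stronger sub-channels receive more power, and sub-channels whose inverse gain exceeds the water level are switched off entirely.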
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibilities that deteriorate the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed algorithm provided improved results compared to existing methodologies in finding the optimal operating condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed refrigerant based liquefaction process in the natural gas industry.
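The coordinate descent idea underlying the HMCD algorithm can be sketched in a few lines. Here a toy quadratic objective stands in for the Aspen Hysys process model, and the simple adaptive-step scheme is an illustrative simplification, not the authors' hybrid modification.

```python
import numpy as np

def coordinate_descent(f, x, step=1.0, sweeps=100, shrink=0.5):
    """Cyclic coordinate descent with a crude adaptive step:
    try moving each coordinate by +/- step, keep a move if it
    lowers f, and shrink the step when a full sweep fails."""
    x = np.array(x, dtype=float)
    fx = f(x)
    for _ in range(sweeps):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step *= shrink   # no coordinate improved: refine the step
    return x, fx

# minimize a toy quadratic; the optimum is at (1, -2)
xopt, fopt = coordinate_descent(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                                [0.0, 0.0])
```

Because each inner step only changes one decision variable, the same loop works when each evaluation of `f` is an external process simulation, which is why coordinate-wise schemes suit simulator-coupled optimization.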
International Nuclear Information System (INIS)
Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun
2016-01-01
Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed to promise greater efficiency, lower workloads, fewer operator errors, and enhanced operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to its side effects, referred to as Out-of-the-Loop (OOTL) effects, and this is a critical issue that must be resolved. Thus, in order to determine the level of automation that assures the best human operator performance, a quantitative method for optimizing automation is proposed in this paper. To determine appropriate automation levels, the automation rate and the ostracism rate, which quantitatively estimate the positive and negative effects of automation, respectively, are integrated. The integration derives the shortest working time under the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed by redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the
Practical inventory routing: A problem definition and an optimization method
Geiger, Martin Josef; Sevaux, Marc
2011-01-01
The global objective of this work is to provide practical optimization methods to companies involved in inventory routing problems, taking into account this new type of data. Also, companies are sometimes not able to deal with changing plans every period and would like to adopt regular structures for serving customers.
Applying the Taguchi method for optimized fabrication of bovine ...
African Journals Online (AJOL)
The objective of the present study was to optimize the fabrication of bovine serum albumin (BSA) nanoparticle by applying the Taguchi method with characterization of the nanoparticle bioproducts. BSA nanoparticles have been extensively studied in our previous works as suitable carrier for drug delivery, since they are ...
Response surface method to optimize the low cost medium for ...
African Journals Online (AJOL)
A protease producing Bacillus sp. GA CAS10 was isolated from ascidian Phallusia arabica, Tuticorin, Southeast coast of India. Response surface methodology was employed for the optimization of different nutritional and physical factors for the production of protease. Plackett-Burman method was applied to identify ...
Optimization Methods in Operations Research and Systems Analysis
Indian Academy of Sciences (India)
Book Review by V G Tikekar. Resonance – Journal of Science Education, Volume 2, Issue 6, June 1997, pp. 91–92.
Golinko, I. M.; Kovrigo, Yu. M.; Kubrak, A. I.
2014-03-01
An express method for optimally tuning analog PI and PID controllers is considered. An integral quality criterion that minimizes the control output is proposed for optimizing control systems. The suggested criterion differs from existing ones in that the control output applied to the technological process is taken into account in a correct manner, which makes it possible to minimize the expenditure of material and/or energy resources in controlling industrial equipment. With control organized in such a manner, less wear and a longer service life of control devices are achieved. The unimodal nature of the proposed tuning criterion is demonstrated numerically using the methods of optimization theory. A functional interrelation between the optimal controller parameters and the dynamic properties of the controlled plant is determined numerically for a single-loop control system. The results obtained from simulation of transients in a control system using the proposed and existing functional dependences are compared with each other. The proposed calculation formulas differ from existing ones in their simple structure and highly accurate search for the optimal controller tuning parameters, and are recommended for use by automation specialists in the design and optimization of control systems.
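An integral criterion of the kind described, penalizing control effort alongside tracking error, can be sketched as follows. The first-order plant, the cost weights, and the coarse grid search are illustrative assumptions for demonstration, not the authors' formulas.

```python
import numpy as np

def criterion(kp, ki, lam=0.01, T=1.0, K=1.0, dt=0.01, t_end=20.0):
    """Integral cost J = sum (e^2 + lam * u^2) dt for a PI loop around a
    first-order plant  T * dy/dt + y = K * u  tracking a unit step.
    lam weights the control effort; the exact criterion used by the
    authors may differ in form."""
    y, integ, J = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                 # tracking error
        integ += e * dt
        u = kp * e + ki * integ     # PI control law
        y += dt * (K * u - y) / T   # explicit Euler step of the plant
        J += (e * e + lam * u * u) * dt
    return J

# coarse grid search for the best (Kp, Ki) pair
grid = [(kp, ki) for kp in np.linspace(0.5, 5, 10)
                 for ki in np.linspace(0.1, 3, 10)]
best = min(grid, key=lambda pair: criterion(*pair))
```

With `lam = 0`, this reduces to the classical integral-of-squared-error criterion; increasing `lam` trades tracking speed for a gentler control output, which is the mechanism behind the reduced actuator wear claimed above.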
The UIC 406 capacity method used on single track sections
DEFF Research Database (Denmark)
Landex, Alex; Kaas, Anders H.; Jacobsen, Erik M.
2007-01-01
This paper describes the relatively new UIC 406 capacity method, which is an easy and effective way of calculating capacity consumption on railway lines. However, it is possible to expound the method in different ways, which can lead to different capacity consumptions. This paper describes the UIC 406 method for single track lines and how it is expounded in Denmark. Many capacity analyses using the UIC 406 capacity method for double track lines have been carried out and presented internationally, but only few capacity analyses using the UIC 406 capacity method on single track lines have been... ...follow each other in the same direction. In any case, special care has to be taken in how the UIC 406 capacity method is expounded in specific cases. Therefore, this paper discusses where to divide the railway lines into line sections and how crossing stations and junctions and conflicts when entering...
The Design and Optimization of GaAs Single Solar Cells Using the Genetic Algorithm and Silvaco ATLAS
Directory of Open Access Journals (Sweden)
Kamal Attari
2017-01-01
Single-junction solar cells are the most widely available on the market and the simplest in terms of realization and fabrication compared to other solar devices. However, these single-junction solar cells need further development and optimization for higher conversion efficiency. In addition to the doping densities, the trade-offs between the different layers, and their best thickness values, the choice of materials is also an important factor in improving the efficiency. In this paper, an efficient single-junction solar cell model of GaAs is presented and optimized. In the first step, an initial model was simulated and the results were then processed by an algorithm code. In this work, the proposed optimization method is a genetic search algorithm implemented in Matlab that receives ATLAS data to generate a solar cell with optimum output power. Other performance parameters such as the photogeneration rate, external quantum efficiency (EQE), and internal quantum efficiency (IQE) are also obtained. The simulation shows that the proposed method achieves a significantly improved conversion efficiency of 29.7% under AM1.5G illumination. The other results were Jsc = 34.79 mA/cm2, Voc = 1 V, and fill factor (FF) = 85%.
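The genetic search loop described above can be sketched generically. In the sketch below, a toy analytic "efficiency" function stands in for the Silvaco ATLAS simulation, and all parameter names, bounds, and operator settings are invented for illustration.

```python
import random

def genetic_maximize(fitness, bounds, pop_size=30, gens=60,
                     mut_rate=0.3, seed=1):
    """Minimal real-coded genetic algorithm: elitism, tournament
    selection, uniform crossover, and clipped Gaussian mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = max(pop, key=fitness)          # elitism: keep the best
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = [elite[:]]
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            child = [p1[i] if rng.random() < 0.5 else p2[i]
                     for i in range(dim)]
            if rng.random() < mut_rate:        # mutate one gene
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i]
                               + rng.gauss(0.0, 0.1 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy "efficiency" surrogate peaking at (2.0, 17.0) in arbitrary units
best = genetic_maximize(
    lambda x: -((x[0] - 2.0) ** 2 + (x[1] - 17.0) ** 2),
    bounds=[(0.1, 5.0), (15.0, 19.0)])
```

In the paper's workflow the fitness call would instead invoke an ATLAS device simulation, which is why population methods that need only objective values, not gradients, are attractive here.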
Yang, Qi; Zhang, Yanzhu; Zhao, Tiebiao; Chen, YangQuan
2017-04-04
Image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction aims to recover detailed information from low-resolution images and reconstruct them into high-resolution images. Due to the limited amount of data and information retrieved from low-resolution images, it is difficult to restore clear, artifact-free images, while still preserving enough structure of the image such as the texture. This paper presents a new single image super-resolution method which is based on adaptive fractional-order gradient interpolation and reconstruction. The interpolated image gradient via optimal fractional-order gradient is first constructed according to the image similarity and afterwards the minimum energy function is employed to reconstruct the final high-resolution image. Fractional-order gradient based interpolation methods provide an additional degree of freedom which helps optimize the implementation quality due to the fact that an extra free parameter α-order is being used. The proposed method is able to produce a rich texture detail while still being able to maintain structural similarity even under large zoom conditions. Experimental results show that the proposed method performs better than current single image super-resolution techniques. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
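The fractional-order gradient that supplies the extra degree of freedom α can be illustrated with the Grünwald–Letnikov difference in one dimension. This is the generic construction, not the paper's exact interpolation scheme; the signal and parameter values are illustrative.

```python
import numpy as np

def gl_fractional_diff(signal, alpha, n_terms=15):
    """Grünwald-Letnikov fractional-order difference of a 1-D signal:
    D^alpha f[k] ~= sum_j w_j * f[k - j], with weights generated by
    w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j).
    alpha = 1 recovers the ordinary backward difference."""
    w = np.ones(n_terms)
    for j in range(1, n_terms):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.zeros(len(signal))
    for k in range(len(signal)):
        j = np.arange(min(k + 1, n_terms))  # truncated memory window
        out[k] = np.dot(w[j], signal[k - j])
    return out

ramp = np.arange(8, dtype=float)
d1 = gl_fractional_diff(ramp, alpha=1.0)      # ordinary difference
d_half = gl_fractional_diff(ramp, alpha=0.5)  # fractional, alpha = 0.5
```

Sweeping α between 0 and 2 interpolates continuously between smoothing-like and sharpening-like behaviour, which is the tunable trade-off the abstract exploits for texture preservation.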
Incorporating single detector failure into the ROP detector layout optimization for CANDU reactors
International Nuclear Information System (INIS)
Kastanya, Doddy
2015-01-01
Highlights: • ROP TSP value needs to be adjusted when any detector in the system fails. • Single detector failure criterion has been incorporated into the detector layout optimization as a constraint. • Results show that the optimized detector layout is more robust with respect to its vulnerability to a single detector failure. • An early rejection scheme has been introduced to speed-up the optimization process. - Abstract: In CANDU ® reactors, the regional overpower protection (ROP) systems are designed to protect the reactor against overpower in the fuel which could reduce the safety margin-to-dryout. In the CANDU ® 600 MW (CANDU 6) design, there are two ROP systems in the core, each of which is connected to a fast-acting shutdown system. Each ROP system consists of a number of fast-responding, self-powered flux detectors suitably distributed throughout the core within vertical and horizontal flux detector assemblies. The placement of these ROP detectors is a challenging discrete optimization problem. In the past few years, two algorithms, DETPLASA and ADORE, have been developed to optimize the detector layout for the ROP systems in CANDU reactors. These algorithms utilize the simulated annealing (SA) technique to optimize the placement of the detectors in the core. The objective of the optimization process is typically either to maximize the TSP value for a given number of detectors in the system or to minimize the number of detectors in the system to obtain a target TSP value. One measure to determine the robustness of the optimized detector layout is to evaluate the maximum decrease (penalty) in TSP value when any single detector in the system fails. The smaller the penalty, the more robust the design is. Therefore, in order to ensure that the optimized detector layout is robust, the single detector failure (SDF) criterion has been incorporated as an additional constraint into the ADORE algorithm. Results from this study indicate that there is a
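A minimal sketch of the simulated-annealing loop that algorithms such as DETPLASA and ADORE build on is given below. The toy "placement" cost, which merely penalizes clustered detector positions, is invented for illustration and carries none of the reactor physics or TSP evaluation of the actual codes.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, t_end=1e-3,
                        cooling=0.995, seed=0):
    """Generic simulated annealing: accept worse moves with probability
    exp(-dC / T) while the temperature T is geometrically cooled."""
    rng = random.Random(seed)
    c = cost(state)
    best, best_c = state, c
    t = t0
    while t > t_end:
        cand = neighbor(state, rng)
        cc = cost(cand)
        if cc < c or rng.random() < math.exp((c - cc) / t):
            state, c = cand, cc          # accept the move
            if c < best_c:
                best, best_c = state, c  # track the best state seen
        t *= cooling
    return best, best_c

# toy discrete placement: pick 3 of 20 slots, penalizing small gaps
def cost(sel):
    s = sorted(sel)
    return sum(1.0 / (s[i + 1] - s[i]) for i in range(len(s) - 1))

def neighbor(sel, rng):
    sel = list(sel)
    free = [p for p in range(20) if p not in sel]
    sel[rng.randrange(len(sel))] = rng.choice(free)  # move one detector
    return sel

best, best_c = simulated_annealing(cost, neighbor, [0, 1, 2])
```

Constraints like the single detector failure criterion described above enter naturally as extra penalty terms in (or hard rejections by) the cost function, at the price of a more expensive evaluation per candidate layout.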
Error Estimates for Approximate Optimization by the Extended Ritz Method
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra; Sanguineti, M.
2005-01-01
Roč. 15, č. 2 (2005), s. 461-487 ISSN 1052-6234 R&D Projects: GA ČR GA201/02/0428 Institutional research plan: CEZ:AV0Z10300504 Keywords : functional optimization * rates of convergence of suboptimal solutions * (extended) Ritz method * curse of dimensionality * convex best approximation problems * learning from data by kernel methods Subject RIV: BA - General Mathematics Impact factor: 1.238, year: 2005
Hybrid robust predictive optimization method of power system dispatch
Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY
2011-08-02
A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.
Optimization strategies of in-tube extraction (ITEX) methods
Laaks, Jens; Jochmann, Maik A.; Schilling, Beat; Schmidt, Torsten C.
2015-01-01
Microextraction techniques, especially dynamic techniques like in-tube extraction (ITEX), can require an extensive method optimization procedure. This work summarizes the experiences from several methods and gives recommendations for the setting of proper extraction conditions to minimize experimental effort. Therefore, the governing parameters of the extraction and injection stages are discussed. This includes the relative extraction efficiencies of 11 kinds of sorbent tubes, either commerci...
Exergetic optimization of a thermoacoustic engine using the particle swarm optimization method
International Nuclear Information System (INIS)
Chaitou, Hussein; Nika, Philippe
2012-01-01
Highlights: ► Optimization of a thermoacoustic engine using the particle swarm optimization method. ► Exergetic efficiency, acoustic power, and their product are the optimized functions. ► The PSO method is used successfully for the first time in thermoacoustic research. ► The powerful PSO tool deserves wider use in thermoacoustic research and design. ► Optimizing the product of exergetic efficiency and acoustic power is highly recommended when designing new thermoacoustic devices. - Abstract: Thermoacoustic engines convert heat energy into acoustic energy, which can then be used to pump heat or to generate electricity. It is well-known that the acoustic energy, and therefore the exergetic efficiency, depends on parameters such as the stack’s hydraulic radius, the stack’s position in the resonator, and the traveling–standing-wave ratio. In this paper, these three parameters are investigated in order to study and analyze the best values of the produced acoustic energy, the exergetic efficiency, and the product of the acoustic energy and the exergetic efficiency of a thermoacoustic engine with a parallel-plate stack. The dimensionless expressions of the thermoacoustic equations are derived and calculated. Then, the Particle Swarm Optimization (PSO) method is introduced and used for the first time in thermoacoustic research. The use of the PSO method and the optimization of the acoustic energy multiplied by the exergetic efficiency are novel contributions to this domain of research. This paper discusses some significant conclusions which are useful for the design of new thermoacoustic engines.
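For readers new to PSO, the textbook update rule can be sketched as follows. A generic test function replaces the thermoacoustic objective, and all parameter values are conventional defaults rather than the authors' settings.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Textbook PSO: each particle's velocity is pulled toward its own
    best position (c1 term) and the swarm's best position (c2 term),
    with inertia weight w; positions are clipped to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# minimize a shifted sphere function; optimum at (0.3, -0.7)
gbest, gbest_f = pso_minimize(
    lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2,
    bounds=[(-5, 5), (-5, 5)])
```

Maximizing an objective such as exergetic efficiency times acoustic power fits the same loop by minimizing its negative.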
Autonomous guided vehicles methods and models for optimal path planning
Fazlollahtabar, Hamed
2015-01-01
This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...
Compositions and methods for detecting single nucleotide polymorphisms
Energy Technology Data Exchange (ETDEWEB)
Yeh, Hsin-Chih; Werner, James; Martinez, Jennifer S.
2016-11-22
Described herein are nucleic acid based probes and methods for discriminating and detecting single nucleotide variants in nucleic acid molecules (e.g., DNA). The methods use a pair of probes to detect and identify polymorphisms, for example single nucleotide polymorphisms in DNA. The pair of probes emit different fluorescent wavelengths of light depending on the association and alignment of the probes when hybridized to a target nucleic acid molecule. Each pair of probes is capable of discriminating at least two different nucleic acid molecules that differ by at least a single nucleotide. The probes can be used, for example, for the detection of DNA polymorphisms that are indicative of a particular disease or condition.
International Nuclear Information System (INIS)
Gao, Hao
2016-01-01
For treatment planning in intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. The aim of this work is to develop an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-squares cost function and non-negative fluence constraints; the solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked against a quadratic programming approach based on the interior-point (IP) method using the CORT dataset. The comparison results suggest that the ADMM solver achieves similar plan quality with a slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is thus proposed for IMRT or VMAT. (paper)
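The problem structure described, a least-squares cost with non-negative fluence constraints, can be sketched with ADMM on a toy non-negative least-squares problem. Matrix sizes, the penalty parameter ρ, and the iteration count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=500):
    """ADMM for min (1/2)||Ax - b||^2 subject to x >= 0.
    x-update: a linear solve (the system matrix is fixed, so it could
    be factored once); z-update: projection onto the non-negative
    orthant; u: scaled dual variable."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = AtA + rho * np.eye(n)
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L, Atb + rho * (z - u))
        z = np.maximum(x + u, 0.0)   # enforce non-negativity
        u = u + x - z                # dual ascent step
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.maximum(rng.standard_normal(10), 0.0)  # non-negative truth
b = A @ x_true
x_hat = admm_nnls(A, b)
```

The appeal for FMO-sized problems is that each iteration costs only a linear solve plus a projection, and the only tunable quantity is ρ, which is exactly the parameter the abstract's empirical optimization targets.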
Someya, Hiroshi; Yamamura, Masayuki; Sakamoto, Kensaku
This paper discusses DNA-based stochastic optimizations under the constraint that the search starts from a given point in a search space. Generally speaking, a stochastic optimization method explores a search space and finds out the optimum or a sub-optimum after many cycles of trials and errors. This search process could be implemented efficiently by ``molecular computing'', which processes DNA molecules by the techniques of molecular biology to generate and evaluate a vast number of solution candidates at a time. We assume the exploration starting from a single point, and propose a method to embody DNA-based optimization under this constraint, because this method has a promising application in the research field of protein engineering. In this application, a string of nucleotide bases (a base sequence) encodes a protein possessing a specific activity, which could be given as a value of an objective function. Thus, a problem of obtaining a protein with the optimum or a sub-optimum about the desired activity corresponds to a combinatorial problem of obtaining a base sequence giving the optimum or a sub-optimum in the sequence space. Biologists usually modify a base sequence corresponding to a naturally occurring protein into another sequence giving a desired activity. In other words, they explore the space in the proximity of a natural protein as a start point. We first examined if the optimization methods that involve a single start point, such as simulated annealing, Gibbs sampler, and MH algorithms, can be implemented by DNA-based operations. Then, we proposed an application of genetic algorithm, and examined the performance of this application on a model fitness landscape by computer experiments. These experiments gave helpful guidelines in the embodiments of DNA-based stochastic optimization, including a better design of crossover operator.
Development of two dimensional electrophoresis method using single chain DNA
International Nuclear Information System (INIS)
Ikeda, Junichi; Hidaka, So
1998-01-01
By combining a separation method based on molecular weight with a method that distinguishes single-base differences, the aim was to develop a two dimensional electrophoresis method for single chain DNA labeled with a radioisotope (RI). Using differences in the electrophoretic patterns of parent and variant strands, isolation of the root module implantation control gene was investigated. First, a Single Strand Conformation Polymorphism (SSCP) method using a concentration gradient gel was investigated. As a result, it was found that the intervals between double chain and single chain DNAs expanded, but the intervals between the two single chain DNAs did not. Next, a combination of the non-modified acrylamide electrophoresis method and the Denaturing Gradient-Gel Electrophoresis (DGGE) method was examined. As a result, the hybrid DNAs developed by two dimensional electrophoresis were arranged in two lines, but among them a band of DNA modified by a high concentration of urea could not be found. Therefore, no satisfactory result could be obtained in this fiscal year's experiments, and it was concluded that the differences could not be detected by the method used. (G.K.)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods fall into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers have integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiments (DOE) method. This is done by conducting the DOE on two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, both implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method for the hydraulic positioning system.
New library construction method for single-cell genomes.
Directory of Open Access Journals (Sweden)
Larry Xi
A central challenge in sequencing single-cell genomes is determining point mutations accurately, phasing these mutations, and identifying copy number variations with few assumptions. Ideally, this is accomplished with as low a sequencing coverage as possible. Here we report our attempt to meet these goals with a novel library construction and library amplification methodology. In our approach, single-cell genomic DNA is first fragmented with saturated transposition to make a primary library that uniformly covers the whole genome with short fragments. The library is then amplified by a carefully optimized PCR protocol in a uniform and synchronized fashion for next-generation sequencing. Each step of the protocol can be quantitatively characterized. Our shallow sequencing data show that the library is tightly distributed and is useful for the determination of copy number variations.
Flat-top Drop Filter based on a Single Topology Optimized Photonic Crystal Cavity
DEFF Research Database (Denmark)
Frandsen, Lars Hagedorn; Elesin, Yuriy; Guan, Xiaowei
2015-01-01
Outperforming conventional design concepts, a flat-top drop filter has been designed by applying 3D topology optimization to a single waveguide-coupled L3 photonic crystal cavity. Measurements on the design fabricated in silicon-on-insulator material reveal that the pass-band of the drop channel...
Optimization of the crystallizability of a single-chain antibody fragment
Czech Academy of Sciences Publication Activity Database
Škerlová, Jana; Král, Vlastimil; Fábry, Milan; Sedláček, Juraj; Veverka, Václav; Řezáčová, Pavlína
2014-01-01
Roč. 70, č. 12 (2014), s. 1701-1706 ISSN 1744-3091 R&D Projects: GA MŠk(CZ) LK11205 Institutional support: RVO:61388963 ; RVO:68378050 Keywords : single-chain antibody fragment * Thermofluor assay * differential scanning fluorimetry * crystallizability optimization * oligomerization * crystallization Subject RIV: CE - Biochemistry Impact factor: 0.527, year: 2014
Method for harvesting rare earth barium copper oxide single crystals
Todt, Volker R.; Sengupta, Suvankar; Shi, Donglu
1996-01-01
A method of preparing high-temperature superconductor single crystals. The preparation involves preparing precursor materials of a particular composition, heating the precursor material to achieve a peritectic mixture of peritectic liquid and crystals of the high-temperature superconductor, and cooling the peritectic mixture by quenching it directly on a porous, wettable, inert substrate to wick off the peritectic liquid, leaving single crystals of the high-temperature superconductor on the porous substrate. Alternatively, the peritectic mixture can be cooled to a solid mass and reheated on a porous, inert substrate to melt the matrix of peritectic fluid while leaving the crystals intact, allowing the peritectic liquid to be wicked away.
Mathematical programming methods for large-scale topology optimization problems
DEFF Research Database (Denmark)
Rojas Labanda, Susana
for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...... structure of the problem. In both solvers, information of the exact Hessian is considered. A robust iterative method is implemented to efficiently solve large-scale linear systems. Both TopSQP and TopIP have successful results in terms of convergence, number of iterations, and objective function values....... Thanks to the use of the iterative method implemented, TopIP is able to solve large-scale problems with more than three millions degrees of freedom....
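The abstract credits a "robust iterative method" for the large-scale linear systems inside TopSQP and TopIP but does not name it. A standard choice for the symmetric positive-definite systems arising in such solvers is the conjugate gradient iteration; the sketch below is a textbook CG solver offered as an illustration of that class of method, not the thesis's actual code.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x

# small SPD test system (illustrative)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_sol = conjugate_gradient(A, b)
print(x_sol)
```

In exact arithmetic CG converges in at most n steps, which is why it scales to the millions of degrees of freedom the abstract mentions when combined with a good preconditioner.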
Energy Technology Data Exchange (ETDEWEB)
Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu [Department of Biomedical Engineering, Marquette University, Milwaukee, Wisconsin 53201 (United States); Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D. [GE Healthcare, Waukesha, Wisconsin 53188 (United States)
2016-01-15
Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three
Systematic method for optimizing plutonium transmutation in LWRs
Sorensen, Reuben T.
We have developed the Systematic Reactor Optimization in 2-Dimensions (SRO2D) code to maximize the transmutation of plutonium in light water reactors (LWRs). The necessary conditions for optimal fuel and burnable absorber loadings are obtained with Pontryagin's maximum principle and a direct adjoining approach to explicitly account for a power peaking inequality constraint. The resulting set of coupled system, Euler-Lagrange (E-L), and optimality equations are solved iteratively with the method of conjugate gradients until no further improvement is achieved in the objective function. To satisfy the power peaking inequality constraint throughout the operating cycle we have employed a backwards diffusion theory (BDT) technique as part of the conjugate gradient optimization package. The BDT approach establishes a relationship between the burnable absorber loading and the power distribution during the cycle, such that constraint violations are reduced with each conjugate gradient iteration and eventually eliminated. Our in-core optimization methodology has been implemented in the SRO2D code, assuming two-group, two-dimensional neutron diffusion theory. The system equations are solved in a quasi-static fashion forward in time from beginning-of-cycle (BOC) to end-of-cycle (EOC), while the E-L equations are solved backwards in time from EOC to BOC to reflect the adjoint nature of the Lagrange multipliers. Cycle length extension calculations of a first cycle AP600 plant verify our implementation effort, yielding a nearly identical loading pattern to that issued by Westinghouse in the AP600 Safety Analysis Report. Utilizing a self-generated Pu recycling mode, our in-core optimization methodology is coupled with an equilibrium cycle methodology to arrive at an optimized asymptotic Pu inventory and composition. Beginning with a poor loading pattern, our LWR optimization package improves the core performance by reducing the maximum power peaking factor from 2.0 to 1
Optimal treatment cost allocation methods in pollution control
International Nuclear Information System (INIS)
Chen Wenying; Fang Dong; Xue Dazhi
1999-01-01
Total emission control is an effective pollution control strategy. However, Chinese applications of total emission control have lacked reasonable and fair methods for optimal treatment cost allocation, a critical issue in total emission control. The authors consider four approaches to allocating treatment costs. The first is to set up a multiple-objective planning model and solve it using the shortest-distance ideal point method. The second is to define a degree of satisfaction with the cost allocation result for each polluter and to establish a method based on this concept. The third is to apply bargaining and arbitration theory to develop a model. The fourth is to establish a cooperative N-person game model, which can be solved using the Shapley value method, the core method, the Cost Gap Allocation method, or the Minimum Costs-Remaining Savings method. These approaches are compared using a practical case study
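Of the four approaches, the Shapley value for the cooperative game can be shown concretely: each polluter is charged its average marginal contribution to the joint treatment cost over all joining orders. The three-polluter numbers and the economies-of-scale cost function below are hypothetical, invented purely for demonstration.

```python
import math
from itertools import permutations

def shapley(players, cost):
    """Shapley value of a cost game: average marginal cost over all orderings."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += cost(with_p) - cost(coalition)   # marginal cost of joining
            coalition = with_p
    return {p: phi[p] / math.factorial(n) for p in players}

# hypothetical standalone treatment costs for three polluters
standalone = {"A": 60.0, "B": 40.0, "C": 30.0}

def joint_cost(S):
    """Assumed economies of scale: 10% discount per extra coalition member."""
    base = sum(standalone[p] for p in S)
    return base * (1 - 0.1 * (len(S) - 1)) if S else 0.0

alloc = shapley(list(standalone), joint_cost)
print(alloc, "total:", round(sum(alloc.values()), 6))
```

By construction the allocations sum exactly to the grand-coalition cost (the "efficiency" property), which is what makes the Shapley method attractive for fair treatment cost sharing.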
A Brief Introduction to Single-Molecule Fluorescence Methods.
van den Wildenberg, Siet M J L; Prevo, Bram; Peterman, Erwin J G
2018-01-01
One of the more popular single-molecule approaches in biological science is single-molecule fluorescence microscopy, which will be the subject of the following section of this volume. Fluorescence methods provide the sensitivity required to study biology on the single-molecule level, but they also allow access to useful measurable parameters on time and length scales relevant for the biomolecular world. Before several detailed experimental approaches will be addressed, we will first give a general overview of single-molecule fluorescence microscopy. We start with discussing the phenomenon of fluorescence in general and the history of single-molecule fluorescence microscopy. Next, we will review fluorescent probes in more detail and the equipment required to visualize them on the single-molecule level. We will end with a description of parameters measurable with such approaches, ranging from protein counting and tracking, single-molecule localization super-resolution microscopy, to distance measurements with Förster Resonance Energy Transfer and orientation measurements with fluorescence polarization.
An In Vitro Single-Primer Site-Directed Mutagenesis Method for Use in Biotechnology.
Huang, Yanchao; Zhang, Likui
2017-01-01
Site-directed mutagenesis is a powerful method for introducing mutation(s) into DNA sequences. A number of methods have been developed over the years, with the main goal of creating a high proportion of mutant genes. The single-mutagenic-primer method for site-directed mutagenesis is the most direct method, yielding mutant genes in about 25-50% of transformants in a robust, low-cost reaction. The supercompetent XL10-Gold bacteria used in the Stratagene protocol carry a phage, which may be a problem for some applications; in our single-mutagenic-primer method, however, these supercompetent bacteria are not needed. A thermostable DNA polymerase with high fidelity and processivity, such as Phusion DNA polymerase, is required for our optimized procedure to avoid extra mutation(s) and enhance mutagenic efficiency.
Haas, Beth L.; Matson, Jyl S.; DiRita, Victor J.; Biteen, Julie S.
2015-01-01
Single-molecule fluorescence microscopy enables biological investigations inside living cells to achieve millisecond- and nanometer-scale resolution. Although single-molecule-based methods are becoming increasingly accessible to non-experts, optimizing new single-molecule experiments can be challenging, in particular when super-resolution imaging and tracking are applied to live cells. In this review, we summarize common obstacles to live-cell single-molecule microscopy and describe the methods we have developed and applied to overcome these challenges in live bacteria. We examine the choice of fluorophore and labeling scheme, approaches to achieving single-molecule levels of fluorescence, considerations for maintaining cell viability, and strategies for detecting single-molecule signals in the presence of noise and sample drift. We also discuss methods for analyzing single-molecule trajectories and the challenges presented by the finite size of a bacterial cell and the curvature of the bacterial membrane. PMID:25123183
A method for optimizing the performance of buildings
Energy Technology Data Exchange (ETDEWEB)
Pedersen, Frank
2006-07-01
This thesis describes a method for optimizing the performance of buildings. Design decisions made in the early stages of the building design process have a significant impact on the performance of buildings, for instance with respect to energy consumption, economic aspects, and the indoor environment. The method is intended to support design decisions for buildings by combining methods for calculating the performance of buildings with numerical optimization methods. It is able to find optimum values of decision variables representing different features of the building, such as its shape, the amount and type of windows used, and the amount of insulation used in the building envelope. The parties who influence design decisions for buildings, such as building owners, building users, architects, consulting engineers, contractors, etc., often have different and to some extent conflicting requirements for buildings. For instance, the building owner may be more concerned about the cost of constructing the building than about the quality of the indoor climate, which is more likely to be a concern of the building user. In order to support the different types of requirements made by decision-makers for buildings, an optimization problem is formulated that is intended to represent a wide range of design decision problems for buildings. The problem formulation involves so-called performance measures, which can be calculated with building simulation software. For instance, the annual amount of energy required by the building, the cost of constructing the building, and the annual number of hours in which overheating occurs can be used as performance measures. The optimization problem enables the decision-makers to specify many different requirements on the decision variables as well as on the performance of the building. Performance measures can, for instance, be required to assume their minimum or maximum value, or they can be subjected to upper or
Single-ended transition state finding with the growing string method.
Zimmerman, Paul M
2015-04-05
Reaction path finding and transition state (TS) searching are important tasks in computational chemistry. Methods that seek to optimize an evenly distributed set of structures to represent a chemical reaction path are known as double-ended string methods. Such methods can be highly reliable because the endpoints of the string are fixed, which effectively lowers the dimensionality of the reaction path search. String methods, however, require that the reactant and product structures are known beforehand, which limits their ability to systematically explore reactive steps. In this article, a single-ended growing string method (GSM) is introduced which allows reaction path searches starting from a single structure. The method works by sequentially adding nodes along coordinates that drive bonds, angles, and/or torsions to a desired reactive outcome. After the string is grown and an approximate reaction path through the TS is found, string optimization commences and the exact TS is located along with the reaction path. Fast convergence of the string is achieved through the use of internal coordinates and eigenvector optimization schemes combined with Hessian estimates. Comparison to the double-ended GSM shows that the single-ended method can be even more computationally efficient than the already rapid double-ended method. Examples, including transition metal reactivity and a systematic, automated search for unknown reactivity, demonstrate the efficacy of the new method. This automated reaction search is able to find 165 reaction paths from 333 searches for the reaction of NH3BH3 and (LiH)4, all without guidance from user intuition. © 2015 Wiley Periodicals, Inc.
Optimized localization analysis for single-molecule tracking and super-resolution microscopy
DEFF Research Database (Denmark)
Mortensen, Kim; Churchman, L. S.; Spudich, J. A.
2010-01-01
We optimally localized isolated fluorescent beads and molecules imaged as diffraction-limited spots, determined the orientation of molecules, and present reliable formulas for the precision of various localization methods. Both theory and experimental data showed that unweighted least-squares fitting...
Zhou, Junle; Chen, Lingen; Ding, Zemin; Sun, Fengrui
2016-05-01
Applying finite-time thermodynamics (FTT) and electronic transport theory, the optimal performance of an irreversible single-resonance energy selective electron (ESE) refrigerator is analyzed. The effects of heat leakage between the two electron reservoirs on optimal performance are discussed. The influences of system operating parameters on cooling load, coefficient of performance (COP), figure of merit and ecological function are demonstrated using numerical examples. Comparative performance analyses among different objective functions show that the performance characteristics at maximum ecological function and maximum figure of merit are of great practical significance. Combining the two optimization objectives of maximum ecological function and maximum figure of merit, more specific optimal ranges of cooling load and COP are obtained. The results can provide some advice for the design of practical electronic machine systems.
International Nuclear Information System (INIS)
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) that allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models: here the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient variable discretization strategy and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
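The core of the approach, simulated annealing adapted to continuous variables in a hyper-rectangular domain, tested on analytical functions with known minima, can be sketched as below. The Rastrigin test function, the Gaussian move proposal, and the geometric cooling schedule are assumptions for illustration; the thesis's actual move generation, discretization strategy, and stopping criteria are more elaborate.

```python
import math
import random

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def anneal(f, bounds, t0=10.0, alpha=0.95, steps_per_t=200, t_min=1e-3, seed=1):
    """Simulated annealing over a hyper-rectangular continuous domain."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best, fbest = list(x), f(x)
    fx, t = fbest, t0
    while t > t_min:
        for _ in range(steps_per_t):
            i = rng.randrange(len(x))                  # perturb one coordinate
            lo, hi = bounds[i]
            step = rng.gauss(0, 0.1 * (hi - lo) * (t / t0) ** 0.5 + 0.01)
            cand = list(x)
            cand[i] = min(hi, max(lo, x[i] + step))    # stay inside the box
            fc = f(cand)
            # Metropolis rule: always accept improvements, sometimes accept worse
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= alpha                                     # geometric cooling
    return best, fbest

x_min, f_min = anneal(rastrigin, [(-5.12, 5.12)] * 3)
print(x_min, f_min)
```

Shrinking the proposal step with the temperature plays the role of the variable discretization strategy mentioned in the abstract: wide exploration early, fine local refinement late.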
First-principle optimal local pseudopotentials construction via optimized effective potential method
International Nuclear Information System (INIS)
Mi, Wenhui; Zhang, Shoutao; Wang, Yanchao; Ma, Yanming; Miao, Maosheng
2016-01-01
The local pseudopotential (LPP) is an important component of orbital-free density functional theory, a promising large-scale simulation method that can retain information on a material's electronic state. The LPP is usually extracted from solid-state density functional theory calculations, making it difficult to assess its transferability to cases involving very different chemical environments. Here, we reveal a fundamental relation between the first-principles norm-conserving pseudopotential (NCPP) and the LPP. On the basis of this relationship, we demonstrate that the LPP can be constructed optimally from the NCPP for a large number of elements using the optimized effective potential method. In particular, our method provides a unified scheme for constructing and assessing the LPP within the framework of first-principles pseudopotentials. Our practice reveals that the existence of a valid LPP with high transferability may strongly depend on the element.
GENERALIZED INVERSE INTERVAL METHOD OF GLOBAL CONSTRAINED OPTIMIZATION
Directory of Open Access Journals (Sweden)
A. V. Panteleyev
2014-01-01
Full Text Available The algorithms and software implementing the inverse interval method for global constrained optimization are considered. Solutions of model examples and proofs of the algorithms' convergence theorems are presented. A generalized scheme of the developed algorithms has been created, with two replaceable modules for compression and checking. This modular approach allows users to implement their own versions of the algorithm without losing the method's convergence, and helps to tune the method to the characteristics of the problem at hand.
Optimal interpolation method for intercomparison of atmospheric measurements.
Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno
2006-04-01
Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.
Maximum gradient method for optimization of some reactor operating parameters
International Nuclear Information System (INIS)
Miasnikov, A.
1976-03-01
The method, and the algorithm ensuing from it, are described for determining the optimum operating state of a reactor. The optimum operating state is taken to be the extremum of a selected functional of the radial power distribution. The extremum of the functional is determined numerically, using one possible variant of the maximum gradient method. The radial distribution of neutron absorption in the regulating rods and the fuel element burnup are the variable parameters used in the optimization. (author)
Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis
Directory of Open Access Journals (Sweden)
Sen Zhang
2015-01-01
Full Text Available One recently proposed heuristic evolutionary algorithm is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on Powell's local optimization method, which we call PGWO. The PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; hence, PGWO can be applied to clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions. Second, the PGWO algorithm is used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the benchmark and data clustering results demonstrate the superior performance of the PGWO algorithm.
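The baseline GWO that PGWO extends can be sketched compactly: the three fittest wolves (alpha, beta, delta) guide the rest of the pack, with an exploration factor that decreases linearly over the iterations. The sphere test function, population size, and iteration count below are assumptions for illustration; this sketch omits the paper's Powell refinement step.

```python
import numpy as np

def gwo(f, bounds, n_wolves=15, iters=100, seed=0):
    """Minimal grey wolf optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (n_wolves, len(lo)))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        idx = fit.argsort()
        alpha, beta, delta = X[idx[0]], X[idx[1]], X[idx[2]]
        a = 2.0 * (1 - t / iters)          # exploration factor: 2 -> 0
        new_X = []
        for x in X:
            estimates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                # move toward (or around) each leader's estimated prey position
                estimates.append(leader - A * np.abs(C * leader - x))
            new_X.append(np.clip(np.mean(estimates, axis=0), lo, hi))
        X = np.array(new_X)
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x * x))
best_x, best_f = gwo(sphere, [(-10, 10)] * 3)
print(best_x, best_f)
```

PGWO's idea is to hand `best_x` to a derivative-free local search such as Powell's method (e.g. SciPy's `minimize(..., method="Powell")`) to polish the global search result.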
Review of methods to probe single cell metabolism and bioenergetics.
Vasdekis, Andreas E; Stephanopoulos, Gregory
2015-01-01
Single-cell investigations have enabled unexpected discoveries, such as the existence of biological noise and phenotypic switching in infection, metabolism and treatment. Herein, we review methods that enable such single-cell investigations, specifically of metabolism and bioenergetics. First, we discuss how to isolate and immobilize individual cells from a cell suspension, including both permanent and reversible approaches. We also highlight specific advances in microbiology and their implications for metabolic engineering. Methods for probing single-cell physiology and metabolism are subsequently reviewed, with the primary focus on dynamic and high-content profiling strategies based on label-free and fluorescence microspectroscopy and microscopy. Non-dynamic approaches, such as mass spectrometry and nuclear magnetic resonance, are also briefly discussed. Published by Elsevier Inc.
Single particle electrochemical sensors and methods of utilization
Schoeniger, Joseph [Oakland, CA; Flounders, Albert W [Berkeley, CA; Hughes, Robert C [Albuquerque, NM; Ricco, Antonio J [Los Gatos, CA; Wally, Karl [Lafayette, CA; Kravitz, Stanley H [Placitas, NM; Janek, Richard P [Oakland, CA
2006-04-04
The present invention discloses an electrochemical device for detecting single particles, and methods for using such a device to achieve high sensitivity for detecting particles such as bacteria, viruses, aggregates, immuno-complexes, molecules, or ionic species. The device provides for affinity-based electrochemical detection of particles with single-particle sensitivity. The disclosed device and methods are based on microelectrodes with surface-attached, affinity ligands (e.g., antibodies, combinatorial peptides, glycolipids) that bind selectively to some target particle species. The electrodes electrolyze chemical species present in the particle-containing solution, and particle interaction with a sensor element modulates its electrolytic activity. The devices may be used individually, employed as sensors, used in arrays for a single specific type of particle or for a range of particle types, or configured into arrays of sensors having both these attributes.
Optimizing sonication parameters for dispersion of single-walled carbon nanotubes
Energy Technology Data Exchange (ETDEWEB)
Yu, Haibo [Fraunhofer Institute for Electronic Nano Systems (Fraunhofer ENAS), 09126 Chemnitz (Germany); Graduate University of the Chinese Academy of Sciences, Beijing (China); State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang (China); Hermann, Sascha, E-mail: sascha.hermann@zfm.tu-chemnitz.de [Center for Microtechnologies (ZfM), Chemnitz University of Technology, 09126 Chemnitz (Germany); Schulz, Stefan E.; Gessner, Thomas [Fraunhofer Institute for Electronic Nano Systems (Fraunhofer ENAS), 09126 Chemnitz (Germany); Center for Microtechnologies (ZfM), Chemnitz University of Technology, 09126 Chemnitz (Germany); Dong, Zaili [State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang (China); Li, Wen J., E-mail: wenjungli@gmail.com [State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang (China); Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong SAR (China)
2012-10-26
Graphical abstract: We study the dispersing behavior of SWCNTs based on a surfactant and the optimization of sonication parameters, including the sonication power and running time. Highlights: • We study the optimization of sonication for the surfactant-based dispersion of SWCNTs. • The absorption spectrum of the SWCNT solution strongly depends on the sonication conditions. • The sonication process has an important influence on the average length and diameter of SWCNTs in solution. • Centrifugation mainly contributes to the decrease of the nonresonant absorption background. • Under the same sonication parameters, the large-diameter tip disperses SWCNTs better than the small-diameter tip. -- Abstract: Non-covalent functionalization based on surfactants has become one of the most common methods for dispersing single-walled carbon nanotubes (SWCNTs). Previously, efforts have mainly focused on experimenting with different surfactant systems, varying their concentrations and solvents. However, sonication plays a very important role in the surfactant-based dispersion process for SWCNTs: the sonication treatment enables the surfactant molecules to adsorb onto the surface of SWCNTs by overcoming the interactions induced by hydrophobic, electrostatic and van der Waals forces. This work describes a systematic study of the influence of the sonication power and time on the dispersion of SWCNTs. UV-vis-NIR absorption spectroscopy is used to analyze and evaluate the dispersion of SWCNTs in an aqueous solution of 1 w/v% sodium deoxycholate (DOC), showing that the resonant and nonresonant background absorption strongly depend on the sonication conditions. Furthermore, the diameter and length of SWCNTs under different sonication parameters are investigated using atomic force microscopy (AFM).
Developing Automatic Multi-Objective Optimization Methods for Complex Actuators
Directory of Open Access Journals (Sweden)
CHIS, R.
2017-11-01
Full Text Available This paper presents the analysis and multi-objective optimization of a magnetic actuator. By varying just 8 parameters of the magnetic actuator's model, the design space grows to more than 6 million configurations; moreover, the 8 objectives that must be optimized are conflicting and generate a huge objective space, too. To cope with this complexity, we use advanced heuristic methods for automatic design space exploration. The FADSE tool is an automatic design space exploration framework that includes different state-of-the-art multi-objective meta-heuristics for solving NP-hard problems, which we used for the analysis and optimization of the COMSOL and MATLAB model of the magnetic actuator. We show that using a state-of-the-art genetic multi-objective algorithm, response surface modelling methods and some machine learning techniques, the timing complexity of the design space exploration can be reduced, while still taking objective constraints into consideration so that various Pareto-optimal configurations can be found. Using our approach, we were able to decrease the simulation time by at least a factor of 10, compared to a run performing all the simulations, while keeping prediction errors around 1%.
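The notion of Pareto-optimal configurations used above has a compact definition: a configuration is kept only if no other configuration is at least as good in every objective and strictly better in one. A minimal non-dominated filter (with made-up two-objective points, minimizing both) illustrates it:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points (minimization in all objectives)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (objective1, objective2) values for five configurations
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # → [(1, 5), (2, 3), (4, 1)]
```

This brute-force filter is quadratic in the number of points; meta-heuristic frameworks like the one described keep an archive of such non-dominated solutions while exploring only a fraction of the 6-million-point design space.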
Shivade, Anand S.; Shinde, Vasudev D.
2014-01-01
In this paper, wire electrical discharge machining of D3 tool steel is studied. The influences of pulse-on time, pulse-off time, peak current and wire speed on MRR, dimensional deviation, gap current and machining time are investigated during intricate machining of D3 tool steel. The Taguchi method is used for single-characteristic optimization, and to optimize all four process parameters simultaneously, grey relational analysis (GRA) is employed along with the Taguchi method. Through GRA, grey relation...
Design of large Francis turbine using optimal methods
International Nuclear Information System (INIS)
Flores, E; Bornard, L; Tomas, L; Couston, M; Liu, J
2012-01-01
Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has especially been involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 to 113.6 m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to answer an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation methods and the latest technologies in model testing, as well as maximum feedback from the operation of jumbo plants already in service. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, an automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage allows very high levels of performance to be reached, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.
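The abstract above describes optimization loops in which a genetic algorithm pilots a blade-design, meshing, and Navier-Stokes evaluation chain. A minimal sketch of such a loop, with a toy fitness function standing in for the expensive solver chain (all parameter names, population sizes, and the fitness itself are illustrative assumptions, not taken from the paper):

```python
import random

def evaluate_design(params):
    # Stand-in for the mesh + Navier-Stokes evaluation chain: a toy
    # "efficiency" with a known optimum at params = (0.5, 0.5, 0.5).
    return -sum((p - 0.5) ** 2 for p in params)

def genetic_optimize(n_params=3, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate_design, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                 # Gaussian mutation, clipped
                i = rng.randrange(n_params)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=evaluate_design)

best = genetic_optimize()
```

In the industrial setting described above, `evaluate_design` would invoke the automatic meshing software and the flow solver on an HPC cluster; the loop structure is otherwise the same.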
METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS
Directory of Open Access Journals (Sweden)
V. Panteleev Andrei
2017-01-01
Full Text Available The article considers the use of metaheuristic methods of constrained global optimization: "Big Bang - Big Crunch", "Fireworks Algorithm" and "Grenade Explosion Method" for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. The parameter values are derived by minimizing a criterion describing the total squared deviation of the state vector coordinates from the precise values observed at different instants of time. A parallelepiped-type restriction is imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving these problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the changes in the parameters of a chemical reaction, and in the second, a nonlinear mathematical model describes predator-prey dynamics, which characterize the changes in the populations of both species. For each of the examples, calculation results are given for all three optimization methods, together with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The derived parameter approximations differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical zero-, first- and second-order optimization methods and
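The criterion-minimization scheme described above can be sketched on a one-parameter model. The data, bounds, and the simplified "Big Bang - Big Crunch" loop below are illustrative assumptions, not the paper's actual algorithm or test cases:

```python
import math
import random

# Observations of x(t) = x0 * exp(-k t) with (hypothetical) true k = 0.7.
TRUE_K, X0 = 0.7, 2.0
TIMES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
OBS = [X0 * math.exp(-TRUE_K * t) for t in TIMES]

def criterion(k):
    # Total squared deviation of the model trajectory from the observations.
    return sum((X0 * math.exp(-k * t) - x) ** 2 for t, x in zip(TIMES, OBS))

def big_bang_big_crunch(lo=0.0, hi=5.0, n=40, iterations=50, seed=3):
    # Simplified Big Bang - Big Crunch: scatter candidates ("big bang"),
    # contract around the incumbent best ("big crunch"), shrink the spread.
    rng = random.Random(seed)
    center, spread = (lo + hi) / 2, (hi - lo) / 2
    best = center
    for _ in range(iterations):
        candidates = [min(hi, max(lo, rng.gauss(center, spread))) for _ in range(n)]
        candidates.append(best)              # keep the incumbent solution
        best = min(candidates, key=criterion)
        center, spread = best, spread / 1.2  # contract the search cloud
    return best

k_est = big_bang_big_crunch()
```

A real application would replace `criterion` with a numerical (possibly implicit) integration of the algebraic-differential model, as the abstract recommends.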
A new method of preparing single-walled carbon nanotubes
Indian Academy of Sciences (India)
Home; Journals; Journal of Chemical Sciences; Volume 115; Issue 5-6. A new method of preparing single-walled carbon nanotubes ... Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur PO, Bangalore 560 064, India; Solid State and Structural Chemistry Unit, Indian Institute of Science, Bangalore 560 012, ...
A new method of preparing single-walled carbon nanotubes
Indian Academy of Sciences (India)
Unknown
A new method of preparing single-walled carbon nanotubes. S R C Vivekchand(1) and A Govindaraj(1,2,*). (1) Chemistry and Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur PO, Bangalore 560 064, India. (2) Solid State and Structural Chemistry Unit, Indian Institute of Science ...
METHOD FOR MANUFACTURING A SINGLE CRYSTAL NANO-WIRE.
Van Den Berg, Albert; Bomer, Johan; Carlen Edwin, Thomas; Chen, Songyue; Kraaijenhagen Roderik, Adriaan; Pinedo Herbert, Michael
2011-01-01
A method for manufacturing a single crystal nano-structure is provided, comprising the steps of: providing a device layer with a (100) structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the (110) direction of the device layer; selectively removing
METHOD FOR MANUFACTURING A SINGLE CRYSTAL NANO-WIRE
Van Den Berg, Albert; Bomer, Johan; Carlen Edwin, Thomas; Chen, Songyue; Kraaijenhagen Roderik, Adriaan; Pinedo Herbert, Michael
2012-01-01
A method for manufacturing a single crystal nano-structure includes: providing a device layer with a (100) structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the (110) direction of the device layer; selectively removing parts of the stress layer to
Directory of Open Access Journals (Sweden)
Hong Shan
2015-12-01
Full Text Available Single particle analysis, which can be regarded as an average of signals from thousands or even millions of particle projections, is an efficient method to study the three-dimensional structures of biological macromolecules. An intrinsic assumption in single particle analysis is that all the analyzed particles must have identical composition and conformation. Thus specimen heterogeneity in either composition or conformation has raised great challenges for high-resolution analysis. For particles with multiple conformations, inaccurate alignments and orientation parameters will yield an averaged map with diminished resolution and smeared density. Beyond extensive classification approaches, here, based on the assumption that the macromolecular complex is made up of multiple rigid modules whose relative orientations and positions fluctuate slightly around equilibrium positions, we propose a new method called local optimization refinement to address this conformational heterogeneity and improve resolution. The key idea is to optimize the orientation and shift parameters of each rigid module and then reconstruct their three-dimensional structures individually. Using simulated data of 80S/70S ribosomes with relative fluctuations between the large (60S/50S) and the small (40S/30S) subunits, we tested this algorithm and found that the resolutions of both subunits are significantly improved. Our method provides a proof-of-principle solution for high-resolution single particle analysis of macromolecular complexes with dynamic conformations.
Shan, Hong; Wang, Zihao; Zhang, Fa; Xiong, Yong; Yin, Chang-Cheng; Sun, Fei
2016-01-01
Single particle analysis, which can be regarded as an average of signals from thousands or even millions of particle projections, is an efficient method to study the three-dimensional structures of biological macromolecules. An intrinsic assumption in single particle analysis is that all the analyzed particles must have identical composition and conformation. Thus specimen heterogeneity in either composition or conformation has raised great challenges for high-resolution analysis. For particles with multiple conformations, inaccurate alignments and orientation parameters will yield an averaged map with diminished resolution and smeared density. Beyond extensive classification approaches, here, based on the assumption that the macromolecular complex is made up of multiple rigid modules whose relative orientations and positions fluctuate slightly around equilibrium positions, we propose a new method called local optimization refinement to address this conformational heterogeneity and improve resolution. The key idea is to optimize the orientation and shift parameters of each rigid module and then reconstruct their three-dimensional structures individually. Using simulated data of 80S/70S ribosomes with relative fluctuations between the large (60S/50S) and the small (40S/30S) subunits, we tested this algorithm and found that the resolutions of both subunits are significantly improved. Our method provides a proof-of-principle solution for high-resolution single particle analysis of macromolecular complexes with dynamic conformations.
Utilization of niching methods of genetic algorithms in nuclear reactor problems optimization
International Nuclear Information System (INIS)
Sacco, Wagner Figueiredo; Schirru, Roberto
2000-01-01
Genetic Algorithms (GAs) are biologically motivated adaptive systems which have been used, with good results, in function optimization. However, traditional GAs rapidly push an artificial population toward convergence; that is, all individuals in the population soon become nearly identical. Niching methods allow genetic algorithms to maintain a population of diverse individuals. GAs that incorporate these methods are capable of locating multiple optimal solutions within a single population. The purpose of this study is to test existing niching techniques and two methods introduced herein, bearing in mind their eventual application to nuclear reactor problems, especially the nuclear reactor core reload problem, which has multiple solutions. Tests are performed using widely known test functions, and the results show that the new methods are quite promising, especially for real-world problems like nuclear reactor core reload. (author)
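Fitness sharing is the classic niching technique the abstract alludes to: an individual's fitness is divided by a niche count so that crowded regions of the search space are penalized and diverse optima survive. A minimal sketch (the population, radii, and raw fitness values are illustrative, not the paper's test functions):

```python
def shared_fitness(population, raw_fitness, sigma_share=0.1):
    # Fitness sharing: divide each individual's raw fitness by its niche
    # count, computed from a triangular sharing kernel of radius sigma_share.
    def sh(d):
        return 1.0 - d / sigma_share if d < sigma_share else 0.0
    out = []
    for i, xi in enumerate(population):
        niche_count = sum(sh(abs(xi - xj)) for xj in population)
        out.append(raw_fitness[i] / niche_count)
    return out

# Three individuals clustered near 0.0 and one isolated near 1.0, all with
# equal raw fitness: sharing favors the isolated niche.
pop = [0.00, 0.02, 0.04, 1.00]
raw = [1.0, 1.0, 1.0, 1.0]
shared = shared_fitness(pop, raw)
```

Inside a GA, selection then operates on `shared` rather than `raw`, so both niches persist in the same population, which is what makes multimodal problems such as core reload tractable.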
Optimized Method for Knee Displacement Measurement in Vehicle Sled Crash Test
Directory of Open Access Journals (Sweden)
Sun Hang
2017-01-01
Full Text Available This paper provides an optimized method for measuring a dummy's knee displacement in a vehicle sled crash test. The proposed method utilizes completely new measurement elements, namely the acceleration and angular velocity of the dummy's pelvis and the rotational angle of its femur. Compared with traditional measurement using only camera-based high-speed motion image analysis, the optimized method not only maintains measuring accuracy, but also avoids the disturbances caused by dummy movement, dashboard blocking and knee deformation during the crash. An experiment was performed to verify the accuracy of the proposed method, which eliminates the strong dependence on single-target tracing in the traditional method. Moreover, it is well suited to calculating the penetration depth into the dashboard.
Ai, Xueshan; Dong, Zuo; Mo, Mingzhu
2017-04-01
Optimal reservoir operation is in general a multi-objective problem. In real life, most reservoir operation optimization problems involve conflicting objectives, for which there is no single optimal solution that can simultaneously achieve an optimal result for all purposes; rather, a set of well-distributed non-inferior solutions, or Pareto frontier, exists. On the other hand, most reservoir operation rules gain greater social and economic benefits at the expense of the ecological environment, resulting in the destruction of riverine ecology and the reduction of aquatic biodiversity. To overcome these drawbacks, this study developed a multi-objective model for reservoir operation with the conflicting functions of hydroelectric energy generation, irrigation and ecological protection. To solve the model, with the objectives of maximizing energy production and maximizing the water demand satisfaction rates of irrigation and ecology, we propose a multi-objective optimization method of variable penalty coefficients (VPC), which integrates dynamic programming (DP) with discrete differential dynamic programming (DDDP), to generate well-distributed non-inferior solutions along the Pareto front by changing the penalty coefficients of the different objectives. This method was applied over the course of a year to an existing Chinese reservoir named Donggu, a multi-annual storage reservoir with multiple purposes. The case study results showed a good relationship between any two of the objectives and a good set of Pareto-optimal solutions, which provides a reference for reservoir decision makers.
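The variable-penalty-coefficient idea can be illustrated on a toy pair of conflicting objectives: each penalty setting yields one single-objective solve, and sweeping the penalty traces the Pareto front. The release variable, both objective functions, and the grid solver below are hypothetical stand-ins for the paper's DP/DDDP reservoir model:

```python
def solve_single_objective(penalty, grid=201):
    # Stand-in for one DP/DDDP solve of the reservoir model: minimize the
    # penalized scalar objective -f1(r) + penalty * f2(r) over a release grid.
    candidates = [i / (grid - 1) for i in range(grid)]
    return min(candidates, key=lambda r: -r + penalty * r * r)

def pareto_front(penalties):
    # Sweep the penalty coefficient to trace well-distributed non-inferior
    # (f1, f2) pairs along the Pareto front.
    front = []
    for lam in penalties:
        r = solve_single_objective(lam)
        front.append((r, r * r))  # (benefit f1 = r, ecological cost f2 = r^2)
    return front

# Hypothetical conflicting objectives over a release fraction r in [0, 1].
front = pareto_front([0.6, 1.0, 2.0, 4.0, 8.0])
```

Larger penalties favor the ecological objective, so the benefit coordinate of the traced solutions decreases monotonically along the sweep.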
An Efficient Optimization Method for Solving Unsupervised Data Classification Problems
Directory of Open Access Journals (Sweden)
Parvaneh Shabanzadeh
2015-01-01
Full Text Available Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity; it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research into novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, which is inspired by the natural biogeographic distribution of different species, was adapted for data clustering problems by modifying its main operators. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, comparing it with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.
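The main BBO operator the abstract refers to is migration: low-fitness habitats import solution features from high-fitness ones, with immigration and emigration rates linear in fitness rank. A minimal sketch of one migration step (the linear rate model and the tiny example population are illustrative assumptions, not the paper's modified operators):

```python
import random

def bbo_migrate(population, fitness, seed=0):
    # One BBO migration step: habitats with low fitness (high immigration
    # rate) import features from habitats with high fitness (high emigration
    # rate). Rates are linear in the fitness rank.
    rng = random.Random(seed)
    n = len(population)
    order = sorted(range(n), key=lambda i: fitness[i])   # worst ... best
    rank = {idx: r for r, idx in enumerate(order)}
    new_pop = []
    for i, habitat in enumerate(population):
        immigration = 1.0 - rank[i] / (n - 1)   # worst habitat imports most
        new_habitat = list(habitat)
        for d in range(len(habitat)):
            if rng.random() < immigration:
                # Roulette selection of the emigrating habitat by rank.
                weights = [rank[j] + 1 for j in range(n)]
                j = rng.choices(range(n), weights=weights, k=1)[0]
                new_habitat[d] = population[j][d]
        new_pop.append(new_habitat)
    return new_pop

# Habitats encode cluster centroids; the fittest habitat never immigrates.
pop = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
fit = [0.1, 0.5, 0.9]
new_pop = bbo_migrate(pop, fit)
```

For clustering, each habitat would encode a full set of centroids and fitness would be the (negated) within-cluster distance sum; migration then mixes centroid coordinates between candidate clusterings.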
An Optimal Method to Design Wireless Sensor Network Structures
Directory of Open Access Journals (Sweden)
Yang Ling
2018-01-01
Full Text Available In order to optimize the structure of a wireless sensor network, an improved sleep mechanism for wireless sensor networks is proposed. First, some nodes in areas with too much redundancy are put to sleep through density control, so that the active nodes are more evenly distributed. Then, the active nodes are subjected to a circular-coverage redundancy decision, with different decision methods used for network boundary nodes and non-boundary nodes. As a result, both boundary and non-boundary nodes are appropriately put to sleep, and network redundancy is reduced. The simulation results show that the improved sleep mechanism makes the active nodes in the network fewer and more evenly distributed, and the network lifetime is extended while the original coverage of the network is maintained. Therefore, the proposed method can achieve optimal coverage in wireless sensor networks, prolonging network lifetime while ensuring reliable monitoring performance.
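The density-control step described above can be sketched as a greedy thinning pass: a node goes to sleep when an already-active node lies within a minimum spacing, so redundant clusters collapse to a single representative. The spacing threshold and node coordinates below are illustrative assumptions:

```python
import math

def density_control_sleep(nodes, min_spacing=1.0):
    # Greedy density control: scan the nodes and put a node to sleep when an
    # already-active node lies closer than min_spacing, thinning redundant
    # coverage while keeping the active set spread out.
    active = []
    for node in nodes:
        if all(math.dist(node, a) >= min_spacing for a in active):
            active.append(node)
    return active

# A dense cluster plus one remote node: the cluster collapses to one
# representative and the remote node stays active.
nodes = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
active = density_control_sleep(nodes)
```

The paper's subsequent circular-coverage decision would further test each surviving node's sensing disk against its active neighbors; the pass above only handles the density stage.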
Optimal and adaptive methods of processing hydroacoustic signals (review)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms, which estimates the background proceeding from median filtering or the method of bilateral spatial contrast.
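The classical Capon (MVDR) algorithm named in this review admits a compact sketch: the spatial spectrum is P(theta) = 1 / (a^H R^-1 a), where R is the array covariance and a the steering vector. The two-sensor array, half-wavelength spacing, and noise level below are illustrative assumptions, not taken from the review:

```python
import cmath
import math

def steering_vector(theta, spacing=0.5):
    # Two omnidirectional sensors at half-wavelength spacing (d/lambda = 0.5).
    phase = 2 * math.pi * spacing * math.sin(theta)
    return [1.0 + 0j, cmath.exp(-1j * phase)]

def capon_power(R, steering):
    # Capon (MVDR) spatial spectrum for a 2-sensor array: P = 1 / (a^H R^-1 a),
    # with the 2x2 covariance inverted in closed form.
    (a, b), (c, d) = R
    det = a * d - b * c
    Rinv = [[d / det, -b / det], [-c / det, a / det]]
    quad = sum(steering[i].conjugate() * Rinv[i][j] * steering[j]
               for i in range(2) for j in range(2))
    return 1.0 / quad.real

# Covariance of one unit-power plane wave from broadside plus weak sensor noise.
src = steering_vector(0.0)
R = [[src[i] * src[j].conjugate() + (0.01 if i == j else 0.0) for j in range(2)]
     for i in range(2)]

p_on = capon_power(R, steering_vector(0.0))   # steered at the source
p_off = capon_power(R, steering_vector(0.6))  # steered away from it
```

The sharp contrast between the on-source and off-source power is exactly the resolution advantage (and, under multipath, the fragility) that the review analyzes.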
Experimental methods for the analysis of optimization algorithms
Bartz-Beielstein, Thomas; Paquete, Luis; Preuss, Mike
2010-01-01
In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on diffe
Children and youth with disabilities: innovative methods for single qualitative interviews.
Teachman, Gail; Gibson, Barbara E
2013-02-01
There is a paucity of explicit literature outlining methods for single-interview studies with children, and almost none has focused on engaging children with disabilities. Drawing from a pilot study, we address these gaps by describing innovative techniques, strategies, and methods for engaging children and youth with disabilities in a single qualitative interview. In the study, we explored the beliefs, assumptions, and experiences of children and youth with cerebral palsy and their parents regarding the importance of walking. We describe three key aspects of our child-interview methodological approach: collaboration with parents, a toolkit of customizable interview techniques, and strategies to consider the power differential inherent in child-researcher interactions. Examples from our research illustrate what worked well and what was less successful. Researchers can optimize single interviews with children with disabilities by collaborating with family members and by preparing a toolkit of customizable interview techniques.
Optimized assembly and covalent coupling of single-molecule DNA origami nanoarrays.
Gopinath, Ashwin; Rothemund, Paul W K
2014-12-23
Artificial DNA nanostructures, such as DNA origami, have great potential as templates for the bottom-up fabrication of both biological and nonbiological nanodevices at a resolution unachievable by conventional top-down approaches. However, because origami are synthesized in solution, origami-templated devices cannot easily be studied or integrated into larger on-chip architectures. Electrostatic self-assembly of origami onto lithographically defined binding sites on Si/SiO2 substrates has been achieved, but conditions for optimal assembly have not been characterized, and the method requires high Mg2+ concentrations at which most devices aggregate. We present a quantitative study of parameters affecting origami placement, reproducibly achieving single-origami binding at 94±4% of sites, with 90% of these origami having an orientation within ±10° of their target orientation. Further, we introduce two techniques for converting electrostatic DNA-surface bonds to covalent bonds, allowing origami arrays to be used under a wide variety of Mg2+-free solution conditions.
Sun, Bingyun; Kovatch, Jessica Rae; Badiong, Albert; Merbouh, Nabyl
2017-10-06
Single-cell proteomics represents a field of extremely sensitive proteomic analysis, owing to the minute amount of yet complex proteins in a single cell. Without the amplification potential of nucleic acids, single-cell mass spectrometry (MS) analysis demands special instrumentation running with optimized parameters to maximize the sensitivity and throughput of comprehensive proteomic discovery. To facilitate such analysis, we here investigated two factors critical to peptide sequencing and protein detection in shotgun proteomics, i.e. the precursor ion isolation window (IW) and the maximum precursor ion injection time (ITmax), on an ultrahigh-field quadrupole Orbitrap (Q-Exactive HF). Counterintuitive to the frequently used proteomic parameters for bulk samples (>100 ng), our experimental data and subsequent modeling suggested a universally optimal IW of 4.0 Th for sample quantities ranging from 100 ng to 1 ng, and a sample-quantity-dependent ITmax of more than 250 ms for 1-ng samples. Compared with the benchmark condition of IW = 2.0 Th and ITmax = 50 ms, our optimization generated up to a 300% increase in the number of detected protein groups for 1-ng samples. The additionally identified proteins allowed deeper penetration of the proteome, better revealing crucial cellular functions such as signaling and cell adhesion. We hope this effort can promote single-cell and trace proteomic analysis and enable a rational selection of MS parameters.
Design and optimization for the occupant restraint system of vehicle based on a single freedom model
Zhang, Junyuan; Ma, Yue; Chen, Chao; Zhang, Yan
2013-05-01
Throughout a vehicle crash event, the interactions between vehicle, occupant and restraint system (VOR) are complicated and highly non-linear. CAE and physical tests are the most widely used tools in vehicle passive safety development, but they can only be applied once a detailed 3D model or physical samples exist. Some design errors and imperfections are difficult to correct at that stage, and a large amount of time is then needed. A restraint system concept design approach based on a single-degree-of-freedom occupant-vehicle model (SDOF) is proposed in this paper. The interactions between the restraint system parameters and the occupant responses in a crash are studied from the point of view of mechanics and energy. Discrete input and an iterative algorithm are applied to the SDOF model to quickly obtain the occupant responses for arbitrary excitations (impact pulses) in MATLAB. By studying the relationships between the ridedown efficiency, the restraint stiffness, and the occupant response, a design principle for the restraint stiffness aimed at reducing the occupant injury level during conceptual design is presented. Higher ridedown efficiency means more occupant energy absorbed by the vehicle, but the results show that higher ridedown efficiency does not necessarily mean a lower occupant injury level. A proper restraint system design principle depends on two aspects: on one hand, the restraint system should achieve as high a ridedown efficiency as possible; at the same time, it should make maximum use of the survival space to reduce the occupant deceleration level. As an example, an optimization of a passenger vehicle restraint system is designed using the concept design method above, and the final results are validated with MADYMO, the most widely used software in restraint system design, and a sled test. Consequently, a guideline and method for occupant restraint system concept design is established in this paper.
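An SDOF occupant model of the kind described above can be integrated step by step for an arbitrary crash pulse. The pulse shape, stiffness values, and the semi-implicit Euler scheme below are illustrative assumptions, not the paper's MATLAB implementation:

```python
import math

def sdof_occupant_response(k_over_m, pulse, dt=1e-3):
    # Single-degree-of-freedom occupant model: a unit-mass occupant coupled
    # to the vehicle through a linear restraint of stiffness k (per unit
    # mass). `pulse` samples the vehicle deceleration at step dt.
    # Semi-implicit Euler on the relative displacement x (occupant - vehicle):
    #   x'' = -k_over_m * x + a_vehicle(t)
    x, v = 0.0, 0.0
    peak_disp, peak_acc = 0.0, 0.0
    for a_vehicle in pulse:
        acc_rel = -k_over_m * x + a_vehicle
        v += acc_rel * dt
        x += v * dt
        occupant_acc = k_over_m * x          # restraint force per unit mass
        peak_disp = max(peak_disp, abs(x))
        peak_acc = max(peak_acc, abs(occupant_acc))
    return peak_disp, peak_acc

# Hypothetical 100 ms half-sine crash pulse (peak 300 m/s^2), padded with
# 300 ms of free response so the post-impact excursion is captured.
pulse = [300.0 * math.sin(math.pi * i / 100) for i in range(101)] + [0.0] * 300
soft = sdof_occupant_response(400.0, pulse)    # softer restraint
stiff = sdof_occupant_response(4000.0, pulse)  # stiffer restraint
```

The trade-off the abstract describes shows up directly: the stiffer restraint uses far less travel (survival space) but couples the occupant more strongly to the vehicle deceleration.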
A seismic fault recognition method based on ant colony optimization
Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong
2018-05-01
Fault recognition is an important step in seismic interpretation, and although there are many methods for this task, none can recognize faults accurately enough. To address this problem, we propose a new fault recognition method based on ant colony optimization which can locate faults precisely and extract them from the seismic section. Firstly, seismic horizons are extracted by the connected component labeling algorithm; secondly, the fault locations are decided according to the horizontal endpoints of each horizon; thirdly, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each rectangular block are treated as the nest and food, respectively, for the ant colony optimization algorithm. In addition, the seismic section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. The optimal route from nest to food calculated by the ant colony in each block is then judged to be a fault. Finally, extensive comparative tests were performed on real seismic data, and the availability and advancement of the proposed method were validated by the experimental results.
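The nest-to-food search over an amplitude "terrain" can be sketched with a simplified ant colony loop on a small grid: ants walk top to bottom, preferring low-amplitude cells, and pheromone reinforces cheap paths. The grid, move set, and all rate constants below are illustrative assumptions, not the paper's parameterization:

```python
import random

def aco_fault_path(cost, n_ants=20, iters=30, evap=0.5, seed=7):
    # Simplified ant colony search for a low-cost top-to-bottom path through
    # a grid of seismic-amplitude "heights". From each cell an ant steps to
    # one of the (up to) 3 cells below it.
    rng = random.Random(seed)
    rows, cols = len(cost), len(cost[0])
    tau = [[1.0] * cols for _ in range(rows)]        # pheromone field
    best_path, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            col = rng.randrange(cols)
            path, total = [col], cost[0][col]
            for r in range(1, rows):
                moves = [c for c in (col - 1, col, col + 1) if 0 <= c < cols]
                weights = [tau[r][c] / (1.0 + cost[r][c]) for c in moves]
                col = rng.choices(moves, weights=weights, k=1)[0]
                path.append(col)
                total += cost[r][col]
            if total < best_cost:
                best_path, best_cost = path, total
            for r, c in enumerate(path):             # deposit pheromone
                tau[r][c] += 1.0 / (1.0 + total)
        for r in range(rows):                        # evaporation
            for c in range(cols):
                tau[r][c] *= 1.0 - evap
    return best_path, best_cost

# Toy amplitude section: a low-amplitude "fault" channel in column 2.
section = [[9, 9, 0, 9], [9, 9, 1, 9], [9, 9, 0, 9], [9, 9, 1, 9]]
path, total = aco_fault_path(section)
```

The recovered path follows the low-amplitude channel, which is the behavior the paper exploits to trace faults between a block's nest and food endpoints.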
High Order Adjoint Derivatives using ESDIRK Methods for Oil Reservoir Production Optimization
DEFF Research Database (Denmark)
Capolei, Andrea; Stenby, Erling Halfdan; Jørgensen, John Bagterp
2012-01-01
In production optimization, computation of the gradients is the computationally expensive step. We improve the computational efficiency of such algorithms by improving the gradient computation using high-order ESDIRK (Explicit Singly Diagonally Implicit Runge-Kutta) temporal integration methods...... and continuous adjoints. The high order integration scheme allows larger time steps and therefore faster solution times. We compare gradient computation by the continuous adjoint method to the discrete adjoint method and the finite-difference method. The methods are implemented for a two phase flow reservoir...... simulator. Computational experiments demonstrate that the accuracy of the sensitivities obtained by the adjoint methods is comparable to the accuracy obtained by the finite difference method. The continuous adjoint method is able to use a different time grid than the forward integration. Therefore, it can...
Single Image Super Resolution using a Joint GMM Method.
Sandeep, P; Jacob, Tony
2016-07-07
Single Image Super Resolution (SR) algorithms based on joint dictionaries and sparse representations of image patches have received significant attention in literature and deliver state of the art results. Recently, Gaussian Mixture Models (GMMs) have emerged as favored prior for natural image patches in various image restoration problems. In this work, we approach the single image SR problem by using a joint GMM learnt from concatenated vectors of high and low resolution patches sampled from a large database of pairs of high resolution and the corresponding low resolution images. Covariance matrices of the learnt Gaussian models capture the inherent correlations between high and low resolution patches which are utilized for inferring high resolution patches from given low resolution patches. The proposed joint GMM method can be interpreted as the GMM analogue of joint dictionary based algorithms for single image SR. We study the performance of the proposed joint GMM method by comparing with various competing algorithms for single image SR. Our experiments on various natural images demonstrate the competitive performance obtained by the proposed method at low computational cost.
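The inference step of a joint-GMM super-resolution scheme of the kind described above is conditional Gaussian regression: given the joint mean and covariance over concatenated (high, low) patch vectors, the high-resolution estimate is the conditional mean. A scalar sketch per mixture component (the statistics below are hypothetical, and real patches would be vectors with matrix-valued covariances):

```python
def infer_high_res(x_low, mean_h, mean_l, cov_hh, cov_hl, cov_ll):
    # Conditional Gaussian inference: with (h, l) jointly Gaussian,
    #   E[h | l]   = mean_h + (cov_hl / cov_ll) * (x_low - mean_l)
    #   Var[h | l] = cov_hh - cov_hl**2 / cov_ll
    gain = cov_hl / cov_ll
    return mean_h + gain * (x_low - mean_l), cov_hh - gain * cov_hl

# Hypothetical learnt joint statistics: the high-res feature is correlated
# 0.9 with the observed low-res feature (each patch reduced to one scalar).
mean_h, mean_l = 10.0, 5.0
cov_hh, cov_ll = 4.0, 1.0
cov_hl = 0.9 * (cov_hh * cov_ll) ** 0.5  # = 1.8
h_est, h_var = infer_high_res(6.5, mean_h, mean_l, cov_hh, cov_hl, cov_ll)
```

It is the cross-covariance block `cov_hl`, learnt from concatenated training pairs, that carries the high/low correlation the abstract highlights; in the full method this computation runs per patch under the most responsible mixture component.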
A method of object recognition for single pixel imaging
Li, Boxuan; Zhang, Wenwen
2018-01-01
Computational ghost imaging (CGI), utilizing a single-pixel detector, has been extensively used in many fields. However, in order to achieve a high-quality reconstructed image, a large number of iterations is needed, which limits the flexibility of using CGI in practical situations, especially in the field of object recognition. In this paper, we propose a method utilizing feature matching to identify number objects. In the given system, a recognition accuracy of approximately 90% can be achieved, which provides a new idea for the application of single-pixel imaging in the field of object recognition.
New design method for valves internals, to optimize process
Energy Technology Data Exchange (ETDEWEB)
Jorge, Leonardo [PDVSA (Venezuela)
2011-07-01
In the heavy oil industry, various methods can be used to reduce the viscosity of oil, one of them being the injection of diluent. This method is commonly used in the Orinoco oil belt, but it requires good control of the volume of diluent injected, as well as of the gas flow, to optimize production; thus flow control valves need to be accurate. A new valve with a new method was designed with the characteristic of being very reliable, and it was then bench tested and compared with the other commercially available valves. Results showed better repeatability, accuracy and reliability, with lower maintenance, for the new method. The use of this valve provides significant savings while distributing the exact amount of fluids; to date, a failure rate of less than 2% has been recorded in the field. The new method demonstrated impressive performance, and PDVSA has decided to deploy it widely.
Real stabilization method for nuclear single-particle resonances
International Nuclear Information System (INIS)
Zhang Li; Zhou Shangui; Meng Jie; Zhao Enguang
2008-01-01
We develop the real stabilization method within the framework of the relativistic mean-field (RMF) model. With the self-consistent nuclear potentials from the RMF model, the real stabilization method is used to study single-particle resonant states in spherical nuclei. As examples, the energies, widths, and wave functions of low-lying neutron resonant states in 120Sn are obtained. These results are compared with those from the scattering phase-shift method and the analytic continuation in the coupling constant approach, and satisfactory agreement is found.
Comparison of optimization methods for electronic-structure calculations
International Nuclear Information System (INIS)
Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.
1989-01-01
The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed
Optimization in engineering sciences approximate and metaheuristic methods
Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader
2014-01-01
The purpose of this book is to present the main metaheuristics and approximate and stochastic methods for optimization of complex systems in Engineering Sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), which is funded by the EU's FP7 Research Potential program and has been developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references) this book allows the reader to explore various methods o
A discrete optimization method for nuclear fuel management
International Nuclear Information System (INIS)
Argaud, J.P.
1993-04-01
Nuclear loading pattern elaboration can be seen as a combinatorial optimization problem of tremendous size with non-linear cost functions, for which searches are always numerically expensive. After a brief introduction to the main aspects of nuclear fuel management, this paper presents a new idea for treating the combinatorial problem by using information contained in the gradient of a cost function. The method is to choose, by direct observation of the gradient, the most interesting changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method, and finally, connections with simulated annealing and genetic algorithms are described as an attempt to improve the search process
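The gradient-guided selection of discrete changes can be sketched on a toy binary problem: rank candidate moves by the first-order change the continuous gradient predicts, then accept only moves that actually lower the cost. The binary encoding, toy cost, and move set below are illustrative assumptions, not the paper's fuel-management model:

```python
def gradient_guided_search(cost, grad, x, max_moves=50):
    # Discrete descent guided by the continuous gradient: at each step, try
    # flipping the binary variable whose gradient predicts the largest
    # first-order decrease, accepting a move only if the true cost drops.
    best = cost(x)
    for _ in range(max_moves):
        g = grad(x)
        # Predicted first-order change of flipping bit i: (1 - 2*x[i]) * g[i]
        moves = sorted(range(len(x)), key=lambda i: (1 - 2 * x[i]) * g[i])
        improved = False
        for i in moves:
            y = list(x)
            y[i] = 1 - y[i]
            if cost(y) < best:
                x, best, improved = y, cost(y), True
                break
        if not improved:
            return x, best
    return x, best

# Toy loading problem: match a (hypothetical) binary target pattern; the
# cost is the squared distance to the target, whose gradient is available.
TARGET = [1, 0, 1, 1, 0]
cost = lambda x: sum((a - b) ** 2 for a, b in zip(x, TARGET))
grad = lambda x: [2 * (a - b) for a, b in zip(x, TARGET)]
x, c = gradient_guided_search(cost, grad, [0, 0, 0, 0, 0])
```

In a reload application, the expensive full-core evaluation is only run on the few candidate changes the gradient flags as promising, which is the cost saving over blind combinatorial search.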
Kernel method for clustering based on optimal target vector
International Nuclear Information System (INIS)
Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-01-01
We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and antiferromagnetic and (ii) dependent on the whole data set, not only on pairs of samples. Couplings are determined by exploiting the notion of an optimal target vector, introduced here as a link between kernel supervised and unsupervised learning. The effectiveness of the method is shown on the well-known iris data set and on benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering
A new method for decision making in multi-objective optimization problems
Directory of Open Access Journals (Sweden)
Oscar Brito Augusto
2012-08-01
Full Text Available Many engineering sectors are challenged by multi-objective optimization problems. Even if the idea behind these problems is simple and well established, the implementation of any procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread. Usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over the Pareto set, which does not need to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scalar, and is based on a single run of non-linear single-objective optimizers.
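One classical baseline such work contrasts itself with, weighted-sum scalarization followed by a decision criterion, can be sketched as follows. The two quadratic objectives, the weight grid, and the min-max decision rule are all invented for illustration; this is not the paper's control-function method.

```python
# Minimal sketch of the classical approach: combine two objectives with
# a weight, solve a sequence of single-objective problems, and let a
# scalar decision criterion pick one compromise solution from the
# implicitly traced Pareto set.
def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

def golden_min(g, lo, hi, tol=1e-9):
    # golden-section search for the minimum of a unimodal function
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if g(c) < g(d):
            b = d
        else:
            a = c
    return (a + b) / 2

candidates = []
for k in range(11):
    w = k / 10.0
    x = golden_min(lambda x: w * f1(x) + (1 - w) * f2(x), -2.0, 2.0)
    candidates.append(x)

# decision criterion: smallest worst-case objective (a min-max choice)
best = min(candidates, key=lambda x: max(f1(x), f2(x)))
print(round(best, 3))
```

Each weight yields one Pareto point; the decision criterion then selects among them, which is exactly the two-stage structure the paper's single-run method avoids.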
Suleimanov, Yury V; Green, William H
2015-09-08
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps by using double- and single-ended transition-state optimization algorithms in cooperation: the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems important in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually identified in previous studies, but also new, previously "unknown" reaction pathways that involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.
A new placement optimization method for viscoelastic dampers: Energy dissipation method
Qu, Ji-Ting
2012-09-01
A new mathematical model for optimizing the location of viscoelastic dampers is established through energy analysis based on the force analogy method. Three working conditions (three lower limits of the new location index) and four ground motions are considered in this study, using MATLAB and SAP2000 for programming and verification. This paper deals with the optimal placement of viscoelastic dampers, and step-by-step time history analyses are carried out. Numerical analysis is presented to verify the effectiveness and feasibility of the new mathematical model for structural control. In addition, the optimal placement method based on the force analogy method not only determines the dampers' locations all at once, accurate to each span, but also avoids iterative calculation. Finally, a few helpful conclusions on the optimal placement of viscoelastic dampers are drawn.
Optimization strategies of in-tube extraction (ITEX) methods.
Laaks, Jens; Jochmann, Maik A; Schilling, Beat; Schmidt, Torsten C
2015-09-01
Microextraction techniques, especially dynamic techniques like in-tube extraction (ITEX), can require an extensive method optimization procedure. This work summarizes the experiences from several methods and gives recommendations for setting proper extraction conditions to minimize experimental effort. To this end, the governing parameters of the extraction and injection stages are discussed. This includes the relative extraction efficiencies of 11 kinds of sorbent tubes, either commercially available or custom made, for 53 analytes from different classes of compounds. They cover aromatics, heterocyclic aromatics, halogenated hydrocarbons, fuel oxygenates, alcohols, esters, and aldehydes. The number of extraction strokes and the corresponding extraction flow, also as a function of the expected analyte concentrations, are discussed, as well as the interactions between sample and extraction-phase temperature. The injection parameters cover two different injection methods. The first is intended for the analysis of highly volatile analytes and the second either for the analysis of less volatile analytes or for cases where the analytes can be re-focused by a cold trap. The desorption volume, the desorption temperature, and the desorption flow are compared, together with the suitability of both methods for analytes of varying volatilities. The results are summarized in a flow chart, which can be used to select favorable starting conditions for further method optimization.
Single-objective optimization of thermo-electric coolers using genetic algorithm
Khanh, Doan V. K.; Vasant, P.; Elamvazuthi, Irraivan; Dieu, Vo N.
2014-10-01
Thermo-electric coolers (TECs) are nowadays applied in a wide range of thermal energy systems. This is due to their superior features: no refrigerant or moving parts are needed, they generate no electrical or acoustical noise, and they are environmentally friendly. Over the past decades, much research has sought to improve the efficiency of TECs by enhancing the material parameters and design parameters. The material parameters are restricted by currently available materials and module-fabrication technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length, and the number of legs. Two elements that play an important role when considering the suitability of TECs in applications are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, some previous research is first reviewed to show the diversity of optimization approaches used in TEC design to enhance performance and efficiency. Then, a single-objective optimization problem (SOP) is solved using a genetic algorithm (GA) to optimize geometric properties so that TECs operate at near-optimal conditions. In future work, multi-objective optimization problems (MOP) using a hybrid GA with another optimization technique will be considered to give better results and will be compared with previous approaches such as the Non-Dominated Sorting Genetic Algorithm (NSGA-II) to assess their advantages and disadvantages.
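A minimal single-objective GA of the kind described can be sketched in pure Python. The fitness function below is a made-up smooth surrogate with a known optimum, not a real TEC model relating leg geometry to ROR or COP.

```python
import random

# Toy GA sketch.  The fitness below is a made-up smooth surrogate with a
# known optimum at (leg_length, leg_area) = (1.0, 2.0); it is NOT a real
# thermo-electric cooler model.
random.seed(0)

def fitness(ind):
    length, area = ind
    return -((length - 1.0) ** 2 + (area - 2.0) ** 2)

def make_individual():
    return [random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)]

def crossover(a, b):
    # blend crossover: average of the parents
    return [(x + y) / 2.0 for x, y in zip(a, b)]

def mutate(ind, sigma=0.1):
    return [x + random.gauss(0.0, sigma) for x in ind]

pop = [make_individual() for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # elitist truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = max(pop, key=fitness)
print([round(v, 2) for v in best])
```

In a real TEC study, `fitness` would evaluate ROR or COP from the device equations, and the decision vector would include the number of legs as an integer variable.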
Principles of crystallization, and methods of single crystal growth
International Nuclear Information System (INIS)
Chacra, T.
2010-01-01
Most single crystals (monocrystals) have distinguished optical, electrical, or magnetic properties, which make single crystals key elements in most modern technical devices: they may be used as lenses, prisms, or gratings in optical devices, as filters in X-ray and spectrographic devices, or as conductors and semiconductors in the electronics and computer industries. Furthermore, single crystals are used in transducer devices, and they are indispensable elements in laser and maser emission technology. Crystal growth technology (CGT) has started and developed in international universities and scientific institutions, aiming at single crystals that may have significant properties and industrial applications and can attract the attention of international crystal growth centers to adopt the industrial production and marketing of such crystals. Unfortunately, Arab universities generally, and Syrian universities specifically, give not even minimal attention to this field of science. The purpose of this work is to draw the attention of crystallographers, physicists, and chemists in Arab universities and research centers to the importance of crystal growth and, as a first stage, to work on establishing simple, uncomplicated laboratories for the growth of single crystals. Such laboratories can be supplied with equipment that is partly available or can be manufactured in the local market. Many references (articles, papers, diagrams, etc.) have been studied to extract the most important theoretical principles of phase transitions, especially crystallization. The conclusions of this study are summarized in three principles: thermodynamic, morphologic, and kinetic. The study is completed by a brief description of the main single-crystal growth methods, with sketches of the equipment used in each method, which can be considered preliminary designs for the equipment of a new crystal growth laboratory. (author)
Optimization and modification of the method for detection of rhamnolipids
Directory of Open Access Journals (Sweden)
Takeshi Tabuchi
2015-10-01
Full Text Available The use of biosurfactants in bioremediation facilitates and accelerates microbial degradation of hydrocarbons. The CTAB/MB agar method created by Siegmund & Wagner for screening rhamnolipid (RL)-producing strains has been widely used but has not been improved significantly for more than 20 years. To optimize the technique as a quantitative method, CTAB/MB agar plates were made and different variables were tested, such as incubation time, cooling, CTAB concentration, methylene blue presence, well diameter, and inoculum volume. Furthermore, a new method for RL detection within halos was developed: precipitation of RL with HCl allows the formation of a new halo pattern that is easier to observe and measure. This research reaffirms that the method is not fully suitable for fine quantitative analysis because of the difficulty of accurately correlating RL concentration with the area of the halos. RL diffusion does not seem to follow a simple behavior, and many factors affect the RL migration rate.
International Nuclear Information System (INIS)
Gelfand, D.W.; Chen, Y.M.; Ott, D.J.; Munitz, H.A.
1986-01-01
Single-contrast studies account for 75% of barium enema examinations and are often performed in the elderly. By optimizing all factors, the following results were obtained: for polyps of less than 1 cm, 40 of 57 were detected (sensitivity, 70.2%); for polyps of 1 cm or larger, 33 of 35 were detected (sensitivity, 94%). Overall, 73 of 92 polyps were detected (sensitivity, 79.3%). These sensitivities result from meticulous preparation and the use of compression filming, low-density barium, moderate kilovoltages, high-resolution screens, remote control apparatus, and high-bandpass TV fluoroscopy. The authors conclude that an optimal single-contrast barium enema examination detects colonic polyps with a sensitivity approaching that of the double-contrast study and may be employed in elderly patients who cannot undergo the double-contrast study
Optimized driving of superconducting artificial atoms for improved single-qubit gates
Chow, J. M.; Dicarlo, L.; Gambetta, J. M.; Motzoi, F.; Frunzio, L.; Girvin, S. M.; Schoelkopf, R. J.
2010-10-01
We employ simultaneous shaping of in-phase and out-of-phase resonant microwave drives to reduce single-qubit gate errors arising from the weak anharmonicity of transmon superconducting artificial atoms. To reduce the effect of higher levels present in the transmon spectrum, we apply Gaussian and derivative-of-Gaussian envelopes to the in-phase and out-of-phase quadratures, respectively, and optimize over their relative amplitude. Using randomized benchmarking, we obtain a minimum average error per gate of 0.007±0.005 using 4-ns-wide pulses, which is limited by decoherence. This simple optimization technique works for multiple transmons coupled to a single microwave resonator in a quantum bus architecture.
Optimization of the measuring method selection for natural radionuclides
International Nuclear Information System (INIS)
Heinrich, T.; Funke, L.; Koehler, M.; Schkade, U.K.; Ullrich, F.; Loebner, W.; Hoepner, J.; Weiss, D.
2007-01-01
The publication aims at an optimized selection of measuring methods for the evaluation of natural radionuclides in environmental media, taking into account the required financial and temporal investment besides the informative value of the results. The evaluation is intended as a recommendation for contractors concerning required measurements or for the installation or upgrading of laboratory equipment. The evaluation identifies measuring requirements and boundary conditions according to legal regulations and discusses a strategy to reach optimized results. Radiological environmental monitoring is focused on the estimation of the radiation exposure of personnel and the public. Requirements for measuring techniques (detection limits, limit values and guideline values) are summarized in tables. The evaluation covers radionuclide measurements in the following media: air (airborne particulates); water; soils, sediments and residues; residues from natural gas, crude oil and thermal water extraction; uranium-containing paints in the porcelain industry; thorium compounds for weld electrodes; filter dusts from the steel industry; biomedia
Newton-type methods for optimization and variational problems
Izmailov, Alexey F
2014-01-01
This book presents a comprehensive, state-of-the-art theoretical analysis of the fundamental Newtonian and Newton-related approaches to solving optimization and variational problems. A central focus is the relationship between the basic Newton scheme for a given problem and algorithms that also enjoy fast local convergence. The authors develop general perturbed Newtonian frameworks that preserve fast convergence and consider specific algorithms as particular cases within those frameworks, i.e., as perturbations of the associated basic Newton iterations. This approach yields a set of tools for the unified treatment of various algorithms, including some not of the Newton type per se. Among the new subjects addressed is the class of degenerate problems, in particular the phenomenon of attraction of Newton iterates to critical Lagrange multipliers and its consequences, as well as stabilized Newton methods for variational problems and stabilized sequential quadratic programming for optimization. This volume will b...
Defects detecting method of lamp cap of single soldering lug
Cai, Jihe; Lv, Jidong
2017-07-01
In order to resolve the problems of low efficiency and inconsistent results in fault detection of lamp caps with a single soldering lug, an image-based defect detection method is presented in this paper. The selected image is first preprocessed; in this step, the region possibly containing the soldering lug is cropped to narrow the scope of the subsequent segmentation, considering that the smooth metal surface of the lamp cap and the black insulating glass may reflect light. Then, the soldering lug is extracted by a series of processing steps including clustering-based segmentation. On this basis, defects are detected by region marking, area comparison, circularity, and coordinate deviation. The experimental results show that the designed method is simple and practical and detects the main quality defects of lamp caps with a single soldering lug correctly and efficiently.
Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.
2014-09-09
Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.
Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.
2012-12-04
Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.
International Nuclear Information System (INIS)
Zhou, Junle; Chen, Lingen; Ding, Zemin; Sun, Fengrui
2016-01-01
The ecological performance of a single-resonance ESE heat engine with heat leakage is analyzed by applying finite-time thermodynamics. By introducing the Nielsen function and numerical calculations, expressions for the power output, efficiency, entropy generation rate and ecological objective function are derived; the relationships between ecological objective function and power output, between ecological objective function and efficiency, and between power output and efficiency are demonstrated; the influences of the system parameters of heat leakage, boundary energy and resonance width on the optimal performance are investigated in detail; a specific range of boundary energy is given as a compromise to make the ESE heat engine system work in optimal operation regions. Comparing performance characteristics under different optimization objective functions clarifies the significance of selecting the ecological objective function as the design objective: when changing the design objective from maximum power output to maximum ecological objective function, the improvement in efficiency is 4.56%, while the drop in power output is only 2.68%; when changing the design objective from maximum efficiency to maximum ecological objective function, the improvement in power output is 229.13%, and the efficiency drop is only 13.53%. - Highlights: • An irreversible single resonance energy selective electron heat engine is studied. • Heat leakage between two reservoirs is considered. • Power output, efficiency and ecological objective function are derived. • Optimal performance comparison for three objective functions is carried out.
Optimal multi-photon phase sensing with a single interference fringe
Xiang, G. Y.; Hofmann, H. F.; Pryde, G. J.
2013-01-01
Quantum entanglement can help to increase the precision of optical phase measurements beyond the shot noise limit (SNL) to the ultimate Heisenberg limit. However, the N-photon parity measurements required to achieve this optimal sensitivity are extremely difficult to realize with current photon detection technologies, requiring high-fidelity resolution of N + 1 different photon distributions between the output ports. Recent experimental demonstrations of precision beyond the SNL have therefore used only one or two photon-number detection patterns instead of parity measurements. Here we investigate the achievable phase sensitivity of the simple and efficient single interference fringe detection technique. We show that the maximally entangled “NOON” state does not achieve optimal phase sensitivity when N > 4; rather, the Holland-Burnett state is optimal. We experimentally demonstrate this enhanced sensitivity using a single photon-counted fringe of the six-photon Holland-Burnett state. Specifically, our single-fringe six-photon measurement achieves a phase variance three times below the SNL. PMID:24067490
Sutrisno; Widowati; Tjahjana, R. H.
2017-10-01
In this paper, we formulate a hybrid mathematical model of the supplier selection problem integrated with the inventory control problem of a single-product inventory system with piecewise holding cost. The model is formulated in piecewise affine (PWA) form, which can be converted into mixed logical dynamical (MLD) form. Using this MLD model, we solve the supplier selection problem and control the inventory system so that the stock level tracks a desired reference trajectory as closely as possible with minimal total cost. We use model predictive control for hybrid systems to solve the problem. The numerical experiments show that the optimal supplier is selected at each time period and the evolution of the stock level tracks the desired level well.
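The receding-horizon logic of model predictive control can be illustrated with a deliberately small inventory example. This sketch enumerates order sequences by brute force instead of solving an MLD program, and the costs, demands, and admissible orders are all invented.

```python
from itertools import product

# Minimal receding-horizon sketch of the idea: at each period, enumerate
# order sequences over a short horizon, pick the first order of the
# cheapest sequence, apply it, and repeat.  Costs and dynamics are
# made-up placeholders, not the paper's hybrid PWA/MLD model.
REF = 10.0              # desired stock level
HOLD = 1.0              # holding-cost weight (piecewise costs omitted)
ORDERS = [0, 2, 4, 6]   # admissible order quantities per period
HORIZON = 3

def plan(stock, demand_forecast):
    best_cost, best_first = float("inf"), 0
    for seq in product(ORDERS, repeat=HORIZON):
        s, c = stock, 0.0
        for u, d in zip(seq, demand_forecast):
            s = s + u - d                    # inventory balance
            c += HOLD * abs(s - REF)         # tracking cost
        if c < best_cost:
            best_cost, best_first = c, seq[0]
    return best_first

stock = 6.0
demands = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
trace = []
for t in range(4):
    u = plan(stock, demands[t:t + HORIZON])   # solve, apply first move
    stock = stock + u - demands[t]
    trace.append(stock)

print(trace)
```

Only the first order of each optimized sequence is applied before re-planning, which is the defining feature of MPC; the MLD formulation in the paper replaces the brute-force enumeration with a mixed-integer solver.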
Vázquez-Castilla, Sara; Jaramillo-Carmona, Sara; Fuentes-Alventosa, Jose María; Jiménez-Araujo, Ana; Rodriguez-Arcos, Rocío; Cermeño-Sacristán, Pedro; Espejo-Calvo, Juan Antonio; Guillén-Bejarano, Rafael
2013-07-03
The main goal of this study was the optimization of an HPLC-MS method for the qualitative and quantitative analysis of asparagus saponins. The method includes extraction with aqueous ethanol, cleanup by solid-phase extraction, separation by reverse-phase chromatography, electrospray ionization, and detection in a single quadrupole mass analyzer. The method was used for the comparison of selected genotypes of the Huétor-Tájar asparagus landrace and selected varieties of commercial diploid hybrids of green asparagus. The results showed that while protodioscin was almost the only saponin detected in the commercial hybrids, eight different saponins were detected in the Huétor-Tájar asparagus genotypes. The mass spectra indicated that HT saponins are derived from a furostan-type steroidal genin having a single bond between carbons 5 and 6 of the B ring. The total concentration of saponins was found to be higher in triguero asparagus than in commercial hybrids.
Optimizing ETL by a Two-level Data Staging Method
DEFF Research Database (Denmark)
Liu, Xiufeng; Iftikhar, Nadeem; Nielsen, Per Sieverts
2016-01-01
In data warehousing, the data from source systems are populated into a central data warehouse (DW) through extraction, transformation and loading (ETL). The standard ETL approach usually uses sequential jobs to process the data with dependencies, such as dimension and fact data. It is a non-trivial task to process the so-called early-/late-arriving data, which arrive out of order. This paper proposes a two-level data staging area method to optimize ETL. The proposed method is an all-in-one solution that supports processing different types of data from operational systems, including early-/late-arriving data and fast-/slowly-changing data. The introduced additional staging area decouples the loading process from data extraction and transformation, which improves ETL flexibility and minimizes intervention to the data warehouse. This paper evaluates the proposed method empirically, which shows...
Multiobjective Optimization Methods for Congestion Management in Deregulated Power Systems
Directory of Open Access Journals (Sweden)
K. Vijayakumar
2012-01-01
Full Text Available Congestion management is one of the important functions performed by the system operator in a deregulated electricity market to ensure secure operation of the transmission system. This paper proposes two effective methods for transmission congestion alleviation in deregulated power systems. Congestion or overload in transmission networks is alleviated by rescheduling of generators and/or load shedding. Two objectives of conflicting nature, (1) transmission line overload and (2) congestion cost, are optimized in this paper. The multiobjective fuzzy evolutionary programming (FEP) and non-dominated sorting genetic algorithm II methods are used to solve this problem. FEP uses the combined advantages of fuzzy and evolutionary programming (EP) techniques and gives a single solution satisfying both objectives, whereas the non-dominated sorting genetic algorithm (NSGA-II) gives a set of Pareto-optimal solutions. The methods provide an efficient and reliable algorithm for line overload alleviation due to critical line outages in deregulated power markets. The quality and usefulness of the algorithm are tested on the IEEE 30-bus system.
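The non-domination test underlying NSGA-II-style methods can be sketched directly. The (overload, congestion cost) pairs below are made-up numbers, not results for the IEEE 30-bus system.

```python
# Illustrative sketch of the non-domination test at the core of NSGA-II
# style methods: extract the Pareto front of a small set of
# (overload, congestion_cost) pairs, both to be minimized.  The numbers
# are invented, not an IEEE 30-bus result.
def dominates(a, b):
    """a dominates b iff a is no worse in every objective and strictly
    better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(
        x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

solutions = [(5.0, 120.0), (3.0, 150.0), (4.0, 130.0),
             (6.0, 110.0), (4.5, 160.0), (3.5, 140.0)]
front = pareto_front(solutions)
print(sorted(front))
```

Here (4.5, 160.0) is dominated by (4.0, 130.0) and drops out; the remaining points form the trade-off set from which a final compromise would be chosen.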
An optimized method for counting dopaminergic neurons in zebrafish.
Directory of Open Access Journals (Sweden)
Hideaki Matsui
Full Text Available In recent years, considerable effort has been devoted to the development of a fish model for Parkinson's disease (PD to examine the pathological mechanisms of neurodegeneration. To effectively evaluate PD pathology, the ability to accurately and reliably count dopaminergic neurons is important. However, there is currently no such standardized method. Due to the relatively small number of dopaminergic neurons in fish, stereological estimation would not be suitable. In addition, serial sectioning requires proficiency to not lose any sections, and it permits double counting due to the large size of some of the dopaminergic neurons. In this study, we report an optimized protocol for staining dopaminergic neurons in zebrafish and provide a reliable counting method. Finally, using our optimized protocol, we confirmed that administration of 6-hydroxydopamine (a neurotoxin or the deletion of the PINK1 gene (one of the causative genes of familiar PD in zebrafish caused significant reduction in the number of dopaminergic and noradrenergic neurons. In summary, this method will serve as an important tool for the appropriate evaluation and establishment of fish PD models.
Optimized application of penalized regression methods to diverse genomic data.
Waldron, Levi; Pintilie, Melania; Tsao, Ming-Sound; Shepherd, Frances A; Huttenhower, Curtis; Jurisica, Igor
2011-12-15
Penalized regression methods have been adopted widely for high-dimensional feature selection and prediction in many bioinformatic and biostatistical contexts. While their theoretical properties are well-understood, specific methodology for their optimal application to genomic data has not been determined. Through simulation of contrasting scenarios of correlated high-dimensional survival data, we compared the LASSO, Ridge and Elastic Net penalties for prediction and variable selection. We found that a 2D tuning of the Elastic Net penalties was necessary to avoid mimicking the performance of LASSO or Ridge regression. Furthermore, we found that in a simulated scenario favoring the LASSO penalty, a univariate pre-filter made the Elastic Net behave more like Ridge regression, which was detrimental to prediction performance. We demonstrate the real-life application of these methods to predicting the survival of cancer patients from microarray data, and to classification of obese and lean individuals from metagenomic data. Based on these results, we provide an optimized set of guidelines for the application of penalized regression for reproducible class comparison and prediction with genomic data. A parallelized implementation of the methods presented for regression and for simulation of synthetic data is provided as the pensim R package, available at http://cran.r-project.org/web/packages/pensim/index.html. chuttenh@hsph.harvard.edu; juris@ai.utoronto.ca Supplementary data are available at Bioinformatics online.
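The coordinate-descent core of an Elastic Net fit, whose two penalties the paper argues must be tuned jointly in 2D, can be sketched on synthetic data. This is a from-scratch illustration, not the pensim package; in practice the single call below would sit inside a grid search over (lam, l1_ratio).

```python
import random

# Sketch of elastic-net coordinate descent.  The overall penalty lam and
# the L1/L2 mix l1_ratio are the two axes of the 2-D tuning the paper
# recommends.  Data are synthetic; this is not the pensim package.
random.seed(1)

def elastic_net(X, y, lam, l1_ratio, n_iter=200):
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual correlation for coordinate j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k]
                      for k in range(p) if k != j)) for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            t = lam * l1_ratio
            # soft-thresholding step with ridge shrinkage in the denominator
            if rho > t:
                b[j] = (rho - t) / (z + lam * (1 - l1_ratio))
            elif rho < -t:
                b[j] = (rho + t) / (z + lam * (1 - l1_ratio))
            else:
                b[j] = 0.0
    return b

# synthetic data: y depends on features 0 and 1 only
n, p = 60, 5
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[0] - 1.5 * row[1] + random.gauss(0, 0.1) for row in X]

b = elastic_net(X, y, lam=0.1, l1_ratio=0.9)
print([round(v, 1) for v in b])
```

With a mostly-L1 mix the three irrelevant coefficients are driven to (near) zero while the two true coefficients survive, slightly shrunk; sweeping lam and l1_ratio over a grid and validating each fit reproduces the 2-D tuning the abstract describes.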
Hybrid Training Method for MLP: Optimization of Architecture and Training.
Zanchettin, C; Ludermir, T B; Almeida, L M
2011-08-01
The performance of an artificial neural network (ANN) depends upon the selection of proper connection weights, network architecture, and cost function during network training. This paper presents a hybrid approach (GaTSa) to optimize the performance of the ANN in terms of architecture and weights. GaTSa is an extension of a previous method (TSa) proposed by the authors. GaTSa is based on the integration of the heuristic simulated annealing (SA), tabu search (TS), genetic algorithms (GA), and backpropagation, whereas TSa does not use GA. The main advantages of GaTSa are the following: a constructive process to add new nodes in the architecture based on GA, the ability to escape from local minima with uphill moves (SA feature), and faster convergence by the evaluation of a set of solutions (TS feature). The performance of GaTSa is investigated through an empirical evaluation of 11 public-domain data sets using different cost functions in the simultaneous optimization of the multilayer perceptron ANN architecture and weights. Experiments demonstrated that GaTSa can also be used for relevant feature selection. GaTSa presented statistically relevant results in comparison with other global and local optimization techniques.
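The "uphill moves" ingredient that lets SA-based hybrids like GaTSa escape local minima can be shown on a standard multimodal toy function. The cooling schedule and step size are arbitrary illustrative choices, and the objective is not an MLP training problem.

```python
import math
import random

# Tiny simulated-annealing sketch of the uphill-move mechanism: accept
# worse solutions with probability exp(-delta/T) so the search can leave
# local minima.  The test function is a standard multimodal toy
# (1-D Rastrigin), not an MLP cost function.
random.seed(3)

def f(x):
    # global minimum at x = 0, local minima near every other integer
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

x = 4.0                 # deliberately start in a distant local minimum
best = x
T = 10.0
for step in range(20000):
    cand = x + random.gauss(0.0, 0.5)
    delta = f(cand) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
        if f(x) < f(best):
            best = x
    T = max(1e-3, T * 0.9995)   # geometric cooling

print(round(best, 2))
```

A pure descent method started at x = 4 would stay in that basin forever; the temperature-controlled acceptance rule is what GaTSa borrows from SA, alongside tabu memory and GA-based construction.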
Optimized Charging Scheduling with Single Mobile Charger for Wireless Rechargeable Sensor Networks
Directory of Open Access Journals (Sweden)
Qihua Wang
2017-11-01
Full Text Available Due to the rapid development of wireless charging technology, the recharging issue in wireless rechargeable sensor networks (WRSN) has been a popular research problem in the past few years. The weakness of previous work is that charging route planning is not reasonable. In this work, a dynamic optimal scheduling scheme aiming to maximize the vacation time ratio of a single mobile charger for a WRSN is proposed. In the proposed scheme, the wireless sensor network is divided into several sub-networks according to the initial topology of the deployed sensor networks. After a comprehensive analysis of the energy states, working state and constraints of the different sensor nodes in the WRSN, we transform the optimized charging path problem of the whole network into local optimization problems of the sub-networks. The optimized charging path with respect to the dynamic network topology in each sub-network is obtained by solving an optimization problem, and the lifetime of the deployed wireless sensor network can be prolonged. Simulation results show that the proposed scheme has good and reliable performance for a small wireless rechargeable sensor network.
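The flavour of per-sub-network charging-route planning can be sketched with a nearest-neighbour tour over low-energy nodes. The coordinates, energies, and threshold are invented, and the paper's actual scheme solves an optimization problem per sub-network rather than using this greedy heuristic.

```python
import math

# Toy sketch of charging-route planning inside one sub-network: visit
# the nodes whose residual energy is below a threshold, ordering stops
# by nearest-neighbour to keep the tour short.  Coordinates and energies
# are invented for illustration.
nodes = {                      # id: (x, y, residual_energy)
    "A": (0.0, 1.0, 0.2),
    "B": (2.0, 0.0, 0.9),
    "C": (3.0, 3.0, 0.1),
    "D": (1.0, 4.0, 0.8),
    "E": (4.0, 1.0, 0.3),
}
THRESHOLD = 0.5                # recharge nodes below this level

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route(start, targets):
    pos, tour = start, []
    todo = dict(targets)
    while todo:
        nxt = min(todo, key=lambda k: dist(pos, todo[k][:2]))
        tour.append(nxt)
        pos = todo.pop(nxt)[:2]
    return tour

needy = {k: v for k, v in nodes.items() if v[2] < THRESHOLD}
print(route((0.0, 0.0), needy))
```

A shorter tour means the charger spends less time travelling and more time on vacation at the base station, which is the quantity the paper's scheduler maximizes.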
Optimization of sequential decisions by least squares Monte Carlo method
DEFF Research Database (Denmark)
Nishijima, Kazuyoshi; Anders, Annett
The present paper considers the sequential decision optimization problem. This is an important class of decision problems in engineering; important examples include decisions on the quality control of manufactured products and engineering components, the timing of the implementation of climate change adaptation measures, and the evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme is based on the least squares Monte Carlo method. To demonstrate its use and advantages, two numerical examples are provided on the quality control of manufactured products.
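The regression step at the heart of least squares Monte Carlo can be illustrated with a toy two-period stopping problem. This is a minimal sketch, not the scheme of Anders and Nishijima: the random-walk state, the payoff function, and all numbers are invented for illustration.

```python
import random

# Minimal least-squares Monte Carlo sketch (Longstaff-Schwartz flavour)
# for a two-period stopping decision: simulate paths, regress the
# realized continuation payoff on the state at t=1, and stop when the
# immediate payoff beats the regression estimate.  The random-walk state
# and payoff are invented for illustration.
random.seed(7)

def payoff(s):
    return max(4.0 - s, 0.0)    # stop reward when the state is low

N = 5000
s1 = [5.0 + random.gauss(0.0, 1.0) for _ in range(N)]          # t = 1
s2 = [s + random.gauss(0.0, 1.0) for s in s1]                  # t = 2
cont = [payoff(s) for s in s2]          # realized continuation payoff

# linear least squares  cont ~ a + b * s1  via the closed-form solution
mx = sum(s1) / N
my = sum(cont) / N
b = sum((x - mx) * (y - my) for x, y in zip(s1, cont)) / \
    sum((x - mx) ** 2 for x in s1)
a = my - b * mx

# policy: stop at t=1 when the immediate payoff exceeds predicted value
value = sum(payoff(x) if payoff(x) > a + b * x else y
            for x, y in zip(s1, cont)) / N
naive = sum(cont) / N                    # never stop early

print(round(value, 3), round(naive, 3))
```

The regression replaces the intractable conditional expectation of future payoffs with a cheap fitted function of the current state, which is what makes the approach scale to longer decision sequences.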
An alternative method for restoring single-tooth implants.
McArdle, B F; Clarizio, L F
2001-09-01
Having laboratory technicians prepare soft-tissue casts and implant abutments with or without concomitant removable temporary prostheses during the restorative phase of single-tooth replacement is an accepted practice. It can, however, result in functional and esthetic intraoral discrepancies. Single-tooth implants can be restored with crowns (like those for natural teeth) fabricated at a dental laboratory on casts obtained from final impressions of prepared implant abutments. In the case reported, the restorative dentist restored the patient's single-tooth implant after taking a transfer impression. He constructed a cast simulating the peri-implant soft tissue with final impression material and prepared the abutment on this model. His dental assistant then fabricated a fixed provisional restoration on the prepared abutment. At the patient's next visit, the dentist torqued the prepared abutment onto the implant, took a final impression and inserted the provisional restoration. A crown was made conventionally at the dental laboratory and cemented in place at the following visit. This alternative method for restoring single-tooth implants enhances esthetics by more accurately simulating marginal gingival architecture. It also improves function by preloading the implant through fixed temporization after the dentist, rather than the laboratory technician, prepares the abutment to the dentist's preferred contours.
CSIR Research Space (South Africa)
Debba, Pravesh
2010-11-01
Full Text Available This paper reports on the results from ordinary least squares and ridge regression as statistical methods, and compares them to numerical optimization methods such as the stochastic method for global optimization, simulated annealing, particle swarm...
Simaria, Ana S; Hassan, Sally; Varadaraju, Hemanthram; Rowley, Jon; Warren, Kim; Vanek, Philip; Farid, Suzanne S
2014-01-01
For allogeneic cell therapies to reach their therapeutic potential, challenges related to achieving scalable and robust manufacturing processes will need to be addressed. A particular challenge is producing lot-sizes capable of meeting commercial demands of up to 10^9 cells/dose for large patient numbers due to the current limitations of expansion technologies. This article describes the application of a decisional tool to identify the most cost-effective expansion technologies for different scales of production as well as current gaps in the technology capabilities for allogeneic cell therapy manufacture. The tool integrates bioprocess economics with optimization to assess the economic competitiveness of planar and microcarrier-based cell expansion technologies. Visualization methods were used to identify the production scales where planar technologies will cease to be cost-effective and where microcarrier-based bioreactors become the only option. The tool outputs also predict that for the industry to be sustainable for high demand scenarios, significant increases will likely be needed in the performance capabilities of microcarrier-based systems. These data are presented using a technology S-curve as well as windows of operation to identify the combination of cell productivities and scale of single-use bioreactors required to meet future lot sizes. The modeling insights can be used to identify where future R&D investment should be focused to improve the performance of the most promising technologies so that they become a robust and scalable option that enables the cell therapy industry to reach commercially relevant lot sizes. The tool outputs can facilitate decision-making very early on in development and be used to predict, and better manage, the risk of process changes needed as products proceed through the development pathway. © 2013 Wiley Periodicals, Inc.
A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction
Directory of Open Access Journals (Sweden)
Nannan Yu
2017-01-01
Full Text Available In this paper, we propose a novel method for solving the single-trial evoked potential (EP estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX. The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed with these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX while characterizing spontaneous electroencephalographic activity as an autoregression model driven by white noise and with each component of the EP modeled by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has greater tracking capabilities of specific components of the EP complex as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.
Directory of Open Access Journals (Sweden)
Kai Moriguchi
2015-01-01
Full Text Available We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm’s versatility. Thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years. The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method’s reliability and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.
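The search procedure described above can be illustrated with a minimal simulated-annealing sketch over a discrete thinning schedule (eight 5-year decision periods, rates constrained to 0-50%, as in the abstract). The yield model `npv` is a caller-supplied stand-in, not one of the four benchmark models used in the study, and the cooling schedule is an assumption:

```python
import math
import random

def simulated_annealing(npv, n_periods=8, rates=(0.0, 0.1, 0.2, 0.3, 0.4, 0.5),
                        t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Maximize net present value over a discrete thinning schedule.

    npv: callable mapping a tuple of per-period thinning rates to a value.
    """
    rng = random.Random(seed)
    current = tuple(rng.choice(rates) for _ in range(n_periods))
    best, best_val, cur_val, t = current, npv(current), npv(current), t0
    for _ in range(iters):
        # Neighbor: change the thinning rate of one randomly chosen period.
        i = rng.randrange(n_periods)
        cand = current[:i] + (rng.choice(rates),) + current[i + 1:]
        cand_val = npv(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand_val >= cur_val or rng.random() < math.exp((cand_val - cur_val) / t):
            current, cur_val = cand, cand_val
            if cur_val > best_val:
                best, best_val = current, cur_val
        t *= cooling
    return best, best_val
```

Because single runs can vary, the abstract's recommendation to execute multiple runs corresponds to calling this with several seeds and keeping the best result.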
Wang, Hong; Wang, Xicheng; Li, Zheng; Li, Keqiu
2016-01-01
The metabolic network model allows for an in-depth insight into the molecular mechanism of a particular organism. Because most parameters of the metabolic network cannot be directly measured, they must be estimated by using optimization algorithms. However, three characteristics of the metabolic network model, i.e., high nonlinearity, a large number of parameters, and wide variation ranges of the parameters, restrict the application of many traditional optimization algorithms. As a result, there is a growing demand to develop efficient optimization approaches to address this complex problem. In this paper, a Kriging-based algorithm aiming at parameter estimation is presented for constructing metabolic networks. In the algorithm, a new infill sampling criterion, named expected improvement and mutual information (EI&MI), is adopted to improve the modeling accuracy by selecting multiple new sample points at each cycle, and a domain decomposition strategy based on principal component analysis is introduced to save computing time. Meanwhile, the convergence speed is accelerated by combining a single-dimensional optimization method with a dynamic coordinate perturbation strategy when determining the new sample points. Finally, the algorithm is applied to the arachidonic acid metabolic network to estimate its parameters. The obtained results demonstrate the effectiveness of the proposed algorithm in obtaining precise parameter values under a limited number of iterations.
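The expected-improvement part of the EI&MI infill criterion is a standard Kriging quantity and can be sketched as below; the mutual-information term, the domain decomposition, and the coordinate perturbation strategy of the paper are not reproduced here:

```python
import math

def expected_improvement(mu, sigma, y_best):
    """Standard expected-improvement infill criterion for minimization.

    mu, sigma: Kriging (Gaussian-process) predictive mean and standard
    deviation at a candidate point; y_best: best objective value so far.
    EI = (y_best - mu) * Phi(z) + sigma * phi(z), with z = (y_best - mu) / sigma.
    """
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    return (y_best - mu) * cdf + sigma * pdf
```

Candidates with large EI trade off low predicted mean against high predictive uncertainty, which is what makes the criterion suitable for expensive models like metabolic networks.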
Heuristic methods for single link shared backup path protection
DEFF Research Database (Denmark)
Haahr, Jørgen Thorlund; Stidsen, Thomas Riis; Zachariasen, Martin
2014-01-01
... schemes are employed. In contrast to manual intervention, automatic protection schemes such as shared backup path protection (SBPP) can recover from failure quickly and efficiently. SBPP is a simple but efficient protection scheme that can be implemented in backbone networks with technology available today. In SBPP, backup paths are planned in advance for every failure scenario in order to recover from failures quickly and efficiently. Planning SBPP is an NP-hard optimization problem, and previous work confirms that it is time-consuming to solve the problem in practice using exact methods. We present heuristic algorithms and lower bound methods for the SBPP planning problem. Experimental results show that the heuristic algorithms are able to find good quality solutions in minutes. A solution gap of less than 3.5% was achieved for 5 of 7 benchmark instances (and a gap of less than 11% for the remaining ...).
International Nuclear Information System (INIS)
Chen, Qun; Xu, Yun-Chao; Hao, Jun-Hong
2014-01-01
Highlights: • An optimization method for practical thermodynamic cycles is developed. • The entransy-based heat transfer analysis and thermodynamic analysis are combined. • A theoretical relation between system requirements and design parameters is derived. • The optimization problem can be converted into a conditional extremum problem. • The proposed method provides several useful optimization criteria. - Abstract: A thermodynamic cycle usually consists of heat transfer processes in heat exchangers and heat-work conversion processes in compressors, expanders and/or turbines. This paper presents a new optimization method for effective improvement of thermodynamic cycle performance through the combination of entransy theory and thermodynamics. The heat transfer processes in a gas refrigeration cycle are analyzed by entransy theory and the heat-work conversion processes are analyzed by thermodynamics. The combination of these two analyses yields a mathematical relation directly connecting system requirements, e.g. cooling capacity rate and power consumption rate, with design parameters, e.g. heat transfer area of each heat exchanger and heat capacity rate of each working fluid, without introducing any intermediate variable. Based on this relation together with the conditional extremum method, we theoretically derive an optimization equation group. Simultaneously solving this equation group offers the optimal structural and operating parameters for every single gas refrigeration cycle and furthermore provides several useful optimization criteria for all the cycles. Finally, a practical gas refrigeration cycle is taken as an example to show the application and validity of the newly proposed optimization method.
Homann, Stefanie; Hofmann, Christian; Gorin, Aleksandr M; Nguyen, Huy Cong Xuan; Huynh, Diana; Hamid, Phillip; Maithel, Neil; Yacoubian, Vahe; Mu, Wenli; Kossyvakis, Athanasios; Sen Roy, Shubhendu; Yang, Otto Orlean; Kelesidis, Theodoros
2017-01-01
Transfection is one of the most frequently used techniques in molecular biology that is also applicable for gene therapy studies in humans. One of the biggest challenges to investigate the protein function and interaction in gene therapy studies is to have reliable monospecific detection reagents, particularly antibodies, for all human gene products. Thus, a reliable method that can optimize transfection efficiency based on not only expression of the target protein of interest but also the uptake of the nucleic acid plasmid, can be an important tool in molecular biology. Here, we present a simple, rapid and robust flow cytometric method that can be used as a tool to optimize transfection efficiency at the single cell level while overcoming limitations of prior established methods that quantify transfection efficiency. By using optimized ratios of transfection reagent and a nucleic acid (DNA or RNA) vector directly labeled with a fluorochrome, this method can be used as a tool to simultaneously quantify cellular toxicity of different transfection reagents, the amount of nucleic acid plasmid that cells have taken up during transfection as well as the amount of the encoded expressed protein. Finally, we demonstrate that this method is reproducible, can be standardized and can reliably and rapidly quantify transfection efficiency, reducing assay costs and increasing throughput while increasing data robustness.
An n -material thresholding method for improving integerness of solutions in topology optimization
International Nuclear Information System (INIS)
Watts, Seth; Tortorelli, Daniel A.
2016-01-01
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
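The general idea of smoothly pushing volume fractions toward integer values while staying on the simplex can be illustrated with a generic softmax-style sharpening. This is a stand-in for intuition only, not the barycentric-coordinate construction derived in the paper; the sharpness parameter `beta` is an assumption:

```python
import math

def sharpen_volume_fractions(vf, beta=8.0):
    """Smoothly push a vector of material volume fractions toward an
    integer-valued (0/1) selection while keeping it on the simplex
    (non-negative components summing to one).

    The map is smooth in vf, so it remains compatible with the
    gradient-based optimizers mentioned in the abstract; larger beta
    gives a result closer to 0/1.
    """
    w = [math.exp(beta * v) for v in vf]
    s = sum(w)
    return [x / s for x in w]
```

For two materials this reduces to a smooth step between the phases; for three or more it sharpens the dominant phase without ever leaving the admissible mixture set.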
A spatial domain optimization method to generate plane dependent masks
Wu, Yifeng
2006-01-01
The stochastic screening technique uses a fixed threshold array to generate halftoned images. When this technique is applied to color images, an important problem is how to generate the masks for different color planes. Ideally, a set of plane-dependent color masks should have the following characteristics: a) when total ink coverage is less than 100%, dots of different colors should not overlap each other; b) for each individual mask, the dot distribution should be uniform; c) no visual artifact should be visible due to low-frequency patterns. In this paper, we propose a novel color mask generation method in which the optimal dot placement is searched directly in the spatial domain. The advantage of using the spatial domain approach is that we can directly control dot uniformity during the optimization, and we can also cope with the color plane dependency by introducing inter-plane constraints. We will show that using this method, we can generate plane-dependent color masks with the characteristics mentioned above.
Optimization method for dimensioning a geological HLW waste repository
International Nuclear Information System (INIS)
Ouvrier, N.; Chaudon, L.; Malherbe, L.
1990-01-01
This method was developed by the CEA to optimize the dimensions of a geological repository by taking account of technical and economic parameters. It involves optimizing radioactive waste storage conditions on the basis of economic criteria with allowance for specified thermal constraints. The results are intended to identify trends and guide the choice from among available options: simple and highly flexible models were therefore used in this study, and only nearfield thermal constraints were taken into consideration. Because of the present uncertainty on the physicochemical properties of the repository environment and on the unit cost figures, this study focused on developing a suitable method rather than on obtaining definitive results. The optimum values found for the two media investigated (granite and salt) show that it is advisable to minimize the interim storage time, implying the containers must be separated by buffer material, whereas vertical spacing may not be required after a 30-year interim storage period. Moreover, the boreholes should be as deep as possible, on a close pitch in widely spaced handling drifts. These results depend to a considerable extent on the assumption of high interim storage costs
Directory of Open Access Journals (Sweden)
Ang Gong
2015-12-01
Full Text Available For Global Navigation Satellite System (GNSS) single frequency, single epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and to allow the search to be carried out directly by the least squares ambiguity decorrelation algorithm (LAMBDA) method. Some vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix the ambiguity reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and can perform robustly when the angular error is not large.
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single frequency, single epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and to allow the search to be carried out directly by the least squares ambiguity decorrelation algorithm (LAMBDA) method. Some vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix the ambiguity reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and can perform robustly when the angular error is not large.
Method for nonlinear optimization for gas tagging and other systems
Chen, T.; Gross, K.C.; Wegerich, S.
1998-01-06
A method and system are disclosed for providing nuclear fuel rods with a configuration of isotopic gas tags. The method includes selecting a true location of a first gas tag node, selecting initial locations for the remaining n-1 nodes using target gas tag compositions, generating a set of random gene pools with L nodes, and applying a Hopfield network to compute an energy, or cost, for each of the L gene pools, using selected constraints to establish minimum energy states that identify optimal gas tag nodes. Each energy is compared to a convergence threshold; upon identifying a gas tag node, the procedure continues with the next gas tag node until all remaining n nodes have been established. 6 figs.
Experimental Methods for the Analysis of Optimization Algorithms
DEFF Research Database (Denmark)
In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different ... of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists ...
ARSTEC, Nonlinear Optimization Program Using Random Search Method
International Nuclear Information System (INIS)
Rasmuson, D. M.; Marshall, N. H.
1979-01-01
1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed integer, optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays
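The adaptive random search described in item 2 can be sketched as follows. The abstract only states that the search region is recentered on each improving point; the shrinking of the step widths on improvement is an added assumption for this sketch, not ARSTEC's documented schedule:

```python
import random

def adaptive_random_search(f, lower, upper, iters=5000, seed=0):
    """Adaptive random search: sample points around the current center
    and recenter the search region on every improving point.

    f: objective to minimize; lower/upper: per-variable bounds
    (ARSTEC itself allows at most 10 independent variables).
    """
    rng = random.Random(seed)
    n = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
    fx = f(x)
    width = [upper[i] - lower[i] for i in range(n)]
    for _ in range(iters):
        # Propose a point in the current region, clipped to the bounds.
        cand = [min(upper[i], max(lower[i], x[i] + rng.uniform(-width[i], width[i])))
                for i in range(n)]
        fc = f(cand)
        if fc < fx:  # improvement: recenter the region and shrink it slightly
            x, fx = cand, fc
            width = [w * 0.98 for w in width]
    return x, fx
```

Continuous variables are shown here; handling ARSTEC's mixed integer variables would additionally round or enumerate the discrete coordinates.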
A discrete optimization method for nuclear fuel management
International Nuclear Information System (INIS)
Argaud, J.P.
1993-04-01
Nuclear loading pattern elaboration can be seen as a combinatorial optimization problem of tremendous size with non-linear cost-functions, and searches are always numerically expensive. After a brief introduction of the main aspects of nuclear fuel management, this note presents a new idea to treat the combinatorial problem by using information included in the gradient of a cost function. The method is to choose, by direct observation of the gradient, the most interesting changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method, and finally, connections with simulated annealing and genetic algorithms are described as an attempt to improve search processes. (author). 1 fig., 16 refs
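The idea of using the gradient to pick promising discrete changes can be illustrated with a first-order estimate of the cost change produced by swapping two assemblies. This is a generic sketch of the principle, not the note's actual procedure, and the scalar per-position "loading attribute" is a hypothetical simplification:

```python
def best_swap(cost_gradient, loading):
    """Rank candidate assembly swaps by a first-order estimate of the
    cost change, read directly from the gradient of the cost function.

    cost_gradient[i]: d(cost)/d(loading attribute at position i)
    loading[i]: current attribute of the assembly at position i
    Swapping positions i and j changes the cost, to first order, by
    (g_i - g_j) * (l_j - l_i); the most negative estimate is the most
    interesting change.
    """
    n = len(loading)
    best = None
    for i in range(n):
        for j in range(i + 1, n):
            delta = (cost_gradient[i] - cost_gradient[j]) * (loading[j] - loading[i])
            if best is None or delta < best[0]:
                best = (delta, i, j)
    return best  # (predicted cost change, position i, position j)
```

In a search loop, the top-ranked swaps would be evaluated with the full core simulator, avoiding an exhaustive scan of the enormous pattern space.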
Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport
Berton, Jeffrey J.; Guynn, Mark D.
2012-01-01
Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight to the ultrahigh bypass engine design process and provide information to NASA program management to help guide its technology development efforts.
Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods
DEFF Research Database (Denmark)
Capolei, Andrea; Jørgensen, John Bagterp
2012-01-01
In this paper, we describe a novel numerical algorithm for solution of constrained optimal control problems of the Bolza type for stiff and/or unstable systems. The numerical algorithm combines explicit singly diagonally implicit Runge-Kutta (ESDIRK) integration methods with a multiple shooting algorithm. As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent ...
Optimal Deflection of Earth-Crossing Object Using a Three-Dimensional Single Impulse
Directory of Open Access Journals (Sweden)
Byeong-Hee Mihn
2005-09-01
Full Text Available Optimization problems are formulated to calculate optimal impulses for deflecting Earth-crossing objects using nonlinear programming. This formulation allows us to analyze velocity changes normal to the celestial body's orbital plane, which are neglected in many previous studies. The constrained optimization in three-dimensional space is based on a patched conic method including the Earth's gravitational effects, and yields the impulsive ΔV needed to deflect the target's orbit. The optimal solution depends on the relative positions and velocities of the Earth and the Earth-crossing object, and can be represented by the optimal magnitude and angle of ΔV as functions of the impulse time. The component of ΔV perpendicular to the orbit plane can sometimes play a non-negligible role as the impulse time approaches the impact time. The optimal ΔV increases when the original orbit of the Earth-crossing object is more similar to the Earth's orbit, and also increases exponentially as the impulse time approaches the impact time. The analyses performed in the present paper can be applied to deflection missions in the future.
Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong
2018-03-01
With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time current protection as an example. Reliability, selectivity, speed of operation, and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values in the relay protection of the whole power system.
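For reference, a minimal particle swarm optimizer over bounded setting values might look as follows. This is the plain PSO baseline the paper compares against; its improved quantum-behaved variant and the protection-specific objective (reliability, selectivity, speed, flexibility penalties) are not reproduced, so `f` is a caller-supplied stand-in:

```python
import random

def pso(f, lower, upper, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box bounds with a standard particle swarm."""
    rng = random.Random(seed)
    n = len(lower)
    xs = [[rng.uniform(lower[d], upper[d]) for d in range(n)] for _ in range(n_particles)]
    vs = [[0.0] * n for _ in range(n_particles)]
    pbest = [x[:] for x in xs]           # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                # Inertia + cognitive pull to pbest + social pull to gbest.
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(upper[d], max(lower[d], xs[i][d] + vs[i][d]))
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval
```

In the setting-optimization context, each particle would encode the vector of relay setting values and `f` would penalize violations of the four requirements.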
Quantifying and optimizing single-molecule switching nanoscopy at high speeds.
Directory of Open Access Journals (Sweden)
Yu Lin
Full Text Available Single-molecule switching nanoscopy overcomes the diffraction limit of light by stochastically switching single fluorescent molecules on and off, and then localizing their positions individually. Recent advances in this technique have greatly accelerated the data acquisition speed and improved the temporal resolution of super-resolution imaging. However, it has not been quantified whether this speed increase comes at the cost of compromised image quality. The spatial and temporal resolution depends on many factors, among which laser intensity and camera speed are the two most critical parameters. Here we quantitatively compare the image quality achieved when imaging Alexa Fluor 647-immunolabeled microtubules over an extended range of laser intensities and camera speeds using three criteria - localization precision, density of localized molecules, and resolution of reconstructed images based on Fourier Ring Correlation. We found that, with optimized parameters, single-molecule switching nanoscopy at high speeds can achieve the same image quality as imaging at conventional speeds in a 5-25 times shorter time period. Furthermore, we measured the photoswitching kinetics of Alexa Fluor 647 from single-molecule experiments, and, based on this kinetic data, we developed algorithms to simulate single-molecule switching nanoscopy images. We used this software tool to demonstrate how laser intensity and camera speed affect the density of active fluorophores and influence the achievable resolution. Our study provides guidelines for choosing appropriate laser intensities for imaging Alexa Fluor 647 at different speeds and a quantification protocol for future evaluations of other probes and imaging parameters.
Advanced Topology Optimization Methods for Conceptual Architectural Design
DEFF Research Database (Denmark)
Aage, Niels; Amir, Oded; Clausen, Anders
2014-01-01
... in topological optimization: interactive control and continuous visualization; embedding flexible voids within the design space; consideration of distinct tension/compression properties; and optimization of dual material systems. In extension, optimization procedures for skeletal structures such as trusses and frames are implemented. The developed procedures allow for the exploration of new territories in the optimization of architectural structures, and offer new methodological strategies for bridging conceptual gaps between optimization and architectural practice.
Directory of Open Access Journals (Sweden)
Nurmaulidar Nurmaulidar
2015-04-01
Full Text Available The travelling salesman problem (TSP) is a complex optimization problem that is difficult to solve and requires quite a long time for a large number of cities. Evolutionary algorithms, as heuristic methods, are well suited to solving such complex optimization problems. Like many other algorithms, however, evolutionary algorithms experience premature convergence, whereby variation is eliminated from a population of fairly fit individuals before a complete solution is achieved; a method is therefore required to delay convergence. A specific method of fitness sharing called phenotype fitness sharing was used in this research. The aim of this research is to find out whether fitness sharing in an evolutionary algorithm is able to optimize the TSP. Two concepts of the evolutionary algorithm were used: one with single elitism and the other with a federated solution. The two concepts were tested with the fitness sharing method using thresholds of 0.25, 0.50 and 0.75, and the results were compared to a method without fitness sharing. The results indicate that with the single elitism concept, fitness sharing gave a more optimal result for data of 100-1000 cities. On the other hand, with the federated solution concept, fitness sharing yielded a more optimal result for data above 1000 cities, as well as a better data-spreading solution compared to the method without fitness sharing.
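The fitness-sharing mechanism discussed above can be sketched generically: each individual's raw fitness is divided by a niche count, so crowded regions of the population are penalized and diversity is preserved longer. The phenotype distance between tours is abstracted here into a precomputed distance matrix; the triangular sharing function is one common choice, not necessarily the paper's:

```python
def shared_fitness(fitness, distances, sigma_share=0.5):
    """Apply fitness sharing to a population.

    fitness[i]: raw fitness of individual i (higher is better)
    distances[i][j]: phenotype distance between individuals i and j
    sigma_share: sharing threshold (the paper tests 0.25, 0.50 and 0.75)
    """
    n = len(fitness)
    shared = []
    for i in range(n):
        # Niche count: sum of the sharing function over the population;
        # individuals farther than sigma_share contribute nothing.
        niche = sum(max(0.0, 1.0 - distances[i][j] / sigma_share) for j in range(n))
        shared.append(fitness[i] / niche)
    return shared
```

Selection then operates on the shared values, so duplicated or near-duplicate tours effectively split a single niche's fitness among themselves.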
A Single-Degree-of-Freedom Energy Optimization Strategy for Power-Split Hybrid Electric Vehicles
Directory of Open Access Journals (Sweden)
Chaoying Xia
2017-07-01
Full Text Available This paper presents a single-degree-of-freedom energy optimization strategy to solve the energy management problem existing in power-split hybrid electric vehicles (HEVs). The proposed strategy is based on a quadratic performance index, which is innovatively designed to simultaneously restrict the fluctuation of battery state of charge (SOC) and reduce fuel consumption. An extended quadratic optimal control problem is formulated by approximating the fuel consumption rate as a quadratic polynomial of engine power. The approximated optimal control law is obtained by utilizing the solution properties of the Riccati equation and adjoint equation. It is easy to implement in real time and its engineering significance is explained in detail. In order to validate the effectiveness of the proposed strategy, a forward-facing vehicle simulation model is established based on the ADVISOR software (Version 2002, National Renewable Energy Laboratory, Golden, CO, USA). The simulation results show that there is only a small fuel consumption difference between the proposed strategy and the Pontryagin's minimum principle (PMP)-based global optimal strategy, and the proposed strategy also exhibits good adaptability under different initial battery SOC, cargo mass and road slope conditions.
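The role the Riccati equation plays in such quadratic-performance-index strategies can be illustrated in the simplest scalar, discrete-time setting. This is purely illustrative: the paper works with the Riccati and adjoint equations of its own SOC/fuel model, not this toy system:

```python
def scalar_dare(a, b, q, r, iters=500):
    """Fixed-point iteration for a scalar discrete-time algebraic
    Riccati equation for the system x' = a*x + b*u with stage cost
    q*x^2 + r*u^2.

    Iterates p = q + a*p*a - (a*p*b)^2 / (r + b*p*b) to convergence,
    then returns p and the optimal feedback gain k, where u = -k*x.
    """
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    k = a * p * b / (r + b * p * b)
    return p, k
```

Once `p` is known, the feedback law `u = -k*x` is a closed-form expression, which is the kind of structure that makes quadratic-index strategies cheap enough for real-time energy management.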
Vadgama, Rajeshkumar N; Odaneth, Annamma A; Lali, Arvind M
2015-12-01
Isopropyl myristate finds many applications in the food, cosmetic and pharmaceutical industries as an emollient, thickening agent, or lubricant. Using a homogeneous reaction phase, the non-specific lipase derived from Candida antarctica, marketed as Novozym 435, was determined to be most suitable for the enzymatic synthesis of isopropyl myristate. The high molar ratio of alcohol to acid creates a novel single-phase medium which overcomes mass transfer effects and facilitates downstream processing. Various reaction parameters were optimized to obtain a high yield of isopropyl myristate; the effects of temperature, agitation speed, organic solvent, biocatalyst loading and batch operational stability of the enzyme were systematically studied. A conversion of 87.65% was obtained at a 15:1 molar ratio of isopropyl alcohol to myristic acid with 4% (w/w) catalyst loading and an agitation speed of 150 rpm at 60 °C. The enzyme also showed good batch operational stability under the optimized conditions.
Shobeiri, Vahid
2016-03-01
In this article, the bi-directional evolutionary structural optimization (BESO) method based on the element-free Galerkin (EFG) method is presented for topology optimization of continuum structures. The mathematical formulation of the topology optimization is developed considering the nodal strain energy as the design variable and the minimization of compliance as the objective function. The EFG method is used to derive the shape functions using the moving least squares approximation. The essential boundary conditions are enforced by the method of Lagrange multipliers. Several topology optimization problems are presented to show the effectiveness of the proposed method. Many issues related to topology optimization of continuum structures, such as chequerboard patterns and mesh dependency, are studied in the examples.
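The core of a BESO iteration is a threshold update on element (here, nodal) sensitivities: keep the most effective material, remove the rest. A simplified sketch of that single step; a real implementation wraps it in repeated EFG analyses, sensitivity filtering, and the bi-directional re-admission of removed elements:

```python
def beso_update(sensitivities, volume_fraction):
    """Keep the top `volume_fraction` of design variables (1 = solid, 0 = void).

    sensitivities: per-element or per-node strain-energy sensitivities.
    Ties at the threshold may retain slightly more than the target fraction.
    """
    n_keep = max(1, round(volume_fraction * len(sensitivities)))
    threshold = sorted(sensitivities, reverse=True)[n_keep - 1]
    return [1 if s >= threshold else 0 for s in sensitivities]
```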
A Method for Turbocharging Four-Stroke Single Cylinder Engines
Buchman, Michael; Winter, Amos
2014-11-01
Turbocharging is not conventionally used with single cylinder engines due to the timing mismatch between when the turbo is powered and when it can deliver air to the cylinder. The proposed solution involves a fixed, pressurized volume - which we call an air capacitor - on the intake side of the engine between the turbocharger and intake valves. The capacitor acts as a buffer and would be implemented as a new style of intake manifold with a larger volume than traditional systems. This talk will present the flow analysis used to determine the optimal size for the capacitor, which was found to be four to five times the engine capacity, as well as its anticipated contributions to engine performance. For a capacitor sized for a one-liter engine, the time to reach operating pressure was found to be approximately two seconds, which would be acceptable for slowly accelerating and steady-state applications. The achievable increase in air density relative to ambient was found to vary between fifty percent, for adiabatic compression with no heat transfer from the capacitor, and eighty percent, for perfect heat transfer. To first order, these density increases are proportional to the anticipated power increases. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
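The two quoted density limits follow from the thermodynamics of filling the capacitor. A back-of-envelope check, assuming a boost pressure ratio of 1.8 (an illustrative value, not stated in the abstract) and air with a heat-capacity ratio of 1.4:

```python
# Density gain of charge air in the capacitor relative to ambient.
pressure_ratio = 1.8   # assumed boost pressure ratio (illustrative)
gamma = 1.4            # heat-capacity ratio of air

# Perfect heat transfer (isothermal filling): density scales with pressure.
rho_isothermal = pressure_ratio                   # ~1.80, i.e. +80%

# No heat transfer (adiabatic filling): temperature rises, so density gains less.
rho_adiabatic = pressure_ratio ** (1.0 / gamma)   # ~1.52, i.e. ~+50%
```

The isothermal case reproduces the eighty percent figure and the adiabatic case roughly fifty percent, consistent with the range reported above.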
Optimized t-expansion method for the Rabi Hamiltonian
International Nuclear Information System (INIS)
Travenec, Igor; Samaj, Ladislav
2011-01-01
A polemic arose recently about the applicability of the t-expansion method to the calculation of the ground state energy E₀ of the Rabi model. For specific choices of the trial function and a very large number of involved connected moments, the t-expansion results are rather poor and exhibit considerable oscillations. In this Letter, we formulate the t-expansion method for trial functions containing two free parameters which capture two exactly solvable limits of the Rabi Hamiltonian. At each order of the t-series, E₀ is assumed to be stationary with respect to the free parameters. A high accuracy of E₀ estimates is achieved for small numbers (5 or 6) of involved connected moments, the relative error being smaller than 10⁻⁴ (0.01%) within the whole parameter space of the Rabi Hamiltonian. A special symmetrization of the trial function enables us to also calculate the first excited energy E₁, with a relative error smaller than 10⁻² (1%). Highlights: We study the ground state energy of the Rabi Hamiltonian using the t-expansion method with an optimized trial function. High accuracy of estimates is achieved, with relative errors smaller than 0.01%. The first excited state energy is also calculated. The method has general applicability.
Underwater Environment SDAP Method Using Multi Single-Beam Sonars
Directory of Open Access Journals (Sweden)
Zheping Yan
2013-01-01
A new autopilot system for unmanned underwater vehicles (UUVs) using multiple single-beam sonars is proposed for environmental exploration. The proposed autopilot system, called simultaneous detection and patrolling (SDAP), addresses two fundamental challenges: autonomous guidance and control. Autonomous guidance, path planning, and target tracking are based on a desired reference path reconstructed from sonar data collected along the environmental contour at a predefined safety distance. The reference path is first estimated using a support vector clustering inertia method and then refined with Bézier curves to satisfy the inertial properties of the UUV. A differential geometry feedback linearization method guides the vehicle onto the predefined path, while a finite predictive stable inversion control algorithm is employed for autonomous target approaching. Experimental results from sea trials demonstrate that the proposed system provides satisfactory performance, implying great potential for future underwater exploration tasks.
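Refining a clustered path with Bézier curves, as described above, rests on evaluating the curve from its control points. A plain cubic Bézier evaluator (the control points in the test are arbitrary, chosen only for illustration):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bézier curve at parameter t in [0, 1].

    p0..p3 are control points as coordinate tuples; p0 and p3 are the
    endpoints, p1 and p2 shape the curve to give C1-smooth path segments.
    """
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Sampling t over [0, 1] yields a smooth segment between waypoints, which is what limits the curvature demanded of the vehicle.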
Information theoretic methods for image processing algorithm optimization
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; an automated approach is thus a must. We discuss an information theory based metric for evaluating the adaptive characteristics of an algorithm (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than of perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used to assess a whole imaging system (sensor plus post-processing).
Methods for the design and optimization of shaped tokamaks
International Nuclear Information System (INIS)
Haney, S.W.
1988-05-01
Two major questions associated with the design and optimization of shaped tokamaks are considered. How do physics and engineering constraints affect the design of shaped tokamaks? How can the process of designing shaped tokamaks be improved? The first question is addressed with the aid of a completely analytical procedure for optimizing the design of a resistive-magnet tokamak reactor. It is shown that physics constraints---particularly the MHD beta limits and the Murakami density limit---have an enormous, and sometimes unexpected, effect on the final design. The second question is addressed through the development of a series of computer models for calculating plasma equilibria, estimating poloidal field coil currents, and analyzing axisymmetric MHD stability in the presence of resistive conductors and feedback. The models offer potential advantages over conventional methods since they are characterized by extremely fast computer execution times, simplicity, and robustness. Furthermore, evidence is presented suggesting that very little loss of accuracy is incurred in achieving these desirable features. 94 refs., 66 figs., 14 tabs
Single Allocation Hub-and-spoke Networks Design Based on Ant Colony Optimization Algorithm
Directory of Open Access Journals (Sweden)
Yang Pingle
2014-10-01
Capacitated single allocation hub-and-spoke network design can be abstracted as a mixed integer linear programming model with three classes of variables. We introduce an improved ant colony algorithm with six local search operators, together with a "Solution Pair" concept that decomposes the problem so that it becomes more specific and better suited to the premises and advantages of the ant colony algorithm. Finally, location simulation experiments on the Australia Post data demonstrate that the algorithm has good efficiency and stability for solving this problem.
Optimized design and performance of a shared pump single clad 2 μm TDFA
Tench, Robert E.; Romano, Clément; Delavaux, Jean-Marc
2018-05-01
We report the design, experimental performance, and simulation of a single stage, co- and counter-pumped Tm-doped fiber amplifier (TDFA) in the 2 μm signal wavelength band with an optimized 1567 nm shared pump source. We investigate the dependence of output power, gain, and efficiency on pump coupling ratio and signal wavelength. Small signal gains of >50 dB, an output power of 2 W, and small signal noise figures were measured, in good agreement with simulated data. We also discuss performance tradeoffs with respect to amplifier topology for this simple and efficient TDFA.
Optimal retirement planning with a focus on single and multilife annuities
DEFF Research Database (Denmark)
Konicz, Agnieszka Karolina; Pisinger, David; Weissensteiner, Alex
We optimize the asset allocation, consumption and bequest decisions of an investor with uncertain lifetime and under time-varying investment opportunities. The asset menu is given by stocks, zero coupon bonds and pure endowments with different maturities. The latter are contingent on either a single...... or a joint life, and pay fixed or variable benefits. We further include transaction costs on stocks and bonds, and surrender charges on pure endowments. We show that despite high surrender charges, annuities are the primary asset class in a portfolio, and that annuity income is never fully consumed, but used......
Single well tracer method to evaluate enhanced recovery
Sheely, Jr., Clyde Q.; Baldwin, Jr., David E.
1978-01-01
Data useful for evaluating the effectiveness of, or for designing, an enhanced recovery process (a recovery process that mobilizes and moves hydrocarbons through a hydrocarbon-bearing subterranean formation from an injection well to a production well by injecting a mobilizing fluid into the injection well) are obtained by a process which comprises, sequentially: determining the hydrocarbon saturation in a volume of the formation near a well bore penetrating the formation, injecting sufficient mobilizing fluid to mobilize and move hydrocarbons from a volume of the formation near the well bore, and determining by the single well tracer method a hydrocarbon saturation profile in the volume from which hydrocarbons were moved. The single well tracer method employed is disclosed in U.S. Pat. No. 3,623,842. The process is useful for evaluating surfactant floods, water floods, polymer floods, CO₂ floods, caustic floods, micellar floods, and the like in the reservoir in much less time and at greatly reduced cost compared to conventional multi-well pilot tests.
Mente, Carsten; Prade, Ina; Brusch, Lutz; Breier, Georg; Deutsch, Andreas
2011-07-01
Lattice-gas cellular automata (LGCAs) can serve as stochastic mathematical models for collective behavior (e.g. pattern formation) emerging in populations of interacting cells. In this paper, a two-phase optimization algorithm for global parameter estimation in LGCA models is presented. In the first phase, local minima are identified through gradient-based optimization. Algorithmic differentiation is adopted to calculate the necessary gradient information. In the second phase, for global optimization of the parameter set, a multi-level single-linkage method is used. As an example, the parameter estimation algorithm is applied to a LGCA model for early in vitro angiogenic pattern formation.
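The two-phase scheme above can be sketched in simplified form: random multistart stands in for the multi-level single-linkage global phase, and a finite-difference gradient stands in for algorithmic differentiation; the objective is a toy function, not the LGCA likelihood:

```python
import random

def gradient_descent(f, x, lr=0.1, steps=200, h=1e-6):
    """Phase 1 (local): descend f from x using a forward-difference gradient."""
    x = list(x)
    for _ in range(steps):
        fx = f(x)
        g = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += h
            g.append((f(xp) - fx) / h)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def two_phase_minimize(f, bounds, n_starts=20, seed=0):
    """Phase 2 (global): restart the local search from random points, keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        x = gradient_descent(f, x0)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Multi-level single-linkage improves on plain multistart by clustering start points so that each basin of attraction triggers only one local search; the sketch omits that bookkeeping.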
DEFF Research Database (Denmark)
Stolpe, Mathias; Bendsøe, Martin P.
2007-01-01
This paper presents some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single load minimum compliance topology design problem with a fixed...... finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities......
Statistical Methods for Single-Particle Electron Cryomicroscopy
DEFF Research Database (Denmark)
Jensen, Katrine Hommelhoff
, several randomly oriented copies of the protein are available, each representing a certain viewing direction of the structure. This implies two main computational problems: (1) to determine the angular relationship between the individual projection images, i.e. determine the protein pose in each view...... from the noisy, randomly oriented projection images. Many statistical approaches to SPR have been proposed in the past. Typically, due to the computation time complexity, they rely on approximated maximum likelihood (ML) or maximum a posteriori (MAP) estimate of the structure. All methods presented...... statistical inversion to optimally cope with the high amount of noise, as well as to incorporate prior information to obtain more reliable estimates. For the first problem, we investigate the statistical recovery of the geometry between a set of projection images. In more detail, we show the equivalence...
Directory of Open Access Journals (Sweden)
Sangyeong Jeong
2017-10-01
This paper proposes an experimental optimization method for a wireless power transfer (WPT) system. The power transfer characteristics of a WPT system with arbitrary loads and various types of coupling and compensation networks can be extracted by frequency-domain measurements. The various performance parameters of the WPT system, such as input real/imaginary/apparent power, power factor, efficiency, output power and voltage gain, can be accurately extracted in the frequency domain by a single passive measurement. Subsequently, the design parameters can be efficiently tuned by separating the overall design steps into two parts. The extracted performance parameters of the WPT system were validated with time-domain experiments.
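For intuition on how the measured coupling and coil quality factors bound the achievable efficiency, the standard two-coil result can be computed directly. This is a textbook formula under optimal loading, not a result extracted from this paper:

```python
import math

def max_link_efficiency(k, q1, q2):
    """Best achievable two-coil WPT link efficiency under optimal loading.

    k: magnetic coupling coefficient between the coils.
    q1, q2: unloaded quality factors of the transmit and receive coils.
    Uses the figure of merit U = k*sqrt(Q1*Q2).
    """
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1.0 + math.sqrt(1.0 + u**2)) ** 2
```

With k = 0.2 and Q1 = Q2 = 100, the figure of merit is U = 20 and the efficiency ceiling is about 90%, which is why tuning the compensation network toward this optimum matters.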
Hybrid RHF/MP2 geometry optimizations with the effective fragment molecular orbital method
DEFF Research Database (Denmark)
Christensen, Anders Steen; Svendsen, Casper Steinmann; Fedorov, Dmitri G
2014-01-01
The frozen domain effective fragment molecular orbital method is extended to allow for the treatment of a single fragment at the MP2 level of theory. The approach is applied to the conversion of chorismate to prephenate by chorismate mutase, where the substrate is treated at the MP2 level of theory...... while the rest of the system is treated at the RHF level. MP2 geometry optimization is found to lower the barrier by up to 3.5 kcal/mol compared to RHF optimizations and ONIOM energy refinement, and leads to smoother convergence with respect to the basis set for the reaction profile. For double zeta......
Highly optimized tunable Er3+-doped single longitudinal mode fiber ring laser, experiment and model
DEFF Research Database (Denmark)
Poulsen, Christian; Sejka, Milan
1993-01-01
A continuous wave (CW) tunable diode-pumped Er3+-doped fiber ring laser, pumped by a diode laser at wavelengths around 1480 nm, is discussed. A wavelength tuning range of 42 nm, a maximum slope efficiency of 48% and an output power of 14.4 mW have been achieved. Single longitudinal mode lasing with a linewidth of 6 kHz has been measured. A fast model of the erbium-doped fiber laser was developed and used to optimize the output parameters of the laser.
DMTO – a method for Discrete Material and Thickness Optimization of laminated composite structures
DEFF Research Database (Denmark)
Sørensen, Søren Nørgaard; Sørensen, Rene; Lund, Erik
2014-01-01
This paper presents a gradient based topology optimization method for Discrete Material and Thickness Optimization of laminated composite structures, labelled the DMTO method. The capabilities of the proposed method are demonstrated on mass minimization, subject to constraints on the structural...
Convex functions and optimization methods on Riemannian manifolds
Udrişte, Constantin
1994-01-01
This unique monograph discusses the interaction between Riemannian geometry, convex programming, numerical analysis, dynamical systems and mathematical modelling. The book is the first account of the development of this subject as it emerged at the beginning of the 'seventies. A unified theory of convexity of functions, dynamical systems and optimization methods on Riemannian manifolds is also presented. Topics covered include geodesics and completeness of Riemannian manifolds, variations of the p-energy of a curve and Jacobi fields, convex programs on Riemannian manifolds, geometrical constructions of convex functions, flows and energies, applications of convexity, descent algorithms on Riemannian manifolds, TC and TP programs for calculations and plots, all allowing the user to explore and experiment interactively with real life problems in the language of Riemannian geometry. An appendix is devoted to convexity and completeness in Finsler manifolds. For students and researchers in such diverse fields as pu...
Shape optimized headers and methods of manufacture thereof
Perrin, Ian James
2013-11-05
Disclosed herein is a shape optimized header comprising a shell that is operative for collecting a fluid; wherein an internal diameter and/or a wall thickness of the shell vary with a change in pressure and/or a change in a fluid flow rate in the shell; and tubes; wherein the tubes are in communication with the shell and are operative to transfer fluid into the shell. Disclosed herein is a method comprising fixedly attaching tubes to a shell; wherein the shell is operative for collecting a fluid; wherein an internal diameter and/or a wall thickness of the shell vary with a change in pressure and/or a change in a fluid flow rate in the shell; and wherein the tubes are in communication with the shell and are operative to transfer fluid into the shell.
Comparison of operation optimization methods in energy system modelling
DEFF Research Database (Denmark)
Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian
2013-01-01
In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics......, possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may provide this ability. In order to evaluate whether the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used...... operation constraints, while the third approach uses nonlinear programming. In the present case the non-linearity occurs in the boiler efficiency of the power plants and the cv-value of an extraction plant. The linear programming model is used as a benchmark, as this type is frequently used, and has the lowest......
Directory of Open Access Journals (Sweden)
Zhiming Zhang
2014-04-01
A single-valued neutrosophic set (SVNS) and an interval neutrosophic set (INS) are two instances of a neutrosophic set, which can efficiently deal with uncertain, imprecise, incomplete, and inconsistent information. In this paper, we develop a novel method for solving single-valued neutrosophic multi-criteria decision making with incomplete weight information, in which the criterion values are given in the form of single-valued neutrosophic sets (SVNSs), and the information about criterion weights is incompletely known or completely unknown. The developed method consists of two stages. The first stage uses the maximizing deviation method to establish an optimization model, which derives the optimal weights of the criteria under single-valued neutrosophic environments. Having obtained the criterion weights, the second stage develops a single-valued neutrosophic TOPSIS (SVNTOPSIS) method to determine the solution with the shortest distance to the single-valued neutrosophic positive ideal solution (SVNPIS) and the greatest distance from the single-valued neutrosophic negative ideal solution (SVNNIS). Moreover, a global supplier selection problem is used to demonstrate the validity and applicability of the developed method. Finally, the extended results in interval neutrosophic situations are pointed out and a comparative analysis with other methods is given to illustrate the advantages of the developed methods.
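The TOPSIS ranking stage can be sketched on single-valued neutrosophic ratings, where each rating is a (truth, indeterminacy, falsity) triple. The ideal points and the weighted Euclidean distance below are common textbook choices; the paper's exact definitions may differ:

```python
import math

POS_IDEAL = (1.0, 0.0, 0.0)  # best possible (T, I, F) rating
NEG_IDEAL = (0.0, 1.0, 1.0)  # worst possible (T, I, F) rating

def svn_distance(a, b, weights):
    """Weighted Euclidean distance between two lists of (T, I, F) ratings."""
    return math.sqrt(sum(
        w * ((x[0] - y[0])**2 + (x[1] - y[1])**2 + (x[2] - y[2])**2) / 3.0
        for w, x, y in zip(weights, a, b)))

def closeness(alternative, weights):
    """Relative closeness coefficient: higher means closer to the ideal."""
    n = len(alternative)
    d_pos = svn_distance(alternative, [POS_IDEAL] * n, weights)
    d_neg = svn_distance(alternative, [NEG_IDEAL] * n, weights)
    return d_neg / (d_pos + d_neg)
```

Ranking alternatives by this coefficient reproduces the SVNPIS/SVNNIS logic described above; in the full method the weights come from the maximizing deviation model rather than being given.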
Guo, Xuezhen; Claassen, G D H; Oude Lansink, A G J M; Saatkamp, H W
2014-06-01
Economic analysis of hazard surveillance in livestock production chains is essential for surveillance organizations (such as food safety authorities) when making scientifically based decisions on optimization of resource allocation. To enable this, quantitative decision support tools are required at two levels of analysis: (1) single-hazard surveillance system and (2) surveillance portfolio. This paper addresses the first level by presenting a conceptual approach for the economic analysis of single-hazard surveillance systems. The concept includes objective and subjective aspects of single-hazard surveillance system analysis: (1) a simulation part to derive an efficient set of surveillance setups based on the technical surveillance performance parameters (TSPPs) and the corresponding surveillance costs, i.e., objective analysis, and (2) a multi-criteria decision making model to evaluate the impacts of the hazard surveillance, i.e., subjective analysis. The conceptual approach was checked for (1) conceptual validity and (2) data validity. Issues regarding the practical use of the approach, particularly the data requirement, were discussed. We concluded that the conceptual approach is scientifically credible for economic analysis of single-hazard surveillance systems and that the practicability of the approach depends on data availability. Copyright © 2014 Elsevier B.V. All rights reserved.
An Optimized Replica Distribution Method in Cloud Storage System
Directory of Open Access Journals (Sweden)
Yan Wang
2017-01-01
Cloud storage systems, which aim to establish a shared storage environment, are typical applications of cloud computing, and data replication has therefore become a key research issue in such systems. Considering data access performance and the trade-off between replica consistency maintenance costs and multi-replica access performance, methods for replica catalog design and replica information acquisition are proposed. Data resources with high access frequency and long response times are then replicated on the nodes that hold the global replica information. A Markov chain model is constructed, and a matrix-geometric solution is used to derive the steady-state solution of the model. Performance parameters in terms of the average response time, finish time, and replica frequency are given to optimize the number of replicas in the storage system. Finally, numerical results with analysis demonstrate the influence of these parameters on system performance.
Optimized design and structural mechanics of a single-piece composite helicopter driveshaft
Henry, Todd C.
In rotorcraft driveline design, single-piece composite driveshafts have much potential for reducing driveline mass and complexity relative to multi-segmented metallic driveshafts. The single-piece shaft concept is enabled by the relatively high fatigue strain capacity of fiber reinforced polymer composites compared to metals. Challenges for single-piece driveshaft design lie in addressing the self-heating behavior of the composite due to material damping, as well as whirling stability, torsional buckling stability, and composite strength. Increased composite temperature due to self-heating reduces the composite strength and is accounted for in this research. The laminate longitudinal stiffness (Ex) and strength (Fx) are known to be heavily degraded by fiber undulation; however, both are not well understood in compression. The whirling stability (a function of longitudinal stiffness) and the composite strength strongly influence driveshaft optimization, and thus are investigated further through the testing of flat and filament wound composite specimens. The design of single-piece composite driveshafts needs to consider many failure criteria, including hysteresis-induced overheating, whirl stability, torsional buckling stability, and material failure by overstress. The present investigation uses multi-objective optimization to investigate the design space, visually highlighting design trades. Design variables included stacking sequence, number of laminas, and number of hanger bearings. The design goals were to minimize weight and maximize the lowest factor of safety by adaptively generating solutions to the multi-objective problem. Several design spaces were investigated by examining the effect of misalignment, ambient temperature, and constant power transmission on the optimized solution. Several materials of interest were modeled using experimentally determined elastic properties and novel temperature-dependent composite strength. Compared to the
Consensus of satellite cluster flight using an energy-matching optimal control method
Luo, Jianjun; Zhou, Liang; Zhang, Bo
2017-11-01
This paper presents an optimal control method for consensus of satellite cluster flight under a kind of energy matching condition. Firstly, the relation between energy matching and satellite periodically bounded relative motion is analyzed, and the satellite energy matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish the period-delayed error dynamics models of a single satellite and of the cluster. Next, a novel satellite cluster feedback control protocol with coupling gain is designed, so that the satellite cluster periodically bounded relative motion consensus problem (the period-delayed error state consensus problem) is transformed into the stability of a set of matrices with the same low dimension. Based on the consensus region theory from research on multi-agent system consensus, the coupling gain can be obtained to satisfy the consensus region requirement and to decouple the satellite cluster information topology from the feedback control gain matrix, which can be determined by the linear quadratic regulator (LQR) optimal method. This method can realize consensus of the satellite cluster period-delayed errors, leading to consistency of the semi-major axes (SMA) and energy matching of the satellite cluster, so that the satellites exhibit globally coordinated cluster behavior. Finally, the feasibility and effectiveness of the proposed energy-matching optimal consensus for satellite cluster flight is verified through numerical simulations.
Jacob, H. G.
1972-01-01
An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as a series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and its application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered, with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
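The reduction to static optimization can be sketched on a toy problem: write the open loop input as a short polynomial series u(t) = a0 + a1·t + a2·t², then tune the coefficients by observing only the simulated output. The plant, cost, and hill-climbing search below are all illustrative assumptions, not the paper's method:

```python
import random

def simulate_cost(coeffs, target=1.0, dt=0.01, t_end=2.0):
    """Run the black-box plant x' = -x + u(t) and return the tracking cost."""
    x, t, cost = 0.0, 0.0, 0.0
    while t < t_end:
        u = coeffs[0] + coeffs[1] * t + coeffs[2] * t * t
        x += dt * (-x + u)              # Euler step of the (hidden) dynamics
        cost += dt * (x - target) ** 2  # cost observed from the output only
        t += dt
    return cost

def optimize_input(n_iters=300, seed=1):
    """Static optimization of the input coefficients by random hill climbing."""
    rng = random.Random(seed)
    best = [0.0, 0.0, 0.0]
    best_cost = simulate_cost(best)
    for _ in range(n_iters):
        cand = [c + rng.gauss(0.0, 0.2) for c in best]
        c_cost = simulate_cost(cand)
        if c_cost < best_cost:
            best, best_cost = cand, c_cost
    return best, best_cost
```

Because the optimizer touches the plant only through `simulate_cost`, the same loop applies unchanged to nonlinear dynamics, which is the portability the abstract describes.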
An improved method for the molecular identification of single dinoflagellate cysts
Directory of Open Access Journals (Sweden)
Yangchun Gao
2017-04-01
Background: Dinoflagellate cysts (i.e., dinocysts) are biologically and ecologically important, as they can help dinoflagellate species survive harsh environments, facilitate their dispersal and serve as seeds for harmful algal blooms. In addition, dinocysts derived from some species can produce more toxins than the vegetative forms, largely affecting species through their food webs and even human health. Consequently, accurate identification of dinocysts represents the first crucial step in many ecological studies. As dinocysts have limited or even no available taxonomic keys, molecular methods have become the first priority for dinocyst identification. However, molecular identification of dinocysts, particularly of single cells, poses technical challenges. The most serious is the low success rate of PCR, especially for heterotrophic species. Methods: In this study, we aim to improve the success rate of single dinocyst identification for the chosen dinocyst species (Gonyaulax spinifera, Polykrikos kofoidii, Lingulodinium polyedrum, Pyrophacus steinii, Protoperidinium leonis and Protoperidinium oblongum) distributed in the South China Sea. We worked on two major technical issues: cleaning possible PCR inhibitors attached to the cyst surface and designing new dinoflagellate-specific PCR primers to improve the success of PCR amplification. Results: For the cleaning of single dinocysts separated from marine sediments, we used ultrasonic wave-based cleaning and optimized the cleaning parameters. Our results showed that the optimized ultrasonic wave-based cleaning method largely improved the success rate and accuracy of both molecular and morphological identification. For the molecular identification with the newly designed dinoflagellate-specific primers (18S634F-18S634R), the success ratio was as high as 86.7% for single dinocysts across multiple taxa when using the optimized ultrasonic wave-based cleaning method, much higher than that
The Adjoint Method for Gradient-based Dynamic Optimization of UV Flash Processes
DEFF Research Database (Denmark)
Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Jørgensen, John Bagterp
2017-01-01
This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Dynamic optimization of UV flash processes is relevant in nonlinear model predictive control of distillation columns, certain two-phase flow......-component flash process which demonstrate the importance of the optimization solver, the compiler, and the linear algebra software for the efficiency of dynamic optimization of UV flash processes....
SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD
Directory of Open Access Journals (Sweden)
J. Zhang
2013-05-01
Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the ALS-recovered canopy height model (CHM) as a realization of a point process of circles. Unlike the traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term which judges the fitness of the model with respect to the data, and a prior term which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.
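The Gibbs energy described above, a data term plus an overlap prior over a circle configuration, can be sketched as follows. The `data_term` callable and the overlap proxy are illustrative stand-ins, not the paper's actual CHM fitness term or prior:

```python
import math

def gibbs_energy(circles, data_term, w_overlap=1.0):
    """Gibbs energy of a circle configuration: data fitness + pairwise overlap prior.

    circles: list of (x, y, r) tuples; data_term: callable scoring one circle
    against the data (a stand-in for the CHM fitness term); w_overlap: weight of
    the prior penalizing overlapping crowns.
    """
    e_data = sum(data_term(c) for c in circles)
    e_prior = 0.0
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            (x1, y1, r1), (x2, y2, r2) = circles[i], circles[j]
            d = math.hypot(x1 - x2, y1 - y2)
            e_prior += max(0.0, (r1 + r2) - d)  # crude proxy for crown overlap
    return e_data + w_overlap * e_prior

# toy data term: prefer crowns of radius ~2 (stand-in for fitness against the CHM)
score = lambda c: (c[2] - 2.0) ** 2
apart = gibbs_energy([(0, 0, 2), (10, 0, 2)], score)  # non-overlapping pair
close = gibbs_energy([(0, 0, 2), (1, 0, 2)], score)   # heavily overlapping pair
```

A sampler (or the steepest descent used in the paper) would then prefer the non-overlapping configuration, since its energy is lower.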
Single- versus Multiobjective Optimization for Evolution of Neural Controllers in Ms. Pac-Man
Directory of Open Access Journals (Sweden)
Tse Guan Tan
2013-01-01
Full Text Available The objective of this study is to focus on the automatic generation of game artificial intelligence (AI) controllers for the Ms. Pac-Man agent by using an artificial neural network (ANN) and multiobjective artificial evolution. The Pareto Archived Evolution Strategy (PAES) is used to generate a Pareto optimal set of ANNs that optimize the conflicting objectives of maximizing Ms. Pac-Man scores (screen-capture mode) and minimizing neural network complexity. This proposed algorithm is called the Pareto Archived Evolution Strategy Neural Network, or PAESNet. Three different architectures of PAESNet were investigated, namely, PAESNet with a fixed number of hidden neurons (PAESNet_F), PAESNet with a varied number of hidden neurons (PAESNet_V), and PAESNet with multiobjective techniques (PAESNet_M). A comparison between single- and multiobjective optimization is conducted in both the training and testing processes. In general, PAESNet_F yielded better results in the training phase, but PAESNet_M successfully reduces the runtime and the complexity of the ANN by minimizing the number of hidden neurons needed in the hidden layer, and it also provides better generalization capability for controlling the game agent in a nondeterministic and dynamic environment.
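The Pareto archive at the heart of PAES can be sketched with a minimal dominance test and archive update (minimization convention; the hypergrid-based crowding of real PAES is replaced here by naive truncation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, max_size=10):
    """PAES-style archive update: keep only mutually nondominated solutions."""
    if any(dominates(a, candidate) for a in archive):
        return archive  # candidate is dominated; archive unchanged
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive[:max_size]  # real PAES truncates via crowding in a hypergrid

# objectives: (-score, hidden_neurons) -- maximize score, minimize complexity
arch = []
for point in [(-100, 8), (-120, 8), (-120, 12), (-90, 4)]:
    arch = update_archive(arch, point)
```

Maximizing the game score fits the minimization convention by negating it, as in the toy objective vectors above; the final archive keeps only the nondominated trade-offs between score and network size.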
Optimization of pumping schemes for 160-Gb/s single channel Raman amplified systems
DEFF Research Database (Denmark)
Xu, Lin; Rottwitt, Karsten; Peucheret, Christophe
2004-01-01
Three different distributed Raman amplification schemes-backward pumping, bidirectional pumping, and second-order pumping-are evaluated numerically for 160-Gb/s single-channel transmission. The same longest transmission distance of 2500 km is achieved for all three pumping methods with a 105-km...
Pipeline heating method based on optimal control and state estimation
Energy Technology Data Exchange (ETDEWEB)
Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu
2010-07-01
In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low sea bed temperatures, which can favor the formation of solid deposits that, under critical operating conditions such as unplanned shutdowns, may result in pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are considered as the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
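The Bayesian filtering idea can be illustrated with a scalar Kalman filter tracking a single lumped pipeline temperature. This is a deliberate simplification of the paper's finite-volume cross-section model; the first-order cooling law and every number below are assumptions:

```python
# Scalar Kalman filter on a lumped pipeline temperature cooling toward the
# seabed temperature during a shutdown; measurements play the role of the
# external-surface readings in the abstract. All parameters are illustrative.
a, T_env = 0.95, 4.0           # discrete cooling factor, seabed temperature (C)
Q, R = 0.01, 0.25              # process and measurement noise variances

def step(T, T_est, P):
    T = a * T + (1 - a) * T_env            # simulated true temperature
    z = T                                  # measurement (noise-free here)
    # predict
    T_pred = a * T_est + (1 - a) * T_env
    P_pred = a * a * P + Q
    # update
    K = P_pred / (P_pred + R)              # Kalman gain
    T_est = T_pred + K * (z - T_pred)
    P = (1 - K) * P_pred
    return T, T_est, P

T, T_est, P = 60.0, 40.0, 100.0            # true state, biased initial guess
for _ in range(50):
    T, T_est, P = step(T, T_est, P)
```

Despite the 20-degree initial bias, the filter locks onto the true temperature within a few steps; a control law would then act on `T_est` rather than on the unmeasurable internal state.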
Characterization of single-crystal sapphire substrates by X-ray methods and atomic force microscopy
International Nuclear Information System (INIS)
Prokhorov, I. A.; Zakharov, B. G.; Asadchikov, V. E.; Butashin, A. V.; Roshchin, B. S.; Tolstikhina, A. L.; Zanaveskin, M. L.; Grishchenko, Yu. V.; Muslimov, A. E.; Yakimchuk, I. V.; Volkov, Yu. O.; Kanevskii, V. M.; Tikhonov, E. O.
2011-01-01
The possibility of characterizing a number of practically important parameters of sapphire substrates by X-ray methods is substantiated. These parameters include wafer bending, traces of an incompletely removed damaged layer that formed as a result of mechanical treatment (scratches and marks), surface roughness, damaged layer thickness, and the specific features of the substrate real structure. The features of the real structure of single-crystal sapphire substrates were investigated by nondestructive methods of double-crystal X-ray diffraction and plane-wave X-ray topography. The surface relief of the substrates was investigated by atomic force microscopy and X-ray scattering. The use of complementary analytical methods yields the most complete information about the structural inhomogeneities and the state of the crystal surface, which is extremely important for optimizing the technology of substrate preparation for epitaxy.
Methods for Optimizing CRISPR-Cas9 Genome Editing Specificity
Tycko, Josh; Myer, Vic E.; Hsu, Patrick D.
2016-01-01
Summary Advances in the development of delivery, repair, and specificity strategies for the CRISPR-Cas9 genome engineering toolbox are helping researchers understand gene function with unprecedented precision and sensitivity. CRISPR-Cas9 also holds enormous therapeutic potential for the treatment of genetic disorders by directly correcting disease-causing mutations. Although the Cas9 protein has been shown to bind and cleave DNA at off-target sites, the field of Cas9 specificity is rapidly progressing with marked improvements in guide RNA selection, protein and guide engineering, novel enzymes, and off-target detection methods. We review important challenges and breakthroughs in the field as a comprehensive practical guide to interested users of genome editing technologies, highlighting key tools and strategies for optimizing specificity. The genome editing community should now strive to standardize such methods for measuring and reporting off-target activity, while keeping in mind that the goal for specificity should be continued improvement and vigilance. PMID:27494557
Simple optimization method for partitioning purification of hydrogen networks
Directory of Open Access Journals (Sweden)
W.M. Shehata
2015-03-01
Full Text Available The Egyptian petroleum fuel market is growing rapidly. These fuels must meet the standard specifications of the Egyptian General Petroleum Corporation (EGPC), which requires lower-sulfur gasoline and diesel fuels. The fuels must therefore be deeply hydrotreated, which increases hydrogen (H2) consumption. Along with the increased H2 consumption for deeper hydrotreating, additional H2 is needed for processing heavier and higher-sulfur crude slates, especially in the hydrocracking process, in addition to hydrotreating units, isomerization units and lubricant plants. Purification technology is used to increase the amount of recycled hydrogen: if the amount of recycled hydrogen is increased, the amount of hydrogen sent to the furnaces with the off gas decreases. In this work, the El Halwagi et al. (2003) and El Halwagi (2012) optimization methods, which are used for recycle/reuse integration systems, have been extended to the partitioning purification of hydrogen networks to minimize hydrogen consumption and hydrogen discharge. An actual case study and two case studies from the literature are solved to illustrate the proposed method.
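A greatly simplified version of source-sink hydrogen allocation can be sketched as a greedy, purity-ranked assignment. This only illustrates the recycle/reuse idea; it is not El Halwagi's actual targeting procedure, and all stream data are invented:

```python
def allocate_hydrogen(sinks, sources):
    """Greedy purity-ranked reuse: serve the most purity-demanding sinks first
    from the purest recycle sources, topping up the balance with fresh H2.
    A simplified sketch of recycle/reuse integration; streams are (flow, purity)."""
    pool = [[f, p] for f, p in sorted(sources, key=lambda s: -s[1])]  # purest first
    fresh = 0.0
    for demand, min_purity in sorted(sinks, key=lambda s: -s[1]):
        for src in pool:
            if demand <= 0:
                break
            if src[1] >= min_purity and src[0] > 0:   # reuse only if purity suffices
                take = min(src[0], demand)
                src[0] -= take
                demand -= take
        fresh += max(0.0, demand)                     # shortfall met by fresh H2
    return fresh

# sinks: (flow demand, required purity); sources: (available flow, purity)
fresh_needed = allocate_hydrogen(
    sinks=[(50.0, 0.95), (30.0, 0.80)],
    sources=[(40.0, 0.97), (25.0, 0.85)],
)
```

Here the 0.97-purity source covers 40 of the 50 units demanded at 0.95 purity, the 0.85-purity source covers 25 of the 30 units demanded at 0.80, and the remaining 15 units come from fresh hydrogen; a real targeting study would also allow blending and partitioning purifiers.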
A Requirements-Driven Optimization Method for Acoustic Treatment Design
Berton, Jeffrey J.
2016-01-01
Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.
Obitayo, Waris
Individual carbon nanotube (CNT) based strain sensors have been found to have excellent piezoresistive properties, with a reported gauge factor (GF) of up to 3000. This GF, on the other hand, has been shown to depend on the structure of the nanotubes. In contrast to individual CNT-based strain sensors, ensemble CNT-based strain sensors have very low GFs; e.g., for a single-walled carbon nanotube (SWCNT) thin-film strain sensor, the GF is ~1. Studies, which are mostly numerical/analytical, have revealed the dependence of piezoresistivity on key parameters like concentration, orientation, length and diameter, aspect ratio, energy barrier height and the Poisson ratio of the polymer matrix. The fundamental understanding of the piezoresistive mechanism in an ensemble CNT-based strain sensor still remains unclear, largely due to discrepancies in the outcomes of these numerical studies. Besides, there has been little or no experimental confirmation of these studies. The goal of my PhD is to study the mechanism and the optimizing principle of a SWCNT thin-film strain sensor and provide experimental validation of the numerical/analytical investigations. The dependence of the piezoresistivity on key parameters like orientation, network density, bundle diameter (effective tunneling area), and length is studied, along with how one can effectively optimize the piezoresistive behavior of SWCNT thin-film strain sensors. To reach this goal, my first research accomplishment involves the study of the orientation of SWCNTs and its effect on the piezoresistivity of mechanically drawn SWCNT thin-film based piezoresistive sensors. Using polarized Raman spectroscopy analysis and a coupled electrical-mechanical test, a quantitative relationship between the strain sensitivity and the SWCNT alignment order parameter was established. As compared to randomly oriented SWCNT thin films, the one with a draw ratio of 3.2 exhibited a ~6x increase in the GF. My second accomplishment involves studying the
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Solving optimum operation of single pump unit problem with ant colony optimization (ACO) algorithm
International Nuclear Information System (INIS)
Yuan, Y; Liu, C
2012-01-01
For pumping stations, the effective scheduling of daily pump operations from solutions to the optimum operation problem is one of the greatest potential areas for energy cost savings. There are some difficulties in solving this problem with traditional optimization methods due to the multimodality of the solution region. In this case, an ACO model for the optimum operation of a pumping unit is proposed and a solution method by ant searching is presented, by rationally setting the objective function and constraint conditions. A weighted directed graph was constructed and feasible solutions may be found by the iterative searching of artificial ants; the optimal solution can then be obtained by applying the rule of state transition and the pheromone updating. An example calculation was conducted and the minimum cost was found to be 4.9979. The result of the ant colony algorithm was compared with the result from dynamic programming or an evolutionary solving method in commercial software under the same discrete condition. The result of ACO is better and the computing time is shorter, which indicates that the ACO algorithm can be of high application value in the optimal operation of pumping stations and related fields.
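A minimal ACO loop for a discrete pump-scheduling toy problem might look as follows. The tariffs, pump levels, energy figures and demand are invented for illustration, and the pheromone rule is a bare-bones variant of the state transition and pheromone-update rules mentioned in the abstract:

```python
import itertools
import random

random.seed(1)
tariff = [1.0, 2.0, 1.5]                  # electricity price per period (assumed)
levels = [0, 5, 10]                       # discrete pump flow settings
energy = {0: 0.0, 5: 6.0, 10: 15.0}      # kWh per period at each flow (assumed)
DEMAND = 15                               # total volume to deliver over the day

def cost(schedule):
    c = sum(tariff[t] * energy[q] for t, q in enumerate(schedule))
    if sum(schedule) < DEMAND:            # infeasible schedules pay a penalty
        c += 1000.0
    return c

tau = {(t, q): 1.0 for t in range(3) for q in levels}   # pheromone trails
best, best_cost = None, float("inf")
for _ in range(30):                       # iterations
    for _ant in range(10):                # ants per iteration
        sched = tuple(
            random.choices(levels, weights=[tau[(t, q)] for q in levels])[0]
            for t in range(3))            # probabilistic state transition
        c = cost(sched)
        if c < best_cost:
            best, best_cost = sched, c
    for key in tau:                       # evaporation ...
        tau[key] *= 0.9
    for t, q in enumerate(best):          # ... then reinforce the best-so-far
        tau[(t, q)] += 1.0 / best_cost

brute = min(cost(s) for s in itertools.product(levels, repeat=3))
```

On this 27-point search space the loop reliably recovers the brute-force optimum (run the pump hard in the cheap period, top up in the mid-priced one); the point of ACO is that the same construction scales to multimodal spaces where enumeration is impossible.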
A second-order unconstrained optimization method for canonical-ensemble density-functional methods
Nygaard, Cecilie R.; Olsen, Jeppe
2013-03-01
A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
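The occupation-angle idea can be made concrete in a few lines: parametrizing n_i = 2 sin²(θ_i) keeps every occupation number in [0, 2] automatically, so no explicit inequality constraints are needed. The angles below are illustrative, not from a real calculation:

```python
import math

def occupations(angles):
    """Map occupation angles to occupation numbers n_i = 2*sin^2(theta_i),
    which are confined to [0, 2] for any real angle."""
    return [2.0 * math.sin(t) ** 2 for t in angles]

# an illustrative ensemble: two fully occupied orbitals and two fractional ones
angles = [math.pi / 2, math.pi / 2, math.pi / 3, math.pi / 6]
n = occupations(angles)
total_electrons = sum(n)   # held fixed by SOEO's built-in Newton-Raphson restriction
```

The orbital rotations and these angles are then optimized together in the Newton-Raphson step, with the electron-count restriction deactivated for the grand-canonical case described in the abstract.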
Shivade, Anand S.; Shinde, Vasudev D.
2014-09-01
In this paper, wire electrical discharge machining of D3 tool steel is studied. The influence of pulse-on time, pulse-off time, peak current and wire speed is investigated for MRR, dimensional deviation, gap current and machining time during intricate machining of D3 tool steel. The Taguchi method is used for single-characteristic optimization, and to optimize all four process parameters simultaneously, grey relational analysis (GRA) is employed along with the Taguchi method. Through GRA, the grey relational grade is used as a performance index to determine the optimal setting of process parameters for multi-objective characteristics. Analysis of variance (ANOVA) shows that peak current is the most significant parameter affecting the multi-objective characteristics. Confirmation results prove the potential of GRA to successfully optimize process parameters for multi-objective characteristics.
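The grey relational grade used as the performance index can be sketched as follows. Sequences are assumed already normalized to [0, 1], the distinguishing coefficient is the customary 0.5, and the response values are invented, not the study's data:

```python
def grey_relational_grades(alternatives, reference, zeta=0.5):
    """Grey relational grade of each alternative against a reference sequence.

    alternatives: rows of normalized responses; reference: the ideal sequence;
    zeta: distinguishing coefficient (0.5 is conventional)."""
    deltas = [[abs(r - x) for r, x in zip(reference, alt)] for alt in alternatives]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    grades = []
    for row in deltas:
        # grey relational coefficient per response, then average into the grade
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# columns: normalized MRR, dimensional deviation, gap current, machining time
ref = [1.0, 1.0, 1.0, 1.0]               # ideal sequence after normalization
alts = [[1.0, 1.0, 1.0, 1.0],            # a run matching the ideal exactly
        [0.6, 0.8, 0.5, 0.7],
        [0.2, 0.3, 0.4, 0.1]]
g = grey_relational_grades(alts, ref)
```

The run closest to the ideal sequence receives the highest grade, so ranking the grades across an orthogonal array directly yields the optimal parameter setting.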
Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan
2017-01-01
The concept of a composite road safety index is a popular and relatively new concept among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), there is a need to choose an adequate method that makes the comparison fair to all compared units. Comparisons using one specific indicator (a parameter which describes safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system where more and more indicators are constantly being developed to describe it. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used the efficiencies (composite indexes) obtained by different models based on DEA and TOPSIS to present the PROMETHEE-RS model for the selection of the optimal method for a composite index. The method for the selection of the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
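One of the composite-index building blocks discussed above, TOPSIS, can be sketched as a minimal scoring pass. Benefit-type indicators only; the weights and indicator values are illustrative, not Serbian police department data:

```python
import math

def topsis(matrix, weights):
    """TOPSIS closeness scores for a decision matrix (rows = compared units,
    columns = benefit-type indicators). Minimal sketch: vector normalization,
    weighting, then closeness C = d- / (d+ + d-) to the ideal solution."""
    ncols = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    V = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    best = [max(col) for col in zip(*V)]    # ideal solution per indicator
    worst = [min(col) for col in zip(*V)]   # anti-ideal solution
    scores = []
    for row in V:
        d_plus = math.dist(row, best)
        d_minus = math.dist(row, worst)
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# three units scored on three benefit-type safety indicators (illustrative)
scores = topsis([[0.9, 0.8, 0.7],
                 [0.5, 0.6, 0.4],
                 [0.2, 0.1, 0.3]], weights=[0.5, 0.3, 0.2])
```

A unit that is best on every indicator scores exactly 1 and the worst scores 0; the PROMETHEE-RS step in the abstract then compares such rankings (from TOPSIS and DEA variants) by their correlation and rank-variation behavior.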
Methods and tools for analysis and optimization of power plants
Energy Technology Data Exchange (ETDEWEB)
Assadi, Mohsen
2000-09-01
The most noticeable advantage of the introduction of computer-aided tools in the field of power generation has been the ability to study a plant's performance prior to the construction phase. The results of these studies have made it possible to change and adjust the plant layout to match the pre-defined requirements. Further development of computers in recent years has opened the way for the implementation of new features in the existing tools and also for the development of new tools for specific applications, like thermodynamic and economic optimization, prediction of the remaining component lifetime, and fault diagnostics, resulting in improvement of the plant's performance, availability and reliability. The most common tools for pre-design studies are heat and mass balance programs. Further thermodynamic and economic optimization of plant layouts, generated by the heat and mass balance programs, can be accomplished by using pinch programs, exergy analysis and thermoeconomics. Surveillance and fault diagnostics of existing systems can be performed by using tools like condition monitoring systems and artificial neural networks. The increased number of tools and their various construction and application areas make the choice of the most adequate tool for a certain application difficult. In this thesis the development of different categories of tools and techniques, and their application areas, is reviewed and presented. Case studies on both existing and theoretical power plant layouts have been performed using different commercially available tools to illuminate their advantages and shortcomings. The development of power plant technology and the requirements for new tools and measurement systems are briefly reviewed. This thesis also covers programming techniques and calculation methods concerning part-load calculations using local linearization, which have been implemented in an in-house heat and mass balance program developed by the author
Optimal Control with Time Delays via the Penalty Method
Directory of Open Access Journals (Sweden)
Mohammed Benharrat
2014-01-01
Full Text Available We prove necessary optimality conditions of Euler-Lagrange type for a problem of the calculus of variations with time delays, where the delay in the unknown function is different from the delay in its derivative. Then, a more general optimal control problem with time delays is considered. The main result gives a convergence theorem, allowing us to obtain a solution to the delayed optimal control problem by considering a sequence of delayed problems of the calculus of variations.
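The penalty-method construction can be sketched in generic notation (assumed here, not taken verbatim from the paper): the delayed dynamics are adjoined as a quadratic penalty, turning the constrained control problem into a sequence of unconstrained delayed variational problems.

```latex
% Variational problem with distinct delays in the function and its derivative
\min_x \; J[x] = \int_0^T L\bigl(t,\; x(t-\tau_1),\; \dot{x}(t-\tau_2)\bigr)\,\mathrm{d}t .

% Optimal control form and its penalized (unconstrained) approximation
\min_{x,u}\; J_\varepsilon[x,u]
  = \int_0^T L\bigl(t,\, x(t-\tau_1),\, u(t)\bigr)\,\mathrm{d}t
  + \frac{1}{\varepsilon}\int_0^T
      \bigl\|\dot{x}(t) - f\bigl(t,\, x(t-\tau_1),\, u(t)\bigr)\bigr\|^2\,\mathrm{d}t ,
\qquad \varepsilon \downarrow 0 .
```

The convergence theorem in the abstract then corresponds to solutions of the penalized delayed problems converging to a solution of the original delayed optimal control problem as ε tends to zero.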
Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations
Directory of Open Access Journals (Sweden)
Bahman Ghazanfari
2013-08-01
Full Text Available In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve a system of Fredholm integral equations, and its effectiveness is presented. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).
Heyden, Andreas; Bell, Alexis T; Keil, Frerich J
2005-12-08
A combination of interpolation methods and local saddle-point search algorithms is probably the most efficient way of finding transition states in chemical reactions. Interpolation methods such as the growing-string method and the nudged elastic band are able to find an approximation to the minimum-energy pathway and thereby provide a good initial guess for a transition state and the imaginary mode connecting both reactant and product states. Since interpolation methods usually employ just a small number of configurations and converge slowly close to the minimum-energy pathway, local methods such as partitioned rational function optimization methods, using either exact or approximate Hessians, or minimum-mode-following methods such as the dimer or the Lanczos method, have to be used to converge to the transition state. A modification to the original dimer method proposed by Henkelman and Jónsson [J. Chem. Phys. 111, 7010 (1999)] is presented, which reduces the number of gradient calculations per cycle from six to four gradients, or three gradients and one energy, and significantly improves the overall performance of the algorithm on quantum-chemical potential-energy surfaces, where forces are subject to numerical noise. A comparison is made between the dimer methods and the well-established partitioned rational function optimization methods for finding transition states after the use of interpolation methods. Results for 24 different small- to medium-sized chemical reactions covering a wide range of structural types demonstrate that the improved dimer method is an efficient alternative saddle-point search algorithm on medium-sized to large systems and is often even able to find transition states when partitioned rational function optimization methods fail to converge.
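The minimum-mode-following idea behind the dimer method can be demonstrated on a toy 2D double-well surface: reverse the force component along the lowest-curvature mode and walk uphill to the saddle. The angle-scan "rotation" below is a brute-force stand-in for the dimer rotation step that is only affordable in 2D, and the surface and step sizes are illustrative:

```python
import math

def energy(p):
    """Toy double-well surface E(x, y) = (x^2 - 1)^2 + y^2 with minima at
    x = +/-1 and a first-order saddle at the origin."""
    x, y = p
    return (x * x - 1.0) ** 2 + y * y

def force(p):
    x, y = p
    return (-4.0 * x * (x * x - 1.0), -2.0 * y)

def lowest_mode(p, delta=1e-3, nangles=180):
    """Minimum-curvature direction via a brute-force angle scan of the
    finite-difference (dimer) curvature -- a 2D stand-in for dimer rotation."""
    e0, best, best_c = energy(p), (1.0, 0.0), float("inf")
    for k in range(nangles):
        t = math.pi * k / nangles
        n = (math.cos(t), math.sin(t))
        c = (energy((p[0] + delta * n[0], p[1] + delta * n[1]))
             + energy((p[0] - delta * n[0], p[1] - delta * n[1]))
             - 2.0 * e0) / delta ** 2
        if c < best_c:
            best, best_c = n, c
    return best

p = (0.4, 0.5)                     # start off the saddle, inside one basin
for _ in range(400):
    n = lowest_mode(p)
    f = force(p)
    fn = f[0] * n[0] + f[1] * n[1]
    # reverse the force along the lowest mode: climb it, relax the rest
    f_eff = (f[0] - 2.0 * fn * n[0], f[1] - 2.0 * fn * n[1])
    p = (p[0] + 0.05 * f_eff[0], p[1] + 0.05 * f_eff[1])
```

The walker converges to the saddle at the origin. Real dimer implementations replace the angle scan with a few rotational force evaluations, which is exactly the count the improved method in the abstract reduces.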
Adjoint-Based Optimal Control on the Pitch Angle of a Single-Bladed Vertical-Axis Wind Turbine
Tsai, Hsieh-Chen; Colonius, Tim
2017-11-01
Optimal control on the pitch angle of a NACA0018 single-bladed vertical-axis wind turbine (VAWT) is numerically investigated at a low Reynolds number of 1500. With fixed tip-speed ratio, the input power is minimized and mean tangential force is maximized over a specific time horizon. The immersed boundary method is used to simulate the two-dimensional, incompressible flow around a horizontal cross section of the VAWT. The problem is formulated as a PDE constrained optimization problem and an iterative solution is obtained using adjoint-based conjugate gradient methods. By the end of the longest control horizon examined, two controls end up with time-invariant pitch angles of about the same magnitude but with the opposite signs. The results show that both cases lead to a reduction in the input power but not necessarily an enhancement in the mean tangential force. These reductions in input power are due to the removal of a power-damaging phenomenon that occurs when a vortex pair is captured by the blade in the upwind-half region of a cycle. This project was supported by Caltech FLOWE center/Gordon and Betty Moore Foundation.
Sharma, Puneet; Kalb, Bobby; Kitajima, Hiroumi D; Salman, Khalil N; Burrow, Bobbie; Ray, Gaye L; Martin, Diego R
2011-01-01
To measure contrast agent enhancement kinetics in the liver and to further evaluate and develop an optimized gadolinium enhanced MRI using a single injection real-time bolus-tracking method for reproducible imaging of the transient arterial-phase. A total of 18 subjects with hypervascular liver lesions were imaged with four dimensional (4D) perfusion scans to measure time-to-peak (TTP) delays of arterial (aorta-celiac axis), liver parenchyma, liver lesion, portal, and hepatic veins. Time delays were calculated from the TTP-aorta signal, and then related to the gradient echo (GRE) k-space acquisition design, to determine optimized timing for real-time bolus-track triggering methodology. As another measure of significance, 200 clinical patients were imaged with 3D-GRE using either a fixed time-interval or by individualized arterial bolus real-time triggering. Bolus TTP-aorta was calculated and arterial-phase acquisitions were compared for accuracy and reproducibility using specific vascular enhancement indicators. The mean bolus transit-time to peak-lesion contrast was 8.1 ± 2.7 seconds following arterial detection, compared to 32.1 ± 5.4 seconds from contrast injection, representing a 62.1% reduction in the time-variability among subjects (N = 18). The real-time bolus-triggered technique more consistently captured the targeted arterial phase (94%), compared to the fixed timing technique (73%), representing an expected improvement of timing accuracy in 28% of patients (P = 0.0001389). Our results show detailed timing window analysis required for optimized arterial real-time bolus-triggering acquisition of transient arterial phase features of liver lesions, with optimized arterial triggering expected to improve reproducibility in a significant number of patients. Copyright © 2010 Wiley-Liss, Inc.
Zhang, Bo; Sun, Jiwei; Wang, Qin; Fan, Niansi; Ni, Jialing; Li, Weicheng; Gao, Yingxin; Li, Yu-You; Xu, Changyou
2017-10-01
The electro-Fenton treatment of coking wastewater was evaluated experimentally in a batch electrochemical reactor. Based on a central composite design coupled with response surface methodology, a regression quadratic equation was developed to model the total organic carbon (TOC) removal efficiency. This model was further proved to accurately predict the optimization of process variables by means of analysis of variance. With the aid of the convex optimization method, which is a global optimization method, the optimal parameters were determined as a current density of 30.9 mA/cm², an Fe²⁺ concentration of 0.35 mg/L, and a pH of 4.05. Under the optimized conditions, the corresponding TOC removal efficiency was up to 73.8%. The maximum TOC removal efficiency achieved was further confirmed by the results of gas chromatography-mass spectrometry analysis.
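The stationary point of a fitted quadratic response surface has a closed form when the model is separable: each x_i* = -b_i / (2 b_ii), which is a maximum when b_ii < 0, so maximizing the fitted concave surface is a convex problem. The coefficients below are made up for illustration; the paper's fitted model will also contain interaction terms, which require solving a small linear system instead:

```python
# Illustrative separable quadratic response surface
#   y = b0 + sum_i (b_i * x_i + b_ii * x_i^2)
# for TOC removal vs. current density, Fe2+ concentration and pH. All
# coefficients are invented for the sketch, not the paper's regression.
b0 = 20.0
b1, b11 = 2.0, -0.05      # current density term (concave: b11 < 0)
b2, b22 = 30.0, -40.0     # Fe2+ concentration term
b3, b33 = 16.0, -2.0      # pH term

# stationary point of each decoupled parabola: x_i* = -b_i / (2 * b_ii)
optimum = [-b1 / (2 * b11), -b2 / (2 * b22), -b3 / (2 * b33)]

def predicted(x):
    """Predicted response of the fitted surface at operating point x."""
    return (b0 + b1 * x[0] + b11 * x[0] ** 2
               + b2 * x[1] + b22 * x[1] ** 2
               + b3 * x[2] + b33 * x[2] ** 2)
```

Because every quadratic coefficient is negative, the stationary point is the global maximum of the fitted surface, which is what the convex optimization step in the abstract guarantees.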
Influence of Pareto optimality on the maximum entropy methods
Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.
2017-07-01
Galerkin meshfree schemes are emerging as a viable substitute for the finite element method in solving partial differential equations for large-deformation as well as crack propagation problems. However, the introduction of the Shannon-Jaynes entropy principle into scattered data approximation has deviated from the traditional way of defining the approximation functions, resulting in maximum entropy approximants. In addition, an objective functional which controls the degree of locality results in local maximum entropy approximants. These are based on an information-theoretical Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of the locality parameter and the prior (weight) function, and the proper choice of both plays a vital role in attaining the desired accuracy. The present work focuses on the effect of the choice of the locality parameter, which defines the degree of locality, and of the priors (Gaussian, cubic spline and quartic spline functions) on the behavior of local maximum entropy approximants.
Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods
Directory of Open Access Journals (Sweden)
Saadia Zahid
2015-01-01
Full Text Available Audio segmentation is a basis for multimedia content analysis, which is the most important and widely used application nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure-speech, music, environment sound, and silence. An algorithm is proposed that preserves important audio content and reduces the misclassification rate without using a large amount of training data, handles noise, and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, firstly, into speech and nonspeech segments by using bagged support vector machines; the nonspeech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data is used for training the classifier; ensemble methods are used for minimizing the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
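The three-stage cascade can be sketched as plain control flow. The bagged-SVM and ANN stages are stood in for here by precomputed probabilities, and the energy and probability thresholds are illustrative, not the paper's trained values:

```python
def classify_frame(energy, speech_prob, music_prob):
    """Hierarchical decision mirroring the cascade in the abstract:
    speech/nonspeech first (bagged SVMs, stood in by speech_prob), then
    music vs. environment sound (ANN, stood in by music_prob), and finally
    a rule-based silence vs. pure-speech check on frame energy."""
    if speech_prob >= 0.5:            # stage 1: speech branch
        if energy < 0.01:             # stage 3: rule-based silence check
            return "silence"
        return "pure-speech"
    if music_prob >= 0.5:             # stage 2: nonspeech branch
        return "music"
    return "environment sound"        # noise is absorbed into this class

labels = [classify_frame(0.2,   0.9, 0.1),   # energetic speech frame
          classify_frame(0.001, 0.9, 0.1),   # near-zero energy -> silence
          classify_frame(0.3,   0.2, 0.8),   # confident music frame
          classify_frame(0.3,   0.2, 0.3)]   # neither -> environment sound
```

Running adjacent frames through such a cascade and merging equal labels yields the content-based segments the abstract describes.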
Application of Taguchi method for cutting force optimization in rock
Indian Academy of Sciences (India)
In this paper, an optimization study was carried out for the cutting force (Fc) acting on circular diamond sawblades in rock sawing. The peripheral speed, traverse speed, cut depth and flow rate of cooling fluid were considered as operating variables and optimized for Fc using the Taguchi approach. An L16(4^4) orthogonal ...
Response surface method applied to optimization of estradiol ...
Indian Academy of Sciences (India)
An optimization process based on response surface methodology was carried out in order to develop a statistical model which describes the relationship between active independent variables and estradiol flux. This model can be used to find out a combination of factor levels during response optimization. Possible options ...
Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems
DEFF Research Database (Denmark)
Larsen, L.S; Thybo, C.; Stoustrup, Jakob
2003-01-01
The potential energy savings in refrigeration systems using energy-optimal control have been proved to be substantial. This, however, requires an intelligent control that drives the refrigeration system towards the energy-optimal state. This paper proposes an approach for a control, which drives...
Valles Sosa, Claudia Evangelina
Bioenergy has become an important alternative energy source that alleviates reliance on petroleum. Bioenergy helps mitigate climate change by reducing greenhouse gas emissions, while providing energy security and enhancing rural development. The Energy Independence and Security Act mandates the use of 21 billion gallons of advanced biofuels, including 16 billion gallons of cellulosic biofuels, by the year 2022. It is clear that biomass can make a substantial contribution to supplying future energy demand in a sustainable way. However, the supply of sustainable energy is one of the main challenges that mankind will face over the coming decades. For instance, many logistical challenges must be overcome to provide an efficient and reliable supply of quality feedstock to biorefineries: about 700 million tons of biomass will need to be sustainably delivered to biorefineries annually to meet the projected use of biofuels by 2022. Approaching this complex logistics problem as a multi-commodity network flow structure, the present work proposes a genetic algorithm for the single-objective problem of maximizing profit, and a multiple-objective evolutionary algorithm to simultaneously maximize profit while minimizing global warming potential. Most transportation optimization problems in the literature consider the maximization of profit or the minimization of total travel time as the objective. In this research work, however, we take a more conscious and sustainable approach to this logistics problem. Planners are increasingly expected to adopt a multi-disciplinary approach, especially given the rising importance of environmental stewardship. The role of the transportation planner and designer is shifting from simple economic analysis to promoting sustainability through the integration of environmental objectives. To
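The profit versus global-warming-potential trade-off described above is resolved through Pareto dominance. A minimal sketch of extracting the non-dominated set follows, with illustrative numbers rather than data from the dissertation.

```python
def pareto_front(solutions):
    """Return the non-dominated subset for (maximize profit, minimize GWP).

    `solutions` is a list of (profit, gwp) tuples; one solution dominates
    another if its profit is >= and its GWP is <=, with at least one
    strict inequality.  All numbers here are illustrative.
    """
    front = []
    for i, (p_i, g_i) in enumerate(solutions):
        dominated = any(
            (p_j >= p_i and g_j <= g_i) and (p_j > p_i or g_j < g_i)
            for j, (p_j, g_j) in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append((p_i, g_i))
    return front

# Candidate biofuel logistics plans: (profit, global warming potential)
plans = [(10, 8), (12, 9), (9, 5), (12, 12), (7, 5)]
print(pareto_front(plans))   # → [(10, 8), (12, 9), (9, 5)]
```

A multiple-objective evolutionary algorithm, as proposed in the work, repeatedly applies this kind of dominance test to rank its population.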
Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong
2015-10-01
We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal-window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented by multiple quick response (QR) codes are first partitioned into a series of subblocks. Each subblock is then marked with a specific polarization state and randomly distributed in 3D space, with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple QR codes are encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
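The modified GS algorithm builds on the classic Gerchberg-Saxton loop, which alternates between planes while enforcing the known amplitude in each. Below is a minimal single-plane sketch of that loop, without the paper's multiple-signal-window, multiple-plane extension or the polarization key; the target pattern and iteration count are illustrative.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=200, seed=0):
    """Textbook Gerchberg-Saxton retrieval of a phase-only mask whose
    far field (FFT) reproduces a target amplitude pattern."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        field = np.exp(1j * phase)                     # unit-amplitude source plane
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))            # keep phase only
    return phase

# Target: a small bright square (think of one QR-code block) on a dark field
target = np.zeros((64, 64))
target[20:30, 20:30] = 1.0

mask = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * mask)))
recon /= recon.max()    # intensity should now concentrate in the square
```

Each extra constraint plane in the MSW-MP variant adds one more amplitude projection per iteration; the loop structure stays the same.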
Optimizing single mode robustness of the distributed modal filtering rod fiber amplifier
DEFF Research Database (Denmark)
Jørgensen, Mette Marie; Petersen, Sidsel Rübner; Laurila, Marko
2012-01-01
High-power fiber amplifiers for pulsed applications require large mode area (LMA) fibers having high pump absorption and near diffraction limited output. Photonic crystal fibers allow realization of short LMA fiber amplifiers having high pump absorption through a pump cladding that is decoupled...... from the outer fiber diameter. However, achieving ultra low NA for single mode (SM) guidance is challenging, thus different design strategies must be applied. The distributed modal filtering (DMF) design enables SM guidance in ultra low NA fibers with very large cores, where large preform tolerances...... can be compensated during the fiber draw. Design optimization of the SM bandwidth of the DMF rod fiber is presented. Analysis of band gap properties results in a fourfold increase of the SM bandwidth compared to previous results, achieved by utilizing the first band of cladding modes, which can cover...
Wang, Dong-Bo; Zhang, Jin-Chuan; Cheng, Feng-Min; Zhao, Yue; Zhuo, Ning; Zhai, Shen-Qiang; Wang, Li-Jun; Liu, Jun-Qi; Liu, Shu-Man; Liu, Feng-Qi; Wang, Zhan-Guo
2018-02-02
In this work, quantum cascade lasers (QCLs) based on strain compensation combined with two-phonon resonance design are presented. Distributed feedback (DFB) laser emitting at ~ 4.76 μm was fabricated through a standard buried first-order grating and buried heterostructure (BH) processing. Stable single-mode emission is achieved under all injection currents and temperature conditions without any mode hop by the optimized antireflection (AR) coating on the front facet. The AR coating consists of a double layer dielectric of Al2O3 and Ge. For a 2-mm laser cavity, the maximum output power of the AR-coated DFB-QCL was more than 170 mW at 20 °C with a high wall-plug efficiency (WPE) of 4.7% in a continuous-wave (CW) mode.
Laser: a Tool for Optimization and Enhancement of Analytical Methods
Energy Technology Data Exchange (ETDEWEB)
Preisler, Jan [Iowa State Univ., Ames, IA (United States)
1997-01-01
In this work, we use lasers to extend the capabilities of laser desorption methods and to optimize a coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, a 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan, via absorption, plumes desorbed at atmospheric pressure. All absorbing species, including neutral molecules, are monitored. Interesting features are observed, e.g., differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot. Total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals a negative influence of particle spallation on the MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of an insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In the CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone, excited by a 488-nm Ar-ion laser, along the length of the capillary. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p
Optimization of NANOGrav's time allocation for maximum sensitivity to single sources
International Nuclear Information System (INIS)
Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel
2014-01-01
Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.
Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement
Directory of Open Access Journals (Sweden)
Byungwoon Park
2017-02-01
Full Text Available The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual-frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error from 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of low-cost SF receivers comparable to that of DF receivers.
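The underlying Hatch recursion is compact enough to sketch. Below is the classic carrier-smoothing filter; the paper's divergence-free variant would additionally correct the carrier phase for ionospheric divergence using SBAS data before smoothing, and choose the window width optimally. All signal parameters here are synthetic.

```python
import random

def hatch_filter(code, phase, window=100):
    """Carrier-smoothed pseudo-range via the classic Hatch filter.

    code and phase are per-epoch pseudo-range and carrier-phase series in
    metres; `window` is the smoothing window width N.  Each epoch blends
    the noisy code with the previous estimate propagated by the (quiet)
    carrier-phase delta.
    """
    smoothed = [code[0]]
    for k in range(1, len(code)):
        n = min(k + 1, window)
        predicted = smoothed[-1] + (phase[k] - phase[k - 1])
        smoothed.append(code[k] / n + predicted * (n - 1) / n)
    return smoothed

# Noisy code, quiet carrier: smoothing should cut the noise substantially
random.seed(1)
truth = [20000000.0 + 0.5 * k for k in range(500)]     # range grows 0.5 m/epoch
code  = [r + random.gauss(0.0, 3.0) for r in truth]    # ~3 m code noise
phase = [r + random.gauss(0.0, 0.003) for r in truth]  # ~3 mm carrier noise
sm = hatch_filter(code, phase, window=100)
```

With a window of N epochs the steady-state code noise drops roughly by a factor of sqrt(2N-1), which is the effect the paper's optimal smoothing constant tunes against ionospheric divergence.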
Directory of Open Access Journals (Sweden)
Rajeshkumar N. Vadgama
2015-12-01
Full Text Available Isopropyl myristate finds many applications in the food, cosmetic and pharmaceutical industries as an emollient, thickening agent, or lubricant. Using a homogeneous reaction phase, the non-specific lipase derived from Candida antarctica, marketed as Novozym 435, was determined to be most suitable for the enzymatic synthesis of isopropyl myristate. The high molar ratio of alcohol to acid creates a novel single-phase medium which overcomes mass-transfer effects and facilitates downstream processing. Various reaction parameters were optimized to obtain a high yield of isopropyl myristate. The effects of temperature, agitation speed, organic solvent and biocatalyst loading, as well as the batch operational stability of the enzyme, were systematically studied. A conversion of 87.65% was obtained when a molar ratio of isopropyl alcohol to myristic acid of 15:1 was used with 4% (w/w) catalyst loading and an agitation speed of 150 rpm at 60 °C. The enzyme also showed good batch operational stability under the optimized conditions.
Model of a single mode energy harvester and properties for optimal power generation
International Nuclear Information System (INIS)
Liao Yabin; Sodano, Henry A
2008-01-01
The process of acquiring the energy surrounding a system and converting it into usable electrical energy is termed power harvesting. In the last few years, the field of power harvesting has experienced significant growth due to the ever increasing desire to produce portable and wireless electronics with extended life. Current portable and wireless devices must be designed to include electrochemical batteries as the power source. The use of batteries can be troublesome due to their finite energy supply, which necessitates their periodic replacement. In the case of wireless sensors that are to be placed in remote locations, the sensor must be easily accessible or of a disposable nature to allow the device to function over extended periods of time. Energy scavenging devices are designed to capture the ambient energy surrounding the electronics and convert it into usable electrical energy. The concept of power harvesting works towards developing self-powered devices that do not require replaceable power supplies. The development of energy harvesting systems is greatly facilitated by an accurate model to assist in the design of the system. This paper will describe a theoretical model of a piezoelectric based energy harvesting system that is simple to apply yet provides an accurate prediction of the power generated around a single mode of vibration. Furthermore, this model will allow optimization of system parameters to be studied such that maximal performance can be achieved. Using this model an expression for the optimal resistance and a parameter describing the energy harvesting efficiency will be presented and evaluated through numerical simulations. The second part of this paper will present an experimental validation of the model and optimal parameters
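The idea of an optimal load resistance can be illustrated with a deliberately simplified model: treating the piezoelectric element as a sinusoidal current source in parallel with its clamped capacitance, the average power delivered to a resistive load is maximized at the impedance-matching value. This is a common textbook reduction, not the paper's coupled single-mode model, and all parameter values are illustrative.

```python
import numpy as np

# Simplified weakly-coupled model: the piezo layer acts as a sinusoidal
# current source i(t) in parallel with its clamped capacitance Cp,
# driving a resistive load R.  Illustrative parameter values:
I0 = 1e-3                   # source current amplitude, A
Cp = 50e-9                  # piezoelectric capacitance, F
omega = 2 * np.pi * 100.0   # excitation frequency, rad/s (a 100 Hz mode)

R = np.logspace(3, 7, 2000)                        # candidate loads, ohm
P = 0.5 * I0**2 * R / (1.0 + (omega * Cp * R)**2)  # average delivered power

R_opt_numeric = R[np.argmax(P)]        # optimum found by sweeping the load
R_opt_theory = 1.0 / (omega * Cp)      # impedance-matching optimum, ~31831 ohm
```

Sweeping the load and comparing against the closed-form optimum is exactly the kind of numerical check the paper performs for its richer model.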
Masquelier, Timothée
2017-06-29
Repeating spatiotemporal spike patterns exist and carry information. How this information is extracted by downstream neurons is unclear. Here we theoretically investigate to what extent a single cell could detect a given spike pattern and what the optimal parameters to do so are, in particular the membrane time constant τ. Using a leaky integrate-and-fire (LIF) neuron with homogeneous Poisson input, we computed this optimum analytically. We found that a relatively small τ (at most a few tens of ms) is usually optimal, even when the pattern is much longer. This is somewhat counter-intuitive as the resulting detector ignores most of the pattern, due to its fast memory decay. Next, we wondered if spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimum. We simulated a LIF equipped with additive STDP, and repeatedly exposed it to a given input spike pattern. As in previous studies, the LIF progressively became selective to the repeating pattern with no supervision, even when the pattern was embedded in Poisson activity. Here we show that, using certain STDP parameters, the resulting pattern detector is optimal. These mechanisms may explain how humans learn repeating sensory sequences. Long sequences could be recognized thanks to coincidence detectors working at a much shorter timescale. This is consistent with the fact that recognition is still possible if a sound sequence is compressed, played backward, or scrambled using 10-ms bins. Coincidence detection is a simple yet powerful mechanism, which could be the main function of neurons in the brain. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
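The coincidence-detection mechanism can be sketched with a discrete-time LIF neuron: a brief volley of coincident input spikes drives the membrane over threshold, while a short τ lets the Poisson background decay away quickly. The parameter values below are illustrative, not the analytically derived optimum from the paper.

```python
import numpy as np

def lif_response(spike_trains, weights, tau=0.01, dt=0.001, threshold=1.0):
    """Leaky integrate-and-fire membrane driven by binned input spikes.

    spike_trains: (n_afferents, n_bins) array of spike counts per time bin;
    tau is the membrane time constant in seconds.  Returns the bins at
    which the neuron fires (membrane resets to 0 after each output spike).
    """
    v, out = 0.0, []
    drive = weights @ spike_trains          # summed weighted input per bin
    for t in range(spike_trains.shape[1]):
        v = v * np.exp(-dt / tau) + drive[t]
        if v >= threshold:
            out.append(t)
            v = 0.0
    return out

rng = np.random.default_rng(42)
background = rng.poisson(0.02, (50, 1000))   # homogeneous Poisson background
pattern = background.copy()
pattern[:, 500] += 1                         # 50 coincident spikes at bin 500
w = np.full(50, 0.05)

spikes = lif_response(pattern, w)            # detector fires on the coincidence
```

With τ = 10 ms the membrane forgets the background within a few bins, so only the coincident volley at bin 500 reliably reaches threshold, which is the intuition behind the paper's "small τ is optimal" result.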
Directory of Open Access Journals (Sweden)
Zhengnan Li
2016-01-01
Full Text Available To solve the multiobjective optimization problem of hypersonic glider vehicle trajectory design subject to complex constraints, this paper proposes a multiobjective trajectory optimization method that combines the boundary intersection method and the pseudospectral method. The multiobjective trajectory optimization problem (MTOP) is established based on an analysis of the features of hypersonic glider vehicle trajectories. The MTOP is translated into a set of general optimization subproblems using the boundary intersection and pseudospectral methods, and the subproblems are solved by a nonlinear programming algorithm. In this method, each solved subproblem provides the initial guess for the next, which shortens the time required to solve the entire multiobjective trajectory optimization problem. The maximal-range and minimal-peak-heating problem is solved by the proposed method. The numerical results demonstrate that the proposed method obtains the Pareto front of optimal trajectories, which can serve as a reference for the trajectory design of hypersonic glider vehicles.
DEFF Research Database (Denmark)
Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen
2010-01-01
Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient...... and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case...... the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses Lagrangian bound coupling with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising....
Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes
Directory of Open Access Journals (Sweden)
Stoyan Stoyanov
2009-08-01
Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for finding the global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented. Methods based on sensitivity theory, and stochastic and mixed strategies for optimization with partial knowledge of the kinetic, technical and economic parameters of the optimization problem, are discussed. Several approaches to multi-criteria optimization tasks are analyzed. Problems concerning the optimal control of biotechnological systems are also discussed.
Models and Methods for Structural Topology Optimization with Discrete Design Variables
DEFF Research Database (Denmark)
Stolpe, Mathias
Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal...
Determination of heterogeneous medium parameters by single fuel element method
International Nuclear Information System (INIS)
Veloso, M.A.F.
1985-01-01
The neutron pulse propagation technique was employed to study a heterogeneous system consisting of a single fuel element placed at the symmetry axis of a large cylindrical D2O tank. The response of the system to the pulse propagation technique is related to the inverse complex relaxation length of the neutron waves, also known as the system dispersion law ρ(ω). Experimental values of ρ(ω) were compared with those derived from Fermi age-diffusion theory. The main purpose of the experiment was to obtain the Feinberg-Galanin thermal constant (γ), which is the logarithmic derivative of the neutron flux at the fuel-moderator interface and, as such, a main input datum for heterogeneous reactor theory calculations. The thermal constant γ was determined as the number giving the best agreement between the theoretical and experimental values of ρ(ω). The simultaneous determination of two among the four parameters η, ρ, τ and Ls is possible through the intersection of the dispersion laws of the pure-moderator system and the fuel-moderator system. The parameters τ and η were determined by this method. It was shown that the thermal constant γ and the product ηρ can be computed from the real and imaginary parts of the fuel-moderator dispersion law. The results of this evaluation scheme show an unstable behavior of γ as a function of frequency, a result not foreseen by the theoretical model. (Author) [pt
Optimal Control of Micro Grid Operation Mode Seamless Switching Based on the Radau Collocation Method
Chen, Xiaomin; Wang, Gang
2017-05-01
The seamless switching process between micro grid operation modes directly affects the safety and stability of operation. For the switching process from island mode to grid-connected mode, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau collocation method to discretize the model and Newton iteration to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.
Genetic-evolution-based optimization methods for engineering design
Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.
1990-01-01
This paper presents the applicability of a biological model, based on genetic evolution, to engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. The design of a two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
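The three genetic operators named above (reproduction, crossover, mutation) can be sketched on a toy binary design problem. Elitism is added here for stability, and all settings are illustrative; the paper's structural objective functions are replaced by a stand-in.

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=40, generations=100,
                     p_mut=0.05, seed=0):
    """Minimal GA: fitness-proportional reproduction, one-point crossover,
    and bit-flip mutation, maximizing `fitness` over binary design vectors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        new_pop = [max(pop, key=fitness)]                  # elitism: keep the best
        while len(new_pop) < pop_size:
            p1, p2 = rng.choices(pop, weights=scores, k=2)  # reproduction
            cut = rng.randrange(1, n_genes)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g   # bit-flip mutation
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy discrete design problem: switch on as many members as possible
# (a stand-in for e.g. actuator placement); the optimum is the all-ones vector.
best = genetic_optimize(lambda ind: sum(ind), n_genes=12)
```

For the discrete actuator-placement problem in the paper, the fitness stand-in would be replaced by a structural analysis returning the dissipated energy.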
Hermans, Syrie M; Buckley, Hannah L; Lear, Gavin
2018-02-02
Using environmental DNA (eDNA) to assess the distribution of micro- and macroorganisms is becoming increasingly popular. However, the comparability and reliability of these studies is not well understood as we lack evidence on how different DNA extraction methods affect the detection of different organisms, and how this varies among sample types. Our aim was to quantify biases associated with six DNA extraction methods and identify one which is optimal for eDNA research targeting multiple organisms and sample types. We assessed each methods' ability to simultaneously extract bacterial, fungal, plant, animal and fish DNA from soil, leaf litter, stream water, stream sediment, stream biofilm and kick-net samples, as well as from mock communities. Method choice affected alpha-diversity for several combinations of taxon and sample type, with the majority of the differences occurring in the bacterial communities. While a single method performed optimally for the extraction of DNA from bacterial, fungal and plant mock communities, different methods performed best for invertebrate and fish mock communities. The consistency of methods, as measured by the similarity of community compositions resulting from replicate extractions, varied and was lowest for the animal communities. Collectively, these data provide the first comprehensive assessment of the biases associated with DNA extraction for both different sample types and taxa types, allowing us to identify DNeasy PowerSoil as a universal DNA extraction method. The adoption of standardized approaches for eDNA extraction will ensure that results can be more reliably compared, and biases quantified, thereby advancing eDNA as an ecological research tool. © 2018 John Wiley & Sons Ltd.
Iron Pole Shape Optimization of IPM Motors Using an Integrated Method
Directory of Open Access Journals (Sweden)
JABBARI, A.
2010-02-01
Full Text Available An iron pole shape optimization method to reduce cogging torque in interior permanent magnet (IPM) motors is developed using the reduced basis technique coupled with finite element and design-of-experiments methods. The objective function is defined as the minimum cogging torque. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. The method is demonstrated on the rotor pole shape optimization of a 4-pole/24-slot IPM motor.
Directory of Open Access Journals (Sweden)
Yuan Gao
2016-04-01
Full Text Available With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity-constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in the joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation under a capacity-constrained backhaul in uplink cooperative wireless networks, where two base stations (BSs) equipped with single antennas serve multiple single-antenna users via a multi-carrier transmission mode. We propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed-integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among different pairs (subcarriers for users). A novel, robust and efficient centralized algorithm based on an alternating optimization strategy and perfect mapping is proposed. Simulations show that the proposed method improves the system capacity significantly under the backhaul resource constraint compared with blind alternatives.
Sutrisno; Widowati; Heru Tjahjana, R.
2017-01-01
In this paper, we propose a mathematical model in the form of a dynamic/multi-stage optimization to solve an integrated supplier selection and tracking control problem for a single-product inventory system with product discounts. The product discount is stated as a piecewise-linear function. We use dynamic programming to determine, for each time period, the optimal supplier and the optimal product volume to be purchased from that supplier, so that the inventory level tracks a reference trajectory given by the decision maker at minimal total cost. We present a numerical experiment to evaluate the proposed model. In the results, the optimal supplier is determined for each time period and the inventory level follows the given reference well.
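A toy version of the model can be sketched as follows. Because ordering exactly up to the reference level decouples the stages, each period here reduces to choosing the cheapest supplier under its piecewise-linear discount; the paper's dynamic-programming treatment covers the general coupled case. All supplier names, prices and demands below are illustrative.

```python
def optimal_supplier_plan(demand, suppliers, horizon):
    """Per-period supplier choice for a single-product inventory system.

    `demand` is the reference trajectory to track; `suppliers` maps a name
    to (base_price, discount_qty, discount_rate), a piecewise-linear
    volume discount.  Returns the (supplier, order) plan and total cost.
    """
    def purchase_cost(base, discount_qty, discount_rate, qty):
        # piecewise-linear discount: units above the breakpoint are cheaper
        if qty <= discount_qty:
            return base * qty
        return base * discount_qty + base * (1 - discount_rate) * (qty - discount_qty)

    plan, inventory, total_cost = [], 0, 0.0
    for t in range(horizon):
        order = max(demand[t] - inventory, 0)          # track the reference level
        name, cost = min(
            ((n, purchase_cost(*s, order)) for n, s in suppliers.items()),
            key=lambda nc: nc[1],
        )
        plan.append((name, order))
        total_cost += cost
        inventory = inventory + order - demand[t]      # consume demand this period
    return plan, total_cost

suppliers = {"A": (10.0, 50, 0.2), "B": (9.0, 80, 0.1)}
plan, cost = optimal_supplier_plan([60, 90, 40], suppliers, horizon=3)
```

With coupled states (carry-over inventory, setup costs), the per-period minimization would be replaced by a backward recursion over inventory levels, which is the dynamic program the paper solves.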
Optimizing process and equipment efficiency using integrated methods
D'Elia, Michael J.; Alfonso, Ted F.
1996-09-01
The semiconductor manufacturing industry is continually riding the edge of technology as it tries to push toward higher design limits. Mature fabs must cut operating costs while increasing productivity to remain profitable and cannot justify large capital expenditures to improve productivity. Thus, they must push current tool production capabilities to cut manufacturing costs and remain viable. Working to continuously improve mature production methods requires innovation. Furthermore, testing and successful implementation of these ideas into modern production environments require both supporting technical data and commitment from those working with the process daily. At AMD, natural work groups (NWGs) composed of operators, technicians, engineers, and supervisors collaborate to foster innovative thinking and secure commitment. Recently, an AMD NWG improved equipment cycle time on the Genus tungsten silicide (WSi) deposition system. The team used total productive manufacturing (TPM) to identify areas for process improvement. Improved in-line equipment monitoring was achieved by constructing a real-time overall equipment effectiveness (OEE) calculator which tracked equipment down, idle, qualification, and production times. In-line monitoring results indicated that qualification time associated with slow Inspex turn-around time and machine downtime associated with manual cleans contributed greatly to reduced availability. Qualification time was reduced by 75% by implementing a new Inspex monitor pre-staging technique. Downtime associated with manual cleans was reduced by implementing an in-situ plasma etch back to extend the time between manual cleans. A designed experiment was used to optimize the process. Time between 18 hour manual cleans has been improved from every 250 to every 1500 cycles. Moreover, defect density realized a 3X improvement. Overall, the team achieved a 35% increase in tool availability. This paper details the above strategies and accomplishments.
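The OEE bookkeeping described above can be sketched directly from the standard definition (availability × performance × quality). The tool-state categories mirror those the team tracked (down, idle, qualification, production); the numbers below are invented for illustration.

```python
def oee(down_h, idle_h, qual_h, prod_h, ideal_rate, units_out, good_units):
    """Overall equipment effectiveness from logged tool-state hours.

    availability = production time / total tracked time
    performance  = actual throughput / ideal throughput in production time
    quality      = good units / total units produced
    """
    total_h = down_h + idle_h + qual_h + prod_h
    availability = prod_h / total_h
    performance = units_out / (ideal_rate * prod_h)
    quality = good_units / units_out
    return availability * performance * quality

# One week (168 h) of logged hours on a hypothetical deposition tool
score = oee(down_h=20, idle_h=30, qual_h=10, prod_h=108,
            ideal_rate=10, units_out=972, good_units=950)
print(round(score, 3))   # → 0.565
```

Tracking each factor separately is what lets a team see, as in the paper, whether lost availability comes from qualification time or from downtime.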
International Nuclear Information System (INIS)
Wu, Horng-Wen; Wu, Zhan-Yi
2012-01-01
This study applies the L9 orthogonal array of the Taguchi method to find the best hydrogen injection timing, hydrogen-energy-share ratio, and percentage of exhaust gas recirculation (EGR) in a single DI diesel engine. The injection timing is controlled by an electronic control unit (ECU) and the quantity of hydrogen is controlled by a hydrogen flow controller. For various engine loads, the authors determine the optimal operating factors for low BSFC (brake specific fuel consumption), NOx, and smoke. Moreover, the net heat-release rate, involving a variable specific heat ratio, is computed from the experimental in-cylinder pressure. In-cylinder pressure, net heat-release rate, A/F ratios, COV (coefficient of variation) of IMEP (indicated mean effective pressure), NOx, and smoke under the optimum condition factors are compared with those of the original baseline diesel engine. The predictions made using Taguchi's parameter design technique agreed with the confirmation results at a 95% confidence interval. At 45% and 60% loads, the optimum factor combination, compared with the original baseline diesel engine, reduces BSFC by 14.52%, NOx by 60.5%, and smoke by 42.28%, and improves combustion performance such as peak in-cylinder pressure and net heat-release rate. Adding hydrogen and EGR does not generate unstable combustion, owing to the low COV of IMEP. -- Highlights: ► We use a hydrogen injector controlled by an ECU and a cooled EGR system in a diesel engine. ► Optimal factors for low BSFC, NOx and smoke are determined by the Taguchi method. ► The COV of IMEP is lower than 10%, so it will not cause unstable combustion. ► We improve A/F ratio, in-cylinder pressure, and heat release at the optimized engine. ► The decrease is 14.5% for BSFC, 60.5% for NOx, and 42.28% for smoke at the optimized engine.
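The Taguchi analysis pattern used here (assign factors to an orthogonal array, convert responses to signal-to-noise ratios, pick the level with the best mean S/N per factor) can be sketched generically. The L9 layout below is the standard three-level array; the response values are invented, not the engine data from the paper.

```python
import numpy as np

# Standard L9 orthogonal array: 9 runs, 3 factors at 3 levels (0-indexed)
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# One response per run (e.g. a BSFC-like quantity; illustrative numbers)
y = np.array([5.1, 4.8, 5.5, 3.9, 4.2, 4.0, 6.1, 6.3, 5.8])

# Smaller-is-better signal-to-noise ratio: S/N = -10 log10(y^2)
sn = -10 * np.log10(y**2)

# Main-effects analysis: average S/N at each level of each factor,
# then pick the level with the largest mean S/N
best_levels = []
for f in range(L9.shape[1]):
    level_sn = [sn[L9[:, f] == lev].mean() for lev in range(3)]
    best_levels.append(int(np.argmax(level_sn)))

print(best_levels)   # → [1, 0, 1]
```

The orthogonality of the array is what lets nine runs estimate the main effect of each factor independently, instead of the 27 runs a full factorial would need.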
Zhang, Songchuan; Xia, Youshen
2018-01-01
Much research has been devoted to complex-variable optimization problems because of their engineering applications. However, complex-valued optimization methods for solving such problems remain an active research area. This paper proposes two efficient complex-valued optimization methods for solving constrained nonlinear optimization problems of real functions in complex variables. One solves the complex-valued nonlinear programming problem with linear equality constraints; the other solves the complex-valued nonlinear programming problem with both linear equality constraints and an -norm constraint. Theoretically, we prove the global convergence of the two proposed complex-valued optimization algorithms under mild conditions. The two algorithms solve the complex-valued optimization problem entirely in the complex domain and significantly extend existing complex-valued optimization algorithms. Numerical results further show that the two proposed algorithms are faster than several conventional real-valued optimization algorithms.
International Nuclear Information System (INIS)
Sandler, K.L.; Markham, L.W.; Mah, M.L.; Byrum, E.P.; Williams, J.R.
2014-01-01
Aim: To identify adult patients with single-ventricle congenital heart disease and Fontan-procedure palliation who have been misdiagnosed with, or incompletely evaluated for, pulmonary embolism. Additionally, this study was designed to demonstrate that simultaneous dual injection of contrast medium into an upper and a lower extremity vein is superior to single-injection protocols for CT angiography (CTA) of the chest in this population. Materials and methods: Patients included in the study were retrospectively selected from the Adult Congenital Heart Disease (ACHD) database. Search criteria included a history of Fontan palliation and an available chest CT examination. Patients were evaluated for (1) type of congenital heart disease and prior operations; (2) indication for the initial CT evaluation; (3) route of contrast medium administration for the initial CT examination and the resulting diagnosis; (4) whether or not anticoagulation therapy was initiated; and (5) final diagnosis and treatment plan. Results: The query of the ACHD database identified 28 patients with Fontan palliation (superior and inferior venae cavae anastomosed to the pulmonary arteries). Of these, 19 patients with Fontan physiology underwent CTA of the pulmonary circulation, and 17 had suboptimal imaging studies. Unfortunately, seven of these 17 patients (41%) were started on anticoagulation therapy owing to a diagnosis of pulmonary embolism that was later excluded. Conclusion: Patients with single-ventricle/Fontan physiology are at risk of thromboembolic disease. Therefore, studies evaluating their complex anatomy must be performed with the optimal imaging protocol to ensure diagnostic accuracy, which is best achieved with dual injection of an upper and a lower extremity central vein. - Highlights: • The adult congenital heart disease population is growing. • Many of these patients have single ventricle/Fontan physiology. • Patients with Fontan physiology are at increased risk for
Optimized Method for Untargeted Metabolomics Analysis of MDA-MB-231 Breast Cancer Cells
Directory of Open Access Journals (Sweden)
Amanda L. Peterson
2016-09-01
Full Text Available Cancer cells often have dysregulated metabolism, which is largely characterized by the Warburg effect—an increase in glycolytic activity at the expense of oxidative phosphorylation—and increased glutamine utilization. Modern metabolomics tools offer an efficient means to investigate metabolism in cancer cells. Currently, a number of protocols have been described for harvesting adherent cells for metabolomics analysis, but the techniques vary greatly and they lack specificity to particular cancer cell lines with diverse metabolic and structural features. Here we present an optimized method for untargeted metabolomics characterization of MDA-MB-231 triple negative breast cancer cells, which are commonly used to study metastatic breast cancer. We found that an approach that extracted all metabolites in a single step within the culture dish optimally detected both polar and non-polar metabolite classes with higher relative abundance than methods that involved removal of cells from the dish. We show that this method is highly suited to diverse applications, including the characterization of central metabolic flux by stable isotope labelling and differential analysis of cells subjected to specific pharmacological interventions.
Lu, Zheng; Chen, Xiaoyi; Zhou, Ying
2018-04-01
A particle tuned mass damper (PTMD) is a creative combination of the widely used tuned mass damper (TMD) and the efficient particle damper (PD) in the vibration-control area. The performance of a one-storey steel frame with an attached PTMD is investigated through free-vibration and shaking-table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration-control effect is investigated, and it is shown that the attenuation level depends significantly on the filling ratio of particles. Based on the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
Single-photon source engineering using a Modal Method
DEFF Research Database (Denmark)
Gregersen, Niels
Solid-state sources of single indistinguishable photons are of great interest for quantum information applications. The semiconductor quantum dot embedded in a host material represents an attractive platform to realize such a single-photon source (SPS). A near-unity efficiency, defined as the num...... nanowire SPSs...
A new method to optimize natural convection heat sinks
Lampio, K.; Karvinen, R.
2017-08-01
The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates flow. Our model utilizes analytical results of forced flow and convection, and only conduction in a solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is proved by comparing the results of our model with some simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it cuts down on calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposite effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it as a basis for optimizing existing heat sinks.
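The paper presents the principles of the PSO algorithm; a minimal single-objective sketch is below. This is not the authors' implementation: the inertia and acceleration coefficients are common textbook values, and the sphere function stands in for the heat-sink objective:

```python
import random

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal particle swarm optimizer (illustrative sketch):
    minimizes f over [-bound, bound]^dim."""
    random.seed(0)
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]            # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sanity check on the sphere function (minimum 0 at the origin):
best, val = pso(lambda x: sum(v * v for v in x), dim=3)
print(val < 1e-2)  # converges close to zero
```

For the multi-objective heat-sink problem (temperature, size, mass), the scalar `f` would be replaced by a Pareto-ranking or weighted-sum evaluation of the competing objectives.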
Single molecule force spectroscopy: methods and applications in biology
International Nuclear Information System (INIS)
Shen Yi; Hu Jun
2012-01-01
Single-molecule measurements have transformed our view of biomolecules. Owing to the ability to monitor the activity of individual molecules, we now see them as uniquely structured, fluctuating molecules that stochastically transition between what are frequently many substates, as no two molecules follow precisely the same trajectory. Indeed, it is this discovery of critical yet short-lived substates, often missed in ensemble measurements, that has perhaps contributed most to the better understanding of biomolecular functioning gained from single-molecule experiments. In this paper, we review the three major techniques of single-molecule force spectroscopy and their applications, especially in biology. The single-molecule study of biotin-streptavidin interactions is introduced as a successful example. The problems and prospects of single-molecule force spectroscopy are also discussed. (authors)
Shi, Yuhu; Zeng, Weiming; Wang, Nizhuan; Zhao, Le
2017-05-01
Currently, the problem of incorporating a priori information into an independent component analysis (ICA) model is often solved under the framework of constrained ICA, which uses the priori information as a reference signal to form a constraint condition and then introduces it into classical ICA. However, it is difficult to pre-determine a suitable threshold parameter to constrain the closeness between the output signal and the reference signal in the constraint condition. In this paper, a new ICA model with a priori information as a reference signal is established in the framework of multi-objective optimization, where an adaptive weighted-summation method is introduced to solve this multi-objective optimization problem with a new fixed-point learning algorithm. Experimental results on fMRI hybrid data and task-related data at the single-subject level demonstrate that the proposed method has better overall performance in recovering both the spatial source and the time course. At the same time, compared with traditional ICA-with-reference methods and the classical ICA method, experimental results on resting-state fMRI data at the group level show that the group independent component calculated by the proposed method has a higher correlation (by T-test) with the corresponding independent component of each subject. The proposed method does not require selecting a threshold parameter to constrain the closeness between the output signal and the reference signal. In addition, the performance of functional-connectivity detection improves greatly in comparison with traditional methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Hezarjaribi, Mehrnoosh; Ardestani, Fatemeh; Ghorbani, Hamid Reza
2016-08-01
Saccharomyces cerevisiae PTCC5269 growth was evaluated to specify an optimum culture medium for the highest protein production. The experiment design used a fraction of the full factorial methodology, and the signal-to-noise ratio was used for results analysis. A maximum cell count of 8.84 log(CFU/mL) was obtained using an optimized culture composed of 0.3, 0.15, 1, and 50 g L(-1) of ammonium sulfate, iron sulfate, glycine, and glucose, respectively, at 300 rpm and 35 °C. Glycine concentration (39.32% contribution) and glucose concentration (36.15% contribution) were determined to be the most effective factors for biomass production, while Saccharomyces cerevisiae growth showed the least dependence on ammonium sulfate (5.2% contribution) and iron sulfate (19.28% contribution). The strongest interaction was found between ammonium sulfate and iron sulfate concentrations, with an interaction severity index of 50.71%, while the weakest, between glycine and glucose concentrations, was 8.12%. An acceptable consistency of 84.26% between the theoretical optimum cell number determined by the software, 8.91 log(CFU/mL), and the experimentally measured value at the optimal condition confirms the suitability of the applied method. The high protein content of 44.6% in the optimum culture suggests that Saccharomyces cerevisiae is a good commercial candidate for single-cell protein production.
Development of Combinatorial Methods for Alloy Design and Optimization
International Nuclear Information System (INIS)
Pharr, George M.; George, Easo P.; Santella, Michael L
2005-01-01
The primary goal of this research was to develop a comprehensive methodology for designing and optimizing metallic alloys by combinatorial principles. Because conventional techniques for alloy preparation are unavoidably restrictive in the range of alloy composition that can be examined, combinatorial methods promise to significantly reduce the time, energy, and expense needed for alloy design. Combinatorial methods can be developed not only to optimize existing alloys, but to explore and develop new ones as well. The scientific approach involved fabricating an alloy specimen with a continuous distribution of binary and ternary alloy compositions across its surface--an 'alloy library'--and then using spatially resolved probing techniques to characterize its structure, composition, and relevant properties. The three specific objectives of the project were: (1) to devise means by which simple test specimens with a library of alloy compositions spanning the range of interest can be produced; (2) to assess how well the properties of the combinatorial specimen reproduce those of conventionally processed alloys; and (3) to devise screening tools that can be used to rapidly assess the important properties of the alloys. As proof of principle, the methodology was applied to the Fe-Ni-Cr ternary alloy system that constitutes many commercially important materials such as stainless steels and the H-series and C-series heat- and corrosion-resistant casting alloys. Three different techniques were developed for making alloy libraries: (1) vapor deposition of discrete thin films on an appropriate substrate, followed by alloying them together by solid-state diffusion; (2) co-deposition of the alloying elements from three separate magnetron sputtering sources onto an inert substrate; and (3) localized melting of thin films with a focused electron-beam welding system. Each of the techniques was found to have its own advantages and disadvantages. A new and very powerful technique for
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2014-01-01
Full Text Available We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, inspired by nature, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve global optimization challenges shows that applying a single such algorithm is not always effective. Great attention is therefore now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms unite different algorithms, or identical algorithms with different values of their free parameters; the efficiency of one algorithm can thus compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and on a dimensional optimization problem for a truss structure. We state the global optimization problem, review the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face involves a semi-analytical function that is not suitable for conventional inversion schemes because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions, consisting of either the minimum or the maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have been used for many hydrogeologic and geophysical engineering problems. These methods exhibit difficulties such as dependence on the initial model, evaluation of the partial derivatives required when linearizing the model, and trapping at local optima. Particle swarm optimization, inspired by the social behaviour of bird flocks, has recently become a prominent global optimization method and appears to be a reliable and powerful algorithm for complex engineering applications. Because PSO does not depend on an initial model and is a derivative-free stochastic process, it is capable of searching all possible solutions in the model space, around either local or global optimum points.
Combined optimal-pathlengths method for near-infrared spectroscopy analysis
International Nuclear Information System (INIS)
Liu Rong; Xu Kexin; Lu Yanhui; Sun Huili
2004-01-01
Near-infrared (NIR) spectroscopy is a rapid, reagentless and nondestructive analytical technique that is increasingly employed for quantitative applications in chemistry, pharmaceutics and the food industry, and for the optical analysis of biological tissue. The performance of NIR technology depends greatly on the abilities to control and acquire data from the instrument and to calibrate and analyse the data. Optical pathlength is a key parameter of the NIR instrument, which has been thoroughly discussed in univariate quantitative analysis in the presence of photometric errors. Although multiple wavelengths can provide more chemical information, it is difficult to determine a single pathlength that is suitable for every wavelength region. A theoretical investigation of a selection procedure for multiple pathlengths, called the combined optimal-pathlengths (COP) method, is presented in this paper, and an extensive comparison with the single-pathlength method is performed on simulated and experimental NIR spectral data sets. The results obtained show that the COP method can greatly improve prediction accuracy in NIR spectroscopic quantitative analysis.
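As background for why a single pathlength cannot suit every wavelength, the classical univariate result (assumed here as standard background, not taken from this paper) is that with a constant photometric error the relative concentration error is minimized near transmittance T = 1/e, i.e. absorbance A = log10(e) ≈ 0.434; Beer's law then gives a different optimal pathlength at each wavelength:

```python
import math

def optimal_pathlength(eps, conc):
    """Pathlength (cm) putting the sample at the classical optimal
    absorbance A = log10(e) ~ 0.434, given molar absorptivity eps
    (L/mol/cm) and concentration conc (mol/L), via Beer's law
    A = eps * conc * b."""
    return math.log10(math.e) / (eps * conc)

# Hypothetical absorptivity and concentration: since eps varies with
# wavelength, so does the optimal pathlength -- the motivation for a
# combined multi-pathlength (COP-style) scheme.
print(round(optimal_pathlength(eps=120.0, conc=0.001), 2))  # -> 3.62
```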
Directory of Open Access Journals (Sweden)
Fernández, J.
2001-12-01
Full Text Available The maintenance of genetic diversity is, from a genetic point of view, a key objective of conservation programmes. The selection of individuals contributing offspring and the choice of the mating scheme are the steps through which managers can control genetic diversity, especially in 'ex situ' programmes. Previous studies have shown that the optimal management strategy is to seek the parents' contributions that yield minimum group coancestry (the overall probability of identity by descent in the population) and then to arrange mating couples following minimum pairwise coancestry. However, physiological constraints make it necessary to account for mating restrictions when deciding the contributions, and these should therefore be implemented in a single step along with the mating plan. In the present paper, a single-step method is proposed to optimise the management of a conservation programme when restrictions on the mating scheme exist. The performance of the method is tested by computer simulation. The strategy turns out to be as efficient as the two-step method, regarding both the genetic diversity preserved and the fitness of the population.
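The contribution-optimization step described above can be sketched as a constrained quadratic program: minimize group coancestry c'Ac/2 over the contribution vector c, with contributions non-negative and summing to one. The 4×4 coancestry matrix is a hypothetical example, and SciPy's SLSQP solver stands in for whatever optimizer the authors used:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical coancestry matrix A for 4 candidates: two unrelated
# full-sib pairs (self-coancestry 0.5, full sibs 0.25).
A = np.array([[0.50, 0.25, 0.00, 0.00],
              [0.25, 0.50, 0.00, 0.00],
              [0.00, 0.00, 0.50, 0.25],
              [0.00, 0.00, 0.25, 0.50]])

# Minimize group coancestry c'Ac/2 subject to c >= 0 and sum(c) = 1.
res = minimize(lambda c: 0.5 * c @ A @ c,
               x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4,
               constraints=[{"type": "eq", "fun": lambda c: c.sum() - 1.0}],
               method="SLSQP")
print(np.round(res.x, 3))  # equal contributions are optimal here by symmetry
```

Mating restrictions would enter this formulation as extra constraints on c, which is why the paper argues for solving contributions and matings in one step.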
Mass Spectrometric Method for Analyzing Metabolites in Yeast with Single Cell Sensitivity
Amantonico, Andrea; Oh, Joo Yeon; Sobek, Jens; Heinemann, Matthias; Zenobi, Renato
2008-01-01
Getting a look-in: An optimized MALDI-MS procedure has been developed to detect endogenous primary metabolites directly in the cell extract. A detection limit corresponding to metabolites from less than a single cell has been attained, opening the door to single-cell metabolomics by mass
DEFF Research Database (Denmark)
Justesen, Kristian Kjær; Andreasen, Søren Juhl
2015-01-01
In this work a method for choosing the optimal reformer temperature for a reformed methanol fuel cell system is presented based on a case study of a H3 350 module produced by Serenergy A/S. The method is based on ANFIS models of the dependence of the reformer output gas composition on the reforme...
Optimal fits of diffusion constants from single-time data points of Brownian trajectories.
Boyer, Denis; Dean, David S; Mejía-Monasterio, Carlos; Oshanin, Gleb
2012-12-01
Experimental methods based on single particle tracking (SPT) are being increasingly employed in the physical and biological sciences, where nanoscale objects are visualized with high temporal and spatial resolution. SPT can probe interactions between a particle and its environment but the price to be paid is the absence of ensemble averaging and a consequent lack of statistics. Here we address the benchmark question of how to accurately extract the diffusion constant of one single Brownian trajectory. We analyze a class of estimators based on weighted functionals of the square displacement. For a certain choice of the weight function these functionals provide the true ensemble averaged diffusion coefficient, with a precision that increases with the trajectory resolution.
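A simple estimator of this kind can be sketched as follows. It is the plain mean-squared-displacement estimate for a 1D trajectory, not the optimally weighted functional derived in the paper, and all numbers are synthetic:

```python
import random

def estimate_D(traj, dt):
    """Simple diffusion-constant estimate from one 1D trajectory:
    D = <(x_{i+1} - x_i)^2> / (2 * dt).
    (A plain stand-in for the weighted square-displacement
    functionals analyzed in the paper.)"""
    steps = [b - a for a, b in zip(traj, traj[1:])]
    return sum(s * s for s in steps) / (2.0 * dt * len(steps))

# Simulate one Brownian trajectory with known D = 1.0:
random.seed(1)
D_true, dt, n = 1.0, 0.01, 100_000
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, (2.0 * D_true * dt) ** 0.5))
print(abs(estimate_D(x, dt) - D_true) < 0.05)  # close to the true value
```

The scatter of this estimator across independent trajectories is exactly the single-trajectory statistics problem the paper addresses with its weighted functionals.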
Energy Technology Data Exchange (ETDEWEB)
Ohana, M., E-mail: mickael.ohana@gmail.com [iCube Laboratory, Université de Strasbourg/CNRS, UMR 7357, 67400 Illkirch (France); Service de Radiologie B, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg, 1 place de l’hôpital, 67000 Strasbourg (France); Labani, A., E-mail: aissam.labani@chru-strasbourg.fr [Service de Radiologie B, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg, 1 place de l’hôpital, 67000 Strasbourg (France); Severac, F., E-mail: francois.severac@chru-strasbourg.fr [Département de Biostatistiques et d’Informatique Médicale, Hôpital Civil – Hôpitaux Universitaires de Strasbourg,1 place de l’hôpital, 67000 Strasbourg (France); Jeung, M.Y., E-mail: Mi-Young.Jeung@chru-strasbourg.fr [Service de Radiologie B, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg, 1 place de l’hôpital, 67000 Strasbourg (France); Gaertner, S., E-mail: Sebastien.Gaertner@chru-strasbourg.fr [Service de Médecine Vasculaire, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg,1 place de l’hôpital, 67000 Strasbourg (France); and others
2017-03-15
Highlights: • Lung parenchyma aspect varies with the monochromatic energy level in spectral CT. • Optimal diagnostic and image quality is obtained at 50–55 keV. • Mediastinum and parenchyma could be read at the same monochromatic energy level. - Abstract: Objective: To determine the optimal monochromatic energy level for lung parenchyma analysis in spectral CT. Methods: All 50 examinations (58% men, 64.8 ± 16 years) from an IRB-approved prospective study on single-source dual-energy chest CT were retrospectively included and analyzed. Monochromatic images in the lung window, reconstructed every 5 keV from 40 to 140 keV, were independently assessed by two chest radiologists. Based on the overall image quality and the depiction/conspicuity of parenchymal lesions, each reader had to designate, for every patient, the keV level providing the best diagnostic and image quality. Results: 72% of the examinations exhibited parenchymal lesions. Reader 1 picked the 55 keV monochromatic reconstruction in 52% of cases, 50 keV in 30% and 60 keV in 18%. Reader 2 chose 50 keV in 52% of cases, 55 keV in 40%, 60 keV in 6% and 40 keV in 2%. The 50 and 55 keV levels were chosen by at least one reader in 64% and 76% of all patients, respectively. Merging 50 and 55 keV into one category results in an optimal setting selected by reader 1 in 82% of patients and by reader 2 in 92%, with 74% concomitant agreement. Conclusion: The best image quality for lung parenchyma in spectral CT is obtained with the 50–55 keV monochromatic reconstructions.
Optimization of Classical Hydraulic Engine Mounts Based on RMS Method
Directory of Open Access Journals (Sweden)
J. Christopherson
2005-01-01
Full Text Available Based on RMS averaging of the frequency response functions of the absolute acceleration and relative displacement transmissibility, optimal parameters describing the hydraulic engine mount are determined to define the internal mount geometry. More specifically, it is shown that a line of minima exists that defines a relationship between the absolute acceleration and relative displacement transmissibility of a sprung mass using a hydraulic mount as a means of suspension. This line of minima is used to determine several optimal systems, developed on the basis of different clearance requirements (hence different relative displacement requirements), and to compare them by means of their respective acceleration and displacement transmissibility functions. In addition, the transient response of the mount to a step input is investigated to show the effects of the optimization on the time-domain response of the hydraulic mount.
International Nuclear Information System (INIS)
Jiang, Jianjun; Wang, Yiqun; Zhang, Li; Xie, Tian; Li, Min; Peng, Yuyuan; Wu, Daqing; Li, Peiyao; Ma, Congmin; Shen, Mengxu; Wu, Xing; Weng, Mengyun; Wang, Shiwei; Xie, Cen
2016-01-01
Highlights: • The authors present an optimization algorithm for interface task layout. • The process of the proposed algorithm is depicted. • The performance evaluation method adopts a neural network. • Optimized layouts of an event's interface tasks were obtained by experiment. - Abstract: This is the last in a series of papers describing the optimal design of a digital human–computer interface for a nuclear power plant (NPP) from three different viewpoints based on human reliability. The purpose of the series is to propose different optimization methods, from varying perspectives, to decrease human-factor events that arise from defects of a human–computer interface. The present paper addresses the optimization problem of how to effectively lay out interface tasks onto different screens, with the aim of decreasing human errors by reducing the distance that an operator moves among different screens in each operation. To resolve the problem, the authors propose an optimization process for interface task layout for the digital human–computer interface of an NPP. To automatically lay out each interface task onto one of the screens in each operation, the paper presents a shortest-moving-path optimization algorithm with a dynamic flag, based on human reliability. To test the algorithm's performance, the evaluation method uses a neural network based on human reliability: the lower the human error probabilities, the better the interface task layout among the different screens. Thus, by analyzing the performance of each interface task layout, the optimization result is obtained. Finally, optimized layouts of the spurious safety injection event interface tasks of the NPP were obtained by experiment; the proposed method shows good accuracy and stability.
Optimal reliability design method for remote solar systems
Suwapaet, Nuchida
A unique optimal reliability design algorithm is developed for remote communication systems. The algorithm either minimizes the unavailability of the system within a fixed cost or minimizes the cost of the system under an unavailability constraint. The unavailability of the system is a function of three possible failure occurrences: individual component breakdown, solar energy deficiency (loss-of-load probability), and satellite/radio transmission loss. The three mathematical models of component failure, solar power failure, and transmission failure are combined and formulated as a nonlinear programming optimization problem with binary decision variables, such as the number and type (or size) of photovoltaic modules, batteries, radios, antennas, and controllers. The three possible failures are identified and integrated in a computer algorithm to generate the parameters for the optimization algorithm. The optimization algorithm is implemented with a branch-and-bound solution technique in MS Excel Solver. The algorithm is applied to a case-study design for an actual system that will be set up in remote mountainous areas of Peru. The automated algorithm is verified with independent calculations. The optimal results from minimizing the unavailability of the system under the cost constraint and from minimizing the total cost of the system under the unavailability constraint are consistent with each other. The trade-off feature of the algorithm allows designers to observe the results of 'what-if' scenarios of relaxing constraint bounds, thus obtaining the most benefit from the optimization process. An example of this approach applied to an existing communication system in the Andes shows dramatic improvement in reliability for little increase in cost. The algorithm is a real design tool, unlike other existing simulation design tools. The algorithm should be useful for other stochastic systems where component reliability, random supply and demand, and communication are
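A toy version of the cost-minimization-under-unavailability formulation can be sketched by brute-force enumeration, standing in for the branch-and-bound solution in the paper; the two component types, their failure probabilities, and their costs are illustrative assumptions:

```python
from itertools import product

# Toy design problem: choose redundancy counts for two component types
# to minimize cost subject to a system-unavailability cap.
q = {"battery": 0.10, "radio": 0.05}   # per-unit failure probability (assumed)
cost = {"battery": 200, "radio": 500}  # per-unit cost (assumed)

def unavailability(n_batt, n_radio):
    # System fails if ALL units of either type fail (parallel redundancy,
    # independent failures).
    fail_batt = q["battery"] ** n_batt
    fail_radio = q["radio"] ** n_radio
    return 1.0 - (1.0 - fail_batt) * (1.0 - fail_radio)

# Minimize cost subject to unavailability <= 1e-3.
best = None
for nb, nr in product(range(1, 5), range(1, 5)):
    if unavailability(nb, nr) <= 1e-3:
        c = nb * cost["battery"] + nr * cost["radio"]
        if best is None or c < best[0]:
            best = (c, nb, nr)
print(best)  # -> (2300, 4, 3): 4 batteries and 3 radios
```

Branch-and-bound explores the same binary/integer decision space but prunes dominated branches instead of enumerating every combination.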
Directory of Open Access Journals (Sweden)
M. A. Farkov
2014-01-01
Full Text Available An analysis of numerical optimization methods for solving a problem of molecular docking has been performed. Some additional requirements for optimization methods according to GPU architecture features were specified. A promising method for implementation on GPU was selected. Its implementation was described and performance and accuracy tests were performed.
Computational Methods for Aerodynamic Design (Inverse) and Optimization
1990-01-01
design and optimization, one cannot overlook recent developments in the field of Artificial Intelligence (AI), i.e., the study of how to make ... Coupling Artificial Intelligence with aerodynamic design may use much of the recent progress in systematic design and optimization developments. References ... in the subsonic regions. On the other hand, the uniqueness of a physical solution must be ensured; an artificial viscosity is ...
A topology optimization method for design of negative permeability metamaterials
DEFF Research Database (Denmark)
Diaz, A. R.; Sigmund, Ole
2010-01-01
A methodology based on topology optimization for the design of metamaterials with negative permeability is presented. The formulation is based on the design of a thin layer of copper printed on a dielectric, rectangular plate of fixed dimensions. An effective media theory is used to estimate the effective permeability, obtained after solving Maxwell's equations on a representative cell of a periodic arrangement using a full 3D finite element model. The effective permeability depends on the layout of copper, and the subject of the topology optimization problem is to find layouts that result ...
A mixed optimization method for automated design of fuselage structures.
Sobieszczanski, J.; Loendorf, D.
1972-01-01
A procedure for automating the design of transport aircraft fuselage structures has been developed and implemented in the form of an operational program. The structure is designed in two stages. First, an overall distribution of structural material is obtained by means of optimality criteria to meet strength and displacement constraints. Subsequently, the detailed design of selected rings and panels consisting of skin and stringers is performed by mathematical optimization accounting for a set of realistic design constraints. The practicality and computer efficiency of the procedure are demonstrated on cylindrical and area-ruled large transport fuselages.
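The first-stage resizing by optimality criteria can be illustrated with the classic fully-stressed-design rule (a toy stand-in, not the paper's actual procedure): scale each member area by the ratio of its stress to the allowable stress. Member forces and allowables below are hypothetical; for a statically determinate structure the forces are fixed, so the rule converges in one pass:

```python
# Toy fully-stressed-design iteration: A_new = A * sigma / sigma_allow.
# Forces and the allowable stress are hypothetical illustration values.

SIGMA_ALLOW = 100.0                         # allowable stress
member_forces = [5000.0, -3000.0, 1200.0]   # axial forces, fixed by statics

def resize(areas, forces, sigma_allow, iters=5):
    for _ in range(iters):
        # Current stress in each member, then rescale toward full stress.
        stresses = [abs(f) / a for f, a in zip(forces, areas)]
        areas = [a * s / sigma_allow for a, s in zip(areas, stresses)]
    return areas

areas0 = [10.0, 10.0, 10.0]  # initial sizing guess
areas = resize(areas0, member_forces, SIGMA_ALLOW)
# Each member ends at |F| / sigma_allow, i.e. fully stressed.
```

For statically indeterminate structures the forces change with the areas, so the loop genuinely iterates; that redistribution is what makes the two-stage procedure in the abstract non-trivial.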
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
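The two-variable "regular vs. severe patients" example in the report can be sketched as a small linear program (all coefficients here are hypothetical, not the report's). The graphical method's logic is that the optimum lies at a vertex of the feasible region, so we enumerate the intersections of constraint boundaries and keep the best feasible one:

```python
# Hypothetical planning problem: choose how many "regular" (x) and
# "severe" (y) patients to treat, maximizing health benefit subject to
# clinician-time and budget constraints.

# Constraints a*x + b*y <= c; the last two encode x >= 0 and y >= 0.
constraints = [
    (1.0, 3.0, 40.0),        # time: 1 h regular, 3 h severe, 40 h total
    (100.0, 500.0, 5000.0),  # budget: 100 regular, 500 severe, 5000 total
    (-1.0, 0.0, 0.0),        # x >= 0
    (0.0, -1.0, 0.0),        # y >= 0
]
benefit = (1.0, 4.0)         # health benefit per regular / severe patient

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= c + tol for a, b, c in constraints)

def solve():
    best = None
    n = len(constraints)
    for i in range(n):
        for j in range(i + 1, n):
            a1, b1, c1 = constraints[i]
            a2, b2, c2 = constraints[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-12:
                continue  # parallel boundaries, no vertex
            # Cramer's rule for the boundary intersection.
            x = (c1 * b2 - b1 * c2) / det
            y = (a1 * c2 - c1 * a2) / det
            if feasible(x, y):
                value = benefit[0] * x + benefit[1] * y
                if best is None or value > best[0]:
                    best = (value, x, y)
    return best
```

With these numbers the optimum treats 25 regular and 5 severe patients for a total benefit of 45, exactly the vertex where the time and budget lines cross. Real health-services problems with many variables require the LP/MIP solvers the report surveys rather than vertex enumeration.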
Models and Methods for Structural Topology Optimization with Discrete Design Variables
DEFF Research Database (Denmark)
Stolpe, Mathias
Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used ... or stresses, or fundamental frequencies. The design variables are either continuous or discrete and model dimensions, thicknesses, densities, or material properties. This thesis is devoted to the development of mathematical models, theory, and advanced numerical optimization methods for solving structural topology optimization problems with discrete design variables to proven global optimality. The thesis begins with an introduction ...
Optimal Control of Diesel Engines: Numerical Methods, Applications, and Experimental Validation
Directory of Open Access Journals (Sweden)
Jonas Asprion
2014-01-01
Diesel engines have become complex systems. The exploitation of any leftover potential during transient operation is crucial. However, even an experienced calibration engineer cannot conceive of all the dynamic cross-couplings between the many actuators. Therefore, a highly iterative procedure is required to obtain a single engine calibration, which in turn causes a high demand for test-bench time. Physics-based mathematical models and dynamic optimisation are the tools to alleviate this dilemma. This paper presents the methods required to implement such an approach. The optimisation-oriented modelling of diesel engines is summarised, and the numerical methods required to solve the corresponding large-scale optimal control problems are presented. The resulting optimal control input trajectories over long driving profiles are shown to provide enough information to allow conclusions to be drawn for causal control strategies. Ways of utilising these data are illustrated, which indicate that a fully automated dynamic calibration of the engine control unit is conceivable. An experimental validation demonstrates the meaningfulness of these results. The measurement results show that the optimisation predicts the reduction of the fuel consumption and the cumulative pollutant emissions with a relative error of around 10% on highly transient driving cycles.
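The numerical backbone of such work, direct single shooting, can be sketched on a toy problem (not the paper's engine model; the scalar dynamics, horizon, and weights below are hypothetical): parameterize the control over N intervals, simulate the state forward, and descend a cost trading control effort against a terminal-state target:

```python
# Toy direct single shooting: discretized control u[0..N-1], forward
# simulation of dx/dt = -x + u, and finite-difference gradient descent
# on effort + terminal penalty. All numbers are illustration values.

N, T = 20, 2.0
dt = T / N

def simulate(u):
    # Forward-Euler integration of dx/dt = -x + u, x(0) = 0.
    x = 0.0
    for k in range(N):
        x += dt * (-x + u[k])
    return x

def cost(u):
    effort = sum(uk * uk for uk in u) * dt   # "fuel"-like effort term
    terminal = (simulate(u) - 1.0) ** 2      # drive x(T) toward 1
    return effort + 50.0 * terminal

def optimize(iters=300, step=0.1, h=1e-6):
    u = [0.0] * N
    for _ in range(iters):
        # Finite-difference gradient of the shooting cost.
        base = cost(u)
        grad = []
        for k in range(N):
            up = list(u)
            up[k] += h
            grad.append((cost(up) - base) / h)
        u = [uk - step * g for uk, g in zip(u, grad)]
    return u
```

Production-grade tools replace the finite-difference gradient with adjoint sensitivities and plain gradient descent with SQP or interior-point steps, but the structure (simulate, evaluate, update the control parameters) is the same.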
Method of accounting and approaches to tax optimization of income tax of entities
Directory of Open Access Journals (Sweden)
V.V. Sokolovska
2016-12-01
Full Text Available The article focuses on the organization and methodology of income tax accounting. It describes the documented operations related to the calculation and payment of income tax and suggests a standard form of source document to reduce the time needed to calculate the tax and to ease filling in the tax return. The author describes the system of accounts designed for income tax cost accounting and gives their analytical sections. The article discloses the need for management reports on this tax and suggests implementing a standard report form to improve the management of revenues, costs and, as a result, income tax. The author singles out methods of optimizing the income tax base in four areas: methods related to the fixed assets of the enterprise, inventory, accounts receivable, and employees' salaries. An algorithm of tax optimization for enterprises is developed. This algorithm, owing to its simple form, will help management personnel and accountants identify possible ways of reducing the amounts payable for income tax under the current legislation.
Single-crystal growth of Group IVB and VB carbides by the floating-zone method
International Nuclear Information System (INIS)
Finch, C.B.; Chang, Y.K.; Abraham, M.M.
1989-02-01
The floating-zone method for the growth of Group IVB and VB carbides is described and reviewed. We have systematically investigated the technique and confirmed the growth of large single crystals of TiC0.95, ZrC0.93, ZrC0.98, VC0.80, NbC0.95, and TaC0.89. Optimal growth conditions were in the 0.5-2.0 cm/h range under 8-12 atm helium. Good crystal growth results were achieved with hot-pressed starting rods of 90-95% density, using a "double pancake" induction coil and a 200-kHz/100-kW rf power supply. 36 refs., 5 figs., 3 tabs
Optimization of Process Parameters for ε-Polylysine Production by Response Surface Methods
Directory of Open Access Journals (Sweden)
Maxiaoqi Zhu
2016-01-01
Full Text Available ε-Polylysine (ε-PL) is a highly safe natural food preservative with a broad antimicrobial spectrum, excellent corrosion resistance, and great commercial potential. In the present work, we evaluated the ε-PL adsorption performance of HZB-3B and D155 resins and optimized the adsorption and desorption conditions by single-factor tests, the response surface method, and an orthogonal design. The complexes of resin and ε-PL were characterized by SEM and FTIR. The results indicated that D155 resin had the best ε-PL adsorption performance and was selected for the separation and purification of ε-PL. The conditions for the static adsorption of ε-PL on D155 resin were optimized as follows: ε-PL solution 40 g/L, pH 8.5, resins 15 g/L, and adsorption time 14 h. The adsorption efficiency of ε-PL under the optimal conditions was 96.84%. The ε-PL adsorbed on the D155 resin was easily desorbed with 0.4 mol/L HCl at 30°C in 10 h. The highest desorption efficiency was 97.57% and the overall recovery of ε-PL was 94.49% under the optimal conditions. The excellent ε-PL adsorption and desorption properties of D155 resin, including high selectivity and adsorption capacity, easy desorption, and high stability, make it a good candidate for the isolation of ε-PL from fermentation broths.
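The core idea behind the response-surface step in such optimizations can be shown on hypothetical single-factor data (these numbers are not from the study): fit a quadratic model to screening measurements by least squares, then take the stationary point of the fitted curve as the predicted optimum:

```python
# Response-surface sketch on hypothetical data: fit y = b0 + b1*x + b2*x^2
# by least squares and read off the stationary point x* = -b1 / (2*b2).

def fit_quadratic(xs, ys):
    # Build and solve the 3x3 normal equations by Gauss-Jordan elimination.
    s = [sum(x ** k for x in xs) for k in range(5)]       # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))  # partial pivoting
        A[i], A[p] = A[p], A[i]
        for r in range(3):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][3] / A[i][i] for i in range(3)]

# Hypothetical screening data with a maximum near pH 8.5.
pH = [7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0]
eff = [85.5, 88.0, 89.5, 90.0, 89.5, 88.0, 85.5]  # = 90 - 2*(pH - 8.5)^2

b0, b1, b2 = fit_quadratic(pH, eff)
optimum = -b1 / (2.0 * b2)  # stationary point of the fitted surface
```

A full response-surface study fits the same kind of quadratic in several factors at once (e.g., via a central composite design) and locates the stationary point of that surface, but the single-factor case shows the mechanics.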
Experimental Methods for the Analysis of Optimization Algorithms
DEFF Research Database (Denmark)
In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, c...
Response surface method applied to optimization of estradiol ...
Indian Academy of Sciences (India)
A factorial design was built for the determination of the main factors affecting estradiol permeation. The independent factors analysed were: ... lation, waste water treatment, packaging in the food industry, and textile dyeing (Ravi Kumar 2000; ... Experimental design and optimization are tools that are used to systematically examine ...
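How a two-level full factorial design is laid out and its main effects estimated can be sketched as follows; the factor names and responses below are hypothetical illustrations, not the study's actual factors or data:

```python
from itertools import product

# Two-level full factorial design: every combination of each factor at
# its low (-1) and high (+1) level. Factors and responses are hypothetical.

factors = ["enhancer_conc", "pH", "stirring_rate"]
runs = list(product([-1, +1], repeat=len(factors)))  # 2^3 = 8 runs

# Hypothetical measured responses (e.g., permeation flux), one per run,
# in the same order as `runs`.
responses = [4.1, 5.0, 4.3, 5.4, 6.0, 7.1, 6.2, 7.5]

def main_effect(i):
    # Average response at the factor's high level minus at its low level.
    high = [y for run, y in zip(runs, responses) if run[i] == +1]
    low = [y for run, y in zip(runs, responses) if run[i] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effects = {f: main_effect(i) for i, f in enumerate(factors)}
```

Ranking the absolute effect sizes identifies the main factors; the largest effects are then carried forward into the optimization step the abstract mentions.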
Experimental Methods for the Analysis of Optimization Algorithms
DEFF Research Database (Denmark)
in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment...
Workload Indicators Of Staffing Need Method in determining optimal ...
African Journals Online (AJOL)
... available working hours, category and individual allowances, annual workloads from the previous year's statistics, and the optimal departmental establishment of workers. Results: There was initial resentment of the exercise because of the notion that it was aimed at retrenching workers. The team was given autonomy by the ...