WorldWideScience

Sample records for single optimal method

  1. Oil Reservoir Production Optimization using Single Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Völcker, Carsten; Frydendall, Jan

    2012-01-01

    the injections and oil production such that flow is uniform in a given geological structure. Even in the case of conventional water flooding, feedback based optimal control technologies may enable higher oil recovery than with conventional operational strategies. The optimal control problems that must be solved...

  2. Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2014-01-01

Full Text Available This research presents a method to construct a combustion model and a method to optimize some control parameters of a diesel engine in order to develop a model-based control system. The purpose of the model is to manage control parameters appropriately so as to obtain target values of fuel consumption and emissions as the engine outputs. A stepwise method accounting for multicollinearity was applied to construct the combustion model with a polynomial model. Using experimental data from a single cylinder diesel engine, models of power, BSFC, NOx, and soot for multiple-injection diesel operation were built. The proposed method successfully developed a model that relates the control parameters to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is still required to find optimal engine operating conditions efficiently, alongside the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate control parameters that optimize fuel consumption and emissions based on the model. The proposed method is able to calculate control parameters efficiently to optimize the evaluation item based on the model. Finally, the model combined with PSO was compiled on a microcontroller.
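The record above pairs a fitted polynomial surrogate with particle swarm optimization. A minimal PSO sketch in Python, minimizing a made-up quadratic surrogate that stands in for the combustion model; the bounds, coefficients, and swarm settings are illustrative assumptions, not values from the paper:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp the updated position to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical quadratic surrogate standing in for the fitted combustion model:
# two injection-timing-like parameters, one weighted BSFC/NOx cost (made up).
surrogate = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2 + 0.1 * p[0] * p[1]
best, best_val = pso_minimize(surrogate, [(-5, 5), (-5, 5)])
```

With these settings the swarm converges to the surrogate's analytic minimum near (2.06, -1.10); in practice the cost function would be the engine model evaluated on real control parameters.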

  3. Thermodynamic optimization of ground heat exchangers with single U-tube by entropy generation minimization method

    International Nuclear Information System (INIS)

    Li Min; Lai, Alvin C.K.

    2013-01-01

Highlights: ► A second-law-based analysis is performed for single U-tube ground heat exchangers. ► Two expressions for the optimal length and flow velocity are developed for GHEs. ► Empirical velocities of GHEs are large compared to thermodynamic optimum values. - Abstract: This paper investigates the thermodynamic performance of borehole ground heat exchangers with a single U-tube by the entropy generation minimization method, which requires information on heat transfer and fluid mechanics in addition to thermodynamic analysis. This study first derives an expression for the dimensionless entropy generation number, a function of five dimensionless variables: the Reynolds number, the dimensionless borehole length, a scale factor of pressures, and two duty parameters of ground heat exchangers. The derivation combines a heat transfer model and a hydraulics model for borehole ground heat exchangers with the first and second laws of thermodynamics. Next, the entropy generation number is minimized to produce two analytical expressions for the optimal length and the optimal flow velocity of ground heat exchangers. The paper then discusses and analyzes the implications and applications of these optimization formulas with two case studies. An important finding from the case studies is that the widely used empirical velocities of the circulating fluid are too large for ground-coupled heat pump systems to operate in a thermodynamically optimal way. This paper demonstrates that thermodynamically optimal parameters of ground heat exchangers can probably be determined by using the entropy generation minimization method.
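The optimization step described here, minimizing an entropy generation number over flow velocity, can be illustrated with a one-dimensional search. The A/v heat-transfer term and B·v² friction term below are an illustrative stand-in for the paper's five-variable expression, with made-up coefficients:

```python
def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal function on [a, b]."""
    phi = (5 ** 0.5 - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c            # minimum lies in [a, d_old]
            c = b - phi * (b - a)
        else:
            a, c = c, d            # minimum lies in [c_old, b]
            d = a + phi * (b - a)
    return (a + b) / 2

# Illustrative (not the paper's) entropy-generation rate for a GHE loop:
# a heat-transfer irreversibility term that falls with flow velocity v,
# plus a fluid-friction term that grows with v. Coefficients are assumed.
A, B = 2.0, 0.5
s_gen = lambda v: A / v + B * v ** 2
v_opt = golden_section_min(s_gen, 0.01, 10.0)
```

For this toy form the optimum has the closed solution v* = (A/(2B))^(1/3), which the search recovers; the paper's point is that such optima sit well below common empirical circulation velocities.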

  4. Optimizing the calculation of DM,CO and VC via the single breath single oxygen tension DLCO/NO method.

    Science.gov (United States)

    Coffman, Kirsten E; Taylor, Bryan J; Carlson, Alex R; Wentz, Robert J; Johnson, Bruce D

    2016-01-15

Alveolar-capillary membrane conductance (D(M,CO)) and pulmonary-capillary blood volume (V(C)) are calculated via lung diffusing capacity for carbon monoxide (DL(CO)) and nitric oxide (DL(NO)) using the single breath, single oxygen tension (single-FiO2) method. However, two calculation parameters, the reaction rate of carbon monoxide with blood (θ(CO)) and the D(M,NO)/D(M,CO) ratio (α-ratio), are controversial. This study systematically determined optimal θ(CO) and α-ratio values to be used in the single-FiO2 method that yielded the most similar D(M,CO) and V(C) values compared to the 'gold-standard' multiple-FiO2 method. Eleven healthy subjects performed single breath DL(CO)/DL(NO) maneuvers at rest and during exercise. D(M,CO) and V(C) were calculated via the single-FiO2 and multiple-FiO2 methods by implementing seven θ(CO) equations and a range of previously reported α-ratios. The RP θ(CO) equation (Reeves, R.B., Park, H.K., 1992. Respiration Physiology 88 1-21) and an α-ratio of 4.0-4.4 yielded D(M,CO) and V(C) values that were most similar between methods. The RP θ(CO) equation and an experimental α-ratio should be used in future studies. Copyright © 2015 Elsevier B.V. All rights reserved.
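The partitioning behind this calculation follows the Roughton-Forster relation 1/DL(CO) = 1/D(M,CO) + 1/(θ(CO)·V(C)). The sketch below uses the common simplification that θ(NO) is effectively infinite, so DL(NO) ≈ D(M,NO) = α·D(M,CO); the numeric inputs are made up purely to exercise the arithmetic:

```python
def dm_vc_from_dlco_dlno(dl_co, dl_no, theta_co, alpha=4.0):
    """Solve the Roughton-Forster relation 1/DLCO = 1/DM,CO + 1/(theta_CO * VC),
    using the common simplification DLNO ~= DM,NO = alpha * DM,CO
    (i.e., theta_NO treated as effectively infinite)."""
    dm_co = dl_no / alpha                      # membrane conductance for CO
    inv_theta_vc = 1.0 / dl_co - 1.0 / dm_co   # equals 1 / (theta_CO * VC)
    vc = 1.0 / (theta_co * inv_theta_vc)       # capillary blood volume
    return dm_co, vc

# Made-up resting values, not study data: DLCO = 30, DLNO = 150 (same units),
# an assumed theta_CO of 0.6, and an alpha-ratio of 4.0.
dm_co, vc = dm_vc_from_dlco_dlno(dl_co=30.0, dl_no=150.0, theta_co=0.6, alpha=4.0)
```

The study's sensitivity question is visible here: both outputs move directly with the chosen θ(CO) equation and α-ratio, which is why those two parameters were optimized against the multiple-FiO2 method.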

  5. Optimal sampling strategies to assess inulin clearance in children by the inulin single-injection method

    NARCIS (Netherlands)

    van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.

    2003-01-01

    Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method

  6. Optimization of single-walled carbon nanotube solubility by noncovalent PEGylation using experimental design methods

    Directory of Open Access Journals (Sweden)

    Hadidi N

    2011-04-01

Full Text Available Naghmeh Hadidi1, Farzad Kobarfard2, Nastaran Nafissi-Varcheh3, Reza Aboofazeli1 1Department of Pharmaceutics, 2Department of Pharmaceutical Chemistry, 3Department of Pharmaceutical Biotechnology, School of Pharmacy, Shaheed Beheshti University of Medical Sciences, Tehran, Iran. Abstract: In this study, noncovalent functionalization of single-walled carbon nanotubes (SWCNTs) with phospholipid-polyethylene glycols (Pl-PEGs) was performed to improve the solubility of SWCNTs in aqueous solution. Two kinds of PEG derivatives, ie, Pl-PEG 2000 and Pl-PEG 5000, were used for the PEGylation process. An experimental design technique (D-optimal design and second-order polynomial equations) was applied to investigate the effect of variables on PEGylation and the solubility of SWCNTs. The type of PEG derivative was selected as a qualitative parameter, and the PEG/SWCNT weight ratio and sonication time were applied as quantitative variables for the experimental design. Optimization was performed for two responses, aqueous solubility and loading efficiency. The grafting of PEG to the carbon nanostructure was determined by thermogravimetric analysis, Raman spectroscopy, and scanning electron microscopy. Aqueous solubility and loading efficiency were determined by ultraviolet-visible spectrophotometry and measurement of free amine groups, respectively. Results showed that Pl-PEGs were grafted onto SWCNTs. Aqueous solubility of 0.84 mg/mL and loading efficiency of nearly 98% were achieved for the prepared Pl-PEG 5000-SWCNT conjugates. Evaluation of functionalized SWCNTs showed that our noncovalent functionalization protocol could considerably increase aqueous solubility, which is an essential criterion in the design of a carbon nanotube-based drug delivery system and its biodistribution. Keywords: phospholipid-PEG, D-optimal design, loading efficiency, Raman spectroscopy, scanning electron microscopy, thermogravimetric analysis, carbon nanotubes

  7. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2014-10-01

    Full Text Available We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analysis focusing on a single process. From the different mathematical equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI satellite images using data interpolating empirical orthogonal functions (DINEOF with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.

  8. An enhanced unified uncertainty analysis approach based on first order reliability method with single-level optimization

    International Nuclear Information System (INIS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; Tooren, Michel van

    2013-01-01

In engineering, there exist both aleatory uncertainties due to the inherent variation of the physical system and its operational environment, and epistemic uncertainties due to lack of knowledge, which can be reduced with the collection of more data. To analyze the uncertain distribution of the system performance under both aleatory and epistemic uncertainties, combined probability and evidence theory can be employed to quantify the compound effects of the mixed uncertainties. The existing First Order Reliability Method (FORM) based Unified Uncertainty Analysis (UUA) approach nests the optimization based interval analysis inside the improved Hasofer–Lind–Rackwitz–Fiessler (iHLRF) algorithm based Most Probable Point (MPP) searching procedure, which is computationally prohibitive for complex systems and may encounter convergence problems as well. Therefore, in this paper it is proposed to use general optimization solvers to search for the MPP in the outer loop and then reformulate the double-loop optimization problem into an equivalent single-level optimization (SLO) problem, so as to simplify the uncertainty analysis process, improve the robustness of the algorithm, and alleviate the computational complexity. The effectiveness and efficiency of the proposed method are demonstrated with two numerical examples and one practical satellite conceptual design problem. -- Highlights: ► Uncertainty analysis under mixed aleatory and epistemic uncertainties is studied. ► A unified uncertainty analysis method is proposed with combined probability and evidence theory. ► The traditional nested analysis method is converted to single-level optimization for efficiency. ► The effectiveness and efficiency of the proposed method are demonstrated with three examples
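The MPP search that the SLO reformulation replaces is classically done with the HLRF iteration. A sketch for an assumed two-dimensional limit state in standard normal space; the limit-state function and starting point are illustrative, not from the paper:

```python
import math

def hlrf_mpp(g, grad, u0, iters=50):
    """Hasofer-Lind-Rackwitz-Fiessler iteration for the Most Probable Point of
    a limit state g(u) = 0 in standard normal u-space. Returns (u*, beta)."""
    u = list(u0)
    for _ in range(iters):
        gu, dg = g(u), grad(u)
        norm2 = sum(d * d for d in dg)
        # standard HLRF update: project onto the linearized limit state
        coeff = (sum(d * x for d, x in zip(dg, u)) - gu) / norm2
        u = [coeff * d for d in dg]
    beta = math.sqrt(sum(x * x for x in u))  # reliability index = distance to origin
    return u, beta

# Assumed mildly nonlinear limit state (illustrative only):
g = lambda u: 3.0 - u[0] - u[1] ** 2
grad = lambda u: [-1.0, -2.0 * u[1]]
mpp, beta = hlrf_mpp(g, grad, [0.0, 0.5])
```

For this g the MPP can be checked analytically (u* = (0.5, sqrt(2.5)), beta = sqrt(2.75)); the paper's contribution is avoiding the need to nest an interval-analysis optimization inside each such search.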

  9. Optimization analysis of the motor cooling method in semi-closed single screw refrigeration compressor

    Science.gov (United States)

    Wang, Z. L.; Shen, Y. F.; Wang, Z. B.; Wang, J.

    2017-08-01

Semi-closed single screw refrigeration compressors (SSRCs) are widely used in refrigeration and air conditioning systems owing to the advantages of simple structure, balanced forces on the rotor, high volumetric efficiency and so on. In semi-closed SSRCs, the motor is often cooled by suction gas or by injected refrigerant liquid. The motor cooling method changes the suction gas temperature, which to a certain extent is an important factor influencing the thermodynamic performance of a compressor. Thus the effects of the motor cooling method on the performance of the compressor must be studied. In this paper, mathematical models of the motor cooling process using these two methods were established. The influences of motor cooling parameters such as suction gas temperature, suction gas quantity, and the temperature and quantity of the injected refrigerant liquid on the thermodynamic performance of the compressor were analyzed. The performances of the compressor using the two motor cooling methods were compared. The motor cooling capacity of the injected refrigerant liquid proved to be better than that of the suction gas. All analysis results obtained can be useful for optimal design of the motor cooling process to improve the efficiency and energy performance of the compressor.

  10. Single Shooting and ESDIRK Methods for adjoint-based optimization of an oil reservoir

    DEFF Research Database (Denmark)

    Capolei, Andrea; Völcker, Carsten; Frydendall, Jan

    2012-01-01

the injections and oil production such that flow is uniform in a given geological structure. Even in the case of conventional water flooding, feedback based optimal control technologies may enable higher oil recovery than with conventional operational strategies. The optimal control problems that must be solved...

  11. Optimal numerical methods for determining the orientation averages of single-scattering properties of atmospheric ice crystals

    International Nuclear Information System (INIS)

    Um, Junshik; McFarquhar, Greg M.

    2013-01-01

The optimal orientation averaging scheme (regular lattice grid scheme or quasi Monte Carlo (QMC) method), the minimum number of orientations, and the corresponding computing time required to calculate the average single-scattering properties (i.e., asymmetry parameter (g), single-scattering albedo (ω_o), extinction efficiency (Q_ext), scattering efficiency (Q_sca), absorption efficiency (Q_abs), and scattering phase function at scattering angles of 90° (P_11(90°)) and 180° (P_11(180°))) within a predefined accuracy level (i.e., 1.0%) were determined for four different nonspherical atmospheric ice crystal models (Gaussian random sphere, droxtal, budding Bucky ball, and column) with maximum dimension D = 10 μm using the Amsterdam discrete dipole approximation at λ = 0.55, 3.78, and 11.0 μm. The QMC required fewer orientations and less computing time than the lattice grid. The calculations of P_11(90°) and P_11(180°) required more orientations than the calculations of the integrated scattering properties (i.e., g, ω_o, Q_ext, Q_sca, and Q_abs) regardless of the orientation averaging scheme. The fewest orientations were required for calculating g and ω_o. The minimum number of orientations and the corresponding computing time for single-scattering calculations decreased with increasing wavelength, whereas they increased with the surface-area ratio that defines particle nonsphericity. -- Highlights: •The number of orientations required to calculate the average single-scattering properties of nonspherical ice crystals is investigated. •Single-scattering properties of ice crystals are calculated using ADDA. •The quasi Monte Carlo method is more efficient than the lattice grid method for scattering calculations. •Single-scattering properties of ice crystals depend on a newly defined parameter called the surface-area ratio
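Orientation averaging with a quasi Monte Carlo sequence can be sketched as follows, replacing the DDA-computed scattering property with a toy function whose exact orientation average is 1/3; the Halton bases and sample count are illustrative choices, not the paper's settings:

```python
import math

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in a given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_orientation_average(prop, n):
    """Quasi-Monte Carlo average of an orientation-dependent property over
    uniformly distributed orientations: u = cos(polar angle) uniform on [-1, 1],
    phi = azimuth uniform on [0, 2*pi)."""
    total = 0.0
    for i in range(1, n + 1):
        u = 2.0 * halton(i, 2) - 1.0
        phi = 2.0 * math.pi * halton(i, 3)
        total += prop(u, phi)
    return total / n

# Toy stand-in for a single-scattering property (not an actual DDA output);
# its exact orientation average is 1/3 (the phi term integrates to zero).
prop = lambda u, phi: u * u + 0.1 * math.cos(phi)
avg = qmc_orientation_average(prop, 512)
```

Sampling u = cos(θ) rather than θ itself keeps the orientation measure uniform on the sphere, which is what makes the low-discrepancy sequence converge faster than an equally weighted angular lattice.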

  12. Numerical method to optimize the polar-azimuthal orientation of infrared superconducting-nanowire single-photon detectors.

    Science.gov (United States)

    Csete, Mária; Sipos, Áron; Najafi, Faraz; Hu, Xiaolong; Berggren, Karl K

    2011-11-01

    A finite-element method for calculating the illumination-dependence of absorption in three-dimensional nanostructures is presented based on the radio frequency module of the Comsol Multiphysics software package (Comsol AB). This method is capable of numerically determining the optical response and near-field distribution of subwavelength periodic structures as a function of illumination orientations specified by polar angle, φ, and azimuthal angle, γ. The method was applied to determine the illumination-angle-dependent absorptance in cavity-based superconducting-nanowire single-photon detector (SNSPD) designs. Niobium-nitride stripes based on dimensions of conventional SNSPDs and integrated with ~ quarter-wavelength hydrogen-silsesquioxane-filled nano-optical cavity and covered by a thin gold film acting as a reflector were illuminated from below by p-polarized light in this study. The numerical results were compared to results from complementary transfer-matrix-method calculations on composite layers made of analogous film-stacks. This comparison helped to uncover the optical phenomena contributing to the appearance of extrema in the optical response. This paper presents an approach to optimizing the absorptance of different sensing and detecting devices via simultaneous numerical optimization of the polar and azimuthal illumination angles. © 2011 Optical Society of America

  13. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids, which is determined by sensor pairs, rather than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
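The pair-wise idea behind the VFOM, matching arrival-time differences for every sensor pair instead of absolute residuals, can be sketched with a small grid search. This illustrates the hyperbola-intersection principle only, not the authors' virtual-field objective; the geometry, velocity, and grid are made up:

```python
import itertools
import math

def locate_source(sensors, arrivals, v, grid, step):
    """Grid-search a 2-D source location by matching, for every sensor pair,
    the measured arrival-time difference against the model range difference.
    Each pair constrains the source to a hyperbola; the cost is minimal at
    their common intersection."""
    def cost(x, y):
        c = 0.0
        for i, j in itertools.combinations(range(len(sensors)), 2):
            di = math.hypot(x - sensors[i][0], y - sensors[i][1])
            dj = math.hypot(x - sensors[j][0], y - sensors[j][1])
            c += ((di - dj) - v * (arrivals[i] - arrivals[j])) ** 2
        return c
    xs = [grid[0] + k * step for k in range(int((grid[1] - grid[0]) / step) + 1)]
    return min(((x, y) for x in xs for y in xs), key=lambda p: cost(*p))

# Four sensors at the corners of a 100 m square, assumed wave speed 5000 m/s.
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_src, v = (40.0, 70.0), 5000.0
arrivals = [math.hypot(true_src[0] - sx, true_src[1] - sy) / v for sx, sy in sensors]
est = locate_source(sensors, arrivals, v, (0.0, 100.0), 1.0)
```

Because only differences of arrival times enter the cost, a constant picking offset shared by all sensors cancels out, which hints at why a pair-wise objective tolerates picking errors better than absolute-residual fitting.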

  14. Upconversion study of singly activator ions doped La2O3 nanoparticle synthesized via optimized solvothermal method

    Science.gov (United States)

    Tiwari, S. P.; Singh, S.; Kumar, A.; Kumar, K.

    2016-05-01

In the present work, an optimized solvothermal method has been chosen to synthesize a La2O3 host matrix singly doped with Er3+ activator ions. The sample is annealed at 500 °C in order to remove moisture and other organic impurities. The sample is characterized using XRD and FESEM to determine the phase and surface morphology. The observed particle size is almost 80 nm, with a spherical agglomerated shape. Upconversion spectra are recorded at room temperature using a 976 nm diode laser excitation source, and emission peaks in the green and red regions are consequently observed. The color coordinate diagram indicates that the present material may be applicable in different light emitting sources.

  15. Methods of mathematical optimization

    Science.gov (United States)

    Vanderplaats, G. N.

    The fundamental principles of numerical optimization methods are reviewed, with an emphasis on potential engineering applications. The basic optimization process is described; unconstrained and constrained minimization problems are defined; a general approach to the design of optimization software programs is outlined; and drawings and diagrams are shown for examples involving (1) the conceptual design of an aircraft, (2) the aerodynamic optimization of an airfoil, (3) the design of an automotive-engine connecting rod, and (4) the optimization of a 'ski-jump' to assist aircraft in taking off from a very short ship deck.

  16. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
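Among the techniques the blurb lists, stochastic approximation is compact enough to sketch. A minimal Robbins-Monro iteration with the diminishing step sizes a/k that the convergence theory requires; the objective and noise model are illustrative, not from the book:

```python
import random

def robbins_monro(noisy_grad, x0, steps=10000, a=1.0):
    """Robbins-Monro stochastic approximation: descend a noisy gradient with
    diminishing step sizes a/k (square-summable but not summable)."""
    x = x0
    for k in range(1, steps + 1):
        x -= (a / k) * noisy_grad(x)
    return x

# Minimize E[(x - theta)^2 / 2] given only noisy gradient samples
# (x - theta) + Gaussian noise; theta = 2 is the value to recover.
rng = random.Random(0)
theta = 2.0
noisy_grad = lambda x: (x - theta) + rng.gauss(0.0, 1.0)
x_hat = robbins_monro(noisy_grad, x0=0.0)
```

The iterate converges to theta at the usual 1/sqrt(n) rate; with a constant step size instead, it would keep fluctuating around the optimum, which is the role of the diminishing-step condition.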

  17. Practical methods of optimization

    CERN Document Server

    Fletcher, R

    2013-01-01

Fully describes the optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. Also provides theoretical background which gives insight into how the methods are derived. This edition offers rev

  18. An NS5A single optimized method to determine genotype, subtype and resistance profiles of Hepatitis C strains.

    Directory of Open Access Journals (Sweden)

    Elisabeth Andre-Garnier

Full Text Available The objective was to develop a method of HCV genome sequencing that allowed simultaneous genotyping and NS5A inhibitor resistance profiling. In order to validate the use of a unique RT-PCR for genotypes 1-5, 142 plasma samples from patients infected with HCV were analysed. The NS4B-NS5A partial region was successfully amplified and sequenced in all samples. In parallel, partial NS3 sequences were obtained and analyzed for genotyping. Phylogenetic analysis showed concordance of genotypes and subtypes with a bootstrap >95% for each type cluster. NS5A resistance mutations were analyzed using the Geno2pheno [hcv] v0.92 tool and compared to the recently published list of known resistance-associated substitutions. In conclusion, this tool allows determination of HCV genotypes and subtypes and identification of NS5A resistance mutations. This single method can be used to detect pre-existing resistance mutations in NS5A before treatment and to monitor the emergence of resistant viruses during treatment in the major HCV genotypes (G1-5) in the EU and the US.

  19. Optimization and characterization of bulk hexagonal boron nitride single crystals grown by the nickel-chromium flux method

    Science.gov (United States)

    Hoffman, Tim

Hexagonal boron nitride (hBN) is a wide bandgap III-V semiconductor that has seen renewed interest due to the development of other III-V LED devices and the advent of graphene and other 2-D materials. For device applications, high quality, low defect density materials are needed. Several applications for hBN crystals are being investigated, including as a neutron detector and as an interference-less infrared-absorbing material. Isotopically enriched crystals were utilized for enhanced propagation of phonon modes. These applications exploit the unique physical, electronic and nanophotonic properties of bulk hBN crystals. In this study, bulk hBN crystals were grown by the flux method using a molten Ni-Cr solvent at high temperatures (1500°C) and atmospheric pressure. The effects of growth parameters, source materials, and gas environment on crystal size, morphology and purity were established and controlled, and the reliability of the process was greatly improved. Single-crystal domains exceeding 1 mm in width and 200 μm in thickness were produced and transferred to handle substrates for analysis. Grain size dependence with respect to dwell temperature, cooling rate and cooling temperature was analyzed and modeled using response surface methodology. Most significantly, crystal grain width was predicted to increase linearly with dwell temperature, with single-crystal domains exceeding 2 mm at 1700°C. Isotopically enriched 10B and 11B hBN crystals were produced using a Ni-Cr-B flux method, and their properties investigated. 10B concentration was evaluated using SIMS and correlated to the shift in the Raman peak of the E2g mode. Crystals with enrichment of 99% 10B and >99% 11B were achieved, with corresponding Raman peaks at 1392.0 cm-1 and 1356.6 cm-1, respectively. Peak FWHM also decreased as isotopic enrichment approached 100%, with widths as low as 3.5 cm-1 achieved, compared to 8.0 cm-1 for natural abundance samples. Defect selective etching was

  20. MO-FG-CAMPUS-TeP2-03: Multi-Criteria Optimization Using Taguchi Method for SRS of Multiple Lesions by Single Isocenter

    Energy Technology Data Exchange (ETDEWEB)

    Alani, S; Honig, N; Schlocker, A; Corn, B [Tel Aviv Medical Center, Tel Aviv (Israel)

    2016-06-15

Purpose: This study utilizes the Taguchi Method to evaluate the VMAT planning parameters of single isocenter treatment plans for multiple brain metastases. An optimization model based on Taguchi and the utility concept is employed to optimize the planning parameters, including arc arrangement, calculation grid size, calculation model, and beam energy, on multiple performance characteristics, namely conformity index and dose to normal brain. Methods: Treatment plans, each with 4 metastatic brain lesions, were planned using the single isocenter technique. The collimator angles were optimized to avoid open areas. In this analysis four planning parameters (a-d) were considered: (a)-Arc arrangements: set1: Gantry 181cw179 couch0; gantry179ccw0, couch315; and gantry0ccw181, couch45. set2: set1 plus an additional arc: Gantry 0cw179, couch270. (b)-Energy: 6-MV; 6MV-FFF (c)-Calculation grid size: 1mm; 1.5mm (d)-Calculation models: AAA; Acuros. Treatment planning was performed in Varian Eclipse (ver.11.0.30). A suitable orthogonal array (L8) was selected to perform the experiments. After conducting the experiments with the combinations of planning parameters, the S/N ratio for each parameter was calculated for the conformity index (CI) and the normal brain dose. Optimum levels for the multiple response optimization were determined. Results: We determined that the factors most affecting the conformity index are arc arrangement and beam energy. These tests were also used to evaluate dose to normal brain; here, the significant parameters were grid size and calculation model. Using the utility concept we determined the combination of the four factors tested in this study that most significantly influences the quality of the resulting treatment plans: (a)-arc arrangement-set2, (b)-6MV, (c)-calc.grid 1mm, (d)-Acuros algorithm. Overall, the dominant influences on plan quality are (a)-arc arrangement and (b)-beam energy. Conclusion: Results were analyzed using ANOVA and
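The Taguchi analysis described here rests on signal-to-noise ratios computed per run and averaged per factor level. A sketch with made-up normal-brain doses for a two-level factor; the dose values and level labels are illustrative, not the study's data:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi signal-to-noise ratio for a smaller-is-better response
    (e.g. dose to normal brain): SN = -10 * log10(mean(y^2)).
    Higher SN means a better, more robust level."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Made-up normal-brain doses (Gy) for four runs of a two-level factor,
# e.g. calculation grid size at 1.0 mm vs 1.5 mm (illustrative only).
runs = {
    ("1.0mm", 1): [4.1, 4.3],
    ("1.0mm", 2): [4.0, 4.2],
    ("1.5mm", 1): [4.8, 5.0],
    ("1.5mm", 2): [4.7, 4.9],
}
sn = {k: sn_smaller_is_better(v) for k, v in runs.items()}
# Main effect of grid size = mean SN at level 1.0mm minus mean SN at 1.5mm;
# a positive effect favors the 1.0mm level.
effect = (sum(s for (lvl, _), s in sn.items() if lvl == "1.0mm") / 2
          - sum(s for (lvl, _), s in sn.items() if lvl == "1.5mm") / 2)
```

In a full L8 analysis the same per-level averaging is done for each of the four factors, and ANOVA then ranks which main effects are significant.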

  1. Optimization of a single-drop microextraction method for multielemental determination by electrothermal vaporization inductively coupled plasma mass spectrometry following in situ vapor generation

    International Nuclear Information System (INIS)

    Gil, Sandra; Loos-Vollebregt, Margaretha T.C. de; Bendicho, Carlos

    2009-01-01

    A headspace single-drop microextraction (HS-SDME) method has been developed in combination with electrothermal vaporization inductively coupled plasma mass spectrometry (ETV-ICP-MS) for the simultaneous determination of As, Sb, Bi, Pb, Sn and Hg in aqueous solutions. Vapor generation is carried out in a 40 mL volume closed-vial containing a solution with the target analytes in hydrochloric acid and potassium ferricyanide medium. Hydrides (As, Sb, Bi, Pb, Sn) and Hg vapor are trapped onto an aqueous single drop (3 μL volume) containing Pd(II), followed by the subsequent injection in the ETV. Experimental variables such as medium composition, sodium tetrahydroborate (III) volume and concentration, stirring rate, extraction time, sample volume, ascorbic acid concentration and palladium amount in the drop were fully optimized. The limits of detection (LOD) (3σ criterion) of the proposed method for As, Sb, Bi, Pb, Sn and Hg were 0.2, 0.04, 0.01, 0.07, 0.09 and 0.8 μg/L, respectively. Enrichment factors of 9, 85, 138, 130, 37 and 72 for As, Sb, Bi, Pb, Sn and Hg, respectively, were achieved in 210 s. The relative standard deviations (N = 5) ranged from 4 to 8%. The proposed HS-SDME-ETV-ICP-MS method has been applied for the determination of As, Sb, Bi, Pb, Sn and Hg in NWRI TM-28.3 certified reference material.
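The detection limits quoted above follow from the standard 3σ definition, LOD = 3·s(blank)/slope of the calibration curve. A short worked example with made-up blank intensities and calibration slope:

```python
import statistics

def lod_3sigma(blank_signals, slope):
    """Limit of detection by the 3-sigma criterion: three times the sample
    standard deviation of replicate blank signals, divided by the
    calibration slope (signal per concentration unit)."""
    return 3.0 * statistics.stdev(blank_signals) / slope

# Made-up blank ICP-MS intensities (counts) and an assumed calibration
# slope of 2500 counts per ug/L; not values from the paper.
blanks = [120.0, 118.0, 123.0, 119.0, 121.0]
lod = lod_3sigma(blanks, slope=2500.0)
```

The enrichment factors reported in the record act directly on the slope: preconcentration into the microdrop steepens the calibration, which is what drives the sub-μg/L limits.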

  2. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2008-01-01

    Optimization problems arising in practice involve random model parameters. This book features many illustrations, several examples, and applications to concrete problems from engineering and operations research.

  3. Optimization for Guitar Fingering on Single Notes

    Science.gov (United States)

    Itoh, Masaru; Hayashida, Takumi

This paper presents an optimization method for guitar fingering. Fingering is the task of determining a unique combination of string, fret and finger corresponding to each note. The method aims to generate the best fingering pattern for guitar robots rather than for beginners. Furthermore, it can be applied to any musical score of single notes. A fingering action can be decomposed into three motions, that is, pressing a string, releasing a string and moving the fretting hand. The cost of moving the hand is estimated on the basis of Manhattan distance, which is the sum of distances along the fret and string directions. The objective is to minimize the total fingering cost, subject to fret, string and finger constraints. As a sequence of notes on the score forms a line on a time series, the optimization of guitar fingering can be resolved into a multistage decision problem. Dynamic programming is exceedingly effective for solving such a problem. A level concept is introduced into the rendering states so as to make multiple DP solutions lead to a unique one in the DP backward processes. For example, if two fingerings have the same cost at different states on a stage, then the low position takes precedence over the high position, and the index finger over the middle finger.
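The multistage decision problem described above maps directly onto dynamic programming over per-note position options. A sketch with a made-up three-note melody; the level-based tie-breaking the paper introduces is omitted for brevity:

```python
def optimal_fingering(note_options):
    """Multistage DP over a melody: note_options[t] is the list of feasible
    (string, fret, finger) triples for note t; the transition cost is the
    Manhattan distance of the fretting-hand move along the string and fret
    directions. Returns (total cost, chosen triple per note)."""
    def move_cost(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    # best[s] = (minimum cost to reach state s at the current note, path so far)
    best = {s: (0, [s]) for s in note_options[0]}
    for options in note_options[1:]:
        nxt = {}
        for s in options:
            prev = min(best.items(),
                       key=lambda kv: kv[1][0] + move_cost(kv[0], s))
            nxt[s] = (prev[1][0] + move_cost(prev[0], s), prev[1][1] + [s])
        best = nxt
    return min(best.values(), key=lambda cv: cv[0])

# Three notes, each playable at two (string, fret, finger) positions (made up):
melody = [
    [(1, 5, 1), (2, 10, 1)],
    [(1, 7, 3), (3, 2, 1)],
    [(2, 8, 4), (4, 3, 2)],
]
cost, path = optimal_fingering(melody)
```

Each stage keeps only the best cost per reachable state, so the search is linear in the number of notes rather than exponential in the number of position combinations.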

  4. Analytical methods of optimization

    CERN Document Server

    Lawden, D F

    2006-01-01

    Suitable for advanced undergraduates and graduate students, this text surveys the classical theory of the calculus of variations. It takes the approach most appropriate for applications to problems of optimizing the behavior of engineering systems. Two of these problem areas have strongly influenced this presentation: the design of the control systems and the choice of rocket trajectories to be followed by terrestrial and extraterrestrial vehicles.Topics include static systems, control systems, additional constraints, the Hamilton-Jacobi equation, and the accessory optimization problem. Prereq

  5. Optimize Etching Based Single Mode Fiber Optic Temperature Sensor

    OpenAIRE

    Ajay Kumar; Dr. Pramod Kumar

    2014-01-01

    This paper presents a description of the etching process for fabricating single-mode optical fiber sensors. The fabrication process demonstrates an optimized etching-based method to fabricate single-mode fiber (SMF) optic sensors at a specified constant time and temperature. We propose a single-mode optical fiber based temperature sensor, where the temperature sensing region is obtained by etching the cladding diameter over a small length down to a critical value. It is observed that th...

  6. B-ALL minimal residual disease flow cytometry: an application of a novel method for optimization of a single-tube model.

    Science.gov (United States)

    Shaver, Aaron C; Greig, Bruce W; Mosse, Claudio A; Seegmiller, Adam C

    2015-05-01

    Optimizing a clinical flow cytometry panel can be a subjective process dependent on experience. We develop a quantitative method to make this process more rigorous and apply it to B lymphoblastic leukemia/lymphoma (B-ALL) minimal residual disease (MRD) testing. We retrospectively analyzed our existing three-tube, seven-color B-ALL MRD panel and used our novel method to develop an optimized one-tube, eight-color panel, which was tested prospectively. The optimized one-tube, eight-color panel resulted in greater efficiency of time and resources with no loss in diagnostic power. Constructing a flow cytometry panel using a rigorous, objective, quantitative method permits optimization and avoids problems of interdependence and redundancy in a large, multiantigen panel. Copyright© by the American Society for Clinical Pathology.

  7. Interactive Nonlinear Multiobjective Optimization Methods

    OpenAIRE

    Miettinen, Kaisa; Hakanen, Jussi; Podkopaev, Dmitry

    2016-01-01

    An overview of interactive methods for solving nonlinear multiobjective optimization problems is given. In interactive methods, the decision maker progressively provides preference information so that the most satisfactory Pareto optimal solution can be found for him or her. The basic features of several methods are introduced and some theoretical results are provided. In addition, references to modifications and applications as well as to other methods are indicated. As the...

  8. Optimization methods for logical inference

    CERN Document Server

    Chandru, Vijay

    2011-01-01

    Merging logic and mathematics in deductive inference-an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks. . . it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in

  9. Optimization methods in structural design

    CERN Document Server

    Rothwell, Alan

    2017-01-01

    This book offers an introduction to numerical optimization methods in structural design. Employing a readily accessible and compact format, the book presents an overview of optimization methods, and equips readers to properly set up optimization problems and interpret the results. A ‘how-to-do-it’ approach is followed throughout, with less emphasis at this stage on mathematical derivations. The book features spreadsheet programs provided in Microsoft Excel, which allow readers to experience optimization ‘hands-on.’ Examples covered include truss structures, columns, beams, reinforced shell structures, stiffened panels and composite laminates. For the last three, a review of relevant analysis methods is included. Exercises, with solutions where appropriate, are also included with each chapter. The book offers a valuable resource for engineering students at the upper undergraduate and postgraduate level, as well as others in the industry and elsewhere who are new to these highly practical techniques.Whi...

  10. Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy

    Science.gov (United States)

    Minh, David D. L.; Adib, Artur B.

    2008-05-01

    An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.
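At the free-energy endpoints, bidirectional work data of this kind are commonly combined via Bennett's acceptance ratio (BAR), which the paper's path-ensemble estimator generalizes. The sketch below solves the BAR self-consistency equation by bisection; equal numbers of forward and reverse samples and the bracketing interval are simplifying assumptions, not part of the paper's general setting.

```python
import math

def bar_delta_f(w_forward, w_reverse, beta=1.0, tol=1e-9):
    """Bennett-acceptance-ratio free-energy estimate from bidirectional work.
    w_forward: work values for the 0 -> 1 process; w_reverse: for 1 -> 0.
    Assumes equal sample counts in both directions for simplicity."""
    def g(dF):
        # self-consistency residual; g is monotonically increasing in dF
        fwd = sum(1.0 / (1.0 + math.exp(beta * (w - dF))) for w in w_forward)
        rev = sum(1.0 / (1.0 + math.exp(beta * (w + dF))) for w in w_reverse)
        return fwd - rev
    lo, hi = -100.0, 100.0   # assumes the true dF lies in this bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a single forward work of 2.0 and reverse work of -1.0 (beta = 1), the self-consistency point sits midway, at a free-energy difference of 1.5.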

  11. Optimization of Medical Teaching Methods

    Directory of Open Access Journals (Sweden)

    Wang Fei

    2015-12-01

    Full Text Available In order to achieve the goal of medical education, medicine and adapt to changes in the way doctors work, with the rapid medical teaching methods of modern science and technology must be reformed. Based on the current status of teaching in medical colleges method to analyze the formation and development of medical teaching methods, characteristics, about how to achieve optimal medical teaching methods for medical education teachers and management workers comprehensive and thorough change teaching ideas and teaching concepts provide a theoretical basis.

  12. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  13. Optimal control linear quadratic methods

    CERN Document Server

    Anderson, Brian D O

    2007-01-01

    This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the
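For a scalar plant, the linear-quadratic regulator the book builds up reduces to a closed-form Riccati solution. A minimal sketch (the system x' = a x + b u and the weights q, r are generic illustrations, not an example from the book):

```python
import math

def scalar_lqr(a, b, q, r):
    """Infinite-horizon LQR for the scalar system x' = a*x + b*u with cost
    integral of (q*x^2 + r*u^2). The algebraic Riccati equation
    (b^2/r)*p^2 - 2*a*p - q = 0 is solved for its positive root p,
    giving the optimal state-feedback gain k = b*p/r (u = -k*x)."""
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    k = b * p / r
    return p, k
```

For a = b = q = r = 1 this gives p = 1 + sqrt(2) and a strictly stable closed loop a - b*k = -sqrt(2), matching the textbook guarantee that the LQR loop is stabilizing.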

  14. An Integrated Method for Airfoil Optimization

    Science.gov (United States)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However, this method can prove overwhelmingly time-consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The proposed method differs from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and a single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts: by focusing on only one airfoil family, they inherently limited the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization, and that a global rather than a local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs, as well as to including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal
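One of the parametric families named in the thesis, the NACA 4-digit series, has a standard closed-form definition; a sketch of the coordinate generation (the point count and the open-trailing-edge thickness coefficients are conventional choices, not values taken from the thesis):

```python
import math

def naca4(code, n=50):
    """Surface coordinates of a NACA 4-digit airfoil, e.g. "2412".
    Returns (upper, lower) lists of (x, y) points, chord normalized to 1."""
    m = int(code[0]) / 100.0   # maximum camber, fraction of chord
    p = int(code[1]) / 10.0    # chordwise location of maximum camber
    t = int(code[2:]) / 100.0  # maximum thickness, fraction of chord
    upper, lower = [], []
    for i in range(n + 1):
        x = i / n
        # standard thickness distribution (open trailing edge)
        yt = 5 * t * (0.2969 * math.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                      + 0.2843 * x**3 - 0.1015 * x**4)
        if m == 0 or p == 0:          # symmetric airfoil
            yc, dyc = 0.0, 0.0
        elif x < p:                   # camber line, fore segment
            yc = m / p**2 * (2 * p * x - x**2)
            dyc = 2 * m / p**2 * (p - x)
        else:                         # camber line, aft segment
            yc = m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2)
            dyc = 2 * m / (1 - p)**2 * (p - x)
        th = math.atan(dyc)
        upper.append((x - yt * math.sin(th), yc + yt * math.cos(th)))
        lower.append((x + yt * math.sin(th), yc - yt * math.cos(th)))
    return upper, lower
```

A genetic algorithm would then treat the digits (or their continuous analogues m, p, t) as genes and score each candidate with the coupled viscous-inviscid solver.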

  15. Optimization of Single-Layer Braced Domes

    Directory of Open Access Journals (Sweden)

    Grzywiński Maksym

    2017-06-01

    Full Text Available The paper discusses an optimization problem in the structural design of civil engineering space structures. Mass is minimized subject to ultimate limit state and serviceability conditions. The cross-sectional areas of bars and the structural dimensions are taken as design variables, in both continuous and discrete form. The analysis is done using the Structural and Design of Experiments modules of Ansys Workbench v17.2. As a result of the method, a mass reduction of 46.6% is achieved.

  16. Replica Analysis for Portfolio Optimization with Single-Factor Model

    Science.gov (United States)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model, and compare the findings obtained from our proposed method for correlated return rates with those obtained for independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures from operations research for minimizing the investment risk.
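The single-factor covariance structure can be made concrete with a small mean-variance computation. The minimum-variance objective and the tiny Gaussian-elimination solver below are illustrative simplifications of the replica-analysis setting, not the paper's method.

```python
def gauss_solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def min_variance_weights(betas, resid_vars, factor_var):
    """Minimum-variance portfolio weights under the single-factor covariance
    Sigma_ij = beta_i * beta_j * sigma_f^2 + delta_ij * sigma_eps_i^2,
    i.e. w proportional to Sigma^{-1} 1, normalized to sum to one."""
    n = len(betas)
    sigma = [[betas[i] * betas[j] * factor_var
              + (resid_vars[i] if i == j else 0.0)
              for j in range(n)] for i in range(n)]
    y = gauss_solve(sigma, [1.0] * n)
    s = sum(y)
    return [w / s for w in y]
```

Setting factor_var = 0 recovers the independent-return case the paper compares against, where only the residual variances shape the weights.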

  17. Optimized multiple linear mappings for single image super-resolution

    Science.gov (United States)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.

  18. Optimization of the Single Staggered Wire and Tube Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Arsana I Made

    2016-01-01

    Full Text Available A wire and tube heat exchanger consists of a coiled tube with wires welded on its two sides, normal to the tube. Generally, wire and tube heat exchangers use an inline wire arrangement on the two sides, whereas this study used a staggered wire arrangement, which reduces the restriction of convection heat transfer. This study optimized a single staggered wire and tube heat exchanger to increase the capacity and reduce the mass of the heat exchanger. Optimization was conducted with the Hooke-Jeeves method, which optimizes the geometry of the heat exchanger, especially the wire diameter (dw and the distance between wires (pw. The model developed to present heat transfer correlations on the single staggered wire and tube heat exchanger was valid. The maximum optimization factor, fref = 1.5837, was obtained with a wire diameter of 0.9 mm and a distance between wires (pw of 11 mm. This means that the optimized design uses only 59.10% of the mass of the baseline design while transferring about 98.5% of its heat.
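A generic sketch of the Hooke-Jeeves pattern search used in the study; the heat-exchanger objective itself is not given in the abstract, so a simple quadratic stands in for it here.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Hooke-Jeeves pattern search (derivative-free minimization).
    f: objective; x0: starting point; step halves whenever no move helps.
    The objective is re-evaluated freely for clarity, not efficiency."""
    def explore(base, s):
        # coordinate-wise exploratory moves of size +/- s
        x = base[:]
        for i in range(len(x)):
            for d in (s, -s):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x
    base = x0[:]
    while step > tol:
        new = explore(base, step)
        if f(new) < f(base):
            # pattern move: extrapolate along the successful direction
            pattern = [2 * n - b for n, b in zip(new, base)]
            cand = explore(pattern, step)
            base = cand if f(cand) < f(new) else new
        else:
            step *= shrink
    return base
```

In the paper's setting the two coordinates of `x` would be the wire diameter dw and the wire spacing pw, with the optimization factor as the (negated) objective.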

  19. Development, optimization, and single laboratory validation of an event-specific real-time PCR method for the detection and quantification of Golden Rice 2 using a novel taxon-specific assay.

    Science.gov (United States)

    Jacchia, Sara; Nardini, Elena; Savini, Christian; Petrillo, Mauro; Angers-Loustau, Alexandre; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco

    2015-02-18

    In this study, we developed, optimized, and in-house validated a real-time PCR method for the event-specific detection and quantification of Golden Rice 2, a genetically modified rice with provitamin A in the grain. We optimized and evaluated the performance of the taxon (targeting rice Phospholipase D α2 gene)- and event (targeting the 3' insert-to-plant DNA junction)-specific assays that compose the method as independent modules, using haploid genome equivalents as unit of measurement. We verified the specificity of the two real-time PCR assays and determined their dynamic range, limit of quantification, limit of detection, and robustness. We also confirmed that the taxon-specific DNA sequence is present in single copy in the rice genome and verified its stability of amplification across 132 rice varieties. A relative quantification experiment evidenced the correct performance of the two assays when used in combination.
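The quantification logic of such event/taxon measurements can be illustrated with textbook real-time PCR arithmetic. The standard-curve form and the parameter values in the test are generic illustrations, not figures from the validation study.

```python
def copies_from_cq(cq, slope, intercept):
    """Copy number from a standard curve Cq = slope * log10(copies) + intercept
    (slope near -3.32 corresponds to ~100% amplification efficiency)."""
    return 10 ** ((cq - intercept) / slope)

def gm_percentage(event_copies, taxon_copies):
    """GM content as the ratio of event- to taxon-specific copies, both
    expressed in haploid genome equivalents (the unit used in the abstract,
    valid because the taxon target is single-copy in the rice genome)."""
    return 100.0 * event_copies / taxon_copies
```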

  20. Application of evolution strategy algorithm for optimization of a single-layer sound absorber

    Directory of Open Access Journals (Sweden)

    Morteza Gholamipoor

    2014-12-01

    Full Text Available Depending on different design parameters and limitations, optimization of sound absorbers has always been a challenge in the field of acoustic engineering. Various methods of optimization have evolved in the past decades, with the innovative method of evolution strategy gaining more attention in recent years. Owing to their simplicity and straightforward mathematical representation, single-layer absorbers have been widely used in both engineering and industrial applications, and an optimized design for these absorbers has become vital. In the present study, an evolution strategy algorithm is used for optimization of a single-layer absorber at both a particular frequency and an arbitrary frequency band. Results of the optimization are compared against different genetic algorithm and penalty function methods and prove favorable in both effectiveness and accuracy. Finally, a single-layer absorber is optimized over a desired range of frequencies, which is the main goal of an industrial and engineering optimization process.
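A minimal (1+1)-evolution strategy with the classic 1/5 success rule conveys the idea of the method; the absorber model, constraints, and frequency-band objective from the paper are replaced here by a generic test function.

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=1):
    """(1+1)-evolution strategy: one parent, one Gaussian-mutated offspring,
    keep the better of the two. Step size sigma adapts by the 1/5 success
    rule. f: objective to minimize; x0: starting design vector."""
    rng = random.Random(seed)
    x, fx = x0[:], f(x0)
    successes = 0
    for k in range(1, iters + 1):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:                 # selection: offspring replaces parent
            x, fx = y, fy
            successes += 1
        if k % 20 == 0:             # adapt step size every 20 mutations
            sigma *= 1.22 if successes > 4 else 0.82
            successes = 0
    return x, fx
```

In the paper's setting, `f` would return the (negated) absorption coefficient of the single-layer absorber over the target frequency band, with penalty terms for constraint violations.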

  1. Adaptive scalarization methods in multiobjective optimization

    CERN Document Server

    Eichfelder, Gabriele

    2008-01-01

    This book presents adaptive solution methods for multiobjective optimization problems based on parameter dependent scalarization approaches. Readers will benefit from the new adaptive methods and ideas for solving multiobjective optimization.

  2. Optimization design of energy deposition on single expansion ramp nozzle

    Science.gov (United States)

    Ju, Shengjun; Yan, Chao; Wang, Xiaoyong; Qin, Yupei; Ye, Zhifei

    2017-11-01

    Optimization design has been widely used in the aerodynamic design process of scramjets. The single expansion ramp nozzle is an important component of scramjets, producing most of the thrust. A new concept for improving the aerodynamics of the scramjet nozzle with energy deposition is presented. The essence of the method is to create a heated region in the inner flow field of the scramjet nozzle. In the current study, the two-dimensional coupled implicit compressible Reynolds-averaged Navier-Stokes equations and Menter's shear stress transport turbulence model have been applied to numerically simulate the flow fields of the single expansion ramp nozzle with and without energy deposition. The numerical results show that energy deposition can be an effective method to improve the force characteristics of the scramjet nozzle: the thrust coefficient CT increases by 6.94% and the lift coefficient CN decreases by 26.89%. Further, the non-dominated sorting genetic algorithm coupled with a Radial Basis Function neural network surrogate model has been employed to determine the optimum location and density of the energy deposition. The thrust coefficient CT and lift coefficient CN are selected as objective functions, and the sampling points are obtained numerically by using a Latin hypercube design method. The optimized thrust coefficient CT further increases by 1.94%, while the optimized lift coefficient CN further decreases by 15.02%. At the same time, the optimized performances are in good agreement with the numerical predictions. The findings suggest that scramjet nozzle design and performance can benefit from the application of energy deposition.
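The surrogate step can be sketched with a plain Gaussian RBF interpolant fitted to sampled designs, a simplified stand-in for the RBF neural network in the paper; the kernel width and the dense linear solve are illustrative choices.

```python
import math

def gauss_solve(A, b):
    """Small dense linear solve via Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_surrogate(samples, values, gamma=4.0):
    """Fit f(x) ~ sum_i w_i * exp(-gamma * |x - x_i|^2) through the samples
    (e.g. Latin-hypercube design points with CFD-evaluated objectives).
    Returns a callable surrogate that interpolates the training data."""
    def kern(a, b):
        return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))
    w = gauss_solve([[kern(a, b) for b in samples] for a in samples], values)
    return lambda x: sum(wi * kern(x, s) for wi, s in zip(w, samples))
```

The genetic search then queries this cheap surrogate instead of the CFD solver, re-evaluating only promising candidates with the full model.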

  3. Optimization of single plate-serial dilution spotting (SP-SDS) with sample anchoring as an assured method for bacterial and yeast cfu enumeration and single colony isolation from diverse samples.

    Science.gov (United States)

    Thomas, Pious; Sekhar, Aparna C; Upreti, Reshmi; Mujawar, Mohammad M; Pasha, Sadiq S

    2015-12-01

    We propose a simple technique for bacterial and yeast cfu estimation from diverse samples with no prior idea of viable counts, designated single plate-serial dilution spotting (SP-SDS), with the prime recommendation of sample anchoring (10^0 stocks). For pure cultures, serial dilutions were prepared from a 0.1 OD (10^0) stock, and 20 μl aliquots of six dilutions (10^1-10^6) were applied as 10-15 micro-drops in six sectors over agar-gelled medium in 9-cm plates. For liquid samples, 10^0-10^5 dilutions were used, and for colloidal suspensions and solid samples (10% w/v), 10^1-10^6 dilutions. Following incubation, at least one dilution level yielded 6-60 cfu per sector, comparable to the standard method involving 100 μl samples. Tested on diverse bacteria, composite samples and Saccharomyces cerevisiae, SP-SDS offered wider applicability over alternative methods like drop-plating and track-dilution for cfu estimation, single colony isolation and culture purity testing, particularly suiting low-resource settings.
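The back-calculation from a countable sector follows the usual plate-count arithmetic; the sketch assumes the 20 μl per-sector volume stated in the abstract.

```python
def cfu_per_ml(colonies, spot_volume_ul, dilution_exponent):
    """cfu/mL back-calculated from one countable SP-SDS sector.
    colonies: summed count over the sector's micro-drops (aim: 6-60),
    spot_volume_ul: total volume applied per sector (20 uL in the paper),
    dilution_exponent: d for the 10^d-fold dilution that was spotted."""
    return colonies / (spot_volume_ul / 1000.0) * 10 ** dilution_exponent
```

For example, 30 colonies in a 20 μl sector at the 10^4-fold dilution correspond to 1.5e7 cfu/mL in the anchored stock.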

  4. Optimal mechanisms for single machine scheduling

    NARCIS (Netherlands)

    Heydenreich, B.; Mishra, D.; Müller, R.; Uetz, Marc Jochen; Papadimitriou, C.; Zhang, S.

    2008-01-01

    We study the design of optimal mechanisms in a setting where job-agents compete for being processed by a service provider that can handle one job at a time. Each job has a processing time and incurs a waiting cost. Jobs need to be compensated for waiting. We consider two models, one where only the

  5. OPTIMIZATION METHODS AND SEO TOOLS

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2014-06-01

    Full Text Available SEO is the activity of optimizing Web pages or whole sites in order to make them more search engine friendly, thus getting higher positions in search results. Search engine optimization (SEO involves designing, writing, and coding a website in a way that helps to improve the volume and quality of traffic to your website from people using search engines. While Search Engine Optimization is the focus of this booklet, keep in mind that it is one of many marketing techniques. A brief overview of other marketing techniques is provided at the end of this booklet.

  6. Biologically inspired optimization methods an introduction

    CERN Document Server

    Wahde, M

    2008-01-01

    The advent of rapid, reliable and cheap computing power over the last decades has transformed many, if not most, fields of science and engineering. The multidisciplinary field of optimization is no exception. First of all, with fast computers, researchers and engineers can apply classical optimization methods to problems of larger and larger size. In addition, however, researchers have developed a host of new optimization algorithms that operate in a rather different way than the classical ones, and that allow practitioners to attack optimization problems where the classical methods are either not applicable or simply too costly (in terms of time and other resources) to apply. This book is intended as a course book for introductory courses in stochastic optimization algorithms (in this book, the terms optimization method and optimization algorithm will be used interchangeably), and it has grown from a set of lecture notes used in courses taught by the author at the international master programme Complex Ada...

  7. Tax optimization methods of international companies

    OpenAIRE

    Černá, Kateřina

    2015-01-01

    This thesis focuses on methods of tax optimization of international companies. These international concerns endeavor to minimize their taxes. The disparity of tax systems gives these companies the possibility of profit and tax-base shifting. This thesis first compares the differences between tax optimization, aggressive tax planning and tax evasion. Among the areas of optimization methods described in this thesis are tax residency, dividends, royalty payments, tra...

  8. Systematization of Accurate Discrete Optimization Methods

    Directory of Open Access Journals (Sweden)

    V. A. Ovchinnikov

    2015-01-01

    Full Text Available The object of study of this paper is accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents an analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of the research, a systematic presentation of the combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.

  9. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward in which each design sub-process model is discussed. Finally, the design methods of ISO are presented

  10. DESIGN OPTIMIZATION METHOD USED IN MECHANICAL ENGINEERING

    Directory of Open Access Journals (Sweden)

    SCURTU Iacob Liviu

    2016-11-01

    Full Text Available This paper presents an optimization study in mechanical engineering. The first part of the research describes the structural optimization method used, followed by a presentation of several optimization studies conducted in recent years. The second part of the paper presents the CAD modelling of an agricultural plough component. The beam of the plough is analysed using the finite element method: the component is meshed with solid elements, and a load case that mimics the working conditions of this agricultural equipment is created. After the FEA study, the model is prepared for finding the optimal structural design. Mass reduction of the part is the criterion applied in this optimization study. The research ends by presenting the final results and the optimized shape of the model.

  11. OPTIMIZATION METHODS IN TRANSPORTATION OF FOREST PRODUCTS

    Directory of Open Access Journals (Sweden)

    Selçuk Gümüş

    2008-04-01

    Full Text Available Turkey has a total of 21.2 million ha of forest land (27% of its area. In this area, an average of 9 million m3 of logs and 5 million stere of fuel wood are produced annually by the government forest enterprises, for a total annual production of approximately 13 million m3. Considering that the cost of transporting forest products was about 160 million TL in 2006, the importance of optimizing total transportation costs can be better understood. Today, there is no common optimization method used for all transportation problems; rather, decision makers select the most appropriate method according to their aims. Understanding the features and capacity of optimization methods is important for selecting the most appropriate one. This study aims to evaluate the optimization methods that can be used in the transportation of forest products.

  12. Engineering applications of heuristic multilevel optimization methods

    Science.gov (United States)

    Barthelemy, Jean-Francois M.

    1989-01-01

    Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

  13. Method optimization of ocular patches

    Directory of Open Access Journals (Sweden)

    Kamalesh Upreti

    2012-01-01

    Full Text Available The intraocular patches were prepared using gelatin as the polymer, by the solvent casting method, for six formulations: GP1, GP2, GP3, GP4, GP5 and GP6. Petri dishes were used for forming the ocular patches. Gelatin was the polymer of choice, glutaraldehyde was used as the cross-linking agent, and dimethyl sulfoxide (DMSO) as a solubility enhancer. The elasticity depends upon the concentration of gelatin; 400 mg of the polymer gave the required elasticity for the formulation.

  14. Optimization of magnetic switches for single particle and cell transport

    Energy Technology Data Exchange (ETDEWEB)

    Abedini-Nassab, Roozbeh; Yellen, Benjamin B., E-mail: yellen@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, North Carolina 27708 (United States); Joint Institute, University of Michigan—Shanghai Jiao Tong University, Shanghai Jiao Tong University, Shanghai 200240 (China); Murdoch, David M. [Department of Medicine, Duke University, Durham, North Carolina 27708 (United States); Kim, CheolGi [Department of Emerging Materials Science, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 711-873 (Korea, Republic of)

    2014-06-28

    The ability to manipulate an ensemble of single particles and cells is a key aim of lab-on-a-chip research; however, the control mechanisms must be optimized for minimal power consumption to enable future large-scale implementation. Recently, we demonstrated a matter transport platform, which uses overlaid patterns of magnetic films and metallic current lines to control magnetic particles and magnetic-nanoparticle-labeled cells; however, we have made no prior attempts to optimize the device geometry and power consumption. Here, we provide an optimization analysis of particle-switching devices based on stochastic variation in the particle's size and magnetic content. These results are immediately applicable to the design of robust, multiplexed platforms capable of transporting, sorting, and storing single cells in large arrays with low power and high efficiency.

  15. Evolutionary optimization methods for accelerator design

    Science.gov (United States)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained

  16. An analytical method for optimal design of MR valve structures

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B; Lee, Y S; Han, M S

    2009-01-01

    This paper proposes an analytical methodology for the optimal design of a magnetorheological (MR) valve structure. The MR valve structure is constrained to a specific volume, and the optimization problem identifies the geometric dimensions of the valve structure that maximize the yield-stress pressure drop of an MR valve or the yield-stress damping force of an MR damper. In this paper, single-coil and two-coil annular MR valve structures are considered. After describing the schematic configuration and operating principle of a typical MR valve and damper, a quasi-static model is derived based on the Bingham model of an MR fluid. The magnetic circuit of the valve and damper is then analyzed by applying Kirchhoff's law and the magnetic flux conservation rule. Based on the quasi-static modeling and magnetic circuit analysis, the optimization problem of the MR valve and damper is formulated. In order to reduce the computational load, the optimization problem is simplified, and a procedure to obtain the optimal solution of the simplified optimization problem is presented. The optimal solution of the simplified optimization problem of the MR valve structure constrained to a specific volume is then obtained and compared with the solution of the original optimization problem and the optimal solution obtained from the finite element method
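A quasi-static Bingham model of the kind mentioned here is commonly approximated, for an annular gap unrolled into parallel plates, by a viscous term plus a field-dependent yield term. The coefficient c and the sample numbers in the test are illustrative, and the geometry symbols are generic rather than the paper's notation.

```python
def mr_valve_pressure_drop(eta, Q, tau_y, L, w, h, c=2.5):
    """Quasi-static Bingham estimate of the pressure drop across one MR
    valve gap modeled as flow between parallel plates:
      dp = 12*eta*Q*L / (w*h^3)   (viscous, field-independent)
         + c*tau_y*L / h          (yield, controlled by the coil current),
    where eta is the plastic viscosity, Q the volumetric flow rate, tau_y
    the field-induced yield stress, L the active gap length, w the mean
    gap circumference, h the gap height, and c in [2, 3] a shape factor."""
    dp_viscous = 12.0 * eta * Q * L / (w * h ** 3)
    dp_yield = c * tau_y * L / h
    return dp_viscous + dp_yield
```

Maximizing the yield term against the volume constraint (which couples L, w and h to the coil size) is, in essence, the optimization problem the paper formulates.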

  17. A topological derivative method for topology optimization

    DEFF Research Database (Denmark)

    Norato, J.; Bendsøe, Martin P.; Haber, RB

    2007-01-01

    resource constraint. A smooth and consistent projection of the region bounded by the level set onto the fictitious analysis domain simplifies the response analysis and enhances the convergence of the optimization algorithm. Moreover, the projection supports the reintroduction of solid material in void......We propose a fictitious domain method for topology optimization in which a level set of the topological derivative field for the cost function identifies the boundary of the optimal design. We describe a fixed-point iteration scheme that implements this optimality criterion subject to a volumetric...... regions, a critical requirement for robust topology optimization. We present several numerical examples that demonstrate compliance minimization of fixed-volume, linearly elastic structures....

  18. Topology optimization and lattice Boltzmann methods

    DEFF Research Database (Denmark)

    Nørgaard, Sebastian Arlund

    This thesis demonstrates the application of the lattice Boltzmann method for topology optimization problems. Specifically, the focus is on problems in which time-dependent flow dynamics have significant impact on the performance of the devices to be optimized. The thesis introduces new topology...... a discrete adjoint approach. To handle the complexity of the discrete adjoint approach more easily, a method for computing it based on automatic differentiation is introduced, which can be adapted to any lattice Boltzmann type method. For example, while it is derived in the context of an isothermal lattice...... Boltzmann model, it is shown that the method can be easily extended to a thermal model as well. Finally, the predicted behavior of an optimized design is compared to the equivalent prediction from a commercial finite element solver. It is found that the weakly compressible nature of the lattice Boltzmann...

  19. Optimizing How We Teach Research Methods

    Science.gov (United States)

    Cvancara, Kristen E.

    2017-01-01

    Courses: Research Methods (undergraduate or graduate level). Objective: The aim of this exercise is to optimize the ability for students to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each method, a…

  20. Optimization of breeding methods when introducing multiple ...

    African Journals Online (AJOL)

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  1. A method optimization study for atomic absorption ...

    African Journals Online (AJOL)

    A sensitive, reliable and relative fast method has been developed for the determination of total zinc in insulin by atomic absorption spectrophotometer. This designed study was used to optimize the procedures for the existing methods. Spectrograms of both standard and sample solutions of zinc were recorded by measuring ...

  2. Multi-disciplinary design optimization and performance evaluation of a single stage transonic axial compressor

    International Nuclear Information System (INIS)

    Lee, Sae Il; Lee, Dong Ho; Kim, Kyu Hong; Park, Tae Choon; Lim, Byeung Jun; Kang, Young Seok

    2013-01-01

    The multidisciplinary design optimization method, which integrates aerodynamic performance and structural stability, was utilized in the development of a single-stage transonic axial compressor. An approximation model was created using an artificial neural network for global optimization within given ranges of variables and several design constraints. A genetic algorithm was used to explore the Pareto front and find the maximum objective function value. The final design was chosen after a second-stage gradient-based optimization process to improve the accuracy of the optimization. To validate the design procedure, numerical simulations and compressor tests were carried out to evaluate the aerodynamic performance and the safety factor of the optimized compressor. The numerical optimization results and the experimental data are well matched. The optimum shape of the compressor blade is obtained and compared to the baseline design. The proposed optimization framework improves the aerodynamic efficiency and the safety factor.

  3. Topology optimization using the finite volume method

    DEFF Research Database (Denmark)

    in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design...... derivative of the system matrix K and in how one computes the discretized version of certain objective functions. Thus for a cost function for minimum dissipated energy (like minimum compliance for an elastic structure) one obtains an expression $c = \mathbf u^{\mathsf T} \tilde{\mathbf K} \mathbf u$, where $\tilde{\mathbf K}$ is different from $\mathbf K$...... the well known Reuss lower bound. [1] Bendsøe, M.P.; Sigmund, O. 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag [2] Versteeg, H. K.; W. Malalasekera 1995: An introduction to Computational Fluid Dynamics: the Finite Volume Method. London: Longman...

  4. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to the variations in geometry of the part, material properties or due to the lack of knowledge about the phenomena being modeled itself. Deterministic design optimization does not take uncertainty into account and worst case scenario assumptions lead to vastly over conservative design. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions under the presence of uncertainty in the design process. Probabilistic design optimization often involves double-loop procedure for optimization and iterative probabilistic assessment. This results in high computational demand. The high computational demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing Sequential Optimization and reliability assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment. The deterministic optimization and reliability assessment is decoupled in each cycle. This leads to quick improvement of design from one cycle to other and increase in computational efficiency. This paper demonstrates the effectiveness of Sequential Optimization and Reliability Assessment (SORA) method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive Finite Element simulations

  5. An introduction to harmony search optimization method

    CERN Document Server

    Wang, Xiaolei; Zenger, Kai

    2014-01-01

    This brief provides a detailed introduction, discussion and bibliographic review of the nature1-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants and also their drawbacks. The authors show how weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researche

  6. Optimal boarding method for airline passengers

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H.; /Fermilab

    2008-02-01

    Using a Markov Chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four, and possibly more depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimal strategy. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.
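
    The search loop described above can be sketched with a Metropolis-style annealer over boarding orders. The cost function and cooling schedule below are illustrative stand-ins, not the paper's actual aircraft simulator:

```python
import math
import random

def boarding_cost(order):
    # Toy stand-in for the boarding simulator: a passenger is delayed by every
    # earlier passenger who stows luggage at their row or closer to the door
    # (row numbers increase toward the back of the plane).
    return sum(
        sum(1 for earlier in order[:i] if earlier <= row)
        for i, row in enumerate(order)
    )

def optimize_order(rows, steps=20000, seed=1):
    rng = random.Random(seed)
    order = rows[:]
    cost = boarding_cost(order)
    for step in range(steps):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]        # propose a swap
        new_cost = boarding_cost(order)
        temp = 2.0 * (1.0 - step / steps) + 1e-9       # simple cooling schedule
        # Metropolis rule: always accept improvements, sometimes accept worse
        # moves early on to escape local minima.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            order[i], order[j] = order[j], order[i]    # undo the swap
    return order, cost

rows = [r for r in range(1, 11) for _ in range(3)]     # 10 rows, 3 seats per row
best_order, best_cost = optimize_order(rows)
front_to_back = boarding_cost(sorted(rows))            # naive front-to-back order
print(best_cost, front_to_back)
```

    Even under this crude cost model, the sampled order beats front-to-back boarding by a wide margin, mirroring the paper's qualitative result.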

  7. Optimization methods applied to hybrid vehicle design

    Science.gov (United States)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  8. Optimization Methods in Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    L. Povoda

    2016-09-01

    Full Text Available Emotions play a big role in our everyday communication and contain important information. This work describes a novel method of automatic emotion recognition from textual data. The method is based on well-known data mining techniques, a novel approach based on a parallel run of SVM (Support Vector Machine) classifiers, text preprocessing and 3 optimization methods: sequential elimination of attributes, parameter optimization based on token groups, and a method of extending training data sets during practical testing and final tuning for the production release. We outperformed current state-of-the-art methods, and the results were validated on bigger data sets (3346 manually labelled samples), which is less prone to overfitting when compared to related works. The accuracy achieved in this work is 86.89% for recognition of 5 emotional classes. The experiments were performed in a real-world helpdesk environment processing the Czech language, but the proposed methodology is general and can be applied to many different languages.

  9. Path optimization method for the sign problem

    Directory of Open Access Journals (Sweden)

    Ohnishi Akira

    2018-01-01

    Full Text Available We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When we have singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path, which is designed not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f(t) ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
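
    A one-variable illustration of the idea, using the Fresnel weight exp(iz²) rather than the model from the proceedings (chosen because its optimal path is known in closed form, z = e^{iπ/4} t):

```python
import cmath

def avg_phase_factor(path, dpath, ts, dt):
    # Average phase factor |sum w dz| / sum |w dz| for the weight w(z) = exp(i z^2),
    # integrated along a parameterized path z(t).
    num = 0j
    den = 0.0
    for t in ts:
        w = cmath.exp(1j * path(t) ** 2) * dpath(t) * dt
        num += w
        den += abs(w)
    return abs(num) / den

dt = 0.002
ts = [-10.0 + dt * k for k in range(10001)]          # uniform grid on [-10, 10]

# Original real-axis path z = t: the phases exp(i t^2) cancel violently.
flat = avg_phase_factor(lambda t: t, lambda t: 1.0, ts, dt)

# Optimized path z = exp(i*pi/4) * t: the weight becomes exp(-t^2) >= 0,
# so the sign problem disappears entirely for this toy model.
rot = cmath.exp(1j * cmath.pi / 4)
good = avg_phase_factor(lambda t: rot * t, lambda t: rot, ts, dt)

print(round(flat, 3), round(good, 3))
```

    The average phase factor jumps from a small value on the real axis to essentially 1 on the rotated path, which is exactly the quantity POM maximizes when choosing f(t).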

  10. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro; Nochetto, Ricardo H.; Pauletti, Miguel S.; Verani, Marco

    2012-01-01

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  11. Topology optimization using the finite volume method

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole

    2005-01-01

    in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design...... derivative of the system matrix $\\mathbf K$ and in how one computes the discretized version of certain objective functions. Thus for a cost function for minimum dissipated energy (like minimum compliance for an elastic structure) one obtains an expression $ c = \\mathbf u^\\T \\tilde{\\mathbf K} \\mathbf u...... the arithmetic and harmonic average with the latter being the well known Reuss lower bound. [1] Bendsøe, MP and Sigmund, O 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag [2] Versteeg, HK and Malalasekera, W 1995: An introduction to Computational Fluid Dynamics...

  12. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro

    2012-01-16

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  13. Optimized method for manufacturing large aspheric surfaces

    Science.gov (United States)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. As optical technology develops, the requirements on large-aperture, high-precision aspheric surfaces become more pressing. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has received much attention from researchers. Aiming at the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using a low-hardness tool and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For control over the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium-high frequency errors are handled using a uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.

  14. A Gradient Taguchi Method for Engineering Optimization

    Science.gov (United States)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm finds better elastic constants at a lower computational cost. The proposed algorithm therefore combines good robustness with fast convergence speed as compared to some hybrid genetic algorithms.
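
    A minimal sketch of the hybrid idea on a test function. The L9 orthogonal array is the standard one for 3 factors at 3 levels; the objective, candidate levels, and step size are made up for illustration:

```python
def f(x):
    # Illustrative objective to minimize; the optimum is at (1.0, -2.0, 0.5).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + (x[2] - 0.5) ** 2

# Standard L9 orthogonal array: 9 runs cover 3 factors at 3 levels in a
# balanced way (each level appears 3 times per factor).
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]
levels = [[-3.0, 0.0, 3.0]] * 3           # candidate values for each factor

# Stage 1 (Taguchi-style): for each factor, keep the level with the best
# mean response over the runs in which it appears.
start = []
for factor in range(3):
    best_level = min(range(3), key=lambda lv: sum(
        f([levels[k][row[k]] for k in range(3)])
        for row in L9 if row[factor] == lv))
    start.append(levels[factor][best_level])

# Stage 2: steepest descent from the Taguchi starting point, using a
# forward-difference gradient estimate.
x = start[:]
for _ in range(300):
    grad = []
    for k in range(3):
        xp = x[:]
        xp[k] += 1e-6
        grad.append((f(xp) - f(x)) / 1e-6)
    x = [xk - 0.1 * gk for xk, gk in zip(x, grad)]

print(start, [round(v, 3) for v in x])
```

    The orthogonal-array stage lands in the right basin with only 9 evaluations; the descent stage then refines to the optimum, which is the division of labor the hybrid algorithm exploits.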

  15. Computerized method for rapid optimization of immunoassays

    International Nuclear Information System (INIS)

    Rousseau, F.; Forest, J.C.

    1990-01-01

    The authors have developed a one-step quantitative method for radioimmunoassay optimization. The method is rapid and requires only that a series of saturation curves be performed with different titres of the antiserum. After calculating the saturation point at several antiserum titres using the Scatchard plot, the authors have produced a table that predicts the main characteristics of the standard curve (Bo/T, Bo and T) that will prevail for any combination of antiserum titre and percentage of site saturation. The authors have developed a microcomputer program able to interpolate all the data needed to produce such a table from the results of the saturation curves. This program can also predict the sensitivity of the assay under any experimental conditions, provided the antibody does not discriminate between the labeled and the unlabeled antigen. The authors have tested the accuracy of this optimization table with two in-house RIA systems: 17-β-estradiol and hLH. The results obtained experimentally, including sensitivity determinations, were concordant with those predicted from the optimization table. This method greatly accelerates and improves the process of optimizing radioimmunoassays [fr]

  16. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  17. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.

  18. Method for manufacturing a single crystal nanowire

    NARCIS (Netherlands)

    van den Berg, Albert; Bomer, Johan G.; Carlen, Edwin; Chen, S.; Kraaijenhagen, Roderik Adriaan; Pinedo, Herbert Michael

    2013-01-01

    A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a (100) orientation on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the [110] direction of the device layer; selectively removing

  19. Method for manufacturing a single crystal nanowire

    NARCIS (Netherlands)

    van den Berg, Albert; Bomer, Johan G.; Carlen, Edwin; Chen, S.; Kraaijenhagen, R.A.; Pinedo, Herbert Michael

    2010-01-01

    A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a (100) orientation on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the [110] direction of the device layer; selectively removing

  20. STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Nataša Krejić

    2014-12-01

    Full Text Available This paper presents an overview of gradient based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of stochastic nature or given as the mathematical expectation. Such problems arise in the context of simulation based optimization. The focus of this presentation is on the gradient based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
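
    A small data-fitting example in the spirit of the stochastic approximation setting above. The linear model, noise level, and step-size schedule are illustrative choices:

```python
import random

# Fit y = a*x + b by stochastic gradient descent when only noisy
# single-sample gradients are available.
rng = random.Random(0)
true_a, true_b = 2.0, -1.0
data = [(x, true_a * x + true_b + rng.gauss(0.0, 0.1))
        for x in (rng.uniform(-1.0, 1.0) for _ in range(5000))]

a, b = 0.0, 0.0
for n, (x, y) in enumerate(data, start=1):
    step = 0.5 / n ** 0.6        # Robbins-Monro style decaying step size
    err = (a * x + b) - y        # residual on one noisy sample
    a -= step * err * x          # stochastic gradient of the loss 0.5*err^2
    b -= step * err
print(round(a, 2), round(b, 2))
```

    The decaying step sizes average out the per-sample noise, so the iterates settle near the true coefficients even though no single gradient is exact.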

  1. Bio Inspired Algorithms in Single and Multiobjective Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Burtschy, Bernard

    2014-01-01

    Non-traditional search and optimization methods based on natural phenomena have been proposed recently in order to avoid local or unstable behavior when run towards an optimum state. This paper describes the principles of bio inspired algorithms and reports on Migration Algorithms and Bees...

  2. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to convent......The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast...... to conventional centralised optimal energy flow management systems, herein, focus is set on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual......-consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads; a supervisory control instance can dictate the level of autarchy from the utility grid. Further it is shown that the problem of optimal energy flow management...

  3. Investigation on multi-objective performance optimization algorithm application of fan based on response surface method and entropy method

    Science.gov (United States)

    Zhang, Li; Wu, Kexin; Liu, Yang

    2017-12-01

    A multi-objective performance optimization method is proposed to resolve the trade-off, via a single set of structural parameters, between the static characteristics and the aerodynamic noise of a small fan. In this method, three structural parameters are selected as the optimization variables, and the static pressure efficiency and the aerodynamic noise of the fan are regarded as the multi-objective performance. Furthermore, the response surface method and the entropy method are used to establish the optimization function between the optimization variables and the multi-objective performances. Finally, the optimized model is found where the optimization function reaches its maximum value. Experimental data show that the optimized model not only enhances the static characteristics of the fan but also markedly reduces the noise. The results of the study will provide a reference for the multi-objective performance optimization of other types of rotating machinery.
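
    The entropy-method ingredient can be sketched as follows. The candidate designs and the two objective columns below are invented numbers purely for illustration, not data from the study:

```python
import math

# Entropy weighting: objectives whose values vary more across candidate
# designs carry more information and therefore receive larger weights.
# Columns: [static pressure efficiency (%), noise-reduction score];
# both are to be maximized, and all numbers are made up for the sketch.
designs = [[52.0, 0.30], [55.0, 0.31], [61.0, 0.29], [48.0, 0.32], [66.0, 0.30]]

m = len(designs)                 # number of candidate designs
n = len(designs[0])              # number of objectives
weights = []
for j in range(n):
    col = [row[j] for row in designs]
    p = [v / sum(col) for v in col]                               # share per design
    entropy = -sum(pi * math.log(pi) for pi in p) / math.log(m)   # scaled to [0, 1]
    weights.append(1.0 - entropy)                                 # divergence degree
weights = [w / sum(weights) for w in weights]

# Composite score: entropy-weighted sum of max-normalized objectives.
def score(row):
    return sum(weights[j] * row[j] / max(r[j] for r in designs) for j in range(n))

best = max(designs, key=score)
print([round(w, 3) for w in weights], best)
```

    Because the efficiency column varies far more than the noise column in this toy data, the entropy method assigns it most of the weight, and the composite score then ranks the candidate designs.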

  4. Layout optimization with algebraic multigrid methods

    Science.gov (United States)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods, based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
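
    For reference, the classical CG baseline looks like this on a small SPD system. The 1-D Laplacian-plus-anchor matrix is a stand-in with the same SPD structure as a placement matrix, not a real netlist:

```python
# Conjugate gradients on K x = b for a small sparse SPD matrix: a 1-D
# Laplacian with a diagonal "anchor" term that keeps K positive definite.
n = 50

def matvec(x):
    # Apply K without ever forming the matrix (K is sparse: tridiagonal here).
    y = [0.0] * n
    for i in range(n):
        y[i] = (2.0 + 0.01) * x[i]
        if i > 0:
            y[i] -= x[i - 1]
        if i < n - 1:
            y[i] -= x[i + 1]
    return y

b = [1.0] * n
x = [0.0] * n
r = b[:]                                  # residual r = b - K x (x = 0 initially)
p = r[:]
rs = sum(ri * ri for ri in r)
for _ in range(2 * n):
    Kp = matvec(p)
    alpha = rs / sum(pi * qi for pi, qi in zip(p, Kp))
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    r = [ri - alpha * qi for ri, qi in zip(r, Kp)]
    rs_new = sum(ri * ri for ri in r)
    if rs_new < 1e-24:
        break
    p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
    rs = rs_new

residual = max(abs(yi - bi) for yi, bi in zip(matvec(x), b))
print(residual)
```

    On problems of this size CG converges quickly; the paper's point is that its iteration count grows with problem size, which is where AMG's level hierarchy pays off.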

  5. Single beam pass migmacell method and apparatus

    International Nuclear Information System (INIS)

    Maglich, B.C.; Nering, J.E.; Mazarakis, M.G.; Miller, R.A.

    1976-01-01

    The invention provides improvements in migmacell apparatus and method by dispensing with the need for metastable confinement of injected molecular ions for multiple precession periods. Injected molecular ions undergo a 'single pass' through the reaction volume. By preconditioning the injected beam such that it contains a population distribution of molecules in higher vibrational states than in a normal distribution, injected molecules in the single pass experience collisionless dissociation in the migmacell under magnetic influence, i.e., so-called Lorentz dissociation. Dissociation ions then form atomic migma

  6. Frequency guided methods for demodulation of a single fringe pattern.

    Science.gov (United States)

    Wang, Haixia; Kemao, Qian

    2009-08-17

    Phase demodulation from a single fringe pattern is a challenging but interesting task. A frequency-guided regularized phase tracker and a frequency-guided sequential demodulation method with Levenberg-Marquardt optimization are proposed to demodulate a single fringe pattern. In both methods, the demodulation path is guided by the local frequency, from the highest to the lowest. Since critical points have low local frequency values, they are processed last, so that the spurious sign problem caused by these points is avoided. These two methods can be considered as alternatives to the effective fringe-follower regularized phase tracker. Demodulation results from one computer-simulated and two experimental fringe patterns using the proposed methods are demonstrated. (c) 2009 Optical Society of America

  7. Layout optimization using the homogenization method

    Science.gov (United States)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.

  8. Hydrothermal optimal power flow using continuation method

    International Nuclear Information System (INIS)

    Raoofat, M.; Seifi, H.

    2001-01-01

    The problem of optimal economic operation of hydrothermal electric power systems is solved using a powerful continuation method. While the conventional approach uses fixed generation voltages to avoid convergence problems, in this algorithm they are treated as variables so that better solutions can be obtained. The algorithm is tested on typical 5-bus and 17-bus New Zealand networks. Its capabilities and promising results are assessed
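
    The continuation idea can be sketched on a single scalar equation. The cubic below is a classic textbook example, not the power-flow equations themselves; the homotopy and step count are illustrative assumptions:

```python
# Numerical continuation (homotopy): rather than attacking the target system
# F(x) = 0 directly, trace solutions of
#     H(x, lam) = lam * F(x) + (1 - lam) * (x - x0) = 0
# from the trivial problem at lam = 0 to the target at lam = 1,
# warm-starting Newton's method at each step.
def F(x):
    return x ** 3 - 2.0 * x - 5.0        # target equation, root near x = 2.0946

def dF(x):
    return 3.0 * x ** 2 - 2.0

x0 = 1.0                                  # solution of the trivial problem at lam = 0
x = x0
steps = 20
for k in range(1, steps + 1):
    lam = k / steps
    for _ in range(20):                   # Newton iterations on H(., lam)
        H = lam * F(x) + (1.0 - lam) * (x - x0)
        dH = lam * dF(x) + (1.0 - lam)
        x -= H / dH
print(round(x, 4))
```

    Each small increase in lam leaves the previous solution inside Newton's basin of attraction, which is the same mechanism that keeps the hydrothermal algorithm convergent as the generation voltages are freed up.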

  9. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  10. Lifecycle-Based Swarm Optimization Method for Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Hai Shen

    2014-01-01

    Full Text Available Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though individual organisms die, the species does not perish; moreover, the species develops a stronger ability to adapt to its environment and evolves. LSO simulates the biological lifecycle through six optimization operators: chemotaxis, assimilation, transposition, crossover, selection, and mutation. In addition, the initial population follows a clumped spatial distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems, both unimodal and multimodal, demonstrate the algorithm's performance and stability, while the mechanical design problem tests its practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

  11. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is deciding on appropriate stocks for investment and selecting an optimal portfolio. This is done through the assessment of risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation are used as the risk measure. However, expected asset returns are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected in the winter of 1392 from the top 50 companies in the Tehran Stock Exchange, over the period from April 1388 to June 1393 (Iranian calendar). The results show the superiority of the nonparametric method over the linear programming method, and that the nonparametric method is much faster.
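
    As a rough illustration of the nonparametric (historical-simulation) idea, the sketch below estimates portfolio CVaR directly from scenario returns. The scenario matrix, weights, and confidence level are invented for illustration; this is not the paper's exact formulation.

    ```python
    # Hypothetical historical-simulation estimate of portfolio CVaR:
    # given a matrix of scenario returns and portfolio weights, CVaR at
    # level alpha is the average of the worst (1 - alpha) fraction of
    # portfolio losses (negated returns).

    def portfolio_cvar(scenario_returns, weights, alpha=0.95):
        # Portfolio return in each historical scenario.
        port = [sum(w * r for w, r in zip(weights, row)) for row in scenario_returns]
        losses = sorted(-p for p in port)              # losses, largest last
        k = max(1, int(round((1 - alpha) * len(losses))))
        tail = losses[-k:]                             # worst k scenarios
        return sum(tail) / len(tail)

    # Invented monthly-return scenarios for a two-asset portfolio.
    scenarios = [
        [0.02, 0.01], [-0.03, 0.00], [0.01, -0.02],
        [-0.05, 0.04], [0.03, 0.02], [-0.01, -0.01],
    ]
    print(portfolio_cvar(scenarios, [0.5, 0.5], alpha=0.95))
    ```

    In an actual portfolio optimization, this estimator would sit inside a search over the weights subject to an expected-return constraint.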

  12. A Survey of Methods for Gas-Lift Optimization

    Directory of Open Access Journals (Sweden)

    Kashif Rashid

    2012-01-01

    Full Text Available This paper presents a survey of methods and techniques developed for the solution of the continuous gas-lift optimization problem over the last two decades. These range from isolated single-well analysis all the way to real-time multivariate optimization schemes encompassing all wells in a field. While some methods are clearly limited because they neglect the effects of interdependent wells with common flow lines, others are limited by the efficacy and quality of the solution obtained when dealing with large-scale networks comprising hundreds of difficult-to-produce wells. The aim of this paper is to provide an insight into the approaches developed and to highlight the challenges that remain.

  13. The construction of optimal stated choice experiments theory and methods

    CERN Document Server

    Street, Deborah J

    2007-01-01

    The most comprehensive and applied discussion of stated choice experiment constructions available. The Construction of Optimal Stated Choice Experiments provides an accessible introduction to the construction methods needed to create the best possible designs for use in modeling decision-making. Many aspects of the design of a generic stated choice experiment are independent of its area of application, and until now there has been no single book describing these constructions. This book begins with a brief description of the various areas where stated choice experiments are applicable, including marketing and health economics, transportation, environmental resource economics, and public welfare analysis. The authors focus on recent research results on the construction of optimal and near-optimal choice experiments and conclude with guidelines and insight on how to properly implement these results. Features of the book include: Construction of generic stated choice experiments for the estimation of main effects...

  14. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as part of the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.

  15. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    example, this research proved the sustainability of the proposed integrated optimization parameters of transport systems. This approach can be applied not only to MTS, but also to other transport systems. Originality. The basis of the complex optimization of transport presented here is a new system of universal scientific methods and approaches that ensure high accuracy and authenticity of calculations when simulating transport systems and transport networks, taking into account the dynamics of their development. Practical value. The development of the theoretical and technological bases of the complex optimization of transport makes it possible to create a scientific tool that supports automated simulation and calculation of the technical and economic structure and operation of different transport objects, including their infrastructure.

  16. Optimal management strategies in variable environments: Stochastic optimal control methods

    Science.gov (United States)

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. 
Similarities could be seen in the influence of both
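
    The finite-state, finite-action, infinite-horizon discounted Markov decision process mentioned above can be solved with standard value iteration. The sketch below uses an invented two-state, two-action toy model (the states, rewards, and transition probabilities are hypothetical, not the paper's calibrated shrub model).

    ```python
    # Toy value iteration for an infinite-horizon discounted MDP, in the
    # spirit of the finite-state, finite-action formulation described
    # above. Two hypothetical "vigor" states and two actions (rest /
    # defoliate); numbers are invented for illustration.

    def value_iteration(P, R, gamma=0.9, tol=1e-10):
        n = len(R)                       # number of states
        V = [0.0] * n
        while True:
            V_new = [max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(n))
                         for a in range(len(R[s])))
                     for s in range(n)]
            if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
                return V_new
            V = V_new

    # P[s][a][t]: transition probability; R[s][a]: expected immediate yield.
    P = [[[0.8, 0.2], [0.3, 0.7]],       # state 0: rest / defoliate
         [[0.6, 0.4], [0.1, 0.9]]]       # state 1: rest / defoliate
    R = [[0.0, 1.0], [0.5, 2.0]]
    V = value_iteration(P, R)
    print([round(v, 3) for v in V])
    ```

    The discount factor gamma plays the role of the discounting of future returns discussed in the abstract; setting it closer to 1 approximates the time-averaged-yield objective.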

  17. Global optimization methods for engineering design

    Science.gov (United States)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality, but a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, since no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized so that the entire design space need not be searched for the solution, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, though more testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.

  18. PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    Constantin D. STANESCU

    2016-05-01

    Full Text Available The paper presents an original method of optimizing products based on the analysis of optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, an optimal product or material can be obtained easily and quickly.

  19. Hybrid intelligent optimization methods for engineering problems

    Science.gov (United States)

    Pehlivanoglu, Yasin Volkan

    The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct general patterns. Moreover, mathematical modeling of a natural phenomenon is almost always based on differentials. Differential equations are constructed with relative increments among the factors related to yield; therefore, the gradients of these increments are essential to search the yield space. However, the landscape of the yield is not a simple one and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear, and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) are popular non-gradient-based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based method is the nature of the search methodology: randomness is essential to the search in GA or PSO, so they are also called stochastic optimization methods. These algorithms are simple and robust and have high fidelity. However, they suffer from similar defects, such as premature convergence, lower accuracy, or long computational times. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators that provide the necessary variety within a population. After having a close scrutiny of the diversity concept based on qualification and
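
    A minimal PSO of the kind discussed above can be sketched as follows. The inertia and acceleration constants are typical textbook values, not taken from the dissertation, and the sphere function stands in for a real design objective.

    ```python
    import random

    # Minimal particle swarm optimization sketch minimizing the 2-D
    # sphere function. w (inertia), c1 (cognitive), c2 (social) are
    # common textbook settings; all values here are illustrative.

    def pso(f, dim=2, n_particles=20, iters=200, seed=1):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                  # personal bests
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
        w, c1, c2 = 0.7, 1.5, 1.5
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()   # the stochastic element
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = f(pos[i])
                if val < pbest_val[i]:                    # update personal best
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:                   # update global best
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    best, best_val = pso(lambda x: sum(v * v for v in x))
    print(best_val)
    ```

    The loss of diversity discussed above shows up here as all particles collapsing toward gbest; mutation-style restarts would counteract it.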

  20. Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns

    Directory of Open Access Journals (Sweden)

    Jinwei Gu

    2014-01-01

    Full Text Available This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constant if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and subject to a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
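
    The SPT result has a simple deterministic counterpart that is easy to verify in code: with no breakdowns, sequencing jobs in nondecreasing processing time minimizes the total completion time on a single machine. The job data below are invented, and the stochastic-breakdown setting of the paper is not simulated.

    ```python
    # Deterministic illustration of the SPT rule on a single machine.
    # Total completion time = sum of each job's completion instant.

    def total_completion_time(order):
        t, total = 0, 0
        for p in order:
            t += p            # job completes at time t
            total += t
        return total

    jobs = [4, 1, 3, 2]                    # hypothetical processing times
    spt = sorted(jobs)                     # Shortest Processing Time first
    print(total_completion_time(spt))      # SPT order
    print(total_completion_time(jobs))     # an arbitrary order, no better
    ```

    An exchange argument shows why this works: swapping any adjacent pair that violates the SPT order never increases the total completion time.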

  1. The methods and applications of optimization of radiation protection

    International Nuclear Information System (INIS)

    Liu Hua

    2007-01-01

    Optimization is the most important principle in radiation protection. The present article outlines the concept and up-to-date progress of optimization of protection, introduces some methods used in current optimization analysis, and presents various applications of optimization of protection. The author emphasizes that optimization of protection is a forward-looking iterative process aimed at preventing exposures before they occur. (author)

  2. Circular SAR Optimization Imaging Method of Buildings

    Directory of Open Access Journals (Sweden)

    Wang Jian-feng

    2015-12-01

    Full Text Available The Circular Synthetic Aperture Radar (CSAR) can obtain the entire scattering properties of targets because of its ability to observe over 360°. In this study, an optimized CSAR imaging algorithm for buildings is proposed by applying a combination of coherent and incoherent processing techniques. FEKO software is used to construct the electromagnetic scattering models and simulate the radar echo. The FEKO imaging results are compared with the isotropic scattering results, and from this comparison the optimal azimuth coherent accumulation angle for CSAR imaging of buildings is obtained. In practice, the scattering directions of buildings are unknown; therefore, we divide the 360° CSAR echo into many overlapping small-angle sub-aperture echoes and then perform an imaging procedure on each sub-aperture. The sub-aperture imaging results are then fused incoherently to obtain the all-around image. The polarimetry decomposition method is used to decompose the all-around image and successfully retrieve the edge information of the buildings. The proposed method is validated with P-band airborne CSAR data from Sichuan, China.

  3. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curriculum activities must be joined by every student in Malaysia, and these activities bring a lot of benefits to the students. By joining these activities, the students can learn about time management and develop many useful skills. This project focuses on the selection of co-curriculum activities in a secondary school using two optimization methods: the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight of each activity based on the 3 chosen criteria, which are soft skills, interesting activities and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO Software version 15.0. Two priorities were considered. The first priority, minimizing the budget for the activities, is achieved, since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curriculum activities, is also achieved: 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activities selection problems.
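
    The AHP weighting step described above can be sketched with the common geometric-mean approximation to the principal eigenvector. The pairwise comparison matrix below for the three criteria is hypothetical, not the questionnaire data from the study.

    ```python
    import math

    # AHP weight derivation via the row geometric-mean approximation:
    # each criterion's weight is the geometric mean of its row in the
    # pairwise comparison matrix, normalized to sum to 1.

    def ahp_weights(M):
        gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
        s = sum(gm)
        return [g / s for g in gm]

    # Hypothetical comparisons: soft skills vs interesting activities
    # vs performance (Saaty's 1-9 scale).
    M = [[1.0,   3.0, 5.0],
         [1/3.0, 1.0, 2.0],
         [1/5.0, 1/2.0, 1.0]]
    w = ahp_weights(M)
    print([round(x, 3) for x in w])
    ```

    With these invented judgments, soft skills receives the largest weight, matching the ordering reported in the abstract; the real weights would of course come from the survey responses.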

  4. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    Science.gov (United States)

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level (leader) problem, subject to the optimality of a lower-level (follower) problem. Several problems from the domains of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. The increasing size and complexity of such problems have prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine the advantages of global and local search strategies to identify optimum solutions at low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both the upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with those of other established algorithms to demonstrate the efficacy of the proposed approach.
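
    The nested structure described above can be illustrated with a brute-force sketch (not the BLMA algorithm itself): for every upper-level decision, the follower's problem is solved to optimality before the leader's objective is evaluated. Both quadratic objectives and the search grid are invented for illustration.

    ```python
    # Brute-force nested solution of a toy bilevel problem:
    # leader minimizes (x - 3)^2 + (y - 2)^2 over x, where y must be
    # the follower's optimum y*(x) = argmin_y (y - x)^2.

    def follower_best(x, ys):
        # Lower level: the follower reacts optimally to the leader's x.
        return min(ys, key=lambda y: (y - x) ** 2)

    def solve_bilevel(xs, ys):
        best = None
        for x in xs:
            y = follower_best(x, ys)             # enforce follower optimality
            fu = (x - 3) ** 2 + (y - 2) ** 2     # leader objective at (x, y*(x))
            if best is None or fu < best[2]:
                best = (x, y, fu)
        return best

    grid = [v / 10.0 for v in range(0, 51)]      # decisions 0.0 .. 5.0
    x, y, fu = solve_bilevel(grid, grid)
    print(x, y, fu)
    ```

    Every leader evaluation costs a full follower solve, which is exactly why the function-evaluation count of bilevel methods grows so quickly and why memetic hybrids aim to economize on it.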

  5. Localization of multilayer networks by optimized single-layer rewiring.

    Science.gov (United States)

    Jalan, Sarika; Pradhan, Priodyuti

    2018-04-01

    We study localization properties of principal eigenvectors (PEVs) of multilayer networks (MNs). Starting with a multilayer network corresponding to a delocalized PEV, we rewire the network edges using an optimization technique such that the PEV of the rewired multilayer network becomes more localized. The framework allows us to scrutinize structural and spectral properties of the networks at various localization points during the rewiring process. We show that rewiring only one layer is enough to attain a MN having a highly localized PEV. Our investigation reveals that a single edge rewiring of the optimized MN can lead to the complete delocalization of a highly localized PEV. This sensitivity in the localization behavior of PEVs is accompanied with the second largest eigenvalue lying very close to the largest one. This observation opens an avenue to gain a deeper insight into the origin of PEV localization of networks. Furthermore, analysis of multilayer networks constructed using real-world social and biological data shows that the localization properties of these real-world multilayer networks are in good agreement with the simulation results for the model multilayer network. This paper is relevant to applications that require understanding propagation of perturbation in multilayer networks.

  6. Optimized variational analysis scheme of single Doppler radar wind data

    Science.gov (United States)

    Sasaki, Yoshi K.; Allen, Steve; Mizuno, Koki; Whitehead, Victor; Wilk, Kenneth E.

    1989-01-01

    A computer scheme for extracting singularities has been developed and applied to single Doppler radar wind data. The scheme is planned for use in real-time wind and singularity analysis and forecasting. The method, known as Doppler Operational Variational Extraction of Singularities is outlined, focusing on the principle of local symmetry. Results are presented from the application of the scheme to a storm-generated gust front in Oklahoma on May 28, 1987.

  7. Method of stabilizing single channel analyzers

    International Nuclear Information System (INIS)

    Fasching, G.E.; Patton, G.H.

    1975-01-01

    A method and apparatus to reduce the drift of single channel analyzers are described. Essentially, this invention employs a time-sharing or multiplexing technique to ensure that the outputs from two single channel analyzers (SCAs) maintain the same count ratio regardless of variations in the threshold voltage source. The multiplexing is accomplished when a flip-flop, actuated by a clock, changes state to switch the outputs from the individual SCAs before these outputs are sent to a ratio counting scaler. In the particular system embodiment disclosed to illustrate this invention, the sulfur content of coal is determined by subjecting the coal to radiation from a neutron-producing source. A photomultiplier and detector system converts the transmitted gamma radiation to an analog voltage signal and sends this signal, after amplification, to an SCA system that contains the invention. Therein, at least two single channel analyzers scan the analog signal over different parts of a spectral region. The two outputs may then be sent to a digital multiplexer so that the output from the multiplexer contains counts falling within two distinct segments of the region. By dividing the counts from the multiplexer by each other, the percentage of sulfur within the coal sample under observation may be determined. (U.S.)

  8. Multiobjective Optimization Modeling Approach for Multipurpose Single Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Iosvany Recio Villa

    2018-04-01

    Full Text Available The water resources planning and management discipline recognizes the importance of a reservoir's carryover storage. However, mathematical models for reservoir operation that include carryover storage are scarce. This paper presents a novel multiobjective optimization modeling framework that uses the ε-constraint method and genetic algorithms as optimization techniques for the operation of multipurpose single reservoirs, including carryover storage. The carryover storage was conceived by modifying Kritsky and Menkel's method for reservoir design at the operational stage. The main objective function minimizes the cost of the total annual water shortage for irrigation areas connected to a reservoir, while the secondary one maximizes its energy production. The model includes operational constraints for the reservoir, Kritsky and Menkel's method, the irrigation areas, and the hydropower plant. The study is applied to the Carlos Manuel de Céspedes reservoir, establishing a 12-month planning horizon and an annual reliability of 75%. The results clearly demonstrate the applicability of the model, yielding monthly releases from the reservoir that include the carryover storage, the degree of reservoir inflow regulation, water shortages in irrigation areas, and the energy generated by the hydroelectric plant. The main product is an operational graph that includes zones as well as rule and guide curves, which are used as triggers for long-term reservoir operation.
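
    The ε-constraint idea used in the model can be illustrated with a toy trade-off between irrigation shortage and energy production: the secondary objective is maximized subject to the primary objective staying within a bound ε, and sweeping ε traces the Pareto front. The water balance, demand, and objective functions below are invented, not the Carlos Manuel de Céspedes data.

    ```python
    # Toy epsilon-constraint sweep: water W is split between irrigation
    # (allocation x) and hydropower (W - x). Hypothetical numbers.
    W, DEMAND = 10.0, 8.0

    def shortage_cost(x):             # squared irrigation shortfall (primary)
        return max(0.0, DEMAND - x) ** 2

    def energy(x):                    # remaining water drives the turbines (secondary)
        return 5.0 * (W - x)

    allocs = [a / 2.0 for a in range(0, 21)]      # allocations 0.0 .. 10.0
    results = []
    for eps in (0.0, 1.0, 4.0, 16.0):             # bounds on shortage cost
        feasible = [a for a in allocs if shortage_cost(a) <= eps]
        best = max(feasible, key=energy)          # maximize energy within the bound
        results.append((eps, best, energy(best)))
    for row in results:
        print(row)
    ```

    Each ε yields one Pareto-optimal compromise: loosening the shortage bound releases water to the turbines and increases energy, which is the trade-off the reservoir model resolves at full scale with a genetic algorithm instead of a grid.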

  9. Single Layer Extended Release Two-in-One Guaifenesin Matrix Tablet: Formulation Method, Optimization, Release Kinetics Evaluation and Its Comparison with Mucinex® Using Box-Behnken Design.

    Science.gov (United States)

    Morovati, Amirhosein; Ghaffari, Alireza; Erfani Jabarian, Lale; Mehramizi, Ali

    2017-01-01

    Guaifenesin, a highly water-soluble active (50 mg/mL), is classified as a BCS class I drug. Owing to its poor flowability and compressibility, formulating tablets, especially high-dose ones, may be a challenge, and direct compression may not be feasible. The bilayer tablet technology applied to Mucinex® endures challenges in delivering a robust formulation. To overcome the challenges involved in bilayer-tablet manufacturing and powder compressibility, an optimized single layer tablet prepared from a binary mixture (two-in-one), mimicking the dual drug-release character of Mucinex®, was proposed. A 3-factor, 3-level Box-Behnken design was applied to optimize seven dependent variables (release "%" at 1, 2, 4, 6, 8, 10 and 12 h) with respect to different levels of the independent ones (X1: cetyl alcohol, X2: Starch 1500®, X3: HPMC K100M amounts). Two granule portions were prepared using melt and wet granulation and blended together prior to compression. An optimum formulation was obtained (X1: 37.10, X2: 2, X3: 42.49 mg), with a desirability function of 0.616. The f2 and f1 between the release profiles of Mucinex® and the optimum formulation were 74 and 3, respectively. An n-value of about 0.5 for both the optimum and Mucinex® formulations indicated a diffusion-controlled (Fickian) release mechanism. However, raising HPMC K100M to 70 mg together with cetyl alcohol at 60 mg led to first-order kinetics (n = 0.6962). The K values of 1.56 represented identical burst drug release. Cetyl alcohol and Starch 1500® modulated guaifenesin release from the HPMC K100M matrices while, owing to their binding properties, also improving its poor flowability and compressibility.

  10. On Best Practice Optimization Methods in R

    Directory of Open Access Journals (Sweden)

    John C. Nash

    2014-09-01

    Full Text Available R (R Core Team 2014) provides a powerful and flexible system for statistical computations. It has a default-install set of functionality that can be expanded by the use of several thousand add-in packages as well as user-written scripts. While R is itself a programming language, it has proven relatively easy to incorporate programs in other languages, particularly Fortran and C. Success, however, can lead to its own costs:
    • Users face a confusion of choice when trying to select packages in approaching a problem.
    • A need to maintain workable examples using early methods may mean some tools offered as a default may be dated.
    • In an open-source project like R, how to decide what tools offer "best practice" choices, and how to implement such a policy, present a serious challenge.
    We discuss these issues with reference to the tools in R for nonlinear parameter estimation and optimization, though for the present article `optimization` will be limited to function minimization of essentially smooth functions with at most bounds constraints on the parameters. We abbreviate this class of problems as NLPE. We believe that the concepts proposed are transferable to other classes of problems seen by R users.

  11. Online Adaptive Optimal Control of Vehicle Active Suspension Systems Using Single-Network Approximate Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Zhi-Jun Fu

    2017-01-01

    Full Text Available In view of the performance requirements (e.g., ride comfort, road holding, and suspension space limitation) for vehicle suspension systems, this paper proposes an adaptive optimal control method for a quarter-car active suspension system using the approximate dynamic programming (ADP) approach. The online optimal control law is obtained by using a single adaptive critic neural network (NN) to approximate the solution of the Hamilton-Jacobi-Bellman (HJB) equation. Stability of the closed-loop system is proved by Lyapunov theory. Compared with the classic linear quadratic regulator (LQR) approach, the proposed ADP-based adaptive optimal control method demonstrates improved performance in the presence of parametric uncertainties (e.g., sprung mass) and unknown road displacement. Numerical simulation results of a sedan suspension system are presented to verify the effectiveness of the proposed control strategy.

  12. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single channel signals are investigated and improvements for each method are suggested in this work. Firstly, the single channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR; however, it cannot handle signals containing colored noise. Conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes notable improvements to both methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is greatly improved.
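
    The diagonal-loading step mentioned above amounts to adding a small multiple of the identity to the sample covariance matrix, compressing its eigenvalue spread before the information-theoretic criterion is applied. A minimal sketch follows; the loading factor and the 2x2 matrix are arbitrary examples, not the paper's settings.

    ```python
    # Diagonal loading of a sample covariance matrix R:
    # R_loaded = R + lambda * I, with lambda chosen as a fraction of the
    # average diagonal power. Pure Python, for illustration only.

    def diagonal_loading(R, load_factor=0.1):
        n = len(R)
        trace = sum(R[i][i] for i in range(n))
        lam = load_factor * trace / n            # loading ~ average power
        return [[R[i][j] + (lam if i == j else 0.0) for j in range(n)]
                for i in range(n)]

    R = [[2.0, 0.5], [0.5, 1.0]]                 # toy sample covariance
    R_loaded = diagonal_loading(R)
    print(R_loaded)
    ```

    Only the diagonal changes; the off-diagonal correlations are untouched, which is why loading stabilizes eigenvalue-based criteria without distorting the signal subspace much.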

  13. Numerical methods and optimization a consumer guide

    CERN Document Server

    Walter, Éric

    2014-01-01

    Initial training in pure and applied sciences tends to present problem-solving as the process of elaborating explicit closed-form solutions from basic principles, and then using these solutions in numerical applications. This approach is only applicable to very limited classes of problems that are simple enough for such closed-form solutions to exist. Unfortunately, most real-life problems are too complex to be amenable to this type of treatment. Numerical Methods and Optimization – A Consumer Guide presents methods for dealing with them. Shifting the paradigm from formal calculus to numerical computation, the text makes it possible for the reader to:
    · discover how to escape the dictatorship of those particular cases that are simple enough to receive a closed-form solution, and thus gain the ability to solve complex, real-life problems;
    · understand the principles behind recognized algorithms used in state-of-the-art numerical software;
    · learn the advantag...

  14. Design optimization of shell-and-tube heat exchangers using single objective and multiobjective particle swarm optimization

    International Nuclear Information System (INIS)

    Elsays, Mostafa A.; Naguib Aly, M; Badawi, Alya A.

    2010-01-01

    The Particle Swarm Optimization (PSO) algorithm is used to optimize the design of shell-and-tube heat exchangers and determine the optimal feasible solutions so as to eliminate trial-and-error during the design process. The design formulation takes into account the area and the total annual cost of heat exchangers as two objective functions together with operating as well as geometrical constraints. The Nonlinear Constrained Single Objective Particle Swarm Optimization (NCSOPSO) algorithm is used to minimize and find the optimal feasible solution for each of the nonlinear constrained objective functions alone, respectively. Then, a novel Nonlinear Constrained Multi-objective Particle Swarm Optimization (NCMOPSO) algorithm is used to minimize and find the Pareto optimal solutions for both of the nonlinear constrained objective functions together. The experimental results show that the two algorithms are very efficient and fast, and can find accurate optimal feasible solutions of the shell-and-tube heat exchanger design optimization problem. (orig.)
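The NCSOPSO/NCMOPSO algorithms themselves are not reproduced here, but the core mechanism of a constrained single-objective PSO can be sketched with a static penalty for constraint violation. The objective, constraint, and all parameter values below are illustrative stand-ins, not the heat-exchanger model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(p):          # illustrative cost surface (not the heat-exchanger cost model)
    x, y = p
    return (x - 2) ** 2 + (y - 1) ** 2

def penalty(p):            # static penalty for the constraint x + y <= 2
    x, y = p
    return 1e3 * max(0.0, x + y - 2) ** 2

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5, 5, (n_particles, 2))
    vel = np.zeros_like(pos)
    fit = np.array([objective(p) + penalty(p) for p in pos])
    pbest, pbest_f = pos.copy(), fit.copy()
    g = pbest[np.argmin(pbest_f)].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        fit = np.array([objective(p) + penalty(p) for p in pos])
        improved = fit < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], fit[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

best = pso()
print(best)   # close to [1.5, 0.5], the constrained optimum
```

The penalized optimum sits on the constraint boundary; more sophisticated constraint handling (feasibility rules, adaptive penalties) replaces the fixed 1e3 weight in production-grade variants.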

  15. Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks

    Science.gov (United States)

    Rai, Man Mohan

    2006-01-01

    Genetic and evolutionary algorithms have been applied to solve numerous problems in engineering design where they have been used primarily as optimization procedures. These methods have an advantage over conventional gradient-based search procedures because they are capable of finding global optima of multi-modal functions and searching design spaces with disjoint feasible regions. They are also robust in the presence of noisy data. Another desirable feature of these methods is that they can efficiently use distributed and parallel computing resources since multiple function evaluations (flow simulations in aerodynamics design) can be performed simultaneously and independently on multiple processors. For these reasons genetic and evolutionary algorithms are being used more frequently in design optimization. Examples include airfoil and wing design and compressor and turbine airfoil design. They are also finding increasing use in multiple-objective and multidisciplinary optimization. This lecture will focus on an evolutionary method that is a relatively new member of the general class of evolutionary methods called differential evolution (DE). This method is easy to use and program and it requires relatively few user-specified constants. These constants are easily determined for a wide class of problems. Fine-tuning the constants will of course yield the solution to the optimization problem at hand more rapidly. DE can be efficiently implemented on parallel computers and can be used for continuous, discrete and mixed discrete/continuous optimization problems. It does not require the objective function to be continuous and is noise tolerant. DE and applications to single and multiple-objective optimization will be included in the presentation and lecture notes. A method for aerodynamic design optimization that is based on neural networks will also be included as a part of this lecture. The method offers advantages over traditional optimization methods. It is more
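The classic DE/rand/1/bin scheme the lecture refers to needs only two user constants (F and CR) beyond the population size, and each generation requires nothing but independent objective evaluations, which is what makes it embarrassingly parallel. A minimal sketch on a toy objective (the sphere function stands in for a flow simulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                     # toy objective; a flow solver would go here
    return float(np.sum(x ** 2))

def differential_evolution(f, dim=5, pop_size=40, F=0.8, CR=0.9, gens=300):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    cost = np.array([f(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)                 # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # guarantee at least one mutant gene
            trial = np.where(cross, mutant, pop[i])  # binomial crossover
            fc = f(trial)
            if fc <= cost[i]:                        # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
        # all trial evaluations within a generation are independent,
        # so they can be farmed out to multiple processors
    return pop[np.argmin(cost)], float(cost.min())

x_best, f_best = differential_evolution(sphere)
print(f_best)   # very close to 0
```

F (differential weight) and CR (crossover rate) are the "relatively few user-specified constants" the abstract mentions; the values 0.8 and 0.9 are common defaults, not ones prescribed by this lecture.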

  16. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    Free Material Optimization (FMO) is a powerful approach for structural optimization in which the design parametrization allows the entire elastic stiffness tensor to vary freely at each point of the design domain. The only requirement imposed on the stiffness tensor lies on its mild necessary...

  17. Adjoint Optimization of a Wing Using the CSRT Method

    NARCIS (Netherlands)

    Straathof, M.H.; Van Tooren, M.J.L.

    2011-01-01

    This paper will demonstrate the potential of the Class-Shape-Refinement-Transformation (CSRT) method for aerodynamically optimizing three-dimensional surfaces. The CSRT method was coupled to an in-house Euler solver and this combination was used in an optimization framework to optimize the ONERA M6

  18. A new optimal seam method for seamless image stitching

    Science.gov (United States)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method, which aims to stitch images with overlapping areas more seamlessly, has been proposed. Since the traditional gradient-domain optimal seam method measures color difference poorly and fusion algorithms take a long time, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method outperforms the traditional gradient-domain optimal seam in eliminating the stitching seam and is more efficient than the multi-band blending algorithm.
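Whatever energy function is used (the paper's HSV-based one, or the traditional gradient-domain difference), the "optimal stitching path" is typically found by dynamic programming over the overlap's energy map. A sketch of that standard mechanism, with a tiny hand-made energy map standing in for the HSV color-difference term:

```python
import numpy as np

def optimal_seam(energy):
    """Minimum-cost top-to-bottom seam through an energy map via dynamic programming."""
    e = np.asarray(energy, dtype=float)
    h, w = e.shape
    cost = e.copy()
    for r in range(1, h):
        left = np.r_[np.inf, cost[r - 1, :-1]]    # predecessor one column left
        right = np.r_[cost[r - 1, 1:], np.inf]    # predecessor one column right
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    seam = [int(np.argmin(cost[-1]))]             # backtrack from cheapest bottom cell
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam.append(lo + int(np.argmin(cost[r, lo:hi])))
    return seam[::-1]

# toy energy map: low values mark where the overlapping images agree
energy = np.array([[5, 1, 5],
                   [5, 1, 5],
                   [5, 5, 1]])
print(optimal_seam(energy))   # [1, 1, 2]
```

The returned column indices trace the seam; the paper's pixel-correction and weighted-average steps would then smooth the transition along this path.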

  19. A Review of Design Optimization Methods for Electrical Machines

    Directory of Open Access Journals (Sweden)

    Gang Lei

    2017-11-01

    Full Text Available Electrical machines are the hearts of many appliances, industrial equipment and systems. In the context of global sustainability, they must fulfill various requirements, not only physically and technologically but also environmentally. Therefore, their design optimization process becomes more and more complex as more engineering disciplines/domains and constraints are involved, such as electromagnetics, structural mechanics and heat transfer. This paper aims to present a review of the design optimization methods for electrical machines, including design analysis methods and models, optimization models, algorithms and methods/strategies. Several efficient optimization methods/strategies are highlighted with comments, including surrogate-model based and multi-level optimization methods. In addition, two promising and challenging topics in both academic and industrial communities are discussed, and two novel optimization methods are introduced for advanced design optimization of electrical machines. First, a system-level design optimization method is introduced for the development of advanced electric drive systems. Second, a robust design optimization method based on the design for six-sigma technique is introduced for high-quality manufacturing of electrical machines in production. Meanwhile, a proposal is presented for the development of a robust design optimization service based on industrial big data and cloud computing services. Finally, five future directions are proposed, including smart design optimization method for future intelligent design and production of electrical machines.

  20. Method of optimization onboard communication network

    Science.gov (United States)

    Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.

    2018-02-01

    In this article, optimization levels for an onboard communication network (OCN) are proposed. We define the basic parameters necessary for the evaluation and comparison of modern OCNs, and we identify a set of initial data for possible modeling of the OCN. We also propose a mathematical technique, based on the principles and ideas of binary programming, for implementing the OCN optimization procedure. It is shown that the binary programming technique yields an inherently optimal solution for avionics tasks. An example applying the proposed approach to the problem of device assignment in an OCN is considered.
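Device assignment as a binary program can be made concrete with a toy instance: binary variables decide which bus each device attaches to, subject to bandwidth capacity, minimizing a placement cost. The demands, capacities, and costs below are invented for illustration, and exhaustive 0/1 search stands in for a real binary-programming solver:

```python
from itertools import product

# Hypothetical data: bandwidth demand of 4 devices, capacity of 2 buses,
# and cost[i][j] of placing device i on bus j.
demand = [30, 50, 20, 40]
capacity = [80, 80]
cost = [[1, 3], [2, 1], [3, 2], [1, 2]]

best, best_cost = None, float("inf")
for assign in product(range(2), repeat=4):    # all 2^4 binary assignments
    load = [0, 0]
    total = 0
    for dev, bus in enumerate(assign):
        load[bus] += demand[dev]
        total += cost[dev][bus]
    feasible = all(load[j] <= capacity[j] for j in range(2))
    if feasible and total < best_cost:
        best, best_cost = assign, total

print(best, best_cost)   # (0, 1, 1, 0) 5
```

Each device independently takes its cheapest bus here and the capacities happen to hold; in general the capacity coupling is what makes the 0/1 program nontrivial and motivates dedicated binary-programming techniques rather than enumeration.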

  1. Single-cell qPCR on dispersed primary pituitary cells -an optimized protocol

    Directory of Open Access Journals (Sweden)

    Haug Trude M

    2010-11-01

    Full Text Available Abstract Background The incidence of false positives is a potential problem in single-cell PCR experiments. This paper describes an optimized protocol for single-cell qPCR measurements in primary pituitary cell cultures following patch-clamp recordings. Two different cell harvesting methods were assessed using both the GH4 prolactin-producing cell line from rat, and primary cell culture from fish pituitaries. Results Harvesting whole cells followed by cell lysis and qPCR performed satisfactorily on the GH4 cell line. However, harvesting of whole cells from primary pituitary cultures regularly produced false positives, probably due to RNA leakage from cells ruptured during the dispersion of the pituitary cells. To reduce RNA contamination affecting the results, we optimized the conditions by harvesting only the cytosol through a patch pipette, subsequent to electrophysiological experiments. Two important factors proved crucial for reliable harvesting. First, silanizing the patch pipette glass prevented foreign extracellular RNA from attaching to charged residues on the glass surface. Second, substituting the commonly used perforating antibiotic amphotericin B with β-escin allowed efficient cytosol harvest without losing the gigaseal. Importantly, the two harvesting protocols revealed no difference in RNA isolation efficiency. Conclusion Depending on the cell type and preparation, validation of the harvesting technique is extremely important as contaminations may give false positives. Here we present an optimized protocol allowing secure harvesting of RNA from single cells in primary pituitary cell culture following perforated whole cell patch clamp experiments.

  2. A simple method to optimize HMC performance

    CERN Document Server

    Bussone, Andrea; Drach, Vincent; Hansen, Martin; Hietanen, Ari; Rantaharju, Jarno; Pica, Claudio

    2016-01-01

    We present a practical strategy to optimize a set of Hybrid Monte Carlo parameters in simulations of QCD and QCD-like theories. We specialize to the case of mass-preconditioning, with multiple time-step Omelyan integrators. Starting from properties of the shadow Hamiltonian we show how the optimal setup for the integrator can be chosen once the forces and their variances are measured, assuming that those only depend on the mass-preconditioning parameter.

  3. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
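The HMCR, PAR, and BW parameters studied in the record are easiest to see in a bare harmony search loop. This sketch minimizes a generic quadratic rather than structural compliance, and the parameter values are common defaults, not the ranges the paper identifies:

```python
import numpy as np

rng = np.random.default_rng(2)

def harmony_search(f, dim=2, lo=-5.0, hi=5.0,
                   hms=20, hmcr=0.9, par=0.3, bw=0.2, iters=5000):
    """Minimal harmony search; HM = harmony memory, worst member replaced on improvement."""
    hm = rng.uniform(lo, hi, (hms, dim))
    fit = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration (HMCR)
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:               # pitch adjustment (PAR)
                    new[d] += bw * rng.uniform(-1, 1)   # within bandwidth BW
            else:                                    # random re-selection
                new[d] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        fn = f(new)
        worst = int(np.argmax(fit))
        if fn < fit[worst]:
            hm[worst], fit[worst] = new, fn
    best = int(np.argmin(fit))
    return hm[best], float(fit[best])

x_best, f_best = harmony_search(lambda v: float((v[0] - 1) ** 2 + (v[1] + 2) ** 2))
print(x_best, f_best)   # near [1, -2], near 0
```

In the topology-optimization setting of the record, each harmony would be a full density layout and f the compliance from a finite element analysis, with filtering applied to the improvised design.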

  4. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  5. Element stacking method for topology optimization with material-dependent boundary and loading conditions

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Park, Y.K.; Kim, Y.Y.

    2007-01-01

    A new topology optimization scheme, called the element stacking method, is developed to better handle design optimization involving material-dependent boundary conditions and selection of elements of different types. If these problems are solved by existing standard approaches, complicated finite element models or topology optimization reformulation may be necessary. The key idea of the proposed method is to stack multiple elements on the same discretization pixel and select a single or no element. In this method, stacked elements on the same pixel have the same coordinates but may have independent degrees of freedom. Some test problems are considered to check the effectiveness of the proposed stacking method.

  6. FUZZY LOGIC BASED OPTIMIZATION OF CAPACITOR VALUE FOR SINGLE PHASE OPEN WELL SUBMERSIBLE INDUCTION MOTOR

    Directory of Open Access Journals (Sweden)

    R. Subramanian

    2011-01-01

    Full Text Available Purpose – The aim of this paper is to optimize the capacitor value of a single-phase open well submersible motor operating under extreme voltage conditions using a fuzzy logic optimization technique, compared with the no-load volt-ampere method. This is done by keeping the displacement angle (α) between the main winding and the auxiliary winding near 90° and the phase angle (φ) between the supply voltage and line current near 0°. The optimization work is carried out using the Fuzzy Logic Toolbox built on the MATLAB technical computing environment with Simulink software. Findings – The optimum capacitor value obtained is used with a motor and tested under different supply voltage conditions. The vector diagrams obtained from the experimental test results indicate that the performance is improved over the existing value. Originality/value – This method will be highly useful for practicing design engineers in selecting the optimum capacitance value for single-phase induction motors to achieve the best performance when operating at extreme supply voltage conditions.

  7. Review of design optimization methods for turbomachinery aerodynamics

    Science.gov (United States)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization for solving real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods; (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods; (3) gradient-based optimization methods for compressors and turbines; and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future of turbomachinery design optimization.

  8. Multi-objective compared to single-objective optimization with application to model validation and uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Riegert, R.; Krosche, M.; Stekolschikov, K. [Scandpower Petroleum Technology GmbH, Hamburg (Germany); Fahimuddin, A. [Technische Univ. Braunschweig (Germany)

    2007-09-13

    History matching in reservoir simulation, well location and production optimization, etc., is generally a multi-objective optimization problem. The problem statement of history matching for a realistic field case includes many field and well measurements in time and type, e.g. pressure measurements, fluid rates, events such as water and gas break-throughs, etc. Uncertainty parameters modified as part of the history matching process have varying impact on the improvement of the match criteria. Competing match criteria often reduce the likelihood of finding an acceptable history match. It is an engineering challenge in manual history matching processes to identify competing objectives and to implement the changes required in the simulation model. In production optimization or scenario optimization, the focus on one key optimization criterion such as NPV limits the identification of alternatives and potential opportunities, since multiple objectives are summarized in a predefined global objective formulation. Previous works primarily focus on a specific optimization method. Few works actually concentrate on the objective formulation, and multi-objective optimization schemes have not yet been applied to reservoir simulations. This paper presents a multi-objective optimization approach applicable to reservoir simulation. It addresses the problem of multi-objective criteria in a history matching study and presents analysis techniques identifying competing match criteria. A Pareto optimizer is discussed, and the implementation of that multi-objective optimization scheme is applied to a case study. Results are compared to a single-objective optimization method. (orig.)

  9. A Method for Determining Optimal Residential Energy Efficiency Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2011-04-01

    This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.

  10. An Optimization Method of Passenger Assignment for Customized Bus

    OpenAIRE

    Yang Cao; Jian Wang

    2017-01-01

    This study proposes an optimization method of passenger assignment on customized buses (CB). Our proposed method guarantees benefits to passengers by balancing the elements of travel time, waiting time, delay, and economic cost. The optimization problem was solved using a Branch and Bound (B&B) algorithm based on the shortest path for the selected stations. A simulation-based evaluation of the proposed optimization method was conducted. We find that a CB service can save 38.33% in average tra...

  11. Optimal reload and depletion method for pressurized water reactors

    International Nuclear Information System (INIS)

    Ahn, D.H.

    1984-01-01

    A new method has been developed to automatically reload and deplete a PWR so that both the enriched inventory requirements during the reactor cycle and the cost of reloading the core are minimized. This is achieved through four stepwise optimization calculations: 1) determination of the minimum fuel requirement for an equivalent three-region core model, 2) optimal selection and allocation of fuel assemblies for each of the three regions to minimize the cost of the fresh reload fuel, 3) optimal placement of fuel assemblies to conserve region-wise optimal conditions, and 4) optimal control through poison management to deplete individual fuel assemblies so as to maximize EOC k_eff. Optimizing the fuel cost of reloading and depleting a PWR reactor cycle requires solutions to two separate optimization calculations. One of these minimizes the enriched fuel inventory in the core by optimizing the EOC k_eff. The other minimizes the cost of the fresh reload fuel. Both of these optimization calculations have now been combined to provide a new method for performing an automatic optimal reload of PWRs. The new method differs from previous methods in that the optimization process performs all tasks required to reload and deplete a PWR

  12. Optimal scheduling of micro grids based on single objective programming

    Science.gov (United States)

    Chen, Yue

    2018-04-01

    Faced with the growing demand for electricity and the shortage of fossil fuels, how to optimally schedule a micro-grid has become an important research topic for maximizing its economic, technological and environmental benefits. Under the preconditions that the battery participates in regulation and that the power exchanged between the micro-grid and the main grid does not exceed 150 kW, this paper mainly studies economic scheduling against the load, with the goal of minimizing the electricity cost (including wind curtailment). An optimization model is established and solved by a genetic algorithm. The optimal scheduling scheme is obtained, and the utilization of renewable energy and the impact of the battery participating in regulation are analyzed.
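The structure of such a model (penalized cost, 150 kW exchange cap, battery as decision variable) can be illustrated with an evolutionary loop. Everything numeric below is an invented toy instance, and the loop uses truncation selection plus decaying Gaussian mutation (crossover omitted for brevity), a GA-style sketch rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly data: net load after renewables (kW) and tariff ($/kWh).
net_load = np.array([120.0, 180.0, 90.0, 200.0])
price    = np.array([0.30, 0.50, 0.20, 0.60])
LIMIT = 150.0                                   # grid-exchange cap from the model

def cost(battery):
    """Penalized electricity cost; battery[t] > 0 discharges, < 0 charges."""
    grid = net_load - battery                   # power drawn from the main grid
    c = float(np.sum(price * np.maximum(grid, 0.0)))
    c += 1e3 * float(np.sum(np.maximum(np.abs(grid) - LIMIT, 0.0)))  # exchange cap
    c += 1e3 * max(0.0, float(np.sum(battery)))  # cannot net-discharge over the horizon
    return c

def evolve(pop_size=60, gens=400):
    pop = rng.uniform(-100, 100, (pop_size, len(net_load)))
    for g in range(gens):
        fit = np.array([cost(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]       # truncation selection
        kids = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        kids = kids + rng.normal(0.0, 10.0 * 0.99 ** g, kids.shape)  # decaying mutation
        pop = np.vstack([parents, kids])
    fit = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fit)], float(fit.min())

schedule, total_cost = evolve()
print(total_cost)
```

The evolved schedule discharges in the expensive over-limit hours and charges in the cheap ones, which is the qualitative behavior the record attributes to battery regulation.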

  13. Augmented Lagrangian Method For Discretized Optimal Control ...

    African Journals Online (AJOL)

    In this paper, we are concerned with a one-dimensional, time-invariant optimal control problem whose objective function is quadratic and whose dynamical system is a differential equation with an initial condition. Since most real-life problems are nonlinear and their analytical solutions are not readily available, we resolve to ...

  14. METHOD FOR OPTIMIZING THE ENERGY OF PUMPS

    NARCIS (Netherlands)

    Skovmose Kallesøe, Carsten; De Persis, Claudio

    2013-01-01

    The device for energy optimization in the operation of several centrifugal pumps controlled in rotational speed, in a hydraulic installation, begins by determining which pumps, as pilot pumps, are assigned directly to a consumer and which pumps are hydraulically connected in series upstream of

  15. State space Newton's method for topology optimization

    DEFF Research Database (Denmark)

    Evgrafov, Anton

    2014-01-01

    0/1-type constraints on the design field through penalties in many topology optimization approaches. We test the algorithm on the benchmark problems of dissipated power minimization for Stokes flows, and in all cases the algorithm outperforms the traditional first order reduced space/nested approaches...

  16. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Y.; Borland, Michael

    2017-06-25

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  17. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  18. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Directory of Open Access Journals (Sweden)

    Ningbo Li

    2017-01-01

    Full Text Available In order to improve the optimization accuracy and convergence rate of trajectory optimization for the air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall number of collocation points to improve the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover from midcourse to terminal guidance, and then the optimization model was built. The multi-interval mesh refinement Radau pseudospectral method, with different numbers of collocation points in each mesh interval, was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h-method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  19. Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method

    Directory of Open Access Journals (Sweden)

    Yuchen Guo

    2018-03-01

    Full Text Available This paper presents an optimization-based design method for passive micromixers for immiscible fluids, which means that the Peclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Unlike topology optimization methods with an Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case where mixing is dominated completely by convection, with negligible diffusion. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can deal with the convection-dominated case. Several numerical examples are presented to demonstrate the validity of the proposed method.

  20. Practical optimization of Steiner trees via the cavity method

    Science.gov (United States)

    Braunstein, Alfredo; Muntoni, Anna

    2016-07-01

    The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself limit substantially the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound, that affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations to be able to cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration allows Max-Sum to be transformed into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance on the challenge of the proposed approach was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.

  1. Spectral methods. Fundamentals in single domains

    International Nuclear Information System (INIS)

    Canuto, C.

    2006-01-01

    Since the publication of ''Spectral Methods in Fluid Dynamics'' in 1988, spectral methods have become firmly established as a mainstream tool for scientific and engineering computation. The authors of that book have incorporated into this new edition the many improvements in the algorithms and the theory of spectral methods that have been made since then. This latest book retains the tight integration between the theoretical and practical aspects of spectral methods, and the chapters are enhanced with material on the Galerkin with numerical integration version of spectral methods. The discussion of direct and iterative solution methods is also greatly expanded. (orig.)

  2. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
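The linear-algebraic core of such a program is the least-squares solution for corrector strengths given an orbit response matrix. The sketch below uses a random stand-in response matrix and measured orbit (real designs would use the lattice's actual response matrix and weigh correctors against cost, as the record discusses):

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed setup: 8 beam-position monitors, 4 correctors; R[i, j] is the orbit
# reading at monitor i produced by a unit kick from corrector j.
n_bpm, n_corr = 8, 4
R = rng.normal(size=(n_bpm, n_corr))
orbit = rng.normal(size=n_bpm)               # measured distorted orbit

# Least-squares corrector strengths: minimize ||orbit + R @ theta||_2
theta, *_ = np.linalg.lstsq(R, -orbit, rcond=None)
residual = orbit + R @ theta                 # orbit remaining after correction

print(np.linalg.norm(residual) < np.linalg.norm(orbit))   # True: rms orbit is reduced
```

Comparing the residual norm achieved by candidate corrector subsets against their count is one quantitative way to trade performance against economy, in the spirit of the record's design criteria.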

  3. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have hardly been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...

  4. Primal Interior-Point Method for Large Sparse Minimax Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034

  5. Numerical methods of mathematical optimization with Algol and Fortran programs

    CERN Document Server

    Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner

    1971-01-01

    Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. ALGOL and FORTRAN programs were developed for each of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods.Comprised of four chapters, this volume begins with a discussion on the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition

  6. Stochastic optimal control of single neuron spike trains

    DEFF Research Database (Denmark)

    Iolov, Alexandre; Ditlevsen, Susanne; Longtin, Andrë

    2014-01-01

    stimulation of a neuron to achieve a target spike train under the physiological constraint to not damage tissue. Approach. We pose a stochastic optimal control problem to precisely specify the spike times in a leaky integrate-and-fire (LIF) model of a neuron with noise assumed to be of intrinsic or synaptic...... origin. In particular, we allow for the noise to be of arbitrary intensity. The optimal control problem is solved using dynamic programming when the controller has access to the voltage (closed-loop control), and using a maximum principle for the transition density when the controller only has access...... to the spike times (open-loop control). Main results. We have developed a stochastic optimal control algorithm to obtain precise spike times. It is applicable in both the supra-threshold and sub-threshold regimes, under open-loop and closed-loop conditions and with an arbitrary noise intensity; the accuracy...

  7. Method for the irradiation of single targets

    International Nuclear Information System (INIS)

    Krimmel, E.; Dullnig, H.

    1977-01-01

    The invention pertains to a system for the irradiation of single targets with particle beams. The targets all have frames around them. The system consists of an automatic advance leading into a high-vacuum chamber, and a positioning element which guides one target after the other into the irradiation position, at right angles to the automatic advance, and back into the automatic advance after irradiation. (GSCH) [de

  8. Review of dynamic optimization methods in renewable natural resource management

    Science.gov (United States)

    Williams, B.K.

    1989-01-01

    In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems is compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.

  9. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  10. Optimization and control methods in industrial engineering and construction

    CERN Document Server

    Wang, Xiangyu

    2014-01-01

    This book presents recent advances in optimization and control methods with applications to industrial engineering and construction management. It consists of 15 chapters authored by recognized experts in a variety of fields including control and operation research, industrial engineering, and project management. Topics include numerical methods in unconstrained optimization, robust optimal control problems, set splitting problems, optimum confidence interval analysis, a monitoring networks optimization survey, distributed fault detection, nonferrous industrial optimization approaches, neural networks in traffic flows, economic scheduling of CCHP systems, a project scheduling optimization survey, lean and agile construction project management, practical construction projects in Hong Kong, dynamic project management, production control in PC4P, and target contracts optimization.   The book offers a valuable reference work for scientists, engineers, researchers and practitioners in industrial engineering and c...

  11. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    Yuksel Celik

    2013-01-01

    Full Text Available Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm developed by inspiration from the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding a Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
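    The record does not give the details of the Levy flight used for the queen's mating flight. A common way to draw Levy-distributed step lengths is Mantegna's algorithm, sketched below; the parameter values, function names, and the move rule (perturbing a candidate around the current best) are illustrative assumptions, not taken from the paper:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-distributed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_flight_move(position, best, step_scale=0.01, beta=1.5, rng=random):
    """Perturb a candidate solution around the current best with Levy steps:
    mostly small moves, occasionally a long exploratory jump."""
    return [x + step_scale * levy_step(beta, rng) * (x - b)
            for x, b in zip(position, best)]

random.seed(42)
queen = [0.5, -1.2, 3.0]
best = [0.0, 0.0, 0.0]
print(levy_flight_move(queen, best))
```

    The heavy-tailed step distribution is what distinguishes a Levy flight from a Gaussian random walk: it balances local refinement with rare long jumps that help escape local optima.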

  12. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic in this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output constraint problems. These kinds of constraints are natural in production optimization. Limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraints gradient (Jacobian) which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speed up the convergence rate in the optimization, one usually uses quasi-Newton approaches such as BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models. These are proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).
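    The Hessian-times-vector strategy described above can be sketched on a toy quadratic objective: each Hv product comes from two gradient evaluations (standing in for two adjoint solves) and feeds a conjugate gradient loop that computes a Newton step without ever forming the Hessian. The objective and all names below are illustrative, not the thesis code:

```python
def grad(x):
    """Gradient of the toy objective f(x) = 0.5 x^T A x - b^T x, standing in
    for an adjoint-computed gradient of the production objective."""
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    return [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
            for i in range(len(x))]

def hess_vec(x, v, eps=1e-6):
    """Hessian-vector product from two gradient evaluations:
    H v ~ (grad(x + eps*v) - grad(x)) / eps."""
    g0 = grad(x)
    g1 = grad([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(g1, g0)]

def newton_step_cg(x, iters=20, tol=1e-12):
    """Solve H p = -g by conjugate gradients using only Hv products
    (a truncated-Newton step; no explicit Hessian is ever formed)."""
    g = grad(x)
    p = [0.0] * len(g)
    r = [-gi for gi in g]            # residual of H p = -g at p = 0
    d = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        if rs < tol:
            break
        Hd = hess_vec(x, d)
        alpha = rs / sum(di * hi for di, hi in zip(d, Hd))
        p = [pi + alpha * di for pi, di in zip(p, d)]
        r = [ri - alpha * hi for ri, hi in zip(r, Hd)]
        rs_new = sum(ri * ri for ri in r)
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return p

x0 = [0.0, 0.0]
step = newton_step_cg(x0)
x1 = [xi + si for xi, si in zip(x0, step)]
print(x1)  # for a quadratic, one Newton step reaches the minimizer (1/11, 7/11)
```

    In the thesis setting the finite-difference gradient pair would be replaced by second-order adjoint solves, but the surrounding CG logic is the same: only products of the Hessian with a vector are ever needed.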

  13. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    Full Text Available This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low pressure stage centrifugal compressor in a MW level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned. The outlet diameter of the vaneless diffuser has been reduced, and the original single stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  14. Toward solving the sign problem with path optimization method

    Science.gov (United States)

    Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira

    2017-12-01

    We propose a new approach in which the integration path is optimized to control the sign problem. We give a trial function specifying the integration path in the complex plane and tune it to optimize the cost function which represents the seriousness of the sign problem. We call it the path optimization method. In this method, we do not need to solve the gradient flow required in the Lefschetz-thimble method; the construction of the integration-path contour instead becomes an optimization problem, to which several efficient methods can be applied. In a simple model with a serious sign problem, the path optimization method is demonstrated to work well; the residual sign problem is resolved and precise results can be obtained even in the region where the global sign problem is serious.
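    The effect the authors describe can be illustrated on a one-dimensional Gaussian toy integral (not the model from the paper): deforming the integration path into the complex plane leaves the integral unchanged, by Cauchy's theorem, but can remove the phase cancellations. Here a hand-picked constant shift stands in for the optimized path:

```python
import cmath
import math

def integrate(f, a=-8.0, b=8.0, n=4000):
    """Midpoint rule along a real parametrization of the path."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

lam = 2.0
weight = lambda z: cmath.exp(-z * z / 2 + 1j * lam * z)

# average "sign": |integral of f| / integral of |f| -- near 0 means severe
# phase cancellation, near 1 means no sign problem
avg_sign = lambda f: abs(integrate(f)) / integrate(lambda x: abs(f(x)))

I_real = integrate(weight)                           # original (real) path
I_shift = integrate(lambda x: weight(x + 1j * lam))  # shifted path

print(abs(I_real - I_shift))   # same integral, by Cauchy's theorem
print(avg_sign(weight))        # ~ exp(-2): severe cancellation on the real path
print(avg_sign(lambda x: weight(x + 1j * lam)))  # 1.0: integrand is positive real
```

    On the shifted path the exponent becomes purely real, so the integrand no longer oscillates; in the paper this shift is not guessed but found by minimizing a cost function measuring the residual phase fluctuations.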

  15. Handbook of statistical methods single subject design

    CERN Document Server

    Satake, Eiki; Maxwell, David L

    2008-01-01

    This book is a practical guide to the most commonly used approaches in analyzing and interpreting single-subject data. It arranges the methodologies used in a logical sequence using an array of research studies from the existing published literature to illustrate specific applications. The book provides a brief discussion of each approach (visual, inferential, and probabilistic models), the applications for which it is intended, and a step-by-step illustration of the test as used in an actual research study.

  16. Probabilistic methods for maintenance program optimization

    International Nuclear Information System (INIS)

    Liming, J.K.; Smith, M.J.; Gekler, W.C.

    1989-01-01

    In today's regulatory and economic environments, it is more important than ever that managers, engineers, and plant staff join together in developing and implementing effective management plans for safety and economic risk. This need applies to both power generating stations and other process facilities. One of the most critical parts of these management plans is the development and continuous enhancement of a maintenance program that optimizes plant or facility safety and profitability. The ultimate objective is to maximize the potential for station or facility success, usually measured in terms of projected financial profitability, while meeting or exceeding meaningful and reasonable safety goals, usually measured in terms of projected damage or consequence frequencies. This paper describes the use of the latest concepts in developing and evaluating maintenance programs to achieve maintenance program optimization (MPO). These concepts are based on significant field experience gained through the integration and application of fundamentals developed for industry and Electric Power Research Institute (EPRI)-sponsored projects on preventive maintenance (PM) program development and reliability-centered maintenance (RCM)

  17. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    Science.gov (United States)

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
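    As a rough stand-in for the variance-minimization idea (the paper's exact SFO formulation is not reproduced here), the classic least-squares centre-of-rotation fit finds the point whose distances to a marker stay most nearly constant across frames. A minimal 2D sketch with synthetic, noise-free data; the solver and data are illustrative assumptions:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_center(points):
    """Least-squares centre of rotation for one marker trajectory: solve
    |p_k - c|^2 = r^2 over all frames k, which linearizes to
    2 p_k . c + d = |p_k|^2 with d = r^2 - |c|^2."""
    AtA = [[0.0] * 3 for _ in range(3)]   # normal equations for (cx, cy, d)
    Atb = [0.0] * 3
    for x, y in points:
        row = [2 * x, 2 * y, 1.0]
        rhs = x * x + y * y
        for i in range(3):
            Atb[i] += row[i] * rhs
            for j in range(3):
                AtA[i][j] += row[i] * row[j]
    cx, cy, d = solve(AtA, Atb)
    return cx, cy

# synthetic trial: a marker 0.3 m from a joint centre at (0.1, 0.4)
true_c = (0.1, 0.4)
pts = [(true_c[0] + 0.3 * math.cos(t), true_c[1] + 0.3 * math.sin(t))
       for t in [0.1 * k for k in range(40)]]
print(fit_center(pts))  # close to (0.1, 0.4)
```

    With real motion data the residual of this fit is exactly the frame-to-frame variance that soft-tissue artifact inflates, which is why a single static estimate is a poor description and a time-varying estimate, as in the paper, is attractive.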

  18. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Eric Frick

    2018-04-01

    Full Text Available The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.

  19. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  20. Electrochemical Single-Molecule Transistors with Optimized Gate Coupling

    DEFF Research Database (Denmark)

    Osorio, Henrry M.; Catarelli, Samantha; Cea, Pilar

    2015-01-01

    Electrochemical gating at the single molecule level of viologen molecular bridges in ionic liquids is examined. Contrary to previous data recorded in aqueous electrolytes, a clear and sharp peak in the single molecule conductance versus electrochemical potential data is obtained in ionic liquids....... These data are rationalized in terms of a two-step electrochemical model for charge transport across the redox bridge. In this model the gate coupling in the ionic liquid is found to be fully effective with a modeled gate coupling parameter, ξ, of unity. This compares to a much lower gate coupling parameter...

  1. Optimization of time-correlated single photon counting spectrometer

    International Nuclear Information System (INIS)

    Zhang Xiufeng; Du Haiying; Sun Jinsheng

    2011-01-01

    The paper proposes a performance-improving scheme for the conventional time-correlated single photon counting spectrometer and develops a high-speed data acquisition card based on PCI bus and FPGA technologies. The card replaces the multi-channel analyzer to improve the capability and decrease the volume of the spectrometer. The process of operation is introduced along with the integration of the spectrometer system. Many standard samples were measured. The experimental results show that the sensitivity of the spectrometer is at the single-photon-counting level, and the time resolution of fluorescence lifetime measurement can reach the picosecond level. The instrument can measure time-resolved spectra. (authors)
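    A minimal sketch of the measurement principle behind such a spectrometer (idealized: no instrument response, background, or pile-up; the names and values are illustrative): each detected photon contributes one excitation-to-arrival delay, and for a single-exponential decay the maximum-likelihood lifetime is simply the mean delay:

```python
import random
import statistics

def simulate_tcspc(lifetime_ns, n_photons, rng):
    """Simulate single-photon arrival delays after the excitation pulse
    (ideal instrument response assumed)."""
    return [rng.expovariate(1.0 / lifetime_ns) for _ in range(n_photons)]

def estimate_lifetime(delays):
    """For a pure exponential decay, the maximum-likelihood lifetime
    estimate is the sample mean of the photon delays."""
    return statistics.fmean(delays)

rng = random.Random(7)
delays = simulate_tcspc(lifetime_ns=2.5, n_photons=200_000, rng=rng)
print(estimate_lifetime(delays))  # close to the true 2.5 ns
```

    In a real instrument the delays are binned into a histogram by the acquisition card (the FPGA card in the paper replaces the multi-channel analyzer for exactly this step), and the fit must additionally deconvolve the instrument response.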

  2. A short numerical study on the optimization methods influence on topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Sigmund, Ole; Stolpe, Mathias

    2017-01-01

    Structural topology optimization problems are commonly defined using continuous design variables combined with material interpolation schemes. One of the challenges for density based topology optimization observed in the review article (Sigmund and Maute, Struct Multidiscip Optim 48(6):1031–1055, 2013) is the slow convergence that is often encountered in practice, when an almost solid-and-void design is found. The purpose of this forum article is to present some preliminary observations on how designs evolve during the optimization process for different choices of optimization methods...

  3. Present-day Problems and Methods of Optimization in Mechatronics

    Directory of Open Access Journals (Sweden)

    Tarnowski Wojciech

    2017-06-01

    Full Text Available It is argued that design is an inverse problem, and that optimization is a paradigm. Classes of design problems are proposed and typical obstacles are recognized. Peculiarities of mechatronic design are specified as evidence of the particular importance of optimization in mechatronic design. Two main obstacles to optimization are discussed: the complexity of mathematical models and the uncertainty of the value system in a concrete case. Then a set of non-standard approaches and methods is presented and discussed, illustrated by examples: a fuzzy description, constraint-based iterative optimization, the AHP ranking method and a few MADM functions in Matlab.
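    Of the methods listed, the AHP ranking step has a compact standard form: the weight vector is the principal eigenvector of a pairwise-comparison matrix, here obtained by power iteration. The Saaty-scale judgments below are illustrative, not taken from the article:

```python
def ahp_priorities(M, iters=100):
    """Principal eigenvector of a pairwise-comparison matrix by power
    iteration, normalized to sum to 1 (the AHP weight vector)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w

# illustrative comparisons of three design criteria on the 1-9 Saaty scale:
# cost is judged 3x as important as weight and 5x as important as aesthetics
M = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_priorities(M)
print([round(x, 3) for x in w])  # roughly [0.637, 0.258, 0.105]
```

    The same eigenvector also yields Saaty's consistency index, which is how AHP flags contradictory judgments before the weights are used for ranking.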

  4. Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, L.S; Thybo, C.; Stoustrup, Jakob

    2003-01-01

    The potential energy savings in refrigeration systems using energy-optimal control have been shown to be substantial. This however requires an intelligent control that drives the refrigeration system towards the energy-optimal state. This paper proposes an approach for a control which drives the condenser pressure towards an optimal state. The objective of this is to present a feasible method that can be used for energy-optimizing control. A simulation model of a simple refrigeration system will be used as a basis for testing the control method....

  5. Time-optimal thermalization of single-mode Gaussian states

    Science.gov (United States)

    Carlini, Alberto; Mari, Andrea; Giovannetti, Vittorio

    2014-11-01

    We consider the problem of time-optimal control of a continuous bosonic quantum system subject to the action of a Markovian dissipation. In particular, we consider the case of a one-mode Gaussian quantum system prepared in an arbitrary initial state and which relaxes to the steady state due to the action of the dissipative channel. We assume that the unitary part of the dynamics is represented by Gaussian operations which preserve the Gaussian nature of the quantum state, i.e., arbitrary phase rotations, bounded squeezing, and unlimited displacements. In the ideal ansatz of unconstrained quantum control (i.e., when the unitary phase rotations, squeezing, and displacement of the mode can be performed instantaneously), we study how control can be optimized for speeding up the relaxation towards the fixed point of the dynamics and we analytically derive the optimal relaxation time. Our model has potential and interesting applications to the control of modes of electromagnetic radiation and of trapped levitated nanospheres.

  6. Optimizing Usability Studies by Complementary Evaluation Methods

    NARCIS (Netherlands)

    Schmettow, Martin; Bach, Cedric; Scapin, Dominique

    2014-01-01

    This paper examines combinations of complementary evaluation methods as a strategy for efficient usability problem discovery. A data set from an earlier study is re-analyzed, involving three evaluation methods applied to two virtual environment applications. Results of a mixed-effects logistic

  7. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  8. Sonochemical optimization of the conductivity of single wall nanotube networks

    NARCIS (Netherlands)

    Kaempgen, M.; Lebert, M.; Haluska, M.; Nicoloso, N.; Roth, S.

    2008-01-01

    Networks of single-wall carbon nanotubes (SWCNTs) are covalently functionalized with oxygen-containing groups. In lower concentration, these functional groups act as stable dopants improving the conductivity of the SWCNT material. In higher concentration however, their role as defects with a certain

  9. A brief introduction to single-molecule fluorescence methods

    NARCIS (Netherlands)

    Wildenberg, S.M.J.L.; Prevo, B.; Peterman, E.J.G.; Peterman, EJG; Wuite, GJL

    2011-01-01

    One of the more popular single-molecule approaches in biological science is single-molecule fluorescence microscopy, which is the subject of the following section of this volume. Fluorescence methods provide the sensitivity required to study biology on the single-molecule level, but they also allow

  10. A brief introduction to single-molecule fluorescence methods

    NARCIS (Netherlands)

    van den Wildenberg, Siet M.J.L.; Prevo, Bram; Peterman, Erwin J.G.

    2018-01-01

    One of the more popular single-molecule approaches in biological science is single-molecule fluorescence microscopy, which will be the subject of the following section of this volume. Fluorescence methods provide the sensitivity required to study biology on the single-molecule level, but they also

  11. Nanoscale methods for single-molecule electrochemistry.

    Science.gov (United States)

    Mathwig, Klaus; Aartsma, Thijs J; Canters, Gerard W; Lemay, Serge G

    2014-01-01

    The development of experiments capable of probing individual molecules has led to major breakthroughs in fields ranging from molecular electronics to biophysics, allowing direct tests of knowledge derived from macroscopic measurements and enabling new assays that probe population heterogeneities and internal molecular dynamics. Although still somewhat in their infancy, such methods are also being developed for probing molecular systems in solution using electrochemical transduction mechanisms. Here we outline the present status of this emerging field, concentrating in particular on optical methods, metal-molecule-metal junctions, and electrochemical nanofluidic devices.

  12. A Method for Solving Combinatoral Optimization Problems

    National Research Council Canada - National Science Library

    Ruffa, Anthony A

    2008-01-01

    .... The method discloses that when the boundaries create zones with boundary vertices confined to the adjacent zones, the sets of candidate HPs are found by advancing one zone at a time, considering...

  13. An efficient multilevel optimization method for engineering design

    Science.gov (United States)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system level and subsystem level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  14. Optimization Models and Methods Developed at the Energy Systems Institute

    OpenAIRE

    N.I. Voropai; V.I. Zorkaltsev

    2013-01-01

    The paper briefly presents some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to the optimization models of energy development in Russia, a software package intended for analysis of power system reliability, and a model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...

  15. Gadolinium burnable absorber optimization by the method of conjugate gradients

    International Nuclear Information System (INIS)

    Drumm, C.R.; Lee, J.C.

    1987-01-01

    The optimal axial distribution of gadolinium burnable poison in a pressurized water reactor is determined to yield an improved power distribution. The optimization scheme is based on Pontryagin's maximum principle, with the objective function accounting for a target power distribution. The conjugate gradients optimization method is used to solve the resulting Euler-Lagrange equations iteratively, efficiently handling the high degree of nonlinearity of the problem.
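    The structure of the approach, an Euler-Lagrange equation solved iteratively by conjugate gradients, can be sketched on a toy one-dimensional problem (the reactor model itself is not reproduced; the discretization and names below are illustrative). CG needs the operator only through matrix-vector products:

```python
import math

def cg(apply_A, b, iters=200, tol=1e-12):
    """Conjugate gradients for A x = b, with the symmetric positive definite
    operator A available only as a matrix-vector routine apply_A."""
    x = [0.0] * len(b)
    r = list(b)
    d = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        if rs < tol:
            break
        Ad = apply_A(d)
        alpha = rs / sum(di * adi for di, adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rs_new = sum(ri * ri for ri in r)
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

# The Euler-Lagrange equation of J[u] = integral of (u'^2/2 - f*u) dx is
# -u'' = f; discretize on n interior points of [0, 1] with u(0) = u(1) = 0.
n = 63
h = 1.0 / (n + 1)

def apply_A(v):
    """Tridiagonal second-difference operator approximating -u''."""
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out.append((2 * v[i] - left - right) / h ** 2)
    return out

f_rhs = [math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = cg(apply_A, f_rhs)
# exact solution of -u'' = pi^2 sin(pi x) is u(x) = sin(pi x)
err = max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))
print(err)  # small: only the O(h^2) discretization error remains
```

    In the reactor application the linearized Euler-Lagrange operator plays the role of `apply_A`, and the nonlinearity is handled by re-solving as the poison distribution is updated.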

  16. An optimization method for parameters in reactor nuclear physics

    International Nuclear Information System (INIS)

    Jachic, J.

    1982-01-01

    An optimization method for two basic problems of Reactor Physics was developed. The first is the optimization of the plutonium critical mass and the breeding ratio for fast reactors as a function of the radial enrichment distribution of the fuel, used as the control parameter. The second is the maximization of plutonium generation and burnup by optimizing the temporal power distribution. (E.G.) [pt

  17. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD) and the simulation results compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  18. Optimization of the southern electrophoretic transfer method

    International Nuclear Information System (INIS)

    Allison, M.A.; Fujimura, R.K.

    1987-01-01

The technique of separating DNA fragments using agarose gel electrophoresis is essential in the analysis of nucleic acids. After the method of transferring specific DNA fragments from agarose gels to cellulose nitrate membranes was developed in 1975, electrophoretic methods were likewise developed to transfer DNA, RNA, protein and ribonucleoprotein particles from various gels onto diazobenzyloxymethyl (DBM) paper. This paper describes the optimum conditions for quantitative electrophoretic transfer of DNA onto nylon membranes. By providing sufficient retention of the DNA, this method makes it possible to hybridize the membrane more than once with specific RNA probes. Furthermore, the intrinsic properties of the nylon membrane increase the efficiency and resolution of transfer even under somewhat harsh alkaline conditions. The use of alkaline conditions is of critical importance since the DNA can now be denatured during transfer, so only a short pre-treatment in acid is required for depurination. 9 refs., 7 figs

  19. A hybrid optimization method for biplanar transverse gradient coil design

    International Nuclear Information System (INIS)

    Qi Feng; Tang Xin; Jin Zhe; Jiang Zhongde; Shen Yifei; Meng Bin; Zu Donglin; Wang Weimin

    2007-01-01

The optimization of transverse gradient coils is one of the fundamental problems in designing magnetic resonance imaging gradient systems. A new approach is presented in this paper to optimize the performance of transverse gradient coils. First, in the traditional spherical harmonic target field method, high order coefficients, which are commonly ignored, are used in the first stage of the optimization process to give better homogeneity. Then, some cosine terms are introduced into the series expansion of the stream function. These new terms provide the simulated annealing optimization with new degrees of freedom. Comparison between the traditional method and the optimized method shows that the inhomogeneity in the region of interest can be reduced from 5.03% to 1.39%, the coil efficiency increased from 3.83 to 6.31 mT m⁻¹ A⁻¹, and the minimum distance between these discrete coils raised from 1.54 to 3.16 mm

  20. Exact and useful optimization methods for microeconomics

    NARCIS (Netherlands)

    Balder, E.J.

    2011-01-01

    This paper points out that the treatment of utility maximization in current textbooks on microeconomic theory is deficient in at least three respects: breadth of coverage, completeness-cum-coherence of solution methods and mathematical correctness. Improvements are suggested in the form of a

  1. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC) as it also employs the historical process...

  2. Methods for Gas Sensing with Single-Walled Carbon Nanotubes

    Science.gov (United States)

    Kaul, Anupama B. (Inventor)

    2013-01-01

    Methods for gas sensing with single-walled carbon nanotubes are described. The methods comprise biasing at least one carbon nanotube and exposing to a gas environment to detect variation in temperature as an electrical response.

  3. CC-MUSIC: An Optimization Estimator for Mutual Coupling Correction of L-Shaped Nonuniform Array with Single Snapshot

    Directory of Open Access Journals (Sweden)

    Yuguan Hou

    2015-01-01

Full Text Available In the single-snapshot case, the integrated SNR gain of multiple snapshots cannot be obtained, which degrades mutual coupling correction performance at lower SNR. In this paper, a Convex Chain MUSIC (CC-MUSIC) algorithm is proposed for the mutual coupling correction of the L-shaped nonuniform array with a single snapshot. It is an online self-calibration algorithm and requires neither prior knowledge for initializing the correction matrix nor a calibration source at a known position. An optimization is derived for the approximation between the covariance matrix without mutual coupling and without the interpolated transformation, and the covariance matrix with mutual coupling and the interpolated transformation. A global optimization problem is formed for the mutual coupling correction and the spatial spectrum estimation. Furthermore, the nonconvex global optimization problem is transformed into a chain of convex optimizations, which is essentially an alternating optimization routine. The simulation results demonstrate the effectiveness of the proposed method, which improves the resolution and the estimation accuracy for multiple sources with a single snapshot.

  4. Shape Optimization of the Assisted Bi-directional Glenn surgery for stage-1 single ventricle palliation

    Science.gov (United States)

    Verma, Aekaansh; Shang, Jessica; Esmaily-Moghadam, Mahdi; Wong, Kwai; Marsden, Alison

    2016-11-01

Babies born with a single functional ventricle typically undergo three open-heart surgeries starting as neonates. The first of these stages (BT shunt or Norwood) has the highest mortality rate of the three, approaching 30%. Proceeding directly to a stage-2 Glenn surgery has historically demonstrated inadequate pulmonary flow (PF) and high mortality. Recently, the Assisted Bi-directional Glenn (ABG) was proposed as a promising means to achieve a stable physiology by assisting the PF via an 'ejector pump' from the systemic circulation. We present preliminary parametrization and optimization results for the ABG geometry, with the goal of increasing PF. To limit excessive pressure increases in the Superior Vena Cava (SVC), the SVC pressure is included as a constraint. We use 3-D finite element flow simulations coupled with a single ventricle lumped parameter network to evaluate PF and the pressure constraint. We employ a derivative-free optimization method, the Surrogate Management Framework, in conjunction with the OpenDIEL framework to run multiple simultaneous evaluations. Results show that nozzle diameter is the most important design parameter affecting ABG performance. The application of these results to patient-specific situations will be discussed. This work was supported by an NSF CAREER award (OCI1150184) and by the XSEDE National Computing Resource.

  5. Maximum super angle optimization method for array antenna pattern synthesis

    DEFF Research Database (Denmark)

    Wu, Ji; Roederer, A. G

    1991-01-01

    Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 2...

  6. Cost Optimal Design of a Single-Phase Dry Power Transformer

    Directory of Open Access Journals (Sweden)

    Raju Basak

    2015-08-01

Full Text Available Dry-type transformers are preferred to their oil-immersed counterparts for various reasons, particularly because their operation is hazard-free. In earlier days the application of dry transformers was limited to small ratings, but they are now being used for considerably higher ratings, so their cost-optimal design has gained importance. This paper deals with the design procedure for achieving a cost-optimal design of a dry-type single-phase power transformer of small rating, subject to the usual design constraints on efficiency and voltage regulation. The selling cost of the transformer has been taken as the objective function. Only two key variables have been chosen, the turns/volt and the height-to-width ratio of the window, which affect the cost function most strongly; other variables have been chosen on the basis of designers' experience. Copper has been used as the conductor material and CRGOS as the core material to achieve higher efficiency, lower running cost and a compact design. The electrical and magnetic loadings have been kept at their maximum values without violating the design constraints. The optimal solution has been obtained by the method of exhaustive search using nested loops.
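The exhaustive search with nested loops described above can be sketched as follows. The cost surface, constraint bounds, and grid resolution here are illustrative placeholders, not the paper's transformer design equations:

```python
# Exhaustive search over two design variables with nested loops,
# keeping the cheapest design that satisfies the constraints.
# Cost and constraint functions are illustrative stand-ins.

def transformer_cost(turns_per_volt, window_ratio):
    # toy convex cost surface with its minimum at (4.0, 2.5)
    return (turns_per_volt - 4.0) ** 2 + 0.5 * (window_ratio - 2.5) ** 2 + 10.0

def meets_constraints(turns_per_volt, window_ratio):
    # placeholder efficiency/regulation feasibility checks
    return 2.0 <= turns_per_volt <= 6.0 and 1.5 <= window_ratio <= 4.0

def exhaustive_search():
    best = None
    for i in range(201):                 # turns/volt sweep: 2.0 .. 6.0
        tpv = 2.0 + i * 0.02
        for j in range(126):             # window height:width sweep: 1.5 .. 4.0
            ratio = 1.5 + j * 0.02
            if not meets_constraints(tpv, ratio):
                continue
            c = transformer_cost(tpv, ratio)
            if best is None or c < best[0]:
                best = (c, tpv, ratio)
    return best

cost, tpv, ratio = exhaustive_search()
```

The grid spacing fixes the attainable accuracy; a finer grid trades computation time for precision, which is acceptable here because only two variables are swept.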

  7. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E{sup 3}-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.

  8. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Extant research results are mainly based on various intelligence algorithms; hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms over the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of the mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared on MCNC benchmark circuits. Compared with DC, DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths across the MCNC benchmark circuits. The proposed verification method directly ascertains whether the intelligence algorithm has a better timing optimization ability than DC.

  9. OPTIMAL SIGNAL PROCESSING METHODS IN GPR

    Directory of Open Access Journals (Sweden)

    Saeid Karamzadeh

    2014-01-01

Full Text Available Over the past three decades, Ground Penetrating Radar (GPR) has found a wide variety of real-life applications, and it faces important challenges in both civil and military use. In this paper, the fundamentals of GPR systems are covered, and three important signal processing methods (Wavelet Transform, Matched Filter and Hilbert-Huang Transform) are compared in order to extract the most accurate information about objects in the subsurface or behind a wall.

  10. Optimization Methods for Supply Chain Activities

    Directory of Open Access Journals (Sweden)

    Balasescu S.

    2014-12-01

Full Text Available This paper approaches the theme of supply chain activities in medium and large companies that run many operations and need many facilities. The first goal is to analyse the influence of optimisation methods for supply chain activities on the success rate of a business. The second goal is to compare logistic strategies applied by companies with the same profile to determine which is most effective. The final goal is to show why a strategic optimum is necessary for a company and how it can be achieved under demand uncertainty.

  11. Application of improved AHP method to radiation protection optimization

    International Nuclear Information System (INIS)

    Wang Chuan; Zhang Jianguo; Yu Lei

    2014-01-01

To address the deficiencies of the traditional AHP method, a hierarchy model for optimal project selection in radiation protection was established with an improved AHP method. Comparison between the improved and the traditional AHP methods shows that the improved method reduces the subjectivity of personal judgment, and its calculation process is compact and reasonable. The improved AHP method can provide a scientific basis for radiation protection optimization. (authors)
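As background, the standard AHP step of deriving priority weights from a pairwise-comparison judgment matrix can be sketched with the row geometric-mean approximation of the principal eigenvector. The 3x3 judgment matrix below is illustrative, not taken from the paper:

```python
# Derive AHP priority weights from a pairwise-comparison matrix
# using the row geometric-mean approximation.

def ahp_weights(matrix):
    n = len(matrix)
    geo_means = []
    for row in matrix:
        prod = 1.0
        for a in row:
            prod *= a
        geo_means.append(prod ** (1.0 / n))  # geometric mean of the row
    total = sum(geo_means)
    return [g / total for g in geo_means]    # normalize to sum to 1

# Illustrative judgments: criterion A is 3x as important as B and
# 5x as important as C; B is 2x as important as C.
judgments = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(judgments)
```

The geometric-mean rule is a common closed-form surrogate for the eigenvector method; an improved AHP variant like the paper's would further check and reduce judgment inconsistency.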

  12. Design optimization of single mixed refrigerant natural gas liquefaction process using the particle swarm paradigm with nonlinear constraints

    International Nuclear Information System (INIS)

    Khan, Mohd Shariq; Lee, Moonyong

    2013-01-01

The particle swarm paradigm is employed to optimize the single mixed refrigerant (SMR) natural gas liquefaction process. Liquefaction design involves multivariable problem solving, and non-optimal settings of these variables waste energy and contribute to process irreversibilities. Design optimization requires these variables to be optimized simultaneously; minimizing the compression energy requirement is selected as the optimization objective. Liquefaction is modeled using Honeywell UniSim Design™ and the resulting rigorous model is connected with the particle swarm paradigm coded in MATLAB. Design constraints are folded into the objective function using the penalty function method. Optimization successfully improved efficiency by reducing the compression energy requirement by ca. 10% compared with the base case. -- Highlights: ► The particle swarm paradigm (PSP) is employed for design optimization of the SMR NG liquefaction process. ► A rigorous SMR process model based on UniSim is connected with the PSP coded in MATLAB. ► Stochastic features of the PSP give more confidence in the optimality of complex nonlinear problems. ► Optimization with the PSP notably improves the energy efficiency of the SMR process.
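The penalty-function idea of folding constraints into the swarm's objective can be sketched on a toy constrained problem. The objective, constraint, and all parameter values below are illustrative stand-ins for the UniSim process model, not the paper's formulation:

```python
# Particle swarm minimization with a quadratic penalty folding the
# inequality constraint g(x) = x0 + x1 - 1 >= 0 into the objective.
import random

random.seed(0)

def objective(x):
    return x[0] ** 2 + x[1] ** 2                 # toy objective

def penalized(x, mu=1e3):
    violation = max(0.0, 1.0 - (x[0] + x[1]))    # amount g(x) is violated
    return objective(x) + mu * violation ** 2

def pso(n_particles=30, iters=200):
    dim = 2
    pos = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [penalized(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = penalized(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso()
```

The true constrained optimum of this toy problem is near (0.5, 0.5); a larger penalty weight mu enforces the constraint more tightly at the cost of a stiffer objective.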

  13. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    Science.gov (United States)

    Shimizu, Yoshiaki

To make agile decisions in a rational manner, the role of optimization engineering has drawn increasing attention under diversified customer demand. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects for globally solving the various complicated problems appearing in real-world applications. It evolves from the conventional Nelder and Mead Simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. After describing an algorithm that handles linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods on several benchmark problems.
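The conventional Nelder and Mead Simplex method that the proposal evolves from can be sketched as follows. This is a simplified variant (no outside contraction) run on an illustrative quadratic, not the paper's evolutionary extension:

```python
# Minimal Nelder-Mead simplex search: sort the simplex, then try
# reflection, expansion, inside contraction, or shrink toward the best.

def nelder_mead(f, x0, step=0.5, iters=200):
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                    # initial simplex: x0 + axis offsets
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best_pt, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best_pt):
            exp = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl   # expansion
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl                                 # reflection
        else:
            con = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con                              # contraction
            else:                                              # shrink all
                simplex = [[0.5 * (best_pt[i] + p[i]) for i in range(n)]
                           for p in simplex]
    simplex.sort(key=f)
    return simplex[0]

# illustrative quadratic benchmark with minimum at (1, -2)
best = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

The paper's contribution is to evolve a population of such simplex moves with PSO-like ideas; the sketch shows only the classical single-simplex core.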

  14. Image Registration Using Single Cluster PHD Methods

    Science.gov (United States)

    Campbell, M.; Schlangen, I.; Delande, E.; Clark, D.

    Cadets in the Department of Physics at the United States Air Force Academy are using the technique of slitless spectroscopy to analyze the spectra from geostationary satellites during glint season. The equinox periods of the year are particularly favorable for earth-based observers to detect specular reflections off satellites (glints), which have been observed in the past using broadband photometry techniques. Three seasons of glints were observed and analyzed for multiple satellites, as measured across the visible spectrum using a diffraction grating on the Academy’s 16-inch, f/8.2 telescope. It is clear from the results that the glint maximum wavelength decreases relative to the time periods before and after the glint, and that the spectral reflectance during the glint is less like a blackbody. These results are consistent with the presumption that solar panels are the predominant source of specular reflection. The glint spectra are also quantitatively compared to different blackbody curves and the solar spectrum by means of absolute differences and standard deviations. Our initial analysis appears to indicate a potential method of determining relative power capacity.

  15. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    Science.gov (United States)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  16. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2015-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  17. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2014-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  18. Distributed optimization for systems design : an augmented Lagrangian coordination method

    NARCIS (Netherlands)

    Tosserams, S.

    2008-01-01

    This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircrafts, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the

  19. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

Thirty-two programs for linear and nonlinear optimization problems, with or without constraints, have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. SCOOP-I is designed to be an efficient, reliable, useful and flexible system for general applications. The system enables one to find the global optimum for a wide class of problems by selecting the most appropriate of the optimization methods built into it. (author)

  20. [Optimized application of nested PCR method for detection of malaria].

    Science.gov (United States)

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

Objective To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods A PCR premix, internal primers for further amplification, and newly designed primers aimed at the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and P. ovale-specific primers on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity could reach the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the PCR products of the two methods showed no significant difference, but non-specific amplification was reduced obviously, the detection rate of P. ovale subspecies improved, and the overall specificity also increased with the optimized method. The detection results of 111 cases of malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, and those of the optimized method were both 93.48%; the difference in sensitivity between the two methods was not statistically significant ( P > 0.05), but the difference in specificity was ( P < 0.05). Conclusion The optimized nested PCR can improve the specificity without reducing the sensitivity relative to the routine nested PCR; it also saves cost and increases the efficiency of malaria detection, as it involves fewer experimental steps.

  1. Optimal random search for a single hidden target.

    Science.gov (United States)

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
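The square-root rule can be checked numerically in a discrete analogue: if the target sits at site i with probability p[i] and each independent search draw lands on site i with probability q[i], the expected number of draws is the sum of p[i]/q[i], which q proportional to the square root of p minimizes. The three-site distribution below is illustrative:

```python
# Expected number of independent draws to hit a hidden discrete target,
# comparing "search where the target likely is" against the
# square-root rule q[i] proportional to sqrt(p[i]).
import math

def expected_trials(p, q):
    # given target at i, trials are geometric with mean 1/q[i]
    return sum(pi / qi for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]                       # target distribution

q_same = p[:]                             # naive: search from p itself
roots = [math.sqrt(pi) for pi in p]
q_sqrt = [r / sum(roots) for r in roots]  # square-root rule

cost_same = expected_trials(p, q_same)    # equals len(p) = 3 for q = p
cost_sqrt = expected_trials(p, q_sqrt)    # (sum of sqrt(p[i]))^2 < 3
```

Searching from p itself always costs exactly n expected draws regardless of how concentrated p is, while the square-root rule does strictly better whenever p is non-uniform, matching the dimension-independent result in the abstract.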

  2. Optimal PMU Placement with Uncertainty Using Pareto Method

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2012-01-01

Full Text Available This paper proposes a method for the optimal placement of Phasor Measurement Units (PMUs) for state estimation under uncertainty. State estimation is first turned into an optimization exercise in which the objective function is the number of unobservable buses, determined using Singular Value Decomposition (SVD). For the normal condition, a Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. When uncertainty is considered, a multiobjective optimization exercise is formulated, and a DE algorithm based on the Pareto optimality method is proposed to solve it. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.

  3. A loading pattern optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1997-01-01

    Nuclear fuel reload of PWR core leads to the search of an optimal nuclear fuel assemblies distribution, namely of loading pattern. This large discrete optimization problem is here expressed as a cost function minimization. To deal with this problem, an approach based on gradient information is used to direct the search in the patterns discrete space. A method using an adjoint state formulation is then developed, and final results of complete patterns search tests by this method are presented. (author)

  4. Optimized operation of dielectric laser accelerators: Single bunch

    Directory of Open Access Journals (Sweden)

    Adi Hanuka

    2018-05-01

Full Text Available We introduce a general approach to determine the optimal charge, efficiency and gradient for laser driven accelerators in a self-consistent way. We propose a way to enhance the operational gradient of dielectric laser accelerators by leveraging the beam-loading effect. While the latter may be detrimental from the perspective of the effective gradient experienced by the particles, it can be beneficial because the effective field experienced by the accelerating structure is weaker. As a result, the constraint imposed by the damage threshold fluence is accordingly weakened, and our self-consistent approach predicts permissible gradients of ∼10 GV/m, one order of magnitude higher than previously reported experimental results with an unbunched pulse of electrons. Our approach leads to maximum efficiency occurring at higher gradients as compared with a scenario in which the beam-loading effect on the material is ignored. In any case, the maximum gradient does not occur for the same conditions as maximum efficiency; a trade-off set of parameters is suggested.

  5. Comparison of optimal design methods in inverse problems

    International Nuclear Information System (INIS)

    Banks, H T; Holm, K; Kappel, F

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst–Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667–77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136–68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979–90)

  6. Comparison of optimal design methods in inverse problems

    Science.gov (United States)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).

  7. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, with current technology these methods exhibit low performance, being able to manipulate only a single decision variable at a time. The objective of this article is therefore to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time when compared to software packages on the market. It should be emphasized that the quality of the response variables is not altered; that is, the proposed method maintains the effectiveness of the solutions. The study thus draws a comparison between the proposed method and a simulation tool already available on the market and examined in the academic literature. Conclusions are presented, confirming the efficiency of the proposed optimization method.
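A minimal genetic-algorithm sketch of the kind of optimizer the article evaluates, manipulating several decision variables simultaneously. The fitness function stands in for a discrete-event simulation response, and all parameter values are illustrative:

```python
# Minimal genetic algorithm over several integer decision variables:
# truncation selection, one-point crossover, and random mutation.
import random

random.seed(1)

def fitness(genes):
    # toy response surface: best at genes == [5, 5, 5, 5]
    return -sum((g - 5) ** 2 for g in genes)

def genetic_algorithm(pop_size=40, n_genes=4, gene_range=(0, 10), gens=60):
    lo, hi = gene_range
    pop = [[random.randint(lo, hi) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n_genes - 1)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # occasional mutation
                child[random.randrange(n_genes)] = random.randint(lo, hi)
            children.append(child)
        pop = parents + children                  # elitist replacement
    return max(pop, key=fitness)

best = genetic_algorithm()
```

In the simulation-optimization setting described above, each fitness evaluation would instead launch a simulation run, which is why reducing the number of evaluations dominates the computational cost.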

  8. Multi-objective optimization design method of radiation shielding

    International Nuclear Information System (INIS)

    Yang Shouhai; Wang Weijin; Lu Daogang; Chen Yixue

    2012-01-01

    Because shielding design goals are diverse and many influencing factors are uncertain, it is necessary to develop an intelligent shielding design optimization method by which the selection of a shielding scheme is achieved automatically and the uncertainties of human impact are reduced. To achieve an economically feasible, automated radiation shielding design, a multi-objective genetic algorithm shielding optimization code, which combines the genetic algorithm with the discrete-ordinates method, was developed to minimize cost, size, weight, and so on. This work has practical significance for obtaining optimized shielding designs. (authors)

  9. A discrete optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1993-01-01

    Nuclear fuel management can be seen as a large discrete optimization problem under constraints, and optimization methods for such problems are numerically costly. After an introduction to the main aspects of nuclear fuel management, this paper presents a new way to treat the combinatorial problem by using information contained in the gradient of the optimized cost function. The new search process chooses, by direct observation of the gradient, the most promising changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method. Finally, connections with classical simulated annealing and genetic algorithms are described as an attempt to improve search processes. 16 refs., 2 figs

  10. Modifying nodal pricing method considering market participants optimality and reliability

    Directory of Open Access Journals (Sweden)

    A. R. Soofiabadi

    2015-06-01

    Full Text Available This paper develops a method for nodal pricing and a market clearing mechanism that consider the reliability of the system. The effects of component reliability on electricity price, market participants' profit, and system social welfare are considered. Reliability is taken into account both in evaluating market participants' optimality and in achieving fair pricing and market clearing. To achieve fair pricing, the nodal price is obtained through a two-stage optimization problem, and to achieve a fair market clearing mechanism, a comprehensive criterion is introduced for evaluating the optimality of market participants. The social welfare and the efficiency of the system are increased under the proposed modified nodal pricing method.

  11. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function that avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application in which we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the
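    The low-rank premise behind the second method can be illustrated with a truncated SVD: a smooth value function on a two-dimensional state grid is very nearly low rank, so a handful of singular directions reconstruct it almost exactly. (The thesis recovers the factors by matrix completion from a small subset of exactly computed entries; the surface and grid here are hypothetical.)

```python
import numpy as np

# A smooth "value function" on a 2-D state grid (e.g. storage level x price level).
n = 60
s, p = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
V = np.sqrt(s) * (1.0 + p) + 0.5 * s * p        # hypothetical smooth surface

# Truncated SVD: keep only the top-r singular directions.
U, sig, Vt = np.linalg.svd(V)
r = 3
V_lowrank = U[:, :r] * sig[:r] @ Vt[:r, :]

rel_err = np.linalg.norm(V - V_lowrank) / np.linalg.norm(V)
```

Storing the rank-3 factors takes 2·60·3 + 3 numbers instead of 60², and the relative reconstruction error is essentially zero for this surface.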

  12. On Equivalence between Optimality Criteria and Projected Gradient Methods with Application to Topology Optimization Problem

    OpenAIRE

    Ananiev, Sergey

    2006-01-01

    The paper demonstrates the equivalence between the optimality criteria (OC) method, initially proposed by Bendsoe & Kikuchi for topology optimization problem, and the projected gradient method. The equivalence is shown using Hestenes definition of Lagrange multipliers. Based on this development, an alternative formulation of the Karush-Kuhn-Tucker (KKT) condition is suggested. Such reformulation has some advantages, which will be also discussed in the paper. For verification purposes the modi...

  13. Thickness optimization of fiber reinforced laminated composites using the discrete material optimization method

    DEFF Research Database (Denmark)

    Sørensen, Søren Nørgaard; Lund, Erik

    2012-01-01

    This work concerns a novel large-scale multi-material topology optimization method for simultaneous determination of the optimum variable integer thickness and fiber orientation throughout laminate structures with fixed outer geometries while adhering to certain manufacturing constraints....... The conceptual combinatorial/integer problem is relaxed to a continuous problem and solved on basis of the so-called Discrete Material Optimization method, explicitly including the manufacturing constraints as linear constraints....

  14. Czochralski method of growing single crystals. State-of-art

    International Nuclear Information System (INIS)

    Bukowski, A.; Zabierowski, P.

    1999-01-01

    The modern Czochralski method of single crystal growing is described, with an example of a Czochralski process. The advantages that caused the rapid progress of the method are presented, as are the limitations that motivated further research and new solutions. As examples, two different directions of the technique's development are described: silicon single crystal growth in a magnetic field, and continuous liquid feed growth of silicon crystals. (author)

  15. An historical survey of computational methods in optimal control.

    Science.gov (United States)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible-directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
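    After discretization, the function-space gradient methods surveyed here reduce to computing dJ/du with one backward (adjoint) sweep and stepping downhill. The sketch below does this for a toy scalar system x_{k+1} = x_k + u_k with quadratic cost; the dynamics, cost, step size, and iteration count are all illustrative assumptions.

```python
def simulate(u, x0=1.0):
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + uk)            # x_{k+1} = x_k + u_k
    return xs

def cost(u, x0=1.0):
    xs = simulate(u, x0)
    return sum(x * x for x in xs) + sum(v * v for v in u)

def gradient(u, x0=1.0):
    # Adjoint (costate) recursion gives every dJ/du_k in one backward sweep.
    xs = simulate(u, x0)
    N = len(u)
    lam = 2.0 * xs[N]                     # lambda_N = dJ/dx_N
    g = [0.0] * N
    for k in range(N - 1, -1, -1):
        g[k] = 2.0 * u[k] + lam           # dJ/du_k = 2 u_k + lambda_{k+1}
        lam = 2.0 * xs[k] + lam           # lambda_k = 2 x_k + lambda_{k+1}
    return g

u = [0.0] * 10
for _ in range(500):                      # plain steepest descent, fixed step
    g = gradient(u)
    u = [uk - 0.01 * gk for uk, gk in zip(u, g)]
```

Conjugate-gradient or Newton variants differ only in how the descent direction is built from the same adjoint-computed gradient.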

  16. Method for Determining Optimal Residential Energy Efficiency Retrofit Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.

    2011-04-01

    Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.

  17. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    Science.gov (United States)

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  18. Optimization of Single Point Incremental Forming of Al5052-O Sheet

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chan Il; Xiao, Xiao; Do, Van Cuong; Kim, Young Suk [Kyungpook Nat’l Univ., Daegu (Korea, Republic of)

    2017-03-15

    Single point incremental forming (SPIF) is a sheet-forming technique. It is a die-less sheet metal manufacturing process for rapid prototyping and small batch production. The critical parameters in the forming process include tool diameter, step depth, feed rate, and spindle speed. In this study, these parameters and the die shape corresponding to the varying wall angle conical frustum (VWACF) model were used for forming 0.8 mm thick Al5052-O sheets. The Taguchi design of experiments (DOE) and grey relational optimization were used to determine the optimum parameters in SPIF. A response study was performed on formability, springback, and thickness reduction. The research shows that the optimum combination of these parameters that yields the best performance of SPIF is as follows: tool diameter, 6 mm; spindle speed, 60 rpm; step depth, 0.3 mm; and feed rate, 500 mm/min.
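    Grey relational optimization collapses several responses into one grade per experimental run, which is then ranked. The sketch below does this for four hypothetical runs (the response values are invented, not the paper's data): each response is min-max normalized toward its ideal, a grey relational coefficient is computed per response, and the equal-weight average gives the grade.

```python
# Hypothetical responses for four SPIF runs: (formability, springback, thickness reduction)
runs = {
    "A": (0.82, 1.9, 0.24),
    "B": (0.90, 1.2, 0.21),
    "C": (0.75, 1.1, 0.27),
    "D": (0.88, 1.6, 0.19),
}
larger_better = (True, False, False)   # maximize formability, minimize the rest

def normalize(col, maximize):
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) if maximize else (hi - v) / (hi - lo) for v in col]

cols = list(zip(*runs.values()))
norm = list(zip(*[normalize(c, m) for c, m in zip(cols, larger_better)]))

def grey_grade(row, zeta=0.5):
    # Grey relational coefficient vs the ideal (normalized value 1) per response;
    # with min-max normalization the global delta range is exactly [0, 1].
    deltas = [1.0 - v for v in row]
    dmin, dmax = 0.0, 1.0
    return sum((dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas) / len(deltas)

grades = {k: grey_grade(r) for k, r in zip(runs, norm)}
best = max(grades, key=grades.get)
```

With a Taguchi orthogonal array, the grades would further be averaged per factor level to pick the optimum setting of each parameter.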

  19. On some other preferred method for optimizing the welded joint

    Directory of Open Access Journals (Sweden)

    Pejović Branko B.

    2016-01-01

    Full Text Available The paper presents an optimization of the dimensions of a characteristic loaded welded joint with respect to welding costs. In the first stage, the variables and constant parameters are defined, and the mathematical form of the objective function is determined. The next stage defines the most important constraint functions limiting the design of the structure, which the technologist and the designer should take into account. Subsequently, a mathematical optimization model of the problem is derived and solved efficiently by the proposed method of geometric programming. A mathematically grounded optimization algorithm for the proposed method is then developed, with a main set of equations defining the problem that are valid under certain conditions. The primal optimization task is thus reduced, through a corresponding function, to a dual task that is easier to solve than the primal minimization of the objective function, chiefly because it leads to a set of linear equations. The approach exploits the correlation between the optimal primal vector that minimizes the objective function and the dual vector that maximizes the dual function. The method is illustrated on a practical computational example with different numbers of constraint functions. It is shown that, for cases of lower complexity, a solution is reached by maximizing the dual function through mathematical analysis and differential calculus.
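    The primal-dual mechanism of geometric programming can be shown on a toy posynomial with zero degree of difficulty (not the paper's welded-joint model): the normality and orthogonality conditions are a small linear system that fixes the dual weights outright, and the dual value equals the primal optimum.

```python
# Toy posynomial: minimize f(x) = c1/x + c2*x over x > 0.
c1, c2 = 8.0, 2.0

# Dual: maximize v(d) = (c1/d1)**d1 * (c2/d2)**d2 subject to
#   normality:      d1 + d2 = 1
#   orthogonality:  -d1 + d2 = 0   (from the exponents -1 and +1 of x)
# Zero degree of difficulty: the linear system fixes the weights exactly.
d1 = d2 = 0.5
v = (c1 / d1) ** d1 * (c2 / d2) ** d2        # dual optimum = primal optimum

# Cross-check by direct minimization: f'(x) = 0 gives x* = sqrt(c1/c2).
x_star = (c1 / c2) ** 0.5
f_star = c1 / x_star + c2 * x_star
```

With more terms than the linear conditions can pin down (positive degree of difficulty), the dual becomes a genuine maximization, but it remains better behaved than the primal.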

  20. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
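    The GCV function used here to pick the regularization parameter can be illustrated on plain ridge regression, a stand-in for the semilinear source inversion: GCV(λ) = n‖(I − A(λ))y‖² / (n − tr A(λ))², minimized over a grid of λ. The data and grid below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 8
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + 0.5 * rng.normal(size=n)

def gcv(lam):
    # Influence matrix A(lam) = X (X^T X + lam I)^{-1} X^T of ridge regression;
    # trace(A) plays the role of the effective degrees of freedom of the fit.
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return n * (resid @ resid) / (n - np.trace(A)) ** 2

lams = np.logspace(-4, 3, 50)
lam_best = min(lams, key=gcv)
```

The UPRE criterion would replace the denominator with an unbiased risk estimate but is minimized over the same grid in the same way.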

  1. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
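    Of the global search algorithms listed, simulated annealing is the simplest to sketch. The toy below minimizes a multimodal surface (a generic stand-in for a nonconvex groundwater management objective); temperature schedule, step size, and test function are all assumptions.

```python
import math
import random

def anneal(f, x0, step=0.5, t0=2.0, cooling=0.995, iters=4000, seed=7):
    # Simulated annealing: always accept downhill moves, accept uphill
    # moves with probability exp(-delta / T), and cool T geometrically.
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best, fbest = list(x0), f(x0)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Multimodal test surface (hypothetical): global minimum 0 at the origin.
rastrigin = lambda x: sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)
best, fbest = anneal(rastrigin, [3.0, -2.5])
```

A genetic algorithm or tabu search would slot into the same loop position: propose, evaluate, accept by its own rule.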

  2. The UIC 406 capacity method used on single track sections

    DEFF Research Database (Denmark)

    Landex, Alex; Kaas, Anders H.; Jacobsen, Erik M.

    2007-01-01

    This paper describes the relatively new UIC 406 capacity method which is an easy and effective way of calculating capacity consumption on railway lines. However, it is possible to expound the method in different ways which can lead to different capacity consumptions. This paper describes the UIC...... 406 method for single track lines and how it is expounded in Denmark. Many capacity analyses using the UIC 406 capacity method for double track lines have been carried out and presented internationally but only few capacity analyses using the UIC 406 capacity method on single track lines have been...... presented. Therefore, the differences between capacity analysis for double track lines and single track lines are discussed in the beginning of this paper. Many of the principles of the UIC 406 capacity analyses on double track lines can be used on single track lines – at least when more than one train...

  3. Methods for forming particles from single source precursors

    Science.gov (United States)

    Fox, Robert V [Idaho Falls, ID; Rodriguez, Rene G [Pocatello, ID; Pak, Joshua [Pocatello, ID

    2011-08-23

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  4. ROTAX: a nonlinear optimization program by axes rotation method

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1977-09-01

    A nonlinear optimization program employing the axes rotation method has been developed for solving nonlinear problems subject to nonlinear inequality constraints, and its stability and convergence efficiency were examined. The axes rotation method is a direct search for the optimum point that rotates the orthogonal coordinate system toward the direction giving the minimum objective. The search direction is rotated freely in multi-dimensional space, so the method is effective for problems whose contours have deep curved valleys. In applying the axes rotation method to optimization problems subject to nonlinear inequality constraints, an improved version of R.R. Allran and S.E.J. Johnsen's method is used, which deals with a new objective function composed of the original objective and a penalty term accounting for the inequality constraints. The program is incorporated in the optimization code system SCOOP. (auth.)
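    A rotating-coordinate search of this family (in the style of Rosenbrock's method, not the ROTAX code itself) can be sketched as follows: probe along an orthonormal set of directions, expand successful steps and reverse/contract failed ones, then rotate the axes so the first direction follows the accumulated successful displacement. The test function, step factors, and iteration counts are illustrative assumptions; a penalty term as in the abstract would simply be added to f.

```python
import numpy as np

def rotating_axes_minimize(f, x0, step=0.5, expand=3.0, contract=0.5,
                           stages=30, cycles=40):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    D = np.eye(n)                      # rows are the current search directions
    for _ in range(stages):
        steps = np.full(n, step)
        lam = np.zeros(n)              # net signed progress along each direction
        for _ in range(cycles):
            for i in range(n):
                trial = x + steps[i] * D[i]
                if f(trial) < f(x):
                    x = trial
                    lam[i] += steps[i]
                    steps[i] *= expand          # reward success
                else:
                    steps[i] *= -contract       # reverse and shrink
        if np.linalg.norm(lam) > 1e-12:
            # Re-orient: orthonormalize a_i = sum_{j>=i} lam_j d_j (via QR),
            # so the first new axis points along the total successful move.
            A = np.array([(lam[i:, None] * D[i:]).sum(axis=0) for i in range(n)])
            Q, _ = np.linalg.qr(A.T)
            D = Q.T
    return x

f = lambda v: (v[0] - 1.0) ** 2 + 5.0 * (v[0] + v[1] - 3.0) ** 2
x = rotating_axes_minimize(f, [0.0, 0.0])
```

The rotation is what lets the search track a curved valley that is not aligned with the coordinate axes.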

  5. A method for optimizing the performance of buildings

    DEFF Research Database (Denmark)

    Pedersen, Frank

    2007-01-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building...... needed for solving the optimization problem. Furthermore, the algorithm uses so-called domain constraint functions in order to ensure that the input to the simulation software is feasible. Using this technique avoids performing time-consuming simulations for unrealistic design decisions. The algorithm......

  6. A Novel Optimal Control Method for Impulsive-Correction Projectile Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Ruisheng Sun

    2016-01-01

    Full Text Available This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is formulated as minimizing the number of working impulses and the control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These features make it difficult to find the global optimum by directly applying other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, a PSO mechanism is employed for the optimal setting of the impulsive control, with the time intervals between two neighboring lateral impulses as design variables, which keeps the optimization process brief. A modification of the basic PSO algorithm, linearly decreasing the inertia weight, is developed to improve the convergence speed of the optimization. In addition, a suboptimal control and guidance law based on the PSO technique is put forward for real-time online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamics model is used to validate the modified PSO control algorithm. The results of the comparative study illustrate that the proposed optimal control algorithm obtains the optimal control efficiently and accurately and provides a reference approach to handling such impulsive-correction problems.
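    The linearly decreasing inertia weight mentioned in the abstract is the only modification in the sketch below, which is otherwise textbook PSO; the objective (a sphere function), bounds, and coefficients are generic assumptions, not the paper's projectile model.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0),
                 w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=3):
    # PSO with inertia weight decreasing linearly from w_start to w_end.
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests
    g = min(P, key=f)[:]                        # global best
    for it in range(iters):
        w = w_start - (w_start - w_end) * it / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)
best = pso_minimize(sphere, dim=4)
```

Early in the run the large inertia weight favors exploration; as w shrinks, the swarm contracts around the best solution found.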

  7. A kriging metamodel-assisted robust optimization method based on a reverse model

    Science.gov (United States)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  8. Aerodynamic shape optimization using preconditioned conjugate gradient methods

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay

    1993-01-01

    In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA-012) airfoil in inviscid transonic flow and at zero degree angle-of-attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
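    Preconditioned conjugate gradients of the kind credited with the speedup above can be sketched on a symmetric positive definite linear system with a Jacobi (diagonal) preconditioner; the test matrix is synthetic, not an aerodynamic design system.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    # Conjugate gradients with a Jacobi preconditioner M = diag(A).
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # new conjugate search direction
        rz = rz_new
    return x

# SPD test system with a wide range of diagonal scales.
n = 50
rng = np.random.default_rng(1)
Q = rng.normal(size=(n, n)) / np.sqrt(n)
A = Q @ Q.T + np.diag(np.linspace(1.0, 100.0, n))
b = rng.normal(size=n)
x = pcg(A, b, 1.0 / np.diag(A))
```

The diagonal scaling evens out the spread of eigenvalues, which is what cuts the iteration count relative to plain CG.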

  9. SOLVING ENGINEERING OPTIMIZATION PROBLEMS WITH THE SWARM INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available An important stage in the design of aerospace systems and structures is the optimization of their main characteristics. The results of four constrained optimization problems related to the design of various technical systems (welded beam, pressure vessel, gear, spring) are presented. The purpose of each task is to minimize the cost and weight of the construction. The objective functions in these practical optimization problems are nonlinear functions of many variables with complex, heavily indented level surfaces, so the classical approach to extremum seeking is inefficient. Hence the need for optimization methods that can find a near-optimal solution in acceptable time with minimal computational cost. Such methods include the methods of swarm intelligence: the spiral dynamics algorithm, stochastic diffusion search, and the hybrid seeker optimization algorithm. Swarm intelligence methods are designed so that a swarm of agents carries out the search for the extremum. In searching for the extremum point, the particles exchange information and draw on their own experience as well as the experience of the population leader and of the neighbors in some area. To solve the listed problems, a program complex has been designed, whose efficiency is illustrated by the solutions of the four applied problems. Each of the considered applied optimization problems is solved with all three chosen methods. The obtained numerical results can be compared with those found by a particle swarm method. The author gives recommendations on choosing method parameters and the penalty function value that accounts for the inequality constraints.

  10. A new method of preparing single-walled carbon nanotubes

    Indian Academy of Sciences (India)

    A novel method of purification for single-walled carbon nanotubes, prepared by an arc-discharge method, is described. The method involves a combination of acid washing followed by high temperature hydrogen treatment to remove the metal nanoparticles and amorphous carbon present in the as-synthesized single-walled ...

  11. A Finite Element Removal Method for 3D Topology Optimization

    Directory of Open Access Journals (Sweden)

    M. Akif Kütük

    2013-01-01

    Full Text Available Topology optimization provides great convenience to designers during the design stage in many industrial applications. With this method, designers can obtain a rough model of any part at the beginning of the design stage by defining loading and boundary conditions. At the same time, the optimization can be used for the modification of a product already in use. A lengthy solution time is a disadvantage of this method, which has kept it from becoming widespread. In order to eliminate this disadvantage, an element removal algorithm has been developed for topology optimization. In this study, the element removal algorithm is applied to 3-dimensional parts, and the results are compared with those available in the related literature. In addition, the effects of the method on solution times are investigated.

  12. An analytical optimization method for electric propulsion orbit transfer vehicles

    International Nuclear Information System (INIS)

    Oleson, S.R.

    1993-01-01

    Due to electric propulsion's inherent propellant mass savings over chemical propulsion, electric propulsion orbit transfer vehicles (EPOTVs) are a highly efficient mode of orbit transfer. When selecting an electric propulsion device (ion, MPD, or arcjet) and propellant for a particular mission, it is preferable to use quick, analytical system optimization methods instead of time intensive numerical integration methods. It is also of interest to determine each thruster's optimal operating characteristics for a specific mission. Analytical expressions are derived which determine the optimal specific impulse (Isp) for each type of electric thruster to maximize payload fraction for a desired thrusting time. These expressions take into account the variation of thruster efficiency with specific impulse. Verification of the method is made with representative electric propulsion values on a LEO-to-GEO mission. Application of the method to specific missions is discussed
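    The trade the abstract describes, in which higher Isp saves propellant but drives up power-plant mass, can be reproduced with a Stuhlinger-type mass model and a one-dimensional search over Isp. The mass model and every number below (delta-v, thrust time, specific power-plant mass alpha, efficiency eta) are illustrative assumptions, not the paper's derivation, which is analytical and includes efficiency varying with Isp.

```python
import math

def payload_fraction(isp, dv=5000.0, t=180 * 86400.0, alpha=0.02, eta=0.6, g=9.81):
    # Illustrative mass model: rocket-equation propellant fraction plus a
    # power-plant mass proportional to jet power (alpha in kg/W, t = thrust time).
    c = g * isp                               # exhaust velocity, m/s
    prop = 1.0 - math.exp(-dv / c)            # propellant mass fraction
    plant = alpha * c * c * prop / (2.0 * eta * t)
    return math.exp(-dv / c) - plant          # remaining fraction is payload

isps = range(500, 20001, 50)                  # candidate specific impulses, s
isp_opt = max(isps, key=payload_fraction)
```

For this model the optimal exhaust velocity sits near the characteristic velocity sqrt(2*eta*t/alpha), which is the closed-form shortcut the analytical approach exploits.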

  13. Improving Battery Reactor Core Design Using Optimization Method

    International Nuclear Information System (INIS)

    Son, Hyung M.; Suh, Kune Y.

    2011-01-01

    The Battery Omnibus Reactor Integral System (BORIS) is a small modular fast reactor being designed at Seoul National University to satisfy various energy demands, to maintain inherent safety by liquid-metal coolant lead for natural circulation heat transport, and to improve power conversion efficiency with the Modular Optimal Balance Integral System (MOBIS) using the supercritical carbon dioxide as working fluid. This study is focused on developing the Neutronics Optimized Reactor Analysis (NORA) method that can quickly generate conceptual design of a battery reactor core by means of first principle calculations, which is part of the optimization process for reactor assembly design of BORIS

  14. Polyhedral and semidefinite programming methods in combinatorial optimization

    CERN Document Server

    Tunçel, Levent

    2010-01-01

    Since the early 1960s, polyhedral methods have played a central role in both the theory and practice of combinatorial optimization. Since the early 1990s, a new technique, semidefinite programming, has been increasingly applied to some combinatorial optimization problems. The semidefinite programming problem is the problem of optimizing a linear function of matrix variables, subject to finitely many linear inequalities and the positive semidefiniteness condition on some of the matrix variables. On certain problems, such as maximum cut, maximum satisfiability, maximum stable set and geometric r

  15. A QFD-based optimization method for a scalable product platform

    Science.gov (United States)

    Luo, Xinggang; Tang, Jiafu; Kwong, C. K.

    2010-02-01

    In order to incorporate the customer into the early phase of the product development cycle and to better satisfy customers' requirements, this article adopts quality function deployment (QFD) for optimal design of a scalable product platform. A five-step QFD-based method is proposed to determine the optimal values for platform engineering characteristics (ECs) and non-platform ECs of the products within a product family. First of all, the houses of quality (HoQs) for all product variants are developed and a QFD-based optimization approach is used to determine the optimal ECs for each product variant. Sensitivity analysis is performed for each EC with respect to overall customer satisfaction (OCS). Based on the obtained sensitivity indices of ECs, a mathematical model is established to simultaneously optimize the values of the platform and the non-platform ECs. Finally, by comparing and analysing the optimal solutions with different number of platform ECs, the ECs with which the worst OCS loss can be avoided are selected as platform ECs. An illustrative example is used to demonstrate the feasibility of this method. A comparison between the proposed method and a two-step approach is conducted on the example. The comparison shows that, as a kind of single-stage approach, the proposed method yields better average degree of customer satisfaction due to the simultaneous optimization of platform and non-platform ECs.

  16. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  17. Enhanced Multi-Objective Energy Optimization by a Signaling Method

    OpenAIRE

    Soares, João; Borges, Nuno; Vale, Zita; Oliveira, P.B.

    2016-01-01

    In this paper, three metaheuristics are used to solve a smart grid multi-objective energy management problem with conflicting objectives: maximizing profits and minimizing carbon dioxide (CO2) emissions; the results are compared. The metaheuristics implemented are: weighted particle swarm optimization (W-PSO), multi-objective particle swarm optimization (MOPSO) and non-dominated sorting genetic algorithm II (NSGA-II). The performance of these methods with the use of multi-dimensi...

  18. Efficient solution method for optimal control of nuclear systems

    International Nuclear Information System (INIS)

    Naser, J.A.; Chambre, P.L.

    1981-01-01

    To improve the utilization of existing fuel sources, the use of optimization techniques is becoming more important. A technique for solving systems of coupled ordinary differential equations with initial, boundary, and/or intermediate conditions is given. This method has a number of inherent advantages over existing techniques as well as being efficient in terms of computer time and space requirements. An example of computing the optimal control for a spatially dependent reactor model with and without temperature feedback is given. 10 refs

  19. Optimal layout of radiological environment monitoring based on TOPSIS method

    International Nuclear Information System (INIS)

    Li Sufen; Zhou Chunlin

    2006-01-01

    TOPSIS is a method for multi-objective decision-making which can be applied to comprehensive assessment of environmental quality. This paper adopts it to determine the optimal layout of radiological environment monitoring. The method is shown to be correct, simple, convenient and practical, and helps supervision departments to scientifically and reasonably lay out radiological environment monitoring sites. (authors)
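    The TOPSIS ranking the record refers to can be sketched in a few lines. The monitoring-site criteria, weights and numbers below are hypothetical; only the ranking mechanics (vector normalization, distance to ideal and anti-ideal solutions, closeness coefficient) reflect the method:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness to the ideal solution.
    matrix: rows = alternatives, cols = criteria.
    benefit[j]: True if larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores

# hypothetical monitoring-site data: columns = coverage, population served, cost
sites = [[0.8, 120, 30], [0.6, 200, 25], [0.9, 90, 40]]
scores = topsis(sites, weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
best = max(range(len(sites)), key=scores.__getitem__)
```

Sites are ranked by the closeness coefficient; the alternative nearest the ideal solution and farthest from the anti-ideal one ranks first.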

  20. Primal-Dual Interior Point Multigrid Method for Topology Optimization

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Mohammed, S.

    2016-01-01

    Roč. 38, č. 5 (2016), B685-B709 ISSN 1064-8275 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : topology optimization * multigrid methods * interior point method Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf

  1. Optimization method for quantitative calculation of clay minerals in soil

    Indian Academy of Sciences (India)

    However, no reliable method for quantitative analysis of clay minerals has been established so far. In this study, an attempt was made to propose an optimization method for the quantitative ... 2. Basic principles. The mineralogical constitution of soil is rather complex. ... K2O, MgO, and TFe as variables for the calculation.

  2. A new hybrid optimization method inspired from swarm intelligence: Fuzzy adaptive swallow swarm optimization algorithm (FASSO)

    Directory of Open Access Journals (Sweden)

    Mehdi Neshat

    2015-11-01

    Full Text Available In this article, the objective was to present effective and optimal strategies aimed at improving the Swallow Swarm Optimization (SSO) method. The SSO is one of the best optimization methods based on swarm intelligence and is inspired by the intelligent behaviors of swallows. It offers a relatively strong method for solving optimization problems. However, despite its many advantages, the SSO suffers from two shortcomings. Firstly, particle movement speed is not controlled satisfactorily during the search due to the lack of an inertia weight. Secondly, the acceleration coefficients are not able to strike a balance between local and global searches because they are not sufficiently flexible in complex environments. Therefore, the SSO algorithm does not provide adequate results when it searches in functions such as the Step or Quadric function. Hence, the fuzzy adaptive Swallow Swarm Optimization (FASSO) method was introduced to deal with these problems. Highly accurate results are obtained by using an adaptive inertia weight and by combining two fuzzy logic systems to accurately calculate the acceleration coefficients. High speed of convergence, avoidance of falling into local extrema, and a high level of error tolerance are the advantages of the proposed method. The FASSO was compared with eleven of the best PSO methods and with SSO on 18 benchmark functions. Finally, significant results were obtained.

  3. Deterministic methods for multi-control fuel loading optimization

    Science.gov (United States)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  4. Shape optimization of high power centrifugal compressor using multi-objective optimal method

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Soo; Lee, Jeong Min; Kim, Youn Jea [School of Mechanical Engineering, Sungkyunkwan University, Seoul (Korea, Republic of)

    2015-03-15

    In this study, a method for optimal design of impeller and diffuser blades in the centrifugal compressor using response surface method (RSM) and multi-objective genetic algorithm (MOGA) was evaluated. A numerical simulation was conducted using ANSYS CFX with various values of impeller and diffuser parameters, which consist of leading edge (LE) angle, trailing edge (TE) angle, and blade thickness. Each of the parameters was divided into three levels. A total of 45 design points were planned using central composite design (CCD), which is one of the design of experiment (DOE) techniques. Response surfaces that were generated on the basis of the results of DOE were used to determine the optimal shape of impeller and diffuser blade. The entire process of optimization was conducted using ANSYS Design Xplorer (DX). Through the optimization, isentropic efficiency and pressure recovery coefficient, which are the main performance parameters of the centrifugal compressor, were increased by 0.3 and 5, respectively.

  5. Shape optimization of high power centrifugal compressor using multi-objective optimal method

    International Nuclear Information System (INIS)

    Kang, Hyun Soo; Lee, Jeong Min; Kim, Youn Jea

    2015-01-01

    In this study, a method for optimal design of impeller and diffuser blades in the centrifugal compressor using response surface method (RSM) and multi-objective genetic algorithm (MOGA) was evaluated. A numerical simulation was conducted using ANSYS CFX with various values of impeller and diffuser parameters, which consist of leading edge (LE) angle, trailing edge (TE) angle, and blade thickness. Each of the parameters was divided into three levels. A total of 45 design points were planned using central composite design (CCD), which is one of the design of experiment (DOE) techniques. Response surfaces that were generated on the basis of the results of DOE were used to determine the optimal shape of impeller and diffuser blade. The entire process of optimization was conducted using ANSYS Design Xplorer (DX). Through the optimization, isentropic efficiency and pressure recovery coefficient, which are the main performance parameters of the centrifugal compressor, were increased by 0.3 and 5, respectively

  6. A novel optimal coordinated control strategy for the updated robot system for single port surgery.

    Science.gov (United States)

    Bai, Weibang; Cao, Qixin; Leng, Chuntao; Cao, Yang; Fujie, Masakatsu G; Pan, Tiewen

    2017-09-01

    Research into robotic systems for single port surgery (SPS) has become widespread around the world in recent years. A new robot arm system for SPS was developed, but its positioning platform and other hardware components were not efficient. Special features of the developed surgical robot system make good teleoperation with safety and efficiency difficult. A robot arm is combined and used as the new positioning platform, and remote center motion is realized by a new method using active motion control. A new mapping strategy based on kinematics computation and a novel optimal coordinated control strategy, based on real-time approach to a defined anthropopathic criterion configuration referring to the customary ease state of human arms and especially the configuration of boxers' habitual preparation posture, are developed. The hardware components, control architecture, control system, and mapping strategy of the robotic system have been updated. A novel optimal coordinated control strategy is proposed and tested. The new robot system can be more dexterous, intelligent, convenient and safer for preoperative positioning and intraoperative adjustment. The mapping strategy can achieve good following and representation for the slave manipulator arms. The proposed control strategy enables them to complete tasks with higher maneuverability, a lower possibility of self-interference, and freedom from singularities while teleoperating. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Optimal noise reduction in 3D reconstructions of single particles using a volume-normalized filter

    Science.gov (United States)

    Sindelar, Charles V.; Grigorieff, Nikolaus

    2012-01-01

    The high noise level found in single-particle electron cryo-microscopy (cryo-EM) image data presents a special challenge for three-dimensional (3D) reconstruction of the imaged molecules. The spectral signal-to-noise ratio (SSNR) and related Fourier shell correlation (FSC) functions are commonly used to assess and mitigate the noise-generated error in the reconstruction. Calculation of the SSNR and FSC usually includes the noise in the solvent region surrounding the particle and therefore does not accurately reflect the signal in the particle density itself. Here we show that the SSNR in a reconstructed 3D particle map is linearly proportional to the fractional volume occupied by the particle. Using this relationship, we devise a novel filter (the “single-particle Wiener filter”) to minimize the error in a reconstructed particle map, if the particle volume is known. Moreover, we show how to approximate this filter even when the volume of the particle is not known, by optimizing the signal within a representative interior region of the particle. We show that the new filter improves on previously proposed error-reduction schemes, including the conventional Wiener filter as well as figure-of-merit weighting, and quantify the relationship between all of these methods by theoretical analysis as well as numeric evaluation of both simulated and experimentally collected data. The single-particle Wiener filter is applicable across a broad range of existing 3D reconstruction techniques, but is particularly well suited to the Fourier inversion method, leading to an efficient and accurate implementation. PMID:22613568
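    The core relationship reported above (particle-region SSNR proportional to the fractional particle volume) suggests a simple per-shell weighting. The sketch below uses the standard SSNR = 2·FSC/(1−FSC) conversion; the helper name and the exact rescaling are illustrative assumptions, not the authors' implementation:

```python
def single_particle_wiener_weight(fsc, f_volume):
    """Illustrative per-shell Wiener-style weight.

    fsc: Fourier shell correlation of the reconstruction in this shell.
    f_volume: fraction of the reconstruction box occupied by the particle.
    Converts FSC to map SSNR via SSNR = 2*FSC/(1-FSC), then rescales by
    1/f_volume per the reported proportionality between SSNR and the
    fractional particle volume.
    """
    eps = 1e-6
    fsc = min(max(fsc, 0.0), 1.0 - eps)
    ssnr_map = 2.0 * fsc / (1.0 - fsc)            # SSNR of the full map
    ssnr_particle = ssnr_map / f_volume           # SSNR within the particle
    return ssnr_particle / (ssnr_particle + 1.0)  # Wiener weight in [0, 1)

# a small particle (25% of the box) is down-weighted less at the same FSC
w_full = single_particle_wiener_weight(0.5, 1.0)
w_small = single_particle_wiener_weight(0.5, 0.25)
```

The qualitative point matches the abstract: for a particle occupying a small fraction of the box, the particle-region SSNR is higher than the raw map SSNR suggests, so less aggressive filtering is optimal.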

  8. An Optimization Method for Condition Based Maintenance of Aircraft Fleet Considering Prognostics Uncertainty

    Directory of Open Access Journals (Sweden)

    Qiang Feng

    2014-01-01

    Full Text Available An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into a combinatorial optimization problem over single aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and a fleet health status matrix is established. The calculation method for the costs and risks of a mission based on the health status matrix and maintenance matrix is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that it can realize optimization and control of the aircraft fleet oriented to mission success.

  9. A simple optimization can improve the performance of single feature polymorphism detection by Affymetrix expression arrays

    Directory of Open Access Journals (Sweden)

    Fujisawa Hironori

    2010-05-01

    Full Text Available Abstract Background High-density oligonucleotide arrays are effective tools for genotyping numerous loci simultaneously. In small genome species (genome size: Results We compared the single feature polymorphism (SFP) detection performance of whole-genome and transcript hybridizations using the Affymetrix GeneChip® Rice Genome Array, using the rice cultivars with full genome sequence, japonica cultivar Nipponbare and indica cultivar 93-11. Both genomes were surveyed for all probe target sequences. Only completely matched 25-mer single copy probes of the Nipponbare genome were extracted, and SFPs between them and 93-11 sequences were predicted. We investigated optimum conditions for SFP detection in both whole-genome and transcript hybridization using differences between perfect match and mismatch probe intensities of non-polymorphic targets, assuming that these differences are representative of those between mismatch and perfect targets. Several statistical methods of SFP detection by whole-genome hybridization were compared under the optimized conditions. Causes of false positives and negatives in SFP detection in both types of hybridization were investigated. Conclusions The optimizations allowed a more than 20% increase in true SFP detection in whole-genome hybridization and a large improvement of SFP detection performance in transcript hybridization. Significance analysis of the microarray for log-transformed raw intensities of PM probes gave the best performance in whole-genome hybridization, and 22,936 true SFPs were detected with 23.58% false positives by whole-genome hybridization. For transcript hybridization, stable SFP detection was achieved for highly expressed genes, and about 3,500 SFPs were detected at a high sensitivity (> 50%) in both shoot and young panicle transcripts. High SFP detection performances of both genome and transcript hybridizations indicated that microarrays of a complex genome (e.g., of Oryza sativa can be

  10. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing Keff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been developed to optimize reactor core loading patterns. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions. It uses the theory of Newtonian physics, and the searcher agents are a collection of masses. In this work, as a first step, the GSA method is compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, to find the best core, the GSA algorithm is performed for three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimizations with the following goals are considered: increasing the multiplication factor (Keff), decreasing the power peaking factor (PPF) and flattening the power density. Notably, for the neutronic calculations, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.

  11. METHOD OF CALCULATING THE OPTIMAL HEAT EMISSION GEOTHERMAL WELLS

    Directory of Open Access Journals (Sweden)

    A. I. Akaev

    2015-01-01

    Full Text Available This paper presents a simplified method for calculating the optimal regimes of fountain (free-flowing) and pumping exploitation of geothermal wells, reducing scaling and corrosion during operation. Comparative characteristics are given to quantify the heat extracted from the formation for these methods of operation under the same wellhead pressure. The problem is solved by a graphic-analytical method based on a pressure balance in the well with the heat pump.

  12. Non-linear programming method in optimization of fast reactors

    International Nuclear Information System (INIS)

    Pavelesku, M.; Dumitresku, Kh.; Adam, S.

    1975-01-01

    Application of non-linear programming methods to the optimization of nuclear material distribution in fast reactors is discussed. The programming task is formulated on the basis of a reactor calculation that depends on the fuel distribution strategy. As an illustration of this method's application, the solution of a simple example is given. The non-linear program is solved using the numerical method SUMT. (I.T.)

  13. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    OpenAIRE

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, fuzzy control theory is applied to inventory control in the supply chain between multiple companies. The proposed control method deals with the amount of inventories in the supply chain between multiple companies. Referring to past demand and tardiness, inventory amounts of raw materials are determined by fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain using the simulated annealing (SA) method. The variation of ...

  14. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    Science.gov (United States)

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method-named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.

  15. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Directory of Open Access Journals (Sweden)

    Leonas Jasevičius

    2011-03-01

    Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to achieve the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the selection of the initial raster image skeleton filter was assessed. Article in Lithuanian

  16. On projection methods, convergence and robust formulations in topology optimization

    DEFF Research Database (Denmark)

    Wang, Fengwen; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    Mesh convergence and manufacturability of topology optimized designs have previously mainly been assured using density or sensitivity based filtering techniques. The drawback of these techniques has been gray transition regions between solid and void parts, but this problem has recently been alleviated using various projection methods. In this paper we show that simple projection methods do not ensure local mesh-convergence and propose a modified robust topology optimization formulation based on erosion, intermediate and dilation projections that ensures both global and local mesh-convergence.

  17. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-08

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N.Collier, A.-L.Haji-Ali, F. Nobile, and R. Tempone.
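    The telescoping estimator that hierarchy optimization is built on can be sketched as follows. The geometric sample-count decay is a stand-in for the optimal variance/cost allocation discussed in the talk, and the coupled sampler below is a toy, not an SDE or PDE discretization:

```python
import random
import statistics

def mlmc(sampler, n_levels, n0=1000):
    """Minimal multilevel Monte Carlo estimator sketch.
    Telescoping identity: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].
    sampler(l, rng) must return a coupled pair (P_l, P_{l-1}) computed from
    the SAME underlying randomness; P_{-1} is taken as 0."""
    rng = random.Random(7)
    estimate = 0.0
    for l in range(n_levels):
        n = max(n0 // 2 ** l, 10)  # geometric decay, stand-in for optimal counts
        diffs = [fine - coarse
                 for fine, coarse in (sampler(l, rng) for _ in range(n))]
        estimate += statistics.fmean(diffs)
    return estimate

def sampler(l, rng):
    """Toy coupled sampler: level-l approximation of E[X^2], X ~ N(0,1),
    obtained by rounding the SAME draw of X to a grid of spacing 2**-l."""
    x = rng.gauss(0.0, 1.0)
    def p(level):
        h = 2.0 ** (-level)
        return (round(x / h) * h) ** 2
    return p(l), (p(l - 1) if l > 0 else 0.0)

est = mlmc(sampler, n_levels=5)  # close to E[X^2] = 1
```

Because the level differences shrink with l, far fewer samples are needed on the expensive fine levels, which is the source of the MLMC cost savings.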

  18. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-01

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N.Collier, A.-L.Haji-Ali, F. Nobile, and R. Tempone.

  19. Exergetic optimization of turbofan engine with genetic algorithm method

    Energy Technology Data Exchange (ETDEWEB)

    Turan, Onder [Anadolu University, School of Civil Aviation (Turkey)], e-mail: onderturan@anadolu.edu.tr

    2011-07-01

    With the growth of passenger numbers, emissions from the aeronautics sector are increasing and the industry is now working on improving engine efficiency to reduce fuel consumption. The aim of this study is to present the use of genetic algorithms, an optimization method based on biological principles, to optimize the exergetic performance of turbofan engines. The optimization was carried out using exergy efficiency, overall efficiency and specific thrust of the engine as evaluation criteria, varying fan pressure ratio, bypass ratio, turbine inlet temperature and flight altitude. Results showed exergy efficiency can be maximized with higher altitudes, fan pressure ratio and turbine inlet temperature; the turbine inlet temperature is the most important parameter for increased exergy efficiency. This study demonstrated that genetic algorithms are effective in optimizing complex systems in a short time.
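    A genetic-algorithm search of the kind described above can be sketched in a few lines. All constants (population size, crossover/mutation rates) are illustrative, and the toy objective stands in for the paper's turbofan exergy model:

```python
import random

def ga_maximize(f, bounds, pop=30, gens=60, pc=0.9, pm=0.1, seed=11):
    """Tiny real-coded genetic algorithm sketch (maximization) with binary
    tournament selection, blend crossover, uniform mutation and elitism."""
    rng = random.Random(seed)
    dim = len(bounds)
    popu = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        fit = [f(p) for p in popu]
        def pick():  # binary tournament selection
            i, j = rng.randrange(pop), rng.randrange(pop)
            return popu[i] if fit[i] > fit[j] else popu[j]
        nxt = [list(popu[max(range(pop), key=fit.__getitem__)])]  # elitism
        while len(nxt) < pop:
            a, b = pick(), pick()
            if rng.random() < pc:  # blend crossover
                t = rng.random()
                child = [t * u + (1 - t) * w for u, w in zip(a, b)]
            else:
                child = list(a)
            if rng.random() < pm:  # uniform mutation of one gene
                d = rng.randrange(dim)
                child[d] = rng.uniform(*bounds[d])
            nxt.append(child)
        popu = nxt
    best = max(popu, key=f)
    return best, f(best)

# toy objective peaking at (0.8, 0.6) on the unit box
best, val = ga_maximize(lambda p: -(p[0] - 0.8) ** 2 - (p[1] - 0.6) ** 2,
                        [(0.0, 1.0), (0.0, 1.0)])
```

In the study's setting the chromosome would hold design variables such as pressure ratio, bypass ratio, turbine inlet temperature and altitude, with exergy efficiency as the fitness.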

  20. Coordinated Optimal Operation Method of the Regional Energy Internet

    Directory of Open Access Journals (Sweden)

    Rishang Long

    2017-05-01

    Full Text Available The development of the energy internet has become one of the key ways to solve the energy crisis. This paper studies the system architecture, energy flow characteristics and coordinated optimization method of the regional energy internet. Considering the heat-to-electric ratio of a combined cooling, heating and power unit, energy storage life and real-time electricity price, a double-layer optimal scheduling model is proposed, which includes economic and environmental benefits in the upper layer and energy efficiency in the lower layer. A particle swarm optimizer–individual variation ant colony optimization algorithm is used to achieve computational efficiency and accuracy. Through calculation and simulation of the simulated system, an optimal dispatching scheme achieving energy savings, environmental protection and economy is realized.

  1. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing of a platoon should be shortened when crossing the street. The best method to deal with this problem is automatic control of vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement. A systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the platoon model is complex and the parameters are coupled with each other. In this paper, the particle swarm optimization method is introduced to effectively optimize the parameters of the platoon. The proposed method effectively finds the optimal parameters based on simulations and makes the platoon spacing shorter.
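    The parameter-calibration loop described above can be sketched with a standard PSO. The inertia and acceleration constants are illustrative, and the quadratic objective is a hypothetical stand-in for the platoon simulation error that the paper minimizes:

```python
import random

def pso(f, bounds, n_particles=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimization sketch (minimization).
    f is the calibration error as a function of the coupled parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_f = [f(p) for p in x]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(x[i]), fi
                if fi < gbest_f:
                    gbest, gbest_f = list(x[i]), fi
    return gbest, gbest_f

# hypothetical stand-in for the platoon simulation error: quadratic bowl
# around assumed "true" control gains
target = [1.2, 0.4]
gains, err = pso(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)),
                 [(0.0, 3.0), (0.0, 2.0)])
```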

  2. Method: a single nucleotide polymorphism genotyping method for Wheat streak mosaic virus

    Science.gov (United States)

    2012-01-01

    Background The September 11, 2001 attacks on the World Trade Center and the Pentagon increased the concern about the potential for terrorist attacks on many vulnerable sectors of the US, including agriculture. The concentrated nature of crops, easily obtainable biological agents, and highly detrimental impacts make agroterrorism a potential threat. Although procedures for an effective criminal investigation and attribution following such an attack are available, important enhancements are still needed, one of which is the capability for fine discrimination among pathogen strains. The purpose of this study was to develop a molecular typing assay for use in a forensic investigation, using Wheat streak mosaic virus (WSMV) as a model plant virus. Method This genotyping technique utilizes single base primer extension to generate a genetic fingerprint. Fifteen single nucleotide polymorphisms (SNPs) within the coat protein and helper component-protease genes were selected as the genetic markers for this assay. Assay optimization and sensitivity testing were conducted using synthetic targets. WSMV strains and field isolates were collected from regions around the world and used to evaluate the assay for discrimination. The assay specificity was tested against a panel of near-neighbors consisting of genetic and environmental near-neighbors. Result Each WSMV strain or field isolate tested produced a unique SNP fingerprint, with the exception of three isolates collected within the same geographic location that produced indistinguishable fingerprints. The results were consistent among replicates, demonstrating the reproducibility of the assay. No SNP fingerprints were generated from organisms included in the near-neighbor panel, suggesting the assay is specific for WSMV. Using synthetic targets, a complete profile could be generated from as low as 7.15 fmoles of cDNA. Conclusion The molecular typing method presented is one tool that could be incorporated into the forensic

  3. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with dimension N=30 have shown that IRPEO is competitive with or even better than various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
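    The key operations listed above (worst-first ranking, power-law selection of bad elements, uniform random mutation, unconditional acceptance) can be sketched as follows. This loosely follows the IRPEO recipe rather than reproducing it; tau and the other constants are illustrative:

```python
import random

def irpeo_like(f, bounds, pop=20, iters=300, tau=1.5, seed=5):
    """Population-based extremal optimization sketch (minimization):
    rank individuals worst-first, pick a "bad" one with P(rank k) ~ k**(-tau),
    mutate it by uniform random resampling, and accept unconditionally."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best, best_f = None, float("inf")
    for _ in range(iters):
        fit = [f(p) for p in x]
        i = min(range(pop), key=fit.__getitem__)
        if fit[i] < best_f:
            best, best_f = list(x[i]), fit[i]
        # worst-first ranking; select index via power-law over rank
        order = sorted(range(pop), key=fit.__getitem__, reverse=True)
        w = [(k + 1) ** (-tau) for k in range(pop)]
        r = rng.random() * sum(w)
        acc, pick = 0.0, order[-1]
        for idx, wk in zip(order, w):
            acc += wk
            if r <= acc:
                pick = idx
                break
        # uniform random mutation of one coordinate, accepted unconditionally
        d = rng.randrange(dim)
        x[pick][d] = rng.uniform(*bounds[d])
    return best, best_f

# stand-in objective: 2-D sphere function
xb, fb = irpeo_like(lambda p: sum(u * u for u in p), [(-5.0, 5.0)] * 2)
```

The unconditional acceptance (no selection pressure on the replacement itself) is what distinguishes EO-style methods from hill climbers; the power-law exponent tau controls how strongly the worst elements are targeted.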

  4. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    Science.gov (United States)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in order to obtain an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed based on the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumptions of different nozzle mounting methods were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  5. Cheap arbitrary high order methods for single integrand SDEs

    DEFF Research Database (Denmark)

    Debrabant, Kristian; Kværnø, Anne

    2017-01-01

    For a particular class of Stratonovich SDE problems, here denoted as single integrand SDEs, we prove that by applying a deterministic Runge-Kutta method of order $p_d$ we obtain methods converging in the mean-square and weak sense with order $\\lfloor p_d/2\\rfloor$. The reason is that the B-series...
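
    The idea can be illustrated on the simplest single-integrand form, dX = f(X) ∘ (dt + dW), which is an assumed special case for illustration: a deterministic Runge-Kutta method (here Heun, p_d = 2) is applied exactly as in the ODE case, with the random increment h + ΔW playing the role of the step size. For f(x) = λx the exact Stratonovich solution X_t = X_0·exp(λ(t + W_t)) gives a reference.

```python
import math
import random

def heun_single_integrand(f, x0, t_end, n_steps, rng):
    """Deterministic Heun method (order p_d = 2) applied to the single-integrand
    Stratonovich SDE dX = f(X) o (dt + dW): the random increment dz = h + dW
    simply replaces the deterministic step size. Illustrative form only."""
    x, w = x0, 0.0
    h = t_end / n_steps
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(h))
        w += dw
        dz = h + dw
        k1 = f(x)
        k2 = f(x + dz * k1)
        x = x + 0.5 * dz * (k1 + k2)
    return x, w

rng = random.Random(42)
lam, x0, t_end = 0.5, 1.0, 1.0
x_num, w = heun_single_integrand(lambda x: lam * x, x0, t_end, 2000, rng)
# for f(x) = lam*x the exact Stratonovich solution is known in closed form
x_exact = x0 * math.exp(lam * (t_end + w))
err = abs(x_num - x_exact)
```

    The appeal is that no stochastic Taylor machinery is needed; the deterministic scheme is reused verbatim on the random increments.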

  6. An Asymmetrical Space Vector Method for Single Phase Induction Motor

    DEFF Research Database (Denmark)

    Cui, Yuanhai; Blaabjerg, Frede; Andersen, Gert Karmisholt

    2002-01-01

    Single phase induction motors are the workhorses of low-power applications around the world, and variable speed operation is often necessary. Normally this is achieved either by a mechanical method or by controlling the capacitor connected with the auxiliary winding. Either of these methods has some drawback which...

  7. Design optimization of single mixed refrigerant LNG process using a hybrid modified coordinate descent algorithm

    Science.gov (United States)

    Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong

    2018-01-01

    Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed to cope with the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed optimization algorithm provided improved results compared to the other existing methodologies in finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed optimization algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed refrigerant based liquefaction process in the natural gas industry.
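
    Coordinate descent itself is easy to sketch. The code below is a generic box-constrained coordinate search with step shrinking, not the authors' hybrid modified algorithm, and the quadratic objective is a toy stand-in for the simulator-evaluated compression power.

```python
def coordinate_descent(f, x0, bounds, n_sweeps=60, step0=1.0, shrink=0.7):
    """Generic coordinate-descent sketch (not the exact HMCD of the paper):
    sweep over decision variables one at a time, try +/- step moves,
    keep improvements, and shrink the step when a full sweep fails."""
    x = list(x0)
    fx = f(x)
    step = step0
    for _ in range(n_sweeps):
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                xi = x[i]
                x[i] = min(max(xi + d, bounds[i][0]), bounds[i][1])  # clip to bounds
                ft = f(x)
                if ft < fx:
                    fx = ft
                    improved = True
                else:
                    x[i] = xi  # revert the trial move
        if not improved:
            step *= shrink  # refine the search once no coordinate move helps
    return x, fx

# usage: a toy surrogate for compression power as a function of two variables
quad = lambda x: (x[0] - 1.2) ** 2 + 2.0 * (x[1] + 0.7) ** 2
x_opt, f_opt = coordinate_descent(quad, [0.0, 0.0], [(-5, 5), (-5, 5)])
```

    Because each trial changes only one variable, this pattern suits expensive black-box simulators: every function evaluation is a single flowsheet run.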

  8. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  9. Panorama parking assistant system with improved particle swarm optimization method

    Science.gov (United States)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket, together with a practical improved particle swarm optimization method (IPSO), is proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters and needs only one reference image to complete the entire optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.

  10. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
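
    The iterative water-filling algorithms mentioned at the end of the abstract repeat, per transmitter, the classic single-user water-filling step. A minimal sketch of that step follows; the channel gains and power budget are made-up numbers.

```python
import math

def water_filling(gains, p_total, tol=1e-10):
    """Single-user water-filling sketch: allocate p_i = max(0, mu - 1/g_i)
    so that sum(p_i) == p_total, finding the water level mu by bisection.
    This is the building block that iterative water-filling repeats."""
    def total(mu):
        return sum(max(0.0, mu - 1.0 / g) for g in gains)
    lo, hi = 0.0, p_total + max(1.0 / g for g in gains)  # hi surely overshoots
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > p_total:
            hi = mid
        else:
            lo = mid
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]

# usage: three parallel channels with illustrative gains and a power budget of 3
gains = [2.0, 1.0, 0.25]
alloc = water_filling(gains, 3.0)
capacity = sum(math.log2(1.0 + g * p) for g, p in zip(gains, alloc))
```

    Note how the weakest channel (gain 0.25) receives no power: its inverse gain sits above the water level.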

  11. Optimal control methods for rapidly time-varying Hamiltonians

    International Nuclear Information System (INIS)

    Motzoi, F.; Merkel, S. T.; Wilhelm, F. K.; Gambetta, J. M.

    2011-01-01

    In this article, we develop a numerical method to find optimal control pulses that accounts for the separation of timescales between the variation of the input control fields and the applied Hamiltonian. In traditional numerical optimization methods, these timescales are treated as being the same. While this approximation has had much success, in applications where the input controls are filtered substantially or mixed with a fast carrier, the resulting optimized pulses have little relation to the applied physical fields. Our technique remains numerically efficient in that the dimension of our search space is only dependent on the variation of the input control fields, while our simulation of the quantum evolution is accurate on the timescale of the fast variation in the applied Hamiltonian.

  12. Control and Optimization Methods for Electric Smart Grids

    CERN Document Server

    Ilić, Marija

    2012-01-01

    Control and Optimization Methods for Electric Smart Grids brings together leading experts in power, control and communication systems, and consolidates some of the most promising recent research in smart grid modeling, control and optimization in hopes of laying the foundation for future advances in this critical field of study. The contents comprise eighteen essays addressing a wide variety of control-theoretic problems for tomorrow’s power grid. Topics covered include: Control architectures for power system networks with large-scale penetration of renewable energy and plug-in vehicles Optimal demand response New modeling methods for electricity markets Control strategies for data centers Cyber-security Wide-area monitoring and control using synchronized phasor measurements. The authors present theoretical results supported by illustrative examples and practical case studies, making the material comprehensible to a wide audience. The results reflect the exponential transformation that today’s grid is going...

  13. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency and lower workloads by reducing operator errors and enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, which are referred to as Out-of-the-Loop (OOTL) effects, and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation introduction that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. In order to propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted in order to derive the shortest working time through considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  14. The Design and Optimization of GaAs Single Solar Cells Using the Genetic Algorithm and Silvaco ATLAS

    Directory of Open Access Journals (Sweden)

    Kamal Attari

    2017-01-01

    Full Text Available Single-junction solar cells are the most available in the market and the simplest in terms of realization and fabrication compared to other solar devices. However, these single-junction solar cells need more development and optimization for higher conversion efficiency. In addition to the doping densities and compromises between different layers and their best thickness values, the choice of materials is also an important factor in improving the efficiency. In this paper, an efficient single-junction solar cell model of GaAs is presented and optimized. In the first step, an initial model was simulated and then the results were processed by an algorithm code. In this work, the proposed optimization method is a genetic search algorithm implemented in Matlab receiving ATLAS data to generate an optimum output power solar cell. Other performance parameters such as photogeneration rates, external quantum efficiency (EQE), and internal quantum efficiency (IQE) are also obtained. The simulation shows that the proposed method provides a significant conversion efficiency of 29.7% under AM1.5G illumination. The other results were Jsc = 34.79 mA/cm2, Voc = 1 V, and fill factor (FF) = 85%.
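
    The genetic search loop described above can be sketched generically. This is not the paper's Matlab/ATLAS coupling: the toy fitness function below stands in for the device simulator, and the operators (tournament selection, blend crossover, uniform mutation) are common textbook choices, not necessarily the authors'.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, n_gen=80, p_mut=0.2, seed=1):
    """Tiny real-coded genetic algorithm sketch: tournament selection,
    blend crossover, uniform mutation, with one-elite survival."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        def pick():  # tournament of 2, higher fitness wins
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            pa, pb = pick(), pick()
            w = rng.random()
            child = [w * u + (1 - w) * v for u, v in zip(pa, pb)]  # blend crossover
            if rng.random() < p_mut:
                j = rng.randrange(dim)
                child[j] = rng.uniform(*bounds[j])  # uniform mutation of one gene
            children.append(child)
        children[0] = list(max(pop, key=fitness))  # elitism: keep the old best
        pop = children
    best = max(pop, key=fitness)
    return best, fitness(best)

# toy "efficiency" surface with a single peak at (0.3, 0.7), purely illustrative
eff = lambda x: 30.0 - 50.0 * ((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)
x_best, f_best = genetic_search(eff, [(0.0, 1.0), (0.0, 1.0)])
```

    In the paper's setting, each `fitness` call would be a full ATLAS device simulation, which is why keeping the population and generation counts small matters.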

  15. Incorporating single detector failure into the ROP detector layout optimization for CANDU reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kastanya, Doddy, E-mail: Doddy.Kastanya@snclavalin.com

    2015-12-15

    Highlights: • ROP TSP value needs to be adjusted when any detector in the system fails. • Single detector failure criterion has been incorporated into the detector layout optimization as a constraint. • Results show that the optimized detector layout is more robust with respect to its vulnerability to a single detector failure. • An early rejection scheme has been introduced to speed up the optimization process. - Abstract: In CANDU® reactors, the regional overpower protection (ROP) systems are designed to protect the reactor against overpower in the fuel which could reduce the safety margin-to-dryout. In the CANDU® 600 MW (CANDU 6) design, there are two ROP systems in the core, each of which is connected to a fast-acting shutdown system. Each ROP system consists of a number of fast-responding, self-powered flux detectors suitably distributed throughout the core within vertical and horizontal flux detector assemblies. The placement of these ROP detectors is a challenging discrete optimization problem. In the past few years, two algorithms, DETPLASA and ADORE, have been developed to optimize the detector layout for the ROP systems in CANDU reactors. These algorithms utilize the simulated annealing (SA) technique to optimize the placement of the detectors in the core. The objective of the optimization process is typically either to maximize the TSP value for a given number of detectors in the system or to minimize the number of detectors in the system to obtain a target TSP value. One measure to determine the robustness of the optimized detector layout is to evaluate the maximum decrease (penalty) in TSP value when any single detector in the system fails. The smaller the penalty, the more robust the design is. Therefore, in order to ensure that the optimized detector layout is robust, the single detector failure (SDF) criterion has been incorporated as an additional constraint into the ADORE algorithm. Results from this study indicate that there
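
    The simulated-annealing skeleton behind such placement searches is compact. The sketch below is only in the spirit of DETPLASA/ADORE: the objective (spread detectors by maximizing minimum pairwise distance) and the 6x6 candidate grid are illustrative stand-ins for the real ROP trip-set-point calculation, and no single-detector-failure constraint is modeled.

```python
import math
import random

def sa_detector_layout(candidates, k, n_iter=4000, t0=1.0, cooling=0.999, seed=3):
    """Simulated-annealing sketch for discrete detector placement: choose k
    sites from `candidates` (points in the plane) to maximize the minimum
    pairwise distance. A move swaps one chosen site for an unused one."""
    rng = random.Random(seed)

    def score(sel):
        pts = [candidates[i] for i in sel]
        return min(
            math.hypot(a[0] - b[0], a[1] - b[1])
            for idx, a in enumerate(pts)
            for b in pts[idx + 1:]
        )

    chosen = set(rng.sample(range(len(candidates)), k))
    s_cur = score(chosen)
    best, s_best, t = set(chosen), s_cur, t0
    for _ in range(n_iter):
        out = rng.choice(sorted(chosen))
        into = rng.choice([i for i in range(len(candidates)) if i not in chosen])
        trial = (chosen - {out}) | {into}
        s_new = score(trial)
        # accept improvements always, worse moves with Boltzmann probability
        if s_new >= s_cur or rng.random() < math.exp((s_new - s_cur) / t):
            chosen, s_cur = trial, s_new
            if s_cur > s_best:
                best, s_best = set(chosen), s_cur
        t *= cooling
    return best, s_best

# usage: place 5 detectors on a 6x6 grid of candidate sites
grid = [(x, y) for x in range(6) for y in range(6)]
layout, spread = sa_detector_layout(grid, 5)
```

    An early-rejection scheme like the one highlighted above would slot in just before the full `score(trial)` evaluation, discarding moves that a cheap bound already shows to be hopeless.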

  16. Clustering methods for the optimization of atomic cluster structure

    Science.gov (United States)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired from the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.

  17. Response surface method to optimize the low cost medium for ...

    African Journals Online (AJOL)

    A protease producing Bacillus sp. GA CAS10 was isolated from ascidian Phallusia arabica, Tuticorin, Southeast coast of India. Response surface methodology was employed for the optimization of different nutritional and physical factors for the production of protease. Plackett-Burman method was applied to identify ...

  18. Optimization Methods in Operations Research and Systems Analysis

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 6. Optimization Methods in Operations Research and Systems Analysis. V G Tikekar. Book Review Volume 2 Issue 6 June 1997 pp 91-92. Fulltext. Click here to view fulltext PDF. Permanent link:

  19. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.

  20. Method: a single nucleotide polymorphism genotyping method for Wheat streak mosaic virus.

    Science.gov (United States)

    Rogers, Stephanie M; Payton, Mark; Allen, Robert W; Melcher, Ulrich; Carver, Jesse; Fletcher, Jacqueline

    2012-05-17

    The September 11, 2001 attacks on the World Trade Center and the Pentagon increased the concern about the potential for terrorist attacks on many vulnerable sectors of the US, including agriculture. The concentrated nature of crops, easily obtainable biological agents, and highly detrimental impacts make agroterrorism a potential threat. Although procedures for an effective criminal investigation and attribution following such an attack are available, important enhancements are still needed, one of which is the capability for fine discrimination among pathogen strains. The purpose of this study was to develop a molecular typing assay for use in a forensic investigation, using Wheat streak mosaic virus (WSMV) as a model plant virus. This genotyping technique utilizes single base primer extension to generate a genetic fingerprint. Fifteen single nucleotide polymorphisms (SNPs) within the coat protein and helper component-protease genes were selected as the genetic markers for this assay. Assay optimization and sensitivity testing was conducted using synthetic targets. WSMV strains and field isolates were collected from regions around the world and used to evaluate the assay for discrimination. The assay specificity was tested against a panel of near-neighbors consisting of genetic and environmental near-neighbors. Each WSMV strain or field isolate tested produced a unique SNP fingerprint, with the exception of three isolates collected within the same geographic location that produced indistinguishable fingerprints. The results were consistent among replicates, demonstrating the reproducibility of the assay. No SNP fingerprints were generated from organisms included in the near-neighbor panel, suggesting the assay is specific for WSMV. Using synthetic targets, a complete profile could be generated from as low as 7.15 fmoles of cDNA. The molecular typing method presented is one tool that could be incorporated into the forensic science tool box after a thorough

  1. An express method for optimally tuning an analog controller with respect to integral quality criteria

    Science.gov (United States)

    Golinko, I. M.; Kovrigo, Yu. M.; Kubrak, A. I.

    2014-03-01

    An express method for optimally tuning analog PI and PID controllers is considered. An integral quality criterion that also minimizes the control output is proposed for optimizing control systems. The suggested criterion differs from existing ones in that the control output applied to the technological process is taken into account correctly, which makes it possible to minimize the expenditure of material and/or energy resources in controlling industrial equipment. With control organized in such a manner, smaller wear and longer service life of control devices are achieved. The unimodal nature of the proposed criterion for optimally tuning a controller is numerically demonstrated using the methods of optimization theory. A functional interrelation between the optimal controller parameters and the dynamic properties of a controlled plant is numerically determined for a single-loop control system. The results obtained from simulation of transients in a control system carried out using the proposed and existing functional dependences are compared with each other. The proposed calculation formulas differ from the existing ones by their simple structure and highly accurate search for the optimal controller tuning parameters. The obtained calculation formulas are recommended for use by specialists in automation for the design and optimization of control systems.
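
    The kind of criterion described, a tracking-error integral plus a charge on the control output, is easy to evaluate numerically. The plant model, weights, and grid below are illustrative assumptions, not the paper's formulas.

```python
def simulate_pi(kp, ki, k_plant=2.0, tau=5.0, dt=0.01, t_end=30.0, rho=0.05):
    """Closed-loop unit-step response of a first-order plant
    dy/dt = (K*u - y)/tau under a PI controller, returning an integral
    criterion that charges both tracking error and control output:
    J = sum (e^2 + rho*u^2) * dt. The rho*u^2 term captures the
    'expenditure on control output' idea from the abstract."""
    y, integ, j = 0.0, 0.0, 0.0
    r = 1.0  # unit step setpoint
    for _ in range(int(t_end / dt)):
        e = r - y
        integ += e * dt
        u = kp * e + ki * integ
        j += (e * e + rho * u * u) * dt
        y += (k_plant * u - y) / tau * dt  # explicit Euler plant update
    return j

# crude grid search for the best (Kp, Ki) under this criterion
best = min(
    ((kp, ki) for kp in [0.5, 1.0, 2.0, 4.0] for ki in [0.05, 0.1, 0.2, 0.4]),
    key=lambda g: simulate_pi(*g),
)
j_best = simulate_pi(*best)
```

    The paper replaces such brute-force search with closed-form tuning formulas fitted to the plant dynamics; the grid here only illustrates what those formulas optimize.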

  2. An Optimal Calibration Method for a MEMS Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Bin Fang

    2014-02-01

    Full Text Available An optimal calibration method for a micro-electro-mechanical inertial measurement unit (MIMU) is presented in this paper. The accuracy of the MIMU is highly dependent on calibration to remove the deterministic (systematic) errors; the measurements also contain random errors. The overlapping Allan variance is applied to characterize the types of random error terms in the measurements. A calibration model that includes package misalignment error, sensor-to-sensor misalignment error, bias, and scale factor is built. The new concept of a calibration method, which includes a calibration scheme and a calibration algorithm, is proposed. The calibration scheme is designed by the D-optimal method and the calibration algorithm is derived using a Kalman filter. In addition, thermal calibration is investigated, as the bias and scale factor vary with temperature. The simulations and real tests verify the effectiveness of the proposed calibration method and show that it is better than the traditional method.
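
    The overlapping Allan variance used for the noise characterization step has a standard estimator, sketched below; this is a generic implementation, not the paper's code, and the white-noise example data are synthetic.

```python
import math
import random

def overlapping_allan_deviation(rate, dt, m):
    """Overlapping Allan deviation of a rate signal at cluster size m
    (averaging time tau = m*dt), using the standard estimator
      AVAR(tau) = sum_i (theta[i+2m] - 2*theta[i+m] + theta[i])^2
                  / (2 * tau^2 * (N - 2m)),
    where theta is the integrated signal."""
    theta = [0.0]
    for v in rate:  # integrate the rate to get the "angle" record
        theta.append(theta[-1] + v * dt)
    n = len(theta)
    if n < 2 * m + 1:
        raise ValueError("record too short for this cluster size")
    tau = m * dt
    acc = 0.0
    for i in range(n - 2 * m):
        d = theta[i + 2 * m] - 2.0 * theta[i + m] + theta[i]
        acc += d * d
    return math.sqrt(acc / (2.0 * tau * tau * (n - 2 * m)))

# usage: white rate noise shows the characteristic -1/2 log-log slope,
# i.e. ADEV(m) ~ sigma / sqrt(m)
rng = random.Random(7)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]
adev1 = overlapping_allan_deviation(noise, 1.0, 1)
adev10 = overlapping_allan_deviation(noise, 1.0, 10)
```

    Reading the slopes of ADEV versus tau on a log-log plot is how the different random error terms (angle random walk, bias instability, rate random walk) are told apart.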

  3. Robust optimization methods for cardiac sparing in tangential breast IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)

    2015-05-15

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the

  4. Resampling: An optimization method for inverse planning in robotic radiosurgery

    International Nuclear Information System (INIS)

    Schweikard, Achim; Schlaefer, Alexander; Adler, John R. Jr.

    2006-01-01

    By design, the range of beam directions in conventional radiosurgery are constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in one single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger compared with typical optimization. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency

  5. Exergetic optimization of a thermoacoustic engine using the particle swarm optimization method

    International Nuclear Information System (INIS)

    Chaitou, Hussein; Nika, Philippe

    2012-01-01

    Highlights: ► Optimization of a thermoacoustic engine using the particle swarm optimization method. ► Exergetic efficiency, acoustic power and their product are the optimized functions. ► PSO method is used successfully for the first time in the TA research. ► The powerful PSO tool is advised to be more involved in the TA research and design. ► EE times AP optimized function is highly recommended to design any new TA devices. - Abstract: Thermoacoustic engines convert heat energy into acoustic energy. Then, the acoustic energy can be used to pump heat or to generate electricity. It is well-known that the acoustic energy and therefore the exergetic efficiency depend on parameters such as the stack’s hydraulic radius, the stack’s position in the resonator and the traveling–standing-wave ratio. In this paper, these three parameters are investigated in order to study and analyze the best value of the produced acoustic energy, the exergetic efficiency and the product of the acoustic energy by the exergetic efficiency of a thermoacoustic engine with a parallel-plate stack. The dimensionless expressions of the thermoacoustic equations are derived and calculated. Then, the Particle Swarm Optimization method (PSO) is introduced and used for the first time in the thermoacoustic research. The use of the PSO method and the optimization of the acoustic energy multiplied by the exergetic efficiency are novel contributions to this domain of research. This paper discusses some significant conclusions which are useful for the design of new thermoacoustic engines.
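
    The PSO algorithm itself, in its standard global-best inertia-weight form, is short. In this sketch the thermoacoustic model is replaced by a toy product-of-two-factors objective, echoing the "efficiency times power" idea; the peak location and all coefficients are made up.

```python
import random

def pso_maximize(f, bounds, n_particles=25, n_iter=150, w=0.7, c1=1.5, c2=1.5, seed=5):
    """Plain global-best particle swarm optimization sketch: maximize f
    over box bounds using the standard inertia-weight velocity update."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_f = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi > pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), fi
                if fi > gbest_f:
                    gbest, gbest_f = list(pos[i]), fi
    return gbest, gbest_f

# toy stand-in for "exergetic efficiency times acoustic power":
# a product of two humps peaking at (0.25, 0.5)
obj = lambda x: (1.0 - (x[0] - 0.25) ** 2) * (1.0 - (x[1] - 0.5) ** 2)
x_star, f_star = pso_maximize(obj, [(0.0, 1.0), (0.0, 1.0)])
```

    Optimizing the product rather than either factor alone is exactly the trade-off argument the highlights make for the EE times AP criterion.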

  6. Superalloy design - A Monte Carlo constrained optimization method

    CSIR Research Space (South Africa)

    Stander, CM

    1996-01-01

    Full Text Available optimization method C. M. Stander, Division of Materials Science and Technology, CSIR, PO Box 395, Pretoria, Republic of South Africa. Received 14 March 1996; accepted 24 June 1996. A method, based on Monte Carlo constrained... successful hit, i.e. when L_low < LMP < L_high and, for all the properties, P_j,low < P_j < P_j,high. If successful, this hit falls within the ROA. Repeat steps 4 and 5 to find at least ten (or more) successful hits with values...
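
    The "successful hit" logic in the fragment above, sampling candidates at random and counting only those whose merit value and every constrained property land inside their acceptance windows (the region of acceptability, ROA), can be sketched as follows. The merit and property functions and all window values here are invented for illustration.

```python
import random

def monte_carlo_roa(n_trials=20000, seed=11):
    """Monte Carlo constrained search sketch: sample candidate designs,
    compute a merit value and a constrained property, and record a
    'successful hit' only when both fall inside their windows (the ROA)."""
    rng = random.Random(seed)
    merit_window = (0.8, 1.2)   # acceptance window for the merit value
    prop_window = (-0.5, 0.5)   # acceptance window for the property
    hits = []
    for _ in range(n_trials):
        x = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        merit = 1.0 + 0.5 * (x[0] - x[1])  # stand-in for an LMP-type merit index
        prop = x[0] * x[1]                 # stand-in for a constrained property
        if (merit_window[0] < merit < merit_window[1]
                and prop_window[0] < prop < prop_window[1]):
            hits.append((x, merit))
    return hits

hits = monte_carlo_roa()
```

    Collecting "at least ten (or more)" hits, as the fragment prescribes, then lets one map out the ROA rather than settle for a single feasible point.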

  7. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    Directory of Open Access Journals (Sweden)

    Mihaela STET

    2016-12-01

    Full Text Available The paper deals with the problem of logistics costs, highlighting methods for the estimation and determination of the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  8. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    OpenAIRE

    Mihaela STET

    2016-01-01

    The paper deals with the problem of logistics costs, highlighting methods for the estimation and determination of the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  9. Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Full Text Available This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfy the descent condition g_k^T d_k ≤ −(1 − 1/(4θ_k))‖g_k‖² (with θ_k > 1/4) and which are strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization.
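
The guaranteed descent condition above can be checked numerically. The sketch below (plain Python) runs a conjugate gradient method of the Hager–Zhang guaranteed-descent family (the θ_k = 2 member) with exact line search on a small quadratic and verifies g_k^T d_k ≤ −(1 − 1/(4θ_k))‖g_k‖² = −(7/8)‖g_k‖² at every step. The quadratic and its data are illustrative assumptions, not taken from the paper:

```python
def grad(Q, b, x):
    """Gradient of f(x) = 0.5 x^T Q x - b^T x."""
    n = len(x)
    return [sum(Q[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

def hz_direction(g_new, g_old, d_old):
    """Hager-Zhang CG direction, the theta_k = 2 member of the guaranteed-descent family."""
    y = [gn - go for gn, go in zip(g_new, g_old)]
    dy = dot(d_old, y)
    beta = (dot(y, g_new) - 2.0 * dot(y, y) * dot(d_old, g_new) / dy) / dy
    return [-gn + beta * dd for gn, dd in zip(g_new, d_old)]

# Small symmetric positive definite quadratic (illustrative data).
Q = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]]
b = [1.0, -2.0, 0.5]
x = [0.0, 0.0, 0.0]
g = grad(Q, b, x)
d = [-gi for gi in g]            # first direction: steepest descent
descent_ok = True
for _ in range(20):
    Qd = [sum(Q[i][j] * d[j] for j in range(3)) for i in range(3)]
    alpha = -dot(g, d) / dot(d, Qd)          # exact line search on the quadratic
    x = [xi + alpha * di for xi, di in zip(x, d)]
    g_new = grad(Q, b, x)
    if dot(g_new, g_new) < 1e-16:            # converged
        g = g_new
        break
    d = hz_direction(g_new, g, d)
    # Guaranteed descent with theta_k = 2:  g^T d <= -(1 - 1/(4*2)) * ||g||^2
    descent_ok = descent_ok and dot(g_new, d) <= -(7.0 / 8.0) * dot(g_new, g_new) + 1e-12
    g = g_new
```

With exact line search the condition holds with room to spare; the point of the family in the paper is that it holds for *any* line search satisfying the weak Wolfe conditions.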

  10. Hybrid robust predictive optimization method of power system dispatch

    Science.gov (United States)

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including, without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, and electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule the different assets in order to achieve global optimization and maintain normal system operation.

  11. A Rapid Aeroelasticity Optimization Method Based on the Stiffness characteristics

    OpenAIRE

    Yuan, Zhe; Huo, Shihui; Ren, Jianting

    2018-01-01

    A rapid aeroelasticity optimization method based on stiffness characteristics was proposed in the present study. It addresses the large computational expense of static aeroelasticity analysis based on the traditional time-domain aeroelasticity method. Elastic axis location and torsional stiffness are discussed first. Both the torsional stiffness and the distance between the stiffness center and the aerodynamic center have a direct impact on the divergence velocity. The divergence velocity can be adjusted by changing the cor...

  12. Optimization of the crystallizability of a single-chain antibody fragment

    Czech Academy of Sciences Publication Activity Database

    Škerlová, Jana; Král, Vlastimil; Fábry, Milan; Sedláček, Juraj; Veverka, Václav; Řezáčová, Pavlína

    2014-01-01

    Roč. 70, č. 12 (2014), s. 1701-1706 ISSN 1744-3091 R&D Projects: GA MŠk(CZ) LK11205 Institutional support: RVO:61388963 ; RVO:68378050 Keywords : single-chain antibody fragment * Thermofluor assay * differential scanning fluorimetry * crystallizability optimization * oligomerization * crystallization Subject RIV: CE - Biochemistry Impact factor: 0.527, year: 2014

  13. Development of two dimensional electrophoresis method using single chain DNA

    International Nuclear Information System (INIS)

    Ikeda, Junichi; Hidaka, So

    1998-01-01

    By combining a separation method based on molecular weight with a method for distinguishing single-base differences, the aim was to develop two-dimensional electrophoresis of single-chain DNA labeled with a radioisotope (RI). From the differences in the electrophoretic patterns of parent and variant strands, isolation of the root module implantation control gene was investigated. First, a Single Strand Conformation Polymorphism (SSCP) method using a concentration gradient gel was investigated. As a result, it was found that the intervals between double-chain and single-chain DNAs expanded, but the intervals between the single-chain DNAs themselves did not. Next, a combination of the non-modified acrylamide electrophoresis method and the Denaturing Gradient-Gel Electrophoresis (DGGE) method was examined. As a result, hybrid DNA developed by two-dimensional electrophoresis was arranged on two lines, but among them no band of DNA modified by a high concentration of urea could be found. Therefore, in this fiscal year's experiments, no favorable result could be obtained; with the method used, it was concluded that the differences could not be detected. (G.K.)

  14. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single-personal-computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. We thus obtain a prototype Manufacturing Grid application system running on a single personal computer, and can carry out experiments on this foundation. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method keeps their advantages, such as low cost and simple operation, and easily yields trustworthy experimental results. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.

  15. Autonomous guided vehicles methods and models for optimal path planning

    CERN Document Server

    Fazlollahtabar, Hamed

    2015-01-01

      This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...

  16. Two optimal control methods for PWR core control

    International Nuclear Information System (INIS)

    Karppinen, J.; Blomsnes, B.; Versluis, R.M.

    1976-01-01

    The Multistage Mathematical Programming (MMP) and State Variable Feedback (SVF) methods for PWR core control are presented in this paper. The MMP method is primarily intended for optimization of the core behaviour with respect to xenon induced power distribution effects in load cycle operation. The SVF method is most suited for xenon oscillation damping in situations where the core load is unpredictable or expected to stay constant. Results from simulation studies in which the two methods have been applied for control of simple PWR core models are presented. (orig./RW) [de

  17. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-squares cost function and non-negative fluence constraints; the solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against a quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggest the ADMM solver achieves similar plan quality, with a slightly smaller total objective function value than IP. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is thus proposed for IMRT or VMAT. (paper)
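
The ADMM splitting described above (least-squares cost plus non-negativity constraint) can be sketched on a toy problem. Below is a minimal non-negative least-squares solver via ADMM in plain Python; the matrix, penalty parameter rho, and iteration count are illustrative assumptions, not the paper's FMO setup, where x would be the fluence map and A the dose-influence matrix:

```python
def solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def admm_nnls(A, b, rho=1.0, n_iter=300):
    """Minimize 0.5*||Ax - b||^2 subject to x >= 0 via ADMM (x/z splitting)."""
    m, n = len(A), len(A[0])
    At = [[A[i][j] for i in range(m)] for j in range(n)]
    AtA = [[sum(At[i][k] * A[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    Atb = [sum(At[i][k] * b[k] for k in range(m)) for i in range(n)]
    M = [[AtA[i][j] + (rho if i == j else 0.0) for j in range(n)] for i in range(n)]
    z = [0.0] * n
    u = [0.0] * n
    for _ in range(n_iter):
        rhs = [Atb[i] + rho * (z[i] - u[i]) for i in range(n)]
        x = solve(M, rhs)                              # x-update: quadratic subproblem
        z = [max(0.0, x[i] + u[i]) for i in range(n)]  # z-update: projection onto x >= 0
        u = [u[i] + x[i] - z[i] for i in range(n)]     # scaled dual update
    return z

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, -1.0, 0.5]
z = admm_nnls(A, b)   # constrained optimum is (0.75, 0)
```

The paper's empirical tuning of the ADMM parameter corresponds to choosing `rho` here; a poor choice slows convergence but does not change the fixed point.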

  18. An hp symplectic pseudospectral method for nonlinear optimal control

    Science.gov (United States)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponential convergence when the number of collocation points increases with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence when the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  19. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods fall into two groups: classical methods and artificial intelligence methods. The particle swarm optimization algorithm (PSO) is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameters by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
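
The cost function that any PSO-PID tuner must evaluate can be illustrated with a toy loop. The sketch below (plain Python) scores a PID gain set by the integral of absolute error (IAE) of a unity step response on an assumed first-order plant; the plant model, gains, and cost choice are illustrative stand-ins for the paper's hydraulic positioning system, and the DOE/PSO machinery itself is omitted:

```python
def step_response_cost(kp, ki, kd, dt=0.01, t_end=5.0, tau=0.5):
    """Integral of absolute error (IAE) of a unity step response for a PID
    controller around an assumed first-order plant  dy/dt = (u - y) / tau."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                    # setpoint is 1.0
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (u - y) / tau          # forward-Euler plant update
        cost += abs(err) * dt
    return cost

# A well-tuned gain set should score a lower IAE than a weak one:
c_tuned = step_response_cost(2.0, 1.0, 0.1)
c_weak = step_response_cost(0.2, 0.05, 0.0)
```

A PSO-PID tuner would simply hand `step_response_cost` (or a weighted rise-time/settling-time variant of it) to the swarm as the objective over (kp, ki, kd).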

  20. Optimized iterative decoding method for TPC coded CPM

    Science.gov (United States)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    The Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) system (TPC-CPM) has been widely used in aeronautical telemetry and satellite communication. This paper investigates improvements and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system to establish an iterative decoding scheme. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. The experiments show our method is effective in improving the convergence performance.

  1. Method of optimization of the natural gas refining process

    Energy Technology Data Exchange (ETDEWEB)

    Sadykh-Zade, E.S.; Bagirov, A.A.; Mardakhayev, I.M.; Razamat, M.S.; Tagiyev, V.G.

    1980-01-01

    The SATUM (automatic control system of technical operations) system introduced at the Shatlyk field should assure good quality of gas refining. In order to optimize the natural gas refining processes, an experimental-analytical method is used in compiling the mathematical descriptions. The program, compiled in the Fortran language, gives, in addition to the parameters of optimal conditions, information on the yield of condensate and water, the concentration and consumption of DEG, and the composition and characteristics of the gas and condensate. The algorithm for calculating optimum engineering conditions of gas refining is proposed for use in ''advice'' mode, and also for monitoring the progress of the gas refining process.

  2. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    International Nuclear Information System (INIS)

    Stassi, D.; Ma, H.; Schmidt, T. G.; Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D.

    2016-01-01

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three

  3. A Brief Introduction to Single-Molecule Fluorescence Methods.

    Science.gov (United States)

    van den Wildenberg, Siet M J L; Prevo, Bram; Peterman, Erwin J G

    2018-01-01

    One of the more popular single-molecule approaches in biological science is single-molecule fluorescence microscopy, which will be the subject of the following section of this volume. Fluorescence methods provide the sensitivity required to study biology on the single-molecule level, but they also allow access to useful measurable parameters on time and length scales relevant for the biomolecular world. Before several detailed experimental approaches will be addressed, we will first give a general overview of single-molecule fluorescence microscopy. We start with discussing the phenomenon of fluorescence in general and the history of single-molecule fluorescence microscopy. Next, we will review fluorescent probes in more detail and the equipment required to visualize them on the single-molecule level. We will end with a description of parameters measurable with such approaches, ranging from protein counting and tracking, single-molecule localization super-resolution microscopy, to distance measurements with Förster Resonance Energy Transfer and orientation measurements with fluorescence polarization.

  4. Kinoform design with an optimal-rotation-angle method.

    Science.gov (United States)

    Bengtsson, J

    1994-10-10

    Kinoforms (i.e., computer-generated phase holograms) are designed with a new algorithm, the optimal-rotation-angle method, in the paraxial domain. This is a direct Fourier method (i.e., no inverse transform is performed) in which the height of the kinoform relief at each discrete point is chosen so that the diffraction efficiency is increased. The optimal-rotation-angle algorithm has a straightforward geometrical interpretation. It yields excellent results close to, or better than, those obtained with other state-of-the-art methods. The optimal-rotation-angle algorithm can easily be modified to take different constraints into account; as an example, phase-swing-restricted kinoforms, which distribute the light into a number of equally bright spots (so-called fan-outs), were designed. The phase-swing restriction lowers the efficiency, but the uniformity can still be made almost perfect.

  5. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since an abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are implemented and tested within an in-house FDTD simulation environment.
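
The "slow source introduction" idea above can be sketched as a shaping function. Below is a raised-cosine (Hann) turn-on envelope for a sinusoidal soft source in plain Python; the envelope choice, frequency, and ramp length are illustrative assumptions, not the paper's optimized model:

```python
import math

def ramped_source(t, freq, t_ramp):
    """Sinusoidal source with a raised-cosine (Hann) turn-on over [0, t_ramp].

    Gradually introducing the source avoids the broadband transients that an
    abrupt turn-on would otherwise inject into the FDTD grid.
    """
    if t < 0.0:
        return 0.0
    # Smooth envelope rising from 0 to 1 over the ramp interval, then held at 1.
    envelope = 0.5 * (1.0 - math.cos(math.pi * t / t_ramp)) if t < t_ramp else 1.0
    return envelope * math.sin(2.0 * math.pi * freq * t)
```

The trade-off the paper optimizes is visible here directly: a longer `t_ramp` gives a narrower excitation spectrum but costs more time steps before steady state.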

  6. Imaging Live Cells at the Nanometer-Scale with Single-Molecule Microscopy: Obstacles and Achievements in Experiment Optimization for Microbiology

    Science.gov (United States)

    Haas, Beth L.; Matson, Jyl S.; DiRita, Victor J.; Biteen, Julie S.

    2015-01-01

    Single-molecule fluorescence microscopy enables biological investigations inside living cells to achieve millisecond- and nanometer-scale resolution. Although single-molecule-based methods are becoming increasingly accessible to non-experts, optimizing new single-molecule experiments can be challenging, in particular when super-resolution imaging and tracking are applied to live cells. In this review, we summarize common obstacles to live-cell single-molecule microscopy and describe the methods we have developed and applied to overcome these challenges in live bacteria. We examine the choice of fluorophore and labeling scheme, approaches to achieving single-molecule levels of fluorescence, considerations for maintaining cell viability, and strategies for detecting single-molecule signals in the presence of noise and sample drift. We also discuss methods for analyzing single-molecule trajectories and the challenges presented by the finite size of a bacterial cell and the curvature of the bacterial membrane. PMID:25123183

  7. Optimal treatment cost allocation methods in pollution control

    International Nuclear Information System (INIS)

    Chen Wenying; Fang Dong; Xue Dazhi

    1999-01-01

    Total emission control is an effective pollution control strategy. However, Chinese application of total emission control lacks reasonable and fair methods for optimal treatment cost allocation, a critical issue in total emission control. The author considers four approaches to allocating treatment costs. The first approach is to set up a multiple-objective planning model and to solve the model using the shortest-distance ideal point method. The second approach is to define a degree of satisfaction with the cost allocation results for each polluter and to establish a method based on this concept. The third is to apply bargaining and arbitration theory to develop a model. The fourth is to establish a cooperative N-person game model which can be solved using the Shapley value method, the core method, the Cost Gap Allocation method or the Minimum Costs-Remaining Savings method. These approaches are compared using a practical case study
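
The fourth approach can be made concrete with a tiny example. Below is a brute-force Shapley value computation for a cost-sharing game in plain Python; the two-polluter cost figures are invented purely for illustration:

```python
from itertools import permutations

def shapley_values(players, cost):
    """Shapley value of a cost game: each player's cost share is its marginal
    cost averaged over all orders in which the grand coalition can form."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            # Marginal cost of p joining the coalition formed so far.
            values[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in values.items()}

# Invented example: polluters 1 and 2 would pay 10 and 8 for separate
# treatment, while a shared plant costs 14.
costs = {frozenset(): 0.0, frozenset({1}): 10.0,
         frozenset({2}): 8.0, frozenset({1, 2}): 14.0}
shares = shapley_values([1, 2], lambda s: costs[s])   # {1: 8.0, 2: 6.0}
```

The shares sum to the joint cost of 14, and each polluter pays less than going alone, which is why the cooperative-game allocations are considered fair in this context. Brute force over permutations is exponential; it is fine for the handful of polluters in a typical case study.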

  8. Time-explicit methods for joint economical and geological risk mitigation in production optimization

    DEFF Research Database (Denmark)

    Christiansen, Lasse Hjuler; Capolei, Andrea; Jørgensen, John Bagterp

    2016-01-01

    Real-life applications of production optimization face challenges of risks related to unpredictable fluctuations in oil prices and sparse geological data. Consequently, operating companies are reluctant to adopt model-based production optimization into their operations. Conventional production...... of mitigating economical and geological risks. As opposed to conventional strategies that focus on a single long-term objective, TE methods seek to reduce risks and promote returns over the entire reservoir life by optimization of a given ensemble-based geological risk measure over time. By explicit involvement...... of time, economical risks are implicitly addressed by balancing short-term and long-term objectives throughout the reservoir life. Open-loop simulations of a two-phase synthetic reservoir demonstrate that TE methods may significantly improve short-term risk measures such as expected return, standard...

  9. Optimizing sonication parameters for dispersion of single-walled carbon nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Haibo [Fraunhofer Institute for Electronic Nano Systems (Fraunhofer ENAS), 09126 Chemnitz (Germany); Graduate University of the Chinese Academy of Sciences, Beijing (China); State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang (China); Hermann, Sascha, E-mail: sascha.hermann@zfm.tu-chemnitz.de [Center for Microtechnologies (ZfM), Chemnitz University of Technology, 09126 Chemnitz (Germany); Schulz, Stefan E.; Gessner, Thomas [Fraunhofer Institute for Electronic Nano Systems (Fraunhofer ENAS), 09126 Chemnitz (Germany); Center for Microtechnologies (ZfM), Chemnitz University of Technology, 09126 Chemnitz (Germany); Dong, Zaili [State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang (China); Li, Wen J., E-mail: wenjungli@gmail.com [State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, 110016 Shenyang (China); Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong SAR (China)

    2012-10-26

    Graphical abstract: We study the dispersing behavior of SWCNTs based on the surfactant and the optimization of sonication parameters, including the sonication power and running time. Highlights: ► We study the optimization of sonication for the surfactant-based dispersion of SWCNTs. ► The absorption spectrum of the SWCNT solution strongly depends on the sonication conditions. ► The sonication process has an important influence on the average length and diameter of SWCNTs in solution. ► Centrifugation mainly contributes to the decrease of the nonresonant absorption background. ► Under the same sonication parameters, the large-diameter tip disperses SWCNTs better than the small-diameter tip. -- Abstract: Non-covalent functionalization based on surfactants has become one of the most common methods for dispersing single-walled carbon nanotubes (SWCNTs). Previously, efforts have mainly been focused on experimenting with different surfactant systems, varying their concentrations and solvents. However, sonication plays a very important role during the surfactant-based dispersion process for SWCNTs: the treatment enables the surfactant molecules to adsorb onto the surface of SWCNTs by overcoming the interactions induced by hydrophobic, electrostatic and van der Waals forces. This work describes a systematic study of the influence of the sonication power and time on the dispersion of SWCNTs. UV-vis-NIR absorption spectra are used to analyze and evaluate the dispersion of SWCNTs in an aqueous solution of 1 w/v% sodium deoxycholate (DOC), showing that the resonant and nonresonant background absorption strongly depend on the sonication conditions. Furthermore, the diameter and length of SWCNTs under different sonication parameters are investigated using atomic force microscopy (AFM).

  10. A method for optimizing the performance of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Frank

    2006-07-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building, such as its shape, the amount and type of windows used, and the amount of insulation used in the building envelope. The parties who influence design decisions for buildings, such as building owners, building users, architects, consulting engineers, contractors, etc., often have different and to some extent conflicting requirements to buildings. For instance, the building owner may be more concerned about the cost of constructing the building, rather than the quality of the indoor climate, which is more likely to be a concern of the building user. In order to support the different types of requirements made by decision-makers for buildings, an optimization problem is formulated, intended for representing a wide range of design decision problems for buildings. The problem formulation involves so-called performance measures, which can be calculated with simulation software for buildings. For instance, the annual amount of energy required by the building, the cost of constructing the building, and the annual number of hours where overheating occurs, can be used as performance measures. The optimization problem enables the decision-makers to specify many different requirements to the decision variables, as well as to the performance of the building. 
Performance measures can for instance be required to assume their minimum or maximum value, they can be subjected to upper or

  11. First-principle optimal local pseudopotentials construction via optimized effective potential method

    International Nuclear Information System (INIS)

    Mi, Wenhui; Zhang, Shoutao; Wang, Yanchao; Ma, Yanming; Miao, Maosheng

    2016-01-01

    The local pseudopotential (LPP) is an important component of orbital-free density functional theory, a promising large-scale simulation method that can maintain information on a material's electron state. The LPP is usually extracted from solid-state density functional theory calculations, making it difficult to assess its transferability to cases involving very different chemical environments. Here, we reveal a fundamental relation between the first-principles norm-conserving pseudopotential (NCPP) and the LPP. On the basis of this relationship, we demonstrate that the LPP can be constructed optimally from the NCPP for a large number of elements using the optimized effective potential method. Notably, our method provides a unified scheme for constructing and assessing the LPP within the framework of first-principles pseudopotentials. Our practice reveals that the existence of a valid LPP with high transferability may strongly depend on the element.

  12. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2015-01-01

    Full Text Available One recently proposed heuristic evolutionary algorithm is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on the Powell local optimization method, called PGWO. The PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; hence, PGWO can be applied to solving clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions, and then used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the results on the benchmark functions and on data clustering demonstrate the superior performance of the PGWO algorithm.
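
The GWO core that PGWO extends can be sketched compactly. Below is a plain-Python grey wolf optimizer run on a toy sphere function; the Powell local-search polish that distinguishes PGWO is omitted, and the population size and iteration budget are illustrative assumptions:

```python
import random

def gwo_minimize(f, bounds, n_wolves=20, n_iter=300, seed=3):
    """Core grey wolf optimizer: every wolf moves toward the three current best
    solutions (alpha, beta, delta), with an exploration factor decaying 2 -> 0."""
    rng = random.Random(seed)
    dim = len(bounds)
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_wolves)]
    best, best_val = None, float("inf")
    for it in range(n_iter):
        wolves.sort(key=f)
        if f(wolves[0]) < best_val:                 # keep the best solution seen so far
            best, best_val = wolves[0][:], f(wolves[0])
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - it / n_iter)               # linearly decreasing coefficient
        new_wolves = []
        for X in wolves:
            pos = []
            for d in range(dim):
                estimates = []
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a  # |A| > 1 explores, |A| < 1 exploits
                    C = 2.0 * rng.random()
                    D = abs(C * leader[d] - X[d])   # encircling distance to the leader
                    estimates.append(leader[d] - A * D)
                nd = sum(estimates) / 3.0           # average pull of the three leaders
                pos.append(min(max(nd, bounds[d][0]), bounds[d][1]))
            new_wolves.append(pos)
        wolves = new_wolves
    return best, best_val

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = gwo_minimize(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
```

PGWO would periodically refine the alpha wolf with Powell's derivative-free line searches; for clustering, `x` would concatenate the cluster centroids and `f` would be the within-cluster sum of distances.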

  13. Optimization of a single stage inverter with one cycle control for photovoltaic power generation

    Energy Technology Data Exchange (ETDEWEB)

    Egiziano, L.; Femia, N.; Granozio, D.; Petrone, G.; Spagnuolo, G. [Salerno Univ., Salerno (Italy); Vitelli, M. [Seconda Univ. di Napoli, Napoli (Italy)

    2006-07-01

    An optimized one-cycle control (OCC) for maximum power point tracking and power factor correction in grid-connected photovoltaic (PV) applications was described. OCC is a nonlinear control technique that rejects line perturbations and allows both output power factor correction and maximum power point tracking of the input PV field. An OCC system was analyzed in order to select optimal design parameters. Parameters were refined through the selection of suitable design constraints, and a stochastic search was then performed. Criteria were then developed to distinguish appropriate design parameters for the optimized OCC. The optimization was based on advanced heuristic techniques for non-linear constrained optimization, and performance indices were calculated for each feasible set of parameters. A customized perturb and observe control was then applied to the single-stage inverter. Results of the optimization process were validated by a series of time-domain simulations conducted under heavily varying irradiance conditions. The simulations showed that the optimized controllers improved performance in terms of power drawn from the PV field. 7 refs., 1 tab., 5 figs.

  14. Optimal interpolation method for intercomparison of atmospheric measurements.

    Science.gov (United States)

    Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno

    2006-04-01

    Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.
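The core operation behind such comparisons can be sketched as follows: before differencing two retrievals, the better-resolved profile is mapped through the coarser instrument's averaging kernels, so both profiles carry the same vertical smoothing. The two-row kernel and the profile values below are toy numbers; the paper's actual optimal interpolation rule is more elaborate.

```python
def smooth_to_coarse(kernel_rows, fine_profile):
    """Apply the coarse retrieval's averaging kernels (one row per coarse
    level) to a finely resolved profile, mimicking its vertical smoothing."""
    return [sum(a * x for a, x in zip(row, fine_profile)) for row in kernel_rows]

# Toy example: each coarse level is a boxcar average of two fine levels.
kernels = [[0.5, 0.5, 0.0, 0.0],
           [0.0, 0.0, 0.5, 0.5]]
fine = [1.0, 3.0, 5.0, 7.0]                    # hypothetical fine-grid profile
coarse_view = smooth_to_coarse(kernels, fine)  # profile as the coarse instrument sees it
```

The intercomparison then differences `coarse_view` against the coarse retrieval directly, so the smoothing error cancels instead of contaminating the comparison.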

  15. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    International Nuclear Information System (INIS)

    Levine, S.H.; Ivanov, K.; Feltus, M.

    1996-01-01

    The objective of automatic optimized reloading methods is to provide the nuclear engineer with an efficient method for reloading a nuclear reactor that results in superior core configurations minimizing fuel costs. Previous methods developed by Levine et al. required a large effort to develop the initial core loading using a priority loading scheme; subsequent modifications to this core configuration were made using expert rules to produce the final core design. This technique has been improved by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently. (authors)

  16. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Levine, S H; Ivanov, K; Feltus, M [Pennsylvania State Univ., University Park, PA (United States)

    1996-12-01

    The objective of automatic optimized reloading methods is to provide the nuclear engineer with an efficient method for reloading a nuclear reactor that results in superior core configurations minimizing fuel costs. Previous methods developed by Levine et al. required a large effort to develop the initial core loading using a priority loading scheme; subsequent modifications to this core configuration were made using expert rules to produce the final core design. This technique has been improved by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently. (authors)

  17. Mechanical design optimization of a single-axis MOEMS accelerometer based on a grating interferometry cavity for ultrahigh sensitivity

    Science.gov (United States)

    Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan; Yang, Guoguang

    2016-08-01

    The ultrahigh static displacement-acceleration sensitivity of the mechanical sensing chip is essential for an ultrasensitive accelerometer. In this paper, an optimal design applied to a single-axis MOEMS accelerometer consisting of a grating interferometry cavity and a micromachined sensing chip is presented. The micromachined sensing chip is composed of a proof mass along with its mechanical cantilever suspension and substrate. The dimensional parameters of the sensing chip, including the length, width, thickness and position of the cantilevers, are evaluated and optimized both analytically and by finite-element-method (FEM) simulation to yield an unprecedented acceleration-displacement sensitivity. Compared with one of the most sensitive single-axis MOEMS accelerometers reported in the literature, the optimal mechanical design yields a substantial sensitivity improvement within an equal footprint area: specifically, a 200% improvement in displacement-acceleration sensitivity with moderate resonant frequency and dynamic range. The modified design was microfabricated, packaged with the grating interferometry cavity and tested. The experimental results demonstrate that the MOEMS accelerometer with the modified design achieves an acceleration-displacement sensitivity of about 150 μm/g and an acceleration sensitivity of greater than 1500 V/g, which validates the effectiveness of the optimal design.

  18. METHOD FOR MANUFACTURING A SINGLE CRYSTAL NANO-WIRE

    NARCIS (Netherlands)

    Van Den Berg, Albert; Bomer, Johan; Carlen Edwin, Thomas; Chen, Songyue; Kraaijenhagen Roderik, Adriaan; Pinedo Herbert, Michael

    2012-01-01

    A method for manufacturing a single crystal nano-structure includes providing a device layer with a <100> structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the <110> direction of the device layer; selectively removing parts of the stress layer to

  19. METHOD FOR MANUFACTURING A SINGLE CRYSTAL NANO-WIRE.

    NARCIS (Netherlands)

    Van Den Berg, Albert; Bomer, Johan; Carlen Edwin, Thomas; Chen, Songyue; Kraaijenhagen Roderik, Adriaan; Pinedo Herbert, Michael

    2011-01-01

    A method for manufacturing a single crystal nano-structure is provided comprising the steps of providing a device layer with a <100> structure on a substrate; providing a stress layer onto the device layer; patterning the stress layer along the <110> direction of the device layer; selectively removing

  20. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can optionally be specified. A similar problem consists in fitting component models: the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
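The annealing loop described above can be sketched minimally, here applied to a toy component-fitting cost (matching an exponential decay), since the thesis's SPICE-coupled circuit objective is not reproducible here. The model, bounds and schedule constants are all illustrative.

```python
import math
import random

def anneal(cost, x0, lo, hi, t0=1.0, t_min=1e-4, alpha=0.95, moves=50, seed=7):
    """Simulated annealing with a geometric cooling schedule; the Gaussian
    move size shrinks along with the temperature."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(moves):
            d = rng.randrange(len(x))
            trial = list(x)
            trial[d] = min(hi, max(lo, trial[d] + rng.gauss(0.0, t)))
            ft = cost(trial)
            # accept improvements always, deteriorations with Boltzmann odds
            if ft < fx or rng.random() < math.exp(-(ft - fx) / t):
                x, fx = trial, ft
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= alpha
    return best, fbest

# Toy model-fitting cost: match "measured" data y = 2*exp(-0.5*t)
samples = [(t / 10.0, 2.0 * math.exp(-0.5 * t / 10.0)) for t in range(20)]

def fit_error(p):
    a, b = p
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in samples)

params, err = anneal(fit_error, [1.0, 1.0], 0.0, 5.0)
```

Tying the move size to the temperature gives the coarse-then-fine search behavior the thesis attributes to its variable-discretization strategy, in a much simplified form.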

  1. A method of combined single-cell electrophysiology and electroporation.

    Science.gov (United States)

    Graham, Lyle J; Del Abajo, Ricardo; Gener, Thomas; Fernandez, Eduardo

    2007-02-15

    This paper describes a method of extracellular recording and subsequent electroporation with the same electrode in single retinal ganglion cells in vitro. We demonstrate anatomical identification of neurons whose receptive fields were measured quantitatively. We discuss how this simple method should also be applicable for the delivery of a variety of intracellular agents, including gene delivery, to physiologically characterized neurons, both in vitro and in vivo.

  2. Application of Taguchi methods to dual mixture ratio propulsion system optimization for SSTO vehicles

    Science.gov (United States)

    Stanley, Douglas O.; Unal, Resit; Joyner, C. R.

    1992-01-01

    The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies is presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of the parameters with each other. This parametric optimization and sensitivity study employs a Taguchi design method, an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies, as compared to traditional single-variable parametric trade studies, are also discussed.
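The mechanics of a Taguchi study can be sketched with a small orthogonal array: each run's responses are condensed into a signal-to-noise ratio, and main-effect averages pick the preferred level of each factor. The L4 array below is the standard one for three 2-level factors; the responses are made-up numbers, not the SSTO study's data.

```python
import math

# L4(2^3) orthogonal array: 4 runs cover the main effects of three 2-level factors
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def sn_larger_is_better(ys):
    """Taguchi S/N ratio for a 'larger is better' response."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def main_effects(array, sn_values):
    """Average S/N at each level of each factor."""
    effects = []
    for f in range(len(array[0])):
        by_level = {0: [], 1: []}
        for row, s in zip(array, sn_values):
            by_level[row[f]].append(s)
        effects.append(tuple(sum(v) / len(v) for _, v in sorted(by_level.items())))
    return effects

# Made-up responses where factor 0 dominates (runs 3 and 4 perform best)
responses = [[10.0], [12.0], [20.0], [22.0]]
sn = [sn_larger_is_better(y) for y in responses]
effects = main_effects(L4, sn)
best_levels = [0 if e0 > e1 else 1 for e0, e1 in effects]
```

Four runs replace the eight of a full factorial; this is exactly the configuration-count saving the abstract credits to orthogonal matrices, at the cost of ignoring interactions.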

  3. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue, and in the head and neck region can be challenging for the planning system optimizer because of the complexity of the treatment and protected volumes, as well as the pronounced heterogeneity corrections involved. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lesion in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor caused by the effect of the lung tissue surrounding it. We demonstrate a novel iterative method of dose correction performed on the initial IMRT plan to produce a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum dose on the PTV from 120% on the representative IMRT plan to 106%.

  4. Design of large Francis turbine using optimal methods

    Science.gov (United States)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 to 113.6 m - under erection). Many new projects are under study to equip new power plants with Francis turbines in order to meet an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using state-of-the-art computation methods and the latest technologies in model testing, as well as maximum feedback from jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage makes it possible to reach very high levels of performance, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  5. Design of large Francis turbine using optimal methods

    International Nuclear Information System (INIS)

    Flores, E; Bornard, L; Tomas, L; Couston, M; Liu, J

    2012-01-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 to 113.6 m - under erection). Many new projects are under study to equip new power plants with Francis turbines in order to meet an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using state-of-the-art computation methods and the latest technologies in model testing, as well as maximum feedback from jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage makes it possible to reach very high levels of performance, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  6. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization ("Big Bang - Big Crunch", "Fireworks Algorithm", "Grenade Explosion Method") for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior: parameter values are obtained by minimizing a criterion that describes the total squared deviation of the state vector coordinates from the precisely observed values at different instants of time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used here do not guarantee the optimum, but allow a solution of rather good quality to be obtained in acceptable time. An algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two parameter estimation problems with different mathematical models are considered: in the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each example, calculation results from all three optimization methods are given, together with recommendations on how to choose the method parameters. The obtained numerical results demonstrate the efficiency of the proposed approach: the estimated parameter points differ only slightly from the best known solutions, which were obtained by other means. To refine the results, one should apply hybrid schemes that combine classical zero-, first- and second-order optimization methods

  7. Designing a hand rest tremor dynamic vibration absorber using H2 optimization method

    Energy Technology Data Exchange (ETDEWEB)

    Rahnavard, Mostafa; Dizaji, Ahmad F. [Tehran University, Tehran (Iran, Islamic Republic of); Hashemi, Mojtaba [Amirkabir University, Tehran (Iran, Islamic Republic of); Faramand, Farzam [Sharif University, Tehran (Iran, Islamic Republic of)

    2014-05-15

    An optimal single-DOF dynamic absorber is presented. A tremor has a random nature, so the system is subjected to a random excitation instead of a sinusoidal one; the H2 optimization criterion is therefore probably more desirable than the popular H∞ optimization method, and it was implemented in this research. The objective of the H2 optimization criterion is to reduce the total vibration energy of the system over all frequencies. An objective function was selected with suppression of the elbow joint angle θ2 tremor as the main goal, and the optimization was done by minimizing this objective function. The performance of the optimal system, including the absorber, was analyzed in both time and frequency domains. Implementing the optimal absorber, the frequency response amplitude of θ2 was reduced by more than 98% and 80% at the first and second natural frequencies of the primary system, respectively. A reduction of more than 94% and 78% was observed for the shoulder joint angle θ1. The objective function also decreased by more than 46%. Two types of random inputs were then considered: for the first type, θ1 and θ2 showed 60% and 39% reductions in their rms values, whereas for the second type, 33% and 50% decreases were observed.

  8. Designing a hand rest tremor dynamic vibration absorber using H2 optimization method

    International Nuclear Information System (INIS)

    Rahnavard, Mostafa; Dizaji, Ahmad F.; Hashemi, Mojtaba; Faramand, Farzam

    2014-01-01

    An optimal single-DOF dynamic absorber is presented. A tremor has a random nature, so the system is subjected to a random excitation instead of a sinusoidal one; the H2 optimization criterion is therefore probably more desirable than the popular H∞ optimization method, and it was implemented in this research. The objective of the H2 optimization criterion is to reduce the total vibration energy of the system over all frequencies. An objective function was selected with suppression of the elbow joint angle θ2 tremor as the main goal, and the optimization was done by minimizing this objective function. The performance of the optimal system, including the absorber, was analyzed in both time and frequency domains. Implementing the optimal absorber, the frequency response amplitude of θ2 was reduced by more than 98% and 80% at the first and second natural frequencies of the primary system, respectively. A reduction of more than 94% and 78% was observed for the shoulder joint angle θ1. The objective function also decreased by more than 46%. Two types of random inputs were then considered: for the first type, θ1 and θ2 showed 60% and 39% reductions in their rms values, whereas for the second type, 33% and 50% decreases were observed.

  9. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2015-01-01

    Full Text Available Unsupervised data classification (or clustering analysis) is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity; it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications; each algorithm has its own advantages, limitations, and deficiencies. Hence, research into novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, inspired by the natural biogeographic distribution of different species, was adapted for data clustering problems by modifying its main operators. Like other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, comparing it with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.

  10. Utilization of niching methods of genetic algorithms in nuclear reactor problems optimization

    International Nuclear Information System (INIS)

    Sacco, Wagner Figueiredo; Schirru, Roberto

    2000-01-01

    Genetic Algorithms (GAs) are biologically motivated adaptive systems which have been used, with good results, in function optimization. However, traditional GAs rapidly push an artificial population toward convergence; that is, all individuals in the population soon become nearly identical. Niching methods allow genetic algorithms to maintain a population of diverse individuals, and GAs that incorporate these methods are capable of locating multiple optimal solutions within a single population. The purpose of this study is to test existing niching techniques and two methods introduced herein, bearing in mind their eventual application to nuclear reactor problems, especially the nuclear reactor core reload problem, which has multiple solutions. Tests are performed using widely known test functions, and their results show that the new methods are quite promising, especially for real-world problems like nuclear reactor core reload. (author)
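Fitness sharing, one of the classic niching techniques this line of work builds on, can be sketched as follows: each individual's raw fitness is divided by a niche count, so members of a crowded peak are derated and diversity survives selection. The one-dimensional individuals and parameter values below are illustrative, not the paper's operators.

```python
def shared_fitness(pop, raw_fitness, sigma=0.1, alpha=1.0):
    """Derate each individual's fitness by its niche count: the sum of a
    triangular sharing function over all individuals within distance sigma."""
    shared = []
    for xi in pop:
        niche = 0.0
        for xj in pop:
            d = abs(xi - xj)
            if d < sigma:
                niche += 1.0 - (d / sigma) ** alpha
        shared.append(raw_fitness(xi) / niche)
    return shared

# Three individuals crowd one peak; one sits alone on another of equal height.
pop = [0.00, 0.01, 0.02, 1.00]
fit = shared_fitness(pop, lambda x: 1.0)
```

Although all four individuals have identical raw fitness, the isolated one keeps a shared fitness of 1.0 while the crowded ones are derated, so selection pressure redistributes the population across both peaks rather than collapsing onto one.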

  11. Optimized Method for Knee Displacement Measurement in Vehicle Sled Crash Test

    Directory of Open Access Journals (Sweden)

    Sun Hang

    2017-01-01

    Full Text Available This paper provides an optimized method for measuring a dummy's knee displacement in a vehicle sled crash test. The proposed method utilizes completely new measurement inputs, namely the acceleration and angular velocity of the dummy's pelvis and the rotational angle of its femur. Compared with the traditional measurement using only camera-based high-speed motion image analysis, the optimized method not only maintains the measuring accuracy, but also avoids the disturbances caused by dummy movement, dashboard occlusion and knee deformation during the crash. An experiment verifies the accuracy of the proposed method, which eliminates the traditional method's strong dependence on tracing a single target. Moreover, it is well suited to calculating the penetration depth into the dashboard.

  12. Optimized assembly and covalent coupling of single-molecule DNA origami nanoarrays.

    Science.gov (United States)

    Gopinath, Ashwin; Rothemund, Paul W K

    2014-12-23

    Artificial DNA nanostructures, such as DNA origami, have great potential as templates for the bottom-up fabrication of both biological and nonbiological nanodevices at a resolution unachievable by conventional top-down approaches. However, because origami are synthesized in solution, origami-templated devices cannot easily be studied or integrated into larger on-chip architectures. Electrostatic self-assembly of origami onto lithographically defined binding sites on Si/SiO2 substrates has been achieved, but conditions for optimal assembly have not been characterized, and the method requires high Mg2+ concentrations at which most devices aggregate. We present a quantitative study of parameters affecting origami placement, reproducibly achieving single-origami binding at 94±4% of sites, with 90% of these origami having an orientation within ±10° of their target orientation. Further, we introduce two techniques for converting electrostatic DNA-surface bonds to covalent bonds, allowing origami arrays to be used under a wide variety of Mg2+-free solution conditions.

  13. Design and optimization for the occupant restraint system of vehicle based on a single freedom model

    Science.gov (United States)

    Zhang, Junyuan; Ma, Yue; Chen, Chao; Zhang, Yan

    2013-05-01

    Throughout a vehicle crash event, the interactions between vehicle, occupant and restraint system (VOR) are complicated and highly non-linear. CAE and physical tests are the most widely used tools in vehicle passive safety development, but they can only be applied once a detailed 3D model or physical samples exist; design errors and imperfections are then difficult to correct, and a large amount of time is needed. This paper proposes a restraint system concept design approach based on a single-degree-of-freedom occupant-vehicle model (SDOF). The interactions between the restraint system parameters and the occupant responses in a crash are studied from the viewpoint of mechanics and energy. Discrete input and an iterative algorithm are applied to the SDOF model in MATLAB to obtain the occupant responses quickly for arbitrary excitations (impact pulses). By studying the relationships between ridedown efficiency, restraint stiffness and occupant response, a design principle for the restraint stiffness, aimed at reducing the occupant injury level during conceptual design, is presented. Higher ridedown efficiency means more occupant energy absorbed by the vehicle, but the results show that higher ridedown efficiency does not mean a lower occupant injury level. A proper restraint system design principle depends on two aspects: on one hand, the restraint system should lead to as high a ridedown efficiency as possible; on the other, it should make maximum use of the survival space to reduce the occupant deceleration level. As an example, a passenger vehicle restraint system is optimized using the concept design method above, and the final results are validated by MADYMO, the software most widely used in restraint system design, and by a sled test. Consequently, a guideline and method for occupant restraint system concept design is established in this paper.
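The SDOF idea above can be sketched numerically: an occupant mass coupled to the vehicle by a linear restraint, integrated under a half-sine deceleration pulse. This is a generic textbook-style model, not the paper's MATLAB implementation; the mass, stiffness and pulse numbers are purely illustrative.

```python
import math

def occupant_response(m=75.0, k=40e3, peak_decel=300.0, pulse_dur=0.1,
                      dt=1e-5, t_end=0.25):
    """Integrate a single-DOF occupant coupled to the vehicle by a linear
    restraint of stiffness k under a half-sine vehicle deceleration pulse.
    Returns peak relative displacement and peak occupant deceleration."""
    x, v = 0.0, 0.0            # occupant state relative to the vehicle
    peak_disp = peak_acc = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        a_veh = peak_decel * math.sin(math.pi * t / pulse_dur) if t < pulse_dur else 0.0
        a_rel = a_veh - (k / m) * x    # restraint force opposes relative motion
        v += a_rel * dt                # semi-implicit Euler step
        x += v * dt
        peak_disp = max(peak_disp, x)
        peak_acc = max(peak_acc, (k / m) * abs(x))
    return peak_disp, peak_acc

d_soft, a_soft = occupant_response(k=40e3)
d_stiff, a_stiff = occupant_response(k=80e3)
```

Even this toy model reproduces the trade-off the abstract describes: the stiffer restraint reduces forward excursion (using less survival space) but raises the occupant's peak deceleration, which is why high ridedown efficiency alone does not guarantee a lower injury level.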

  14. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive algorithms (Capon, MUSIC, and Johnson) and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. Results are presented from simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.

  15. Optimization of cooling tower performance analysis using Taguchi method

    Directory of Open Access Journals (Sweden)

    Ramkumar Ramakrishnan

    2013-01-01

    Full Text Available This study discusses the application of the Taguchi method in assessing maximum cooling tower effectiveness for a counter-flow cooling tower using expanded wire mesh packing. The experiments were planned based on Taguchi's L27 orthogonal array. The trials were performed under different inlet conditions of water flow rate, air flow rate and water temperature. Signal-to-noise ratio (S/N) analysis, analysis of variance (ANOVA) and regression were carried out in order to determine the effects of the process parameters on cooling tower effectiveness and to identify optimal factor settings. Finally, confirmation tests verified the reliability of the Taguchi method for optimizing counter-flow cooling tower performance with sufficient accuracy.

  16. A multidimensional pseudospectral method for optimal control of quantum ensembles

    International Nuclear Information System (INIS)

    Ruths, Justin; Li, Jr-Shin

    2011-01-01

    In our previous work, we have shown that the pseudospectral method is an effective and flexible computation scheme for deriving pulses for optimal control of quantum systems. In practice, however, quantum systems often exhibit variation in the parameters that characterize the system dynamics. This leads us to consider the control of an ensemble (or continuum) of quantum systems indexed by the system parameters that show variation. We cast the design of pulses as an optimal ensemble control problem and demonstrate a multidimensional pseudospectral method with several challenging examples of both closed and open quantum systems from nuclear magnetic resonance spectroscopy in liquid. We give particular attention to the ability to derive experimentally viable pulses of minimum energy or duration.
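The backbone of any pseudospectral scheme is a collocation differentiation matrix that turns the system dynamics into algebraic constraints at the collocation nodes. A minimal sketch of the standard Chebyshev construction follows (this is the generic textbook matrix, not the paper's multidimensional ensemble formulation); it is verified by differentiating a polynomial exactly.

```python
import math

def cheb(N):
    """Chebyshev points on [-1, 1] and the first-derivative collocation
    matrix; exact for polynomials of degree <= N."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        # "negative sum trick" for the diagonal improves numerical accuracy
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D

x, D = cheb(8)
f = [xi ** 2 for xi in x]                      # sample f(x) = x^2 at the nodes
df = [sum(D[i][j] * f[j] for j in range(9)) for i in range(9)]
err = max(abs(d - 2 * xi) for d, xi in zip(df, x))   # compare with f'(x) = 2x
```

In an optimal control setting, the state trajectory is represented by its node values and the constraint x'(t) = g(x, u) becomes D @ x = g evaluated at the nodes, which is what makes pulse design tractable as a finite-dimensional optimization.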

  17. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics…, possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may pose this ability. In order to evaluate if the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used… energy technologies. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary…

  18. Experimental methods for the analysis of optimization algorithms

    CERN Document Server

    Bartz-Beielstein, Thomas; Paquete, Luis; Preuss, Mike

    2010-01-01

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on diffe

  19. Methods of Choosing an Optimal Portfolio of Projects

    OpenAIRE

    Yakovlev, A.; Chernenko, M.

    2016-01-01

    This paper presents an analysis of existing methods for a portfolio of project optimization. The necessity for their improvement is shown. It is suggested to assess the portfolio of projects on the basis of the amount in the difference between the results and costs during development and implementation of selected projects and the losses caused by non-implementation or delayed implementation of projects that were not included in the portfolio. Consideration of capital and current costs compon...

  20. A Spectral Conjugate Gradient Method for Unconstrained Optimization

    International Nuclear Information System (INIS)

    Birgin, E. G.; Martinez, J. M.

    2001-01-01

    A family of scaled conjugate gradient algorithms for large-scale unconstrained minimization is defined. The Perry, the Polak-Ribiere and the Fletcher-Reeves formulae are compared using a spectral scaling derived from Raydan's spectral gradient optimization method. The best combination of formula, scaling and initial choice of step-length is compared against well known algorithms using a classical set of problems. An additional comparison involving an ill-conditioned estimation problem in Optics is presented.
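
As a rough illustration of the idea (not the authors' implementation), the following sketch combines a Raydan-type spectral scaling with a Perry-type conjugate direction on a toy 2-D quadratic; the line search and all constants are chosen here purely for simplicity:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy strictly convex quadratic f(x) = 0.5 x^T A x - b^T x with A symmetric
# positive definite; the minimizer solves A x = b, here x* = [0.2, 0.4].
A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]

def grad(x):
    return [dot(row, x) - bi for row, bi in zip(A, b)]

x = [0.0, 0.0]
g = grad(x)
d = [-gi for gi in g]                               # first direction: steepest descent
for _ in range(100):
    if dot(g, g) < 1e-24:
        break
    if dot(g, d) >= 0:                              # safeguard: enforce descent
        d = [-gi for gi in g]
    Ad = [dot(row, d) for row in A]
    alpha = -dot(g, d) / dot(d, Ad)                 # exact line search (quadratic case)
    s = [alpha * di for di in d]
    x = [xi + si for xi, si in zip(x, s)]
    g_new = grad(x)
    y = [gn - gi for gn, gi in zip(g_new, g)]
    sy = dot(s, y)
    if sy <= 0.0:
        break
    theta = dot(s, s) / sy                          # spectral (Raydan-type) scaling
    beta = dot([theta * yi - si for yi, si in zip(y, s)], g_new) / sy  # Perry-type
    d = [-theta * gn + beta * si for gn, si in zip(g_new, s)]
    g = g_new

print(x)  # converges to the solution of A x = b, i.e. about [0.2, 0.4]
```

With exact line search on a quadratic, the Perry direction is parallel to the classical conjugate gradient direction, so the toy problem terminates in a couple of iterations.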

  1. An Optimized Method for Terrain Reconstruction Based on Descent Images

    Directory of Open Access Journals (Sweden)

    Xu Xinchao

    2016-02-01

    Full Text Available An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang’e III. First, feature matching is conducted using geometric model constraints. Then, the initial terrain is obtained and the initial normal vector of each point is solved on the basis of the initial terrain. By changing the vector around the initial normal vector in small steps, a set of new vectors is obtained. By combining these vectors with the directions of the light and the camera, functions are set up on the basis of a surface reflection model. Then, a series of gray values is derived by solving the equations. The new optimized vector is recorded when the obtained gray value is closest to that of the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using laboratory images and descent images of Chang’e III. The results showed that the performance of the proposed method was better than that of the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent moon exploration missions.

  2. A Global Network Alignment Method Using Discrete Particle Swarm Optimization.

    Science.gov (United States)

    Huang, Jiaxiang; Gong, Maoguo; Ma, Lijia

    2016-10-19

    Molecular interactions data increase exponentially with the advance of biotechnology. This makes it possible and necessary to comparatively analyse the different data at a network level. Global network alignment is an important network comparison approach to identify conserved subnetworks and gain insight into evolutionary relationships across species. Network alignment, which is analogous to subgraph isomorphism, is known to be an NP-hard problem. In this paper, we introduce a novel heuristic Particle-Swarm-Optimization based Network Aligner (PSONA), which optimizes a weighted global alignment model considering both protein sequence similarity and interaction conservation. The particle statuses and status updating rules are redefined in a discrete form by using permutations. A seed-and-extend strategy is employed to guide the search for the superior alignment. The proposed initialization method "seeds" matches with high sequence similarity into the alignment, which guarantees the functional coherence of the mapped nodes. A greedy local search method is designed as the "extension" procedure to iteratively optimize the edge conservation. PSONA is compared with several state-of-the-art methods on ten network pairs combining five species. The experimental results demonstrate that the proposed aligner can map the proteins with high functional coherence and can be used as a booster to effectively refine the well-studied aligners.
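
The permutation-based particle update can be illustrated schematically. This toy sketch (not PSONA itself) aligns two small identical networks, scoring node similarity plus conserved edges, and moves each particle toward the global best by swaps, with occasional random swaps for exploration; all parameter values are illustrative:

```python
import random

random.seed(1)

n = 4
edges1 = {(0, 1), (1, 2), (2, 3)}            # network 1 (a path graph)
edges2 = {(0, 1), (1, 2), (2, 3)}            # network 2 (identical here)
sim = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # node similarity

ALPHA = 0.5  # weight between node similarity and edge conservation

def fitness(perm):
    node = sum(sim[i][perm[i]] for i in range(n))
    edge = sum(1 for (u, v) in edges1
               if (perm[u], perm[v]) in edges2 or (perm[v], perm[u]) in edges2)
    return ALPHA * node + (1 - ALPHA) * edge

def move_toward(perm, target):
    """One swap that makes perm agree with target in one more position."""
    p = perm[:]
    for i in range(n):
        if p[i] != target[i]:
            j = p.index(target[i])
            p[i], p[j] = p[j], p[i]
            break
    return p

particles = [random.sample(range(n), n) for _ in range(8)]
gbest = max(particles, key=fitness)
init_fit = fitness(gbest)
for _ in range(40):
    for k, p in enumerate(particles):
        p = move_toward(p, gbest)           # attraction toward the global best
        if random.random() < 0.3:           # random swap: exploration
            i, j = random.sample(range(n), 2)
            p[i], p[j] = p[j], p[i]
        particles[k] = p
        if fitness(p) > fitness(gbest):
            gbest = p[:]

print(gbest, fitness(gbest))
```

In the actual aligner the "velocity" is a richer structure of swap sequences and the fitness uses protein sequence scores, but the permutation encoding and attraction-by-swap mechanics are of this general shape.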

  3. A seismic fault recognition method based on ant colony optimization

    Science.gov (United States)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important step in seismic interpretation, and although many methods exist for this task, none can recognize faults with complete accuracy. To address this problem, we propose a new fault recognition method based on ant colony optimization which can locate faults precisely and extract them from the seismic section. Firstly, seismic horizons are extracted by the connected component labeling algorithm; secondly, the fault locations are determined according to the horizontal endpoints of each horizon; thirdly, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each rectangular block are treated as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the positive section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. After that, the optimal route from nest to food calculated by the ant colony in each block is judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The availability and effectiveness of the proposed method were validated by the experimental results.
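
The nest-to-food routing idea can be sketched with a generic ant colony optimization on a tiny weighted graph (illustrative only; the paper's amplitude-as-terrain construction and block decomposition are not reproduced):

```python
import random

random.seed(0)

# Tiny directed graph: "nest" N to "food" F via two candidate routes.
graph = {
    "N": [("B", 1.0), ("C", 2.0)],
    "B": [("F", 1.0)],
    "C": [("F", 2.0)],
}
tau = {("N", "B"): 1.0, ("N", "C"): 1.0, ("B", "F"): 1.0, ("C", "F"): 1.0}
RHO, Q, BETA = 0.5, 1.0, 2.0                 # evaporation, deposit, heuristic weight

def walk():
    node, path, cost = "N", [], 0.0
    while node != "F":
        choices = graph[node]
        # Transition probability ~ pheromone * (1/edge cost)^BETA.
        weights = [tau[(node, nxt)] * (1.0 / w) ** BETA for nxt, w in choices]
        r = random.random() * sum(weights)
        acc = 0.0
        for (nxt, w), wt in zip(choices, weights):
            acc += wt
            if r <= acc:
                break
        path.append((node, nxt))
        cost += w
        node = nxt
    return path, cost

best_path, best_cost = None, float("inf")
for _ in range(30):                           # colony iterations
    tours = [walk() for _ in range(10)]       # ants per iteration
    for key in tau:                           # pheromone evaporation
        tau[key] *= (1.0 - RHO)
    for path, cost in tours:                  # deposit: shorter tours deposit more
        for edge in path:
            tau[edge] += Q / cost
        if cost < best_cost:
            best_path, best_cost = path, cost

print(best_cost)  # shortest nest-to-food route N->B->F has cost 2.0
```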

  4. One directional polarized neutron reflectometry with optimized reference layer method

    International Nuclear Information System (INIS)

    Masoudi, S. Farhad; Jahromi, Saeed S.

    2012-01-01

    In the past decade, several neutron reflectometry methods for determining the modulus and phase of the complex reflection coefficient of an unknown multilayer thin film have been worked out, among which the methods of variation of surroundings and of reference layers are of highest interest. These methods were later modified to measure the polarization of the reflected beam instead of the intensities. In their new architecture, these methods not only required a change of experimental setup but also faced an additional experimental difficulty, related to a limitation of neutron reflectometer technology: the polarization of the reflected neutrons can be measured only in the same direction as the polarization of the incident beam. As the instruments are limited, the theory has to be optimized so that the experiment can be performed. In a recent work, we developed the method of variation of surroundings for one-directional polarization analysis. In this new work, the method of reference layers with polarization analysis has been optimized to determine the phase and modulus of the unknown film by measuring the polarization of the reflected neutrons in the same direction as the polarization of the incident beam.

  5. Single-gene testing combined with single nucleotide polymorphism microarray preimplantation genetic diagnosis for aneuploidy: a novel approach in optimizing pregnancy outcome.

    Science.gov (United States)

    Brezina, Paul R; Benner, Andrew; Rechitsky, Svetlana; Kuliev, Anver; Pomerantseva, Ekaterina; Pauling, Dana; Kearns, William G

    2011-04-01

    To describe a method of amplifying DNA from blastocyst trophectoderm cells (two or three cells) and simultaneously performing 23-chromosome single nucleotide polymorphism microarrays and single-gene preimplantation genetic diagnosis. Case report. IVF clinic and preimplantation genetic diagnostic centers. A 36-year-old woman, gravida 2, para 1011, and her husband, who both were carriers of GM(1) gangliosidosis. The couple wished to proceed with microarray analysis for aneuploidy detection coupled with DNA sequencing for GM(1) gangliosidosis. An IVF cycle was performed. Ten blastocyst-stage embryos underwent trophectoderm biopsy. Twenty-three-chromosome microarray analysis for aneuploidy and specific DNA sequencing for GM(1) gangliosidosis mutations were performed. Viable pregnancy. After testing, elective single embryo transfer was performed, followed by an intrauterine pregnancy with documented fetal cardiac activity by ultrasound. Twenty-three-chromosome microarray analysis for aneuploidy detection and single-gene evaluation via specific DNA sequencing and linkage analysis are used for preimplantation diagnosis for single-gene disorders and aneuploidy. Because of the minimal amount of genetic material obtained from the day 3 to 5 embryos (up to 6 pg), these modalities have been used in isolation from each other. The use of preimplantation genetic diagnosis for aneuploidy coupled with testing for single-gene disorders via trophectoderm biopsy is a novel approach to maximize pregnancy outcomes. Although further investigation is warranted, preimplantation genetic diagnosis for aneuploidy and single-gene testing seem destined to be used increasingly to optimize ultimate pregnancy success. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  6. Single-mismatch 2LSB embedding method of steganography

    OpenAIRE

    Khalind, Omed; Aziz, Benjamin

    2013-01-01

    This paper proposes a new method of 2LSB embedding steganography in still images. The proposed method considers a single mismatch in each 2LSB embedding between the 2LSB of the pixel value and the 2-bits of the secret message, while the 2LSB replacement overwrites the 2LSB of the image’s pixel value with 2-bits of the secret message. The number of bit-changes needed for the proposed method is 0.375 bits from the 2LSBs of the cover image, and is much less than the 2LSB replacement which is 0.5...

  7. Real stabilization method for nuclear single-particle resonances

    International Nuclear Information System (INIS)

    Zhang Li; Zhou Shangui; Meng Jie; Zhao Enguang

    2008-01-01

    We develop the real stabilization method within the framework of the relativistic mean-field (RMF) model. With the self-consistent nuclear potentials from the RMF model, the real stabilization method is used to study single-particle resonant states in spherical nuclei. As examples, the energies, widths, and wave functions of low-lying neutron resonant states in 120Sn are obtained. These results are compared with those from the scattering phase-shift method and the analytic continuation in the coupling constant approach, and satisfactory agreement is found.

  8. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    Science.gov (United States)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations that arise with standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
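
For context, a minimal scalar model-reference adaptive control loop can be simulated. This is a standard textbook MRAC sketch, not the paper's optimal control modification; the damping term weighted by NU below is only a schematic stand-in for the modification idea, and all numerical values are illustrative:

```python
# Plant: dx/dt = a*x + u with unknown a; reference model: dxm/dt = am*xm + r.
A_TRUE, AM = 1.0, -2.0          # unstable plant, stable reference model
GAMMA, NU = 10.0, 0.05          # large adaptive gain; schematic damping weight
DT, STEPS = 0.001, 10000        # explicit Euler integration over 10 s

x = xm = theta = 0.0
r = 1.0                         # constant reference command
for _ in range(STEPS):
    u = theta * x + r                       # adaptive feedback plus command
    e = x - xm                              # tracking error
    # Lyapunov-based update; the NU term damps theta to limit oscillation.
    theta_dot = -GAMMA * (e * x + NU * x * x * theta / abs(AM))
    x += DT * (A_TRUE * x + u)
    xm += DT * (AM * xm + r)
    theta += DT * theta_dot

print(abs(x - xm))  # small residual tracking error
```

The ideal gain for these constants is theta = AM - A_TRUE = -3; with a large GAMMA the error collapses quickly, and the damping term trades a small steady-state error for reduced oscillation, which is the qualitative trade-off the abstract describes.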

  9. Self-seeded single-frequency laser peening method

    Science.gov (United States)

    Dane, C. Brent; Hackel, Lloyd A.; Harris, Fritz B.

    2012-06-26

    A method of operating a laser to obtain an output pulse having a single wavelength comprises inducing into a laser resonator an intracavity loss of an amount that prevents oscillation during the time that energy from the pump source is being stored in the gain medium. Gain is built up in the gain medium with energy from the pump source until formation of a single-frequency relaxation oscillation pulse in the resonator. Upon detection of the onset of the relaxation oscillation pulse, the intracavity loss is reduced, such as by Q-switching, so that the built-up gain stored in the gain medium is output from the resonator in the form of an output pulse at a single frequency. An electronically controllable output coupler is controlled to affect output pulse characteristics. The laser acts as a master oscillator in a master oscillator power amplifier configuration. The laser is used for laser peening.

  10. Comparison of optimization methods for electronic-structure calculations

    International Nuclear Information System (INIS)

    Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.

    1989-01-01

    The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed.

  11. A discrete optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1993-04-01

    Nuclear loading pattern elaboration can be seen as a combinatorial optimization problem of tremendous size with non-linear cost functions, and searches are always numerically expensive. After a brief introduction to the main aspects of nuclear fuel management, this paper presents a new idea for treating the combinatorial problem by using information contained in the gradient of a cost function. The method is to choose, by direct observation of the gradient, the most interesting changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method, and finally, connections with simulated annealing and genetic algorithms are described as an attempt to improve the search process.
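
The gradient-guided selection of discrete changes can be illustrated on a toy assignment problem (not a reactor model). For a cost that is linear in the assignment variables, the gradient is just the cost matrix itself, so the swap with the best first-order predicted improvement can be read off directly and applied until no swap helps:

```python
from itertools import permutations

# Toy "loading pattern": position i holds item perm[i]; the cost is linear in
# the assignment, so the gradient w.r.t. the assignment variables is c itself.
c = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
n = len(c)

def cost(perm):
    return sum(c[i][perm[i]] for i in range(n))

perm = list(range(n))                       # initial pattern
while True:
    best_delta, best_swap = 0, None
    for i in range(n):
        for j in range(i + 1, n):
            # Change predicted from the gradient (exact for a linear cost).
            delta = (c[i][perm[j]] + c[j][perm[i]]) - (c[i][perm[i]] + c[j][perm[j]])
            if delta < best_delta:
                best_delta, best_swap = delta, (i, j)
    if best_swap is None:                   # local optimum: no improving swap
        break
    i, j = best_swap
    perm[i], perm[j] = perm[j], perm[i]

brute = min(cost(list(p)) for p in permutations(range(n)))
print(cost(perm), brute)  # 5 5 for this matrix
```

Real loading-pattern costs are non-linear, so the gradient gives only a first-order estimate of each swap's effect, which is exactly why the paper pairs this guidance with annealing- or genetic-style search.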

  12. Method for depleting BWRs using optimal control rod patterns

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations

  13. Kernel method for clustering based on optimal target vector

    International Nuclear Information System (INIS)

    Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-01-01

    We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and anti-ferromagnetic and (ii) dependent on the whole data set and not only on pairs of samples. Couplings are determined by exploiting the notion of the optimal target vector, introduced here as a link between kernel supervised and unsupervised learning. The effectiveness of the method is shown in the case of the well-known iris data set and in benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering.

  14. Optimization in engineering sciences approximate and metaheuristic methods

    CERN Document Server

    Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader

    2014-01-01

    The purpose of this book is to present the main metaheuristics and approximate and stochastic methods for optimization of complex systems in Engineering Sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), which is funded by the EU's FP7 Research Potential program and has been developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references) this book allows the reader to explore various methods o

  15. Principles of crystallization, and methods of single crystal growth

    International Nuclear Information System (INIS)

    Chacra, T.

    2010-01-01

    Most single crystals (monocrystals) have distinguished optical, electrical, or magnetic properties, which make single crystals key elements in most modern technical devices: they may be used as lenses, prisms, or gratings in optical devices, as filters in X-ray and spectrographic devices, or as conductors and semiconductors in the electronics and computer industries. Furthermore, single crystals are used in transducer devices, and they are indispensable elements in laser and maser emission technology. Crystal growth technology (CGT) has started and developed in international universities and scientific institutions, aiming at single crystals which may have significant properties and industrial applications that can attract the attention of international crystal growth centers to adopt the industrial production and marketing of such crystals. Unfortunately, Arab universities generally, and Syrian universities specifically, give little attention to this field of science. The purpose of this work is to attract the attention of crystallographers, physicists, and chemists in the Arab universities and research centers to the importance of crystal growth, and to work, in the first stage, on establishing simple, uncomplicated laboratories for the growth of single crystals. Such laboratories can be supplied with equipment which is partly available or can be manufactured in the local market. Many references (articles, papers, diagrams, etc.) have been studied to extract the most important theoretical principles of phase transitions, especially of crystallization. The conclusions of this study are summarized in three principles: thermodynamic, morphologic, and kinetic. The study is completed by a brief description of the main single crystal growth methods, with sketches of the equipment used in each method, which can be considered as preliminary designs for the equipment of a new crystal growth laboratory. (author)

  16. A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.

    Science.gov (United States)

    Rong, Xing; Frey, Eric C

    2013-08-01

    Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different than is typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume-of-interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more…
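
The figure of merit described above can be illustrated numerically: a hypothetical comparison of two collimator designs using an inverse-mass-weighted RMSE over volume-of-interest activity estimates (all numbers invented for illustration):

```python
import math

# Per-VOI (bias, standard deviation) of activity estimates for two
# hypothetical collimator designs, plus VOI masses (arbitrary units).
masses = [20.0, 5.0]
designs = {
    "A": [(2.0, 3.0), (1.0, 2.0)],
    "B": [(1.0, 2.0), (3.0, 1.0)],
}

def weighted_rmse(stats):
    # RMSE combines bias and variance; inverse-mass weighting emphasizes
    # small structures, reflecting the dosimetry application.
    w = [1.0 / m for m in masses]
    num = sum(wi * (b * b + s * s) for wi, (b, s) in zip(w, stats))
    return math.sqrt(num / sum(w))

fom = {name: weighted_rmse(stats) for name, stats in designs.items()}
best = min(fom, key=fom.get)
print(best, fom)  # the design with the lower weighted RMSE wins
```

Here design B is better in the heavy VOI but much worse in the light one, and the inverse-mass weighting tips the choice to design A; a different weighting, as the abstract notes, could reverse it.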

  17. Detection of colonic polyps in the elderly: Optimization of the single-contrast barium enema examination

    International Nuclear Information System (INIS)

    Gelfand, D.W.; Chen, Y.M.; Ott, D.J.; Munitz, H.A.

    1986-01-01

    Single-contrast studies account for 75% of barium enema examinations and are often performed in the elderly. By optimizing all factors, the following results were obtained: for polyps of less than 1 cm, 40 of 57 were detected (sensitivity, 70.2%); for polyps of 1 cm or larger, 33 of 35 were detected (sensitivity, 94%). Overall, 73 of 92 polyps were detected (sensitivity, 79.3%). These sensitivities result from meticulous preparation and the use of compression filming, low-density barium, moderate kilovoltages, high-resolution screens, remote control apparatus, and high-bandpass TV fluoroscopy. The authors conclude that an optimal single-contrast barium enema examination detects colonic polyps with a sensitivity approaching that of the double-contrast study and may be employed in elderly patients who cannot undergo the double-contrast study.
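
The reported sensitivities follow directly from the detection counts:

```python
# Detection counts from the abstract: (detected, total) per polyp group.
detected = {"small": (40, 57), "large": (33, 35), "all": (73, 92)}
sens = {k: 100.0 * hit / total for k, (hit, total) in detected.items()}
print(sens)  # small ~70.2%, large ~94.3%, overall ~79.3%
```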

  18. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    Science.gov (United States)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
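
The residual-minimizing choice of the convergence control parameter can be illustrated on a much simpler problem than the Hasegawa-Mima equation. For the test ODE y' + y = 0 with y(0) = 1, a first-order homotopy-style approximation with auxiliary operator L = d/dt is y ≈ 1 + c·t, and the optimal c minimizes the integrated squared residual over [0, 1] (a textbook-style illustration, not the authors' computation):

```python
# First-order approximation y(t) ≈ 1 + c*t for y' + y = 0, y(0) = 1.
# Residual: R(t; c) = y' + y = c + 1 + c*t. Choose c to minimize the
# integrated squared residual over [0, 1] by a simple grid search.

def residual_sq_integral(c, n=400):
    # midpoint rule on [0, 1]
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        r = c + 1.0 + c * t
        total += r * r * h
    return total

c_opt = min((c / 1000.0 for c in range(-2000, 1)), key=residual_sq_integral)
print(c_opt)  # the analytic optimum for this toy problem is c = -9/14 ≈ -0.643
```

The same principle, applied over a family of auxiliary operators as well as over c, is what the two-stage optimal selection in the paper automates.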

  19. Optimization and modification of the method for detection of rhamnolipids

    Directory of Open Access Journals (Sweden)

    Takeshi Tabuchi

    2015-10-01

    Full Text Available The use of biosurfactants in bioremediation facilitates and accelerates the microbial degradation of hydrocarbons. The CTAB/MB agar method created by Siegmund & Wagner for screening rhamnolipid (RL) producing strains has been widely used but has not been improved significantly for more than 20 years. To optimize the technique as a quantitative method, CTAB/MB agar plates were made and different variables were tested, such as incubation time, cooling, CTAB concentration, methylene blue presence, well diameter, and inoculum volume. Furthermore, a new method for RL detection within halos was developed: precipitation of RL with HCl allows the formation of a new halo pattern that is easier to observe and to measure. This research reaffirms that this method is not fully suitable for fine quantitative analysis because of the difficulty of accurately correlating RL concentration with the area of the halos. RL diffusion does not seem to follow a simple behavior, and many factors affect the RL migration rate.

  20. Optimized optical clearing method for imaging central nervous system

    Science.gov (United States)

    Yu, Tingting; Qi, Yisong; Gong, Hui; Luo, Qingming; Zhu, Dan

    2015-03-01

    The development of various optical clearing methods provides great potential for imaging the entire central nervous system when combined with multiple-labelling and microscopic imaging techniques. These methods have made definite contributions to clearing, but each has its weaknesses, including tissue deformation, fluorescence quenching, complexity of execution, and limited antibody penetration, which makes immunostaining of tissue blocks difficult. The passive clarity technique (PACT) bypasses these problems and clears samples with simple implementation and excellent transparency with fine fluorescence retention, but the passive tissue clearing method takes too long. In this study, we not only accelerated the clearing speed of brain blocks but also preserved GFP fluorescence well by screening for an optimal clearing temperature. The selection of a proper temperature makes PACT more applicable, which evidently broadens the application range of this method.

  1. Analysis and optimization with ecological objective function of irreversible single resonance energy selective electron heat engines

    International Nuclear Information System (INIS)

    Zhou, Junle; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2016-01-01

    An ecological performance analysis of a single resonance ESE heat engine with heat leakage is conducted by applying finite-time thermodynamics. By introducing the Nielsen function and numerical calculations, expressions for power output, efficiency, entropy generation rate, and ecological objective function are derived; the relationships between ecological objective function and power output, between ecological objective function and efficiency, and between power output and efficiency are demonstrated; the influences of the system parameters of heat leakage, boundary energy, and resonance width on the optimal performances are investigated in detail; and a specific range of boundary energy is given as a compromise to make the ESE heat engine system work in optimal operation regions. Comparing performance characteristics under different optimization objective functions clarifies the significance of selecting the ecological objective function as the design objective: when changing the design objective from maximum power output to maximum ecological objective function, the improvement in efficiency is 4.56%, while the drop in power output is only 2.68%; when changing the design objective from maximum efficiency to maximum ecological objective function, the improvement in power output is 229.13%, and the efficiency drop is only 13.53%. - Highlights: • An irreversible single resonance energy selective electron heat engine is studied. • Heat leakage between two reservoirs is considered. • Power output, efficiency and ecological objective function are derived. • Optimal performance comparison for three objective functions is carried out.
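
The trade-off embodied by an ecological objective function can be illustrated with a generic endoreversible (Novikov-type) engine model rather than the paper's ESE system. Here E = P − T_c·σ, and the efficiency at maximum E lands between the efficiency at maximum power and the Carnot bound; the model and all numbers are purely illustrative:

```python
# Novikov engine: hot-side heat conductance K, reservoirs T_H > T_C.
# For working efficiency eta, heat intake q_h = K*(T_H - T_C/(1-eta)),
# power P = q_h*eta, entropy production sigma = q_h*(1-eta)/T_C - q_h/T_H.
T_H, T_C, K = 600.0, 300.0, 1.0
ETA_CARNOT = 1.0 - T_C / T_H

def metrics(eta):
    q_h = K * (T_H - T_C / (1.0 - eta))
    power = q_h * eta
    sigma = q_h * (1.0 - eta) / T_C - q_h / T_H
    return power, power - T_C * sigma       # (P, ecological function E)

grid = [i / 10000.0 for i in range(1, int(ETA_CARNOT * 10000))]
eta_max_p = max(grid, key=lambda e: metrics(e)[0])
eta_max_e = max(grid, key=lambda e: metrics(e)[1])
print(eta_max_p, eta_max_e, ETA_CARNOT)
```

Maximizing E sacrifices a little power for a larger gain in efficiency and a cut in entropy production, which is the same compromise the abstract quantifies for the ESE engine.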

  2. Optimal multi-photon phase sensing with a single interference fringe

    Science.gov (United States)

    Xiang, G. Y.; Hofmann, H. F.; Pryde, G. J.

    2013-01-01

    Quantum entanglement can help to increase the precision of optical phase measurements beyond the shot noise limit (SNL) to the ultimate Heisenberg limit. However, the N-photon parity measurements required to achieve this optimal sensitivity are extremely difficult to realize with current photon detection technologies, requiring high-fidelity resolution of N + 1 different photon distributions between the output ports. Recent experimental demonstrations of precision beyond the SNL have therefore used only one or two photon-number detection patterns instead of parity measurements. Here we investigate the achievable phase sensitivity of the simple and efficient single interference fringe detection technique. We show that the maximally entangled “NOON” state does not achieve optimal phase sensitivity when N > 4; rather, we show that the Holland-Burnett state is optimal. We experimentally demonstrate this enhanced sensitivity using a single photon-counted fringe of the six-photon Holland-Burnett state. Specifically, our single-fringe six-photon measurement achieves a phase variance three times below the SNL. PMID:24067490
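
The quoted benchmarks are easy to state numerically: for N photons the shot noise limit on phase variance scales as 1/N and the Heisenberg limit as 1/N², so a six-photon measurement three times below the SNL sits between the two (a simple arithmetic check of the reported figures):

```python
N = 6
snl = 1.0 / N                 # shot-noise-limited phase variance, 1/N
heisenberg = 1.0 / N ** 2     # Heisenberg-limited phase variance, 1/N^2
measured = snl / 3.0          # "three times below the SNL", as reported
print(snl, measured, heisenberg)  # 1/6, 1/18, 1/36
```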

  3. Validity of single-cycle objective functions for multicycle reload design optimization

    International Nuclear Information System (INIS)

    Kropaczek, D.J.; McElroy, J.; Turinsky, P.J.

    1993-01-01

    Beyond the equilibrium cycle scoping calculations used for determining numbers of feed assemblies and enrichment estimates, multicycle reload design currently consists of stagewise optimization of single-cycle core loading patterns, typically extending over a short-term planning horizon of perhaps three reload cycles. Particularly in transition cycles, however, optimizing a loading pattern over a single cycle for a stated objective, such as minimum core leakage, may have an adverse impact on subsequent cycles. The penalties paid may be in the form of reduced thermal margin or an increase in feed enrichment due to insufficient reactivity carryover from the “optimized” cycle. In view of current practices, a study was performed that examined the behavior of the loading pattern as a function of the objective functions selected as implemented in the stagewise optimization of single-cycle core loading patterns from initial transition cycle through equilibrium using the FORMOSA-P code. The objective functions studied were region average discharge burnup maximization (with enrichment search) and feed enrichment minimization. It is noted at the beginning that the maximization of region average discharge burnup has no meaning for the equilibrium cycle because region average discharge burnup is explicitly set by the feed size and cycle length independent of the loading pattern. In the nonequilibrium cycle, however, it was reasoned that this objective would provide the maximum reactivity carryover throughout the transition and thus have a direct effect on minimizing the multicycle levelized fuel cost.

  4. Validation of single-sample doubly labeled water method

    International Nuclear Information System (INIS)

    Webster, M.D.; Weathers, W.W.

    1989-01-01

    We have experimentally validated a single-sample variant of the doubly labeled water method for measuring metabolic rate and water turnover in a very small passerine bird, the verdin (Auriparus flaviceps). We measured CO2 production using the Haldane gravimetric technique and compared these values with estimates derived from isotopic data. Doubly labeled water results based on the one-sample calculations differed from Haldane values by less than 0.5% on average (range -8.3 to 11.2%, n = 9). Water flux computed by the single-sample method differed by -1.5% on average from results for the same birds based on the standard, two-sample technique (range -13.7 to 2.0%, n = 9).

  5. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors and intermediate products formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2012-12-04

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  6. Methods of forming single source precursors, methods of forming polymeric single source precursors, and single source precursors formed by such methods

    Science.gov (United States)

    Fox, Robert V.; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin; Margulieux, Kelsey R.; Holland, Andrew W.

    2014-09-09

    Methods of forming single source precursors (SSPs) include forming intermediate products having the empirical formula ½{L₂N(μ-X)₂M'X₂}₂, and reacting MER with the intermediate products to form SSPs of the formula L₂N(μ-ER)₂M'(ER)₂, wherein L is a Lewis base, M is a Group IA atom, N is a Group IB atom, M' is a Group IIIB atom, each E is a Group VIB atom, each X is a Group VIIA atom or a nitrate group, and each R group is an alkyl, aryl, vinyl, (per)fluoro alkyl, (per)fluoro aryl, silane, or carbamato group. Methods of forming polymeric or copolymeric SSPs include reacting at least one of HE¹R¹E¹H and MER with one or more substances having the empirical formula L₂N(μ-ER)₂M'(ER)₂ or L₂N(μ-X)₂M'(X)₂ to form a polymeric or copolymeric SSP. New SSPs and intermediate products are formed by such methods.

  7. Nuclear-fuel-cycle optimization: methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book presents methods applicable to analyzing fuel-cycle logistics and optimization, as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle, Uranium Supply and Demand, Basic Model of the LWR (light water reactor) Fuel Cycle, Resolution of Uncertainties, Assessment of Proliferation Risks, Multigoal Optimization, Generalized Fuel-Cycle Models, Reactor Strategy Calculations, and Interface with Energy Strategies. 47 references, 34 figures, 25 tables.

  8. Newton-type methods for optimization and variational problems

    CERN Document Server

    Izmailov, Alexey F

    2014-01-01

    This book presents a comprehensive, state-of-the-art theoretical analysis of the fundamental Newtonian and Newtonian-related approaches to solving optimization and variational problems. A central focus is the relationship between the basic Newton scheme for a given problem and algorithms that also enjoy fast local convergence. The authors develop general perturbed Newtonian frameworks that preserve fast convergence and consider specific algorithms as particular cases within those frameworks, i.e., as perturbations of the associated basic Newton iterations. This approach yields a set of tools for the unified treatment of various algorithms, including some not of the Newton type per se. Among the new subjects addressed is the class of degenerate problems, in particular the phenomenon of attraction of Newton iterates to critical Lagrange multipliers and its consequences, as well as stabilized Newton methods for variational problems and stabilized sequential quadratic programming for optimization. This volume will b...

  9. Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods

    CERN Document Server

    Bhatnagar, S; Prashanth, L A

    2013-01-01

    Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment, so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for readers from sim...
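
    The key idea summarized above — estimating a gradient from only two function evaluations per iteration, regardless of dimension — can be sketched in a few lines. This is a common textbook form of SPSA with simplified gain schedules, not code from the book; the objective and all constants are illustrative.

```python
import random

def spsa_minimize(f, x0, iters=400, a=0.1, c=0.1, seed=3):
    """Minimal SPSA sketch: the gradient is estimated from two function
    evaluations per iteration by perturbing all coordinates simultaneously
    with random signs (hypothetical gains, not the book's schedules)."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602            # standard gain-decay exponents
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in x]
        x_plus = [xi + ck * d for xi, d in zip(x, delta)]
        x_minus = [xi - ck * d for xi, d in zip(x, delta)]
        diff = (f(x_plus) - f(x_minus)) / (2.0 * ck)
        # The same scalar difference serves every coordinate of the estimate.
        x = [xi - ak * diff / d for xi, d in zip(x, delta)]
    return x

# Quadratic test objective with its minimum at (1, -2).
quad = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
x_opt = spsa_minimize(quad, [0.0, 0.0])
```

    Note that no explicit system model is needed: `f` is only ever sampled, which is what makes the approach suitable for the simulation-based settings the book targets.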

  10. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different … in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment …

  11. Predicting the optimal process window for the coating of single-crystalline organic films with mobilities exceeding 7 cm2/Vs.

    Science.gov (United States)

    Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric

    2016-09-01

    Organic thin film transistors (OTFTs) based on single-crystalline thin films of organic semiconductors have seen considerable development in recent years. The most successful methods for the fabrication of single-crystalline films are solution-based meniscus-guided coating techniques such as dip-coating, solution shearing or zone casting. These upscalable methods enable rapid and efficient film formation without additional processing steps. The single-crystalline film quality is strongly dependent on solvent choice, substrate temperature and coating speed. So far, however, process optimization has been conducted by trial and error, involving, for example, the variation of coating speeds over several orders of magnitude. Through a systematic study of solvent phase-change dynamics in the meniscus region, we develop a theoretical framework that links the optimal coating speed to the solvent choice and the substrate temperature. In this way, we can accurately predict an optimal processing window, enabling fast process optimization. Our approach is verified through systematic OTFT fabrication based on films grown with different semiconductors, solvents and substrate temperatures. The use of the best predicted coating speeds delivers state-of-the-art devices. In the case of C8BTBT, OTFTs show well-behaved characteristics with mobilities up to 7 cm2/Vs and onset voltages close to 0 V. Our approach also accounts well for optimal recipes published in the literature. This route considerably accelerates parameter screening for all meniscus-guided coating techniques and unveils the physics of single-crystalline film formation.

  12. Optimized Charging Scheduling with Single Mobile Charger for Wireless Rechargeable Sensor Networks

    Directory of Open Access Journals (Sweden)

    Qihua Wang

    2017-11-01

    Full Text Available Due to the rapid development of wireless charging technology, the recharging issue in wireless rechargeable sensor networks (WRSN) has been a popular research problem in the past few years. The weakness of previous work is that charging route planning is not reasonable. In this work, a dynamic optimal scheduling scheme aiming to maximize the vacation time ratio of a single mobile charger for a WRSN is proposed. In the proposed scheme, the wireless sensor network is divided into several sub-networks according to the initial topology of the deployed sensor networks. After a comprehensive analysis of the energy states, working state and constraints for different sensor nodes in the WRSN, we transform the optimized charging path problem of the whole network into local optimization problems over the sub-networks. The optimized charging path with respect to the dynamic network topology in each sub-network is obtained by solving an optimization problem, and the lifetime of the deployed wireless sensor network can be prolonged. Simulation results show that the proposed scheme has good and reliable performance for a small wireless rechargeable sensor network.
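
    The flavor of charging-route planning within one sub-network can be sketched with a simple nearest-neighbor tour; this is only an illustration of the problem shape (the paper solves a proper optimization problem per sub-network), and the depot and node coordinates are hypothetical.

```python
import math

def charging_tour(depot, nodes):
    """Nearest-neighbor sketch of a mobile charger's route: visit every
    sensor node once, then return to the depot (illustrative heuristic,
    not the paper's scheduling scheme)."""
    tour = [depot]
    remaining = list(nodes)
    pos = depot
    while remaining:
        nxt = min(remaining, key=lambda n: math.dist(pos, n))
        tour.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    tour.append(depot)   # the charger returns to recharge itself
    return tour

def tour_length(tour):
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

# Hypothetical sub-network: three sensors on a line above the depot.
tour = charging_tour((0.0, 0.0), [(0.0, 3.0), (0.0, 1.0), (0.0, 2.0)])
```

    A shorter tour means more vacation time for the charger, which is the quantity the scheme above maximizes.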

  13. Optimization of a method for the profiling and quantification of saponins in different green asparagus genotypes.

    Science.gov (United States)

    Vázquez-Castilla, Sara; Jaramillo-Carmona, Sara; Fuentes-Alventosa, Jose María; Jiménez-Araujo, Ana; Rodriguez-Arcos, Rocío; Cermeño-Sacristán, Pedro; Espejo-Calvo, Juan Antonio; Guillén-Bejarano, Rafael

    2013-07-03

    The main goal of this study was the optimization of an HPLC-MS method for the qualitative and quantitative analysis of asparagus saponins. The method includes extraction with aqueous ethanol, cleanup by solid phase extraction, separation by reverse phase chromatography, electrospray ionization, and detection in a single quadrupole mass analyzer. The method was used for the comparison of selected genotypes of the Huétor-Tájar asparagus landrace and selected varieties of commercial diploid hybrids of green asparagus. The results showed that while protodioscin was almost the only saponin detected in the commercial hybrids, eight different saponins were detected in the Huétor-Tájar asparagus genotypes. The mass spectra indicated that HT saponins are derived from a furostan-type steroidal genin having a single bond between carbons 5 and 6 of the B ring. The total concentration of saponins was found to be higher in triguero asparagus than in commercial hybrids.

  14. Identification of metabolic system parameters using global optimization methods

    Directory of Open Access Journals (Sweden)

    Gatzke Edward P

    2006-01-01

    Full Text Available Abstract Background: The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results: Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA) models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters of which the values are to be determined. Conclusion: The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.
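
    The branch-and-bound principle invoked above can be illustrated on a toy one-dimensional problem (this is a generic sketch, not the paper's branch-and-reduce code): a Lipschitz constant gives a valid lower bound on each subinterval, so regions that provably cannot contain a better minimum are pruned. The test function and its constants are standard benchmarks, chosen here for illustration.

```python
import heapq
import math

def branch_and_bound_min(f, lo, hi, lipschitz, tol=1e-3):
    """Find the global minimum of f on [lo, hi] given a Lipschitz constant:
    on an interval of width w with midpoint m, f(x) >= f(m) - lipschitz*w/2,
    which bounds how much any subinterval could still improve on the incumbent."""
    best_x, best_f = lo, f(lo)
    heap = [(f(0.5 * (lo + hi)) - lipschitz * (hi - lo) / 2.0, lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_f - tol:
            continue                     # prune: cannot beat the incumbent
        m = 0.5 * (a + b)
        fm = f(m)
        if fm < best_f:
            best_f, best_x = fm, m       # new incumbent
        if b - a < tol:
            continue                     # interval too small to split further
        for a2, b2 in ((a, m), (m, b)):  # branch into two child intervals
            m2 = 0.5 * (a2 + b2)
            child_lb = f(m2) - lipschitz * (b2 - a2) / 2.0
            heapq.heappush(heap, (child_lb, a2, b2))
    return best_x, best_f

# Multimodal benchmark; its global minimum on [2.7, 7.5] lies near x = 5.15.
g = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
x_star, f_star = branch_and_bound_min(g, 2.7, 7.5, 5.0)
```

    The same pruning logic is what makes deterministic global optimization of the nonconvex GMA estimation problem tractable, although the real algorithm bounds multivariate functions far more cleverly.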

  15. Spectral Analysis of Large Finite Element Problems by Optimization Methods

    Directory of Open Access Journals (Sweden)

    Luca Bergamaschi

    1994-01-01

    Full Text Available Recently an efficient method for the solution of the partial symmetric eigenproblem (DACG, deflated-accelerated conjugate gradient) was developed, based on the conjugate gradient (CG) minimization of successive Rayleigh quotients over deflated subspaces of decreasing size. In this article four different choices of the coefficient βk required at each DACG iteration for the computation of the new search direction Pk are discussed. The “optimal” choice is the one that yields the same asymptotic convergence rate as the CG scheme applied to the solution of linear systems. Numerical results point out that the optimal βk leads to a very cost-effective algorithm in terms of CPU time in all the sample problems presented. Various preconditioners are also analyzed. It is found that DACG using the optimal βk and (LLᵀ)⁻¹ as a preconditioner, L being the incomplete Cholesky factor of A, proves to be a very promising method for the partial eigensolution. It appears to be superior to the Lanczos method in the evaluation of the 40 leftmost eigenpairs of five finite element problems, and particularly for the largest problem, with size equal to 4560, for which the speed gain turns out to fall between 2.5 and 6.0, depending on the eigenpair level.
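
    The core idea behind DACG — the smallest eigenvalue of a symmetric matrix is the minimum of the Rayleigh quotient, so it can be found by iterative minimization — can be sketched without deflation or preconditioning. This is a steepest-descent variant with exact 2-D subspace minimization, not the article's CG implementation; the test matrix is hypothetical.

```python
import numpy as np

def smallest_eigenpair(A, iters=200, tol=1e-12):
    """Minimize the Rayleigh quotient q(x) = (x'Ax)/(x'x) of a symmetric A:
    each step minimizes q exactly over the 2-D subspace spanned by the
    current iterate and the residual (which is parallel to grad q)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        Ax = A @ x
        q = x @ Ax                       # Rayleigh quotient (x has unit norm)
        r = Ax - q * x                   # residual, orthogonal to x
        if np.linalg.norm(r) < tol:
            break
        r /= np.linalg.norm(r)
        V = np.column_stack([x, r])      # 2-D search subspace
        w, U = np.linalg.eigh(V.T @ A @ V)
        x = V @ U[:, 0]                  # Ritz vector of the smallest Ritz value
        x /= np.linalg.norm(x)
    return x @ (A @ x), x

# Small symmetric test matrix (illustrative only).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = smallest_eigenpair(A)
```

    DACG accelerates exactly this kind of iteration with CG search directions (the βk choices discussed above) and deflates converged eigenvectors to move on to the next eigenpair.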

  16. An Optimal Method for Developing Global Supply Chain Management System

    Directory of Open Access Journals (Sweden)

    Hao-Chun Lu

    2013-01-01

    Full Text Available Owing to the transparency of supply chains, enhancing the competitiveness of industries has become a vital concern. Therefore, many developing countries look for possible methods to save costs. From this point of view, this study deals with the complicated liberalization policies in the global supply chain management system and proposes a mathematical model, via flow-control constraints, which is utilized to cope with bonded warehouses for obtaining maximal profits. Numerical experiments illustrate that the proposed model can be effectively solved to obtain the optimal profits in the global supply chain environment.

  17. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    … climate change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme is based on the least squares Monte Carlo method, which was proposed by Longstaff and Schwartz (2001) for the pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem …

  18. Allogeneic cell therapy bioprocess economics and optimization: single-use cell expansion technologies.

    Science.gov (United States)

    Simaria, Ana S; Hassan, Sally; Varadaraju, Hemanthram; Rowley, Jon; Warren, Kim; Vanek, Philip; Farid, Suzanne S

    2014-01-01

    For allogeneic cell therapies to reach their therapeutic potential, challenges related to achieving scalable and robust manufacturing processes will need to be addressed. A particular challenge is producing lot sizes capable of meeting commercial demands of up to 10^9 cells/dose for large patient numbers, given the current limitations of expansion technologies. This article describes the application of a decisional tool to identify the most cost-effective expansion technologies for different scales of production, as well as current gaps in the technology capabilities for allogeneic cell therapy manufacture. The tool integrates bioprocess economics with optimization to assess the economic competitiveness of planar and microcarrier-based cell expansion technologies. Visualization methods were used to identify the production scales where planar technologies will cease to be cost-effective and where microcarrier-based bioreactors become the only option. The tool outputs also predict that for the industry to be sustainable for high-demand scenarios, significant increases will likely be needed in the performance capabilities of microcarrier-based systems. These data are presented using a technology S-curve as well as windows of operation to identify the combination of cell productivities and scale of single-use bioreactors required to meet future lot sizes. The modeling insights can be used to identify where future R&D investment should be focused to improve the performance of the most promising technologies so that they become a robust and scalable option that enables the cell therapy industry to reach commercially relevant lot sizes. The tool outputs can facilitate decision-making very early on in development and be used to predict, and better manage, the risk of process changes needed as products proceed through the development pathway. © 2013 Wiley Periodicals, Inc.
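
    The crossover logic described above — planar technologies win at small lot sizes but become infeasible or uneconomical at commercial scale — can be sketched with a toy cost-of-goods model. All numbers below are hypothetical placeholders; the paper's decisional tool models costs in far greater detail.

```python
def cost_per_dose(lot_doses, fixed_cost, variable_cost, capacity):
    """Toy model: each technology has a fixed cost per lot, a variable cost
    per dose, and a maximum lot size (hypothetical figures, not the tool's)."""
    if lot_doses > capacity:
        return float("inf")   # lot cannot be made with this technology
    return fixed_cost / lot_doses + variable_cost

technologies = {
    # name: (fixed cost per lot [$], variable cost per dose [$], max doses/lot)
    "planar 10-layer flasks":  (20_000.0, 40.0, 5_000),
    "microcarrier bioreactor": (150_000.0, 15.0, 200_000),
}

def cheapest(lot_doses):
    return min(technologies,
               key=lambda name: cost_per_dose(lot_doses, *technologies[name]))

small_lot = cheapest(1_000)    # fixed costs dominate: planar wins
large_lot = cheapest(50_000)   # planar is over capacity: bioreactor wins
```

    Sweeping `lot_doses` over a range is the one-line version of the "windows of operation" visualization mentioned above.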

  19. Comparison between statistical and optimization methods in accessing unmixing of spectrally similar materials

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-11-01

    Full Text Available This paper reports on results from ordinary least squares and ridge regression as statistical methods, compared to numerical optimization methods such as the stochastic method for global optimization, simulated annealing, particle swarm...

  20. An Evaluation of the Use of Simulated Annealing to Optimize Thinning Rates for Single Even-Aged Stands

    Directory of Open Access Journals (Sweden)

    Kai Moriguchi

    2015-01-01

    Full Text Available We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm’s versatility. Thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years. The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method’s reliability and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.
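
    The procedure evaluated above can be sketched generically: simulated annealing perturbs one thinning rate at a time, always accepts improvements, and accepts deteriorations with a probability that shrinks as the temperature cools. The code below is a generic sketch, not the study's implementation, and the stand-in "yield model" with its ideal rates is entirely hypothetical.

```python
import math
import random

def simulated_annealing(objective, n_vars, lo=0.0, hi=0.5,
                        iters=5000, t0=1.0, cooling=0.999, seed=42):
    """Maximize `objective` over per-age thinning rates bounded to [lo, hi]
    (matching the 0-50% constraint described above)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_vars)]
    fx = objective(x)
    best_x, best_f = list(x), fx
    t = t0
    for _ in range(iters):
        y = list(x)
        i = rng.randrange(n_vars)                      # perturb one rate
        y[i] = min(hi, max(lo, y[i] + rng.gauss(0.0, 0.05)))
        fy = objective(y)
        # Accept uphill moves always; downhill moves with Boltzmann
        # probability exp((fy - fx) / t), which vanishes as t cools.
        if fy >= fx or rng.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
            if fx > best_f:
                best_x, best_f = list(x), fx
        t *= cooling
    return best_x, best_f

# Hypothetical stand-in for a yield model's net present value: each of the
# 8 decision ages has a made-up ideal thinning rate (NOT from the study).
ideal = [0.0, 0.1, 0.2, 0.3, 0.3, 0.2, 0.1, 0.0]
npv = lambda rates: -sum((r - i) ** 2 for r, i in zip(rates, ideal))
best_rates, best_value = simulated_annealing(npv, 8)
```

    As the study notes, a single run can land in a poor local region, so multiple seeded runs with the best result kept is the reliable usage pattern.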

  1. Optimal Control for Bufferbloat Queue Management Using Indirect Method with Parametric Optimization

    Directory of Open Access Journals (Sweden)

    Amr Radwan

    2016-01-01

    Full Text Available As memory buffers have become larger and cheaper, they have been put into network devices to reduce the number of lost packets and improve network performance. However, the consequences of large buffers are long queues at network bottlenecks and throughput saturation, which has recently been noticed in the research community as the bufferbloat phenomenon. To address such issues, in this article, we design a forward-backward optimal control queue algorithm based on an indirect approach with parametric optimization. The cost function which we want to minimize represents a trade-off between queue length and packet loss rate performance. Through the integration of an indirect approach with parametric optimization, our proposal has the advantages of scalability and accuracy compared to direct approaches, while still maintaining good throughput and a shorter queue length than several existing queue management algorithms. Numerical analysis, simulation in ns-2, and experimental results are all provided to solidify the efficiency of our proposal. In detailed comparisons to other conventional algorithms, the proposed procedure runs much faster than direct collocation methods while maintaining a desired short queue (≈40 packets in simulation and 80 ms in experimental tests).
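
    The bufferbloat trade-off above can be illustrated with a toy fluid-queue simulation: instead of letting a large buffer fill, a drop probability that grows with the queue's excess over a short target keeps the queue small. This is a generic proportional AQM sketch with made-up rates, not the article's forward-backward optimal controller.

```python
def simulate_aqm(target=40.0, arrival=120.0, service=100.0,
                 k_p=0.02, steps=2000, dt=0.01):
    """Toy fluid model: packets arrive at `arrival` pkt/s and drain at
    `service` pkt/s; the drop probability is proportional to the queue's
    excess over `target` (hypothetical rates and gain)."""
    q = 0.0
    for _ in range(steps):
        p_drop = min(1.0, max(0.0, k_p * (q - target)))
        q = max(0.0, q + dt * (arrival * (1.0 - p_drop) - service))
    return q

# A purely proportional controller settles with a known offset above target:
#   arrival * (1 - k_p * (q* - target)) = service
#   =>  q* = target + (1 - service/arrival) / k_p
q_final = simulate_aqm()
```

    Even this crude controller holds the queue near ~48 packets rather than filling a bloated buffer of thousands; the optimal-control formulation in the article chooses the drop policy to minimize the queue-length/loss-rate cost explicitly rather than by a fixed gain.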

  2. A novel algorithm for solving optimal path planning problems based on parametrization method and fuzzy aggregation

    International Nuclear Information System (INIS)

    Zamirian, M.; Kamyad, A.V.; Farahi, M.H.

    2009-01-01

    In this Letter a new approach for solving optimal path planning problems for a single rigid and free-moving object in a two- or three-dimensional space in the presence of stationary or moving obstacles is presented. In this approach the path planning problem has some incompatible objectives, such as the length of the path, which must be minimized, and the distance between the path and obstacles, which must be maximized; thus a multi-objective dynamic optimization problem (MODOP) is obtained. Considering the imprecise nature of the decision maker's (DM) judgment, these multiple objectives are viewed as fuzzy variables. By determining intervals for the values of these fuzzy variables, flexible monotonic decreasing or increasing membership functions are taken as the degrees of satisfaction of these fuzzy variables on their intervals. Then, the optimal path planning policy is sought by maximizing the aggregated fuzzy decision values, resulting in a fuzzy multi-objective dynamic optimization problem (FMODOP). Using a suitable t-norm, the FMODOP is converted into a non-linear dynamic optimization problem (NLDOP). By using the parametrization method and some calculations, the NLDOP is converted into a sequence of conventional non-linear programming problems (NLPP). It is proved that the solution of this sequence of NLPPs tends to a Pareto optimal solution which, among the other Pareto optimal solutions, best satisfies the DM for the MODOP. Finally, the above procedure is proposed as a novel algorithm integrating the parametrization method and fuzzy aggregation to solve the MODOP. The efficiency of our approach is confirmed by some numerical examples.
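
    The fuzzy-aggregation step described above can be sketched on static data: each objective gets a monotonic membership function (its degree of satisfaction), and a t-norm combines them into one score to maximize. The membership shapes, intervals, and candidate paths below are hypothetical illustrations, not the Letter's actual functions.

```python
def satisfaction_length(length, best=10.0, worst=25.0):
    """Monotonic decreasing membership: 1 at `best` or shorter, 0 at `worst`
    or longer (hypothetical interval)."""
    if length <= best:
        return 1.0
    if length >= worst:
        return 0.0
    return (worst - length) / (worst - best)

def satisfaction_clearance(clearance, worst=0.0, best=2.0):
    """Monotonic increasing membership for distance to the nearest obstacle."""
    if clearance >= best:
        return 1.0
    if clearance <= worst:
        return 0.0
    return (clearance - worst) / (best - worst)

def aggregate(length, clearance, t_norm=min):
    # `min` is one common t-norm; the product t-norm could be swapped in.
    return t_norm(satisfaction_length(length), satisfaction_clearance(clearance))

# Candidate paths given as (length, minimum clearance to obstacles).
candidates = [(12.0, 0.3), (16.0, 1.5), (22.0, 1.9)]
best_path = max(candidates, key=lambda c: aggregate(*c))
```

    Note how the `min` t-norm rejects the short path with dangerously low clearance: a candidate is only as good as its least-satisfied objective, which is the intuition behind maximizing the aggregated fuzzy decision value.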

  3. A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction

    Directory of Open Access Journals (Sweden)

    Nannan Yu

    2017-01-01

    Full Text Available In this paper, we propose a novel method for solving the single-trial evoked potential (EP) estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX). The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed with these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX, with spontaneous electroencephalographic activity characterized as an autoregressive model driven by white noise and each component of the EP modeled by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has greater tracking capabilities for specific components of the EP complex, as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.

  4. An Optimization Method for Virtual Globe Ocean Surface Dynamic Visualization

    Directory of Open Access Journals (Sweden)

    HUANG Wumeng

    2016-12-01

    Full Text Available The existing visualization methods in virtual globes mainly use a projection grid to organize the ocean surface mesh. This grid organization is poor at reflecting the distinct characteristics of different ocean areas. Global ocean visualization based on a global discrete grid can make up for this defect by matching the discrete space of the virtual globe, so it is better suited to virtual ocean surface simulation. However, existing global discrete grid methods have several problems that limit their application, such as low rendering and loading efficiency and the need to repair grid crevices. To address these issues, we propose an optimization of the global discrete grid method. First, a GPU-oriented multi-scale ocean surface grid model, built on the foundation of global discrete grids, was designed to organize and manage the ocean surface meshes. Then, to achieve wind-driven wave dynamics, this paper proposes a dynamic wave rendering method based on the multi-scale ocean surface grid model that supports real-time wind field updates. At the same time, considering the effect of crevice repair on system efficiency, this paper presents an efficient method for repairing ocean surface grid crevices based on the characteristics of the ocean grid and GPU techniques. Finally, the feasibility and validity of the method are verified by comparison experiments. The experimental results show that the proposed method is efficient, stable and fast, and compensates for the functional shortcomings of the existing methods, so its application range is more extensive.

  5. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint

    Directory of Open Access Journals (Sweden)

    Ang Gong

    2015-12-01

    Full Text Available For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of the baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: it substitutes a gradually enlarged ellipsoidal search space for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and lets the search be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. For all vector candidates, some are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.

  6. Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport

    Science.gov (United States)

    Berton, Jeffrey J.; Guynn, Mark D.

    2012-01-01

    Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh-bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable-geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight into the ultrahigh-bypass engine design process and provide information to NASA program management to help guide its technology development efforts.
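
    With several competing objectives like ramp weight, fuel burn, noise, and emissions, multi-objective optimization reports a set of non-dominated designs rather than a single winner. The filter below is a generic Pareto-dominance sketch (not NASA's optimization code), applied to hypothetical candidate engines scored on two of the objectives.

```python
def pareto_front(designs):
    """Keep a design unless some other design is at least as good in every
    objective and strictly better in at least one (all objectives minimized)."""
    front = []
    for d in designs:
        dominated = any(other != d and all(o <= v for o, v in zip(other, d))
                        for other in designs)
        if not dominated:
            front.append(d)
    return front

# Hypothetical candidate engines scored as (fuel burn, noise); lower is better.
designs = [(100, 80), (90, 85), (95, 82), (110, 78), (105, 90)]
front = pareto_front(designs)
```

    The surviving designs expose the trade-off directly, e.g. the quietest candidate burns the most fuel; picking among them is then a programmatic or policy decision rather than an optimization step.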

  7. Optimized quantum sensing with a single electron spin using real-time adaptive measurements

    Science.gov (United States)

    Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.

    2016-03-01

    Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz-1/2 over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.
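
    A toy version of the adaptive protocol can make the idea concrete: a Bayesian grid estimate of a detuning is updated after each simulated Ramsey outcome, and the controlled readout phase is chosen from the current estimate. The outcome model and the adaptive rule below are simplified stand-ins, not the paper's protocol.

```python
import math, random

random.seed(2)

# Toy adaptive Ramsey estimation of a detuning f (arbitrary units).
# Assumed outcome model: P(0 | f, t, theta) = (1 + cos(2*pi*f*t + theta)) / 2.
true_f = 0.37
grid = [i / 500 for i in range(500)]        # candidate f values in [0, 1)
post = [1.0 / len(grid)] * len(grid)        # uniform prior

def p0(f, t, theta):
    return 0.5 * (1.0 + math.cos(2 * math.pi * f * t + theta))

for k in range(60):
    t = 1 + k % 8                           # cycle through sensing times
    idx = max(range(len(grid)), key=lambda i: post[i])
    # Adaptive rule: put the current estimate at the steepest point of the fringe.
    theta = -2 * math.pi * grid[idx] * t + math.pi / 2
    outcome = random.random() < p0(true_f, t, theta)   # simulated measurement
    like = [p0(f, t, theta) if outcome else 1.0 - p0(f, t, theta) for f in grid]
    post = [l * p for l, p in zip(like, post)]
    s = sum(post)
    post = [p / s for p in post]            # Bayesian update

estimate = grid[max(range(len(grid)), key=lambda i: post[i])]
```

    The essential feature mirrored here is that each readout basis (the phase theta) is computed in real time from the measurement history, which is what distinguishes adaptive from non-adaptive protocols.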

  8. Optimization method for dimensioning a geological HLW waste repository

    International Nuclear Information System (INIS)

    Ouvrier, N.; Chaudon, L.; Malherbe, L.

    1990-01-01

    This method was developed by the CEA to optimize the dimensions of a geological repository by taking account of technical and economic parameters. It involves optimizing radioactive waste storage conditions on the basis of economic criteria with allowance for specified thermal constraints. The results are intended to identify trends and guide the choice from among available options: simple and highly flexible models were therefore used in this study, and only near-field thermal constraints were taken into consideration. Because of the present uncertainty on the physicochemical properties of the repository environment and on the unit cost figures, this study focused on developing a suitable method rather than on obtaining definitive results. The optimum values found for the two media investigated (granite and salt) show that it is advisable to minimize the interim storage time, implying that the containers must be separated by buffer material, whereas vertical spacing may not be required after a 30-year interim storage period. Moreover, the boreholes should be as deep as possible, on a close pitch in widely spaced handling drifts. These results depend to a considerable extent on the assumption of high interim storage costs.

  9. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth, nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The resulting problem is then efficiently solved by an iterative procedure in which each iteration admits an analytical discrete solution, and which is shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, instantiated in this work by both supervised and unsupervised hashing losses, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets, and the generated binary codes achieve very promising results on both retrieval and classification tasks.
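
    The key mechanism, an iteration whose discrete step has an analytical solution, can be illustrated on a toy least-squares loss (not the paper's hashing losses): each iteration takes a gradient step on the smooth term and projects back onto {-1, +1}^n with a sign function.

```python
import random

random.seed(0)

# Toy problem (assumption): minimize L(b) = ||W b - y||^2 over b in {-1,+1}^n,
# where y is generated from a known binary vector so the minimum is zero.
n = 8
W = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
b_true = [random.choice([-1, 1]) for _ in range(n)]
y = [sum(W[i][j] * b_true[j] for j in range(n)) for i in range(n)]

def loss_of(b):
    return sum((sum(W[i][j] * b[j] for j in range(n)) - y[i]) ** 2
               for i in range(n))

def grad(b):
    r = [sum(W[i][j] * b[j] for j in range(n)) - y[i] for i in range(n)]
    return [2 * sum(W[i][j] * r[i] for i in range(n)) for j in range(n)]

def sgn(x):
    return 1 if x >= 0 else -1

b = [1] * n
best_b, best_loss = b, loss_of(b)
for _ in range(100):
    g = grad(b)
    # Gradient step on the smooth part, then analytic projection onto {-1,+1}:
    b = [sgn(bj - 0.1 * gj) for bj, gj in zip(b, g)]
    if loss_of(b) < best_loss:
        best_b, best_loss = b, loss_of(b)
```

    The step size and loss are invented; the point is that the projection onto the binary set is closed-form, so no continuous relaxation or rounding step is needed.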

  10. An optimization method for gas refrigeration cycle based on the combination of both thermodynamics and entransy theory

    International Nuclear Information System (INIS)

    Chen, Qun; Xu, Yun-Chao; Hao, Jun-Hong

    2014-01-01

    Highlights: • An optimization method for practical thermodynamic cycles is developed. • The entransy-based heat transfer analysis and thermodynamic analysis are combined. • A theoretical relation between system requirements and design parameters is derived. • The optimization problem can be converted into a conditional extremum problem. • The proposed method provides several useful optimization criteria. - Abstract: A thermodynamic cycle usually consists of heat transfer processes in heat exchangers and heat-work conversion processes in compressors, expanders and/or turbines. This paper presents a new optimization method for effective improvement of thermodynamic cycle performance through the combination of entransy theory and thermodynamics. The heat transfer processes in a gas refrigeration cycle are analyzed by entransy theory and the heat-work conversion processes are analyzed by thermodynamics. The combination of these two analyses yields a mathematical relation directly connecting system requirements, e.g. cooling capacity rate and power consumption rate, with design parameters, e.g. heat transfer area of each heat exchanger and heat capacity rate of each working fluid, without introducing any intermediate variable. Based on this relation together with the conditional extremum method, we theoretically derive an optimization equation group. Simultaneously solving this equation group offers the optimal structural and operating parameters for every single gas refrigeration cycle and provides several useful optimization criteria for all the cycles. Finally, a practical gas refrigeration cycle is taken as an example to show the application and validity of the newly proposed optimization method.

  11. An n -material thresholding method for improving integerness of solutions in topology optimization

    International Nuclear Information System (INIS)

    Watts, Seth; Engineering); Tortorelli, Daniel A.; Engineering)

    2016-01-01

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
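
    For the trivial two-material case mentioned above, a common smooth thresholding function from the topology optimization literature (assumed here as an illustration, not necessarily the authors' exact form) is the tanh-based Heaviside projection, whose sharpness parameter beta and threshold eta keep the derivatives smooth for gradient-based optimizers:

```python
import math

# Smooth Heaviside projection of a volume fraction x in [0, 1] (a standard
# two-material form; beta controls sharpness, eta is the threshold value).
def smooth_threshold(x, beta=8.0, eta=0.5):
    num = math.tanh(beta * eta) + math.tanh(beta * (x - eta))
    den = math.tanh(beta * eta) + math.tanh(beta * (1.0 - eta))
    return num / den

# Volume fractions away from the threshold are pushed toward 0 or 1:
vals = [smooth_threshold(x) for x in (0.1, 0.5, 0.9)]
```

    The n-material generalization in the paper replaces this scalar projection with one acting on barycentric coordinates over a simplex, but the role is the same: drive intermediate mixtures toward integer designs while remaining differentiable.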

  12. ARSTEC, Nonlinear Optimization Program Using Random Search Method

    International Nuclear Information System (INIS)

    Rasmuson, D. M.; Marshall, N. H.

    1979-01-01

    1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed integer, optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region, and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays.
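
    The recentring search described in point 2 can be sketched in a few lines; the objective function and all parameters below are invented for illustration, not taken from ARSTEC itself.

```python
import random

random.seed(1)

# Minimal adaptive random search: sample uniformly in a box centred on the
# best point found so far; whenever a sample improves the objective,
# recentre the box on that sample (illustrative 2-variable objective).
def objective(x, y):
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

centre, best, radius = (0.0, 0.0), objective(0.0, 0.0), 4.0

for _ in range(2000):
    cand = (centre[0] + random.uniform(-radius, radius),
            centre[1] + random.uniform(-radius, radius))
    val = objective(*cand)
    if val < best:
        centre, best = cand, val    # recentre the search region
```

    Because only improving points are accepted, the method needs no gradients and handles the mixed-integer case by sampling the integer variables from discrete ranges instead of uniform reals.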

  13. Nuclear fuel cycle optimization - methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book is aimed at presenting methods applicable in the analysis of fuel cycle logistics and optimization as well as in evaluating the economics of different reactor strategies. After a succinct introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective and subsequent chapters deal with the fuel cycle problems faced by a power utility. A fundamental material flow model is introduced first in the context of light water reactor fuel cycles. Besides the minimum cost criterion, the text also deals with other objectives providing for a treatment of cost uncertainties and of the risk of proliferation of nuclear weapons. Methods to assess mixed reactor strategies, comprising also other reactor types than the light water reactor, are confined to cost minimization. In the final Chapter, the integration of nuclear capacity within a generating system is examined. (author)

  14. A discrete optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1993-04-01

    Nuclear loading pattern elaboration can be seen as a combinatorial optimization problem of tremendous size with non-linear cost functions, for which searches are always numerically expensive. After a brief introduction to the main aspects of nuclear fuel management, this note presents a new idea for treating the combinatorial problem by using information contained in the gradient of a cost function. The method is to choose, by direct observation of the gradient, the most interesting changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method, and finally, connections with simulated annealing and genetic algorithms are described as an attempt to improve search processes. (author). 1 fig., 16 refs

  15. A Single-Degree-of-Freedom Energy Optimization Strategy for Power-Split Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Chaoying Xia

    2017-07-01

    Full Text Available This paper presents a single-degree-of-freedom energy optimization strategy to solve the energy management problem in power-split hybrid electric vehicles (HEVs). The proposed strategy is based on a quadratic performance index, designed to simultaneously restrict the fluctuation of battery state of charge (SOC) and reduce fuel consumption. An extended quadratic optimal control problem is formulated by approximating the fuel consumption rate as a quadratic polynomial of engine power. The approximate optimal control law is obtained by utilizing the solution properties of the Riccati equation and the adjoint equation. It is easy to implement in real time, and its engineering significance is explained in detail. In order to validate the effectiveness of the proposed strategy, a forward-facing vehicle simulation model is established based on the ADVISOR software (Version 2002, National Renewable Energy Laboratory, Golden, CO, USA). The simulation results show only a small difference in fuel consumption between the proposed strategy and the Pontryagin's minimum principle (PMP)-based global optimal strategy, and the proposed strategy also exhibits good adaptability under different initial battery SOC, cargo mass and road slope conditions.
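
    The role of the Riccati equation in such a quadratic formulation can be illustrated with a scalar LQ sketch: the state x is the SOC deviation from its reference, the control u is battery power, and the stationary Riccati solution yields a feedback gain that damps SOC fluctuation. All numbers are invented and the model is far simpler than the paper's.

```python
# Scalar discrete-time LQ sketch: x[k+1] = x[k] + b*u[k], cost sum(q*x^2 + r*u^2).
q, r, b = 1.0, 0.2, -0.05

P = q
for _ in range(200):                  # iterate the Riccati recursion to steady state
    K = (b * P) / (r + b * P * b)     # feedback gain, u = -K*x
    P = q + P * (1.0 - b * K)         # P = q + A*P*(A - B*K) with A = 1

x, traj = 0.3, []                     # initial SOC deviation of 0.3
for _ in range(50):
    u = -K * x                        # approximate optimal control law
    x = x + b * u
    traj.append(abs(x))
```

    The closed-loop deviation decays geometrically, which is the sense in which a quadratic index "restricts the fluctuation" of SOC while the r-weight penalizes battery power use.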

  16. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    Science.gov (United States)

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors choose a treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The system encompasses three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module includes parametric modeling of the bone, of the fracture face, and of the fixation screws and their positions, as well as the input and transmission of model parameters. The finite element mechanical analysis module includes mesh generation, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The post-processing module includes extraction and display of batch processing results, image generation for batch runs, execution of the optimization program, and display of the optimal result. The system carries out the whole workflow from input of fracture parameters to output of the optimal fixation plan according to a specific patient's real fracture parameters and the optimization rules, which demonstrates its effectiveness. The system also has a friendly interface and simple operation, and its functionality can be improved quickly by modifying a single module.

  17. Optimal PMU placement using topology transformation method in power systems

    Directory of Open Access Journals (Sweden)

    Nadia H.A. Rahman

    2016-09-01

    Full Text Available Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is identified as observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by which bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created to determine the minimum number of PMUs required for full observability of the power system. In addition, the case of power flow measurements is also considered. The problem is formulated as an integer linear program (ILP). The proposed method is tested in MATLAB on different IEEE bus systems, and its operation is demonstrated on the IEEE 14-bus system. The results prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with other available techniques.

  18. Optimal PMU placement using topology transformation method in power systems.

    Science.gov (United States)

    Rahman, Nadia H A; Zobaa, Ahmed F

    2016-09-01

    Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is identified as observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by which bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created to determine the minimum number of PMUs required for full observability of the power system. In addition, the case of power flow measurements is also considered. The problem is formulated as an integer linear program (ILP). The proposed method is tested in MATLAB on different IEEE bus systems, and its operation is demonstrated on the IEEE 14-bus system. The results prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with other available techniques.
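
    A brute-force toy version of the core placement problem (omitting the zero-injection merging rules, on an invented 7-bus graph) shows what the ILP computes at scale: a PMU observes its own bus and all neighbours, and we want the smallest set of PMUs that observes every bus.

```python
import itertools

# Invented 7-bus network; the observability rule below (PMU sees its bus and
# its neighbours) is the basic one, without zero-injection refinements.
edges = [(1, 2), (1, 5), (2, 3), (2, 4), (3, 4), (4, 5), (5, 6), (6, 7), (4, 7)]
buses = sorted({b for e in edges for b in e})
adj = {b: {b} for b in buses}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def observed(pmus):
    return set().union(*(adj[p] for p in pmus)) == set(buses)

placement = None
for k in range(1, len(buses) + 1):          # trying smallest k first => optimal
    for combo in itertools.combinations(buses, k):
        if observed(combo):
            placement = combo
            break
    if placement:
        break
```

    Enumeration is exponential, which is why realistic bus systems are solved with an ILP formulation instead; the objective and constraints are the same.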

  19. Green synthesis of isopropyl myristate in novel single phase medium Part I: Batch optimization studies.

    Science.gov (United States)

    Vadgama, Rajeshkumar N; Odaneth, Annamma A; Lali, Arvind M

    2015-12-01

    Isopropyl myristate finds many applications in the food, cosmetic and pharmaceutical industries as an emollient, thickening agent, or lubricant. Using a homogeneous reaction phase, the non-specific lipase derived from Candida antarctica, marketed as Novozym 435, was determined to be most suitable for the enzymatic synthesis of isopropyl myristate. The high molar ratio of alcohol to acid creates a novel single-phase medium which overcomes mass transfer effects and facilitates downstream processing. Various reaction parameters were optimized to obtain a high yield of isopropyl myristate: the effects of temperature, agitation speed, organic solvent, biocatalyst loading, and the batch operational stability of the enzyme were systematically studied. A conversion of 87.65% was obtained when a molar ratio of isopropyl alcohol to myristic acid of 15:1 was used with 4% (w/w) catalyst loading and an agitation speed of 150 rpm at 60 °C. The enzyme also showed good batch operational stability under the optimized conditions.

  20. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper applies an improved quantum particle swarm optimization algorithm to the global optimization of relay protection settings, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed of operation and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values in the relay protection of the whole power system.
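
    A plain (non-quantum) particle swarm loop on a stand-in scalar cost shows the mechanics such methods build on; in the paper the four protection requirements would be folded into a cost function of the setting values. The objective and all coefficients below are invented.

```python
import random

random.seed(3)

# Stand-in objective with a known minimum of 1.0 at (1.5, -0.5).
def cost(x):
    return (x[0] - 1.5) ** 2 + (x[1] + 0.5) ** 2 + 1.0

dim, n, iters = 2, 20, 200
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]                 # personal bests
gbest = min(pbest, key=cost)[:]             # global best

w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i][:]
```

    The quantum variant in the paper replaces the velocity update with a position sampling rule around an attractor, which improves the search ability the abstract refers to; the surrounding structure is the same.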

  1. The Application of Fitness Sharing Method in Evolutionary Algorithm to Optimizing the Travelling Salesman Problem (TSP

    Directory of Open Access Journals (Sweden)

    Nurmaulidar Nurmaulidar

    2015-04-01

    Full Text Available The Travelling Salesman Problem (TSP) is a complex optimization problem that is difficult to solve and requires quite a long time for a large number of cities. Evolutionary algorithms are heuristic methods well suited to such complex optimization problems. Like many other algorithms, however, evolutionary algorithms can experience premature convergence, whereby variation is eliminated from a population of fairly fit individuals before a complete solution is achieved; a method to delay convergence is therefore required. A specific form of fitness sharing called phenotype fitness sharing is used in this research, whose aim is to find out whether fitness sharing in an evolutionary algorithm is able to optimize the TSP. Two concepts of evolutionary algorithm are used: the first uses single elitism and the other uses a federated solution. Both concepts were tested with the fitness sharing method using thresholds of 0.25, 0.50 and 0.75, and the results were compared to a method without fitness sharing. The results indicate that with the single elitism concept, fitness sharing gives a more optimal result for data of 100-1000 cities. With the federated solution concept, fitness sharing yields a more optimal result for data above 1000 cities, as well as better solution spread compared to the method without fitness sharing.
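
    The fitness sharing rule itself can be written down directly: raw fitness is divided by a niche count accumulated from a sharing kernel, so individuals in crowded regions are penalized and diversity is preserved. The triangular kernel and all values below are illustrative; the sharing radius sigma plays the role of the 0.25/0.50/0.75 thresholds above.

```python
# Triangular sharing kernel: full weight at distance 0, zero beyond sigma.
def sharing(d, sigma, alpha=1.0):
    return 1.0 - (d / sigma) ** alpha if d < sigma else 0.0

def shared_fitness(fit, dist, sigma):
    out = []
    for i in range(len(fit)):
        niche = sum(sharing(dist[i][j], sigma) for j in range(len(fit)))
        out.append(fit[i] / niche)          # crowded individuals are penalized
    return out

# Three clustered individuals and one isolated one, all with equal raw fitness:
fit = [1.0, 1.0, 1.0, 1.0]
dist = [[0.0, 0.1, 0.1, 0.9],
        [0.1, 0.0, 0.1, 0.9],
        [0.1, 0.1, 0.0, 0.9],
        [0.9, 0.9, 0.9, 0.0]]
shared = shared_fitness(fit, dist, sigma=0.5)
```

    The isolated individual keeps its full fitness while the clustered ones share theirs, which is exactly the pressure that delays premature convergence in the selection step.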

  2. A Method for Turbocharging Four-Stroke Single Cylinder Engines

    Science.gov (United States)

    Buchman, Michael; Winter, Amos

    2014-11-01

    Turbocharging is not conventionally used with single cylinder engines due to the timing mismatch between when the turbo is powered and when it can deliver air to the cylinder. The proposed solution involves a fixed, pressurized volume - which we call an air capacitor - on the intake side of the engine between the turbocharger and intake valves. The capacitor acts as a buffer and would be implemented as a new style of intake manifold with a larger volume than traditional systems. This talk will present the flow analysis used to determine the optimal size for the capacitor, which was found to be four to five times the engine capacity, as well as its anticipated contributions to engine performance. For a capacitor sized for a one-liter engine, the time to reach operating pressure was found to be approximately two seconds, which would be acceptable for slowly accelerating applications and steady state applications. The air density increase that could be achieved, compared to ambient air, was found to vary between fifty percent for adiabatic compression and no heat transfer from the capacitor, to eighty percent for perfect heat transfer. These increases in density are proportional to, to first order, the anticipated power increases that could be realized. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
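
    The adiabatic-versus-isothermal density figures quoted above can be reproduced from the ideal-gas relations, assuming a boost pressure ratio of 1.8 (an assumption for illustration; the talk does not state one): isothermal filling raises density by the pressure ratio, while adiabatic filling raises it only by the pressure ratio to the power 1/gamma.

```python
# Back-of-envelope check of the quoted density increases for an air capacitor.
gamma = 1.4           # ratio of specific heats for air
PR = 1.8              # assumed turbocharger boost pressure ratio

isothermal_gain = PR - 1.0                  # perfect heat transfer from capacitor
adiabatic_gain = PR ** (1.0 / gamma) - 1.0  # no heat transfer
```

    With these assumptions the gains come out near 80% and 52% respectively, bracketing the fifty-to-eighty percent range quoted in the abstract.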

  3. Strained Silicon Single Nanowire Gate-All-Around TFETs with Optimized Tunneling Junctions

    Directory of Open Access Journals (Sweden)

    Keyvan Narimani

    2018-04-01

    Full Text Available In this work, we demonstrate a strained Si single nanowire tunnel field effect transistor (TFET) with a gate-all-around (GAA) structure yielding an on-current of 15 μA/μm at a supply voltage of Vdd = 0.5 V, with linear onset at low drain voltages. The subthreshold swing (SS) at room temperature averages 76 mV/dec over 4 orders of drain current Id, from 5 × 10−6 to 5 × 10−2 µA/µm. Optimized devices also show excellent current saturation, an important feature for analog performance.

  4. Highly optimized tunable Er3+-doped single longitudinal mode fiber ring laser, experiment and model

    DEFF Research Database (Denmark)

    Poulsen, Christian; Sejka, Milan

    1993-01-01

    A continuous wave (CW) tunable diode-pumped Er3+-doped fiber ring laser, pumped by a diode laser at wavelengths around 1480 nm, is discussed. A wavelength tuning range of 42 nm, a maximum slope efficiency of 48% and an output power of 14.4 mW have been achieved. Single longitudinal mode lasing with a linewidth of 6 kHz has been measured. A fast model of the erbium-doped fiber laser was developed and used to optimize the output parameters of the laser.

  5. Optimized design and performance of a shared pump single clad 2 μm TDFA

    Science.gov (United States)

    Tench, Robert E.; Romano, Clément; Delavaux, Jean-Marc

    2018-05-01

    We report the design, experimental performance, and simulation of a single stage, co- and counter-pumped Tm-doped fiber amplifier (TDFA) in the 2 μm signal wavelength band with an optimized 1567 nm shared pump source. We investigate the dependence of output power, gain, and efficiency on the pump coupling ratio and signal wavelength. Small signal gains of >50 dB, an output power of 2 W, and low small signal noise figures are demonstrated; simulated performance agrees well with the experimental data. We also discuss performance tradeoffs with respect to amplifier topology for this simple and efficient TDFA.

  6. Single Allocation Hub-and-spoke Networks Design Based on Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Yang Pingle

    2014-10-01

    Full Text Available Capacitated single allocation hub-and-spoke network design can be abstracted as a mixed integer linear programming model with three variables. We introduce an improved ant colony algorithm with six local search operators, together with the "Solution Pair" concept, to decompose and optimize the composition of the problem; this makes the problem more specific and plays to the strengths of the ant colony algorithm. Finally, a location simulation experiment based on Australia Post data demonstrates that the algorithm has good efficiency and stability in solving this problem.

  7. Developing a Model for Optimizing Inventory of Repairable Items at Single Operating Base

    OpenAIRE

    Le, Tin

    2016-01-01

    The use of the EOQ model in inventory management is popular. However, EOQ models have many disadvantages, especially when applied to managing repairable items. In order to deal with high-cost, repairable items, Craig C. Sherbrooke introduced a model in his book "Optimal Inventory Modeling of Systems: Multi-Echelon Techniques". The research focus is to implement and develop a program to execute the single-site inventory model for repairable items. The model helps to significantly...

  8. Optimal retirement planning with a focus on single and multilife annuities

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Weissensteiner, Alex

    a single or a joint life, and pay fixed or variable benefits. We further include transaction costs on stocks and bonds, and surrender charges on pure endowments. We show that despite high surrender charges, annuities are the primary asset class in a portfolio, and that annuity income is never fully consumed, but used for rebalancing purposes. We argue that the optimal retirement product for a household is much more complex than any of those available in the market. Every household should be offered an annuity tailored to its needs, using a unique combination of assets and mortality protection levels.

  9. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Full Text Available Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP, which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
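
    The standard atmospheric scattering model underlying this line of work is I = J·t + A·(1 − t), where I is the observed intensity, J the scene radiance, t the transmission and A the atmospheric light; once t and A are estimated (here via the average saturation prior), dehazing inverts it. The sketch below inverts the standard form for one pixel with assumed estimates; the paper's improved model refines this for inhomogeneous atmospheres.

```python
# Single-pixel inversion of the standard scattering model (assumed estimates):
A = 0.85              # estimated atmospheric light
t = 0.4               # estimated transmission at this pixel
I = 0.7               # observed hazy intensity

t_floor = 0.1         # lower bound on t to avoid division blow-up
J = (I - A) / max(t, t_floor) + A    # recovered scene radiance
```

    In practice this is applied per channel over the whole image, with t estimated per pixel or per scene segment as described above.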

  10. A single-image method of aberration retrieval for imaging systems under partially coherent illumination

    International Nuclear Information System (INIS)

    Xu, Shuang; Liu, Shiyuan; Zhang, Chuanwei; Wei, Haiqing

    2014-01-01

    We propose a method for retrieving small lens aberrations in optical imaging systems under partially coherent illumination, which requires measuring only a single defocused intensity image. By deriving a linear theory of imaging systems, we obtain a generalized formulation of aberration sensitivity in matrix form, which provides a set of analytic kernels that relate the measured intensity distribution directly to the unknown Zernike coefficients. A sensitivity analysis is performed and test patterns are optimized to ensure well-posedness of the inverse problem. Optical lithography simulations have validated the theoretical derivation and confirmed its simplicity and superior performance in retrieving small lens aberrations. (fast track communication)

  11. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)]

    2015-08-15

    The Velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area; a robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized map of the velocities available to the robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that would cause a collision in the future can be chosen as the final velocity if the pass-time is sufficiently long.
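
    The lattice-and-cost idea can be sketched as follows. The collision test, pass-time estimate, weights and geometry below are simplified stand-ins for those in the paper, chosen only to show the structure: enumerate a velocity lattice, drop colliding velocities, and score the rest.

```python
import math

# One disc obstacle at a fixed relative position; a candidate velocity
# collides if the relative ray approaches within the combined radius.
obstacle = (2.0, 0.0)     # obstacle position relative to the robot (m)
r_sum = 0.6               # robot radius + obstacle radius (m)
v_pref = (1.0, 0.0)       # preferred velocity toward the goal (m/s)
horizon = 5.0             # planning horizon (s)

def collides(v):
    vx, vy = v
    speed2 = vx * vx + vy * vy
    if speed2 == 0:
        return False
    # time of closest approach, clamped to the horizon
    t = max(0.0, min(horizon, (obstacle[0] * vx + obstacle[1] * vy) / speed2))
    return math.hypot(obstacle[0] - vx * t, obstacle[1] - vy * t) < r_sum

def pass_time(v):
    # crude estimate: time to travel past the obstacle at this speed
    speed = math.hypot(*v)
    return float("inf") if speed == 0 else (math.hypot(*obstacle) + r_sum) / speed

def cost(v):
    dev = math.hypot(v[0] - v_pref[0], v[1] - v_pref[1])
    return dev + 0.2 * min(pass_time(v), horizon)

lattice = [(0.2 * i, 0.2 * j) for i in range(-5, 6) for j in range(-5, 6)]
free = [v for v in lattice if not collides(v)]
best_v = min(free, key=cost)
```

    The preferred velocity aims straight at the obstacle, so the selected velocity sidesteps it while the pass-time term favors faster candidates that clear the obstacle sooner.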

  12. Optimized t-expansion method for the Rabi Hamiltonian

    International Nuclear Information System (INIS)

    Travenec, Igor; Samaj, Ladislav

    2011-01-01

    A polemic arose recently about the applicability of the t-expansion method to the calculation of the ground state energy E0 of the Rabi model. For specific choices of the trial function and a very large number of involved connected moments, the t-expansion results are rather poor and exhibit considerable oscillations. In this Letter, we formulate the t-expansion method for trial functions containing two free parameters which capture two exactly solvable limits of the Rabi Hamiltonian. At each order of the t-series, E0 is assumed to be stationary with respect to the free parameters. A high accuracy of E0 estimates is achieved for small numbers (5 or 6) of involved connected moments, the relative error being smaller than 10^-4 (0.01%) within the whole parameter space of the Rabi Hamiltonian. A special symmetrization of the trial function enables us to calculate also the first excited energy E1, with a relative error smaller than 10^-2 (1%). -- Highlights: → We study the ground state energy of the Rabi Hamiltonian. → We use the t-expansion method with an optimized trial function. → High accuracy of estimates is achieved, the relative error being smaller than 0.01%. → The calculation of the first excited state energy is made. The method has a general applicability.

  13. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We discuss an information-theory-based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  14. Methods for the design and optimization of shaped tokamaks

    International Nuclear Information System (INIS)

    Haney, S.W.

    1988-05-01

    Two major questions associated with the design and optimization of shaped tokamaks are considered. How do physics and engineering constraints affect the design of shaped tokamaks? How can the process of designing shaped tokamaks be improved? The first question is addressed with the aid of a completely analytical procedure for optimizing the design of a resistive-magnet tokamak reactor. It is shown that physics constraints, particularly the MHD beta limits and the Murakami density limit, have an enormous, and sometimes unexpected, effect on the final design. The second question is addressed through the development of a series of computer models for calculating plasma equilibria, estimating poloidal field coil currents, and analyzing axisymmetric MHD stability in the presence of resistive conductors and feedback. The models offer potential advantages over conventional methods since they are characterized by extremely fast computer execution times, simplicity, and robustness. Furthermore, evidence is presented suggesting that very little accuracy need be sacrificed to achieve these desirable features. 94 refs., 66 figs., 14 tabs

  15. A non-linear branch and cut method for solving discrete minimum compliance problems to global optimality

    DEFF Research Database (Denmark)

    Stolpe, Mathias; Bendsøe, Martin P.

    2007-01-01

    This paper presents some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single load minimum compliance topology design problem with a fixed finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities) and cuts.

  17. An Effective Experimental Optimization Method for Wireless Power Transfer System Design Using Frequency Domain Measurement

    Directory of Open Access Journals (Sweden)

    Sangyeong Jeong

    2017-10-01

    Full Text Available This paper proposes an experimental optimization method for a wireless power transfer (WPT system. The power transfer characteristics of a WPT system with arbitrary loads and various types of coupling and compensation networks can be extracted by frequency domain measurements. The various performance parameters of the WPT system, such as input real/imaginary/apparent power, power factor, efficiency, output power and voltage gain, can be accurately extracted in a frequency domain by a single passive measurement. Subsequently, the design parameters can be efficiently tuned by separating the overall design steps into two parts. The extracted performance parameters of the WPT system were validated with time-domain experiments.

  18. A conceptual framework for economic optimization of single hazard surveillance in livestock production chains.

    Science.gov (United States)

    Guo, Xuezhen; Claassen, G D H; Oude Lansink, A G J M; Saatkamp, H W

    2014-06-01

    Economic analysis of hazard surveillance in livestock production chains is essential for surveillance organizations (such as food safety authorities) when making scientifically based decisions on optimization of resource allocation. To enable this, quantitative decision support tools are required at two levels of analysis: (1) single-hazard surveillance system and (2) surveillance portfolio. This paper addresses the first level by presenting a conceptual approach for the economic analysis of single-hazard surveillance systems. The concept includes objective and subjective aspects of single-hazard surveillance system analysis: (1) a simulation part to derive an efficient set of surveillance setups based on the technical surveillance performance parameters (TSPPs) and the corresponding surveillance costs, i.e., objective analysis, and (2) a multi-criteria decision making model to evaluate the impacts of the hazard surveillance, i.e., subjective analysis. The conceptual approach was checked for (1) conceptual validity and (2) data validity. Issues regarding the practical use of the approach, particularly the data requirement, were discussed. We concluded that the conceptual approach is scientifically credible for economic analysis of single-hazard surveillance systems and that the practicability of the approach depends on data availability. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. A simplified counter diffusion method combined with a 1D simulation program for optimizing crystallization conditions.

    Science.gov (United States)

    Tanaka, Hiroaki; Inaka, Koji; Sugiyama, Shigeru; Takahashi, Sachiko; Sano, Satoshi; Sato, Masaru; Yoshitomi, Susumu

    2004-01-01

    A new protein crystallization method has been developed using a simplified counter-diffusion technique for optimizing crystallization conditions. It is composed of only a single capillary, gel in a silicone tube, and a screw-top test tube, all of which are readily available in the laboratory. A single capillary can continuously scan a wide range of crystallization conditions (combinations of precipitant and protein concentrations) unless crystallization occurs, which means that it corresponds to many drops in the vapor-diffusion method. The amounts of precipitant and protein solution can be much smaller than in conventional methods. In this study, lysozyme and alpha-amylase were used as model proteins to demonstrate the efficiency of the method. In addition, one-dimensional (1-D) simulations of the crystal growth were performed based on the 1-D diffusion model. The optimized conditions can be applied as initial crystallization conditions both for other counter-diffusion methods with the Granada Crystallization Box (GCB) and, after some modification, for the vapor-diffusion method.
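    The 1-D diffusion model mentioned above can be roughed out with an explicit finite-difference march of precipitant concentration along the capillary. The grid, diffusivity, and boundary values below are illustrative assumptions, not the authors' actual parameters:

```python
# Minimal explicit (FTCS) finite-difference sketch of 1-D diffusion along a
# capillary: the precipitant is held at fixed concentration at the gel end
# and diffuses into an initially empty column. All numbers are illustrative.

def diffuse_1d(c, d, dx, dt, steps):
    """March concentration profile c forward with FTCS; requires d*dt/dx**2 <= 0.5."""
    r = d * dt / dx ** 2
    assert r <= 0.5, "FTCS stability limit violated"
    for _ in range(steps):
        c = [c[0]] + [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
                      for i in range(1, len(c) - 1)] + [c[-1]]
    return c

# Precipitant held at concentration 1.0 at the gel end, 0.0 initially inside.
profile = diffuse_1d([1.0] + [0.0] * 49, d=1e-9, dx=1e-3, dt=100.0, steps=500)
print(profile[1], profile[10])
```

Because the update is a convex combination of neighboring values, the profile stays monotone along the capillary, which is what lets one capillary sweep a continuous range of precipitant concentrations.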

  20. Optimized Design of Spacer in Electrodialyzer Using CFD Simulation Method

    Science.gov (United States)

    Jia, Yuxiang; Yan, Chunsheng; Chen, Lijun; Hu, Yangdong

    2018-06-01

    In this study, the effects of the length-width ratio and the diversion trench of the spacer on fluid flow behavior in an electrodialyzer have been investigated through a CFD simulation method. The relevant information, including the pressure drop, velocity vector distribution and shear stress distribution, demonstrates the importance of optimized spacer design in an electrodialysis process. The results show that the width of the diversion trench has a greater effect on the fluid flow than its length. Increasing the diversion trench width can strengthen the fluid flow, but it also increases the pressure drop. Secondly, the dead zone of the fluid flow decreases as the length-width ratio of the spacer increases, but the pressure drop increases with it. The appropriate length-width ratio of the spacer should therefore be moderate.

  1. Convex functions and optimization methods on Riemannian manifolds

    CERN Document Server

    Udrişte, Constantin

    1994-01-01

    This unique monograph discusses the interaction between Riemannian geometry, convex programming, numerical analysis, dynamical systems and mathematical modelling. The book is the first account of the development of this subject as it emerged at the beginning of the 'seventies. A unified theory of convexity of functions, dynamical systems and optimization methods on Riemannian manifolds is also presented. Topics covered include geodesics and completeness of Riemannian manifolds, variations of the p-energy of a curve and Jacobi fields, convex programs on Riemannian manifolds, geometrical constructions of convex functions, flows and energies, applications of convexity, descent algorithms on Riemannian manifolds, TC and TP programs for calculations and plots, all allowing the user to explore and experiment interactively with real life problems in the language of Riemannian geometry. An appendix is devoted to convexity and completeness in Finsler manifolds. For students and researchers in such diverse fields as pu...

  2. Optimized design and structural mechanics of a single-piece composite helicopter driveshaft

    Science.gov (United States)

    Henry, Todd C.

    In rotorcraft driveline design, single-piece composite driveshafts have much potential for reducing driveline mass and complexity relative to multi-segmented metallic driveshafts. The single-piece shaft concept is enabled by the relatively high fatigue strain capacity of fiber-reinforced polymer composites over metals. Challenges in single-piece driveshaft design lie in addressing the self-heating behavior of the composite due to material damping, as well as whirling stability, torsional buckling stability, and composite strength. Increased composite temperature due to self-heating reduces the composite strength and is accounted for in this research. The laminate longitudinal stiffness (Ex) and strength (Fx) are known to be heavily degraded by fiber undulation; however, neither is well understood in compression. The whirling stability (a function of longitudinal stiffness) and the composite strength are strongly influential in driveshaft optimization, and thus are investigated further through the testing of flat and filament-wound composite specimens. The design of single-piece composite driveshafts needs to consider many failure criteria, including hysteresis-induced overheating, whirl stability, torsional buckling stability, and material failure by overstress. The present investigation uses multi-objective optimization to investigate the design space, which visually highlights design trades. Design variables included stacking sequence, number of laminas, and number of hanger bearings. The design goals were to minimize weight and maximize the lowest factor of safety by adaptively generating solutions to the multi-objective problem. Several design spaces were investigated by examining the effect of misalignment, ambient temperature, and constant power transmission on the optimized solution. Several materials of interest were modeled using experimentally determined elastic properties and novel temperature-dependent composite strength. Compared to the

  3. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.

    Science.gov (United States)

    Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining

    2017-04-21

    Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Owing to the characteristics of the pseudolite positioning system, such as the invariant geometry of the stationary pseudolites, the ease with which the indoor signal is interrupted, and the fact that the first-order linear truncation error cannot be ignored, a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited to indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a variance analysis of a least-squares adjustment is conducted to ensure the reliability of the resolved ambiguities. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a more elaborate search ability than the conventional grid search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the ambiguities resolved by the proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
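    The abstract does not give the IPSO details, so as a stand-in the coordinate-domain search can be sketched with a generic particle swarm optimizer maximizing a toy surrogate "ambiguity function". The objective, bounds, and PSO constants below are illustrative assumptions:

```python
import random

# Generic PSO sketch standing in for the paper's improved PSO (IPSO): search a
# 2-D coordinate domain for the maximum of a surrogate objective. The real AFM
# objective would be the ambiguity function evaluated at candidate positions.

random.seed(0)

def ambiguity(x, y):
    # Toy smooth surrogate peaking at (1.0, -2.0); illustrative only.
    return -((x - 1.0) ** 2 + (y + 2.0) ** 2)

def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=lambda p: f(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(*pos[i]) > f(*pbest[i]):
                pbest[i] = pos[i][:]
                if f(*pos[i]) > f(*gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(ambiguity, [(-10, 10), (-10, 10)])
print(best)  # close to the peak at (1.0, -2.0)
```

Unlike a grid search, the swarm concentrates samples near promising coordinates, which is the computational-efficiency argument made in the abstract.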

  4. Optimizing the radioimmunologic determination methods for cortisol and calcitonin

    International Nuclear Information System (INIS)

    Stalla, G.

    1981-01-01

    In order to build up a specific 125-iodine cortisol radioimmunoassay (RIA), pure cortisol-3-(O-carboxymethyl)oxime was synthesized for the production of antigens and tracers. The cortisol derivative was coupled with tyrosine methyl ester and then labelled with 125-iodine. For antigen production, the derivative was coupled by the same method to thyroglobulin. The majority of the antisera obtained in this way presented high titres. Apart from a high specificity for cortisol, a high affinity was found in the acidic pH range and quantified with a specially developed computer program. An extraction step in the cortisol RIA could thus be avoided. The assay was carried out with an optimized double-antibody principle: the reaction time between the first and the second antiserum was considerably shortened by the addition of polyethylene glycol. The assay can be carried out automatically using a modular analysis system, which operates quickly and provides a large capacity. The required quality and accuracy controls were performed. Comparison of this assay with other cortisol RIAs showed good correlation. The RIA for human calcitonin was also improved. For separating bound and free hormone, the optimized double-antibody technique was applied. The antiserum was examined with respect to its affinity for calcitonin. For the production of 'zero serum', the Florisil extraction method was used. The criteria of the quality and accuracy controls were met. Significantly increased calcitonin concentrations were found in a group of patients with medullary thyroid carcinoma and in two patients with an additional phaeochromocytoma. (orig./MG) [de

  5. Validation of a method for radionuclide activity optimize in SPECT

    International Nuclear Information System (INIS)

    Perez Diaz, M.; Diaz Rizo, O.; Lopez Diaz, A.; Estevez Aparicio, E.; Roque Diaz, R.

    2007-01-01

    A discriminant method for optimizing the activity administered in NM studies is validated by comparison with ROC curves. The method is tested in 21 SPECT studies performed with a cardiac phantom. Three different cold lesions (L1, L2 and L3) were placed in the myocardial wall for each SPECT. Three activities (84 MBq, 37 MBq or 18.5 MBq) of Tc-99m diluted in water were used as background. Linear discriminant analysis was used to select the parameters that characterize image quality (background-to-lesion (B/L) and signal-to-noise (S/N) ratios). Two clusters with different image quality (p=0.021) were obtained from the selected variables. The first one involved the studies performed with 37 MBq and 84 MBq, and the second one included the studies with 18.5 MBq. The ratios B/L1, B/L2 and B/L3 are the parameters capable of constructing the discriminant function, with 100% of cases correctly classified into the clusters. The value of 37 MBq is the lowest tested activity for which good results for the B/Li variables were obtained, without significant differences from the results with 84 MBq (p>0.05). This result is coincident with the applied ROC analysis. A correlation between both methods of r=0.890 was obtained. (Author) 26 refs
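    The discriminant step above can be illustrated with a two-class Fisher linear discriminant on toy image-quality ratios. The feature values below (background-to-lesion and signal-to-noise ratios) are made up for illustration, not the study's data:

```python
# Two-class Fisher linear discriminant sketch: project 2-D quality features
# onto the direction that best separates "adequate activity" from "degraded"
# studies, then classify by a midpoint threshold. Data are illustrative.

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_direction(a, b):
    ma, mb = mean(a), mean(b)
    # Pooled within-class scatter matrix (2x2).
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((a, ma), (b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

good = [[2.9, 8.1], [3.1, 7.9], [3.0, 8.3]]   # higher B/L and S/N ratios
poor = [[1.4, 4.2], [1.6, 4.0], [1.5, 4.4]]   # degraded image quality
w = fisher_direction(good, poor)
proj = lambda r: w[0] * r[0] + w[1] * r[1]
threshold = (proj(mean(good)) + proj(mean(poor))) / 2
print(all(proj(r) > threshold for r in good) and
      all(proj(r) < threshold for r in poor))
```

On these separable toy points the projection classifies every case correctly, mirroring the 100% classification rate reported in the abstract.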

  6. A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems.

    Science.gov (United States)

    Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N

    2006-12-01

    Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)" is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, lesser computational load and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical-system (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.

  7. Consensus of satellite cluster flight using an energy-matching optimal control method

    Science.gov (United States)

    Luo, Jianjun; Zhou, Liang; Zhang, Bo

    2017-11-01

    This paper presents an optimal control method for consensus of satellite cluster flight under a kind of energy-matching condition. Firstly, the relation between energy matching and periodically bounded satellite relative motion is analyzed, and the satellite energy-matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish the period-delayed error dynamics models of a single satellite and of the cluster. Next, a novel satellite cluster feedback control protocol with coupling gain is designed, so that the satellite cluster periodically bounded relative motion consensus problem (the period-delayed error state consensus problem) is transformed into the stability of a set of matrices with the same low dimension. Based on the consensus region theory from research on multi-agent system consensus, the coupling gain can be obtained to satisfy the requirement of the consensus region and to decouple the satellite cluster information topology from the feedback control gain matrix, which can be determined by the linear quadratic regulator (LQR) optimal method. This method can realize consensus of the satellite cluster period-delayed errors, leading to consistency of the semi-major axes (SMA) and energy matching of the satellite cluster. The satellites can then exhibit globally coordinated cluster behavior. Finally, the feasibility and effectiveness of the presented energy-matching optimal consensus for satellite cluster flight are verified through numerical simulations.
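    The LQR gain computation mentioned above reduces, for a scalar system, to a closed-form root of the algebraic Riccati equation. The system below is a generic stable scalar plant, not the satellite-cluster model:

```python
import math

# Scalar LQR sketch: for x' = a*x + b*u with cost  integral(q*x^2 + r*u^2) dt,
# the algebraic Riccati equation  2*a*p - (b**2/r)*p**2 + q = 0  has a
# closed-form positive root p, giving the optimal gain k with u = -k*x.
# The numbers are illustrative assumptions.

def lqr_scalar(a, b, q, r):
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)  # positive ARE root
    k = b * p / r                                             # optimal feedback gain
    return p, k

p, k = lqr_scalar(a=-1.0, b=1.0, q=1.0, r=1.0)
print(p, k)   # p = k = sqrt(2) - 1; closed loop a - b*k = -sqrt(2), stable
```

In the paper's setting the same Riccati machinery is applied to the matrix-valued period-delayed error dynamics rather than a scalar plant.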

  8. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    Science.gov (United States)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and the application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
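    The core idea above, expressing the open-loop input as a series of functions and optimizing the coefficients using only the simulated output, can be sketched on a toy plant. The double-integrator plant, the two-term polynomial input, and all constants below are illustrative assumptions:

```python
# Toy sketch of input parametrisation: u(t) = c0 + c1*t, with the coefficients
# tuned by finite-difference gradient descent on a cost computed purely from
# the simulated output (reach position 1.0 at rest). All values illustrative.

def simulate(c, dt=0.01, t_end=1.0):
    x, v, t = 0.0, 0.0, 0.0
    while t < t_end:
        u = c[0] + c[1] * t      # input as a short series of basis functions
        v += u * dt              # double-integrator plant
        x += v * dt
        t += dt
    return x, v

def cost(c, target=1.0):
    x, v = simulate(c)
    return (x - target) ** 2 + v ** 2   # hit target position at rest

c = [0.0, 0.0]
for _ in range(400):                     # finite-difference gradient descent
    grad = []
    for i in range(2):
        cp = c[:]
        cp[i] += 1e-4
        grad.append((cost(cp) - cost(c)) / 1e-4)
    c = [ci - 0.5 * g for ci, g in zip(c, grad)]
print(cost(c))   # far below the initial cost of 1.0
```

This mirrors the reduction to static optimization described in the abstract: the optimizer never sees the plant equations, only the output of the simulation, so the same loop works for nonlinear plants.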

  9. SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2013-05-01

    Full Text Available Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the canopy height model (CHM) recovered from ALS data as a realization of a point process of circles. Unlike a traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term which judges the fitness of the model with respect to the data, and a prior term which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.

  10. The Adjoint Method for Gradient-based Dynamic Optimization of UV Flash Processes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Dynamic optimization of UV flash processes is relevant in nonlinear model predictive control of distillation columns, certain two-phase flow pro......-component flash process which demonstrate the importance of the optimization solver, the compiler, and the linear algebra software for the efficiency of dynamic optimization of UV flash processes....

  11. Piezoresistivity of mechanically drawn single-walled carbon nanotube (SWCNT) thin films-: mechanism and optimizing principle

    Science.gov (United States)

    Obitayo, Waris

    Individual carbon nanotube (CNT) based strain sensors have been found to have excellent piezoresistive properties, with reported gauge factors (GF) of up to 3000. This GF, however, has been shown to depend on the structure of the nanotubes. In contrast to individual CNT based strain sensors, ensemble CNT based strain sensors have very low GFs; e.g., for a single-walled carbon nanotube (SWCNT) thin film strain sensor, the GF is ~1. As a result, studies, which are mostly numerical/analytical, have revealed the dependence of piezoresistivity on key parameters such as concentration, orientation, length and diameter, aspect ratio, energy barrier height and the Poisson ratio of the polymer matrix. The fundamental understanding of the piezoresistive mechanism in an ensemble CNT based strain sensor still remains unclear, largely due to discrepancies in the outcomes of these numerical studies. Besides, there has been little or no experimental confirmation of these studies. The goal of my PhD is to study the mechanism and the optimizing principle of a SWCNT thin film strain sensor and to provide experimental validation of the numerical/analytical investigations. The dependence of the piezoresistivity on key parameters such as orientation, network density, bundle diameter (effective tunneling area), and length is studied, as well as how one can effectively optimize the piezoresistive behavior of SWCNT thin film strain sensors. To reach this goal, my first research accomplishment involves the study of the orientation of SWCNTs and its effect on the piezoresistivity of mechanically drawn SWCNT thin film based piezoresistive sensors. Using polarized Raman spectroscopy analysis and a coupled electrical-mechanical test, a quantitative relationship between the strain sensitivity and the SWCNT alignment order parameter was established. As compared to randomly oriented SWCNT thin films, the film with a draw ratio of 3.2 exhibited a ~6x increase in the GF. My second accomplishment involves studying the

  12. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    Science.gov (United States)

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|^4 model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  13. Ionization detector, electrode configuration and single polarity charge detection method

    Science.gov (United States)

    He, Z.

    1998-07-07

    An ionization detector, an electrode configuration and a single polarity charge detection method each utilize a boundary electrode which symmetrically surrounds first and second central interlaced and symmetrical electrodes. All of the electrodes are held at a voltage potential of a first polarity type. The first central electrode is held at a higher potential than the second central or boundary electrodes. By forming the first and second central electrodes in a substantially interlaced and symmetrical pattern and forming the boundary electrode symmetrically about the first and second central electrodes, signals generated by charge carriers are substantially of equal strength with respect to both of the central electrodes. The only significant difference in measured signal strength occurs when the charge carriers move to within close proximity of the first central electrode and are received at the first central electrode. The measured signals are then subtracted and compared to quantitatively measure the magnitude of the charge. 10 figs.

  14. Solving optimum operation of single pump unit problem with ant colony optimization (ACO) algorithm

    International Nuclear Information System (INIS)

    Yuan, Y; Liu, C

    2012-01-01

    For pumping stations, effective scheduling of daily pump operations, obtained by solving the optimum operation problem, is one of the greatest potential sources of energy cost savings. There are difficulties in solving this problem with traditional optimization methods due to the multimodality of the solution region. In this case, an ACO model for optimum operation of a pumping unit is proposed, and the solution method by ant searching is presented by rationally setting the objective function and constraint conditions. A weighted directed graph was constructed, and feasible solutions may be found by the iterative searching of artificial ants; the optimal solution can then be obtained by applying the state transition rule and pheromone updating. An example calculation was conducted and the minimum cost was found to be 4.9979. The result of the ant colony algorithm was compared with the results from dynamic programming and an evolutionary solving method in commercial software under the same discrete conditions. The result of ACO is better and the computing time is shorter, which indicates that the ACO algorithm can provide high application value in the field of optimal operation of pumping stations and related fields.
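    A compact ACO in the spirit of the model above: ants walk a layered directed graph (one layer per scheduling period, one node per pump setting) and pheromone is deposited in proportion to solution quality. The cost table and ACO constants are illustrative assumptions, not the paper's data:

```python
import random

# Ant colony optimization sketch on a layered weighted directed graph.
# Each ant picks one setting per period using the state transition rule
# (pheromone times heuristic desirability); pheromone then evaporates and
# the best-so-far path is reinforced. All numbers are illustrative.

random.seed(1)
costs = [[3.0, 1.2, 2.5],    # cost of each of 3 settings in period 0
         [1.1, 2.8, 1.9],
         [2.2, 0.9, 3.1],
         [1.5, 2.4, 0.8]]    # 4 scheduling periods

tau = [[1.0] * 3 for _ in costs]            # pheromone per (period, setting)
best_path, best_cost = None, float("inf")
for _ in range(50):                         # iterations
    for _ant in range(20):                  # ants per iteration
        path = []
        for period in range(len(costs)):
            weights = [tau[period][j] / costs[period][j] for j in range(3)]
            path.append(random.choices(range(3), weights)[0])
        c = sum(costs[p][j] for p, j in enumerate(path))
        if c < best_cost:
            best_path, best_cost = path, c
    # Pheromone update: evaporate, then reinforce the best-so-far path.
    tau = [[0.9 * t for t in row] for row in tau]
    for p, j in enumerate(best_path):
        tau[p][j] += 1.0 / best_cost

print(best_path, best_cost)
```

On this tiny separable instance the swarm quickly locks onto the cheapest setting in every period; real pump scheduling adds coupling constraints (demand, reservoir levels) to the transition rule.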

  15. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in a pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermophysical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
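    The Bayesian filtering idea above can be illustrated in its simplest form with a scalar Kalman filter tracking one temperature from noisy measurements. The random-walk state model and the noise variances below are illustrative assumptions, not the paper's finite-volume model:

```python
import random

# Minimal scalar Kalman filter sketch: estimate one (constant) wall
# temperature from noisy external measurements. q and r are the assumed
# process and measurement noise variances; all numbers are illustrative.

random.seed(3)
q, r = 0.01, 1.0                       # process / measurement noise variances
x_true = 20.0                          # true temperature (synthetic data)
x_est, p = 0.0, 100.0                  # initial estimate and its variance

for _ in range(200):
    p += q                             # predict (random-walk state model)
    z = x_true + random.gauss(0.0, r ** 0.5)   # noisy measurement
    k = p / (p + r)                    # Kalman gain
    x_est += k * (z - x_est)           # update estimate toward measurement
    p *= (1.0 - k)                     # posterior variance shrinks

print(x_est, p)   # estimate near 20 with small posterior variance
```

In the paper the same predict/update cycle runs on a full temperature field, with the finite-volume model supplying the prediction step.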

  16. Methods for Optimizing CRISPR-Cas9 Genome Editing Specificity

    Science.gov (United States)

    Tycko, Josh; Myer, Vic E.; Hsu, Patrick D.

    2016-01-01

    Summary Advances in the development of delivery, repair, and specificity strategies for the CRISPR-Cas9 genome engineering toolbox are helping researchers understand gene function with unprecedented precision and sensitivity. CRISPR-Cas9 also holds enormous therapeutic potential for the treatment of genetic disorders by directly correcting disease-causing mutations. Although the Cas9 protein has been shown to bind and cleave DNA at off-target sites, the field of Cas9 specificity is rapidly progressing with marked improvements in guide RNA selection, protein and guide engineering, novel enzymes, and off-target detection methods. We review important challenges and breakthroughs in the field as a comprehensive practical guide to interested users of genome editing technologies, highlighting key tools and strategies for optimizing specificity. The genome editing community should now strive to standardize such methods for measuring and reporting off-target activity, while keeping in mind that the goal for specificity should be continued improvement and vigilance. PMID:27494557

  17. Experimental evaluation of optimization method for developing ultraviolet barrier coatings

    Science.gov (United States)

    Gonome, Hiroki; Okajima, Junnosuke; Komiya, Atsuki; Maruyama, Shigenao

    2014-01-01

    Ultraviolet (UV) barrier coatings can be used to protect many industrial products from UV attack. This study introduces a method of optimizing UV barrier coatings using pigment particles. The radiative properties of the pigment particles were evaluated theoretically, and the optimum particle size was determined from the absorption efficiency and the back-scattering efficiency. UV barrier coatings were prepared with zinc oxide (ZnO) and titanium dioxide (TiO2). The transmittance of the UV barrier coating was calculated theoretically. The radiative transfer in the UV barrier coating was modeled using the radiation element method by ray emission model (REM2). In order to validate the calculated results, the transmittances of these coatings were measured by a spectrophotometer. A UV barrier coating with low UV transmittance and high VIS transmittance could be achieved. The calculated transmittance showed a spectral tendency similar to the measured one. The use of appropriate particles with optimum size, coating thickness and volume fraction will result in effective UV barrier coatings. UV barrier coatings can be achieved by the application of optical engineering.
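
    The record's REM2 computation is specialized, but the qualitative design question (how transmittance trades off against absorption, scattering and thickness) can be explored with the much simpler two-flux Kubelka-Munk model. The coefficients below are invented for illustration, not fitted to ZnO or TiO2.

```python
import math

def kubelka_munk_rt(S, K, X):
    """Two-flux Kubelka-Munk reflectance and transmittance of a
    scattering, absorbing coating layer.

    S, K : scattering and absorption coefficients (1/um)
    X    : coating thickness (um)
    """
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    denom = a * math.sinh(b * S * X) + b * math.cosh(b * S * X)
    R = math.sinh(b * S * X) / denom
    T = b / denom
    return R, T

# Illustrative (made-up) coefficients: strong absorption in the UV,
# weak absorption in the visible, for a 10 um pigmented coating.
R_uv, T_uv = kubelka_munk_rt(S=0.5, K=2.0, X=10.0)
R_vis, T_vis = kubelka_munk_rt(S=0.5, K=0.01, X=10.0)
print(f"UV  transmittance: {T_uv:.2e}")
print(f"VIS transmittance: {T_vis:.4f}")
```

    With these assumed coefficients the coating blocks the UV band almost completely while passing a visible fraction, which is the qualitative behavior the record aims for.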

  18. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.
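
    The essence of OSS, as described in the abstract, is to replace a channel's full monochromatic integration by a small weighted set of monochromatic nodes. A toy version with an invented absorption spectrum, using greedy node selection and least-squares weights (not the operational OSS training procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "line-by-line" model: monochromatic transmittance exp(-u * kappa(nu))
# over a range of absorber amounts u (the training set of atmospheric states).
nu = np.linspace(0.0, 1.0, 400)                 # fine spectral grid
kappa = 0.1 + 5.0 * rng.random(nu.size)         # made-up absorption spectrum
u = np.linspace(0.0, 2.0, 50)                   # training states
mono = np.exp(-np.outer(u, kappa))              # (states, grid points)
channel = mono.mean(axis=1)                     # channel-averaged signal

def fit(nodes):
    """Least-squares weights so the node combination matches the channel."""
    w, *_ = np.linalg.lstsq(mono[:, nodes], channel, rcond=None)
    err = np.linalg.norm(mono[:, nodes] @ w - channel)
    return w, err

# Greedy OSS-style selection: repeatedly add the node that most reduces
# the fitting error over the training states.
nodes = []
for _ in range(3):
    best = min((fit(nodes + [j])[1], j) for j in range(nu.size) if j not in nodes)
    nodes.append(best[1])

w, err = fit(nodes)
rel = err / np.linalg.norm(channel)
print(f"selected nodes: {nodes}, relative error: {rel:.2e}")
```

    Three well-chosen monochromatic nodes reproduce the channel average across all training states to a small relative error, which is the speed-up OSS exploits.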

  19. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    Science.gov (United States)

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

    Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    Science.gov (United States)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
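
    The occupation-angle trick described in the abstract is easy to demonstrate: any unconstrained real angle maps to a physical occupation number in [0, 2]. A minimal sketch (the angles and orbital count below are arbitrary, not from an actual SOEO calculation):

```python
import numpy as np

def occupations(theta):
    """Occupation numbers parametrized by angles: n_i = 2 sin^2(theta_i).
    Any real theta_i maps into the physical range [0, 2]."""
    return 2.0 * np.sin(theta) ** 2

rng = np.random.default_rng(1)
theta = rng.normal(scale=3.0, size=8)   # unconstrained optimization variables
n = occupations(theta)

print("occupations:", np.round(n, 3))
print("total electrons:", round(float(n.sum()), 3))

# The derivative dn_i/dtheta_i = 2 sin(2 theta_i) is smooth everywhere,
# which keeps Newton-Raphson steps in theta well-defined at the bounds;
# the fixed electron count is then imposed as a constraint on sum(n).
```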

  1. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    Science.gov (United States)

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is a popular and relatively new concept among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), there is a need to choose an adequate method which will make the comparison fair to all compared units. Comparisons using one specific indicator (a parameter which describes safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant since road safety presents a complex system where more and more indicators are constantly being developed to describe it. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different models, based on DEA and TOPSIS, to present the PROMETHEE-RS model for selection of the optimal method for a composite index. The method for selection of the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
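
    PROMETHEE II, the MCDM engine behind the proposed selection procedure, is compact enough to sketch. The scores and weights below are invented for illustration; they are not the paper's three selection parameters measured on real data.

```python
import numpy as np

def promethee_ii(scores, weights):
    """PROMETHEE II net outranking flows.

    scores  : (n_alternatives, n_criteria), higher is better on every criterion
    weights : (n_criteria,), non-negative, summing to 1
    Uses the 'usual' preference function: P = 1 if a beats b on a criterion.
    """
    n = scores.shape[0]
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                pref[a, b] = np.sum(weights * (scores[a] > scores[b]))
    phi_plus = pref.sum(axis=1) / (n - 1)    # how strongly a outranks others
    phi_minus = pref.sum(axis=0) / (n - 1)   # how strongly a is outranked
    return phi_plus - phi_minus

# Made-up example: four candidate composite-index methods scored on three
# criteria (all oriented so that higher is better).
scores = np.array([[0.9, 0.8, 0.7],
                   [0.6, 0.9, 0.6],
                   [0.8, 0.5, 0.9],
                   [0.4, 0.4, 0.5]])
weights = np.array([0.5, 0.3, 0.2])
phi = promethee_ii(scores, weights)
ranking = np.argsort(-phi)
print("net flows:", np.round(phi, 3), "ranking:", ranking)
```

    The fully dominated fourth alternative ends up last, and the complete ranking follows directly from the net flows.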

  2. Optimal Estimation of Diffusion Coefficients from Noisy Time-Lapse-Recorded Single-Particle Trajectories

    DEFF Research Database (Denmark)

    Vestergaard, Christian Lyngby

    2012-01-01

    …The standard method for estimating diffusion coefficients from single-particle trajectories is based on least-squares fitting to the experimentally measured mean square displacements. This method is highly inefficient, since it ignores the high correlations inherent in these. We derive the exact maximum likelihood… of diffusion coefficients of hOgg1 repair proteins diffusing on stretched fluctuating DNA from data previously analyzed using a suboptimal method. Our analysis shows that the proteins have different effective diffusion coefficients and that their diffusion coefficients are correlated with their residence time on DNA. These results…
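
    The exact maximum-likelihood estimator referred to in this record is involved, but the closely related covariance-based estimator from the same line of work fits in a few lines of NumPy and already removes the bias that naive mean-square-displacement fitting picks up from localization noise. The trajectory and all parameters below are simulated and invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a 1D trajectory: pure diffusion plus localization noise.
D_true, dt, sigma, N = 1.0, 0.01, 0.05, 100_000
x = np.cumsum(np.sqrt(2.0 * D_true * dt) * rng.standard_normal(N))
obs = x + sigma * rng.standard_normal(N)
dx = np.diff(obs)

# Naive MSD-style estimate: biased upward by the localization noise.
D_naive = np.mean(dx ** 2) / (2.0 * dt)

# Covariance-based estimate: the negative lag-1 covariance of the
# displacements cancels the noise contribution exactly in expectation.
D_cve = np.mean(dx ** 2) / (2.0 * dt) + np.mean(dx[1:] * dx[:-1]) / dt

print(f"naive D: {D_naive:.3f}, covariance-based D: {D_cve:.3f}")
```

    With these settings the naive estimate overshoots the true D = 1 by about 25% while the covariance-based estimate lands close to it, which illustrates why ignoring the displacement correlations is costly.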

  3. Methods and tools for analysis and optimization of power plants

    Energy Technology Data Exchange (ETDEWEB)

    Assadi, Mohsen

    2000-09-01

    The most noticeable advantage of the introduction of computer-aided tools in the field of power generation has been the ability to study the plant's performance prior to the construction phase. The results of these studies have made it possible to change and adjust the plant layout to match the pre-defined requirements. Further development of computers in recent years has opened up for implementation of new features in the existing tools and also for the development of new tools for specific applications, like thermodynamic and economic optimization, prediction of the remaining component life time, and fault diagnostics, resulting in improvement of the plant's performance, availability and reliability. The most common tools for pre-design studies are heat and mass balance programs. Further thermodynamic and economic optimization of plant layouts, generated by the heat and mass balance programs, can be accomplished by using pinch programs, exergy analysis and thermoeconomics. Surveillance and fault diagnostics of existing systems can be performed by using tools like condition monitoring systems and artificial neural networks. The increased number of tools and their various construction and application areas make the choice of the most adequate tool for a certain application difficult. In this thesis the development of different categories of tools and techniques, and their application areas, are reviewed and presented. Case studies on both existing and theoretical power plant layouts have been performed using different commercially available tools to illuminate their advantages and shortcomings. The development of power plant technology and the requirements for new tools and measurement systems have been briefly reviewed. This thesis also contains programming techniques and calculation methods concerning part-load calculations using local linearization, which have been implemented in an in-house heat and mass balance program developed by the author.

  4. Optimal sizing method for stand-alone photovoltaic power systems

    Energy Technology Data Exchange (ETDEWEB)

    Groumpos, P P; Papageorgiou, G

    1987-01-01

    The total life-cycle cost of stand-alone photovoltaic (SAPV) power systems is mathematically formulated. A new optimal sizing algorithm for the solar array and battery capacity is developed. The optimum value of a balancing parameter, M, for the optimal sizing of SAPV system components is derived. The proposed optimal sizing algorithm is used in an illustrative example, where a more economical life-cycle cost has been obtained. The question of cost versus reliability is briefly discussed.
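
    As a toy illustration of the sizing idea (not the paper's algorithm or its balancing parameter M), the sketch below enumerates array and battery sizes, keeps only the combinations that survive a simulated year without loss of load, and picks the cheapest. All costs and the solar statistics are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inputs (all made-up): daily solar yield per m^2 of array and a
# constant daily load, simulated over one year.
solar = np.clip(rng.normal(0.8, 0.3, 365), 0.0, None)  # kWh/m^2/day
load = 5.0                                             # kWh/day
cost_array, cost_battery = 150.0, 100.0                # $/m^2, $/kWh

def loss_of_load(area, batt):
    """True if the battery ever runs empty over the simulated year."""
    soc = batt  # start with a full battery
    for s in solar:
        soc = min(batt, soc + area * s - load)
        if soc < 0.0:
            return True
    return False

best = None
for area in np.arange(4.0, 15.1, 0.5):
    for batt in np.arange(5.0, 60.1, 2.5):
        if not loss_of_load(area, batt):
            cost = cost_array * area + cost_battery * batt
            if best is None or cost < best[0]:
                best = (cost, area, batt)

cost, area, batt = best
print(f"optimal sizing: {area:.1f} m^2 array, {batt:.1f} kWh battery, ${cost:.0f}")
```

    A real life-cycle cost model would add maintenance and replacement terms, and would trade a small allowed loss-of-load probability against cost, which is the cost-versus-reliability question the record mentions.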

  5. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    Science.gov (United States)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and it has the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  6. An Improved Method for Reconfiguring and Optimizing Electrical Active Distribution Network Using Evolutionary Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Nur Faziera Napis

    2018-05-01

    Full Text Available The presence of optimized distributed generation (DG) with suitable distribution network reconfiguration (DNR) in the electrical distribution network has advantages for voltage support, power loss reduction, deferment of new transmission line and distribution structure, and system stability improvement. However, installation of a DG unit at a non-optimal size with non-optimal DNR may lead to higher power losses, power quality problems, voltage instability and incremental operational cost. Thus, appropriate DG and DNR planning is essential and is considered as an objective of this research. An effective heuristic optimization technique named improved evolutionary particle swarm optimization (IEPSO) is proposed in this research. The objective function is formulated to minimize the total power losses (TPL) and to improve the voltage stability index (VSI). The voltage stability index is determined for three load demand levels, namely light load, nominal load, and heavy load, with proper optimal DNR and DG sizing. The performance of the proposed technique is compared with other optimization techniques, namely particle swarm optimization (PSO) and iteration particle swarm optimization (IPSO). Four case studies on IEEE 33-bus and IEEE 69-bus distribution systems have been conducted to validate the effectiveness of the proposed IEPSO. The optimization results show that the best performance is achieved by the IEPSO technique, with power loss reduction of up to 79.26% and a 58.41% improvement in the voltage stability index. Moreover, IEPSO has the fastest computational time for all load conditions as compared to the other algorithms.
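
    The PSO core shared by the PSO/IPSO/IEPSO variants compared in this record fits in a few lines. The sketch below applies it to a stand-in objective (the sphere function) rather than an actual power-loss model; the inertia and acceleration coefficients are common textbook settings, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):
    # Stand-in for the real objective (total power losses / VSI penalty):
    # the sphere function, minimized at the origin.
    return np.sum(x ** 2, axis=1)

n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.72, 1.49, 1.49          # common inertia/acceleration settings

pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"best objective after {iters} iterations: {pbest_val.min():.2e}")
```

    In a reconfiguration study, each particle would instead encode switch states and DG sizes, and the objective evaluation would run a power flow; the update equations stay the same.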

  7. Adjoint-Based Optimal Control on the Pitch Angle of a Single-Bladed Vertical-Axis Wind Turbine

    Science.gov (United States)

    Tsai, Hsieh-Chen; Colonius, Tim

    2017-11-01

    Optimal control on the pitch angle of a NACA0018 single-bladed vertical-axis wind turbine (VAWT) is numerically investigated at a low Reynolds number of 1500. With fixed tip-speed ratio, the input power is minimized and mean tangential force is maximized over a specific time horizon. The immersed boundary method is used to simulate the two-dimensional, incompressible flow around a horizontal cross section of the VAWT. The problem is formulated as a PDE constrained optimization problem and an iterative solution is obtained using adjoint-based conjugate gradient methods. By the end of the longest control horizon examined, two controls end up with time-invariant pitch angles of about the same magnitude but with the opposite signs. The results show that both cases lead to a reduction in the input power but not necessarily an enhancement in the mean tangential force. These reductions in input power are due to the removal of a power-damaging phenomenon that occurs when a vortex pair is captured by the blade in the upwind-half region of a cycle. This project was supported by Caltech FLOWE center/Gordon and Betty Moore Foundation.

  8. Springback effects during single point incremental forming: Optimization of the tool path

    Science.gov (United States)

    Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick

    2018-05-01

    Incremental sheet forming is an emerging process to manufacture sheet metal parts. This process is more flexible than conventional ones and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small-size smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structures or serial robots can be used to perform the forming operation. Whatever the considered machine, large deviations between the theoretical shape and the real shape can be observed after the part unclamping. These deviations are due to both the lack of stiffness of the machine and residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy of the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing the shape prediction of the formed part with good accuracy is defined. This model, based on appropriate assumptions, leads to calculation times which remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
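
    The iterative tool-path correction idea can be shown with a deliberately crude springback model (pure linear elastic recovery, an assumption of this sketch, not the paper's finite element model): over-form each point by the measured deviation until the unclamped shape matches the target.

```python
import numpy as np

# Toy springback model: after unclamping, the part recovers elastically
# toward the flat state by a fixed fraction of the imposed depth.
recovery = 0.15

def formed_shape(tool_path):
    """Shape of the part after unclamping, given the commanded tool path."""
    return (1.0 - recovery) * tool_path

target = np.array([0.0, 2.0, 5.0, 8.0, 10.0])  # desired depths (mm)
tool_path = target.copy()

for i in range(20):
    shape = formed_shape(tool_path)
    deviation = target - shape
    tool_path = tool_path + deviation           # over-form where it springs back
    if np.max(np.abs(deviation)) < 1e-6:
        break

final_dev = np.max(np.abs(target - formed_shape(tool_path)))
print(f"converged in {i + 1} iterations, max deviation {final_dev:.2e} mm")
```

    With a linear recovery model the deviation shrinks by the recovery factor each pass, so the loop converges geometrically; in the paper the "measurement" step is an FE simulation of forming plus unclamping rather than a closed-form expression.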

  9. Single-molecule experiments in biological physics: methods and applications.

    Science.gov (United States)

    Ritort, F

    2006-08-16

    I review single-molecule experiments (SMEs) in biological physics. Recent technological developments have provided the tools to design and build scientific instruments of high enough sensitivity and precision to manipulate and visualize individual molecules and measure microscopic forces. Using SMEs it is possible to manipulate molecules one at a time and measure distributions describing molecular properties, characterize the kinetics of biomolecular reactions and detect molecular intermediates. SMEs provide additional information about thermodynamics and kinetics of biomolecular processes. This complements information obtained in traditional bulk assays. In SMEs it is also possible to measure small energies and detect large Brownian deviations in biomolecular reactions, thereby offering new methods and systems to scrutinize the basic foundations of statistical mechanics. This review is written at a very introductory level, emphasizing the importance of SMEs to scientists interested in knowing the common playground of ideas and the interdisciplinary topics accessible by these techniques. The review discusses SMEs from an experimental perspective, first exposing the most common experimental methodologies and later presenting various molecular systems where such techniques have been applied. I briefly discuss experimental techniques such as atomic-force microscopy (AFM), laser optical tweezers (LOTs), magnetic tweezers (MTs), biomembrane force probes (BFPs) and single-molecule fluorescence (SMF). I then present several applications of SME to the study of nucleic acids (DNA, RNA and DNA condensation) and proteins (protein-protein interactions, protein folding and molecular motors). Finally, I discuss applications of SMEs to the study of the nonequilibrium thermodynamics of small systems and the experimental verification of fluctuation theorems. I conclude with a discussion of open questions and future perspectives.

  10. Single-molecule experiments in biological physics: methods and applications

    International Nuclear Information System (INIS)

    Ritort, F

    2006-01-01

    I review single-molecule experiments (SMEs) in biological physics. Recent technological developments have provided the tools to design and build scientific instruments of high enough sensitivity and precision to manipulate and visualize individual molecules and measure microscopic forces. Using SMEs it is possible to manipulate molecules one at a time and measure distributions describing molecular properties, characterize the kinetics of biomolecular reactions and detect molecular intermediates. SMEs provide additional information about thermodynamics and kinetics of biomolecular processes. This complements information obtained in traditional bulk assays. In SMEs it is also possible to measure small energies and detect large Brownian deviations in biomolecular reactions, thereby offering new methods and systems to scrutinize the basic foundations of statistical mechanics. This review is written at a very introductory level, emphasizing the importance of SMEs to scientists interested in knowing the common playground of ideas and the interdisciplinary topics accessible by these techniques. The review discusses SMEs from an experimental perspective, first exposing the most common experimental methodologies and later presenting various molecular systems where such techniques have been applied. I briefly discuss experimental techniques such as atomic-force microscopy (AFM), laser optical tweezers (LOTs), magnetic tweezers (MTs), biomembrane force probes (BFPs) and single-molecule fluorescence (SMF). I then present several applications of SME to the study of nucleic acids (DNA, RNA and DNA condensation) and proteins (protein-protein interactions, protein folding and molecular motors). Finally, I discuss applications of SMEs to the study of the nonequilibrium thermodynamics of small systems and the experimental verification of fluctuation theorems. I conclude with a discussion of open questions and future perspectives. (topical review)

  11. Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations

    Directory of Open Access Journals (Sweden)

    Bahman Ghazanfari

    2013-08-01

    Full Text Available In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve a system of Fredholm integral equations. The effectiveness of the optimal homotopy asymptotic method is presented. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).

  12. Single and multiple objective biomass-to-biofuel supply chain optimization considering environmental impacts

    Science.gov (United States)

    Valles Sosa, Claudia Evangelina

    Bioenergy has become an important alternative source of energy to alleviate the reliance on petroleum energy. Bioenergy offers diminishing climate change by reducing greenhouse gas emissions, as well as providing energy security and enhancing rural development. The Energy Independence and Security Act mandates the use of 21 billion gallons of advanced biofuels, including 16 billion gallons of cellulosic biofuels, by the year 2022. It is clear that biomass can make a substantial contribution to supplying future energy demand in a sustainable way. However, the supply of sustainable energy is one of the main challenges that mankind will face over the coming decades. For instance, many logistical challenges will be faced in order to provide an efficient and reliable supply of quality feedstock to biorefineries. 700 million tons of biomass will be required to be sustainably delivered to biorefineries annually to meet the projected use of biofuels by the year 2022. Approaching this complex logistic problem as a multi-commodity network flow structure, the present work proposes the use of a genetic algorithm as a single objective optimization problem that considers the maximization of profit, and it also proposes the use of a Multiple Objective Evolutionary Algorithm to simultaneously maximize profit while minimizing global warming potential. Most transportation optimization problems available in the literature have considered the maximization of profit or the minimization of total travel time as potential objectives to be optimized. However, in this research work, we take a more conscious and sustainable approach to this logistic problem. Planners are increasingly expected to adopt a multi-disciplinary approach, especially due to the rising importance of environmental stewardship. The role of a transportation planner and designer is shifting from simple economic analysis to promoting sustainability through the integration of environmental objectives. To

  13. Design optimization of the distributed modal filtering rod fiber for increasing single mode bandwidth

    DEFF Research Database (Denmark)

    Jørgensen, Mette Marie; Petersen, Sidsel Rübner; Laurila, Marko

    2012-01-01

    …LMA fiber amplifiers having high pump absorption through a pump cladding that is decoupled from the outer fiber. However, achieving ultra low NA for single-mode (SM) guidance is challenging, and thus different design strategies must be applied to filter out higher order modes (HOMs). The novel distributed modal filtering (DMF) design presented here enables SM guidance, and previous results have shown a SM mode field diameter of 60 μm operating in a 20 nm SM bandwidth. The DMF rod fiber has high index ring-shaped inclusions acting as resonators enabling SM guidance through modal filtering of HOMs. Large preform tolerances are compensated during the fiber draw, resulting in ultra low NA fibers with very large cores. In this paper, design optimization of the SM bandwidth of the DMF rod fiber is presented. Analysis of band gap properties results in a fourfold increase of the SM bandwidth compared…

  14. Optimization of the single point incremental forming process for titanium sheets by using response surface

    Directory of Open Access Journals (Sweden)

    Saidi Badreddine

    2016-01-01

    Full Text Available The single point incremental forming process is well-known to be perfectly suited for prototyping and small series. One of its fields of applicability is medicine, for the forming of titanium prostheses or titanium medical implants. However, this process is not yet very industrialized, mainly due to its geometrical inaccuracy, its inhomogeneous thickness distribution… Moreover, considerable forces can occur. They must be controlled in order to preserve the tooling. In this paper, a numerical approach is proposed in order to minimize the maximum force achieved during the incremental forming of titanium sheets and to maximize the minimal thickness. A response surface methodology is used to find the optimal values of two input parameters of the process, the punch diameter and the vertical step size of the tool path.
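
    The response surface step can be sketched as follows: fit a full quadratic in the two input parameters to sampled responses, then solve the stationarity condition for the optimum. The force model used to generate the samples below is invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic experiment: maximum forming force as an assumed quadratic
# function of punch diameter d (mm) and vertical step size s (mm),
# with its minimum placed at d = 10, s = 0.5.
def force(d, s):
    return 4.0 + 0.08 * (d - 10.0) ** 2 + 6.0 * (s - 0.5) ** 2

d = rng.uniform(6.0, 14.0, 30)
s = rng.uniform(0.2, 1.0, 30)
z = force(d, s)

# Fit z = c0 + c1 d + c2 s + c3 d^2 + c4 s^2 + c5 d s by least squares.
A = np.column_stack([np.ones_like(d), d, s, d**2, s**2, d*s])
c = np.linalg.lstsq(A, z, rcond=None)[0]

# Stationary point: solve grad = 0, i.e. [[2c3, c5], [c5, 2c4]] @ [d, s] = -[c1, c2].
H = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
d_opt, s_opt = np.linalg.solve(H, -c[1:3])
print(f"optimal punch diameter ~ {d_opt:.2f} mm, step size ~ {s_opt:.2f} mm")
```

    In the actual study each sample point is a forming simulation or experiment, and the fitted surface trades off maximum force against minimum thickness rather than a single response.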

  15. Stable Single-Mode Operation of Distributed Feedback Quantum Cascade Laser by Optimized Reflectivity Facet Coatings

    Science.gov (United States)

    Wang, Dong-Bo; Zhang, Jin-Chuan; Cheng, Feng-Min; Zhao, Yue; Zhuo, Ning; Zhai, Shen-Qiang; Wang, Li-Jun; Liu, Jun-Qi; Liu, Shu-Man; Liu, Feng-Qi; Wang, Zhan-Guo

    2018-02-01

    In this work, quantum cascade lasers (QCLs) based on strain compensation combined with a two-phonon resonance design are presented. A distributed feedback (DFB) laser emitting at 4.76 μm was fabricated through a standard buried first-order grating and buried heterostructure (BH) processing. Stable single-mode emission is achieved under all injection currents and temperature conditions without any mode hop, thanks to the optimized antireflection (AR) coating on the front facet. The AR coating consists of a double-layer dielectric of Al2O3 and Ge. For a 2-mm laser cavity, the maximum output power of the AR-coated DFB-QCL was more than 170 mW at 20 °C with a high wall-plug efficiency (WPE) of 4.7% in continuous-wave (CW) mode.
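
    The effect of an optimized facet coating can be checked with the standard thin-film characteristic-matrix method. The sketch below uses a single ideal quarter-wave layer and an assumed facet index, not the paper's actual Al2O3/Ge double layer.

```python
import numpy as np

def reflectance(layers, n_sub, lam, n0=1.0):
    """Normal-incidence reflectance of a thin-film stack by the
    characteristic-matrix method. layers = [(n, d), ...] from the air side."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / lam
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

lam = 4.76              # design wavelength (um)
n_sub = 3.2             # assumed effective index of the laser facet
n_ar = np.sqrt(n_sub)   # ideal index for a single-layer AR coating

R_bare = ((n_sub - 1.0) / (n_sub + 1.0)) ** 2
R_ar = reflectance([(n_ar, lam / (4.0 * n_ar))], n_sub, lam)
print(f"bare facet R = {R_bare:.3f}, quarter-wave AR R = {R_ar:.2e}")
```

    A quarter-wave layer with n equal to the geometric mean of the two media cancels the facet reflection at the design wavelength; in practice no material has exactly that index at 4.76 μm, which is one reason a double layer such as Al2O3/Ge is used instead.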

  16. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional key dimension. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experiment evidences.
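
    The modified GS algorithm in this record builds on the classic Gerchberg-Saxton loop, which alternates amplitude constraints between two Fourier-related planes. A minimal 1D version with made-up amplitude profiles (the paper's multi-plane, polarization-marked variant adds more constraints but keeps the same structure):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 256

source_amp = np.ones(N)                          # amplitude in the mask plane
target_amp = np.exp(-0.5 * ((np.arange(N) - N / 2) / 20.0) ** 2)
# Match total energy (Parseval) so the two constraints are compatible.
target_amp *= np.sqrt(N) * np.linalg.norm(source_amp) / np.linalg.norm(target_amp)

phase = rng.uniform(0.0, 2.0 * np.pi, N)         # random starting phase
errors = []
for _ in range(100):
    far = np.fft.fft(source_amp * np.exp(1j * phase))
    errors.append(np.linalg.norm(np.abs(far) - target_amp))
    far = target_amp * np.exp(1j * np.angle(far))     # impose target amplitude
    near = np.fft.ifft(far)
    phase = np.angle(near)                            # impose source amplitude

print(f"GS error: {errors[0]:.1f} -> {errors[-1]:.3f}")
```

    Each pass keeps the phase and replaces the amplitude in alternating planes; the classic error-reduction property guarantees the mismatch never increases from one iteration to the next.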

  17. Optimal Allocation of Power-Electronic Interfaced Wind Turbines Using a Genetic Algorithm - Monte Carlo Hybrid Optimization Method

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Chen, Zhe

    2010-01-01

    …determined by the wind resource and geographic conditions, the location of wind turbines in a power system network may significantly affect the distribution of power flow, power losses, etc. Furthermore, modern WTs with power-electronic interface have the capability of controlling reactive power output… limit requirements. The method combines the Genetic Algorithm (GA), a gradient-based constrained nonlinear optimization algorithm and sequential Monte Carlo simulation (MCS). The GA searches for the optimal locations and capacities of WTs. The gradient-based optimization finds the optimal power factor setting of WTs. The sequential MCS takes into account the stochastic behaviour of wind power generation and load. The proposed hybrid optimization method is demonstrated on an 11 kV 69-bus distribution system.

  18. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is among the most important and widely used applications today. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream, on the basis of its content, into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without requiring a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) combined with artificial neural networks (ANNs). The audio stream is first classified into speech and non-speech segments by bagged SVMs; the non-speech segment is further classified into music and environment sound by ANNs; and finally, the speech segment is split into silence and pure speech by a rule-based classifier. Minimal data is used for training the classifiers, ensemble methods minimize the misclassification rate, and approximately 98% accurate segments are obtained. The resulting algorithm is fast and efficient enough for real-time multimedia applications.

  19. Models and Methods for Structural Topology Optimization with Discrete Design Variables

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal shape and the topology of the structure. In some cases also the optimal material properties can be determined. Optimal structural design problems are modeled...... such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal......Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used...

  20. Application of Taguchi method for cutting force optimization in rock

    Indian Academy of Sciences (India)

    In this paper, an optimization study was carried out for the cutting force (Fc) acting on circular diamond sawblades in rock sawing. The peripheral speed, traverse speed, cut depth and flow rate of cooling fluid were considered as operating variables and optimized by using Taguchi approach for the Fc. L16(44) orthogonal ...
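
    Taguchi analysis of an orthogonal array typically converts each response into a signal-to-noise (S/N) ratio and then picks, for every factor, the level with the best average S/N. A toy sketch of the "smaller-the-better" analysis appropriate for a force objective (the four trials and force values below are invented, not the paper's L16 data):

```python
import math

# Hypothetical trials: (level of each of 4 factors, measured cutting force Fc).
trials = [
    ((1, 1, 1, 1), 120.0), ((1, 2, 2, 2), 150.0),
    ((2, 1, 2, 1), 110.0), ((2, 2, 1, 2), 140.0),
]

def sn_smaller_better(values):
    """Taguchi 'smaller-the-better' S/N ratio in dB."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def level_effects(factor):
    """Average S/N of the trials at each level of one factor."""
    by_level = {}
    for levels, fc in trials:
        by_level.setdefault(levels[factor], []).append(fc)
    return {lvl: sn_smaller_better(vals) for lvl, vals in by_level.items()}

# The recommended setting picks, per factor, the level with the highest S/N
# (higher S/N means lower cutting force here).
best_levels = {}
for f in range(4):
    eff = level_effects(f)
    best_levels[f] = max(eff, key=eff.get)
```
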

  1. Green synthesis of isopropyl myristate in novel single phase medium Part I: Batch optimization studies

    Directory of Open Access Journals (Sweden)

    Rajeshkumar N. Vadgama

    2015-12-01

    Full Text Available Isopropyl myristate finds many applications in the food, cosmetic and pharmaceutical industries as an emollient, thickening agent, or lubricant. Using a homogeneous reaction phase, the non-specific lipase derived from Candida antarctica, marketed as Novozym 435, was determined to be the most suitable for the enzymatic synthesis of isopropyl myristate. The high molar ratio of alcohol to acid creates a novel single-phase medium which overcomes mass transfer effects and facilitates downstream processing. Various reaction parameters were optimized to obtain a high yield of isopropyl myristate: the effects of temperature, agitation speed, organic solvent, biocatalyst loading and batch operational stability of the enzyme were systematically studied. A conversion of 87.65% was obtained when a molar ratio of isopropyl alcohol to myristic acid of 15:1 was used with 4% (w/w) catalyst loading and an agitation speed of 150 rpm at 60 °C. The enzyme also showed good batch operational stability under the optimized conditions.

  2. Optimization of NANOGrav's time allocation for maximum sensitivity to single sources

    International Nuclear Information System (INIS)

    Christy, Brian; Anella, Ryan; Lommen, Andrea; Camuccio, Richard; Handzo, Emma; Finn, Lee Samuel

    2014-01-01

    Pulsar timing arrays (PTAs) are a collection of precisely timed millisecond pulsars (MSPs) that can search for gravitational waves (GWs) in the nanohertz frequency range by observing characteristic signatures in the timing residuals. The sensitivity of a PTA depends on the direction of the propagating GW source, the timing accuracy of the pulsars, and the allocation of the available observing time. The goal of this paper is to determine the optimal time allocation strategy among the MSPs in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) for a single source of GW under a particular set of assumptions. We consider both an isotropic distribution of sources across the sky and a specific source in the Virgo cluster. This work improves on previous efforts by modeling the effect of intrinsic spin noise for each pulsar. We find that, in general, the array is optimized by maximizing time spent on the best-timed pulsars, with sensitivity improvements typically ranging from a factor of 1.5 to 4.

  3. Process optimization for inkjet printing of triisopropylsilylethynyl pentacene with single-solvent solutions

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xianghua, E-mail: xhwang@hfut.edu.cn [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); Yuan, Miao [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); School of Electronic Science & Applied Physics, Hefei University of Technology, Hefei 230009 (China); Xiong, Xianfeng; Chen, Mengjie [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); Qin, Mengzhi [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); School of Electronic Science & Applied Physics, Hefei University of Technology, Hefei 230009 (China); Qiu, Longzhen; Lu, Hongbo; Zhang, Guobing; Lv, Guoqiang [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); Choi, Anthony H.W. [Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (China)

    2015-03-02

    Inkjet printing of 6,13-bis(triisopropylsilylethynyl) pentacene (TIPS-PEN), a small molecule organic semiconductor, is performed on two types of substrates. Hydrophilic SiO{sub 2} substrates prepared by a combination of surface treatments lead to either a smaller size or a coffee-ring profile of the single-drop film. A hydrophobic surface with dominant dispersive component of surface energy such as that of a spin-coated poly(4-vinylphenol) film favors profile formation with uniform thickness of the printed semiconductor owing to the strong dispersion force between the semiconductor molecules and the hydrophobic surface of the substrate. With a hydrophobic dielectric as the substrate and via a properly selected solvent, high quality TIPS-PEN films were printed at a very low substrate temperature of 35 °C. Saturated field-effect mobility measured with top-contact thin-film transistor structure shows a narrow distribution and a maximum of 0.78 cm{sup 2}V{sup −1} s{sup −1}, which confirmed the film growth on the hydrophobic substrate with increased crystal coverage and continuity under the optimized process condition. - Highlights: • Hydrophobic substrates were employed to inhibit the coffee-ring effect. • Contact-line pinning is primarily controlled by the dispersion force. • Solvent selection is critical to crystal coverage of the printed film. • High performance and uniformity are achieved by process optimization.

  4. Model of a single mode energy harvester and properties for optimal power generation

    International Nuclear Information System (INIS)

    Liao Yabin; Sodano, Henry A

    2008-01-01

    The process of acquiring the energy surrounding a system and converting it into usable electrical energy is termed power harvesting. In the last few years, the field of power harvesting has experienced significant growth due to the ever increasing desire to produce portable and wireless electronics with extended life. Current portable and wireless devices must be designed to include electrochemical batteries as the power source. The use of batteries can be troublesome due to their finite energy supply, which necessitates their periodic replacement. In the case of wireless sensors that are to be placed in remote locations, the sensor must be easily accessible or of a disposable nature to allow the device to function over extended periods of time. Energy scavenging devices are designed to capture the ambient energy surrounding the electronics and convert it into usable electrical energy. The concept of power harvesting works towards developing self-powered devices that do not require replaceable power supplies. The development of energy harvesting systems is greatly facilitated by an accurate model to assist in the design of the system. This paper describes a theoretical model of a piezoelectric-based energy harvesting system that is simple to apply yet provides an accurate prediction of the power generated around a single mode of vibration. Furthermore, this model allows the optimization of system parameters to be studied such that maximal performance can be achieved. Using this model, an expression for the optimal resistance and a parameter describing the energy harvesting efficiency are presented and evaluated through numerical simulations. The second part of this paper presents an experimental validation of the model and optimal parameters.
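
    The optimal-resistance result can be illustrated numerically: if, near a single vibration mode, the harvester is idealized as a voltage source behind the piezoelectric clamped capacitance, the load power peaks where the load resistance matches the magnitude of the internal impedance, R_opt ≈ 1/(ωC_p). A hedged sketch (all parameter values are invented and the circuit model is a simplification; the paper's electromechanical model is more detailed):

```python
import numpy as np

# Hypothetical parameters: open-circuit voltage amplitude V0 (V),
# piezo clamped capacitance Cp (F), drive frequency w (rad/s).
V0, Cp, w = 10.0, 50e-9, 2 * np.pi * 100.0

def avg_power(R):
    """Time-averaged power into a resistive load fed through the piezo capacitance."""
    Zc = 1.0 / (1j * w * Cp)            # internal (capacitive) source impedance
    I = V0 / (R + Zc)                   # phasor current
    return 0.5 * np.abs(I) ** 2 * R     # mean power dissipated in R

R = np.logspace(3, 7, 2000)             # sweep load from 1 kOhm to 10 MOhm
R_opt = R[np.argmax(avg_power(R))]
# For this simple model the maximum occurs near 1/(w*Cp).
```
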

  5. Laser: a Tool for Optimization and Enhancement of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Preisler, Jan [Iowa State Univ., Ames, IA (United States)

    1997-01-01

    In this work, we use lasers to enhance the possibilities of laser desorption methods and to optimize the coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, a 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan, via absorption, plumes desorbed at atmospheric pressure. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g., differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot, are observed. The total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals the negative influence of particle spallation on the MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of the insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In the CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone, excited by a 488-nm Ar-ion laser, along the length of the capillary. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7.
The increase of p

  6. Determination of heterogeneous medium parameters by single fuel element method

    International Nuclear Information System (INIS)

    Veloso, M.A.F.

    1985-01-01

    The neutron pulse propagation technique was employed to study a heterogeneous system consisting of a single fuel element placed at the symmetry axis of a large cylindrical D2O tank. The response of the system to the pulse propagation technique is related to the inverse complex relaxation length of the neutron waves, also known as the system dispersion law ρ(ω). Experimental values of ρ(ω) were compared with the ones derived from Fermi age-diffusion theory. The main purpose of the experiment was to obtain the Feinberg-Galanin thermal constant (γ), which is the logarithmic derivative of the neutron flux at the fuel-moderator interface and as such a main input for heterogeneous reactor theory calculations. The thermal constant γ was determined as the number giving the best agreement between the theoretical and experimental values of ρ(ω). The simultaneous determination of two among the four parameters η, ρ, τ and Ls is possible through the intersection of the dispersion laws of the pure-moderator system and the fuel-moderator system. The parameters τ and η were determined by this method. It was shown that the thermal constant γ and the product ηρ can be computed from the real and imaginary parts of the fuel-moderator dispersion law. The results of this evaluation scheme show an unstable behavior of γ as a function of frequency, a result not foreseen by the theoretical model. (Author) [pt

  7. On the equivalence of optimality criterion and sequential approximate optimization methods in the classical layout problem

    NARCIS (Netherlands)

    Groenwold, A.A.; Etman, L.F.P.

    2008-01-01

    We study the classical topology optimization problem, in which minimum compliance is sought, subject to linear constraints. Using a dual statement, we propose two separable and strictly convex subproblems for use in sequential approximate optimization (SAO) algorithms.Respectively, the subproblems

  8. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient...... and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case...... the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses Lagrangian bound coupling with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising....

  9. Optimizing design parameter for light isotopes separation by distillation method

    International Nuclear Information System (INIS)

    Ahmadi, M.

    1999-01-01

    Several methods have been suggested worldwide for producing heavy water; among them, chemical isotopic exchange, distillation and electrolysis are widely used on an industrial scale. To select a suitable method for heavy water production in Iran, taking into consideration domestic technology and facilities, a combination of the hydrogen sulphide-water dual temperature process (GS) and distillation (DW) may be proposed. Natural water is first enriched up to 15 a% by the GS process and then, in a distillation unit, enriched up to the grade necessary for Candu type reactors (99.8 a%). The aim of the present thesis is to acquire the know-how, optimize the design parameters, and execute the basic design for water isotope separation using the distillation process in a plant of the minimum possible scale. In distillation, the vapour phase resulting from heating the liquid phase is composed of the same constituents as the liquid phase. In isotopic distillation, the difference in the composition of the constituents is not considerable; in fact, the alteration in composition is so small as to make separation seem impossible. However, direct separation and production of pure products without further processing, which becomes possible by distillation, makes this process one of the most important separation processes. Using distillation to produce heavy water is based on the difference between the boiling points of heavy and light water. This difference is inversely dependent on pressure: as the pressure of the whole system decreases, the difference in boiling points increases. Equivalently, since by definition the separation factor is equal to the ratio of the vapour pressure of pure light water to that of heavy water, a decrease in the system pressure results in an increase in the separation factor; accordingly, the separation factor should first be computed as a function of the pressure variable. According to the

  10. Transient Stability Promotion by FACTS Controller Based on Adaptive Inertia Weight Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Ghazanfar Shahgholian

    2018-01-01

    Full Text Available This paper examines the influence of the Static Synchronous Series Compensator (SSSC) on oscillation damping control in the network. The performance of a Flexible AC Transmission System (FACTS) controller depends highly upon its parameters and its appropriate location in the network. A new Adaptive Inertia Weight Particle Swarm Optimization (AIWPSO) method is employed to design the parameters of the SSSC-based controller. In the proposed controller, a suitable power system signal, such as the rotor angle, is used as the feedback. The AIWPSO technique has high flexibility and a balanced mechanism for local and global search. The proposed controller is compared with a Genetic Algorithm (GA) based controller, which confirms its performance. To show the robustness of the proposed control method, simulations are carried out on single-machine infinite-bus and multi-machine grids under multiple disturbances.
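
    One common adaptive-inertia-weight scheme decays the inertia linearly from a high (exploratory) value to a low (exploitative) one over the run; the paper's exact adaptation rule may differ. A minimal sketch on a generic minimization problem:

```python
import random

def aiw_pso(f, bounds, n=20, iters=100, seed=1):
    """Sketch of PSO with a linearly decreasing (adaptive) inertia weight."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / (iters - 1)         # inertia decays 0.9 -> 0.4
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])   # clamp to the search box
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

    In controller tuning, `f` would be a time-domain damping index computed from a simulated disturbance response rather than the toy analytic function used here.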

  11. Dynamic optimization approach for integrated supplier selection and tracking control of single product inventory system with product discount

    Science.gov (United States)

    Sutrisno; Widowati; Heru Tjahjana, R.

    2017-01-01

    In this paper, we propose a mathematical model in the form of a dynamic/multi-stage optimization to solve an integrated supplier selection problem and tracking control problem for a single product inventory system with product discount. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve the proposed optimization problem, determining, for each time period, the optimal supplier and the optimal product volume to purchase from that supplier, so that the inventory level tracks a reference trajectory given by the decision maker at minimal total cost. A numerical experiment is given to evaluate the proposed model: the optimal supplier is determined for each time period, and the inventory level follows the given reference well.
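
    The dynamic-programming idea can be sketched on a toy instance: at each stage, choose a supplier and an order quantity, pay a piecewise-linear (discounted) purchase cost plus a penalty for deviating from the reference inventory, and recurse over the remaining periods. All suppliers, prices, demands and the reference trajectory below are invented for illustration:

```python
from functools import lru_cache

suppliers = {   # name: (unit price, discount threshold, discounted unit price)
    "A": (5.0, 40, 4.5),
    "B": (4.8, 60, 4.6),
}
demand = [30, 50, 20]     # demand per period
ref = [10, 10, 10]        # reference inventory level per period
TRACK_W = 1.0             # weight on the tracking error

def cost(s, q):
    """Piecewise-linear purchase cost with an all-units quantity discount."""
    price, thresh, disc = suppliers[s]
    return q * (disc if q >= thresh else price)

@lru_cache(maxsize=None)
def best(t, inv):
    """Minimal cost-to-go and plan from period t with inventory inv."""
    if t == len(demand):
        return 0.0, []
    cands = []
    for s in suppliers:
        for q in range(0, 101, 10):          # coarse order-quantity grid
            nxt = inv + q - demand[t]
            if nxt < 0:                      # no backorders in this toy model
                continue
            tail, plan = best(t + 1, nxt)
            c = cost(s, q) + TRACK_W * abs(nxt - ref[t]) + tail
            cands.append((c, [(s, q)] + plan))
    return min(cands, key=lambda x: x[0])

total, plan = best(0, 0)   # plan = [(supplier, quantity), ...] per period
```
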

  12. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies - heuristic, stochastic, genetic and combined - is presented. Methods based on sensitivity theory, and stochastic and mixed strategies for optimization with partial knowledge of the kinetic, technical and economic parameters, are discussed. Several approaches for multi-criteria optimization tasks are analyzed. The problems concerning optimal control of biotechnological systems are also discussed.

  13. Genetic-evolution-based optimization methods for engineering design

    Science.gov (United States)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
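
    The three operators the paper names map directly onto a bare-bones real-coded GA. A sketch under generic assumptions (tournament selection for reproduction, blend crossover, uniform-reset mutation; the paper's encodings differ per problem, e.g. discrete actuator locations):

```python
import random

def ga_minimize(f, bounds, pop=30, gens=60, seed=0):
    """Bare-bones real-coded genetic algorithm: reproduction, crossover, mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        fit = [f(x) for x in P]

        def pick():  # binary tournament: reproduction favors the fitter parent
            i, j = rng.randrange(pop), rng.randrange(pop)
            return P[i] if fit[i] < fit[j] else P[j]

        Q = []
        while len(Q) < pop:
            a, b = pick(), pick()
            alpha = rng.random()
            child = [alpha * x + (1 - alpha) * y
                     for x, y in zip(a, b)]           # blend (arithmetic) crossover
            if rng.random() < 0.2:                    # mutation: reset one gene
                d = rng.randrange(dim)
                child[d] = rng.uniform(*bounds[d])
            Q.append(child)
        Q[0] = min(P, key=f)                          # elitism: keep the best so far
        P = Q
    return min(P, key=f)
```
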

  14. Single-trial log transformation is optimal in frequency analysis of resting EEG alpha.

    Science.gov (United States)

    Smulders, Fren T Y; Ten Oever, Sanne; Donkers, Franc C L; Quaedflieg, Conny W E M; van de Ven, Vincent

    2018-02-01

    The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Collocation Method

    Science.gov (United States)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process between micro grid operation modes directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau collocation method to discretize the model, and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.

  16. Optimizing CT angiography in patients with Fontan physiology: single-center experience of dual-site power injection

    International Nuclear Information System (INIS)

    Sandler, K.L.; Markham, L.W.; Mah, M.L.; Byrum, E.P.; Williams, J.R.

    2014-01-01

    Aim: To identify adult patients with single-ventricle congenital heart disease and Fontan procedure palliation who have been misdiagnosed with, or incompletely evaluated for, pulmonary embolism. Additionally, this study was designed to demonstrate that simultaneous dual injection of contrast medium into an upper and a lower extremity vein is superior to single-injection protocols for CT angiography (CTA) of the chest in this population. Materials and methods: Patients included in the study were retrospectively selected from the Adult Congenital Heart Disease (ACHD) database. Search criteria included a history of Fontan palliation and an available chest CT examination. Patients were evaluated for (1) type of congenital heart disease and prior operations; (2) indication for initial CT evaluation; (3) route of contrast medium administration for the initial CT examination and resulting diagnosis; (4) whether or not anticoagulation therapy was initiated; and (5) final diagnosis and treatment plan. Results: The query of the ACHD database identified 28 patients with Fontan palliation (superior and inferior venae cavae anastomosed to the pulmonary arteries). Of these, 19 patients with Fontan physiology underwent CTA of the pulmonary circulation, and 17 had suboptimal imaging studies. Unfortunately, seven of these 17 patients (41%) were started on anticoagulation therapy due to a diagnosis of pulmonary embolism that was later excluded. Conclusion: Patients with single-ventricle/Fontan physiology are at risk of thromboembolic disease. Therefore, studies evaluating their complex anatomy must be performed with an optimal imaging protocol to ensure diagnostic accuracy, which is best achieved with dual injection into an upper and a lower extremity central vein. - Highlights: • The adult congenital heart disease population is growing. • Many of these patients have single ventricle/Fontan physiology. • Patients with Fontan physiology are at increased risk for

  17. Topology optimization of bounded acoustic problems using the hybrid finite element-wave based method

    DEFF Research Database (Denmark)

    Goo, Seongyeol; Wang, Semyung; Kook, Junghwan

    2017-01-01

    This paper presents an alternative topology optimization method for bounded acoustic problems that uses the hybrid finite element-wave based method (FE-WBM). The conventional method for the topology optimization of bounded acoustic problems is based on the finite element method (FEM), which...

  18. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  19. Iron Pole Shape Optimization of IPM Motors Using an Integrated Method

    Directory of Open Access Journals (Sweden)

    JABBARI, A.

    2010-02-01

    Full Text Available An iron pole shape optimization method to reduce cogging torque in Interior Permanent Magnet (IPM) motors is developed by using the reduced basis technique coupled with finite element and design-of-experiments methods. The objective function is defined as the minimum cogging torque. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. The method is demonstrated on the rotor pole shape optimization of a 4-pole/24-slot IPM motor.

  20. A Novel Optimal Joint Resource Allocation Method in Cooperative Multicarrier Networks: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2016-04-01

    Full Text Available With the increasing demands for better transmission speed and robust quality of service (QoS), the capacity-constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in the joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation under a capacity-constrained backhaul in uplink cooperative wireless networks, where two base stations (BSs) equipped with single antennas serve multiple single-antenna users via a multi-carrier transmission mode. In this work, we propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping and backhaul resource sharing among the different pairs (subcarriers) for users. A novel, robust and efficient centralized algorithm based on an alternating optimization strategy and perfect mapping is proposed. Simulations show that our method can improve the system capacity significantly under the backhaul resource constraint compared with the blind alternatives.
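
    The alternating-optimization strategy can be illustrated with a toy problem that interleaves a discrete step (re-mapping pairs to subcarriers) and a continuous step (re-splitting a shared backhaul budget), each solved while the other is held fixed. The gains, budget, and log-capacity model below are invented for illustration:

```python
import itertools, math

gains = [[3.0, 1.0, 2.0],   # gains[pair][subcarrier], invented values
         [1.5, 2.5, 0.5],
         [0.8, 1.2, 2.8]]
BUDGET = 6.0                # shared backhaul budget to split across pairs

def capacity(mapping, shares):
    """Sum-capacity for a pair->subcarrier mapping and a budget split."""
    return sum(math.log2(1 + gains[p][c] * shares[p])
               for p, c in enumerate(mapping))

mapping = (0, 1, 2)
shares = [BUDGET / 3] * 3
for _ in range(10):
    # Discrete step: best subcarrier permutation for the current shares.
    mapping = max(itertools.permutations(range(3)),
                  key=lambda m: capacity(m, shares))
    # Continuous step: reallocate the budget by coarse grid search.
    best = None
    step = BUDGET / 60
    for a in range(61):
        for b in range(61 - a):
            s = [a * step, b * step, BUDGET - (a + b) * step]
            c = capacity(mapping, s)
            if best is None or c > best[0]:
                best = (c, s)
    shares = best[1]
```

    Each step can only increase the objective, so the scheme converges, though (as for alternating methods generally) not necessarily to the joint optimum.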

  1. Single-photon source engineering using a Modal Method

    DEFF Research Database (Denmark)

    Gregersen, Niels

    Solid-state sources of single indistinguishable photons are of great interest for quantum information applications. The semiconductor quantum dot embedded in a host material represents an attractive platform to realize such a single-photon source (SPS). A near-unity efficiency, defined as the num...... nanowire SPSs...

  2. Single molecule force spectroscopy: methods and applications in biology

    International Nuclear Information System (INIS)

    Shen Yi; Hu Jun

    2012-01-01

    Single molecule measurements have transformed our view of biomolecules. Owing to the ability to monitor the activity of individual molecules, we now see them as uniquely structured, fluctuating entities that stochastically transition between many, frequently short-lived, substates, since no two molecules follow precisely the same trajectory. Indeed, it is this discovery of critical yet short-lived substates, often missed in ensemble measurements, that has perhaps contributed most to the better understanding of biomolecular function gained from single molecule experiments. In this paper, we review the three major techniques of single molecule force spectroscopy and their applications, especially in biology. The single-molecule study of biotin-streptavidin interactions is introduced as a successful example. The problems and prospects of single molecule force spectroscopy are also discussed. (authors)
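    Force-extension data from such experiments are commonly fitted with a worm-like-chain model. The sketch below uses the standard Marko-Siggia interpolation formula; it is not from the review itself, and the contour and persistence lengths are illustrative values.

    ```python
    import math

    def wlc_force(x, Lc, Lp, T=298.0):
        """Marko-Siggia worm-like-chain interpolation for entropic force (pN).
        x: extension (nm), Lc: contour length (nm), Lp: persistence length (nm)."""
        kT = 1.380649e-23 * T * 1e21  # Boltzmann energy converted from J to pN*nm
        r = x / Lc
        return (kT / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

    # Example: a polypeptide segment with Lc = 30 nm, Lp = 0.4 nm
    forces = [wlc_force(x, 30.0, 0.4) for x in (5.0, 15.0, 25.0)]
    ```

    The force diverges as the extension approaches the contour length, which is why unfolding events appear as a sawtooth of successive WLC branches.
    
    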

  3. Combustion characteristics and optimal factors determination with Taguchi method for diesel engines port-injecting hydrogen

    International Nuclear Information System (INIS)

    Wu, Horng-Wen; Wu, Zhan-Yi

    2012-01-01

    This study applies the L9 orthogonal array of the Taguchi method to find the best hydrogen injection timing, hydrogen-energy-share ratio, and percentage of exhaust gas recirculation (EGR) in a single DI diesel engine. The injection timing is controlled by an electronic control unit (ECU) and the quantity of hydrogen is controlled by a hydrogen flow controller. For various engine loads, the authors determine the optimal operating factors for low BSFC (brake specific fuel consumption), NOx, and smoke. Moreover, the net heat-release rate, involving a variable specific heat ratio, is computed from the experimental in-cylinder pressure. In-cylinder pressure, net heat-release rate, A/F ratios, COV (coefficient of variation) of IMEP (indicated mean effective pressure), NOx, and smoke using the optimum factor combination are compared with those of the original baseline diesel engine. The predictions made using Taguchi's parameter design technique agreed with the confirmation results at the 95% confidence interval. At 45% and 60% loads, the optimum factor combination, compared with the original baseline diesel engine, reduces BSFC by 14.52%, NOx by 60.5%, and smoke by 42.28%, and improves combustion performance such as peak in-cylinder pressure and net heat-release rate. Adding hydrogen and EGR does not generate unstable combustion, owing to the low COV of IMEP. -- Highlights: ► We use a hydrogen injector controlled by an ECU and a cooled EGR system in a diesel engine. ► Optimal factors for low BSFC, NOx and smoke are determined by the Taguchi method. ► The COV of IMEP is lower than 10%, so unstable combustion is not caused. ► We improve the A/F ratio, in-cylinder pressure, and heat release of the optimized engine. ► The decrease is 14.5% for BSFC, 60.5% for NOx, and 42.28% for smoke at the optimized engine.
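    The L9 screening procedure described above can be sketched as follows. The factor assignment, the synthetic responses, and the smaller-the-better signal-to-noise ratio below are illustrative assumptions, not the paper's data.

    ```python
    import math

    # L9 orthogonal array: 9 runs covering three 3-level factors
    # (e.g. injection timing, hydrogen-energy-share ratio, EGR) -- illustrative.
    L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
          (1, 0, 1), (1, 1, 2), (1, 2, 0),
          (2, 0, 2), (2, 1, 0), (2, 2, 1)]

    def sn_smaller_better(ys):
        """Taguchi smaller-the-better signal-to-noise ratio in dB."""
        return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

    def optimal_levels(results):
        """results[i] = list of measured responses for run i of L9.
        Returns the level (0..2) maximizing mean S/N for each factor."""
        sn = [sn_smaller_better(r) for r in results]
        best = []
        for f in range(3):
            means = []
            for level in range(3):
                vals = [sn[i] for i, run in enumerate(L9) if run[f] == level]
                means.append(sum(vals) / len(vals))
            best.append(max(range(3), key=lambda l: means[l]))
        return best
    ```

    Because the array is orthogonal, each factor level is averaged over the same balanced mix of the other factors, so the per-factor S/N means can be compared directly.
    
    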

  4. Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications

    Science.gov (United States)

    2015-06-24

    Performing organization: Arizona State University, School of Mathematical & Statistical Sciences. Abstract: The major goals of this project were completed: the exact solution of previously unsolved, challenging combinatorial optimization problems. One such combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways: first, heuristically in an engineering fashion, and second, exactly.

  5. Optimized Method for Untargeted Metabolomics Analysis of MDA-MB-231 Breast Cancer Cells

    Directory of Open Access Journals (Sweden)

    Amanda L. Peterson

    2016-09-01

    Full Text Available Cancer cells often have dysregulated metabolism, which is largely characterized by the Warburg effect—an increase in glycolytic activity at the expense of oxidative phosphorylation—and increased glutamine utilization. Modern metabolomics tools offer an efficient means to investigate metabolism in cancer cells. Currently, a number of protocols have been described for harvesting adherent cells for metabolomics analysis, but the techniques vary greatly and they lack specificity to particular cancer cell lines with diverse metabolic and structural features. Here we present an optimized method for untargeted metabolomics characterization of MDA-MB-231 triple negative breast cancer cells, which are commonly used to study metastatic breast cancer. We found that an approach that extracted all metabolites in a single step within the culture dish optimally detected both polar and non-polar metabolite classes with higher relative abundance than methods that involved removal of cells from the dish. We show that this method is highly suited to diverse applications, including the characterization of central metabolic flux by stable isotope labelling and differential analysis of cells subjected to specific pharmacological interventions.

  6. An equivalent method for optimization of particle tuned mass damper based on experimental parametric study

    Science.gov (United States)

    Lu, Zheng; Chen, Xiaoyi; Zhou, Ying

    2018-04-01

    A particle tuned mass damper (PTMD) is a creative combination of a widely used tuned mass damper (TMD) and an efficient particle damper (PD) in the vibration control area. The performance of a one-storey steel frame attached with a PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effects is investigated, and it is shown that the attenuation level significantly depends on the filling ratio of particles. According to the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, which shows satisfactory agreement between simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
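    A preliminary optimal design of an ordinary TMD often starts from the classical Den Hartog tuning rules, which give the optimum frequency ratio and damping ratio for an undamped primary structure under harmonic load. The sketch below shows only that classical starting point; the particle-filling correction proposed in the paper is not modeled here.

    ```python
    import math

    def den_hartog_tmd(mass_ratio):
        """Classical Den Hartog optimum for a TMD on an undamped primary
        structure under harmonic excitation. mass_ratio is the auxiliary
        mass divided by the primary modal mass."""
        mu = mass_ratio
        f_opt = 1.0 / (1.0 + mu)                              # tuning frequency ratio
        zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # damper damping ratio
        return f_opt, zeta_opt

    f, z = den_hartog_tmd(0.05)   # e.g. a 5% auxiliary mass ratio
    ```

    For a 5% mass ratio this gives a tuning ratio of about 0.952 and a damping ratio of about 0.127, values a particle-filling study would then adjust experimentally.
    
    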

  7. Optimization of Wind Farm Layout: A Refinement Method by Random Search

    DEFF Research Database (Denmark)

    Feng, Ju; Shen, Wen Zhong

    2013-01-01

    Wind farm layout optimization is to find the optimal positions of wind turbines inside a wind farm, so as to maximize and/or minimize a single objective or multiple objectives, while satisfying certain constraints. Most of the works in the literature divide the wind farm into cells in which turbi...

  8. A unified, multifidelity quasi-Newton optimization method with application to aero-structural design

    Science.gov (United States)

    Bryson, Dean Edward

    of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. 
The efficacy
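    The approximate inverse Hessian that the abstract describes as the method's "memory" is maintained with the standard BFGS update. A minimal pure-Python sketch of that update follows; it is illustrative only, not the author's implementation.

    ```python
    def bfgs_inverse_update(H, s, y):
        """Standard BFGS update of an inverse-Hessian approximation.
        H: current inverse-Hessian approximation (list of lists),
        s: step x_new - x_old, y: gradient change grad_new - grad_old.
        Returns (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T."""
        n = len(s)
        rho = 1.0 / sum(y[i] * s[i] for i in range(n))
        A = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j]
              for j in range(n)] for i in range(n)]
        AH = [[sum(A[i][k] * H[k][j] for k in range(n))
               for j in range(n)] for i in range(n)]
        # Multiply by A^T on the right: (AH A^T)[i][j] = sum_k AH[i][k] * A[j][k]
        AHA = [[sum(AH[i][k] * A[j][k] for k in range(n))
                for j in range(n)] for i in range(n)]
        return [[AHA[i][j] + rho * s[i] * s[j] for j in range(n)] for i in range(n)]
    ```

    The update enforces the secant condition H_new y = s, which is what lets the multifidelity algorithm carry curvature information across fidelity switches.
    
    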

  9. Structural Optimization Design of Horizontal-Axis Wind Turbine Blades Using a Particle Swarm Optimization Algorithm and Finite Element Method

    Directory of Open Access Journals (Sweden)

    Pan Pan

    2012-11-01

    Full Text Available This paper presents an optimization method for the structural design of horizontal-axis wind turbine (HAWT) blades based on the particle swarm optimization (PSO) algorithm combined with the finite element method (FEM). The main goal is to create an optimization tool and to demonstrate the potential improvements that could be brought to the structural design of HAWT blades. A multi-criteria constrained optimization design model, which minimizes the mass of the blade, is developed. The number and location of layers in the spar cap and the positions of the shear webs are employed as the design variables, while the strain limit, blade/tower clearance limit and vibration limit are taken into account as the constraint conditions. The optimization of the design of a commercial 1.5 MW HAWT blade is carried out by combining the above method and design model under ultimate (extreme) flap-wise load conditions. The optimization results are described and compared with the original design. The results show that the method used in this study is efficient and produces improved designs.

  10. Optimization of the temporal pattern of applied dose for a single fraction of radiation: Implications for radiation therapy

    Science.gov (United States)

    Altman, Michael B.

    The increasing prevalence of intensity modulated radiation therapy (IMRT) as a treatment modality has led to a renewed interest in the potential for interaction between prolonged treatment time, as frequently associated with IMRT, and the underlying radiobiology of the irradiated tissue. A particularly relevant aspect of radiobiology is cell repair capacity, which influences cell survival, and thus directly relates to the ability to control tumors and spare normal tissues. For a single fraction of radiation, the linear quadratic (LQ) model is commonly used to relate the radiation dose to the fraction of cells surviving. The LQ model implies a dependence on two time-related factors which correlate to radiobiological effects: the duration of radiation application, and the functional form of how the dose is applied over that time (the "temporal pattern of applied dose"). Although the former has been well studied, the latter has not. Thus, the goal of this research is to investigate the impact of the temporal pattern of applied dose on the survival of human cells and to explore how the manipulation of this temporal dose pattern may be incorporated into an IMRT-based radiation therapy treatment planning scheme. The hypothesis is that the temporal pattern of applied dose in a single fraction of radiation can be optimized to maximize or minimize cell kill. Furthermore, techniques which utilize this effect could have clinical ramifications. In situations where increased cell kill is desirable, such as tumor control, or limiting the degree of cell kill is important, such as the sparing of normal tissue, temporal sequences of dose which maximize or minimize cell kill (temporally "optimized" sequences) may provide greater benefit than current clinically used radiation patterns. In the first part of this work, an LQ-based modeling analysis of effects of the temporal pattern of dose on cell kill is performed. 
Through this, patterns are identified for maximizing cell kill for a
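    In the LQ framework, the dependence of cell kill on the temporal pattern of dose is usually captured through the Lea-Catcheside protraction factor G, which multiplies the quadratic term. The sketch below covers only the simplest case of a constant dose rate with mono-exponential repair; the radiosensitivity values are illustrative, not from this work.

    ```python
    import math

    def lea_catcheside_constant_rate(lam, T):
        """Lea-Catcheside protraction factor G for a constant dose rate
        delivered over time T with mono-exponential repair rate lam.
        G -> 1 for an acute exposure (T -> 0) and decreases as T grows."""
        x = lam * T
        if x < 1e-9:
            return 1.0
        return 2.0 / x ** 2 * (x - 1.0 + math.exp(-x))

    def surviving_fraction(D, alpha, beta, G=1.0):
        """LQ surviving fraction with protraction factor G:
        S = exp(-(alpha*D + G*beta*D^2))."""
        return math.exp(-(alpha * D + G * beta * D * D))

    # Illustrative: 2 Gy delivered acutely vs. over a prolonged fraction
    s_acute = surviving_fraction(2.0, 0.3, 0.03)
    s_slow = surviving_fraction(2.0, 0.3, 0.03, G=lea_catcheside_constant_rate(0.5, 10.0))
    ```

    Because G < 1 for protracted delivery, the quadratic kill term shrinks and survival rises, which is exactly the lever a temporally optimized dose pattern manipulates.
    
    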

  11. Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields

    International Nuclear Information System (INIS)

    Hudson, Alana; Gordon, Deborah; Moore, Roseanne; Balogh, Alex; Pierce, Greg

    2016-01-01

    Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full-bolus technique. The new technique uses VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include: an improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time and a reduction in therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to improve dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full-bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas in the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. This technique was found to be dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.

  12. Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, Alana; Gordon, Deborah; Moore, Roseanne; Balogh, Alex; Pierce, Greg [Tom Baker Cancer Centre (Canada)

    2016-08-15

    Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full-bolus technique. The new technique uses VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include: an improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time and a reduction in therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to improve dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full-bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas in the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. This technique was found to be dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.

  13. Development of Combinatorial Methods for Alloy Design and Optimization

    International Nuclear Information System (INIS)

    Pharr, George M.; George, Easo P.; Santella, Michael L

    2005-01-01

    The primary goal of this research was to develop a comprehensive methodology for designing and optimizing metallic alloys by combinatorial principles. Because conventional techniques for alloy preparation are unavoidably restrictive in the range of alloy compositions that can be examined, combinatorial methods promise to significantly reduce the time, energy, and expense needed for alloy design. Combinatorial methods can be developed not only to optimize existing alloys, but to explore and develop new ones as well. The scientific approach involved fabricating an alloy specimen with a continuous distribution of binary and ternary alloy compositions across its surface, an "alloy library", and then using spatially resolved probing techniques to characterize its structure, composition, and relevant properties. The three specific objectives of the project were: (1) to devise means by which simple test specimens with a library of alloy compositions spanning the range of interest can be produced; (2) to assess how well the properties of the combinatorial specimen reproduce those of conventionally processed alloys; and (3) to devise screening tools which can be used to rapidly assess the important properties of the alloys. As proof of principle, the methodology was applied to the Fe-Ni-Cr ternary alloy system that constitutes many commercially important materials such as stainless steels and the H-series and C-series heat- and corrosion-resistant casting alloys. Three different techniques were developed for making alloy libraries: (1) vapor deposition of discrete thin films on an appropriate substrate, which are then alloyed together by solid-state diffusion; (2) co-deposition of the alloying elements from three separate magnetron sputtering sources onto an inert substrate; and (3) localized melting of thin films with a focused electron-beam welding system. Each of the techniques was found to have its own advantages and disadvantages.
A new and very powerful technique for

  14. Estimating of aquifer parameters from the single-well water-level measurements in response to advancing longwall mine by using particle swarm optimization

    Science.gov (United States)

    Buyuk, Ersin; Karaman, Abdullah

    2017-04-01

    We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face involves a semi-analytical function that is not suitable for conventional inversion schemes, because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions, consisting of either a minimum or a maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have long been used for many hydrogeologic and geophysical engineering problems. These methods suffer from difficulties such as dependence on the initial model, evaluation of the partial derivatives required when linearizing the model, and trapping at local optima. Recently, particle swarm optimization has become a focus among modern global optimization methods; it is inspired by the social behaviour of bird swarms, and appears to be a reliable and powerful algorithm for complex engineering applications. PSO does not depend on an initial model, and as a non-derivative stochastic process it appears capable of searching all possible solutions in the model space, around either local or global optimum points.
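    A minimal PSO of the kind described can be sketched as follows; the swarm parameters and the test objective are illustrative defaults, not the study's settings.

    ```python
    import random

    def pso(objective, bounds, n_particles=30, iters=200,
            w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal particle swarm optimizer: derivative-free and independent
        of any initial model, the two advantages the abstract emphasizes."""
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                 # per-particle best positions
        pbest_val = [objective(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                        bounds[d][0]), bounds[d][1])
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val
    ```

    For the aquifer problem, `objective` would return the least-squares misfit between the measured water levels and the semi-analytical drawdown model evaluated at candidate transmissivity and storage values (hypothetical here; the study's model function is not reproduced).
    
    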

  15. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of any single such algorithm is not always effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms unite different algorithms, or identical algorithms with different values of their free parameters; thus the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, its software implementation, and the study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
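    The harmony search half of such a hybrid can be sketched as follows; the parameter values (hmcr, par, bw) are typical textbook defaults, not the paper's settings, and the hybridization with PSO is not shown.

    ```python
    import random

    def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                       bw=0.05, iters=2000, seed=1):
        """Minimal harmony search: improvise new solutions from a memory of
        good ones (rate hmcr), with pitch adjustment (rate par, width bw)
        and occasional fully random notes."""
        rng = random.Random(seed)
        memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        values = [objective(h) for h in memory]
        for _ in range(iters):
            new = []
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < hmcr:
                    x = memory[rng.randrange(hms)][d]   # take a note from memory
                    if rng.random() < par:              # pitch adjustment
                        x += rng.uniform(-bw, bw) * (hi - lo)
                else:                                   # random consideration
                    x = rng.uniform(lo, hi)
                new.append(min(max(x, lo), hi))
            val = objective(new)
            worst = max(range(hms), key=lambda i: values[i])
            if val < values[worst]:                     # replace the worst harmony
                memory[worst], values[worst] = new, val
        best = min(range(hms), key=lambda i: values[i])
        return memory[best], values[best]
    ```

    A hybrid such as PSO-HS would typically alternate improvisation steps like these with PSO velocity updates, so the two mechanisms compensate for each other's weaknesses.
    
    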

  16. A reliable method for the counting and control of single ions for single-dopant controlled devices

    International Nuclear Information System (INIS)

    Shinada, T; Kurosawa, T; Nakayama, H; Zhu, Y; Hori, M; Ohdomari, I

    2008-01-01

    By 2016, transistor device size will be just 10 nm. However, a transistor that is doped at a typical concentration of 10^18 atoms cm^-3 has only one dopant atom in the active channel region. Therefore, it can be predicted that conventional doping methods such as ion implantation and thermal diffusion will not be applicable ten years from now. We have been developing a single-ion implantation (SII) method that enables us to implant dopant ions one-by-one into semiconductors until the desired number is reached. Here we report a simple but reliable method to control the number of single dopant atoms by detecting the change in drain current induced by single-ion implantation. The drain current decreases in a stepwise fashion as a result of the clusters of displaced Si atoms created by each single-ion incidence. This result indicates that the single-ion detection method we have developed is capable of detecting single-ion incidence with 100% efficiency. Our method could potentially pave the way to future single-atom devices, including a solid-state quantum computer
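    Counting implanted ions from the stepwise drain-current drops amounts to simple step detection on the measured trace; the threshold and synthetic trace below are illustrative, not the authors' data.

    ```python
    def count_ions(drain_current, threshold):
        """Count single-ion incidences as discrete downward steps in the
        drain current (the stepwise decrease described in the abstract)."""
        count = 0
        for before, after in zip(drain_current, drain_current[1:]):
            if before - after > threshold:   # a drop larger than noise = one ion
                count += 1
        return count

    # Synthetic trace: each implanted ion drops the current by ~2 units
    trace = [100, 100, 98, 98, 98, 96, 96, 94, 94]
    n = count_ions(trace, threshold=1.0)  # -> 3
    ```

    In practice the threshold would be set from the noise floor of the device, so that every genuine single-ion step is counted and implantation stops at the desired dopant number.
    
    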

  17. Single source dual energy CT: What is the optimal monochromatic energy level for the analysis of the lung parenchyma?

    Energy Technology Data Exchange (ETDEWEB)

    Ohana, M., E-mail: mickael.ohana@gmail.com [iCube Laboratory, Université de Strasbourg/CNRS, UMR 7357, 67400 Illkirch (France); Service de Radiologie B, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg, 1 place de l’hôpital, 67000 Strasbourg (France); Labani, A., E-mail: aissam.labani@chru-strasbourg.fr [Service de Radiologie B, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg, 1 place de l’hôpital, 67000 Strasbourg (France); Severac, F., E-mail: francois.severac@chru-strasbourg.fr [Département de Biostatistiques et d’Informatique Médicale, Hôpital Civil – Hôpitaux Universitaires de Strasbourg,1 place de l’hôpital, 67000 Strasbourg (France); Jeung, M.Y., E-mail: Mi-Young.Jeung@chru-strasbourg.fr [Service de Radiologie B, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg, 1 place de l’hôpital, 67000 Strasbourg (France); Gaertner, S., E-mail: Sebastien.Gaertner@chru-strasbourg.fr [Service de Médecine Vasculaire, Nouvel Hôpital Civil – Hôpitaux Universitaires de Strasbourg,1 place de l’hôpital, 67000 Strasbourg (France); and others

    2017-03-15

    Highlights: • Lung parenchyma aspect varies with the monochromatic energy level in spectral CT. • Optimal diagnostic and image quality is obtained at 50–55 keV. • Mediastinum and parenchyma could be read at the same monochromatic energy level. - Abstract: Objective: To determine the optimal monochromatic energy level for lung parenchyma analysis in spectral CT. Methods: All 50 examinations (58% men; mean age 64.8 ± 16 years) from an IRB-approved prospective study on single-source dual-energy chest CT were retrospectively included and analyzed. Monochromatic images in the lung window, reconstructed every 5 keV from 40 to 140 keV, were independently assessed by two chest radiologists. Based on the overall image quality and the depiction/conspicuity of parenchymal lesions, each reader designated, for every patient, the keV level providing the best diagnostic and image quality. Results: 72% of the examinations exhibited parenchymal lesions. Reader 1 picked the 55 keV monochromatic reconstruction in 52% of cases, 50 keV in 30% and 60 keV in 18%. Reader 2 chose 50 keV in 52% of cases, 55 keV in 40%, 60 keV in 6% and 40 keV in 2%. The 50 and 55 keV levels were chosen by at least one reader in 64% and 76% of all patients, respectively. Merging 50 and 55 keV into one category results in an optimal setting selected by reader 1 in 82% of patients and by reader 2 in 92%, with a 74% concomitant agreement. Conclusion: The best image quality for lung parenchyma in spectral CT is obtained with the 50–55 keV monochromatic reconstructions.

  18. Mass Spectrometric Method for Analyzing Metabolites in Yeast with Single Cell Sensitivity

    NARCIS (Netherlands)

    Amantonico, Andrea; Oh, Joo Yeon; Sobek, Jens; Heinemann, Matthias; Zenobi, Renato

    2008-01-01

    Getting a look-in: An optimized MALDI-MS procedure has been developed to detect endogenous primary metabolites directly in the cell extract. A detection limit corresponding to metabolites from less than a single cell has been attained, opening the door to single-cell metabolomics by mass spectrometry.

  19. Combined optimal-pathlengths method for near-infrared spectroscopy analysis

    International Nuclear Information System (INIS)

    Liu Rong; Xu Kexin; Lu Yanhui; Sun Huili

    2004-01-01

    Near-infrared (NIR) spectroscopy is a rapid, reagent-less and nondestructive analytical technique, which is being increasingly employed for quantitative applications in chemistry, pharmaceutics and the food industry, and for the optical analysis of biological tissue. The performance of NIR technology greatly depends on the ability to control and acquire data from the instrument and to calibrate and analyse the data. Optical pathlength is a key parameter of an NIR instrument, and has been thoroughly discussed for univariate quantitative analysis in the presence of photometric errors. Although multiple wavelengths can provide more chemical information, it is difficult to determine a single pathlength that is suitable for every wavelength region. A theoretical investigation of a selection procedure for multiple pathlengths, called the combined optimal-pathlengths (COP) method, is presented in this paper, and an extensive comparison with the single-pathlength method is performed on simulated and experimental NIR spectral data sets. The results show that the COP method can greatly improve prediction accuracy in quantitative NIR spectroscopic analysis
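    In the univariate case, the relative concentration error due to photometric noise is minimized near an absorbance of 1/ln 10 ≈ 0.434. A per-wavelength pathlength selection in that spirit can be sketched as below; the COP method itself is more elaborate, and all numeric values here are illustrative.

    ```python
    import math

    A_OPT = 1.0 / math.log(10.0)   # ~0.434, the absorbance minimizing relative
                                   # photometric error in the univariate case

    def best_pathlength(absorptivity_times_conc, available_lengths):
        """For each wavelength region, pick the pathlength l whose Beer-Lambert
        absorbance A = (eps * c) * l lies closest to the optimum A_OPT."""
        return [min(available_lengths, key=lambda l: abs(ec * l - A_OPT))
                for ec in absorptivity_times_conc]

    # eps*c for three wavelength regions; candidate cell pathlengths in cm
    lengths = best_pathlength([0.2, 0.9, 4.0], [0.1, 0.5, 1.0, 2.0, 5.0])
    ```

    Strongly absorbing regions thus get short cells and weakly absorbing regions long ones, which is the intuition behind combining several pathlengths instead of using one for the whole spectrum.
    
    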

  20. Practical implementation of optimal management strategies in conservation programmes: a mate selection method

    Directory of Open Access Journals (Sweden)

    Fernández, J.

    2001-12-01

    Full Text Available The maintenance of genetic diversity is, from a genetic point of view, a key objective of conservation programmes. The selection of individuals contributing offspring and the choice of the mating scheme are the steps through which managers can control genetic diversity, especially in 'ex situ' programmes. Previous studies have shown that the optimal management strategy is to look for the parents' contributions that yield minimum group coancestry (the overall probability of identity by descent in the population) and then to arrange mating couples following minimum pairwise coancestry. However, physiological constraints make it necessary to account for mating restrictions when deciding the contributions; therefore, these should be implemented in a single step along with the mating plan. In the present paper, a single-step method is proposed to optimise the management of a conservation programme when restrictions on the mating scheme exist. The performance of the method is tested by computer simulation. The strategy turns out to be as efficient as the two-step method, regarding both the genetic diversity preserved and the fitness of the population.
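    For a contribution vector c and pairwise coancestry matrix F, group coancestry is the quadratic form cᵀFc. The sketch below evaluates it and performs a greedy minimum-coancestry pairing; it is an illustrative simplification, not the paper's single-step optimizer.

    ```python
    def group_coancestry(contrib, F):
        """Overall probability of identity by descent, c^T F c, for a
        contribution vector contrib (summing to 1) and coancestry matrix F."""
        n = len(contrib)
        return sum(contrib[i] * contrib[j] * F[i][j]
                   for i in range(n) for j in range(n))

    def min_pairwise_matings(parents, F):
        """Greedy pairing: repeatedly mate the two unpaired parents with
        minimum pairwise coancestry (illustrative two-step heuristic)."""
        free, pairs = set(parents), []
        while len(free) > 1:
            i, j = min(((a, b) for a in sorted(free) for b in sorted(free) if a < b),
                       key=lambda p: F[p[0]][p[1]])
            pairs.append((i, j))
            free -= {i, j}
        return pairs
    ```

    A single-step method as proposed in the paper would instead search over contributions and matings jointly, accepting only mating plans that respect the physiological restrictions.
    
    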