WorldWideScience

Sample records for optimal basis set

  1. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  2. Energy optimized Gaussian basis sets for the atoms Tl-Rn

    International Nuclear Information System (INIS)

    Faegri, K. Jr.

    1987-01-01

    Energy optimized Gaussian basis sets have been derived for the atoms Tl-Rn. Two sets are presented - a (20,16,10,6) set and a (22,17,13,8) set. The smallest sets yield atomic energies 107 to 123 mH above the numerical Hartree-Fock values, while the larger sets give energies 11 mH above the numerical results. Energy trends from the smaller sets indicate that reduced shielding by p-electrons may place a greater demand on the flexibility of the d- and f-orbital description for the lighter elements of the series.

  3. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system-adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, those containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.

  4. Quality of Gaussian basis sets: direct optimization of orbital exponents by the method of conjugate gradients

    International Nuclear Information System (INIS)

    Kari, R.E.; Mezey, P.G.; Csizmadia, I.G.

    1975-01-01

    Expressions are given for calculating the energy gradient vector in the exponent space of Gaussian basis sets, and a technique to optimize orbital exponents using the method of conjugate gradients is described. The method is tested on the (9s5p) Gaussian basis space and optimum exponents are determined for the carbon atom. The analysis of the results shows that the calculated one-electron properties converge more slowly to their optimum values than the total energy converges to its optimum value. In addition, basis sets approximating the optimum total energy very well can still be markedly improved for the prediction of one-electron properties. For smaller basis sets, this improvement does not warrant the necessary expense.
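
    The exponent-optimization procedure described above can be illustrated with a generic Fletcher-Reeves conjugate-gradient minimizer. The quadratic "energy" in log-exponent space below is a hypothetical stand-in for the atomic SCF energy, not the authors' actual gradient expressions:

    ```python
    def conjugate_gradient(f, grad, x0, tol=1e-12, max_iter=200):
        """Minimize f by Fletcher-Reeves nonlinear conjugate gradients
        with a simple backtracking (Armijo) line search."""
        x = list(x0)
        g = grad(x)
        d = [-gi for gi in g]                       # steepest-descent start
        for _ in range(max_iter):
            fx = f(x)
            slope = sum(gi * di for gi, di in zip(g, d))
            step = 1.0
            # Backtrack until the Armijo sufficient-decrease condition holds
            while f([xi + step * di for xi, di in zip(x, d)]) > fx + 1e-4 * step * slope:
                step *= 0.5
                if step < 1e-14:
                    break
            x = [xi + step * di for xi, di in zip(x, d)]
            g_new = grad(x)
            gg_new = sum(gi * gi for gi in g_new)
            if gg_new < tol:
                break
            beta = gg_new / sum(gi * gi for gi in g)  # Fletcher-Reeves update
            d = [-gn + beta * di for gn, di in zip(g_new, d)]
            g = g_new
        return x

    # Toy "energy" in log-exponent space with minimum at the target exponents
    # (a made-up surrogate, chosen only so the example is self-contained).
    target = [-2.3, 0.0, 1.6]
    energy = lambda z: sum((zi - ti) ** 2 for zi, ti in zip(z, target))
    gradient = lambda z: [2.0 * (zi - ti) for zi, ti in zip(z, target)]
    ```

    Working in the logarithms of the exponents is a common practical choice, since Gaussian exponents span several orders of magnitude and must stay positive.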

  5. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
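
    The two-point extrapolation formula and the MP2-based additivity scheme described above reduce to simple arithmetic once the component energies are in hand. A minimal sketch (the numerical energies in the test are synthetic placeholders, not values from the paper):

    ```python
    def cbs_two_point(e_lo, e_hi, l_lo, l_hi, alpha=3.0):
        """Two-point extrapolation assuming E(L) = E_CBS + B / L**alpha,
        where L is the basis-set cardinal number (DZ = 2, TZ = 3, ...).
        Solves for E_CBS from two finite-basis energies."""
        b = (e_lo - e_hi) / (l_lo ** -alpha - l_hi ** -alpha)
        return e_hi - b * l_hi ** -alpha

    def cbs_additivity(e_ccsd_small, e_mp2_small, e_mp2_cbs):
        """MP2-based additivity: small-basis CCSD plus an MP2 basis-set
        increment toward the complete basis-set limit."""
        return e_ccsd_small + (e_mp2_cbs - e_mp2_small)
    ```

    In the global variant the exponent alpha is fixed; in the system-dependent variant it would first be fitted to MP2 energies for the same system.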

  7. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    Science.gov (United States)

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees for SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.

  8. Density Functional Theory and the Basis Set Truncation Problem with Correlation Consistent Basis Sets: Elephant in the Room or Mouse in the Closet?

    Science.gov (United States)

    Feller, David; Dixon, David A

    2018-03-08

    Two recent papers in this journal called into question the suitability of the correlation consistent basis sets for density functional theory (DFT) calculations, because the sets were designed for correlated methods such as configuration interaction, perturbation theory, and coupled cluster theory. These papers focused on the ability of the correlation consistent and other basis sets to reproduce total energies, atomization energies, and dipole moments obtained from "quasi-exact" multiwavelet results. Undesirably large errors were observed for the correlation consistent basis sets. One of the papers argued that basis sets specifically optimized for DFT methods were "essential" for obtaining high accuracy. In this work we re-examined the performance of the correlation consistent basis sets by resolving problems with the previous calculations and by making more appropriate basis set choices for the alkali and alkaline-earth metals and second-row elements. When this is done, the statistical errors with respect to the benchmark values and with respect to DFT optimized basis sets are greatly reduced, especially in light of the relatively large intrinsic error of the underlying DFT method. When judged with respect to high-quality Feller-Peterson-Dixon coupled cluster theory atomization energies, the PBE0 DFT method used in the previous studies exhibits a mean absolute deviation more than a factor of 50 larger than the quintuple zeta basis set truncation error.

  9. Basis set construction for molecular electronic structure theory: natural orbital and Gauss-Slater basis for smooth pseudopotentials.

    Science.gov (United States)

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2011-02-14

    A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.

  10. Accelerating GW calculations with optimal polarizability basis

    Energy Technology Data Exchange (ETDEWEB)

    Umari, P.; Stenuit, G. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); Qian, X.; Marzari, N. [Department of Materials Science and Engineering, MIT, Cambridge, MA (United States); Giacomazzi, L.; Baroni, S. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); SISSA - Scuola Internazionale Superiore di Studi Avanzati, Trieste (Italy)

    2011-03-15

    We present a method for accelerating GW quasi-particle (QP) calculations. This is achieved through the introduction of optimal basis sets for representing polarizability matrices. First, the real-space products of Wannier-like orbitals are constructed, and then optimal basis sets are obtained through singular value decomposition. Our method is validated by calculating the vertical ionization energies of the benzene molecule and the band structure of crystalline silicon. Its potentialities are illustrated by calculating the QP spectrum of a model structure of vitreous silica. Finally, we apply our method to study the electronic structure properties of a model of quasi-stoichiometric amorphous silicon nitride and of its point defects. (Copyright © 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
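
    The SVD step that extracts the optimal polarizability basis can be illustrated schematically. The sketch below uses plain power iteration on A^T A to pull out only the dominant right-singular direction of a small matrix; a production code would apply a full SVD (e.g., via LAPACK) to the matrix of Wannier-orbital products and keep all directions above a singular-value threshold:

    ```python
    def matvec(m, x):
        """Dense matrix-vector product for a list-of-rows matrix."""
        return [sum(mij * xj for mij, xj in zip(row, x)) for row in m]

    def leading_singular_pair(a, iters=1000):
        """Power iteration on A^T A: returns the largest singular value of A
        and the corresponding right-singular vector (a toy stand-in for SVD)."""
        at = [list(col) for col in zip(*a)]       # transpose of A
        v = [1.0] * len(a[0])
        for _ in range(iters):
            w = matvec(at, matvec(a, v))          # apply A^T A
            norm = sum(wi * wi for wi in w) ** 0.5
            v = [wi / norm for wi in w]
        av = matvec(a, v)
        sigma = sum(x * x for x in av) ** 0.5     # sigma = |A v|
        return sigma, v
    ```

    Truncating the basis at a chosen singular-value cutoff is what makes the polarizability representation "optimal" in the least-squares sense.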

  11. Electronic structure of crystalline uranium nitrides UN, U2N3 and UN2: LCAO calculations with the basis set optimization

    International Nuclear Information System (INIS)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V

    2008-01-01

    The results of LCAO DFT calculations of lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U2N3 and UN2 are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the uranium atom relativistic effective small-core potential of the Stuttgart-Cologne group (60 electrons in the core). The calculations include the U atom basis set optimization. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data; the change of the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in the LCAO calculations of both the UN and U2N3 crystals; the UN2 crystal is semiconducting.

  12. A comparison of the behavior of functional/basis set combinations for hydrogen-bonding in the water dimer with emphasis on basis set superposition error.

    Science.gov (United States)

    Plumley, Joshua A; Dannenberg, J J

    2011-06-01

    We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D, and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-density functional theory (non-DFT) molecular orbital (MO) calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise (CP) corrected PES. The calculated interaction energies (ΔEs) with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, because of error compensation, the smaller basis sets gave the best results (in comparison to experimental and high-level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. As many applications involve complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: (1) D95(d,p) with B3LYP, B97D, M06, or MPWB1K; (2) 6-311G(d,p) with B3LYP; (3) D95++(d,p) with B3LYP, B97D, or MPWB1K; (4) 6-311++G(d,p) with B3LYP or B97D; and (5) aug-cc-pVDZ with M05-2X, M06-2X, or X3LYP. Copyright © 2011 Wiley Periodicals, Inc.
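
    The counterpoise (CP) correction referred to above is the standard Boys-Bernardi recipe: evaluate each monomer in the full dimer basis, with ghost functions on the partner, so that the basis set superposition error largely cancels. A minimal sketch (the energies in the test are made-up placeholders):

    ```python
    HARTREE_TO_KCAL = 627.509474  # conversion factor, 1 hartree in kcal/mol

    def counterpoise_interaction(e_dimer, e_a_ghost, e_b_ghost):
        """Boys-Bernardi counterpoise-corrected interaction energy.
        e_a_ghost and e_b_ghost are the monomer energies computed in the
        FULL dimer basis (ghost functions placed on the partner fragment),
        which removes most of the basis set superposition error."""
        return e_dimer - e_a_ghost - e_b_ghost
    ```

    Optimizing on the CP-corrected PES means repeating this bookkeeping at every geometry step, which is why the smaller basis sets behave so differently on the normal and CP-corrected surfaces.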

  13. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    International Nuclear Information System (INIS)

    Lazariev, A; Graveron-Demilly, D; Allouche, A-R; Aubert-Frécon, M; Fauvelle, F; Piotto, M; Elbayed, K; Namer, I-J; Van Ormondt, D

    2011-01-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
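
    The matching step described above can be sketched as a lag search that maximizes the normalized cross-correlation (NCC) between a simulated metabolite template and the measured signal. This is a schematic illustration of the idea, not the QM-QUEST implementation:

    ```python
    def best_shift(signal, template, max_lag):
        """Return the lag (in points) that maximizes the normalized
        cross-correlation between a simulated metabolite template and
        the measured signal; the lag models a chemical shift mismatch."""
        def ncc(lag):
            pairs = [(signal[i + lag], template[i])
                     for i in range(len(template)) if 0 <= i + lag < len(signal)]
            if not pairs:
                return 0.0
            xs, ys = zip(*pairs)
            mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
            num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            den = (sum((x - mx) ** 2 for x in xs)
                   * sum((y - my) ** 2 for y in ys)) ** 0.5
            return num / den if den else 0.0
        return max(range(-max_lag, max_lag + 1), key=ncc)
    ```

    In practice each basis-set signal would be shifted by the recovered lag (converted back to ppm) before the quantitation fit is run.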

  15. Development of new auxiliary basis functions of the Karlsruhe segmented contracted basis sets including diffuse basis functions (def2-SVPD, def2-TZVPPD, and def2-QZVPPD) for RI-MP2 and RI-CC calculations.

    Science.gov (United States)

    Hellweg, Arnim; Rappoport, Dmitrij

    2015-01-14

    We report optimized auxiliary basis sets for use with the Karlsruhe segmented contracted basis sets including moderately diffuse basis functions (Rappoport and Furche, J. Chem. Phys., 2010, 133, 134105) in resolution-of-the-identity (RI) post-self-consistent field (post-SCF) computations for the elements H-Rn (except lanthanides). The errors of the RI approximation using optimized auxiliary basis sets are analyzed on a comprehensive test set of molecules containing the most common oxidation states of each element and do not exceed those of the corresponding unaugmented basis sets. During these studies an unsatisfactory performance of the def2-SVP and def2-QZVPP auxiliary basis sets for barium was found, and improved sets are provided. We establish the versatility of the def2-SVPD, def2-TZVPPD, and def2-QZVPPD basis sets for RI-MP2 and RI-CC (coupled-cluster) energy and property calculations. The influence of diffuse basis functions on correlation energy, basis set superposition error, atomic electron affinity, dipole moments, and computational timings is evaluated at different levels of theory using benchmark sets and showcase examples.

  16. The aug-cc-pVnZ-F12 basis set family: Correlation consistent basis sets for explicitly correlated benchmark calculations on anions and noncovalent complexes.

    Science.gov (United States)

    Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L

    2017-10-07

    We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.

  18. Some considerations about Gaussian basis sets for electric property calculations

    Science.gov (United States)

    Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.

    Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) for the atoms from H to Ar were reported. In this work, with the objective of having a better description of polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions that were chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, the electric dipole moment and static polarizability were evaluated for a sample of molecules. Comparisons were made with experimental data and with results obtained using a similar-sized basis set whose diffuse functions were optimized for the ground-state energy of the anion.

  19. Relativistic double-zeta, triple-zeta, and quadruple-zeta basis sets for the lanthanides La–Lu

    NARCIS (Netherlands)

    Dyall, K.G.; Gomes, A.S.P.; Visscher, L.

    2010-01-01

    Relativistic basis sets of double-zeta, triple-zeta, and quadruple-zeta quality have been optimized for the lanthanide elements La-Lu. The basis sets include SCF exponents for the occupied spinors and for the 6p shell, exponents of correlating functions for the valence shells (4f, 5d and 6s) and the

  20. Basis set convergence on static electric dipole polarizability calculations of alkali-metal clusters

    International Nuclear Information System (INIS)

    Souza, Fabio A. L. de; Jorge, Francisco E.

    2013-01-01

    A hierarchical sequence of all-electron segmented contracted basis sets of double, triple and quadruple zeta valence qualities plus polarization functions augmented with diffuse functions for the atoms from H to Ar was constructed. A systematic study of basis sets required to obtain reliable and accurate values of static dipole polarizabilities of lithium and sodium clusters (n = 2, 4, 6 and 8) at their optimized equilibrium geometries is reported. Three methods are examined: Hartree-Fock (HF), second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT). By direct calculations or by fitting the directly calculated values through one extrapolation scheme, estimates of the HF, MP2 and DFT complete basis set limits were obtained. Comparison with experimental and theoretical data reported previously in the literature is made. (author)

  2. Molecular basis sets - a general similarity-based approach for representing chemical spaces.

    Science.gov (United States)

    Raghavendra, Akshay S; Maggiora, Gerald M

    2007-01-01

    A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal, but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties, including the average squared error and average norm of molecular vectors represented in the space; the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size, and several comparative distance measures such as Spearman rank correlation and Kruskal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent, nonetheless, behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed provided that the appropriate inner products can be directly
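
    The symmetric (Löwdin) orthogonalization step has a closed form in the two-molecule case, where the similarity matrix is S = [[1, s], [s, 1]]. A small illustrative sketch (real chemical spaces involve much larger S matrices, handled by a general eigendecomposition):

    ```python
    def lowdin_two(s):
        """Symmetric (Loewdin) orthogonalization for two normalized molecular
        vectors with mutual similarity s (|s| < 1).  For S = [[1, s], [s, 1]]
        the eigenvectors are (1, 1)/sqrt(2) and (1, -1)/sqrt(2) with
        eigenvalues 1 + s and 1 - s, giving a closed form for X = S**(-1/2).
        The columns of X define the orthonormal combinations that stay as
        close as possible to the original (nonorthogonal) vectors."""
        p = 1.0 / (1.0 + s) ** 0.5   # lambda**(-1/2) for eigenvalue 1 + s
        q = 1.0 / (1.0 - s) ** 0.5   # lambda**(-1/2) for eigenvalue 1 - s
        d, o = (p + q) / 2.0, (p - q) / 2.0
        return [[d, o], [o, d]]
    ```

    The defining property is X^T S X = I: the transformed vectors are exactly orthonormal in the similarity metric.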

  3. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    International Nuclear Information System (INIS)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin

    2016-01-01

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit is reproduced even more closely. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  4. Localized orbitals vs. pseudopotential-plane waves basis sets: performances and accuracy for molecular magnetic systems

    CERN Document Server

    Massobrio, C

    2003-01-01

    Density functional theory, in combination with a) a careful choice of the exchange-correlation part of the total energy and b) localized basis sets for the electronic orbital, has become the method of choice for calculating the exchange-couplings in magnetic molecular complexes. Orbital expansion on plane waves can be seen as an alternative basis set especially suited to allow optimization of newly synthesized materials of unknown geometries. However, little is known on the predictive power of this scheme to yield quantitative values for exchange coupling constants J as small as a few hundredths of eV (50-300 cm⁻¹). We have used density functional theory and a plane waves basis set to calculate the exchange couplings J of three homodinuclear Cu-based molecular complexes with experimental values ranging from +40 cm⁻¹ to -300 cm⁻¹. The plane waves basis set proves as accurate as the localized basis set, thereby suggesting that this approach can be reliably employed to predict and r...

  5. Localized orbitals vs. pseudopotential-plane waves basis sets: performances and accuracy for molecular magnetic systems

    International Nuclear Information System (INIS)

    Massobrio, C.; Ruiz, E.

    2003-01-01

    Density functional theory, in combination with a) a careful choice of the exchange-correlation part of the total energy and b) localized basis sets for the electronic orbital, has become the method of choice for calculating the exchange-couplings in magnetic molecular complexes. Orbital expansion on plane waves can be seen as an alternative basis set especially suited to allow optimization of newly synthesized materials of unknown geometries. However, little is known on the predictive power of this scheme to yield quantitative values for exchange coupling constants J as small as a few hundredths of eV (50-300 cm⁻¹). We have used density functional theory and a plane waves basis set to calculate the exchange couplings J of three homodinuclear Cu-based molecular complexes with experimental values ranging from +40 cm⁻¹ to -300 cm⁻¹. The plane waves basis set proves as accurate as the localized basis set, thereby suggesting that this approach can be reliably employed to predict and rationalize the magnetic properties of molecular-based materials. (author)

  6. On the effects of basis set truncation and electron correlation in conformers of 2-hydroxy-acetamide

    Science.gov (United States)

    Szarecka, A.; Day, G.; Grout, P. J.; Wilson, S.

    Ab initio quantum chemical calculations have been used to study the differences in energy between two gas phase conformers of the 2-hydroxy-acetamide molecule that possess intramolecular hydrogen bonding. In particular, rotation around the central C-C bond has been considered as a factor determining the structure of the hydrogen bond and stabilization of the conformer. Energy calculations include full geometry optimization using both the restricted matrix Hartree-Fock model and second-order many-body perturbation theory with a number of commonly used basis sets. The basis sets employed ranged from the minimal STO-3G set to 'split-valence' sets up to 6-31G. The effects of polarization functions were also studied. The results display a strong basis set dependence.

  7. Dynamical basis set

    International Nuclear Information System (INIS)

    Blanco, M.; Heller, E.J.

    1985-01-01

    A new Cartesian basis set is defined that is suitable for the representation of molecular vibration-rotation bound states. The Cartesian basis functions are superpositions of semiclassical states generated through the use of classical trajectories that conform to the intrinsic dynamics of the molecule. Although semiclassical input is employed, the method becomes ab initio through the standard matrix diagonalization variational method. Special attention is given to classical-quantum correspondences for angular momentum. In particular, it is shown that the use of semiclassical information preferentially leads to angular momentum eigenstates with magnetic quantum number |M| equal to the total angular momentum J. The present method offers a reliable technique for representing highly excited vibrational-rotational states where perturbation techniques are no longer applicable.

  8. First-principle modelling of forsterite surface properties: Accuracy of methods and basis sets.

    Science.gov (United States)

    Demichelis, Raffaella; Bruno, Marco; Massaro, Francesco R; Prencipe, Mauro; De La Pierre, Marco; Nestola, Fabrizio

    2015-07-15

    The seven main crystal surfaces of forsterite (Mg2SiO4) were modeled using various Gaussian-type basis sets, and several formulations for the exchange-correlation functional within the density functional theory (DFT). The recently developed pob-TZVP basis set provides the best results for all properties that are strongly dependent on the accuracy of the wavefunction. Convergence on the structure and on the basis set superposition error-corrected surface energy can be reached also with poorer basis sets. The effect of adopting different DFT functionals was assessed. All functionals give the same stability order for the various surfaces. Surfaces do not exhibit any major structural differences when optimized with different functionals, except for higher energy orientations where major rearrangements occur around the Mg sites at the surface or subsurface. When dispersions are not accounted for, all functionals provide similar surface energies. The inclusion of empirical dispersions raises the energy of all surfaces by a nearly systematic value proportional to the scaling factor s of the dispersion formulation. An estimation for the surface energy is provided through adopting C6 coefficients that are more suitable than the standard ones to describe O-O interactions in minerals. A 2 × 2 supercell of the most stable surface (010) was optimized. No surface reconstruction was observed. The resulting structure and surface energy show no difference with respect to those obtained when using the primitive cell. This result validates the (010) surface model here adopted, that will serve as a reference for future studies on adsorption and reactivity of water and carbon dioxide at this interface. © 2015 Wiley Periodicals, Inc.

  9. Optimal Piecewise Linear Basis Functions in Two Dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Brooks III, E D; Szoke, A

    2009-01-26

    We use a variational approach to optimize the center point coefficients associated with the piecewise linear basis functions introduced by Stone and Adams [1], for polygonal zones in two Cartesian dimensions. Our strategy provides optimal center point coefficients, as a function of the location of the center point, by minimizing the error induced when the basis function interpolation is used for the solution of the time independent diffusion equation within the polygonal zone. By using optimal center point coefficients, one expects to minimize the errors that occur when these basis functions are used to discretize diffusion equations, or transport equations in optically thick zones (where they approach the solution of the diffusion equation). Our optimal center point coefficients satisfy the requirements placed upon the basis functions for any location of the center point. We also find that the location of the center point can be optimized, but this requires numerical calculations. Curiously, the optimum center point location is independent of the values of the dependent variable on the corners only for quadrilaterals.

  10. Systematic determination of extended atomic orbital basis sets and application to molecular SCF and MCSCF calculations

    Energy Technology Data Exchange (ETDEWEB)

    Feller, D.F.

    1979-01-01

    The behavior of the two exponential parameters in an even-tempered Gaussian basis set is investigated as the set optimally approaches an integral transform representation of the radial portion of atomic and molecular orbitals. This approach permits a highly accurate assessment of the Hartree-Fock limit for atoms and molecules.
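
    An even-tempered set fixes its exponents as a geometric progression, zeta_k = alpha * beta^k, so only the two parameters alpha and beta need to be optimized regardless of the set's size. A short sketch with illustrative (not energy-optimized) parameters:

```python
import numpy as np

def even_tempered_exponents(alpha, beta, n):
    """Geometric progression zeta_k = alpha * beta**k, k = 0..n-1 --
    the two-parameter form that defines an even-tempered basis set."""
    return alpha * beta ** np.arange(n)

# Illustrative (not optimized) parameters.
zetas = even_tempered_exponents(0.05, 3.0, 6)

# Overlap of normalized s-type Gaussians: S_ij = (2*sqrt(zi*zj)/(zi+zj))**1.5
zi, zj = np.meshgrid(zetas, zetas)
S = (2.0 * np.sqrt(zi * zj) / (zi + zj)) ** 1.5

# Every adjacent pair shares the same overlap, because only the fixed
# ratio beta enters the formula.
print(np.isclose(S[0, 1], S[3, 4]))  # True
```

    This constant nearest-neighbour overlap is the practical appeal of the even-tempered form: the linear dependence of the set stays under control as more functions are added.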

  11. MRD-CI potential surfaces using balanced basis sets. IV. The H2 molecule and the H3 surface

    International Nuclear Information System (INIS)

    Wright, J.S.; Kruus, E.

    1986-01-01

    The utility of midbond functions in molecular calculations was tested in two cases where the correct results are known: the H2 potential curve and the collinear H3 potential surface. For H2, a variety of basis sets both with and without bond functions was compared to the exact nonrelativistic potential curve of Kolos and Wolniewicz [J. Chem. Phys. 43, 2429 (1965)]. It was found that optimally balanced basis sets at two levels of quality were the double zeta single polarization plus sp bond function basis (BF1) and the triple zeta double polarization plus two sets of sp bond function basis (BF2). These gave bond dissociation energies De = 4.7341 and 4.7368 eV, respectively (expt. 4.7477 eV). Four basis sets were tested for basis set superposition errors, which were found to be small relative to basis set incompleteness and therefore did not affect any conclusions regarding basis set balance. Basis sets BF1 and BF2 were used to construct potential surfaces for collinear H3, along with the corresponding basis sets DZ*P and TZ*PP which contain no bond functions. Barrier heights of 12.52, 10.37, 10.06, and 9.96 kcal/mol were obtained for basis sets DZ*P, TZ*PP, BF1, and BF2, respectively, compared to an estimated limiting value of 9.60 kcal/mol. Difference maps, force constants, and relative rms deviations show that the bond functions improve the surface shape as well as the barrier height.

  12. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields

    Science.gov (United States)

    Zhu, Wuming; Trickey, S. B.

    2017-12-01

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.

  13. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields.

    Science.gov (United States)

    Zhu, Wuming; Trickey, S B

    2017-12-28

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B

  14. The Dirac Equation in the algebraic approximation. VII. A comparison of molecular finite difference and finite basis set calculations using distributed Gaussian basis sets

    NARCIS (Netherlands)

    Quiney, H. M.; Glushkov, V. N.; Wilson, S.; Sabin; Brandas, E

    2001-01-01

    A comparison is made of the accuracy achieved in finite difference and finite basis set approximations to the Dirac equation for the ground state of the hydrogen molecular ion. The finite basis set calculations are carried out using a distributed basis set of Gaussian functions the exponents and

  15. Correlation consistent basis sets for actinides. II. The atoms Ac and Np-Lr.

    Science.gov (United States)

    Feng, Rulin; Peterson, Kirk A

    2017-08-28

    New correlation consistent basis sets optimized using the all-electron third-order Douglas-Kroll-Hess (DKH3) scalar relativistic Hamiltonian are reported for the actinide elements Ac and Np through Lr. These complete the series of sets reported previously for Th-U [K. A. Peterson, J. Chem. Phys. 142, 074105 (2015); M. Vasiliu et al., J. Phys. Chem. A 119, 11422 (2015)]. The new sets range in size from double- to quadruple-zeta and encompass both those optimized for valence (6s6p5f7s6d) and outer-core electron correlations (valence + 5s5p5d). The final sets have been contracted for both the DKH3 and eXact 2-component (X2C) Hamiltonians, yielding cc-pVnZ-DK3/cc-pVnZ-X2C sets for valence correlation and cc-pwCVnZ-DK3/cc-pwCVnZ-X2C sets for outer-core correlation (n = D, T, Q in each case). In order to test the effectiveness of the new basis sets, both atomic and molecular benchmark calculations have been carried out. In the first case, the first three atomic ionization potentials (IPs) of all the actinide elements Ac-Lr have been calculated using the Feller-Peterson-Dixon (FPD) composite approach, primarily with the multireference configuration interaction (MRCI) method. Excellent convergence towards the respective complete basis set (CBS) limits is achieved with the new sets, leading to good agreement with experiment, where these exist, after accurately accounting for spin-orbit effects using the 4-component Dirac-Hartree-Fock method. For a molecular test, the IP and atomization energy (AE) of PuO2 have been calculated also using the FPD method but using a coupled cluster approach with spin-orbit coupling accounted for using the 4-component MRCI. The present calculations yield an IP0 for PuO2 of 159.8 kcal/mol, which is in excellent agreement with the experimental electron transfer bracketing value of 162 ± 3 kcal/mol. Likewise, the calculated 0 K AE of 305.6 kcal/mol is in very good agreement with the currently accepted experimental value of 303.1 ± 5 kcal

  16. Zeroth-order exchange energy as a criterion for optimized atomic basis sets in interatomic force calculations

    International Nuclear Information System (INIS)

    Varandas, A.J.C.

    1980-01-01

    A suggestion is made for using the zeroth-order exchange term, at the one-exchange level, in the perturbation development of the interaction energy as a criterion for optimizing the atomic basis sets in interatomic force calculations. The approach is illustrated for the case of two helium atoms. (orig.)

  17. Consistent gaussian basis sets of double- and triple-zeta valence with polarization quality of the fifth period for solid-state calculations.

    Science.gov (United States)

    Laun, Joachim; Vilela Oliveira, Daniel; Bredow, Thomas

    2018-02-22

    Consistent basis sets of double- and triple-zeta valence with polarization quality for the fifth period have been derived for periodic quantum-chemical solid-state calculations with the crystalline-orbital program CRYSTAL. They are an extension of the pob-TZVP basis sets, and are based on the full-relativistic effective core potentials (ECPs) of the Stuttgart/Cologne group and on the def2-SVP and def2-TZVP valence basis of the Ahlrichs group. We optimized orbital exponents and contraction coefficients to supply robust and stable self-consistent field (SCF) convergence for a wide range of different compounds. The computed crystal structures are compared to those obtained with standard basis sets available from the CRYSTAL basis set database. For the applied hybrid density functional PW1PW, the average deviations of calculated lattice constants from experimental references are smaller with pob-DZVP and pob-TZVP than with standard basis sets. © 2018 Wiley Periodicals, Inc.

  18. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    Science.gov (United States)

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
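
    Matching Pursuit itself is a simple greedy algorithm: at each step it selects the dictionary element with the largest overlap with the current residual and subtracts its projection. A minimal one-dimensional sketch with a Gaussian dictionary (illustrative only, not the authors' wavepacket implementation; the dictionary, target, and iteration count are hypothetical):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 200)

# Dictionary of normalized Gaussians on a grid of centers (illustrative).
centers = np.linspace(-4.0, 4.0, 41)
D = np.exp(-(x[:, None] - centers[None, :]) ** 2)
D /= np.linalg.norm(D, axis=0)

# Target: an exact superposition of two dictionary atoms.
target = 2.0 * D[:, 10] - 1.5 * D[:, 30]

# Matching Pursuit: repeatedly subtract the best-matching atom from the residual.
residual = target.copy()
chosen = []
for _ in range(5):
    overlaps = D.T @ residual
    k = int(np.argmax(np.abs(overlaps)))
    chosen.append(k)
    residual = residual - overlaps[k] * D[:, k]

print(np.linalg.norm(residual) < 1e-6, sorted(set(chosen)))
```

    Because the dictionary atoms are not orthogonal, the same atom can be selected more than once; the pruning role described in the abstract corresponds to keeping only the few atoms that Matching Pursuit actually selects.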

  19. Prederivatives of gamma paraconvex set-valued maps and Pareto optimality conditions for set optimization problems.

    Science.gov (United States)

    Huang, Hui; Ning, Jixian

    2017-01-01

    Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ -paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solution of set optimization problems.

  20. On the choice of an optimal value-set of qualitative attributes for information retrieval in databases

    International Nuclear Information System (INIS)

    Ryjov, A.; Loginov, D.

    1994-01-01

    The problem of choosing an optimal set of significances of qualitative attributes for information retrieval in databases is addressed. Given a particular database, a set of significances is called optimal if it minimizes the losses of information and the information noise for information retrieval in the database. Obviously, such a set of significances depends on the statistical parameters of the database. Software is described that calculates, from the statistical parameters of a given database, the losses of information and the information noise for arbitrary sets of significances of qualitative attributes. The software also makes it possible to compare various sets of significances of qualitative attributes and to choose the optimal set.

  1. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    Science.gov (United States)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

  2. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  3. Groebner basis, resultants and the generalized Mandelbrot set

    Energy Technology Data Exchange (ETDEWEB)

    Geum, Young Hee [Centre of Research for Computational Sciences and Informatics in Biology, Bioindustry, Environment, Agriculture and Healthcare, University of Malaya, 50603 Kuala Lumpur (Malaysia)], E-mail: conpana@empal.com; Hare, Kevin G. [Department of Pure Mathematics, University of Waterloo, Waterloo, Ont., N2L 3G1 (Canada)], E-mail: kghare@math.uwaterloo.ca

    2009-10-30

    This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.
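
    The elimination idea can be illustrated on the period-1 component of the classical Mandelbrot set (z -> z^2 + c): taking the resultant of the fixed-point condition and the multiplier condition removes z and leaves a relation between c and the multiplier, whose roots at multiplier ±1 are bifurcation points. A sketch using sympy as a stand-in computer-algebra system (the specific polynomials are textbook material, not taken from the paper):

```python
from sympy import symbols, resultant, solve

z, c, l = symbols('z c l')

f = z**2 + c - z        # fixed-point condition for z -> z**2 + c
g = 2*z - l             # its multiplier: lambda = d/dz (z**2 + c) = 2z

# Eliminate z: the resultant vanishes exactly when f and g share a root.
r = resultant(f, g, z)  # l**2 - 2*l + 4*c

cusp = solve(r.subs(l, 1), c)      # multiplier +1: cusp of the main cardioid
period2 = solve(r.subs(l, -1), c)  # multiplier -1: period-doubling point
print(cusp, period2)               # [1/4] [-3/4]
```

    Setting |l| = 1 in the same relation parameterizes the whole cardioid boundary; higher-period components lead to larger systems where a Groebner basis replaces the single resultant.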

  4. Groebner basis, resultants and the generalized Mandelbrot set

    International Nuclear Information System (INIS)

    Geum, Young Hee; Hare, Kevin G.

    2009-01-01

    This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.

  5. Localized atomic basis set in the projector augmented wave method

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth; Vanin, Marco; Mortensen, Jens Jørgen

    2009-01-01

    We present an implementation of localized atomic-orbital basis sets in the projector augmented wave (PAW) formalism within the density-functional theory. The implementation in the real-space GPAW code provides a complementary basis set to the accurate but computationally more demanding grid...

  6. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Directory of Open Access Journals (Sweden)

    Khang Jie Liew

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  7. Dynamical pruning of static localized basis sets in time-dependent quantum dynamics

    NARCIS (Netherlands)

    McCormack, D.A.

    2006-01-01

    We investigate the viability of dynamical pruning of localized basis sets in time-dependent quantum wave packet methods. Basis functions that have a very small population at any given time are removed from the active set. The basis functions themselves are time independent, but the set of active

  8. Set-valued optimization an introduction with applications

    CERN Document Server

    Khan, Akhtar A; Zalinescu, Constantin

    2014-01-01

    Set-valued optimization is a vibrant and expanding branch of mathematics that deals with optimization problems where the objective map and/or the constraints maps are set-valued maps acting between certain spaces. Since set-valued maps subsume single-valued maps, set-valued optimization provides an important extension and unification of the scalar as well as the vector optimization problems. Therefore this relatively new discipline has justifiably attracted a great deal of attention in recent years. This book presents, in a unified framework, basic properties on ordering relations, solution c

  9. On the performance of atomic natural orbital basis sets: A full configuration interaction study

    International Nuclear Information System (INIS)

    Illas, F.; Ricart, J.M.; Rubio, J.; Bagus, P.S.

    1990-01-01

    The performance of atomic natural orbital (ANO) basis sets has been studied by comparing self-consistent field (SCF) and full configuration interaction (CI) results obtained for the first row atoms and hydrides. The ANO results have been compared with those obtained using a segmented basis set containing the same number of contracted basis functions. The total energies obtained with the ANO basis sets are always lower than those obtained using the segmented set. However, for the hydrides, the differential electronic correlation energy obtained with the ANO basis set may be smaller than the one recovered with the segmented set. We relate this poorer differential correlation energy for the ANO basis set to the fact that only one contracted d function is used for the ANO and segmented basis sets.

  10. Optimization of the variational basis in the three body problem

    International Nuclear Information System (INIS)

    Simenog, I.V.; Pushkash, O.M.; Bestuzheva, A.B.

    1995-01-01

    A procedure for optimizing the variational oscillator basis is proposed for calculating the energy spectra of three-body systems. The hierarchy of basis functions is derived, and the energies of the ground and excited states of three gravitating particles are obtained with high accuracy. 12 refs.
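
    The idea of optimizing an oscillator basis can be illustrated on a one-dimensional toy problem: expand a quartic-oscillator Hamiltonian in a harmonic-oscillator basis and scan the frequency parameter for the lowest variational ground-state energy. The model, basis size, and scan range below are illustrative, not the three-body formalism of the paper:

```python
import numpy as np

def quartic_ground_energy(omega, nbas):
    """Lowest eigenvalue of H = p**2/2 + x**4 in a harmonic-oscillator
    basis of frequency omega, truncated to nbas functions."""
    npad = nbas + 4                                  # pad so x**4 elements are exact
    a = np.diag(np.sqrt(np.arange(1, npad)), k=1)    # annihilation operator
    x = (a + a.T) / np.sqrt(2.0 * omega)
    pm = (a.T - a) * np.sqrt(omega / 2.0)            # p / i, so p**2 = -pm @ pm
    H = -0.5 * (pm @ pm) + np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(H[:nbas, :nbas])[0]

# Scan the basis parameter: the best omega gives the lowest variational energy.
omegas = np.linspace(0.5, 6.0, 56)
energies = [quartic_ground_energy(w, 8) for w in omegas]
best = min(energies)
print(best)  # approaches the accurate ground-state energy as nbas grows
```

    Because the calculation is variational, every entry of `energies` lies above the exact ground-state energy, and the scan minimum marks the best-adapted basis for the chosen size.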

  11. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    Science.gov (United States)

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or by adding non-atom-centered states to the basis.

  12. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    Science.gov (United States)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

    New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  13. The static response function in Kohn-Sham theory: An appropriate basis for its matrix representation in case of finite AO basis sets

    International Nuclear Information System (INIS)

    Kollmar, Christian; Neese, Frank

    2014-01-01

    The role of the static Kohn-Sham (KS) response function describing the response of the electron density to a change of the local KS potential is discussed in both the theory of the optimized effective potential (OEP) and the so-called inverse Kohn-Sham problem involving the task to find the local KS potential for a given electron density. In a general discussion of the integral equation to be solved in both cases, it is argued that a unique solution of this equation can be found even in the case of finite atomic orbital basis sets. It is shown how a matrix representation of the response function can be obtained if the exchange-correlation potential is expanded in terms of a Schmidt-orthogonalized basis comprising products of occupied and virtual orbitals. The viability of this approach in both OEP theory and the inverse KS problem is illustrated by numerical examples.

  14. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Gaigong [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Department of Mathematics, University of California, Berkeley, Berkeley, CA 94720 (United States); Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Hu, Wei, E-mail: whu@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Pask, John E., E-mail: pask1@llnl.gov [Physics Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2017-04-15

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H₂ and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  15. Comparison of Property-Oriented Basis Sets for the Computation of Electronic and Nuclear Relaxation Hyperpolarizabilities.

    Science.gov (United States)

    Zaleśny, Robert; Baranowska-Łączkowska, Angelika; Medveď, Miroslav; Luis, Josep M

    2015-09-08

    In the present work, we perform an assessment of several property-oriented atomic basis sets in computing (hyper)polarizabilities with a focus on the vibrational contributions. Our analysis encompasses the Pol and LPol-ds basis sets of Sadlej and co-workers, the def2-SVPD and def2-TZVPD basis sets of Rappoport and Furche, and the ORP basis set of Baranowska-Łączkowska and Łączkowski. Additionally, we use the d-aug-cc-pVQZ and aug-cc-pVTZ basis sets of Dunning and co-workers to determine the reference estimates of the investigated electric properties for small- and medium-sized molecules, respectively. We combine these basis sets with ab initio post-Hartree-Fock quantum-chemistry approaches (including the coupled cluster method) to calculate electronic and nuclear relaxation (hyper)polarizabilities of carbon dioxide, formaldehyde, cis-diazene, and a medium-sized Schiff base. The primary finding of our study is that, among all studied property-oriented basis sets, only the def2-TZVPD and ORP basis sets yield nuclear relaxation (hyper)polarizabilities of small molecules with average absolute errors less than 5.5%. A similar accuracy for the nuclear relaxation (hyper)polarizabilities of the studied systems can also be reached using the aug-cc-pVDZ basis set (5.3%), although for more accurate calculations of vibrational contributions, i.e., average absolute errors less than 1%, the aug-cc-pVTZ basis set is recommended. It was also demonstrated that anharmonic contributions to first and second hyperpolarizabilities of a medium-sized Schiff base are particularly difficult to accurately predict at the correlated level using property-oriented basis sets. For instance, the value of the nuclear relaxation first hyperpolarizability computed at the MP2/def2-TZVPD level of theory is roughly 3 times larger than that determined using the aug-cc-pVTZ basis set. We link the failure of the def2-TZVPD basis set with the difficulties in predicting the first-order field

  16. Relaxation of functions of STO-3G and 6-31G* basis sets in the series of isoelectronic to LiF molecule

    International Nuclear Information System (INIS)

    Ermakov, A.I.; Belousov, V.V.

    2007-01-01

    The effect of relaxing the functions of the STO-3G and 6-31G* basis sets (BS) on their balance is considered for the series of molecules isoelectronic with LiF: LiF, BeO, BN, and C₂. The parameters of the basis functions in the molecules (the exponential scale factors, the orbital exponents of the Gaussian primitives, and their contraction coefficients) are determined by the criterion of minimum energy in unrestricted Hartree-Fock (UHF) calculations using direct parameter optimization: the simplex method and the Rosenbrock method. Several optimization schemes differing in the number of varied parameters have been applied. A relation between the basis function parameters of the considered sets through the mean values of the Gaussian exponents is established. The effects of relaxation on the change in total energy and on the relative errors in the calculated interatomic distances, normal vibration frequencies, dissociation energies, and other molecular properties are considered. The change in total energy upon relaxation of the STO-3G and 6-31G* basis functions amounts to 1100 and 80 kJ/mol, respectively, and must be taken into account when estimating energetic characteristics, especially for systems with highly polar chemical bonds. Relaxation of the STO-3G basis set improves the description of molecular properties in practically all considered cases, whereas relaxation of the 6-31G* basis set has little effect on its balance.

  17. Optimal trading quantity integration as a basis for optimal portfolio management

    Directory of Open Access Journals (Sweden)

    Saša Žiković

    2005-06-01

    The author points out the rationale for calculating and using the optimal trading quantity in conjunction with Markowitz's modern portfolio theory. The opening part presents an example of calculating optimal weights using Markowitz's mean-variance approach, followed by an explanation of the basic logic behind the optimal trading quantity. The use of the optimal trading quantity is not limited to systems with Bernoulli outcomes, but can also be applied when trading shares, futures, options, etc. The optimal trading quantity points out two often-overlooked axioms: (1) a system with negative mathematical expectancy can never be transformed into a system with positive mathematical expectancy; (2) by missing the optimal trading quantity an investor can turn a system with positive expectancy into a negative one. The optimal trading quantity is the quantity that maximizes the geometric mean (growth function) of a particular system. To determine the optimal trading quantity for simpler systems with a very limited number of outcomes, a set of Kelly's formulas is appropriate. The conclusion presents a summary of the paper.
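
    For the simple Bernoulli case described above, the geometric growth function and Kelly's closed-form optimum can be sketched as follows; the win probability and payoff are made-up illustrative values, not figures from the paper.

```python
import numpy as np

# Expected log-growth per bet when staking a fraction f of capital on a
# Bernoulli bet with win probability p and payoff b per unit staked.
def growth(f, p, b):
    return p * np.log(1.0 + b * f) + (1.0 - p) * np.log(1.0 - f)

p, b = 0.55, 1.0                      # hypothetical edge, even payoff
f_kelly = (p * (b + 1.0) - 1.0) / b   # Kelly's closed form, here 0.10

fs = np.linspace(0.001, 0.5, 5000)    # numerical check on a grid
f_best = fs[np.argmax(growth(fs, p, b))]
print(f_kelly, f_best)                # both ≈ 0.10

# Axiom (1) in code: with a negative-expectancy bet (p*(b+1) < 1),
# growth(f) < 0 for every stake f > 0, so no sizing rescues it.
```

    Staking more than f_kelly drives the growth rate back down and eventually negative, which is axiom (2): over-betting turns a positive-expectancy system into a losing one.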

  18. A Comparison of the Behavior of Functional/Basis Set Combinations for Hydrogen-Bonding in the Water Dimer with Emphasis on Basis Set Superposition Error

    OpenAIRE

    Plumley, Joshua A.; Dannenberg, J. J.

    2011-01-01

    We evaluate the performance of nine functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-DFT molecular orbital calculations and to experimenta...

  19. Optimal Set-Point Synthesis in HVAC Systems

    DEFF Research Database (Denmark)

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2007-01-01

    This paper presents optimal set-point synthesis for a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger and a water-to-air heat exchanger. The objective function is composed of the electrical power of the different components, encompassing fans, primary/secondary pump, tertiary pump, and the air-to-air heat exchanger wheel, and a fraction of the thermal power used by the HVAC system. The goals that have to be achieved by the HVAC system appear as constraints in the optimization problem. To solve the optimization problem, a steady-state model of the HVAC system is derived while different supplying hydronic circuits are studied for the water-to-air heat exchanger. Finally, the optimal set-points and the optimal supplying hydronic circuit are obtained.

  20. Correlation consistent basis sets for actinides. I. The Th and U atoms

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)

    2015-02-21

    New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilized the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFₙ (n = 2-4), ThO₂, and UFₙ (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF₄, ThF₃, ThF₂, and ThO₂ are all within their experimental uncertainties. Bond dissociation energies of ThF₄ and ThF₃, as well as UF₆ and UF₅, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF₄ and ThO₂. The DKH3 atomization energy of ThO₂ was calculated to be smaller than the DKH2
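
    The "standard basis set extrapolation techniques" referred to here are typically two-point formulas; a common inverse-cubic model for correlation energies is sketched below with made-up placeholder energies, not values from the paper.

```python
# Two-point CBS extrapolation assuming E(n) = E_CBS + A / n**3 for
# correlation energies from cc-pVnZ-style basis sets.
def cbs_2point(e_hi, e_lo, n_hi, n_lo):
    """Eliminate A between E(n_hi) and E(n_lo) and solve for E_CBS."""
    return (e_hi * n_hi**3 - e_lo * n_lo**3) / (n_hi**3 - n_lo**3)

# Illustrative triple-zeta (n=3) and quadruple-zeta (n=4) energies.
e_tz, e_qz = -0.27500, -0.28600  # hartree, made-up numbers
print(f"E_CBS = {cbs_2point(e_qz, e_tz, 4, 3):.5f}")  # E_CBS = -0.29403
```

    The extrapolated value lies below both finite-basis energies, as expected for correlation energies that converge monotonically from above.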

  1. DETERMINATION ALGORITHM OF OPTIMAL GEOMETRICAL PARAMETERS FOR COMPONENTS OF FREIGHT CARS ON THE BASIS OF GENERALIZED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    O. V. Fomin

    2013-10-01

    Purpose. To present the features and an example of the use of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, realized using a computer. Methodology. The developed approach to the search for optimal geometrical parameters can be described as the determination of the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operational capacity and efficiency of use. Originality. The procedure for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models is formalized in the paper. Practical value. Practical introduction of the research results for universal open cars allows one to reduce the tare weight of their design and accordingly to increase the carrying capacity by almost 100 kg with improved strength characteristics. Taking into account the size of the car fleet, this will provide a considerable economic effect in production and operation. The proposed approach is oriented to widely distributed software packages (for example, Microsoft Excel), which are used by the technical services of most enterprises, and does not require additional capital investments (acquisition of specialized programs and corresponding staff training). This confirms the correctness of the research direction. The proposed algorithm can be used for the solution of other optimization tasks on the basis of generalized mathematical models.

  2. Simulations of smog-chamber experiments using the two-dimensional volatility basis set: linear oxygenated precursors.

    Science.gov (United States)

    Chacon-Madrid, Heber J; Murphy, Benjamin N; Pandis, Spyros N; Donahue, Neil M

    2012-10-16

    We use a two-dimensional volatility basis set (2D-VBS) box model to simulate secondary organic aerosol (SOA) mass yields of linear oxygenated molecules: n-tridecanal, 2- and 7-tridecanone, 2- and 7-tridecanol, and n-pentadecane. A hybrid model with explicit, a priori treatment of the first-generation products for each precursor molecule, followed by a generic 2D-VBS mechanism for later-generation chemistry, results in excellent model-measurement agreement. This strongly confirms that the 2D-VBS mechanism is a predictive tool for SOA modeling but also suggests that certain important first-generation products for major primary SOA precursors should be treated explicitly for optimal SOA predictions.

  3. The 6-31B(d) basis set and the BMC-QCISD and BMC-CCSD multicoefficient correlation methods.

    Science.gov (United States)

    Lynch, Benjamin J; Zhao, Yan; Truhlar, Donald G

    2005-03-03

    Three new multicoefficient correlation methods (MCCMs) called BMC-QCISD, BMC-CCSD, and BMC-CCSD-C are optimized against 274 data that include atomization energies, electron affinities, ionization potentials, and reaction barrier heights. A new basis set called 6-31B(d) is developed and used as part of the new methods. BMC-QCISD has mean unsigned errors in calculating atomization energies per bond and barrier heights of 0.49 and 0.80 kcal/mol, respectively. BMC-CCSD has mean unsigned errors of 0.42 and 0.71 kcal/mol for the same two quantities. BMC-CCSD-C is an equally effective variant of BMC-CCSD that employs Cartesian rather than spherical harmonic basis sets. The mean unsigned error of BMC-CCSD or BMC-CCSD-C for atomization energies, barrier heights, ionization potentials, and electron affinities is 22% lower than G3SX(MP2) at an order of magnitude less cost for gradients for molecules with 9-13 atoms, and it scales better (N⁶ vs N⁷, where N is the number of atoms) when the size of the molecule is increased.

  4. Incomplete basis-set problem. V. Application of CIBS to many-electron systems

    International Nuclear Information System (INIS)

    McDowell, K.; Lewis, L.

    1982-01-01

    Five versions of CIBS (corrections to an incomplete basis set) theory are used to compute first and second corrections to Roothaan-Hartree-Fock energies via expansion of a given basis set. Version one is an order-by-order perturbation approximation which neglects virtual orbitals; version two is a full CIBS expansion which neglects virtual orbitals; version three is an order-by-order perturbation approximation which includes virtual orbitals; version four is a full CIBS expansion which includes orthogonalization to virtual orbitals but neglects virtual orbital coupling terms; and version five is a full CIBS expansion with inclusion of coupling to virtual orbitals. Results are presented for the atomic and molecular systems He, Be, H₂, LiH, Li₂, and H₂O. Version five is shown to produce a corrected Hartree-Fock energy which is essentially in agreement with a comparable SCF result using the same expanded basis set. Versions one through four yield varying degrees of agreement; however, it is evident that the effect of the virtual orbitals must be included. From the results, CIBS version five is shown to be a viable quantitative procedure which can be used to expand or to study the use of basis sets in quantum chemistry.

  5. Set optimization and applications the state of the art : from set relations to set-valued risk measures

    CERN Document Server

    Heyde, Frank; Löhne, Andreas; Rudloff, Birgit; Schrage, Carola

    2015-01-01

    This volume presents five surveys with extensive bibliographies and six original contributions on set optimization and its applications in mathematical finance and game theory. The topics range from more conventional approaches that look for minimal/maximal elements with respect to vector orders or set relations, to the new complete-lattice approach that comprises a coherent solution concept for set optimization problems, along with existence results, duality theorems, optimality conditions, variational inequalities and theoretical foundations for algorithms. Modern approaches to scalarization methods can be found as well as a fundamental contribution to conditional analysis. The theory is tailor-made for financial applications, in particular risk evaluation and [super-]hedging for market models with transaction costs, but it also provides a refreshing new perspective on vector optimization. There is no comparable volume on the market, making the book an invaluable resource for researchers working in vector o...

  6. Rotor Pole Shape Optimization of Permanent Magnet Brushless DC Motors Using the Reduced Basis Technique

    Directory of Open Access Journals (Sweden)

    GHOLAMIAN, A. S.

    2009-06-01

    In this paper, a magnet shape optimization method for the reduction of cogging torque and torque ripple in permanent magnet (PM) brushless DC motors is presented using the reduced basis technique coupled with finite element and design-of-experiments methods. The primary objective of the method is to reduce the enormous number of design variables required to define the magnet shape. The reduced basis technique represents the shape as a weighted combination of several basis shapes, and the aim of the method is to find the best combination using the weights for each shape as the design variables. A multi-level design process is developed to find suitable basis (trial) shapes at each level for use in the reduced basis technique. Each level is treated as a separate optimization problem until the required objective is achieved. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. The method is demonstrated on the magnet shape optimization of a 6-pole/18-slot PM BLDC motor.
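
    The weighted-combination idea at the core of the reduced basis technique can be sketched in a few lines; the trial shapes and weights below are purely illustrative stand-ins, not the magnet geometry of the paper.

```python
import numpy as np

# A pole contour sampled at 181 angular points would naively need 181
# design variables; the reduced basis technique replaces them with the
# weights of a handful of trial shapes.
theta = np.linspace(0.0, np.pi, 181)
basis_shapes = np.stack([
    np.ones_like(theta),   # uniform arc
    np.sin(theta),         # smooth crown
    np.sin(theta) ** 3,    # more peaked crown
])

w = np.array([0.6, 0.3, 0.1])   # the only 3 design variables
profile = w @ basis_shapes      # candidate magnet shape, shape (181,)
print(profile.shape, profile.max())
```

    An optimizer (the paper uses a Taguchi-style design of experiments) then searches over w alone, evaluating each candidate shape with a finite element model.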

  7. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    International Nuclear Information System (INIS)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook; Kim, Woo Youn

    2015-01-01

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set shows that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward the complete basis set limit by simply decreasing the scaling factor, regardless of the system.
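
    The flavor of a uniform-grid sinc basis can be conveyed with a minimal sinc-DVR calculation, a close relative of the Lagrange-sinc functions, where the kinetic-energy matrix on the grid is known analytically. The grid parameters and the harmonic test potential are illustrative choices, not the paper's setup.

```python
import numpy as np

# Basis functions phi_k(x) = sinc((x - x_k)/h) on a uniform grid. With
# hbar = m = 1 the kinetic matrix elements are analytic:
#   T_kk = pi**2 / (6 h**2),  T_kl = (-1)**(k-l) / (h**2 (k-l)**2)
N, L = 101, 10.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

d = np.arange(N)[:, None] - np.arange(N)[None, :]
with np.errstate(divide="ignore"):
    T = (-1.0) ** d / (h**2 * d.astype(float) ** 2)
np.fill_diagonal(T, np.pi**2 / (6.0 * h**2))

V = np.diag(0.5 * x**2)            # harmonic oscillator test potential
E = np.linalg.eigvalsh(T + V)
print(E[:3])                        # ≈ [0.5, 1.5, 2.5]
```

    Decreasing h (the analogue of the scaling factor above) systematically improves the eigenvalues, mirroring the convergence behavior described in the abstract.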

  8. ROAM: A Radial-Basis-Function Optimization Approximation Method for Diagnosing the Three-Dimensional Coronal Magnetic Field

    International Nuclear Information System (INIS)

    Dalmasse, Kevin; Nychka, Douglas W.; Gibson, Sarah E.; Fan, Yuhong; Flyer, Natasha

    2016-01-01

    The Coronal Multichannel Polarimeter (CoMP) routinely performs coronal polarimetric measurements using the Fe XIII 10747 and 10798 lines, which are sensitive to the coronal magnetic field. However, inverting such polarimetric measurements into magnetic field data is a difficult task because the corona is optically thin at these wavelengths and the observed signal is therefore the integrated emission of all the plasma along the line of sight. To overcome this difficulty, we take on a new approach that combines a parameterized 3D magnetic field model with forward modeling of the polarization signal. For that purpose, we develop a new, fast and efficient, optimization method for model-data fitting: the Radial-basis-functions Optimization Approximation Method (ROAM). Model-data fitting is achieved by optimizing a user-specified log-likelihood function that quantifies the differences between the observed polarization signal and its synthetic/predicted analog. Speed and efficiency are obtained by combining sparse evaluation of the magnetic model with radial-basis-function (RBF) decomposition of the log-likelihood function. The RBF decomposition provides an analytical expression for the log-likelihood function that is used to inexpensively estimate the set of parameter values optimizing it. We test and validate ROAM on a synthetic test bed of a coronal magnetic flux rope and show that it performs well with a significantly sparse sample of the parameter space. We conclude that our optimization method is well-suited for fast and efficient model-data fitting and can be exploited for converting coronal polarimetric measurements, such as the ones provided by CoMP, into coronal magnetic field data.
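
    The surrogate idea behind ROAM, fitting radial basis functions to sparse evaluations of the log-likelihood and then optimizing the inexpensive fit, can be sketched on a one-parameter toy objective. The objective, shape parameter, and sample layout are hypothetical, not the CoMP model.

```python
import numpy as np

def objective(p):                       # stand-in for an expensive
    return (p - 0.3) ** 2               # model-data misfit

centers = np.linspace(-1.0, 1.0, 9)     # sparse parameter samples
y = objective(centers)
eps = 2.0                               # Gaussian RBF shape parameter

def phi(a, b):                          # Gaussian RBF design matrix
    return np.exp(-(eps * (a[:, None] - b[None, :])) ** 2)

w = np.linalg.solve(phi(centers, centers), y)   # interpolation weights

# Minimizing the analytic surrogate is cheap compared with re-running
# the forward model at every trial point.
grid = np.linspace(-1.0, 1.0, 2001)
surrogate = phi(grid, centers) @ w
print(grid[np.argmin(surrogate)])       # near the true minimizer 0.3
```

    The nine objective evaluations stand in for expensive forward-modeled polarization signals; the surrogate is then interrogated as densely as needed at negligible cost.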

  9. Polarization functions for the modified m6-31G basis sets for atoms Ga through Kr.

    Science.gov (United States)

    Mitin, Alexander V

    2013-09-05

    The 2df polarization functions for the modified m6-31G basis sets of the third-row atoms Ga through Kr (Int. J. Quantum Chem. 2007, 107, 3028; Int. J. Quantum Chem. 2009, 109, 1158) are proposed. The performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets were examined in molecular calculations carried out by the density functional theory (DFT) method with the B3LYP hybrid functional, second-order Møller-Plesset perturbation theory (MP2), and the quadratic configuration interaction method with single and double substitutions (QCISD), and were compared with those for the known 6-31G basis sets as well as with the other similar 641 and 6-311G basis sets with and without polarization functions. The obtained results show that the performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets are better than those of the known 6-31G, 6-31G(d,p), and 6-31G(2df,p) basis sets. These improvements are mainly reached due to better approximations of the different electrons belonging to the different atomic shells in the modified basis sets. The applicability of the modified basis sets in thermochemical calculations is also discussed. © 2013 Wiley Periodicals, Inc.

  10. Conductance calculations with a wavelet basis set

    DEFF Research Database (Denmark)

    Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel

    2003-01-01

    We present a method based on density functional theory (DFT) for calculating the conductance of a phase-coherent system. The metallic contacts and the central region where the electron scattering occurs are treated on the same footing, taking their full atomic and electronic structure into account. The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way.

  11. Correlation consistent basis sets for lanthanides: The atoms La–Lu

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)

    2016-08-07

    Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits are observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd₂, GdF, and GdF₃. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd₂, 151.7 (−36.6) for GdF, and 447.1 (−295.2) for GdF₃.

  12. Current-voltage curves for molecular junctions computed using all-electron basis sets

    International Nuclear Information System (INIS)

    Bauschlicher, Charles W.; Lawson, John W.

    2006-01-01

    We present current-voltage (I-V) curves computed using all-electron basis sets on the conducting molecule. The all-electron results are very similar to previous results obtained using effective core potentials (ECP). A hybrid integration scheme is used that keeps the all-electron calculations cost competitive with respect to the ECP calculations. By neglecting the coupling of states to the contacts below a fixed energy cutoff, the density matrix for the core electrons can be evaluated analytically. The full density matrix is formed by adding this core contribution to the valence part that is evaluated numerically. Expanding the definition of the core in the all-electron calculations significantly reduces the computational effort and, up to biases of about 2 V, the results are very similar to those obtained using more rigorous approaches. The convergence of the I-V curves and transmission coefficients with respect to basis set is discussed. The addition of diffuse functions is critical in approaching basis set completeness.

  13. Gaussian basis sets for use in correlated molecular calculations. XI. Pseudopotential-based and all-electron relativistic basis sets for alkali metal (K-Fr) and alkaline earth (Ca-Ra) elements

    Science.gov (United States)

    Hill, J. Grant; Peterson, Kirk A.

    2017-12-01

    New correlation consistent basis sets based on pseudopotential (PP) Hamiltonians have been developed from double- to quintuple-zeta quality for the late alkali (K-Fr) and alkaline earth (Ca-Ra) metals. These are accompanied by new all-electron basis sets of double- to quadruple-zeta quality that have been contracted for use with both Douglas-Kroll-Hess (DKH) and eXact 2-Component (X2C) scalar relativistic Hamiltonians. Sets for valence correlation (ms), cc-pVnZ-PP and cc-pVnZ-(DK,DK3/X2C), in addition to outer-core correlation [valence + (m-1)sp], cc-p(w)CVnZ-PP and cc-pwCVnZ-(DK,DK3/X2C), are reported. The -PP sets have been developed for use with small-core PPs [I. S. Lim et al., J. Chem. Phys. 122, 104103 (2005) and I. S. Lim et al., J. Chem. Phys. 124, 034107 (2006)], while the all-electron sets utilized second-order DKH Hamiltonians for 4s and 5s elements and third-order DKH for 6s and 7s. The accuracy of the basis sets is assessed through benchmark calculations at the coupled-cluster level of theory for both atomic and molecular properties. Not surprisingly, it is found that outer-core correlation is vital for accurate calculation of the thermodynamic and spectroscopic properties of diatomic molecules containing these elements.

  15. A Hartree–Fock study of the confined helium atom: Local and global basis set approaches

    Energy Technology Data Exchange (ETDEWEB)

    Young, Toby D., E-mail: tyoung@ippt.pan.pl [Zakład Metod Komputerowych, Instytut Podstawowych Problemów Techniki Polskiej Akademii Nauk, ul. Pawińskiego 5b, 02-106 Warszawa (Poland); Vargas, Rubicelia [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico); Garza, Jorge, E-mail: jgo@xanum.uam.mx [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico)

    2016-02-15

    Two different basis set methods are used to calculate atomic energy within Hartree–Fock theory. The first is a local basis set approach using high-order real-space finite elements and the second is a global basis set approach using modified Slater-type orbitals. These two approaches are applied to the confined helium atom and are compared by calculating one- and two-electron contributions to the total energy. As a measure of the quality of the electron density, the cusp condition is analyzed. - Highlights: • Two different basis set methods for atomic Hartree–Fock theory. • Galerkin finite element method and modified Slater-type orbitals. • Confined atom model (helium) under small-to-extreme confinement radii. • Detailed analysis of the electron wave-function and the cusp condition.

  16. Continuum contributions to dipole oscillator-strength sum rules for hydrogen in finite basis sets

    DEFF Research Database (Denmark)

    Oddershede, Jens; Ogilvie, John F.; Sauer, Stephan P. A.

    2017-01-01

    Calculations of the continuum contributions to dipole oscillator sum rules for hydrogen are performed using both exact and basis-set representations of the stick spectra of the continuum wave function. We show that the same results are obtained for the sum rules in both cases, but that the convergence towards the final results with increasing excitation energies included in the sum over states is slower in the basis-set cases when we use the best basis. We argue that this conclusion most likely holds also for larger atoms or molecules.
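
As a numerical aside (not from the record above), the split between discrete and continuum contributions for hydrogen can be checked from the closed-form 1s → np oscillator strengths: the bound spectrum exhausts only about 56.5% of the Thomas-Reiche-Kuhn sum rule S(0) = 1, leaving roughly 43.5% to the continuum.

```python
import math

def f_1s_np(n: int) -> float:
    """Oscillator strength f(1s -> np) for hydrogen (Bethe-Salpeter
    closed form), evaluated in log space to avoid integer overflow."""
    ln_f = (8.0 * math.log(2.0) + 5.0 * math.log(n)
            + (2 * n - 4) * math.log(n - 1)
            - math.log(3.0) - (2 * n + 4) * math.log(n + 1))
    return math.exp(ln_f)

discrete = sum(f_1s_np(n) for n in range(2, 2001))   # tail beyond n=2000 is ~1e-7
continuum = 1.0 - discrete    # TRK sum rule: S(0) = Z = 1 for hydrogen
print(f"discrete: {discrete:.4f}  continuum: {continuum:.4f}")
```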

  17. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the scale of the power system expanding, the global setting and optimization of relay protection is an extremely difficult task. This paper presents a kind of application in relay protection of global optimization improved particle swarm optimization algorithm and the inverse time current protection as an example, selecting reliability of the relay protection, selectivity, quick action and flexibility as the four requires to establish the optimization targets, and optimizing protection setting values of the whole system. Finally, in the case of actual power system, the optimized setting value results of the proposed method in this paper are compared with the particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability, good robustness, and it is suitable for optimizing setting value in the relay protection of the whole power system.
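
For orientation, a minimal global-best PSO in plain Python (this is the textbook baseline, not the record's improved quantum-behaved variant); the objective here is a toy stand-in for a real protection setting-value objective.

```python
import random

random.seed(1)  # reproducible demo run

def pso(objective, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer (illustrative sketch)."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                  # personal best positions
    pval = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # global best
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - x[d])
                            + c2 * random.random() * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))
            v = objective(x)
            if v < pval[i]:
                pbest[i], pval[i] = x[:], v
                if v < gval:
                    gbest, gval = x[:], v
    return gbest, gval

# Toy stand-in for a protection setting-value objective (sphere function):
best, val = pso(lambda x: sum(t * t for t in x), dim=4, bounds=(-5.0, 5.0))
```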

  18. New basis set for the prediction of the specific rotation in flexible biological molecules

    DEFF Research Database (Denmark)

    Baranowska-Łaczkowska, Angelika; Łaczkowski, Krzysztof Z.; Henriksen, Christian

    2016-01-01

    are compared to those obtained with the (d-)aug-cc-pVXZ (X = D, T and Q) basis sets of Dunning et al. The ORP values are in good overall agreement with the aug-cc-pVTZ results making the ORP a good basis set for routine TD-DFT optical rotation calculations of conformationally flexible molecules. The results...

  19. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  20. Global Optimization for Bus Line Timetable Setting Problem

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2014-01-01

    Full Text Available This paper defines the bus timetable setting problem for a service day divided into time periods according to passenger flow intensity; it is assumed that passengers arrive uniformly and that bus runs are spaced evenly within each period. The problem is to determine the assignment of bus runs to each time period so as to minimize the total waiting time of passengers on platforms, given the total number of runs. For this multistage decision problem, a dynamic programming algorithm is designed. Global optimization procedures using dynamic programming are developed. A numerical example on optimizing the assignment of bus runs for a single line is given to demonstrate the efficiency of the proposed methodology, showing that optimizing buses' departure times using dynamic programming can save computational time and find the global optimal solution.
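
A minimal sketch of such a dynamic program, under the record's assumptions (uniform arrivals, evenly spaced runs), in which the waiting time in a period with arrival rate lam, length T and n runs is lam*T²/(2n); the function name and interface are hypothetical, not the paper's.

```python
def allocate_runs(lam, T, total_runs):
    """Assign bus runs to time periods to minimize total passenger waiting
    time by dynamic programming. Assumes uniform arrivals (rate lam[i]) and
    evenly spaced runs within each period of length T[i], which gives a
    per-period waiting time of lam[i] * T[i]**2 / (2 * n) for n runs."""
    P = len(lam)
    assert total_runs >= P, "need at least one run per period"
    INF = float("inf")
    # best[i][r]: minimal waiting time covering periods 0..i-1 with r runs
    best = [[INF] * (total_runs + 1) for _ in range(P + 1)]
    choice = [[0] * (total_runs + 1) for _ in range(P + 1)]
    best[0][0] = 0.0
    for i in range(1, P + 1):
        for r in range(i, total_runs + 1):
            for n in range(1, r - i + 2):      # leave >= 1 run per period
                cand = best[i - 1][r - n] + lam[i - 1] * T[i - 1] ** 2 / (2 * n)
                if cand < best[i][r]:
                    best[i][r], choice[i][r] = cand, n
    runs, r = [], total_runs                   # recover the assignment
    for i in range(P, 0, -1):
        runs.append(choice[i][r])
        r -= choice[i][r]
    return list(reversed(runs)), best[P][total_runs]

# Three 60-minute periods with passenger rates 10, 40, 20 per minute, 6 runs:
runs, total_wait = allocate_runs([10, 40, 20], [60, 60, 60], 6)
```

The peak period (rate 40) is allocated the most runs, as expected.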

  1. A two-center-oscillator-basis as an alternative set for heavy ion processes

    International Nuclear Information System (INIS)

    Tornow, V.; Reinhard, P.G.; Drechsel, D.

    1977-01-01

    The two-center-oscillator-basis, which is constructed from harmonic oscillator wave functions developed about two different centers, suffers from numerical problems at small center separations due to the overcompleteness of the set. In order to overcome these problems we admix higher oscillator wave functions before the orthogonalization or antisymmetrization, respectively. This yields a numerically stable basis set at each center separation. The results obtained for the potential energy surface are comparable with the results of more elaborate models. (orig.) [de

  2. Magnetic anisotropy basis sets for epitaxial (110) and (111) REFe2 nanofilms

    International Nuclear Information System (INIS)

    Bowden, G J; Martin, K N; Fox, A; Rainford, B D; Groot, P A J de

    2008-01-01

    Magnetic anisotropy basis sets for the cubic Laves phase rare earth intermetallic REFe2 compounds are discussed in some detail. Such compounds can be either free standing, or thin films grown in either (110) or (111) mode using molecular beam epitaxy. For the latter, it is useful to rotate to a new coordinate system where the z-axis coincides with the growth axis of the film. In this paper, three symmetry adapted basis sets are given, for multi-pole moments up to n = 12. These sets can be used for free-standing compounds and for (110) and (111) epitaxial films. In addition, the distortion of REFe2 films grown on sapphire substrates is also considered. The distortions are different for the (110) and (111) films. Strain-induced harmonic sets are given for both specific and general distortions. Finally, some predictions are made concerning the preferred direction of easy magnetization in (111) molecular beam epitaxy grown REFe2 films

  3. Behavior and neural basis of near-optimal visual search

    Science.gov (United States)

    Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre

    2013-01-01

    The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276

  4. Distinguishing the elements of a full product basis set needs only projective measurements and classical communication

    International Nuclear Information System (INIS)

    Chen Pingxing; Li Chengzu

    2004-01-01

    Nonlocality without entanglement is an interesting field. A manifestation of quantum nonlocality without entanglement is the possible local indistinguishability of orthogonal product states. In this paper we analyze the character of the operators needed to distinguish the elements of a full product basis set in a multipartite system, and show that perfectly distinguishing these product bases needs only local projective measurements and classical communication, and that these measurements cannot damage each product basis. Employing these conclusions one can easily discuss the local distinguishability of the elements of any full product basis set. Finally we discuss the generalization of these results to the local distinguishability of the elements of an incomplete product basis set

  5. Optimal choice of basis functions in the linear regression analysis

    International Nuclear Information System (INIS)

    Khotinskij, A.M.

    1988-01-01

    The problem of the optimal choice of basis functions in linear regression analysis is investigated. A stepwise algorithm is suggested, together with an estimate of its efficiency that holds for a finite number of measurements. Conditions providing a probability of correct choice close to 1 are formulated. Application of the stepwise algorithm to the analysis of decay curves is substantiated. 8 refs
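
The stepwise selection idea can be sketched as a greedy forward search over candidate basis functions (columns of a design matrix); this is an illustrative stand-in, not the paper's algorithm, and the synthetic data are made up.

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward (stepwise) selection of k basis functions, i.e.
    columns of the design matrix X, by least-squares residual reduction."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        def rss(j):
            cols = X[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            r = y - cols @ coef
            return float(r @ r)
        best = min(remaining, key=rss)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Synthetic example: y really depends on columns 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.normal(size=100)
selected = forward_select(X, y, 2)
```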

  6. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  7. An approach to develop chemical intuition for atomistic electron transport calculations using basis set rotations

    Energy Technology Data Exchange (ETDEWEB)

    Borges, A.; Solomon, G. C. [Department of Chemistry and Nano-Science Center, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø (Denmark)

    2016-05-21

    Single molecule conductance measurements are often interpreted through computational modeling, but the complexity of these calculations makes it difficult to directly link them to simpler concepts and models. Previous work has attempted to make this connection using maximally localized Wannier functions and symmetry adapted basis sets, but their use can be ambiguous and non-trivial. Starting from a Hamiltonian and overlap matrix written in a hydrogen-like basis set, we demonstrate a simple approach to obtain a new basis set that is chemically more intuitive and allows interpretation in terms of simple concepts and models. By diagonalizing the Hamiltonians corresponding to each atom in the molecule, we obtain a basis set that can be partitioned into pseudo-σ and pseudo-π and allows partitioning of the Landauer-Büttiker transmission, as well as the creation of simple Hückel models that reproduce the key features of the full calculation. This method provides a link between complex calculations and simple concepts and models to provide intuition or extract parameters for more complex model systems.
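
A simplified sketch of the basis-rotation idea: diagonalize each atom's sub-block of the Hamiltonian and assemble a block-diagonal rotation. Unlike the full method in the record, this sketch assumes an orthonormal basis (overlap S = identity), and the matrix values are made up.

```python
import numpy as np

def atomic_block_rotation(H, blocks):
    """Build the block-diagonal rotation U by diagonalizing each atom's
    sub-block of the Hamiltonian H. In the rotated basis U.T @ H @ U each
    intra-atomic block is diagonal. Assumes an orthonormal basis."""
    U = np.zeros_like(H)
    for idx in blocks:                 # idx: basis-function indices of one atom
        sub = np.ix_(idx, idx)
        _, vecs = np.linalg.eigh(H[sub])
        U[sub] = vecs
    return U

# Two "atoms" with two basis functions each (illustrative numbers):
H = np.array([[0.0, 0.3, 0.1, 0.0],
              [0.3, 0.5, 0.0, 0.1],
              [0.1, 0.0, 0.0, 0.3],
              [0.0, 0.1, 0.3, 0.5]])
U = atomic_block_rotation(H, [[0, 1], [2, 3]])
Hp = U.T @ H @ U                       # intra-atomic blocks now diagonal
```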

  8. Accurate Conformational Energy Differences of Carbohydrates: A Complete Basis Set Extrapolation

    Czech Academy of Sciences Publication Activity Database

    Csonka, G. I.; Kaminský, Jakub

    2011-01-01

    Roč. 7, č. 4 (2011), s. 988-997 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords : MP2 * basis set extrapolation * saccharides Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.215, year: 2011
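
The complete basis set (CBS) extrapolation named in this record's keywords is commonly done with a two-point inverse-cubic formula for correlation energies; a minimal sketch, with made-up placeholder energies (the record itself gives only bibliographic data).

```python
def cbs_extrapolate(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation of correlation energies,
    assuming the common inverse-cubic form E(X) = E_CBS + A / X**3, where
    x and y are basis-set cardinal numbers (e.g. 3 for TZ, 4 for QZ)."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Made-up MP2 correlation energies (hartree) at TZ and QZ quality:
e_cbs = cbs_extrapolate(-0.3700, -0.3850, 3, 4)
```

The extrapolated energy lies below the largest-basis value, as the monotone 1/X³ convergence implies.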

  9. Elitism set based particle swarm optimization and its application

    Directory of Open Access Journals (Sweden)

    Yanxia Sun

    2017-01-01

    Full Text Available Topology plays an important role in Particle Swarm Optimization (PSO) achieving good optimization performance. It is difficult to find one topology structure for the particles that achieves better optimization performance than the others, since the optimization performance depends not only on the searching abilities of the particles but also on the type of the optimization problem. Three elitist-set-based PSO algorithms without an explicit topology structure are proposed in this paper. An elitist set, which is based on the individual best experience, is used to communicate among the particles. Moreover, to avoid premature convergence of the particles, different statistical methods have been used in these three proposed methods. The performance of the proposed PSOs is compared with the results of the standard PSO 2011 and several PSOs with different topologies, and the simulation results and comparisons demonstrate that the proposed PSO with adaptive probabilistic preference can achieve good optimization performance.

  10. Utilization of reduced fuelling ripple set in ROP detector layout optimization

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an ROP detect layout optimization algorithm in CANDU reactors. ► The effect of using reduced set of fuelling ripples in ADORE is assessed. ► Significant speedup can be realized by adopting this approach. ► The quality of the results is comparable to results from full set of ripples. - Abstract: The ADORE (Alternative Detector layout Optimization for REgional overpower protection system) algorithm for performing the optimization of regional overpower protection (ROP) for CANDU® reactors has been recently developed. This algorithm utilizes the simulated annealing (SA) stochastic optimization technique to come up with an optimized detector layout for the ROP systems. For each history in the SA iteration where a particular detector layout is evaluated, the goodness of this detector layout is measured in terms of its trip set point value which is obtained by performing a probabilistic trip set point calculation using the ROVER-F code. Since during each optimization execution thousands of candidate detector layouts are evaluated, the overall optimization process is time consuming. Since for each ROVER-F evaluation the number of fuelling ripples controls the execution time, reducing the number of fuelling ripples will reduce the overall execution time. This approach has been investigated and the results are presented in this paper. The challenge is to construct a set of representative fuelling ripples which will significantly speedup the optimization process while guaranteeing that the resulting detector layout has similar quality to the ones produced when the complete set of fuelling ripples is employed.

  11. Basis set effects on coupled cluster benchmarks of electronically excited states: CC3, CCSDR(3) and CC2

    DEFF Research Database (Denmark)

    Silva-Junior, Mario R.; Sauer, Stephan P. A.; Schreiber, Marko

    2010-01-01

    Vertical electronic excitation energies and one-electron properties of 28 medium-sized molecules from a previously proposed benchmark set are revisited using the augmented correlation-consistent triple-zeta aug-cc-pVTZ basis set in CC2, CCSDR(3), and CC3 calculations. The results are compared to those obtained previously with the smaller TZVP basis set. For each of the three coupled cluster methods, a correlation coefficient greater than 0.994 is found between the vertical excitation energies computed with the two basis sets. The deviations of the CC2 and CCSDR(3) results from the CC3 reference values are very similar for both basis sets, thus confirming previous conclusions on the intrinsic accuracy of CC2 and CCSDR(3). This similarity justifies the use of CC2- or CCSDR(3)-based corrections to account for basis set incompleteness in CC3 studies of vertical excitation energies.

  12. Excited state nuclear forces from the Tamm-Dancoff approximation to time-dependent density functional theory within the plane wave basis set framework

    Science.gov (United States)

    Hutter, Jürg

    2003-03-01

    An efficient formulation of time-dependent linear response density functional theory for the use within the plane wave basis set framework is presented. The method avoids the transformation of the Kohn-Sham matrix into the canonical basis and references virtual orbitals only through a projection operator. Using a Lagrangian formulation nuclear derivatives of excited state energies within the Tamm-Dancoff approximation are derived. The algorithms were implemented into a pseudo potential/plane wave code and applied to the calculation of adiabatic excitation energies, optimized geometries and vibrational frequencies of three low lying states of formaldehyde. An overall good agreement with other time-dependent density functional calculations, multireference configuration interaction calculations and experimental data was found.

  13. Vector optimization set-valued and variational analysis

    CERN Document Server

    Chen, Guang-ya; Yang, Xiaogi

    2005-01-01

    This book is devoted to vector or multiple criteria approaches in optimization. Topics covered include: vector optimization, vector variational inequalities, vector variational principles, vector minmax inequalities and vector equilibrium problems. In particular, problems with variable ordering relations and set-valued mappings are treated. The nonlinear scalarization method is extensively used throughout the book to deal with various vector-related problems. The results presented are original and should be interesting to researchers and graduates in applied mathematics and operations research

  14. Gaussian basis sets for use in correlated molecular calculations. IV. Calculation of static electrical response properties

    International Nuclear Information System (INIS)

    Woon, D.E.; Dunning, T.H. Jr.

    1994-01-01

    An accurate description of the electrical properties of atoms and molecules is critical for quantitative predictions of the nonlinear properties of molecules and of long-range atomic and molecular interactions between both neutral and charged species. We report a systematic study of the basis sets required to obtain accurate correlated values for the static dipole (α1), quadrupole (α2), and octopole (α3) polarizabilities and the hyperpolarizability (γ) of the rare gas atoms He, Ne, and Ar. Several methods of correlation treatment were examined, including various orders of Møller-Plesset perturbation theory (MP2, MP3, MP4), coupled-cluster theory with and without perturbative treatment of triple excitations [CCSD, CCSD(T)], and singles and doubles configuration interaction (CISD). All of the basis sets considered here were constructed by adding even-tempered sets of diffuse functions to the correlation consistent basis sets of Dunning and co-workers. With multiply-augmented sets we find that the electrical properties of the rare gas atoms converge smoothly to values that are in excellent agreement with the available experimental data and/or previously computed results. As a further test of the basis sets presented here, the dipole polarizabilities of the F− and Cl− anions and of the HCl and N2 molecules are also reported
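
The even-tempered augmentation described above can be sketched as follows; the function and the exponents are illustrative assumptions, not values from the paper.

```python
def even_tempered_diffuse(exponents, n_extra):
    """Append n_extra diffuse Gaussian exponents in an even-tempered
    progression: each new exponent continues the geometric ratio of the
    two smallest existing exponents in the shell."""
    exps = sorted(exponents, reverse=True)
    beta = exps[-1] / exps[-2]         # ratio of the two most diffuse
    for _ in range(n_extra):
        exps.append(exps[-1] * beta)
    return exps

# Doubly augmenting a small s-shell (illustrative exponents):
aug = even_tempered_diffuse([13.0, 2.0, 0.4], 2)
```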

  15. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    Science.gov (United States)

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  16. The Raman Spectrum of the Squarate (C4O4-2 Anion: An Ab Initio Basis Set Dependence Study

    Directory of Open Access Journals (Sweden)

    Miranda Sandro G. de

    2002-01-01

    Full Text Available The Raman excitation profile of the squarate anion, C4O4-2, was calculated using ab initio methods at the Hartree-Fock level using Linear Response Theory (LRT) for six excitation wavelengths: 632.5, 514.5, 488.0, 457.9, 363.8 and 337.1 nm. Five basis sets (6-31G*, 6-31+G*, cc-pVDZ, aug-cc-pVDZ and Sadlej's polarizability basis set) were investigated, aiming to evaluate the performance of the 6-31G* set for numerical convergence and computational cost relative to the larger basis sets. All basis sets reproduce the main spectroscopic features of the Raman spectrum of this anion over the excitation interval investigated. The 6-31G* basis set presented, on average, the same accuracy of numerical results as the larger sets but at a fraction of the computational cost, showing that it is suitable for the theoretical investigation of the squarate dianion and its complexes and derivatives.

  17. A choice of the parameters of NPP steam generators on the basis of vector optimization

    International Nuclear Information System (INIS)

    Lemeshev, V.U.; Metreveli, D.G.

    1981-01-01

    The optimization problem for the parameters of designed systems is considered as a multicriterion optimization problem. It is proposed to choose non-dominated parameters that are optimal in the Pareto sense. An algorithm is built on the basis of the necessary and sufficient non-dominance conditions to find non-dominated solutions. This algorithm has been employed to solve the problem of choosing optimal parameters for the counterflow shell-tube steam generator of an NPP of BRGD type [ru

  18. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.

  19. Strategies for reducing basis set superposition error (BSSE) in O/AU and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-01-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for the minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. Comparison of the binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane wave energies.

  1. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  2. Optimal set of selected uranium enrichments that minimizes blending consequences

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.

    1977-01-01

    Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required for either nuclear reactor fuel standardization or potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments
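
The mass balance underlying the blending model can be illustrated with a toy two-ingredient case; the function and numbers below are hypothetical, not the paper's mixed-integer model.

```python
def blend_fraction(e_high, e_low, e_target):
    """Mass fraction of the higher-enriched ingredient in a two-ingredient
    blend that hits a target enrichment (simple linear mass balance):
    w * e_high + (1 - w) * e_low = e_target."""
    if not e_low <= e_target <= e_high:
        raise ValueError("target must lie between ingredient enrichments")
    return (e_target - e_low) / (e_high - e_low)

# Blending 5.0 w/o and 2.0 w/o stocks down to a 3.2 w/o product:
w_high = blend_fraction(5.0, 2.0, 3.2)   # fraction of the 5.0 w/o stock
```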

  3. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 to a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the author's website.

  4. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems.

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-21

    chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 on a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the author's website.

  5. Ab initio localized basis set study of structural parameters and elastic properties of HfO2 polymorphs

    International Nuclear Information System (INIS)

    Caravaca, M A; Casali, R A

    2005-01-01

    The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of the P21/c, Pbca, Pnma, Fm3m, P42nmc and Pa3 phases of HfO2. Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and using a minimal basis set (simple zeta, SZ) improve calculated phase transition pressures with respect to the double-zeta basis set and LDA (DZ-LDA), and show similar accuracy to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state-of-the-art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter in the range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found a non-linear behaviour of the lattice parameters with pressure in the P21/c phase, showing a discontinuity of the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give transition pressures of 3.3 and 10.8 GPa for P21/c → Pbca and Pbca → Pnma, respectively, in accordance with different high-pressure experimental values.

  6. Cluster analysis by optimal decomposition of induced fuzzy sets

    Energy Technology Data Exchange (ETDEWEB)

    Backer, E

    1978-01-01

    Nonsupervised pattern recognition is addressed and the concept of fuzzy sets is explored in order to provide the investigator (data analyst) additional information supplied by the pattern class membership values apart from the classical pattern class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature of the fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the method suggested are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)

  7. Optimality Conditions in Differentiable Vector Optimization via Second-Order Tangent Sets

    International Nuclear Information System (INIS)

    Jimenez, Bienvenido; Novo, Vicente

    2004-01-01

    We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given

  8. Nuclear-electronic orbital reduced explicitly correlated Hartree-Fock approach: Restricted basis sets and open-shell systems

    International Nuclear Information System (INIS)

    Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon

    2015-01-01

    The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF− and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN+, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects.

  9. Nuclear-electronic orbital reduced explicitly correlated Hartree-Fock approach: Restricted basis sets and open-shell systems

    Energy Technology Data Exchange (ETDEWEB)

    Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu [Department of Chemistry, University of Illinois at Urbana-Champaign, 600 South Mathews Ave., Urbana, Illinois 61801 (United States)

    2015-06-07

    The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF{sup −} and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN{sup +}, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects.

  10. Push it to the limit: Characterizing the convergence of common sequences of basis sets for intermolecular interactions as described by density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Witte, Jonathon [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Neaton, Jeffrey B. [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Physics, University of California, Berkeley, California 94720 (United States); Kavli Energy Nanosciences Institute at Berkeley, Berkeley, California 94720 (United States); Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)

    2016-05-21

    With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen’s pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
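The counterpoise correction mentioned in the record above (the Boys-Bernardi scheme) combines three single-point energies: the dimer in its own basis and each monomer in the full dimer basis. A minimal sketch with hypothetical water-dimer energies (the numbers are illustrative, not from the study):

```python
def counterpoise_interaction_energy(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
    """Boys-Bernardi counterpoise (CP) corrected interaction energy.

    Each monomer energy is evaluated in the full dimer basis (ghost
    functions on the partner's atoms), which cancels the basis set
    superposition error (BSSE) that artificially deepens binding in
    small bases.
    """
    return e_dimer - e_a_dimer_basis - e_b_dimer_basis

# Hypothetical single-point energies (hartree) for a water dimer:
e_int = counterpoise_interaction_energy(-152.1200, -76.0565, -76.0555)
# e_int is about -0.0080 hartree, i.e. roughly -5 kcal/mol
```

The uncorrected interaction energy would instead subtract monomer energies computed in the monomer bases, which in small bases overbinds the complex.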

  11. Optimal projection of observations in a Bayesian setting

    KAUST Repository

    Giraldi, Loic

    2018-03-18

    Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in presence of overabundant data. Three different optimal projections of the observations are proposed based on information theory: the projection that minimizes the Kullback–Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback–Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model; that a basis of the subspace can be computed as the solution to a generalized eigenvalue problem; that an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace needed to exactly conserve the mutual information between the input and the output of the models is less than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches, and to highlight their advantages compared to standard approaches based on the principal component analysis of the observations.

  12. On the use of Locally Dense Basis Sets in the Calculation of EPR Hyperfine Couplings

    DEFF Research Database (Denmark)

    Hedegård, Erik D.; Sauer, Stephan P. A.; Milhøj, Birgitte O.

    2013-01-01

    The usage of locally dense basis sets in the calculation of Electron Paramagnetic Resonance (EPR) hyperfine coupling constants is investigated at the level of Density Functional Theory (DFT) for two model systems of biologically important transition metal complexes: one for the active site in the c...

  13. On the use of locally dense basis sets in the calculation of EPR hyperfine couplings

    DEFF Research Database (Denmark)

    Milhøj, Birgitte Olai; Hedegård, Erik D.; Sauer, Stephan P. A.

    2013-01-01

    The usage of locally dense basis sets in the calculation of Electron Paramagnetic Resonance (EPR) hyperfine coupling constants is investigated at the level of Density Functional Theory (DFT) for two model systems of biologically important transition metal complexes: one for the active site in the c...

  14. Approximating the Pareto set of multiobjective linear programs via robust optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a

  15. BLANK SHAPE OPTIMIZATION ON DEEP DRAWING OF A TWIN ELLIPTICAL CUP USING THE REDUCED BASIS TECHNIQUE METHOD

    Directory of Open Access Journals (Sweden)

    Mahdi Hasanzadeh Golshani

    2015-08-01

    In this thesis project, initial blank shape optimization of a twin elliptical cup was studied to reduce the earring phenomenon in the deep drawing of anisotropic sheet. The purpose of this study is to optimize the initial blank so as to reduce the ear height. The optimization was carried out using a finite element method approach coupled with Taguchi design of experiments and the reduced basis technique. The deep drawing process was simulated in the FEM software ABAQUS 6.12. The results of the optimization show that the earring height, the number of design variables, and the process time can all be reduced using these methods. After the optimization process with the proposed method, the maximum earring height was reduced from 21.08 mm to 0.07 mm, and to 0 in some directions. The proposed optimization design allows designers to select practical basis shapes, which leads to better results at the end of the optimization, fewer design variables, and no repetition of the optimization steps for indirect shapes.

  16. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  17. Global Optimization for Transport Network Expansion and Signal Setting

    OpenAIRE

    Liu, Haoxiang; Wang, David Z. W.; Yue, Hao

    2015-01-01

    This paper proposes a model to address an urban transport planning problem involving combined network design and signal setting in a saturated network. Conventional transport planning models usually deal with the network design problem and signal setting problem separately. However, the fact that network capacity design and capacity allocation determined by network signal setting combine to govern the transport network performance requires the optimal transport planning to consider the two pr...

  18. The Bethe Sum Rule and Basis Set Selection in the Calculation of Generalized Oscillator Strengths

    DEFF Research Database (Denmark)

    Cabrera-Trujillo, Remigio; Sabin, John R.; Oddershede, Jens

    1999-01-01

    Fulfillment of the Bethe sum rule may be construed as a measure of basis set quality for atomic and molecular properties involving the generalized oscillator strength distribution. It is first shown that, in the case of a complete basis, the Bethe sum rule is fulfilled exactly in the random phase...

  19. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    Science.gov (United States)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
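The single-level and two-point extrapolation schemes this review surveys share a common inverse-power ansatz. A hedged sketch of the classic two-point form (the X^-3 model is the common Helgaker-type choice; the energies below are hypothetical, not taken from the review):

```python
def cbs_two_point(e_x, e_y, x, y, power=3):
    """Extrapolate the correlation energy to the complete basis set
    (CBS) limit assuming E(X) = E_CBS + A / X**power.

    Writing that model for two cardinal numbers x and y and eliminating
    A gives the closed form below (Helgaker-style X^-3 when power=3).
    """
    return (x ** power * e_x - y ** power * e_y) / (x ** power - y ** power)

# Hypothetical CCSD(T) correlation energies (hartree) in cc-pVTZ (X=3)
# and cc-pVQZ (X=4) bases:
e_cbs = cbs_two_point(-0.30500, -0.31500, 3, 4)
# The extrapolated value lies below both finite-basis energies.
```

Single-level extrapolation from one raw energy, as discussed in the review, instead fixes the ratio A/E (or an equivalent parameter) from calibration data so that only one calculation is required.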

  20. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Image thresholding is one of the processing techniques used to provide a high-quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor image thresholding output. By treating the image as a fuzzy set, several fuzzy thresholding techniques have been proposed to overcome these obstacles during threshold selection. In this paper, we propose an algorithm for image thresholding that uses ultrafuzziness optimization to decrease the uncertainty in the fuzzy system by means of type II fuzzy sets. The optimization involves measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed image thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.
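The idea of scoring a threshold with a type II membership band can be sketched as follows. This is only an illustration, not the authors' algorithm: it bounds each membership by mu**(1/alpha) and mu**alpha as in type II fuzzy sets, but selects the threshold that minimizes the resulting fuzziness band (crispest background/object split); the membership function and the toy image are assumptions.

```python
def fuzzy_threshold(pixels, levels=256, alpha=2.0):
    """Sketch of fuzzy threshold selection with a type II membership band.

    For each candidate threshold, background and object memberships are
    the closeness of each gray level to its class mean; the type II band
    mu**(1/alpha) - mu**alpha measures the remaining uncertainty, and
    the threshold with the smallest average band (crispest split) wins.
    Illustrative variant, not the exact published algorithm.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1

    best_t, best_band = None, float("inf")
    for t in range(1, levels - 1):
        lo = [(g, h) for g, h in enumerate(hist[: t + 1]) if h]
        hi = [(g, h) for g, h in enumerate(hist) if h and g > t]
        if not lo or not hi:
            continue  # one class empty: not a valid split
        m0 = sum(g * h for g, h in lo) / sum(h for _, h in lo)
        m1 = sum(g * h for g, h in hi) / sum(h for _, h in hi)
        band = 0.0
        for g, h in enumerate(hist):
            if not h:
                continue
            mean = m0 if g <= t else m1
            mu = 1.0 / (1.0 + abs(g - mean) / (levels - 1))
            band += h * (mu ** (1.0 / alpha) - mu ** alpha)
        band /= len(pixels)
        if band < best_band:
            best_band, best_t = band, t
    return best_t

# Toy bimodal "image": a dark cluster near 40 and a bright one near 200.
pix = [38, 40, 42, 45, 41, 198, 200, 202, 205, 199]
t = fuzzy_threshold(pix)  # falls between the two clusters
```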

  1. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    Science.gov (United States)

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.

  2. Generation of Optimal Basis Functions for Reconstruction of Power Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moonghu [Sejong Univ., Seoul (Korea, Republic of)

    2014-05-15

    This study proposes GMDH to find not only the best functional form but also the optimal parameters that describe the power distribution most accurately. A total of 1,060 cases of axially one-dimensional, 20-node core power distributions were generated by a 3-dimensional core analysis code covering BOL to EOL core burnup histories to validate the method. Five axial box powers at the in-core detectors are considered as measurements. The axial power shapes reconstructed with the GMDH method are compared to the reference power shapes. The results show that the proposed method is very robust and accurate compared with the spline fitting method. The GMDH analysis can thus provide optimal basis functions for core power shape reconstruction: from the five in-core detector snapshots, the 20-node power distribution is successfully reconstructed. The effectiveness of the method is demonstrated by comparing the results with spline fitting for BOL, saddle and top-skewed power shapes.
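Reconstruction of a 20-node axial shape from five detector readings can be illustrated with a fixed sine basis fitted by least squares. This is a hedged sketch of generic basis-function reconstruction, not of GMDH itself (GMDH would instead grow the best polynomial basis from the data); the detector positions and the "true" shape are hypothetical.

```python
import math

def solve_linear(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

def reconstruct_power(readings, det_pos, n_nodes=20, n_modes=3):
    """Least-squares fit of sine modes sin(k*pi*z) (core height scaled
    to [0, 1]) to the detector readings, then evaluation at the centers
    of the n_nodes axial nodes."""
    A = [[math.sin((k + 1) * math.pi * z) for k in range(n_modes)]
         for z in det_pos]
    ata = [[sum(A[i][r] * A[i][c] for i in range(len(det_pos)))
            for c in range(n_modes)] for r in range(n_modes)]
    atb = [sum(A[i][r] * readings[i] for i in range(len(det_pos)))
           for r in range(n_modes)]
    coef = solve_linear(ata, atb)  # normal equations (A^T A) c = A^T b
    return [sum(coef[k] * math.sin((k + 1) * math.pi * (j + 0.5) / n_nodes)
                for k in range(n_modes)) for j in range(n_nodes)]

# Five hypothetical detector elevations (fraction of core height) and a
# bottom-skewed true shape used to generate the readings:
det = [0.1, 0.3, 0.5, 0.7, 0.9]
true = lambda z: math.sin(math.pi * z) + 0.3 * math.sin(2 * math.pi * z)
shape = reconstruct_power([true(z) for z in det], det)
```

Because the true shape lies in the span of the three modes, the five-point least-squares fit recovers it exactly; a real power shape would be recovered only approximately, which is where an optimized (e.g. GMDH-derived) basis pays off.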

  3. Estimation of isotropic nuclear magnetic shieldings in the CCSD(T) and MP2 complete basis set limit using affordable correlation calculations

    DEFF Research Database (Denmark)

    Kupka, Teobald; Stachów, Michal; Kaminsky, Jakub

    2013-01-01

    A linear correlation between isotropic nuclear magnetic shielding constants for seven model molecules (CH2O, H2O, HF, F2, HCN, SiH4 and H2S) calculated with 37 methods (34 density functionals, RHF, MP2 and CCSD(T)), with the affordable pcS-2 basis set, and the corresponding complete basis set (CBS) results, estimated from calculations with the family of polarization-consistent pcS-n basis sets, is reported. This dependence was also supported by inspection of profiles of the deviation between CBS-estimated nuclear shieldings and those obtained with the significantly smaller basis sets pcS-2 and aug-cc-pVTZ-J for the selected set of 37 calculation methods. It was possible to formulate a practical approach of estimating the values of isotropic nuclear magnetic shielding constants at the CCSD(T)/CBS and MP2/CBS levels from affordable CCSD(T)/pcS-2, MP2/pcS-2 and DFT/CBS calculations with pcS-n basis sets. The proposed method...

  4. Ab initio localized basis set study of structural parameters and elastic properties of HfO{sub 2} polymorphs

    Energy Technology Data Exchange (ETDEWEB)

    Caravaca, M A [Facultad de Ingenieria, Universidad Nacional del Nordeste, Avenida Las Heras 727, 3500-Resistencia (Argentina); Casali, R A [Facultad de Ciencias Exactas y Naturales y Agrimensura, Universidad Nacional del Nordeste, Avenida Libertad, 5600-Corrientes (Argentina)

    2005-09-21

    The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of the P2{sub 1}/c, Pbca, Pnma, Fm3m, P4{sub 2}nmc and Pa3 phases of HfO{sub 2}. Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and using a minimal basis set (simple zeta, SZ) improve calculated phase transition pressures with respect to the double-zeta basis set and LDA (DZ-LDA), and show similar accuracy to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state-of-the-art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter in the range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found a non-linear behaviour of the lattice parameters with pressure in the P2{sub 1}/c phase, showing a discontinuity of the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give transition pressures of 3.3 and 10.8 GPa for P2{sub 1}/c {yields} Pbca and Pbca {yields} Pnma, respectively, in accordance with different high-pressure experimental values.

  5. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)
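The constrained cubic profile at the heart of the CIP method can be written down explicitly: on a cell [0, dx] the interpolant matches the function value and the first derivative at both ends, the derivative information being supplied by the spatial derivative of the original equation. A minimal one-cell sketch (the higher-order variants in the record extend this same construction):

```python
def cip_coefficients(f0, f1, d0, d1, dx):
    """Cubic CIP profile F(x) = a*x**3 + b*x**2 + d0*x + f0 on [0, dx],
    constrained by the values (f0, f1) and first derivatives (d0, d1)
    at the two cell ends; solving F(dx) = f1 and F'(dx) = d1 for the
    remaining unknowns gives a and b."""
    a = (d0 + d1) / dx ** 2 + 2.0 * (f0 - f1) / dx ** 3
    b = 3.0 * (f1 - f0) / dx ** 2 - (2.0 * d0 + d1) / dx
    return a, b

# The profile reproduces a cubic exactly: f(x) = x**3 on [0, 2] has
# f0 = 0, f1 = 8, d0 = 0, d1 = 12.
a, b = cip_coefficients(0.0, 8.0, 0.0, 12.0, 2.0)
# a == 1.0 and b == 0.0
```

Because both the value and the derivative are carried at every grid point, the subgrid profile stays consistent with the master equation, which is what gives CIP its accuracy on coarse grids.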

  6. Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set

    NARCIS (Netherlands)

    Paschoal, D.; Fonseca Guerra, C.; de Oliveira, M.A.L.; Ramalho, T.C.; Dos Santos, H.F.

    2016-01-01

    Predicting NMR properties is a valuable tool to assist the experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable to calculate the

  7. Optimal Set Anode Potentials Vary in Bioelectrochemical Systems

    KAUST Repository

    Wagner, Rachel C.

    2010-08-15

    In bioelectrochemical systems (BESs), the anode potential can be set to a fixed voltage using a potentiostat, but there is no accepted method for defining an optimal potential. Microbes can theoretically gain more energy by reducing a terminal electron acceptor with a more positive potential, for example oxygen compared to nitrate. Therefore, more positive anode potentials should allow microbes to gain more energy per electron transferred than a lower potential, but this can only occur if the microbe has metabolic pathways capable of capturing the available energy. Our review of the literature shows that there is a general trend of improved performance using more positive potentials, but there are several notable cases where biofilm growth and current generation improved or only occurred at more negative potentials. This suggests that even with diverse microbial communities, it is primarily the potential of the terminal respiratory proteins used by certain exoelectrogenic bacteria, and to a lesser extent the anode potential, that determines the optimal growth conditions in the reactor. Our analysis suggests that additional bioelectrochemical investigations of both pure and mixed cultures, over a wide range of potentials, are needed to better understand how to set and evaluate optimal anode potentials for improving BES performance. © 2010 American Chemical Society.

  8. Optimization of ultrasonic arrays design and setting using a differential evolution

    International Nuclear Information System (INIS)

    Puel, B.; Chatillon, S.; Calmon, P.; Lesselier, D.

    2011-01-01

    Optimization of both the design and the settings of phased arrays can be difficult when performed manually via parametric studies. An optimization method based on an evolutionary algorithm and numerical simulation is proposed and evaluated. Randomized Adaptive Differential Evolution has been adapted to meet the specificities of non-destructive testing applications; in particular, multi-objective problems are addressed through the concept of Pareto-optimal sets of solutions. The algorithm has been implemented and connected to the ultrasonic simulation modules of the CIVA software, used as the forward model. The efficiency of the method is illustrated on two realistic cases of application: optimization of the position and delay laws of a flexible array inspecting a nozzle, treated as a mono-objective problem; and optimization of the design of a surrounded array and its delay laws, treated as a constrained bi-objective problem. (authors)
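The Differential Evolution family the authors adapt can be sketched with the classic DE/rand/1/bin scheme. A hedged sketch: the analytic objective below merely stands in for the CIVA-simulated inspection criterion, and all control parameters (population size, F, CR) are illustrative assumptions, not the values used in the study.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=200, seed=1):
    """Minimize f over box bounds with DE/rand/1/bin.

    Each trial vector mixes a base individual with a scaled difference
    of two others (mutation), crosses it with the current individual
    (binomial crossover), and replaces it only if it is no worse
    (greedy selection). The adaptive variant in the record additionally
    tunes F and CR during the run.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy stand-in for a simulated inspection criterion: a shifted sphere
# whose minimum (the "best delay law") sits at 0.5 in every coordinate.
x, fx = differential_evolution(lambda v: sum((vi - 0.5) ** 2 for vi in v),
                               [(-2.0, 2.0)] * 3)
```

For the bi-objective case described above, the same loop is run with Pareto dominance replacing the scalar comparison in the selection step.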

  9. Optimal regional biases in ECB interest rate setting

    NARCIS (Netherlands)

    Arnold, I.J.M.

    2005-01-01

    This paper uses a simple model of optimal monetary policy to consider whether the influence of national output and inflation rates on ECB interest rate setting should equal a country’s weight in the eurozone economy. The findings depend on assumptions regarding interest rate elasticities, exchange

  10. Optimization Settings in the Fuzzy Combined Mamdani PID Controller

    Science.gov (United States)

    Kudinov, Y. I.; Pashchenko, F. F.; Pashchenko, A. F.; Kelina, A. Y.; Kolesnikov, V. A.

    2017-11-01

    In the present work, the problem of determining the optimal settings of a fuzzy parallel proportional-integral-derivative (PID) controller is considered for the control of nonlinear plants, a task that classical linear PID controllers cannot always perform. In contrast to linear PID controllers, no analytical methods exist for calculating the settings of fuzzy PID controllers. In this paper, we develop a numerical optimization approach to determining the coefficients of a fuzzy PID controller. A decomposition method of optimization is proposed, the essence of which is as follows. All homogeneous coefficients are distributed into groups, for example the three error coefficients, the three error-change coefficients, and the three output coefficients of the P, I and D components. A search algorithm is then applied to each group in turn to determine the coefficients that yield a transition process satisfying all applicable constraints. Thus, with the help of Matlab and Simulink, the coefficients of a fuzzy PID controller meeting the accepted limitations on the transition process were found in reasonable time.
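    The decomposition idea, improving each homogeneous group of coefficients while the others stay fixed, can be sketched as follows. The quadratic objective is a toy stand-in for the closed-loop performance measure, and the group-wise random local search replaces whatever search algorithm is chosen in practice:

```python
import random

def tune_by_groups(objective, groups, x0, rounds=30, step=0.5, seed=0):
    """Cycle over coefficient groups, improving each group while the
    others are held fixed (the decomposition method described above)."""
    rng = random.Random(seed)
    x = list(x0)
    best = objective(x)
    for _ in range(rounds):
        for group in groups:            # e.g. error / delta-error / output gains
            for _ in range(20):         # simple random local search per group
                cand = list(x)
                for j in group:
                    cand[j] += rng.gauss(0.0, step)
                val = objective(cand)
                if val < best:
                    x, best = cand, val
        step *= 0.9                     # shrink the search radius each round
    return x, best

# Toy surrogate for the closed-loop cost: distance to known-good gains.
target = [1.0, 2.0, 0.5, 0.1, 0.2, 0.05, 3.0, 1.5, 0.8]
cost = lambda k: sum((a - b) ** 2 for a, b in zip(k, target))
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
gains, c = tune_by_groups(cost, groups, x0=[0.0] * 9)
```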

  11. An energy-saving set-point optimizer with a sliding mode controller for automotive air-conditioning/refrigeration systems

    International Nuclear Information System (INIS)

    Huang, Yanjun; Khajepour, Amir; Ding, Haitao; Bagheri, Farshid; Bahrami, Majid

    2017-01-01

    Highlights: • A novel two-layer energy-saving controller for automotive A/C-R systems is developed. • A set-point optimizer in the outer loop is designed based on the steady-state model. • A sliding mode controller in the inner loop is built. • Extensive experimental studies show that about 9% of energy can be saved by this controller. - Abstract: This paper presents an energy-saving controller for automotive air-conditioning/refrigeration (A/C-R) systems. With their extensive application in homes, industry, and vehicles, A/C-R systems consume considerable amounts of energy. The proposed controller consists of two layers operating on different time scales. The outer, slow time-scale layer, a set-point optimizer, finds the energy-efficient set points using the steady-state model, whereas the inner, fast time-scale layer tracks the obtained set points. In the inner loop, a sliding mode controller (SMC), chosen for its robustness, is utilized to track the set point of the cargo temperature. The currently used on/off controller is presented and employed as a baseline for comparison with the proposed controller. More importantly, experimental results under several disturbance scenarios are analysed to demonstrate how the proposed controller improves performance while reducing energy consumption by 9% compared with the on/off controller. The controller is suitable for any type of A/C-R system, even though it is applied here to an automotive one.
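    A minimal sketch of the inner-loop idea: a boundary-layer sliding mode law tracking a temperature set point. The first-order plant, gains, and ambient temperature below are illustrative assumptions, not the paper's validated A/C-R model:

```python
def simulate_smc(T0=25.0, T_set=4.0, dt=0.1, steps=600, k=5.0, a=0.05, b=2.0):
    """Track a cargo-temperature set point with the saturated sliding-mode
    law u = sat(k*s) on an assumed first-order plant
    dT/dt = a*(T_amb - T) - b*u (toy model, not the paper's plant)."""
    T_amb = 30.0                       # assumed ambient temperature, deg C
    T = T0
    history = []
    for _ in range(steps):
        s = T - T_set                  # sliding surface = tracking error
        u = max(0.0, min(1.0, k * s))  # saturated switching control in [0, 1]
        T += dt * (a * (T_amb - T) - b * u)
        history.append(T)
    return history

traj = simulate_smc()
```

With these gains the cooling effort saturates while the error is large, then settles inside the boundary layer with a small steady-state offset.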

  12. Multivariate optimization of production systems

    International Nuclear Information System (INIS)

    Carroll, J.A.; Horne, R.N.

    1992-01-01

    This paper reports that, mathematically, optimization involves finding the extreme values of a function. Given a function of several variables, Z = f(x_1, x_2, ..., x_n), an optimization scheme will find the combination of these variables that produces an extreme value of the function, whether a minimum or a maximum. Many examples of optimization exist. For instance, if a function gives an investor's expected return on the basis of different investments, numerical optimization of the function will determine the mix of investments that yields the maximum expected return. This is the basis of modern portfolio theory. If a function gives the difference between a set of data and a model of the data, numerical optimization of the function will produce the best fit of the model to the data. This is the basis for nonlinear parameter estimation. Similar examples can be given for network analysis, queuing theory, decision analysis, etc.
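    The model-fitting example in the passage, minimizing the mismatch between data and a model, can be made concrete with a least-squares line fit by gradient descent:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Minimize Z(a, b) = mean((a*x + b - y)^2) by gradient descent,
    the 'best fit of the model to the data' case from the text."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * ga
        b -= lr * gb
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # data generated exactly by y = 2x + 1
a, b = fit_line(xs, ys)
```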

  13. Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets

    Science.gov (United States)

    Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.

    2011-10-01

    The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit in an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose the results of Finite Field calculations up to the level of Coupled Cluster theory including Single, Double, Triple, Quadruple and perturbative Pentuple excitations [CCSDTQ(P)] were used, in conjunction with suitable extrapolations of energies obtained using augmented and doubly augmented Dunning correlation-consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels of electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated according to Born-Oppenheimer Molecular Dynamical simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
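    As a generic illustration of basis-set extrapolation, the widely used two-point X^-3 formula for correlation energies is shown below. This is an assumption for illustration, not necessarily the exact extrapolation scheme employed in this work:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point extrapolation assuming E(X) = E_CBS + A * X**-3
    (Helgaker-style), where X is the cardinal number of the basis."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Synthetic energies that follow the model exactly: E_CBS = -76.40, A = 0.25
e_tz = -76.40 + 0.25 / 3 ** 3   # "triple-zeta" energy, X = 3
e_qz = -76.40 + 0.25 / 4 ** 3   # "quadruple-zeta" energy, X = 4
e_cbs = cbs_two_point(e_qz, 4, e_tz, 3)   # recovers -76.40
```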

  14. Optimal Interest-Rate Setting in a Dynamic IS/AS Model

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    This note deals with interest-rate setting in a simple dynamic macroeconomic setting. The purpose is to present some basic and central properties of an optimal interest-rate rule. The model framework predates the New-Keynesian paradigm of the late 1990s and onwards (it is accordingly dubbed “Old...

  15. An editor for the maintenance and use of a bank of contracted Gaussian basis set functions

    International Nuclear Information System (INIS)

    Taurian, O.E.

    1984-01-01

    A bank of basis sets to be used in ab-initio calculations has been created. The bases are sets of contracted Gaussian type orbitals to be used as input to any molecular integral package. In this communication we shall describe the organization of the bank and a portable editor program which was designed for its maintenance and use. This program is operated by commands and it may be used to obtain any kind of information about the bases in the bank as well as to produce output to be directly used as input for different integral programs. The editor may also be used to format basis sets in the conventional way utilized in publications, as well as to generate a complete, or partial, manual of the contents of the bank if so desired. (orig.)

  16. Constructing DNA Barcode Sets Based on Particle Swarm Optimization.

    Science.gov (United States)

    Wang, Bin; Zheng, Xuedong; Zhou, Shihua; Zhou, Changjun; Wei, Xiaopeng; Zhang, Qiang; Wei, Ziqi

    2018-01-01

    Following the completion of the human genome project, a large amount of high-throughput biological data was generated. To analyze these data, massively parallel sequencing, namely next-generation sequencing, was rapidly developed. DNA barcodes attached at the beginning or end of sequencing reads are used to identify which sample each sequence belongs to. Constructing DNA barcode sets provides the candidate barcodes for this application. To increase the accuracy of DNA barcode sets, a particle swarm optimization (PSO) algorithm has been modified and used to construct the DNA barcode sets in this paper. Compared with extant results, some lower bounds on DNA barcode sets are improved. The results show that the proposed algorithm is effective in constructing DNA barcode sets.
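    A simple greedy filter conveys what constructing a barcode set involves: collecting words that satisfy a minimum pairwise Hamming distance and a GC-content window. This is a stand-in for illustration only; the paper's modified PSO searches the same kind of constraint space more effectively:

```python
import itertools

def hamming(a, b):
    """Number of positions at which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def build_barcode_set(length, min_dist, gc_bounds=(0.4, 0.6)):
    """Greedily collect DNA barcodes of the given length whose pairwise
    Hamming distance is at least min_dist and whose GC content lies in
    gc_bounds (a greedy stand-in for the PSO-based construction)."""
    chosen = []
    for cand in itertools.product("ACGT", repeat=length):
        word = "".join(cand)
        gc = (word.count("G") + word.count("C")) / length
        if not gc_bounds[0] <= gc <= gc_bounds[1]:
            continue
        if all(hamming(word, w) >= min_dist for w in chosen):
            chosen.append(word)
    return chosen

codes = build_barcode_set(length=6, min_dist=3)
```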

  17. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Science.gov (United States)

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
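    The PSO ingredient of the methodology can be sketched with a plain global-best particle swarm on a convex test function. This is a generic PSO, without the consensus mechanism or the Trust-Tech and local-refinement stages:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=150, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Plain global-best particle swarm minimization of f over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pcost = [f(p) for p in pos]
    g = min(range(n_particles), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]        # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

best, val = pso(lambda v: sum(x * x for x in v), dim=5, bounds=(-5.0, 5.0))
```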

  18. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  19. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete-time multiperiod mean variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  20. Atomic Cholesky decompositions: A route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency

    Science.gov (United States)

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-01

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
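    The core operation, a pivoted (incomplete) Cholesky decomposition truncated at a threshold, can be sketched on a small symmetric positive definite matrix standing in for the atomic two-electron integral matrix:

```python
def pivoted_cholesky(M, tol=1e-8):
    """Pivoted incomplete Cholesky: repeatedly peel off a rank-one term
    built from the largest remaining diagonal element, stopping once
    that element falls below tol, so that M ~ sum_k outer(v_k, v_k)."""
    n = len(M)
    resid = [row[:] for row in M]      # working copy of the residual
    vecs = []
    while True:
        p = max(range(n), key=lambda i: resid[i][i])
        if resid[p][p] < tol:
            break
        d = resid[p][p] ** 0.5
        v = [resid[i][p] / d for i in range(n)]
        vecs.append(v)
        for i in range(n):             # subtract the rank-one update
            for j in range(n):
                resid[i][j] -= v[i] * v[j]
    return vecs

# Small symmetric positive definite stand-in for the integral matrix.
M = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
vecs = pivoted_cholesky(M)
approx = [[sum(v[i] * v[j] for v in vecs) for j in range(3)] for i in range(3)]
```

Tightening or loosening `tol` trades the number of retained Cholesky vectors against accuracy, which mirrors the tunable accuracy/efficiency balance described in the abstract.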

  1. Performance assessment of density functional methods with Gaussian and Slater basis sets using 7σ orbital momentum distributions of N2O

    Science.gov (United States)

    Wang, Feng; Pang, Wenning; Duffy, Patrick

    2012-12-01

    The performance of a number of density functional methods commonly used in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP and X3LYP, as well as the Hartree-Fock (HF) method) has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, obtained by a Fourier transform into momentum space from single-point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties, and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions of this orbital occur in the small-momentum region (i.e. the large-r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. The performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to that of newer models such as X3LYP and SAOP. The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the

  2. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating boundary-condition optimization problems. In the numerical part, we examine the effect on the optimal designs of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.

  3. Spectral properties of minimal-basis-set orbitals: Implications for molecular electronic continuum states

    Science.gov (United States)

    Langhoff, P. W.; Winstead, C. L.

    Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.

  4. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    International Nuclear Information System (INIS)

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.; Genovese, Luigi

    2017-01-01

    We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the use of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.

  5. Doubly stochastic radial basis function methods

    Science.gov (United States)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution is determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant shape parameter formulation (in terms of accuracy at comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
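    The LOOCV estimation underlying the method can be illustrated by brute force: for each candidate shape parameter, leave out one node at a time, rebuild the Gaussian-RBF interpolant, and accumulate the prediction error. This is a deterministic sketch of the selection criterion, not the doubly stochastic procedure itself:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def loocv_error(xs, ys, eps):
    """Sum of squared leave-one-out prediction errors of a Gaussian-RBF
    interpolant with shape parameter eps."""
    err = 0.0
    for i in range(len(xs)):
        xi = [x for j, x in enumerate(xs) if j != i]
        yi = [y for j, y in enumerate(ys) if j != i]
        A = [[math.exp(-(eps * (p - q)) ** 2) for q in xi] for p in xi]
        coef = solve(A, yi)
        pred = sum(c * math.exp(-(eps * (xs[i] - q)) ** 2)
                   for c, q in zip(coef, xi))
        err += (pred - ys[i]) ** 2
    return err

xs = [k / 7 for k in range(8)]
ys = [x * x for x in xs]
best_eps = min([0.5, 1.0, 2.0, 4.0, 8.0],
               key=lambda e: loocv_error(xs, ys, e))
```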

  6. Laser-Doppler velocimetry on the basis of frequency-selective absorption: set-up and test of a Doppler Global Velocimeter; Laser-Doppler-Velocimetry auf der Basis frequenzselektiver Absorption: Aufbau und Einsatz eines Doppler Global Velocimeters

    Energy Technology Data Exchange (ETDEWEB)

    Roehle, I.

    1999-11-01

    A Doppler Global Velocimeter (DGV) optimized for high-accuracy, three-component, time-averaged planar velocity measurements was set up in the framework of a PhD thesis. The anemometer was successfully applied to laboratory, wind tunnel, and test rig flows, and the measurement accuracy was investigated. A volumetric data set of the flow field inside an industrial combustion chamber was measured, containing about 400,000 vectors. DGV measurements in the intake of a jet engine model were carried out using a flexible fibre-bundle borescope, and the flow structure in the wake of a car model in a wind tunnel was investigated. The measurement accuracy of the DGV system is ±0.5 m/s when operated under ideal conditions. This study can serve as a basis for evaluating the use of DGV in aerodynamic development experiments. (orig.)

  7. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective approach for optimizing surfactant enhanced aquifer remediation (SEAR) strategies for removing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key to such studies. However, previous work has generally relied on a stand-alone surrogate model and has rarely attempted to improve the surrogate's approximation accuracy by combining several methods. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and conduct a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model embedding the ES model was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model for 100 testing samples were below 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy.

  8. Evaluation of one-dimensional and two-dimensional volatility basis sets in simulating the aging of secondary organic aerosol with smog-chamber experiments.

    Science.gov (United States)

    Zhao, Bin; Wang, Shuxiao; Donahue, Neil M; Chuang, Wayne; Hildebrandt Ruiz, Lea; Ng, Nga L; Wang, Yangjun; Hao, Jiming

    2015-02-17

    We evaluate the one-dimensional volatility basis set (1D-VBS) and two-dimensional volatility basis set (2D-VBS) in simulating the aging of SOA derived from toluene and α-pinene against smog-chamber experiments. If we simulate the first-generation products with empirical chamber fits and the subsequent aging chemistry with a 1D-VBS or a 2D-VBS, the models mostly overestimate the SOA concentrations in the toluene oxidation experiments. This is because the empirical chamber fits include both first-generation oxidation and aging; simulating aging in addition to this results in double counting of the initial aging effects. If the first-generation oxidation is treated explicitly, the base-case 2D-VBS underestimates the SOA concentrations and O:C increase of the toluene oxidation experiments; it generally underestimates the SOA concentrations and overestimates the O:C increase of the α-pinene experiments. With the first-generation oxidation treated explicitly, we could modify the 2D-VBS configuration individually for toluene and α-pinene to achieve good model-measurement agreement. However, we are unable to simulate the oxidation of both toluene and α-pinene with the same 2D-VBS configuration. We suggest that future models should implement parallel layers for anthropogenic (aromatic) and biogenic precursors, and that more modeling studies and laboratory research be done to optimize the "best-guess" parameters for each layer.
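    The partitioning step shared by 1D- and 2D-VBS models can be sketched with the standard equilibrium relation between a bin's saturation concentration C* and the organic-aerosol mass C_OA. This is a textbook VBS calculation shown for orientation, not a reproduction of the paper's chamber simulations:

```python
def partition_vbs(c_star, c_tot, seed_oa=0.0, iters=200):
    """Equilibrium gas-particle partitioning over volatility bins using
    the standard VBS relation xi_i = 1 / (1 + C*_i / C_OA), solved by
    fixed-point iteration for the organic-aerosol mass C_OA."""
    c_oa = max(0.5 * sum(c_tot) + seed_oa, 1e-6)   # initial guess
    for _ in range(iters):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]   # particle fractions
        c_oa = seed_oa + sum(c * x for c, x in zip(c_tot, xi))
    xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
    return c_oa, xi

# Three bins at saturation concentrations C* = 1, 10, 100 ug/m3, each
# holding 5 ug/m3 of total (gas + particle) organic mass.
c_oa, xi = partition_vbs([1.0, 10.0, 100.0], [5.0, 5.0, 5.0])
```

Aging chemistry in a VBS moves mass toward lower-C* bins, which raises the particle fractions xi and hence the simulated SOA mass.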

  9. Consistent structures and interactions by density functional theory with small atomic orbital basis sets.

    Science.gov (United States)

    Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas

    2015-08-07

    A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock exchange. The orbitals are expanded in Ahlrichs-type valence-double-zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods.

  11. 42 CFR 415.170 - Conditions for payment on a fee schedule basis for physician services in a teaching setting.

    Science.gov (United States)

    2010-10-01

    ... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE... BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a fee schedule basis...

  12. Optimizing Geographic Allotment of Photovoltaic Capacity in a Distributed Generation Setting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Urquhart, B.; Sengupta, M.; Keller, J.

    2012-09-01

    A multi-objective optimization was performed to allocate 2 MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability, in the form of ramp rates, was minimized. This resulted in an optimal solution set which provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while only decreasing annual energy generation by 3% relative to the maximum-generation allocation. To quantify the selected allotment mix, a metric was developed, called the ramp ratio, which compares the ramping magnitude when all capacity is allotted to a single location with the aggregate ramping magnitude in a distributed scenario. The ramp ratio quantifies simultaneously how much smoothing a distributed scenario would experience over single-site allotment and how much a single site is being under-utilized for its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
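    One plausible reading of the ramp-ratio metric can be sketched as follows; the exact definition in the paper may differ, so the weights, time series, and single-site reference below are illustrative assumptions:

```python
def max_ramp(series):
    """Largest absolute step-to-step change in a power time series."""
    return max(abs(b - a) for a, b in zip(series, series[1:]))

def ramp_ratio(site_series, weights):
    """Ratio of the worst aggregate ramp under a distributed allotment to
    the worst ramp when all capacity sits at a single reference site
    (an illustrative reading of the metric, not its exact form)."""
    aggregate = [sum(w * s[t] for w, s in zip(weights, site_series))
                 for t in range(len(site_series[0]))]
    single = [s * sum(weights) for s in site_series[0]]  # all capacity at site 0
    return max_ramp(aggregate) / max_ramp(single)

# Two sites whose anticorrelated cloud-driven ramps smooth each other out.
site_a = [1.0, 0.2, 1.0, 0.3, 1.0]
site_b = [0.3, 1.0, 0.2, 1.0, 0.3]
r = ramp_ratio([site_a, site_b], weights=[0.5, 0.5])
```

A ratio well below one indicates strong geographic smoothing relative to concentrating all capacity at one site.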

  13. Formulation of improved basis sets for the study of polymer dynamics through diffusion theory methods.

    Science.gov (United States)

    Gaspari, Roberto; Rapallo, Arnaldo

    2008-06-28

    In this work a new method is proposed for the choice of basis functions in diffusion theory (DT) calculations. This method, named hybrid basis approach (HBA), combines the two previously adopted long time sorting procedure (LTSP) and maximum correlation approximation (MCA) techniques; the first emphasizing contributions from the long time dynamics, the latter being based on the local correlations along the chain. In order to fulfill this task, the HBA procedure employs a first order basis set corresponding to a high order MCA one and generates upper order approximations according to LTSP. A test of the method is made first on a melt of cis-1,4-polyisoprene decamers where HBA and LTSP are compared in terms of efficiency. Both convergence properties and numerical stability are improved by the use of the HBA basis set whose performance is evaluated on local dynamics, by computing the correlation times of selected bond vectors along the chain, and on global ones, through the eigenvalues of the diffusion operator L. Further use of the DT with a HBA basis set has been made on a 71-mer of syndiotactic trans-1,2-polypentadiene in toluene solution, whose dynamical properties have been computed with a high order calculation and compared to the "numerical experiment" provided by the molecular dynamics (MD) simulation in explicit solvent. The necessary equilibrium averages have been obtained by a vacuum trajectory of the chain where solvent effects on conformational properties have been reproduced with a proper screening of the nonbonded interactions, corresponding to a definite value of the mean radius of gyration of the polymer in vacuum. Results show a very good agreement between DT calculations and the MD numerical experiment. This suggests a further use of DT methods with the necessary input quantities obtained by the only knowledge of some experimental values, i.e., the mean radius of gyration of the chain and the viscosity of the solution, and by a suitable vacuum

  14. Optimal testing input sets for reduced diagnosis time of nuclear power plant digital electronic circuits

    International Nuclear Information System (INIS)

    Kim, D.S.; Seong, P.H.

    1994-01-01

This paper describes the optimal testing input sets required for the fault diagnosis of nuclear power plant digital electronic circuits. For complicated systems such as very large scale integration (VLSI) circuits, nuclear power plants (NPP), and aircraft, testing is the major factor in the maintenance of the system. In particular, diagnosis time grows quickly with the complexity of the component. In this research, to reduce diagnosis time, the authors derived the optimal testing sets, that is, the minimal testing sets required for detecting a failure and for locating the failed component. Among many conventional methods, the technique presented by Hayes fits this approach to testing set generation best. However, this method has the following disadvantages: (a) it considers only simple networks; (b) it determines only whether the system is in a failed state or not and does not provide a way to locate the failed component. Therefore the authors have derived optimal testing input sets that resolve these problems in Hayes' method while preserving its advantages. When they applied the optimal testing sets to the automatic fault diagnosis system (AFDS), which incorporates the advanced fault diagnosis method of artificial intelligence technique, they found that fault diagnosis using the optimal testing sets makes testing the digital electronic circuits much faster than using exhaustive testing input sets; when they applied them to test the Universal (UV) Card, which is a nuclear power plant digital input/output solid state protection system card, they reduced the testing time by up to about 100 times
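The notion of a minimal testing input set can be illustrated as a set-cover problem (a generic greedy sketch, not Hayes' construction or the authors' extension; all test and fault names are hypothetical): each candidate test input detects some subset of faults, and tests are picked greedily until every fault is detected.

```python
def minimal_test_set(detects):
    """Greedy approximation of a minimal set of test inputs.

    detects: dict mapping test name -> set of faults that test detects.
    Returns a small list of tests that together detect every fault.
    """
    uncovered = set().union(*detects.values())
    chosen = []
    while uncovered:
        # Pick the test detecting the most still-uncovered faults.
        best = max(detects, key=lambda t: len(detects[t] & uncovered))
        chosen.append(best)
        uncovered -= detects[best]
    return chosen

# Hypothetical fault dictionary for a small digital card.
detects = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3", "f4"},
    "t3": {"f4"},
    "t4": {"f1", "f5"},
}
result = minimal_test_set(detects)   # → ['t2', 't4']
```

Two tests cover all five faults here, versus four for exhaustive testing; the same shrinkage, at much larger scale, is the source of the reported speedup.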

  15. Pseudo-atomic orbitals as basis sets for the O(N) DFT code CONQUEST

    Energy Technology Data Exchange (ETDEWEB)

    Torralba, A S; Brazdova, V; Gillan, M J; Bowler, D R [Materials Simulation Laboratory, UCL, Gower Street, London WC1E 6BT (United Kingdom); Todorovic, M; Miyazaki, T [National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047 (Japan); Choudhury, R [London Centre for Nanotechnology, UCL, 17-19 Gordon Street, London WC1H 0AH (United Kingdom)], E-mail: david.bowler@ucl.ac.uk

    2008-07-23

    Various aspects of the implementation of pseudo-atomic orbitals (PAOs) as basis functions for the linear scaling CONQUEST code are presented. Preliminary results for the assignment of a large set of PAOs to a smaller space of support functions are encouraging, and an important related proof on the necessary symmetry of the support functions is shown. Details of the generation and integration schemes for the PAOs are also given.

  16. Optimization of the primary collimator settings for fractionated IMRT stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Tobler, Matt; Leavitt, Dennis D.; Watson, Gordon

    2004-01-01

Advances in field-shaping techniques for stereotactic radiosurgery/radiotherapy have allowed dynamic adjustment of field shape with gantry rotation (dynamic conformal arc) in an effort to minimize dose to critical structures. Recent work evaluated the potential for increased sparing of dose to normal tissues when the primary collimator setting is optimized to only the size necessary to cover the largest shape of the dynamic micro multi-leaf field. Intensity-modulated radiotherapy (IMRT) is now a treatment option for patients receiving stereotactic radiotherapy treatments. This multisegmentation of the dose delivered through multiple fixed treatment fields provides for delivery of a uniform dose to the tumor volume while allowing sparing of critical structures, particularly for patients whose tumor volumes are less suited to rotational treatment. For these segmented fields, the total number of monitor units (MUs) delivered may be much greater than the number of MUs required if dose delivery occurred through an unmodulated treatment field. As a result, undesired dose, delivered as leakage through the leaves to tissues outside the area of interest, will be proportionally increased. This work will evaluate the role of optimization of the primary collimator setting for these IMRT treatment fields, and compare these results to treatment fields where the primary collimator settings have not been optimized

  17. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  18. Density functional theory calculations of the lowest energy quintet and triplet states of model hemes: role of functional, basis set, and zero-point energy corrections.

    Science.gov (United States)

    Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman

    2008-04-24

    We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and B97-1, OLYP, and TPSS functionals with 6-31G and 6-31G* basis sets. Only hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (DeltaEel), in several cases, resulting in the inversion of the sign of DeltaEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311G++(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on DeltaEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with small 6-31G and 6-31G* basis sets. Deviation of the computed frequency of the Fe-Im stretching mode from the experimental value with the basis set decreased in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results

  19. CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization

    DEFF Research Database (Denmark)

    Borges, Pedro Manuel F. C.

    2000-01-01

    This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...

  20. Symmetry-adapted basis sets automatic generation for problems in chemistry and physics

    CERN Document Server

    Avery, John Scales; Avery, James Emil

    2012-01-01

    In theoretical physics, theoretical chemistry and engineering, one often wishes to solve partial differential equations subject to a set of boundary conditions. This gives rise to eigenvalue problems of which some solutions may be very difficult to find. For example, the problem of finding eigenfunctions and eigenvalues for the Hamiltonian of a many-particle system is usually so difficult that it requires approximate methods, the most common of which is expansion of the eigenfunctions in terms of basis functions that obey the boundary conditions of the problem. The computational effort needed

  1. Forecasting the Optimal Factors of Formation of the Population Savings as the Basis for Investment Resources of the Regional Economy

    Directory of Open Access Journals (Sweden)

    Odintsova Tetiana M.

    2017-04-01

Full Text Available The article is aimed at studying the optimal factors of formation of the population savings as the basis for investment resources of the regional economy. A factorial (nonlinear) correlative-regression analysis of the formation of savings of the population of Ukraine was carried out. On its basis, a forecast of the optimal structure and volumes of formation of the population incomes was made, taking into consideration the impact of fundamental factors on these incomes. This approach makes it possible to identify the marginal volumes of tax burden, population savings, and capital investments directed to economic growth.

  2. Fuzzy resource optimization for safeguards

    International Nuclear Information System (INIS)

    Zardecki, A.; Markin, J.T.

    1991-01-01

    Authorization, enforcement, and verification -- three key functions of safeguards systems -- form the basis of a hierarchical description of the system risk. When formulated in terms of linguistic rather than numeric attributes, the risk can be computed through an algorithm based on the notion of fuzzy sets. Similarly, this formulation allows one to analyze the optimal resource allocation by maximizing the overall detection probability, regarded as a linguistic variable. After summarizing the necessary elements of the fuzzy sets theory, we outline the basic algorithm. This is followed by a sample computation of the fuzzy optimization. 10 refs., 1 tab
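The fuzzy aggregation idea sketched in this abstract can be illustrated in miniature (a heavily simplified stand-in, not the authors' algorithm: linguistic detection probabilities are modeled as triangular fuzzy numbers, the overall system value is taken as the weakest link via a common component-wise-minimum approximation, and alternatives are ranked by centroid defuzzification; all values are made up):

```python
def tri(a, b, c):
    """Triangular fuzzy number represented as (low, peak, high)."""
    return (a, b, c)

def fuzzy_min(x, y):
    """Component-wise minimum: a common approximation of the fuzzy
    minimum, modeling the weakest-link behavior of a safeguards chain."""
    return tuple(min(p, q) for p, q in zip(x, y))

def defuzzify(x):
    """Centroid of a triangular fuzzy number, used to rank alternatives."""
    return sum(x) / 3.0

# Linguistic detection probabilities for two resource-allocation options.
high   = tri(0.7, 0.85, 1.0)
medium = tri(0.4, 0.55, 0.7)
low    = tri(0.1, 0.25, 0.4)

# Option 1 spreads effort evenly; option 2 starves one function.
opt1 = fuzzy_min(medium, medium)
opt2 = fuzzy_min(high, low)
best = max((opt1, opt2), key=defuzzify)
```

The balanced allocation wins because the overall detection probability is dominated by the weakest function, which is the qualitative point of the hierarchical risk description.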

  3. Methodological basis for the optimization of a marine sea-urchin embryo test (SET) for the ecological assessment of coastal water quality.

    Science.gov (United States)

    Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo

    2010-05-01

The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as the test endpoint. Control size increase after 48 h incubation at 20 degrees C must meet an acceptability criterion of 218 microm. In order to avoid false positives, minimum values of 32 per thousand salinity, pH 7, and 2 mg/L oxygen, and a maximum of 40 microg/L NH(3) (NOEC), are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 degrees C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
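The degree-day correction mentioned for in situ testing can be written out as a short calculation (function and variable names are illustrative; the abstract only states the 12 degrees C developmental threshold, and the normalization shown here is a plain sum of degrees above that threshold):

```python
def degree_days(temps_by_day, threshold=12.0):
    """Accumulated degree-days above the developmental threshold."""
    return sum(max(t - threshold, 0.0) for t in temps_by_day)

def corrected_growth_rate(size_increase_um, temps_by_day):
    """Size increase per degree-day, letting deployments at different
    temperatures be compared on a common basis."""
    return size_increase_um / degree_days(temps_by_day)

# Two deployments with the same raw growth but different temperatures.
warm = corrected_growth_rate(218.0, [20.0, 20.0])  # 16 degree-days
cool = corrected_growth_rate(218.0, [16.0, 16.0])  #  8 degree-days
```

The cool deployment shows twice the per-degree-day growth, so identical raw size increases are not equivalent once temperature history is accounted for.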

  4. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    International Nuclear Information System (INIS)

    Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.

    2007-01-01

Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered to be solved when the Pareto optimal set is found, i.e., the set of non-dominated solutions. Multi-Objective Evolutionary Algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but, also, to obtain one solution from this Pareto set. Thus, the definition of a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the best region of the Pareto frontier in accordance with these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. This methodology is able to select efficiently the best Pareto-frontier region for the specified relative importance of the criteria
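The selection step this abstract describes can be sketched generically (the paper's weighted stress function is not given in the abstract, so a weighted Chebyshev distance to the ideal point stands in for it here; all data are illustrative):

```python
def non_dominated(points):
    """Filter a list of objective vectors (minimization) to the Pareto set."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in points if not any(dominates(q, p) for q in points)]

def select_preferred(pareto, weights):
    """Pick the Pareto point minimizing a weighted Chebyshev distance to
    the ideal point; the weights encode the decision maker's preferences."""
    ideal = [min(p[i] for p in pareto) for i in range(len(pareto[0]))]
    return min(pareto,
               key=lambda p: max(w * (v - z)
                                 for w, v, z in zip(weights, p, ideal)))

points = [(1.0, 9.0), (2.0, 4.0), (4.0, 2.0), (9.0, 1.0), (5.0, 5.0)]
front = non_dominated(points)            # (5.0, 5.0) is dominated
balanced = select_preferred(front, (1.0, 1.0))
skewed = select_preferred(front, (0.1, 1.0))
```

Changing the weight vector moves the selected point along the frontier, which is exactly the DM-preference mechanism the methodology formalizes.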

  5. Model's sparse representation based on reduced mixed GMsFE basis methods

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)

    2017-06-01

In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches for solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in
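The greedy sampling step can be shown in miniature (a toy greedy basis selection with Gram-Schmidt orthogonalization over plain vectors; the paper's basis-oriented cross-validation and POD strategies over GMsFE snapshots are far more elaborate): at each step the snapshot worst represented by the current basis is added.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def residual(v, basis):
    """Component of v orthogonal to the span of an orthonormal basis."""
    r = list(v)
    for b in basis:
        c = dot(r, b)
        r = [x - c * y for x, y in zip(r, b)]
    return r

def greedy_basis(snapshots, n):
    """Greedily pick n snapshots with the largest projection error onto
    the current reduced space, orthonormalizing as we go."""
    basis = []
    for _ in range(n):
        errs = [norm(residual(s, basis)) for s in snapshots]
        k = max(range(len(snapshots)), key=errs.__getitem__)
        r = residual(snapshots[k], basis)
        basis.append([x / norm(r) for x in r])
    return basis

# Three snapshot vectors; two directions suffice to represent them well.
snaps = [(1.0, 0.0, 0.0), (0.9, 0.1, 0.0), (0.0, 0.0, 2.0)]
basis = greedy_basis(snaps, 2)
```

The resulting reduced basis is parameter-independent in spirit: it is built once, offline, and reused for all parameter instances, which is the efficiency argument of the paper.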

  6. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem
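Inverse iteration, which the abstract says replaces the full eigenvalue solve, can be sketched on a tiny matrix (a generic textbook version, not the paper's multilevel variant; the linear solver is a minimal Gaussian elimination written out for self-containment):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def smallest_eigenvalue(A, iters=50):
    """Inverse iteration: repeatedly solving A x = v amplifies the
    eigenvector of the smallest-magnitude eigenvalue."""
    v = [1.0] + [0.0] * (len(A) - 1)
    for _ in range(iters):
        w = solve(A, v)
        s = max(abs(x) for x in w)
        v = [x / s for x in w]
    # Rayleigh quotient gives the eigenvalue estimate.
    Av = [sum(a * x for a, x in zip(row, v)) for row in A]
    return sum(a * b for a, b in zip(Av, v)) / sum(x * x for x in v)

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 1 and 3
lam = smallest_eigenvalue(A)
```

Each sweep costs only a linear solve, which is why approximating the eigenvalue this way inside an optimization loop is far cheaper than a full eigendecomposition at every design update.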

  7. Optimizing the structure of Tetracyanoplatinate(II)

    DEFF Research Database (Denmark)

    Dohn, Asmus Ougaard; Møller, Klaus Braagaard; Sauer, Stephan P. A.

    2013-01-01

The geometry of tetracyanoplatinate(II) (TCP) has been optimized with density functional theory (DFT) calculations in order to compare different computational strategies. Two approximate scalar relativistic methods, i.e. the scalar zeroth-order regular approximation (ZORA) and non... is almost quantitatively reproduced in the ZORA and ECP calculations. In addition, the effect of the exchange-correlation functional and one-electron basis set was studied by employing the two generalized gradient approximation (GGA) functionals, BLYP and PBE, as well as their hybrid versions B3LYP and PBE0... For the C-N bond these trends are reversed and an order of magnitude smaller. With respect to the basis set dependence we observed that a triple zeta basis set with polarization functions gives in general sufficiently converged results, but while for the Pt-C bond it is advantageous to include extra diffuse...

  8. On sets of vectors of a finite vector space in which every subset of basis size is a basis II

    OpenAIRE

    Ball, Simeon; De Beule, Jan

    2012-01-01

This article contains a proof of the MDS conjecture for k ≤ 2p − 2. That is, if S is a set of vectors of a finite vector space in which every subset of S of size k is a basis, where q = p^h, p is prime and q is not prime, and k ≤ 2p − 2, then |S| ≤ q + 1. It also contains a short proof of the same fact for k ≤ p, for all q.

  9. A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization

    OpenAIRE

    Suguna, N.; Thanushkodi, K.

    2010-01-01

    Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, feature selection task is involved in datasets containing huge number of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on Rough set theory hybrid with Bee Colony Optimization (BCO) in an attempt...

  10. Beyond bixels: Generalizing the optimization parameters for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Markman, Jerry; Low, Daniel A.; Beavis, Andrew W.; Deasy, Joseph O.

    2002-01-01

Intensity modulated radiation therapy (IMRT) treatment planning systems optimize fluence distributions by subdividing the fluence distribution into rectangular bixels. The algorithms typically optimize the fluence intensity directly, often leading to fluence distributions with sharp discontinuities. These discontinuities may yield difficulties in delivery of the fluence distribution, leading to inaccurate dose delivery. We have developed a method for decoupling the bixel intensities from the optimization parameters, either by introducing optimization control points from which the bixel intensities are interpolated or by parametrizing the fluence distribution using basis functions. In either case, the number of optimization search parameters is reduced from the direct bixel optimization method. To illustrate the concept, the technique is applied to two-dimensional idealized head and neck treatment plans. The interpolation algorithms investigated were nearest-neighbor, linear, and cubic spline, with radial basis functions serving as the basis-function test. The interpolation and basis-function optimization techniques were compared against the direct bixel calculation. The number of optimization parameters was significantly reduced relative to the bixel optimization, and this was evident in a reduction of computation time of as much as 58% from the full bixel optimization. The dose distributions obtained using the reduced optimization parameter sets were very similar to the full bixel optimization when examined by dose distributions, statistics, and dose-volume histograms. To evaluate the sensitivity of the fluence calculations to spatial misalignment caused either by delivery errors or patient motion, the doses were recomputed with a 1 mm shift in each beam and compared to the unshifted distributions. Except for the nearest-neighbor algorithm, the reduced optimization parameter dose distributions were generally less sensitive to spatial shifts than the bixel
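The decoupling idea — optimize a few control points and interpolate the bixel intensities from them — can be sketched in one dimension (names and numbers are illustrative; the paper works with 2-D fluence maps and several interpolation schemes, of which only linear is shown here):

```python
def interp_fluence(control_x, control_val, n_bixels):
    """Linearly interpolate a bixel intensity profile from a small set of
    control points; the optimizer then searches only control_val."""
    prof = []
    for i in range(n_bixels):
        x = i / (n_bixels - 1)
        # Find the bracketing control points and interpolate between them.
        for (x0, v0), (x1, v1) in zip(zip(control_x, control_val),
                                      zip(control_x[1:], control_val[1:])):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                prof.append(v0 + t * (v1 - v0))
                break
    return prof

# 3 control points drive a 9-bixel profile: 3 search parameters, not 9.
profile = interp_fluence([0.0, 0.5, 1.0], [0.0, 1.0, 0.4], 9)
```

Besides shrinking the search space, the interpolation forbids bixel-to-bixel jumps larger than the control-point spacing allows, which is the mechanism behind the reduced sensitivity to spatial shifts reported above.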

  11. The Neural Basis of Optimism and Pessimism

    OpenAIRE

    Hecht, David

    2013-01-01

    Our survival and wellness require a balance between optimism and pessimism. Undue pessimism makes life miserable; however, excessive optimism can lead to dangerously risky behaviors. A review and synthesis of the literature on the neurophysiology subserving these two worldviews suggests that optimism and pessimism are differentially associated with the two cerebral hemispheres. High self-esteem, a cheerful attitude that tends to look at the positive aspects of a given situation, as well as an...

  12. Optimal energy window setting depending on the energy resolution for radionuclides used in gamma camera imaging. Planar imaging evaluation

    International Nuclear Information System (INIS)

    Kojima, Akihiro; Watanabe, Hiroyuki; Arao, Yuichi; Kawasaki, Masaaki; Takaki, Akihiro; Matsumoto, Masanori

    2007-01-01

In this study, we examined whether the optimal energy window (EW) setting depending on the energy resolution of a gamma camera, which we previously proposed, is valid for planar scintigraphic imaging using Tl-201, Ga-67, Tc-99m, and I-123. Image acquisitions for line sources and paper sheet phantoms containing each radionuclide were performed in air and with scattering materials. For the six photopeaks excluding that of the Hg-201 characteristic x-rays, the conventional 20%-width energy window (EW20%) setting and the optimal energy window (optimal EW) setting (15%-width below 100 keV and 13%-width above 100 keV) were compared. For the Hg-201 characteristic x-rays' photopeak, the conventional on-peak EW20% setting was compared with the off-peak EW setting (73 keV-25%) and the wider off-peak EW setting (77 keV-29%). Image-count ratio (defined as the ratio of the image counts obtained with an EW to the total image counts obtained with the EW covering the whole photopeak for a line source in air), image quality, spatial resolutions (full width half maximum (FWHM) and full width tenth maximum (FWTM) values), count-profile curves, and defect-contrast values were compared between the conventional EW setting and the optimal EW setting. Except for the Hg-201 characteristic x-rays, the image-count ratios were 94-99% for the EW20% setting, but 78-89% for the optimal EW setting. However, the optimal EW setting reduced the scatter fraction (defined as the scattered-to-primary counts ratio) more effectively than the EW20% setting. Consequently, all the images with the optimal EW setting gave better image quality than those with the EW20% setting. For the Hg-201 characteristic x-rays, the off-peak EW setting showed great improvement in image quality in comparison with the EW20% setting, and the wider off-peak EW setting gave the best results. In conclusion, from our planar imaging study it was shown that although the optimal EW setting proposed by us gives less image-count ratio by

  13. Security Optimization for Distributed Applications Oriented on Very Large Data Sets

    Directory of Open Access Journals (Sweden)

    Mihai DOINEA

    2010-01-01

Full Text Available The paper presents the main characteristics of applications which work with very large data sets and the issues related to their security. The first section addresses the optimization process and how it is approached when dealing with security. The second section describes the concept of very large dataset management, while in the third section the related risks are identified and classified. Finally, a security optimization schema is presented with a cost-efficiency analysis of its feasibility. Conclusions are drawn and future approaches are identified.

  14. A computerized traffic control algorithm to determine optimal traffic signal settings. Ph.D. Thesis - Toledo Univ.

    Science.gov (United States)

    Seldner, K.

    1977-01-01

An algorithm was developed to optimally control the traffic signals at each intersection using a discrete-time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.

  15. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., multiobjective traveling salesman problem (MOTSP), multiobjective project scheduling problem (MOPSP), belong to this problem class and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. And problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, the decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
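The decomposition strategy at the heart of MS-PSO/D — converting the multiobjective problem into scalar subproblems via weight vectors — can be shown with a plain weighted-sum sketch (the set-based particle update, constructive solution building, and heuristics of MS-PSO/D are not reproduced; candidates and weights are made up):

```python
def scalarize(f, w):
    """Weighted-sum scalarization: one weight vector turns the
    multiobjective problem into a single-objective subproblem."""
    return sum(wi * fi for wi, fi in zip(w, f))

# Objective vectors (to minimize) of four candidate permutations, e.g.
# tour length and a second, conflicting cost (values are illustrative).
candidates = [(0.0, 8.0), (1.0, 3.0), (3.0, 1.0), (8.0, 0.0)]

# A spread of weight vectors; each defines one scalar subproblem that a
# subswarm would optimize in the full algorithm. Here a finite candidate
# list is searched exhaustively instead.
weight_vectors = [(1.0, 0.0), (0.7, 0.3), (0.3, 0.7), (0.0, 1.0)]
picks = [min(candidates, key=lambda f, w=w: scalarize(f, w))
         for w in weight_vectors]
```

A well-spread set of weight vectors makes the subproblems land on different parts of the trade-off front, which is how decomposition recovers an approximation of the whole Pareto set.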

  16. Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    Abstract: The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives can not be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust

  17. Optimal timing for intravenous administration set replacement.

    Science.gov (United States)

    Gillies, D; O'Riordan, L; Wallen, M; Morrison, A; Rankin, K; Nagy, S

    2005-10-19

    Administration of intravenous therapy is a common occurrence within the hospital setting. Routine replacement of administration sets has been advocated to reduce intravenous infusion contamination. If decreasing the frequency of changing intravenous administration sets does not increase infection rates, a change in practice could result in considerable cost savings. The objective of this review was to identify the optimal interval for the routine replacement of intravenous administration sets when infusate or parenteral nutrition (lipid and non-lipid) solutions are administered to people in hospital via central or peripheral venous catheters. We searched The Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, EMBASE: all from inception to February 2004; reference lists of identified trials, and bibliographies of published reviews. We also contacted researchers in the field. We did not have a language restriction. We included all randomized or quasi-randomized controlled trials addressing the frequency of replacing intravenous administration sets when parenteral nutrition (lipid and non-lipid containing solutions) or infusions (excluding blood) were administered to people in hospital via a central or peripheral catheter. Two authors assessed all potentially relevant studies. We resolved disagreements between the two authors by discussion with a third author. We collected data for the outcomes; infusate contamination; infusate-related bloodstream infection; catheter contamination; catheter-related bloodstream infection; all-cause bloodstream infection and all-cause mortality. We identified 23 references for review. We excluded eight of these studies; five because they did not fit the inclusion criteria and three because of inadequate data. We extracted data from the remaining 15 references (13 studies) with 4783 participants. We conclude that there is no evidence that changing intravenous administration sets more often than every 96 hours

  18. Model of Optimal Collision Avoidance Manoeuvre on the Basis of Electronic Data Collection

    Directory of Open Access Journals (Sweden)

    Jelenko Švetak

    2005-11-01

    Full Text Available The results of the data analyses show that accidents mostly involve damage to the ship's hull and collisions. Generally, all ship accidents can be divided into two basic categories: first, accidents in which measures for damage control must be taken immediately, and second, those which allow a somewhat more patient reaction. The very fact that collisions belong to the first category provided the incentive for writing the current paper. The proposed model of the optimal collision avoidance manoeuvre of ships on the basis of electronic data collection was built by means of the navigation simulator NTPRO-1000 (manufacturer Transas, Russian Federation).

  19. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. М. Levin

    2016-01-01

    Full Text Available A mathematical model and a method are proposed for the problem of optimizing the aggregation and sequential-parallel execution modes of intersecting operation sets. The proposed method is based on a two-level decomposition scheme: at the top level the aggregation variant for groups of operations is selected, and at the lower level the execution modes of the operations are optimized for a fixed aggregation variant.

  20. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. First, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principles. Second, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters into scalar values that indicate the fitness of all parameters. Third, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. The Particle Swarm Optimization (PSO) algorithm is then adopted to obtain the optimal parameters by minimizing the fitness of the new parameters given by the GRCM model. Finally, spectra of the compounds are estimated from the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
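    The final step above amounts to minimizing a scalar fitness with PSO. A generic particle swarm minimizer can be sketched as follows (an illustrative sketch, not the authors' GRCM-PSO implementation; the quadratic test fitness, function names and parameter values are assumptions):

```python
import numpy as np

def pso_minimize(fitness, bounds, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over a box given by `bounds` with basic PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best
    g_f = pbest_f.min()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if f.min() < g_f:
            g, g_f = x[f.argmin()].copy(), f.min()
    return g, g_f

# Toy stand-in for the GRCM fitness: a shifted quadratic with minimum at 1.5
best_x, best_f = pso_minimize(lambda p: np.sum((p - 1.5) ** 2),
                              bounds=[(-5, 5)] * 3)
```

    In the paper's scheme the fitness callback would be the GRCM model evaluated on candidate reference-curve parameters rather than this toy quadratic.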

  1. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  2. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    Science.gov (United States)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David

    2018-05-01

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time-dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO-based scheme in the context of extreme ultraviolet and soft X-ray spectroscopies involving core-electronic excitations are discussed.

  3. Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems

    Science.gov (United States)

    Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric

    We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.

  4. Basis set effects on the energy of intramolecular O-H...halogen hydrogen bridges in ortho-halophenols and 2,4-dihalo-malonaldehyde

    International Nuclear Information System (INIS)

    Buemi, Giuseppe

    2004-01-01

    Ab initio calculations of hydrogen bridge energies (E_HB) of 2-halophenols were carried out at various levels of sophistication, using a variety of basis sets, in order to verify their ability to reproduce the experimentally determined gas-phase ordering and the related experimental frequencies of the O-H vibration stretching mode. The semiempirical AM1 and PM3 approaches were adopted, too. Calculations were extended to the O-H...X bridge of a particular conformation of 2,4-dihalo-malonaldehyde. The results, and their trend with respect to the electronegativity of the halogen series, are highly dependent on the basis set. The less sophisticated 3-21G, CEP121G and LANL2DZ basis sets (with and without inclusion of correlation energy) predict E_HB decreasing with decreasing electronegativity, whilst the opposite is generally found when more extended bases are used. However, all high-level calculations confirm the nearly negligible energy differences between the examined O-H...X bridges

  5. Design of turning hydraulic engines for manipulators of mobile machines on the basis of multicriterial optimization

    Directory of Open Access Journals (Sweden)

    Lagerev I.A.

    2016-12-01

    Full Text Available This paper presents mathematical models of the main types of turning hydraulic engines, which are currently in wide use in the handling systems of domestic and foreign mobile transport-technological machines of wide functionality. The models take into account the efficiency criteria most significant for achieving high technical and economic performance: minimum mass (weight), minimum volume and minimum power losses. On the basis of these mathematical models, the problem of multicriteria constrained optimization of the structural dimensions of turning hydraulic engines is formulated, subject to complex structural, strength and deformation limits. This makes it possible to develop hydraulic engines with an optimized design that comprehensively accounts for the required efficiency criteria. The multicriteria optimization problem is universal in nature, so when designing a turning hydraulic engine it allows one-, two- and three-criteria optimization without any changes to the solution algorithm. This is a significant advantage for the development of universal software for automating the design of mobile transport-technological machines.

  6. The FERMI (at) Elettra Technical Optimization Study: Preliminary Parameter Set and Initial Studies

    International Nuclear Information System (INIS)

    Byrd, John; Corlett, John; Doolittle, Larry; Fawley, William; Lidia, Steven; Penn, Gregory; Ratti, Alex; Staples, John; Wilcox, Russell; Wurtele, Jonathan; Zholents, Alexander

    2005-01-01

    The goal of the FERMI (at) Elettra Technical Optimization Study is to produce a machine design and layout consistent with user needs for radiation in the approximate ranges 100 nm to 40 nm, and 40 nm to 10 nm, using seeded FELs. The Study will involve collaboration between Italian and US physicists and engineers, and will form the basis for the engineering design and the cost estimation

  7. INDUSTRIAL AREA AS A BASIS FOR SPATIAL OPTIMIZATION OF LAND USE IN KIEV

    Directory of Open Access Journals (Sweden)

    Tsviakh О.

    2017-02-01

    Full Text Available The article deals with the problem of urban land use in Kiev, including land under industrial objects, and analyses ways of optimizing urban land use. The efficient use of urban land, including land under non-functioning industrial facilities that constitutes a reserve for the future development of Kyiv, has become a particularly acute problem, and an ecological-economic approach is needed to solve it. To ensure sustainable development of the urban population (preserving and improving health, improving working and living conditions, increasing the construction of social and affordable housing, reducing unemployment, creating new jobs, and improving the ecological state of the environment within large cities), ways to optimize existing urban land use must be identified. The complexity of management decisions stems, above all, from the fact that in most cities of Ukraine territorial resources are exhausted and vacant land plots require significant investment. In addition, a significant proportion of non-functioning industrial enterprises occupying large areas in Kyiv are surrounded by residential development, buffer zones, and technogenically disturbed and contaminated land. These objects should be removed beyond the settlement boundaries, and the land on which they stand should be re-cultivated and restored for more ecological, economically feasible and sustainable use. The rapid development of large cities around the world and their growing impact on the environment and society is accompanied by a set of economic, environmental and social issues that significantly influence the development of land relations in settlements in general. In Kyiv today, the dynamics of land area are changing: the share of agricultural and forestry land is decreasing while the territory of other categories is increasing. The processes of de-industrialization and suburbanization of urban land use are inevitable; they in turn accelerate other processes - "crowding out

  8. Trace element analysis in an optimized set-up for total reflection PIXE (TPIXE)

    International Nuclear Information System (INIS)

    Van Kan, J.A.; Vis, R.D.

    1996-01-01

    A newly constructed chamber for measurements with MeV proton beams at small incidence angles (0 to 35 mrad) is used to analyse trace elements on flat surfaces such as Si wafers, quartz substrates and perspex. The set-up is constructed in such a way that the X-ray detector can subtend very large solid angles, larger than 1 sr. Using these large solid angles in combination with the reduced bremsstrahlung background, lower limits of detection (LODs) can be obtained with TPIXE than with PIXE in the conventional geometry. Standard solutions are used to determine the LODs obtainable with TPIXE in the optimized set-up. These solutions contain traces of As and Sr with concentrations down to 20 ppb in an insulin solution. The limits of detection found are compared with earlier ones obtained with TPIXE in a non-optimized set-up and with TXRF results. (author)

  9. Ab initio calculation of reaction energies. III. Basis set dependence of relative energies on the FH2 and H2CO potential energy surfaces

    International Nuclear Information System (INIS)

    Frisch, M.J.; Binkley, J.S.; Schaefer, H.F. III

    1984-01-01

    The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F + H2 → FH + H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol⁻¹ of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended

  10. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  11. Application of HGSO to security based optimal placement and parameter setting of UPFC

    International Nuclear Information System (INIS)

    Tarafdar Hagh, Mehrdad; Alipour, Manijeh; Teimourzadeh, Saeed

    2014-01-01

    Highlights: • A new method for solving the security based UPFC placement and parameter setting problem is proposed. • The proposed method is a global method for all mixed-integer problems. • The proposed method has the ability to search in parallel in binary and continuous spaces. • By using the proposed method, most of the problems due to line contingencies are solved. • Comparison studies are done to compare the performance of the proposed method. - Abstract: This paper presents a novel method for solving the security based optimal placement and parameter setting of unified power flow controller (UPFC) problem, based on the hybrid group search optimization (HGSO) technique. Firstly, HGSO is introduced in order to solve mixed-integer type problems. Afterwards, the proposed method is applied to the security based optimal placement and parameter setting of UPFC problem. The focus of the paper is to enhance power system security by eliminating or minimizing the overloaded lines and the bus voltage limit violations under single line contingencies. Simulation studies are carried out on the IEEE 6-bus, IEEE 14-bus and IEEE 30-bus systems in order to verify the accuracy and robustness of the proposed method. The results indicate that by using the proposed method, the power system remains secure under single line contingencies

  12. Perturbation expansion theory corrected from basis set superposition error. I. Locally projected excited orbitals and single excitations.

    Science.gov (United States)

    Nagata, Takeshi; Iwata, Suehiro

    2004-02-22

    The locally projected self-consistent field molecular orbital method for molecular interaction (LP SCF MI) is reformulated for multifragment systems. For the perturbation expansion, two types of local excited orbitals are defined: one is fully local in the basis set on a fragment, and the other has to be partially delocalized into the basis sets on the other fragments. Perturbation expansion calculations restricted to single excitations (LP SE MP2) are tested for the water dimer, the hydrogen fluoride dimer, and collinear symmetric Ar-M+-Ar (M = Na and K). The calculated binding energies of LP SE MP2 are all close to the corresponding counterpoise corrected SCF binding energies. By adding the single excitations, the deficiency in LP SCF MI is thus removed. The results suggest that the exclusion of charge-transfer effects in LP SCF MI might indeed be the cause of the underestimation of the binding energy.

  13. Optimal control

    CERN Document Server

    Aschepkov, Leonid T; Kim, Taekyun; Agarwal, Ravi P

    2016-01-01

    This book is based on lectures from a one-year course at the Far Eastern Federal University (Vladivostok, Russia) as well as on workshops on optimal control offered to students at various mathematical departments at the university level. The main themes of the theory of linear and nonlinear systems are considered, including the basic problem of establishing the necessary and sufficient conditions of optimal processes. In the first part of the course, the theory of linear control systems is constructed on the basis of the separation theorem and the concept of a reachability set. The authors prove the closure of a reachability set in the class of piecewise continuous controls, and the problems of controllability, observability, identification, performance and terminal control are also considered. The second part of the course is devoted to nonlinear control systems. Using the method of variations and the Lagrange multipliers rule of nonlinear problems, the authors prove the Pontryagin maximum principle for prob...

  14. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative permeability metamaterial design problem.

  15. SETTING OF TASK OF OPTIMIZATION OF THE ACTIVITY OF A MACHINE-BUILDING CLUSTER COMPANY

    Directory of Open Access Journals (Sweden)

    A. V. Romanenko

    2014-01-01

    Full Text Available The work is dedicated to the development of methodological approaches to the management of a machine-building enterprise on the basis of cost reduction and the optimization of the order portfolio and capacity utilization in operational management. The economic efficiency of such entities of the real sector of the economy is evaluated with account taken of order deadlines, which depend on the structure of the production facility and on maintaining fixed assets at a given level. Key components of an economic-mathematical model of production activity are formulated and an optimization criterion is defined: a formula for accumulated profit that accounts for production capacity and technology, the current direct variable costs of output, the amount of property tax, and the expenses arising from variances when production tasks are replaced within a single time period. The main component of optimizing the production activity of the enterprise under this criterion is the vector of direct variable costs. It depends on the number of product types in the current order portfolio, the production schedules, the normative time for the release of a particular product, the available time fund of efficient production positions, the current valuation of certain groups of technological operations, and the current priority of operations by the degree of readiness of internal orders in progress. Modelling production activity on the basis of the proposed provisions would allow the enterprises of a machine-building cluster that pursue active innovation to improve the efficient use of available production resources by optimizing current operations under high uncertainty in demand planning and in carrying out maintenance and routine repairs.

  16. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: Application to SSSH

    Science.gov (United States)

    Kolmann, Stephen J.; Jordan, Meredith J. T.

    2010-02-01

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol⁻¹ at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol⁻¹ dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol⁻¹ lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol⁻¹ lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol⁻¹ thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.
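    The diffusion Monte Carlo technique used above to obtain anharmonic ZPEs can be illustrated on a one-dimensional model potential where the exact answer is known (a minimal unguided-DMC sketch, not the authors' code; the harmonic test potential, parameter values and function names are all assumptions; for V(x) = x²/2 with ħ = m = 1 the exact ZPE is 0.5):

```python
import numpy as np

def dmc_zpe(V, n_walkers=2000, n_steps=3000, dt=0.01, seed=0):
    """Basic unguided diffusion Monte Carlo estimate of the zero-point
    energy of a 1D potential V (units with hbar = m = 1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)                    # all walkers start at the minimum
    e_ref = V(x).mean()
    e_samples = []
    for step in range(n_steps):
        # free diffusion over one imaginary-time step
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)
        # birth/death branching against the reference energy
        w = np.exp(-dt * (V(x) - e_ref))
        m = np.minimum((w + rng.random(x.size)).astype(int), 3)
        x = np.repeat(x, m)
        # population-control feedback keeps the walker count near the target
        e_ref = V(x).mean() + (1.0 - x.size / n_walkers) / dt
        if step >= n_steps // 2:               # discard the equilibration phase
            e_samples.append(e_ref)
    return float(np.mean(e_samples))

# Harmonic oscillator V(x) = x^2 / 2: exact ZPE = 0.5
zpe = dmc_zpe(lambda x: 0.5 * x ** 2)
```

    The real calculation replaces the analytic toy potential with an interpolated ab initio PES and works in the full set of internal coordinates, but the diffuse-branch-average structure is the same.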

  17. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: application to SSSH.

    Science.gov (United States)

    Kolmann, Stephen J; Jordan, Meredith J T

    2010-02-07

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol(-1) at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol(-1) dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol(-1) lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol(-1) lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol(-1) thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.

  18. Optimization of vehicle compartment low frequency noise based on Radial Basis Function Neuro-Network Approximation Model

    Directory of Open Access Journals (Sweden)

    HU Qi-guo

    2017-01-01

    Full Text Available To reduce vehicle compartment low-frequency noise, the optimal Latin hypercube sampling method was applied to design the experiments for sampling in the factorial design space. The thickness parameters of the panels with the largest acoustic contributions were taken as factors, and the vehicle mass, the seventh body modal frequency, the peak sound pressure at the test point and the root-mean-square sound pressure were taken as responses. Using the RBF (radial basis function) neural network method, an approximation model of the four responses as functions of the six factors was established, and an error analysis of the approximation model was performed. To optimize the panel thickness parameters, the adaptive simulated annealing algorithm was implemented. Optimization results show that the peak sound pressure at the driver's head was reduced by 4.45 dB and 5.47 dB at 158 Hz and 134 Hz, respectively. The sound pressure at the test point was significantly reduced at other frequencies as well. The results indicate that the optimization effectively reduced the vehicle interior cavity noise and significantly improved the acoustic comfort of the vehicle.
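    The surrogate-plus-annealing workflow described above (sample the design space, fit an RBF approximation of the responses, then minimize over the surrogate) can be sketched generically (a toy illustration, not the authors' vehicle model; the Gaussian kernel, the quadratic test response and all parameter values are assumptions):

```python
import numpy as np

def fit_rbf(X, y, eps=2.0):
    """Fit a Gaussian RBF surrogate y(x) ~ sum_i w_i * exp(-eps * |x - X_i|^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-eps * d2)
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)  # jitter for stability
    return lambda x: np.exp(-eps * ((x - X) ** 2).sum(-1)) @ w

def anneal(f, x0, lo, hi, n_steps=4000, T0=1.0, seed=0):
    """Minimize f with a basic simulated-annealing random walk in a box."""
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), f(x0)
    best, best_f = x.copy(), fx
    for k in range(n_steps):
        T = T0 * (1 - k / n_steps) + 1e-3          # linear cooling schedule
        cand = np.clip(x + rng.normal(scale=0.1, size=x.shape), lo, hi)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x.copy(), fx
    return best, best_f

# Toy responses sampled from a known function of two "thickness" parameters
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(60, 2))
y = (X ** 2).sum(-1)                    # true response: minimum at the origin
surrogate = fit_rbf(X, y)
opt_x, opt_f = anneal(surrogate, x0=np.array([1.5, -1.5]), lo=-2.0, hi=2.0)
```

    In the paper the sampling plan comes from an optimal Latin hypercube rather than uniform random draws, and the surrogate is a trained RBF network over six factors, but the fit-then-search structure is the same.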

  19. Wind-break walls with optimized setting angles for natural draft dry cooling tower with vertical radiators

    International Nuclear Information System (INIS)

    Ma, Huan; Si, Fengqi; Kong, Yu; Zhu, Kangping; Yan, Wensheng

    2017-01-01

    Highlights: • Aerodynamic field around dry cooling tower is presented with numerical model. • Performances of cooling deltas are figured out by air inflow velocity analysis. • Setting angles of wind-break walls are optimized to improve cooling performance. • Optimized walls can reduce the interference on air inflow at low wind speeds. • Optimized walls create stronger outside secondary flow at high wind speeds. - Abstract: To achieve a larger cooling performance enhancement for a natural draft dry cooling tower with vertical cooling deltas under crosswind, the setting angles of wind-break walls were optimized. Considering the specific structure of each cooling delta, an efficient numerical model was established and validated against some published results. Aerodynamic fields around the cooling deltas under various crosswind speeds are presented, and the outlet water temperatures of the two columns of each cooling delta are reported as well. It was found that for each cooling delta there is a difference in cooling performance between the two columns, which is closely related to the characteristics of the main airflow outside the tower. Using the present model, the air inflow deviation angles at the cooling deltas' inlet were calculated, and the effects of air inflow deviation on the outlet water temperatures of the two columns of the corresponding cooling delta are explained in detail. Subsequently, at the cooling deltas' inlet along the radial direction of the tower, the setting angles of the wind-break walls were optimized to equal the air inflow deviation angles when no airflow separation appeared outside the tower, and to zero when outside airflow separation occurred. In addition, wind-break walls with optimized setting angles were verified to be extremely effective compared to the previous radial walls.

  20. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Kenny S K; Lee, Louis K Y [Department of Clinical Oncology, Prince of Wales Hospital, Hong Kong SAR (China); Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); Chan, Anthony T C [Department of Clinical Oncology, Prince of Wales Hospital, Hong Kong SAR (China); State Key Laboratory of Oncology in South China, The Chinese University of Hong Kong, Hong Kong SAR (China)

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, seven levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The MATLAB computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after seven cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
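    The core of the method above, solving min ||Bw - d||² analytically and carrying only the positively weighted basis vectors into the next level, can be sketched generically (a toy illustration of the idea, not the authors' MATLAB/GPU code; the tiny beamlet matrix and all names are invented):

```python
import numpy as np

def optimize_fluence(B, d, n_levels=7):
    """Least-squares beamlet weights with positivity screening per level.

    B: (n_voxels, n_beamlets) matrix of pre-calculated beamlet doses.
    d: target dose vector. At each level, solve the unconstrained
    least-squares problem and keep only positively weighted beamlets.
    """
    active = np.arange(B.shape[1])
    w_act = np.zeros(0)
    for _ in range(n_levels):
        w_act, *_ = np.linalg.lstsq(B[:, active], d, rcond=None)
        keep = w_act > 0
        if keep.all():                 # nothing left to prune: converged
            break
        active, w_act = active[keep], w_act[keep]
    w = np.zeros(B.shape[1])
    w[active] = np.clip(w_act, 0.0, None)   # final non-negative fluence
    return w

# Toy example: 3 beamlets, target dose built from the first two only
B = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0]])
d = 2.0 * B[:, 0] + 1.0 * B[:, 1]
w = optimize_fluence(B, d)
est = B @ w
```

    In the actual study B holds beamlet dose distributions at every 6° of gantry angle over a 20 cm phantom; the per-level solve is what the heterogeneous CPU/GPU platform accelerates.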

  1. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally... A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic...

  2. The pointer basis and the feedback stabilization of quantum systems

    International Nuclear Information System (INIS)

    Li, L; Chia, A; Wiseman, H M

    2014-01-01

    The dynamics for an open quantum system can be ‘unravelled’ in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states, evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere (Atkins et al 2005 Europhys. Lett. 69 163) that the ‘pointer basis’ as introduced by Zurek et al (1993 Phys. Rev. Lett. 70 1187), should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However if the feedback control is weak compared to the decoherence, this is not the case. (paper)

  3. Radiobiological basis for setting neutron radiation safety standards

    International Nuclear Information System (INIS)

    Straume, T.

    1985-01-01

    Present neutron standards, adopted more than 20 yr ago from a weak radiobiological data base, have been in doubt for a number of years and are currently under challenge. Moreover, recent dosimetric re-evaluations indicate that Hiroshima neutron doses may have been much lower than previously thought, suggesting that direct data for neutron-induced cancer in humans may in fact not be available. These recent developments make it urgent to determine the extent to which neutron cancer risk in man can be estimated from data that are available. Two approaches are proposed here that are anchored in particularly robust epidemiological and experimental data and appear most likely to provide reliable estimates of neutron cancer risk in man. The first approach uses gamma-ray dose-response relationships for human carcinogenesis, available from Nagasaki (Hiroshima data are also considered), together with highly characterized neutron and gamma-ray data for human cytogenetics. When tested against relevant experimental data, this approach either adequately predicts or somewhat overestimates neutron tumorigenesis (and mutagenesis) in animals. The second approach also uses the Nagasaki gamma-ray cancer data, but together with neutron RBEs from animal tumorigenesis studies. Both approaches give similar results and provide a basis for setting neutron radiation safety standards. They appear to be an improvement over previous approaches, including those that rely on highly uncertain maximum neutron RBEs and unnecessary extrapolations of gamma-ray data to very low doses. Results suggest that, at the presently accepted neutron dose limit of 0.5 rad/yr, the cancer mortality risk to radiation workers is not very different from accidental mortality risks to workers in various nonradiation occupations

  4. Chemical bonding analysis for solid-state systems using intrinsic oriented quasiatomic minimal-basis-set orbitals

    International Nuclear Information System (INIS)

    Yao, Y.X.; Wang, C.Z.; Ho, K.M.

    2010-01-01

    A chemical bonding scheme is presented for the analysis of solid-state systems. The scheme is based on the intrinsic oriented quasiatomic minimal-basis-set orbitals (IO-QUAMBOs) previously developed by Ivanic and Ruedenberg for molecular systems. In the solid-state scheme, IO-QUAMBOs are generated by a unitary transformation of the quasiatomic orbitals located at each site of the system, with the criterion of maximizing the sum of the fourth power of the interatomic orbital bond order. Possible bonding and antibonding characters are indicated by the single-particle matrix elements and can be further examined by the projected density of states. We demonstrate the method by applications to graphene and the (6,0) zigzag carbon nanotube. The oriented-orbital scheme automatically describes the system in terms of sp² hybridization. The effect of curvature on the electronic structure of the zigzag carbon nanotube is also manifested in the deformation of the intrinsic oriented orbitals, as well as in a breaking of symmetry leading to nonzero single-particle density matrix elements. In an additional study, the analysis is performed on the Al₃V compound. The main covalent bonding characters are identified in a straightforward way without resorting to symmetry analysis. Our method provides a general way to perform chemical bonding analysis of ab initio electronic structure calculations with any type of basis set.

  5. Determining optimal interconnection capacity on the basis of hourly demand and supply functions of electricity

    International Nuclear Information System (INIS)

    Keppler, Jan Horst; Meunier, William; Coquentin, Alexandre

    2017-01-01

    Interconnections for cross-border electricity flows are at the heart of the project to create a common European electricity market. At the same time, the increase in production from variable renewables, clustered during a limited number of hours, reduces the availability of existing transport infrastructures. This calls for higher levels of optimal interconnection capacity than in the past. In complement to existing scenario-building exercises such as the TYNDP that respond to the challenge of determining optimal levels of infrastructure provision, the present paper proposes a new empirically-based methodology to perform cost-benefit analysis for the determination of optimal interconnection capacity, using the French-German cross-border trade as an example. Using a very fine dataset of hourly supply and demand curves (aggregated auction curves) for the year 2014 from the EPEX Spot market, it constructs linearized net export curves (NEC) and net import demand curves (NIDC) for both countries. This allows assessing, hour by hour, the welfare impact of incremental increases in interconnection capacity. Summing these welfare increases over the 8 760 hours of the year provides the annual total for each step increase of interconnection capacity. Confronting welfare benefits with the annual cost of augmenting interconnection capacity indicates the socially optimal increase in interconnection capacity between France and Germany on the basis of empirical market micro-data. (authors)
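The hour-by-hour cost-benefit logic can be made concrete with linearized curves. In the sketch below, the function names and the linear parameterization (export supply p = aX + bX·q, import demand p = aM − bM·q) are illustrative assumptions, not the paper's actual dataset: welfare from trade under a capacity limit K is the area between the two curves up to min(K, crossing point), summed over hours and netted against an assumed annualized cost per unit of capacity.

```python
def hourly_welfare(aX, bX, aM, bM, K):
    """Welfare from trading up to capacity K between a linear net-export
    curve p = aX + bX*q and a linear net-import demand p = aM - bM*q."""
    if aM <= aX:
        return 0.0                       # no gains from trade this hour
    q_star = (aM - aX) / (bX + bM)       # unconstrained trade volume
    q = min(K, q_star)
    return (aM - aX) * q - 0.5 * (bX + bM) * q ** 2   # area between curves

def optimal_capacity(hours, caps, annual_cost_per_mw):
    """Pick the capacity step whose annual welfare net of the
    interconnector cost is largest (hours: list of (aX, bX, aM, bM))."""
    return max(caps,
               key=lambda K: sum(hourly_welfare(*h, K) for h in hours)
                             - annual_cost_per_mw * K)
```

Because each hourly welfare term is concave in K, the net benefit of capacity expansion exhibits diminishing returns, which is what makes an interior optimum possible.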

  6. Optimization of heliostat field layout in solar central receiver systems on annual basis using differential evolution algorithm

    International Nuclear Information System (INIS)

    Atif, Maimoon; Al-Sulaiman, Fahad A.

    2015-01-01

    Highlights: • A differential evolution optimization model was developed to optimize the heliostat field. • Five optical parameters were considered in optimizing the optical efficiency. • Optimization approaches using insolation-weighted and un-weighted annual efficiency are developed. • The daily averaged annual optical efficiency was calculated to be 0.5023 while the monthly averaged value was 0.5025. • The insolation-weighted daily averaged annual efficiency was 0.5634. - Abstract: Optimization of a heliostat field is an essential task in making a solar central receiver system effective, because major optical losses are associated with the heliostat field. In this study, a mathematical model was developed to optimize the heliostat field on an annual basis using differential evolution, an evolutionary algorithm. The heliostat field layout optimization is based on the calculation of five optical performance parameters: the mirror (heliostat) reflectivity, the cosine factor, the atmospheric attenuation factor, the shadowing and blocking factor, and the intercept factor. The model calculates all the aforementioned performance parameters at every stage of the optimization, until the best heliostat field layout based on annual performance is obtained. Two different approaches were undertaken to optimize the heliostat field layout: one optimizing the insolation-weighted annual efficiency and the other optimizing the un-weighted annual efficiency. Moreover, an alternative approach was proposed to optimize the heliostat field efficiently by considerably reducing the number of computational time steps. The daily averaged annual optical efficiency was calculated to be 0.5023, compared with 0.5025 for the monthly averaged annual optical efficiency. Moreover, the insolation-weighted daily averaged annual efficiency of the heliostat field was 0.5634 for Dhahran, Saudi Arabia. The code developed can be used for any other...
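Differential evolution itself is a compact algorithm, and its mutation/crossover/selection loop can be sketched generically. The code below is an illustration, not the authors' heliostat model: the objective passed in is a placeholder (a real run would evaluate the five optical performance parameters of a candidate layout), and the control parameters F and CR are conventional defaults, assumed here.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(bounds)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutation: perturb one random member by a scaled difference
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, forcing at least one mutant gene
            cross = rng.random(len(bounds)) < CR
            cross[rng.integers(len(bounds))] = True
            trial = np.where(cross, mutant, X[i])
            # greedy selection
            ft = f(trial)
            if ft < fit[i]:
                X[i], fit[i] = trial, ft
    best = fit.argmin()
    return X[best], fit[best]
```

In the paper's setting, each candidate vector would encode heliostat positions and the objective would be the (negative) annual optical efficiency evaluated from the five parameters.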

  7. Application of Multiple-Population Genetic Algorithm in Optimizing the Train-Set Circulation Plan Problem

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-01-01

    The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, the TCPP involves additional complexity due to the maintenance constraint on train-sets: train-sets must undergo maintenance after running for a certain time and distance. The TCPP is NP-hard, no available algorithm can guarantee the global optimum, and many factors, such as the utilization mode and the maintenance mode, affect its solution. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and a comparison of different algorithms is then carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.
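The distinguishing feature of a multiple-population GA is that several populations evolve independently and periodically exchange their best individuals. The toy sketch below illustrates only that mechanism on binary chromosomes with a placeholder fitness; the chromosome encoding, operators, and ring migration scheme are all assumptions for illustration, not the paper's TCPP model.

```python
import random

def mpga(fitness, n_genes, n_pops=3, pop_size=30, gens=60,
         migrate_every=10, seed=1):
    """Toy multiple-population GA (minimization). Every `migrate_every`
    generations, each population's best migrates to the next population
    in a ring, replacing that population's last individual."""
    rng = random.Random(seed)
    pops = [[[rng.randint(0, 1) for _ in range(n_genes)]
             for _ in range(pop_size)] for _ in range(n_pops)]

    def evolve(pop):
        pop.sort(key=fitness)                 # elitist: keep the best half
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # one-point crossover
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:            # single-bit mutation
                child[rng.randrange(n_genes)] ^= 1
            children.append(child)
        return elite + children

    for g in range(1, gens + 1):
        pops = [evolve(p) for p in pops]
        if g % migrate_every == 0:
            bests = [min(p, key=fitness) for p in pops]
            for i, p in enumerate(pops):
                p[-1] = bests[(i - 1) % n_pops][:]   # ring migration
    return min((min(p, key=fitness) for p in pops), key=fitness)
```

For the real TCPP, the fitness would combine connection time and maintenance cost, with penalties for violated maintenance-interval constraints.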

  8. Internal combustion engine report: Spark ignited ICE GenSet optimization and novel concept development

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; Blarigan, P. Van [Sandia National Labs., Livermore, CA (United States)

    1998-08-01

    In this manuscript the authors report on two projects, each of which has the goal of producing cost-effective hydrogen utilization technologies. These projects are: (1) the development of an electrical generation system using a conventional four-stroke spark-ignited internal combustion engine generator combination (SI-GenSet) optimized for maximum efficiency and minimum emissions, and (2) the development of a novel internal combustion engine concept. The SI-GenSet will be optimized to run on either hydrogen or hydrogen blends. The novel concept seeks to develop an engine that optimizes the Otto cycle in a free-piston configuration while minimizing all emissions. To this end the authors are developing a rapid-combustion homogeneous charge compression ignition (HCCI) engine using a linear alternator for both power take-off and engine control. Targeted applications include stationary electrical power generation, stationary shaft power generation, hybrid vehicles, and nearly any other application now being accomplished with internal combustion engines.

  9. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task and frequently optimization-based design is preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, present close to optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...
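One of the performance measures such a Pareto study trades off is the integrated absolute error (IAE) of the closed loop. The sketch below is illustrative only: it simulates a discrete PID on a first-order process without time delay (the thesis considers delayed processes), and the process and tuning values are assumptions, not results from the work.

```python
def simulate_pid(Kp, Ki, Kd, tau=5.0, dt=0.01, T=30.0, setpoint=1.0):
    """Discrete PID on a first-order process dy/dt = (u - y)/tau,
    integrated with explicit Euler; returns the IAE for a unit step."""
    n = int(T / dt)
    y, integ, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(n):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = Kp * err + Ki * integ + Kd * deriv   # PID control law
        prev_err = err
        y += dt * (u - y) / tau                  # process update
        iae += abs(err) * dt                     # accumulate |error|
    return iae
```

Sweeping (Kp, Ki, Kd) over a grid and plotting IAE against a measure of input usage would reproduce, in miniature, the kind of Pareto front the thesis compares tuning rules against.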

  10. Formation and physical characteristics of van der Waals molecules, cations, and anions: Estimates of complete basis set values

    Czech Academy of Sciences Publication Activity Database

    Zahradník, Rudolf; Šroubková, Libuše

    2005-01-01

    Roč. 104, č. 1 (2005), s. 52-63 ISSN 0020-7608 Institutional research plan: CEZ:AV0Z40400503 Keywords : intermolecular complexes * van der Waals species * ab initio calculations * complete basis set values * estimates Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.192, year: 2005

  11. Cost optimal building performance requirements. Calculation methodology for reporting on national energy performance requirements on the basis of cost optimality within the framework of the EPBD

    Energy Technology Data Exchange (ETDEWEB)

    Boermans, T.; Bettgenhaeuser, K.; Hermelink, A.; Schimschar, S. [Ecofys, Utrecht (Netherlands)

    2011-05-15

    On the European level, the principles for the requirements for the energy performance of buildings are set by the Energy Performance of Buildings Directive (EPBD). Dating from December 2002, the EPBD has set a common framework from which the individual Member States in the EU developed or adapted their individual national regulations. The EPBD in 2008 and 2009 underwent a recast procedure, with final political agreement having been reached in November 2009. The new Directive was then formally adopted on May 19, 2010. Among other clarifications and new provisions, the EPBD recast introduces a benchmarking mechanism for national energy performance requirements for the purpose of determining cost-optimal levels to be used by Member States for comparing and setting these requirements. The previous EPBD set out a general framework to assess the energy performance of buildings and required Member States to define maximum values for energy delivered to meet the energy demand associated with the standardised use of the building. However it did not contain requirements or guidance related to the ambition level of such requirements. As a consequence, building regulations in the various Member States have been developed by the use of different approaches (influenced by different building traditions, political processes and individual market conditions) and resulted in different ambition levels where in many cases cost optimality principles could justify higher ambitions. The EPBD recast now requests that Member States shall ensure that minimum energy performance requirements for buildings are set 'with a view to achieving cost-optimal levels'. The cost optimum level shall be calculated in accordance with a comparative methodology. The objective of this report is to contribute to the ongoing discussion in Europe around the details of such a methodology by describing possible details on how to calculate cost optimal levels and pointing towards important factors and

  12. OBESITY OF ADULTS LIVING IN THE URBAN SETTINGS AS BASIS FOR DIFFERENT APPLICATIONS OF RECREATIONAL SPORTS

    Directory of Open Access Journals (Sweden)

    Vesko Drašković

    2008-08-01

    According to the World Health Organization's data, obesity is one of the main risk factors for human health, especially in the so-called "mature age", that is, in the forties and fifties of life. There are many causes of obesity; the most common are inadequate or excessive nutrition, low-quality food rich in fats and highly caloric sweeteners, insufficient physical activity (hypokinesia), but also the technical and technological development of the modern world (TV, cell phones, elevators, cars, etc.). The objective of this research is to define the obesity of adults living in urban settings through the BMI (body mass index) and to create, on the basis of these findings, the basis for different applications of the recreational sports programme.
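The BMI used to define obesity here is weight in kilograms divided by the square of height in metres. A minimal sketch with the standard WHO adult cut-offs (the function names are ours):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(b):
    """Standard WHO adult BMI categories."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"
```

For example, an adult weighing 95 kg at 1.75 m has a BMI of about 31 and falls in the obese category.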

  13. Diffusion Forecasting Model with Basis Functions from QR-Decomposition

    Science.gov (United States)

    Harlim, John; Yang, Haizhao

    2017-12-01

    Diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to an Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing them is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and could be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm will be shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case forecasting accuracy is improved relative to using only a small number of eigenvectors. Supporting arguments will be provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and also on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained from applying the Nonlinear Laplacian Spectral Analysis on the measured Outgoing Longwave Radiation.
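The orthonormalization step described above can be sketched with NumPy, whose `numpy.linalg.qr` is an unpivoted Householder factorization. The sketch below is illustrative: the column-selection rule (uniformly random columns) and the function name are assumptions, not the paper's selection criterion.

```python
import numpy as np

def qr_basis(D, n_eig=3, n_cols=5, seed=0):
    """Orthonormal basis from the leading eigenvectors of a symmetric
    diffusion matrix D together with a few of its columns, combined via
    unpivoted (Householder) QR."""
    rng = np.random.default_rng(seed)
    vals, vecs = np.linalg.eigh(D)
    lead = vecs[:, np.argsort(-np.abs(vals))[:n_eig]]   # leading eigenvectors
    cols = D[:, rng.choice(D.shape[1], n_cols, replace=False)]
    Q, _ = np.linalg.qr(np.hstack([lead, cols]))        # Householder QR
    return Q                                            # orthonormal columns
```

The point of the construction is cost: a few leading eigenvectors plus a QR of an N × (n_eig + n_cols) matrix is far cheaper than the full N × N eigendecomposition.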

  14. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    … of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study…

  15. The ways of enhancement and optimization of production by separate types of meat on the basis of modeling

    Directory of Open Access Journals (Sweden)

    Sulaymanova D.K.

    2017-02-01

    In this article, ways of optimizing and enhancing meat production in the Kyrgyz Republic are considered. On the basis of statistical data and economic-mathematical modeling, forecast calculations for the years 2016–2020 are performed. Ways to increase the production volume of individual types of meat in the country are also considered.

  16. Structural basis for inhibition of the histone chaperone activity of SET/TAF-Iβ by cytochrome c.

    Science.gov (United States)

    González-Arzola, Katiuska; Díaz-Moreno, Irene; Cano-González, Ana; Díaz-Quintana, Antonio; Velázquez-Campoy, Adrián; Moreno-Beltrán, Blas; López-Rivas, Abelardo; De la Rosa, Miguel A

    2015-08-11

    Chromatin is pivotal for regulation of the DNA damage process insofar as it influences access to DNA and serves as a DNA repair docking site. Recent works identify histone chaperones as key regulators of damaged chromatin's transcriptional activity. However, understanding how chaperones are modulated during DNA damage response is still challenging. This study reveals that the histone chaperone SET/TAF-Iβ interacts with cytochrome c following DNA damage. Specifically, cytochrome c is shown to be translocated into cell nuclei upon induction of DNA damage, but not upon stimulation of the death receptor or stress-induced pathways. Cytochrome c was found to competitively hinder binding of SET/TAF-Iβ to core histones, thereby locking its histone-binding domains and inhibiting its nucleosome assembly activity. In addition, we have used NMR spectroscopy, calorimetry, mutagenesis, and molecular docking to provide an insight into the structural features of the formation of the complex between cytochrome c and SET/TAF-Iβ. Overall, these findings establish a framework for understanding the molecular basis of cytochrome c-mediated blocking of SET/TAF-Iβ, which subsequently may facilitate the development of new drugs to silence the oncogenic effect of SET/TAF-Iβ's histone chaperone activity.

  17. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T_p: I → I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
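The core of an RBF network is interpolation with Gaussian units. The sketch below is illustrative: least-squares training of the output weights is used as a simpler stand-in for the particle swarm optimization training in the paper, and the XOR truth table stands in for a single-step operator evaluated on valuations over two ground atoms.

```python
import numpy as np

def _phi(X, centers, width):
    """Gaussian RBF design matrix: one column per center."""
    sq = np.square(X[:, None, :] - centers[None, :, :]).sum(-1)
    return np.exp(-sq / (2 * width ** 2))

def rbf_fit(X, y, centers, width=1.0):
    """Train output weights of a Gaussian RBF network by least squares."""
    w, *_ = np.linalg.lstsq(_phi(X, centers, width), y, rcond=None)
    return w

def rbf_predict(X, centers, w, width=1.0):
    """Evaluate the trained RBF network at the points X."""
    return _phi(X, centers, width) @ w
```

With one center per training point, the Gaussian kernel matrix is positive definite, so the network interpolates the operator's truth table exactly; iterating `rbf_predict` on its own (rounded) output is then the recurrent step toward the operator's fixed point.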

  18. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T_p: I → I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  19. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T_p: I → I). Logic programming is well-suited to building the artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks

  20. A Binary Cat Swarm Optimization Algorithm for the Non-Unicost Set Covering Problem

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2015-01-01

    The Set Covering Problem consists in finding a subset of columns in a zero-one matrix such that they cover all the rows of the matrix at a minimum cost. To solve the Set Covering Problem we use a metaheuristic called Binary Cat Swarm Optimization. This metaheuristic is a recent swarm technique based on cat behavior: domestic cats show the ability to hunt and are curious about moving objects, and on this basis the cats have two modes of behavior, seeking mode and tracing mode. We are the first to use this metaheuristic to solve this problem; our algorithm solves a set of 65 Set Covering Problem instances from OR-Library.
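The problem itself is easy to state in code. The sketch below is a classic greedy baseline for the non-unicost set covering problem, shown as a simple stand-in for comparison purposes; it is not the Binary Cat Swarm Optimization metaheuristic of the paper.

```python
def greedy_set_cover(n_rows, columns, costs):
    """Greedy baseline for non-unicost set covering: repeatedly pick
    the column with the lowest cost per newly covered row.
    columns: list of sets of row indices; costs: parallel list of costs."""
    uncovered = set(range(n_rows))
    chosen, total = [], 0.0
    while uncovered:
        j = min((j for j in range(len(columns)) if columns[j] & uncovered),
                key=lambda j: costs[j] / len(columns[j] & uncovered))
        uncovered -= columns[j]
        chosen.append(j)
        total += costs[j]
    return chosen, total
```

The greedy rule achieves a logarithmic approximation guarantee; metaheuristics such as BCSO aim to beat it in practice on the OR-Library instances.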

  1. Study of integrated optimization design of wind farm in complex terrain

    DEFF Research Database (Denmark)

    Xu, Chang; Chen, Dandan; Han, Xingxing

    2017-01-01

    … wind farm design in complex terrain, and setting up an integrated optimization mathematical model for micro-site selection, power lines, road maintenance design, etc. Based on the existing 1-year wind measurement data in the wind farm area, the genetic algorithm was used to optimize the micro-site selection. On the basis of the wind turbine location optimization, optimization algorithms such as the single-source shortest path algorithm and the minimum spanning tree algorithm were used to optimize electric lines and maintenance roads. The practice shows that the research results can provide important…

  2. Optimal resource states for local state discrimination

    Science.gov (United States)

    Bandyopadhyay, Somshubhro; Halder, Saronath; Nathanson, Michael

    2018-02-01

    We study the problem of locally distinguishing pure quantum states using shared entanglement as a resource. For a given set of locally indistinguishable states, we define a resource state to be useful if it can enhance local distinguishability and optimal if it can distinguish the states as well as global measurements and is also minimal with respect to a partial ordering defined by entanglement and dimension. We present examples of useful resources and show that an entangled state need not be useful for distinguishing a given set of states. We obtain optimal resources with explicit local protocols to distinguish multipartite Greenberger-Horne-Zeilinger and graph states and also show that a maximally entangled state is an optimal resource under one-way local operations and classical communication to distinguish any bipartite orthonormal basis which contains at least one entangled state of full Schmidt rank.

  3. Optimization of the nuclear power engineering safety on the basis of social and economic parameters

    International Nuclear Information System (INIS)

    Kozlov, V.F.; Kuz'min, I.I.; Lystsov, V.N.; Amosova, T.V.; Makhutov, N.A.; Men'shikov, V.F.

    1995-01-01

    A principle for optimizing nuclear power engineering safety is presented, based on estimating risks to human health while taking into account the peculiarities of the socio-economic system and other types of economic activity in the region. The average expected duration of forthcoming life and the cost of its prolongation serve as units for measuring human safety. It is shown that if the expenditures on NPP technical safety exceed the scientifically substantiated costs for the region obtained by applying the above principle, then the risk to the population will exceed the minimum achievable level. 8 refs., 2 figs., 1 tab

  4. Experimental evaluation and basis function optimization of the spatially variant image-space PSF on the Ingenuity PET/MR scanner

    International Nuclear Information System (INIS)

    Kotasidis, Fotis A.; Zaidi, Habib

    2014-01-01

    Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherical symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom, and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function
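The Kaiser-Bessel basis functions mentioned above have a simple radial profile. The sketch below evaluates the m = 0 member of the family using NumPy's `np.i0` (modified Bessel function of order zero); this is an illustrative assumption for the profile shape, since reconstruction software typically uses the generalized (higher-order m) blob. The support radius `a` and shape parameter `alpha` are the two parameters the study varies.

```python
import numpy as np

def kaiser_bessel(r, a=2.0, alpha=10.0):
    """Radially symmetric Kaiser-Bessel (m = 0) basis function with
    support radius a and shape parameter alpha, normalized to 1 at r = 0."""
    r = np.asarray(r, dtype=float)
    inside = np.abs(r) <= a
    z = np.sqrt(np.clip(1 - (r / a) ** 2, 0, None))
    return np.where(inside, np.i0(alpha * z) / np.i0(alpha), 0.0)
```

Larger `a` widens each blob's overlap with its neighbors and smaller `alpha` flattens the profile, which is consistent with the reported resolution degradation for basis functions with a large radius and small shape parameter.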

  5. Experimental evaluation and basis function optimization of the spatially variant image-space PSF on the Ingenuity PET/MR scanner

    Energy Technology Data Exchange (ETDEWEB)

    Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, 9700 RB (Netherlands)

    2014-06-15

    Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom-made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis
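The PSF components quoted above are widths of the reconstructed point-source profiles. As a rough illustration of that measurement step (not the authors' code; the Gaussian profile, its sigma, and the sampling grid are invented), the sketch below estimates the FWHM of a sampled one-dimensional point-source profile by linearly interpolating the half-maximum crossings:

```python
import math

def fwhm(xs, ys):
    """Estimate FWHM of a peaked 1-D profile by interpolating at half maximum."""
    peak = max(ys)
    half = peak / 2.0
    i_peak = ys.index(peak)

    def crossing(indices):
        # walk away from the peak until the profile falls below half maximum,
        # then interpolate the crossing position between the two samples
        prev_i = i_peak
        for i in indices:
            if ys[i] < half:
                x0, y0 = xs[i], ys[i]
                x1, y1 = xs[prev_i], ys[prev_i]
                return x0 + (half - y0) * (x1 - x0) / (y1 - y0)
            prev_i = i
        raise ValueError("profile does not fall below half maximum")

    left = crossing(range(i_peak - 1, -1, -1))
    right = crossing(range(i_peak + 1, len(ys)))
    return right - left

# hypothetical Gaussian PSF profile with sigma = 1.7 mm, sampled every 0.05 mm
sigma = 1.7
xs = [i * 0.05 for i in range(-200, 201)]
ys = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
print(round(fwhm(xs, ys), 2))  # ≈ 2.355 * sigma ≈ 4.0 mm
```

For a Gaussian, FWHM = 2√(2 ln 2)·σ ≈ 2.355σ, which the interpolated estimate recovers closely.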

  6. Successful operation of biogas plants. Data acquisition as a basis of successful optimization measures; Erfolgreicher Betrieb von Biogasanlagen. Datenerfassung als Grundlage erfolgreicher Optimierungsmassnahmen

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-09-19

    At the 2nd Bayreuth expert meeting on biomass on 6 June 2012 in Bayreuth (Federal Republic of Germany), the following lectures were held: (1) Presentation of the activities in the bioenergy sector of the Landwirtschaftliche Lehranstalt Bayreuth (Rainer Prischenk); (2) State of the art of utilizing biogas in Oberfranken from the view of FVB e.V. (Wolfgang Holland Goetz); (3) Optimization of plant operation by means of intelligent control (Christian Seier); (4) Process optimization by means of identification of biogas losses and evaluation of the load and emission behaviour of gas engines (Wolfgang Schreier); (5) Data acquisition and implementation of optimization measures from the point of view of an environmental verifier (Thorsten Grantner); (6) Economic analysis and optimization by means of the LfL program BZA Biogas (Josef Winkler); (7) Detailed data acquisition as a necessary basis of process optimization (Timo Herfter); (8) Case examples of the biological support of biogas plants and their correct evaluation (Birgit Pfeifer); (9) Systematic acquisition of operational data as a basis for increasing efficiency, using the Praxisforschungsbiogasanlage of the University of Hohenheim (Hans-Joachim Naegele); (10) Practical report: the biogas plant Sochenberg on the way towards 100% energy utilization (Uli Bader).

  7. Lithium photoionization cross-section and dynamic polarizability using square integrable basis sets and correlated wave functions

    International Nuclear Information System (INIS)

    Hollauer, E.; Nascimento, M.A.C.

    1985-01-01

    The photoionization cross-section and dynamic polarizability for the lithium atom are calculated using a discrete basis set to represent both the bound and continuum states of the atom, from which an approximation to the complex dynamic polarizability is constructed. The photoionization cross-section is extracted from the imaginary part of the complex dynamic polarizability, and the dynamic polarizability from its real part. The results are in good agreement with experiment and with other, more elaborate calculations (Author) [pt

  8. Vibration behavior optimization of planetary gear sets

    Directory of Open Access Journals (Sweden)

    Farshad Shakeri Aski

    2014-12-01

    Full Text Available This paper presents a global optimization method focused on planetary gear vibration reduction by means of tip relief profile modifications. A nonlinear dynamic model is used to study the vibration behavior. In order to find the optimal radius and amplitude, brute-force optimization is used. This approach is straightforward but requires considerable computational power: brute-force methods evaluate all possible solutions and then decide which one is the best. Results show the influence of the optimal profile on planetary gear vibrations.
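The exhaustive sweep over the two design variables can be sketched as below; the vibration objective here is a hypothetical smooth stand-in for the paper's nonlinear dynamic model, and the grid bounds are invented:

```python
import itertools
import math

def vibration(r, a):
    # hypothetical surrogate for peak-to-peak vibration as a function of
    # tip-relief radius r (mm) and amplitude a (mm); NOT the paper's model
    return (r - 2.5) ** 2 + 4 * (a - 0.012) ** 2 + 0.1 * math.sin(10 * r) ** 2

radii = [1.0 + 0.1 * i for i in range(31)]   # 1.0 .. 4.0 mm
amps = [0.001 * i for i in range(1, 31)]     # 0.001 .. 0.030 mm

# brute force: evaluate every (radius, amplitude) pair, keep the best
best = min(itertools.product(radii, amps), key=lambda p: vibration(*p))
print(best)  # the grid optimum, here r ≈ 2.5, a ≈ 0.012
```

The cost is the product of the grid sizes (here 31 × 30 evaluations), which is why brute force only stays practical for low-dimensional parameter spaces.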

  9. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the number of feature combinations escalates exponentially as the number of features increases. Unfortunately in data mining, as well as other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experimental results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
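The wrapper idea above (a classifier inside the fitness function, a metaheuristic searching over feature masks) can be sketched minimally. Everything here is illustrative: a tiny synthetic dataset, a leave-one-out nearest-centroid classifier as the fitness, and a stochastic hill climb standing in for the paper's swarm metaheuristic:

```python
import random

random.seed(7)

# toy data: four features, of which only features 0 and 2 separate the classes
def sample(label):
    return [label * 2.0 + random.gauss(0, 0.3),    # informative
            random.gauss(0, 1),                    # pure noise
            label * -1.5 + random.gauss(0, 0.3),   # informative
            random.gauss(0, 1)]                    # pure noise

data = [(sample(y), y) for y in (0, 1) for _ in range(30)]

def fitness(mask):
    """Leave-one-out nearest-centroid accuracy using only the selected features."""
    if not any(mask):
        return 0.0
    correct = 0
    for i, (x, y) in enumerate(data):
        groups = {}
        for j, (xj, yj) in enumerate(data):
            if j != i:
                groups.setdefault(yj, []).append(xj)
        dist = {}
        for label, pts in groups.items():
            centroid = [sum(p[k] for p in pts) / len(pts) for k in range(4)]
            dist[label] = sum((x[k] - centroid[k]) ** 2
                              for k in range(4) if mask[k])
        if min(dist, key=dist.get) == y:
            correct += 1
    return correct / len(data)

# stochastic search over feature masks (a stand-in for the swarm metaheuristic)
best = [random.random() < 0.5 for _ in range(4)]
for _ in range(60):
    candidate = [b if random.random() < 0.7 else not b for b in best]
    if fitness(candidate) >= fitness(best):
        best = candidate
print(best, round(fitness(best), 2))
```

Swapping in a different classifier only changes `fitness`, and swapping in a different metaheuristic only changes the search loop, which is the flexibility the abstract emphasizes.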

  10. Multilevel geometry optimization

    Science.gov (United States)

    Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.

    2000-02-01

    Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
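The unit-coefficient versus noninteger-coefficient distinction above amounts to how the component energies are combined. A minimal numerical sketch (all energies and coefficients are invented for illustration, not taken from the paper):

```python
# G2-style additivity combines component energies with unit coefficients:
#   E ≈ E(high level, small basis) + E(low level, big basis) - E(low level, small basis)
# A multicoefficient method instead fits noninteger coefficients to reference data.

E_high_small = -76.208   # hypothetical component energies (hartree)
E_low_big = -76.398
E_low_small = -76.195

additivity = E_high_small + E_low_big - E_low_small             # unit coefficients
multicoef = 1.02 * E_high_small + 0.97 * E_low_big - 0.99 * E_low_small

print(round(additivity, 3), round(multicoef, 3))
```

In the real multicoefficient G2/G3 methods the coefficients are optimized against accurate reference energies over a training set; the invented values here only show the structural difference between the two combination rules.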

  11. Basis Set Convergence of Indirect Spin-Spin Coupling Constants in the Kohn-Sham Limit for Several Small Molecules

    Czech Academy of Sciences Publication Activity Database

    Kupka, T.; Nieradka, M.; Stachów, M.; Pluta, T.; Nowak, P.; Kjaer, H.; Kongsted, J.; Kaminský, Jakub

    2012-01-01

    Vol. 116, No. 14 (2012), pp. 3728-3738. ISSN 1089-5639. R&D Projects: GA ČR GPP208/10/P356. Institutional research plan: CEZ:AV0Z40550506. Keywords: consistent basis sets * density-functional methods * ab initio calculations * polarization propagator approximation. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 2.771, year: 2012

  12. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
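Re-applying WLO iteratively, so that after the weakest value is maximized the next-weakest is improved in turn, amounts to ranking solutions by a leximin ordering of their preference vectors. A minimal sketch over a hand-made candidate set (the candidates are invented; the paper works over temporal constraint networks, not explicit lists):

```python
def leximin_key(values):
    # compare solutions by their sorted preference vectors: first the minimum,
    # then the second smallest, and so on
    return sorted(values)

# hypothetical preference values of four candidate schedules
candidates = [
    (3, 9, 4),   # weakest value 3
    (4, 4, 8),   # weakest 4, next weakest 4
    (4, 5, 6),   # weakest 4, next weakest 5 -> leximin-best
    (2, 9, 9),   # weakest value 2
]

best = max(candidates, key=leximin_key)
print(best)  # → (4, 5, 6)
```

Plain WLO would be indifferent between (4, 4, 8) and (4, 5, 6), since both have minimum 4; the iterated criterion breaks the tie in favor of the one whose remaining values are better, which is what yields Pareto optimal solutions.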

  13. On the optimal identification of tag sets in time-constrained RFID configurations.

    Science.gov (United States)

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time of the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (the sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.

  14. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée

    1988-01-01

    Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple

  15. Optimal structural inference of signaling pathways from unordered and overlapping gene sets.

    Science.gov (United States)

    Acharya, Lipi R; Judeh, Thair; Wang, Guangdi; Zhu, Dongxiao

    2012-02-15

    A plethora of bioinformatics analysis has led to the discovery of numerous gene sets, which can be interpreted as discrete measurements emitted from latent signaling pathways. Their potential to infer signaling pathway structures, however, has not been sufficiently exploited. Existing methods accommodating discrete data do not explicitly consider signal cascading mechanisms that characterize a signaling pathway. Novel computational methods are thus needed to fully utilize gene sets and broaden the scope from focusing only on pairwise interactions to the more general cascading events in the inference of signaling pathway structures. We propose a gene set based simulated annealing (SA) algorithm for the reconstruction of signaling pathway structures. A signaling pathway structure is a directed graph containing up to a few hundred nodes and many overlapping signal cascades, where each cascade represents a chain of molecular interactions from the cell surface to the nucleus. Gene sets in our context refer to discrete sets of genes participating in signal cascades, the basic building blocks of a signaling pathway, with no prior information about gene orderings in the cascades. From a compendium of gene sets related to a pathway, SA aims to search for signal cascades that characterize the optimal signaling pathway structure. In the search process, the extent of overlap among signal cascades is used to measure the optimality of a structure. Throughout, we treat gene sets as random samples from a first-order Markov chain model. We evaluated the performance of SA in three case studies. In the first study conducted on 83 KEGG pathways, SA demonstrated a significantly better performance than Bayesian network methods. Since both SA and Bayesian network methods accommodate discrete data, use a 'search and score' network learning strategy and output a directed network, they can be compared in terms of performance and computational time. In the second study, we compared SA and

  16. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Full Text Available Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking areas in office and shopping mall complexes during the peak time of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough sets. Rough sets are used for the extraction of uncertain rules that exist in the databases of parking situations. Rough set theory provides the accuracy and roughness measures used to characterize the uncertainty of the parking lot. Approximation accuracy is employed to depict the accuracy of a rough classification [1] according to different dynamic parking scenarios. As such, the proposed hybrid metaphor, comprising Tabu Search and rough sets, could provide substantial research directions for other similar hard optimization problems.
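The reactive-search core above is tabu search: repeatedly take the best non-tabu move, even if it worsens the current solution, while a short memory forbids immediately undoing recent moves. A minimal sketch on an invented parking-style toy cost (not the paper's rough-set-assisted model):

```python
def cost(x):
    # hypothetical congestion cost over 6 binary "open lot" decisions:
    # exactly 3 lots should be open, and adjacent open lots add congestion
    penalty = abs(sum(x) - 3) * 10
    spread = sum(x[i] * x[i + 1] for i in range(5))
    return penalty + spread

x = [0] * 6
tabu = []                                   # recently flipped positions
best, best_c = x[:], cost(x)
for _ in range(50):
    moves = [i for i in range(6) if i not in tabu]
    # best admissible move, accepted even when it does not improve
    i = min(moves, key=lambda i: cost(x[:i] + [1 - x[i]] + x[i + 1:]))
    x[i] = 1 - x[i]
    tabu = (tabu + [i])[-2:]                # short tabu tenure of 2
    if cost(x) < best_c:
        best, best_c = x[:], cost(x)
print(best, best_c)  # → three non-adjacent open lots, cost 0
```

In the paper, rough-set-derived rules would additionally guide which moves are worth evaluating under uncertain parking data; here the neighborhood is simply all single-bit flips.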

  17. The optimal design of UAV wing structure

    Science.gov (United States)

    Długosz, Adam; Klimek, Wiktor

    2018-01-01

    The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness, and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. Adequacy of the numerical model is verified against results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.

  18. Assessing the optimality of ASHRAE climate zones using high resolution meteorological data sets

    Science.gov (United States)

    Fils, P. D.; Kumar, J.; Collier, N.; Hoffman, F. M.; Xu, M.; Forbes, W.

    2017-12-01

    Energy consumed by built infrastructure constitutes a significant fraction of the nation's energy budget. According to a 2015 US Energy Information Administration report, 41% of the energy used in the US went to residential and commercial buildings. Additional research has shown that 32% of commercial building energy goes into heating and cooling the building. The American National Standards Institute and the American Society of Heating, Refrigerating and Air-Conditioning Engineers Standard 90.1 provides climate zones for the current state of practice, since heating and cooling demands are strongly influenced by spatio-temporal weather variations. For this reason, we have been assessing the optimality of the climate zones using high-resolution daily climate data from NASA's DAYMET database. We analyzed time series of meteorological data sets for all ASHRAE climate zones between 1980 and 2016 inclusive. We computed the mean, standard deviation, and other statistics for a set of meteorological variables (solar radiation, maximum and minimum temperature) within each zone. By plotting all the zonal statistics, we analyzed patterns and trends in those data over the past 36 years. We compared the mean of each zone to its standard deviation to determine the range of spatial variability that exists within each zone. If the band around the mean is too large, it indicates that regions in the zone experience a wide range of weather conditions, and perhaps a common set of building design guidelines would lead to a non-optimal energy consumption scenario. In this study we have observed strong variation among the different climate zones. Some have shown consistent patterns in the past 36 years, indicating that the zone was well constructed, while others have greatly deviated from their mean, indicating that the zone needs to be reconstructed. We also looked at redesigning the climate zones based on high-resolution climate data.
    We are using building simulation models like EnergyPlus to develop
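The zonal analysis described above reduces to grouping observations by zone and comparing each zone's mean to its spread. A minimal sketch with invented daily temperatures and ASHRAE-style zone labels (the real study uses gridded DAYMET time series):

```python
import statistics

# hypothetical daily maximum temperatures (°C) tagged with a zone id
records = [
    ("4A", 11.2), ("4A", 13.0), ("4A", 9.8), ("4A", 12.4),
    ("1A", 31.5), ("1A", 30.2), ("1A", 33.0), ("1A", 29.8),
]

zones = {}
for zone, t in records:
    zones.setdefault(zone, []).append(t)

for zone, temps in sorted(zones.items()):
    mean = statistics.mean(temps)
    sd = statistics.stdev(temps)
    # a wide sd relative to the zone's mean would flag a zone that may
    # need to be redrawn
    print(f"{zone}: mean={mean:.1f} sd={sd:.1f}")
```

In the actual analysis the same aggregation runs per zone over 36 years of daily solar radiation and temperature extremes rather than a handful of values.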

  19. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Po Jen; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw [Complex Liquids Laboratory, Department of Physics, National Central University, Chungli 320, Taiwan and Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taipei 115, Taiwan (China); Rapallo, Arnaldo [Istituto per lo Studio delle Macromolecole (ISMAC) Consiglio Nazionale delle Ricerche (CNR), via E. Bassini 15, C.A.P 20133 Milano (Italy)

    2014-03-14

    Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

  20. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    International Nuclear Information System (INIS)

    Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo

    2014-01-01

    Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

  1. Application of Fuzzy Sets for the Improvement of Routing Optimization Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Mattas Konstantinos

    2016-12-01

    Full Text Available The determination of the optimal circular path has become widely known for its difficulty and for its numerous applications in the organization and management of passenger and freight transport. It is a mathematical combinatorial optimization problem for which several deterministic and heuristic models have been developed in recent years, applicable to route organization issues, passenger and freight transport, storage and distribution of goods, waste collection, supply and control of terminals, as well as human resource management. The scope of the present paper is the development, with the use of fuzzy sets, of a practical, comprehensible, and fast heuristic algorithm for improving the ability of classical deterministic algorithms to identify optimal symmetric or asymmetric circular routes. The proposed fuzzy heuristic algorithm is compared to the corresponding deterministic ones with regard to the deviation of the proposed solution from the best known solution and the complexity of the calculations needed to obtain this solution. It is shown that the use of fuzzy sets reduces the deviation of the solutions identified by the classical deterministic algorithms from the best known solution by up to 35%.
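A classical deterministic baseline of the kind the fuzzy variant refines is the nearest-neighbour construction of a circular route: start somewhere, always visit the closest unvisited point, and close the loop. A minimal sketch with invented coordinates (the fuzzy improvement itself is not reproduced here):

```python
import math

# hypothetical stops of a circular route, as 2-D coordinates
points = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (0, 3), "E": (2, 1)}

def dist(p, q):
    return math.dist(points[p], points[q])

def nearest_neighbour(start):
    # greedily extend the route with the closest unvisited point
    route, rest = [start], set(points) - {start}
    while rest:
        nxt = min(rest, key=lambda p: dist(route[-1], p))
        route.append(nxt)
        rest.remove(nxt)
    return route + [start]          # close the circular route

def length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

tour = nearest_neighbour("A")
print(tour, round(length(tour), 2))
```

A fuzzy-assisted variant would soften the hard `min` choice, letting candidates that are nearly closest, by some membership function, also be considered, which is how such heuristics escape the locally greedy routes that inflate tour length.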

  2. Numerical Aspects of Atomic Physics: Helium Basis Sets and Matrix Diagonalization

    Science.gov (United States)

    Jentschura, Ulrich; Noble, Jonathan

    2014-03-01

    We present a matrix diagonalization algorithm for complex symmetric matrices, which can be used to determine the resonance energies of auto-ionizing states of comparatively simple quantum many-body systems such as helium. The algorithm is based on multi-precision arithmetic and proceeds via a tridiagonalization of the complex symmetric (not necessarily Hermitian) input matrix using generalized Householder transformations. Example calculations involving so-called PT-symmetric quantum systems lead to reference values which pertain to the imaginary cubic perturbation (the imaginary cubic anharmonic oscillator). We then proceed to novel basis sets for the helium atom and present results for Bethe logarithms in hydrogen and helium, obtained using the enhanced numerical techniques. Some intricacies of ``canned'' algorithms such as those used in LAPACK will be discussed. Our algorithm, for complex symmetric matrices such as those describing cubic resonances after complex scaling, is faster than LAPACK's built-in routines for specific classes of input matrices. It also offers flexibility in terms of the calculation of the so-called implicit shift, which is used to ``pivot'' the system toward convergence to diagonal form. We conclude with a wider overview.

  3. XZP + 1d and XZP + 1d-DKH basis sets for second-row elements: application to CCSD(T) zero-point vibrational energy and atomization energy calculations.

    Science.gov (United States)

    Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A

    2012-09-01

    Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5%. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates for the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal mol⁻¹) was found for SiH₄.

  4. Formation of the portfolio of high-rise construction projects on the basis of optimization of «risk-return» rate

    Science.gov (United States)

    Uvarova, Svetlana; Kutsygina, Olga; Smorodina, Elena; Gumba, Khuta

    2018-03-01

    The effectiveness and sustainability of an enterprise are based on the effectiveness and sustainability of its portfolio of projects. When creating a production program for a construction company based on a portfolio of projects and related to the planning and implementation of initiated organizational and economic changes, the problem of finding the optimal "risk-return" ratio of the program (portfolio of projects) is solved. The article proposes and validates a methodology for forming a portfolio of enterprise projects on the basis of the correspondence principle. Optimization of the portfolio of projects on the criterion of "risk-return" also contributes to the company's sustainability.
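In its simplest discrete form, the "risk-return" trade-off above is a screen over project subsets: maximize expected return subject to a cap on combined risk. A toy sketch (all figures invented; the paper's correspondence-principle methodology is considerably richer):

```python
import itertools

# hypothetical construction projects: name -> (expected return, risk score)
projects = {"P1": (12, 5), "P2": (8, 2), "P3": (15, 9), "P4": (6, 1)}
RISK_CAP = 8

# enumerate all non-empty subsets whose total risk stays under the cap,
# and keep the one with the highest total return
best = max(
    (s for r in range(1, len(projects) + 1)
     for s in itertools.combinations(projects, r)
     if sum(projects[p][1] for p in s) <= RISK_CAP),
    key=lambda s: sum(projects[p][0] for p in s),
)
print(sorted(best))  # → ['P1', 'P2', 'P4']
```

With additive risk this enumeration is exponential in the number of projects, which is why realistic portfolio formation uses optimization models rather than exhaustive search; the sketch only illustrates the selection criterion.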

  5. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectively has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm degenerates into a special scenario of the previous algorithm when the extended candidate set is reduced to a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have

  6. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.
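Direct search, the optimization family used above, polls a pattern of trial points around the current iterate and shrinks the mesh when no poll point improves; no gradients are needed. A minimal compass-search sketch on a smooth surrogate objective (not an actual printer characterization error):

```python
def direct_search(f, x, step=1.0, tol=1e-6):
    """Compass (coordinate) direct search: poll ±step along each axis,
    accept strict improvements, halve the step when a full poll fails."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                y = list(x)
                y[i] += d
                if f(y) < fx:
                    x, fx, improved = y, f(y), True
        if not improved:
            step /= 2          # shrink the mesh when no poll point improves
    return x, fx

# hypothetical 2-D surrogate for the characterization error surface
f = lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2
x, fx = direct_search(f, [0.0, 0.0])
print([round(v, 3) for v in x], round(fx, 6))  # converges to x ≈ [3, -1]
```

In the sampling-optimization setting, each objective evaluation would instead reprint and remeasure a candidate characterization set, which is exactly why a derivative-free method that is frugal with evaluations is attractive.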

  7. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ, which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ
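
    Once ∂f/∂γ is available in closed form, a gradient-type method of the kind the abstract mentions can be sketched generically. The differentiable stand-in below replaces the paper's LLE-based cost functional and its shape gradient, which are beyond a stand-alone example.

```python
# Generic steepest descent: step against the gradient until it is (nearly) zero.
def steepest_descent(f, grad, x0, lr=0.1, tol=1e-8, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:   # stop when the gradient norm is tiny
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Stand-in cost with minimum at gamma = (1, -2); gamma would be the shape coefficient.
f = lambda g: (g[0] - 1.0) ** 2 + 2.0 * (g[1] + 2.0) ** 2
grad = lambda g: [2.0 * (g[0] - 1.0), 4.0 * (g[1] + 2.0)]
gamma = steepest_descent(f, grad, [0.0, 0.0])
print(gamma)
```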

  8. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle: the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.

  9. Optimization of Boiler Heat Load in Water-Heating Boiler-House

    Directory of Open Access Journals (Sweden)

    B. A. Bayrashevsky

    2009-01-01

    Full Text Available An analytical method for the optimization of water-heating boiler loads has been developed on the basis of approximated semi-empirical dependences describing changes of boiler gross efficiency with load. A complex (∂tух/∂ξ)Δξ is determined on the basis of a systematic analysis (monitoring) of experimental data and Y. P. Pecker's formula for the calculation of balance losses q2. This complex makes it possible to set a corresponding correction to the standard value of boiler gross efficiency due to contamination of heating surfaces. Software for the optimization of water-heating boilers has been developed, and it is recommended for application under operational conditions.

  10. Multilevel geometry optimization

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, Jocelyn M. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Fast, Patton L. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Truhlar, Donald G. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States)

    2000-02-15

    Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol. (c) 2000 American Institute of Physics.

  11. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
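
    The "basis set expansion" step can be sketched with a plain SVD-based principal component analysis: an ensemble of time series is reduced to a few temporal modes plus per-run scores. The synthetic series below stand in for the long-running landslide model outputs; the meta-modelling and Sobol' index stages are not reproduced here.

```python
import numpy as np

def pca_reduce(Y, k):
    """Keep the k leading modes of Y (runs x time): returns (mean, modes, scores)."""
    mean = Y.mean(axis=0)
    # Thin SVD of the centered ensemble: rows of Vt are temporal modes, U*S the scores.
    U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    return mean, Vt[:k], U[:, :k] * S[:k]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# 40 synthetic "runs": two dominant modes of variation plus small noise.
a = rng.normal(size=(40, 1))
b = rng.normal(size=(40, 1))
Y = a * np.sin(2 * np.pi * t) + b * t + 0.01 * rng.normal(size=(40, t.size))

mean, modes, scores = pca_reduce(Y, k=2)
Y_hat = mean + scores @ modes            # reconstruction from just 2 components
rel_err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
print(rel_err)                           # small: two modes capture the variation
```

    In the paper's workflow, a Sobol' index would then be computed for each column of `scores` via a meta-model, instead of for every time step separately.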

  12. An Optimized Method for Terrain Reconstruction Based on Descent Images

    Directory of Open Access Journals (Sweden)

    Xu Xinchao

    2016-02-01

    Full Text Available An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang’e III. First, feature matching is conducted using geometric model constraints. Then, the initial terrain is obtained and the initial normal vector of each point is solved on the basis of the initial terrain. By changing the vector around the initial normal vector in small steps, a set of new vectors is obtained. By combining these vectors with the directions of light and camera, equations are set up on the basis of a surface reflection model. Then, a series of gray values is derived by solving the equations. The new optimized vector is recorded when the obtained gray value is closest to the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using laboratory images and descent images of Chang’e III. The results showed that the performance of the proposed method was better than that of the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent moon exploration missions.

  13. Beyond Low-Rank Representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering.

    Science.gov (United States)

    Wang, Yang; Wu, Lin

    2018-07-01

    Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for Multi-view spectral clustering, which elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, to yield a better graph partition than their single-view counterparts. In this paper we revisit it with a fundamentally different perspective by discovering LRR as essentially a latent clustered orthogonal projection based representation winged with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis orthogonal to others to indicate its members, which intuitively projects the view-specific feature representation to be the one spanned by all orthogonal basis to characterize the cluster structures. Upon this finding, we propose our technique with the following: (1) We decompose LRR into latent clustered orthogonal representation via low-rank matrix factorization, to encode the more flexible cluster structures than LRR over primal data objects; (2) We convert the problem of LRR into that of simultaneously learning orthogonal clustered representation and optimized local graph structure for each view; (3) The learned orthogonal clustered representations and local graph structures enjoy the same magnitude for multi-view, so that the ideal multi-view consensus can be readily achieved. The experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. OPTIMIZING ANTIMICROBIAL PHARMACODYNAMICS: A GUIDE FOR YOUR STEWARDSHIP PROGRAM

    Directory of Open Access Journals (Sweden)

    Joseph L. Kuti, PharmD

    2016-09-01

    Full Text Available Pharmacodynamic concepts should be applied to optimize antibiotic dosing regimens, particularly in the face of some multidrug resistant bacterial infections. Although the pharmacodynamics of most antibiotic classes used in the hospital setting are well described, guidance on how to select regimens and implement them into an antimicrobial stewardship program in one's institution are more limited. The role of the antibiotic MIC is paramount in understanding which regimens might benefit from implementation as a protocol or use in individual patients. This review article outlines the pharmacodynamics of aminoglycosides, beta-lactams, fluoroquinolones, tigecycline, vancomycin, and polymyxins with the goal of providing a basis for strategy to select an optimized antibiotic regimen in your hospital setting.

  15. Quadratic Hedging of Basis Risk

    Directory of Open Access Journals (Sweden)

    Hardy Hulley

    2015-02-01

    Full Text Available This paper examines a simple basis risk model based on correlated geometric Brownian motions. We apply quadratic criteria to minimize basis risk and hedge in an optimal manner. Initially, we derive the Föllmer–Schweizer decomposition for a European claim. This allows pricing and hedging under the minimal martingale measure, corresponding to the local risk-minimizing strategy. Furthermore, since the mean-variance tradeoff process is deterministic in our setup, the minimal martingale- and variance-optimal martingale measures coincide. Consequently, the mean-variance optimal strategy is easily constructed. Simple pricing and hedging formulae for put and call options are derived in terms of the Black–Scholes formula. Due to market incompleteness, these formulae depend on the drift parameters of the processes. By making a further equilibrium assumption, we derive an approximate hedging formula, which does not require knowledge of these parameters. The hedging strategies are tested using Monte Carlo experiments, and are compared with results achieved using a utility maximization approach.
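
    The record's pricing and hedging formulae are expressed in terms of the Black-Scholes formula. As background, here is the standard Black-Scholes call/put price using only the Python standard library; the basis-risk adjustments of the paper itself are not reproduced.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(S, K, T, r, sigma, call=True):
    """European option price under the Black-Scholes model."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

S, K, T, r, sigma = 100.0, 95.0, 0.5, 0.02, 0.25
c = black_scholes(S, K, T, r, sigma, call=True)
p = black_scholes(S, K, T, r, sigma, call=False)
# Sanity check: put-call parity, c - p = S - K*exp(-rT).
print(c, p, c - p - (S - K * exp(-r * T)))
```

    In the basis-risk setting of the paper, the hedging instrument's drift and correlation parameters additionally enter the formulae because the market is incomplete.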

  16. REGION AGRICULTURE DEVELOPMENT ON THE BASIS OF OPTIMIZATION MODELLING

    Directory of Open Access Journals (Sweden)

    V.P. Neganova

    2008-06-01

    Full Text Available The scientific substantiation of the accommodation of agricultural production among the territorial divisions of a region is a complex social-economic problem. The solution of this problem demands the definition of market-oriented criteria of optimality. The author considers three criteria of optimality: maximum of profit; maximum of gross output net of production costs and of the costs for simple reproduction of soil fertility; and maximum of marginal income. The conclusion is drawn that the best criterion for the optimization of production is the maximum of marginal income (the marginal income without constant costs), which will raise the economic and ecological efficiency of agricultural production at all management levels. As a result of agricultural production optimization, the Republic of Bashkortostan will become a self-sufficient, food-exporting region of Russia. In this case the republic is capable of providing 5.8-6.5 million persons with food substances (protein, carbohydrates, etc.). This exceeds the population of the republic by 40-60 %.

  17. Accurate Open-Shell Noncovalent Interaction Energies from the Orbital-Optimized Møller-Plesset Perturbation Theory: Achieving CCSD Quality at the MP2 Level by Orbital Optimization.

    Science.gov (United States)

    Soydaş, Emine; Bozkaya, Uğur

    2013-11-12

    The accurate description of noncovalent interactions is one of the most challenging problems in modern computational chemistry, especially for open-shell systems. In this study, an investigation of open-shell noncovalent interactions with the orbital-optimized MP2 and MP3 (OMP2 and OMP3) is presented. For the considered test set of 23 complexes, mean absolute errors in noncovalent interaction energies (with respect to CCSD(T) at complete basis set limits) are 0.68 (MP2), 0.37 (OMP2), 0.59 (MP3), 0.23 (OMP3), and 0.38 (CCSD) kcal mol⁻¹. Hence, with a greatly reduced computational cost, one may achieve CCSD quality at the MP2 level by orbital optimization [scaling formally as O(N⁶) for CCSD compared to O(N⁵) for OMP2, where N is the number of basis functions]. Further, one may obtain a considerably better performance than CCSD using the OMP3 method, which also has a lower cost than CCSD.

  18. Pareto Optimization Identifies Diverse Set of Phosphorylation Signatures Predicting Response to Treatment with Dasatinib.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2015-01-01

    Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.
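
    NSGA-II ranks candidate solutions by Pareto dominance. The core test can be sketched in a few lines; the toy objective tuples below (prediction error, signature size, network distance to the drug target, all minimized) are illustrative placeholders, not the paper's data.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy objective vectors: (error, n_features, target_distance)
candidates = [(0.17, 10, 2), (0.17, 5, 2), (0.20, 3, 1), (0.15, 12, 4), (0.25, 12, 5)]
front = pareto_front(candidates)
print(front)   # the three mutually non-dominated trade-offs survive
```

    NSGA-II additionally applies crowding-distance sorting and elitist selection over generations; the dominance relation above is the criterion all of that machinery is built on.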

  19. Reduced design load basis for ultimate blade loads estimation in multidisciplinary design optimization frameworks

    DEFF Research Database (Denmark)

    Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.

    2016-01-01

    The aim is to provide a fast and reliable approach to estimate ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large amount of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs with the two blade designs each are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shape of the estimated envelopes are similar...

  20. Searching for optimal integer solutions to set partitioning problems using column generation

    OpenAIRE

    Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael

    2007-01-01

    We describe a new approach to produce integer feasible columns to a set partitioning problem directly in solving the linear programming (LP) relaxation using column generation. Traditionally, column generation is aimed at solving the LP relaxation as quickly as possible without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re...
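
    For context, a set partitioning solution must cover every row exactly once. The brute-force sketch below finds the optimal integer solution of a tiny made-up instance; column generation aims to produce exactly such integer-compatible columns while solving the LP relaxation, rather than enumerating subsets as done here.

```python
from itertools import combinations

rows = {1, 2, 3, 4}
# Toy columns: (set of rows covered, cost) -- an illustrative instance, not from the paper.
columns = [({1, 2}, 3.0), ({3, 4}, 3.0), ({1, 3}, 2.5), ({2, 4}, 2.5), ({1, 2, 3, 4}, 6.5)]

best = None
for r in range(1, len(columns) + 1):
    for subset in combinations(range(len(columns)), r):
        covered = [row for j in subset for row in columns[j][0]]
        # Partition constraint: every row covered exactly once.
        if sorted(covered) == sorted(rows):
            cost = sum(columns[j][1] for j in subset)
            if best is None or cost < best[0]:
                best = (cost, subset)
print(best)   # cheapest exact partition of the rows
```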

  1. The research of optimal selection method for wavelet packet basis in compressing the vibration signal of a rolling bearing in fans and pumps

    International Nuclear Information System (INIS)

    Hao, W; Jinji, G

    2012-01-01

    Compressing the vibration signal of a rolling bearing has important significance for wireless monitoring and remote diagnosis of the fans and pumps that are widely used in the petrochemical industry. In this paper, according to the characteristics of the vibration signal of a rolling bearing, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of the vibration signal of a rolling bearing using the wavelet packet transform at various compression ratios, and propose a method to precisely select a wavelet packet basis. Through an actual signal, we come to the conclusion that an orthogonal wavelet packet basis with low vanishing moments should be used to compress the vibration signal of a rolling bearing to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these orthogonal wavelet packet bases with low vanishing moments, the 'coif' wavelet packet basis obtains the best signal-to-noise ratio at the same compression ratio owing to its superior symmetry.
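
    The compression scheme (orthogonal transform, keep the largest coefficients, reconstruct, measure SNR) can be sketched with a hand-written Haar transform standing in for the paper's wavelet packet bases; the 'coif' family itself would need a wavelet library such as PyWavelets.

```python
import numpy as np

def haar(x):
    """Full orthogonal Haar decomposition of a length-2^k signal."""
    x = x.astype(float).copy()
    out = []
    while x.size > 1:
        out.append((x[0::2] - x[1::2]) / np.sqrt(2.0))   # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)           # approximation
    out.append(x)
    return np.concatenate(out[::-1])

def ihaar(c):
    """Inverse of haar() for a length-2^k coefficient vector."""
    a = c[:1].copy()
    pos = 1
    while pos < c.size:
        d = c[pos:2 * pos]
        x = np.empty(2 * pos)
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        a = x
        pos *= 2
    return a

n = 1024
t = np.arange(n) / n
# Stand-in "vibration" signal: two sinusoidal components.
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

c = haar(sig)
keep = np.argsort(np.abs(c))[-n // 8:]        # keep largest 1/8 -> 8:1 ratio
c_comp = np.zeros_like(c)
c_comp[keep] = c[keep]
rec = ihaar(c_comp)
snr_db = 10 * np.log10(np.sum(sig**2) / np.sum((sig - rec)**2))
print(round(snr_db, 1))
```

    Because the transform is orthogonal, the reconstruction error energy equals the energy of the discarded coefficients, which is what makes the choice of basis decisive for the SNR at a fixed compression ratio.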

  2. Optimal wage setting for an export oriented firm under labor taxes and labor mobility

    Directory of Open Access Journals (Sweden)

    Raúl Ponce Rodríguez

    2005-01-01

    Full Text Available In this paper a theoretical model is developed to study the incentives that a labor tax might induce in the optimal wage setting of an export-oriented firm. In particular, we analyze the interaction whereby a labor tax tends to reduce the wage, since the firm is induced to shift the tax burden backwards to its employees, minimizing the possible increase in payroll costs and the fall in profits. However, a lower wage might not be an optimal response to the establishment of a labor tax because it increases labor turnover, and as a result the firm faces both an output opportunity cost and a labor turnover cost. The firm thus optimally decides to respond to the qualification and labor taxes by increasing the after-tax wage.

  3. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    Science.gov (United States)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
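
    A minimal harmony search loop can be sketched independently of the hydraulics: the paper couples HS with EPANET 2.0 and penalizes hydraulic-constraint violations, which is replaced here by a simple stand-in objective (valve settings would take the place of x).

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.1, iters=2000, seed=7):
    rng = random.Random(seed)
    # Harmony memory: a pool of candidate solutions ("improvisations").
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [f(x) for x in mem]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:               # draw from memory...
                v = mem[rng.randrange(hms)][j]
                if rng.random() < par:            # ...with pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                 # or improvise at random
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        c = f(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                      # replace the worst harmony
            mem[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return mem[best], costs[best]

# Stand-in objective with minimum 0 at the origin.
f = lambda x: sum(v * v for v in x)
x, fx = harmony_search(f, [(-5.0, 5.0)] * 3)
print(x, fx)
```

    In the paper's model, f would be the leakage-related objective evaluated by the hydraulic simulator, plus penalty terms for any violated pressure constraints.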

  4. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    Science.gov (United States)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, full waveform inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the impedances PI and SI; consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered-grid finite difference method was applied to simulate the OBS survey. In the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate inversion result with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project(17-3312) of the Korea Institute of

  5. Calculations of wavefunctions and energies of electron system in Coulomb potential by variational method without a basis set

    International Nuclear Information System (INIS)

    Bykov, V.P.; Gerasimov, A.V.

    1992-08-01

    A new variational method without a basis set for calculation of the eigenvalues and eigenfunctions of Hamiltonians is suggested. The expansion of this method for the Coulomb potentials is given. Calculation of the energy and charge distribution in the two-electron system for different values of the nuclear charge Z is made. It is shown that at small Z the Coulomb forces disintegrate the electron cloud into two clots. (author). 3 refs, 4 figs, 1 tab

  6. Ventilation area measured with EIT in order to optimize PEEP settings in mechanically ventilated patients

    NARCIS (Netherlands)

    Blankman, P; Groot Jebbink, E; Preis, C; Bikker, I.; Gommers, D.

    2012-01-01

    INTRODUCTION. Electrical Impedance Tomography (EIT) is a non-invasive imaging technique, which can be used to visualize ventilation. Ventilation will be measured by impedance changes due to ventilation. OBJECTIVES. The aim of this study was to optimize PEEP settings based on the ventilation area of

  7. Setting clear expectations for safety basis development

    International Nuclear Information System (INIS)

    MORENO, M.R.

    2003-01-01

    DOE-RL has set clear expectations for a cost-effective approach to achieving compliance with the Nuclear Safety Management requirements (10 CFR 830, Nuclear Safety Rule) which will ensure long-term benefit to Hanford. To facilitate implementation of these expectations, tools were developed to streamline and standardize safety analysis and safety document development, resulting in a shorter and more predictable DOE approval cycle. A Hanford Safety Analysis and Risk Assessment Handbook (SARAH) was issued to standardize methodologies for the development of safety analyses. A Microsoft Excel spreadsheet (RADIDOSE) was issued for the evaluation of radiological consequences for accident scenarios often postulated for Hanford. A standard Site Documented Safety Analysis (DSA) detailing the safety management programs was issued for use as a means of compliance with a majority of 3009 Standard chapters. An in-process review was developed between DOE and the Contractor to facilitate DOE approval and provide early course correction. As a result of setting expectations and providing safety analysis tools, the four Hanford Site waste management nuclear facilities were able to integrate into one Master Waste Management Documented Safety Analysis (WM-DSA)

  8. Training set optimization and classifier performance in a top-down diabetic retinopathy screening system

    Science.gov (United States)

    Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.

    2013-03-01

    Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and k-NN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve gives evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
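
    The core experiment (grow the training set, score held-out cases, track AUC) can be sketched with synthetic data and a simple nearest-centroid scorer; the paper's classifiers (PLS, SVM, kNN, Naive Bayes) and retinal features are replaced here by made-up Gaussian feature vectors.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the rank-sum (Mann-Whitney) statistic: P(score_pos > score_neg)."""
    pairs = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            pairs += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    return pairs / (len(scores_pos) * len(scores_neg))

rng = np.random.default_rng(3)
d = 5
# Synthetic "control" vs "DR" feature vectors with a mean shift between classes.
neg = rng.normal(0.0, 1.0, size=(400, d))
pos = rng.normal(0.8, 1.0, size=(150, d))
test_neg, test_pos = neg[300:], pos[100:]

for n_train in (10, 50, 100):
    mu_neg = neg[:n_train].mean(axis=0)
    mu_pos = pos[:n_train].mean(axis=0)
    # Score: difference of distances to the two class centroids.
    score = lambda X: (np.linalg.norm(X - mu_neg, axis=1)
                       - np.linalg.norm(X - mu_pos, axis=1))
    print(n_train, round(auc(score(test_pos), score(test_neg)), 3))
```

    As in the paper, the interesting output is the trend of AUC with training set size, and the point at which it flattens out.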

  9. Vibrational frequency scaling factors for correlation consistent basis sets and the methods CC2 and MP2 and their spin-scaled SCS and SOS variants

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)

    2014-11-21

    We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2) with and without a spin-component scaling (SCS or spin-opposite scaling (SOS)). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to a slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order and their spin-component scaled variants.
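
    A frequency scaling factor of the usual least-squares kind is the c minimizing Σᵢ (c·ωᵢ_calc − ωᵢ_ref)², which has the closed form below. The frequencies in the example are made-up placeholders, not the paper's CC2/MP2 data.

```python
def scaling_factor(calc, ref):
    """Least-squares scale factor: c = sum(w_calc*w_ref) / sum(w_calc^2)."""
    num = sum(c * r for c, r in zip(calc, ref))
    den = sum(c * c for c in calc)
    return num / den

def rms_residual(calc, ref, c):
    """RMS deviation of the scaled frequencies from the reference."""
    n = len(calc)
    return (sum((c * w - r) ** 2 for w, r in zip(calc, ref)) / n) ** 0.5

calc = [3100.0, 1650.0, 1200.0, 850.0]   # hypothetical harmonic frequencies, cm^-1
ref = [2980.0, 1600.0, 1170.0, 830.0]    # hypothetical reference fundamentals
c = scaling_factor(calc, ref)
print(round(c, 4), round(rms_residual(calc, ref, c), 1))
```

    The residual after scaling is what the paper reports as the remaining deviation from the reference data for each method/basis combination.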

  10. Basis set expansion for inverse problems in plasma diagnostic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jones, B.; Ruiz, C. L. [Sandia National Laboratories, PO Box 5800, Albuquerque, New Mexico 87185 (United States)

    2013-07-15

    A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20–25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
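
    The unfold strategy can be sketched generically: expand the unknown source in basis functions, push each basis function through the (known) forward operator, and fit the measured data by linear least squares. The smoothing matrix below is a stand-in for the real instrument transform (e.g. the Abel projection), and the Gaussian basis and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.05, 0.95, 15)
width = 0.05
# Gaussian basis functions evaluated on the grid (one per column of B).
B = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Stand-in instrument response: a row-normalized smoothing matrix K.
K = np.exp(-((x[:, None] - x[None, :]) / 0.03) ** 2)
K /= K.sum(axis=1, keepdims=True)

true_coeff = np.zeros(15)
true_coeff[[4, 9]] = [1.0, 0.6]
source = B @ true_coeff
data = K @ source + 0.005 * rng.normal(size=x.size)   # "measured" projection

# Fit the data with the forward-projected basis, then reconstruct the source.
A = K @ B
coeff, *_ = np.linalg.lstsq(A, data, rcond=None)
recon = B @ coeff
rel_err = np.linalg.norm(recon - source) / np.linalg.norm(source)
print(rel_err)
```

    As the record discusses, the practical resolution of such an inversion is set by the noise in the data: narrower or more numerous basis functions make A worse conditioned and amplify the noise in the fitted coefficients.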

  11. COMPARING INTRA- AND INTERENVIRONMENTAL PARAMETERS OF OPTIMAL SETTING IN BREEDING EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Domagoj Šimić

    2004-06-01

    Full Text Available A series of biometrical and quantitative-genetic parameters, not well known in Croatia, is used for the most important agronomic traits to determine the optimal genotype setting within a location as well as among locations. The objectives of the study are to estimate and compare (1) parameters of intra-environment setting: the effective mean square error (EMSE) in lattice design, the relative efficiency (RE) of the lattice design (LD) compared to the randomized complete block design (RCBD), and the repeatability (Rep) of a plot value; and (2) operative heritability (h2), as a parameter of inter-environment setting, in an experiment with 72 maize hybrids. Trials were set up in four environments (two locations in two years) evaluating grain yield and stalk rot. EMSE values corresponded across environments for both traits, while the estimates of the RE of the LD varied inconsistently over environments and traits. Rep estimates differed more over environments than over traits. Rep values did not correspond with h2 estimates: Rep estimates for stalk rot were higher than those for grain yield, while h2 for grain yield was higher than for stalk rot in all instances. Our results suggest that, due to the importance of genotype × environment interaction, there is a need for multi-environment trials for both traits. If the experimental framework must be reduced for economic or other reasons, decreasing the number of locations per year rather than the number of years of investigation is recommended.

  12. Existence theorem and optimality conditions for a class of convex semi-infinite problems with noncompact index sets

    Directory of Open Access Journals (Sweden)

    Olga Kostyukova

    2017-11-01

    Full Text Available The paper is devoted to the study of a special class of semi-infinite problems arising in nonlinear parametric semi-infinite programming when the differential properties of the solutions are studied. These problems are convex and possess noncompact index sets. In the paper, we present conditions guaranteeing the existence of optimal solutions and prove a new optimality criterion. An example illustrating the obtained results is presented.

  13. RHF and DFT study of the optimized molecular structure and atomic ...

    African Journals Online (AJOL)

    Restricted Hartree-Fock (RHF) and Density Functional Theory (DFT) studies were carried out on the organic semiconductor material pentacene. The 6-31G and 6-31G* basis sets were used to optimize the molecule and compute the charge distribution at both levels of theory. The results show that the carbon-hydrogen bonds in ...

  14. Need for reaction coordinates to ensure a complete basis set in an adiabatic representation of ion-atom collisions

    Science.gov (United States)

    Rabli, Djamal; McCarroll, Ronald

    2018-02-01

    This review surveys the different theoretical approaches used to describe inelastic and rearrangement processes in collisions involving atoms and ions. For a range of energies from a few meV up to about 1 keV, the adiabatic representation is expected to be valid, and under these conditions inelastic and rearrangement processes take place via a network of avoided crossings of the potential energy curves of the collision system. In general, such avoided crossings are finite in number. The non-adiabatic coupling, due to the breakdown of the Born-Oppenheimer separation of the electronic and nuclear variables, depends on the ratio of the electron mass to the nuclear mass terms in the total Hamiltonian. By limiting the terms in the total Hamiltonian to those correct to first order in the electron-to-nuclear mass ratio, a system of reaction coordinates is found which allows for a correct description of both inelastic and rearrangement channels. The connection between the use of reaction coordinates in the quantum description and the electron translation factors of the impact parameter approach is established. A major result is that only when reaction coordinates are used is it possible to introduce the notion of a minimal basis set. Such a set must include all avoided crossings, including both radial coupling and long-range Coriolis coupling; only when reaction coordinates are used can such a basis set be considered complete. In particular, when the centre of nuclear mass is used as the centre of coordinates, rather than the correct reaction coordinates, it is shown that erroneous results are obtained. A few results illustrating this important point are presented: one concerning a simple two-state Landau-Zener type avoided crossing, the other concerning a network of multiple crossings in a typical electron capture process involving a highly charged ion and a neutral atom.

  15. Optimizing structure of complex technical system by heterogeneous vector criterion in interval form

    Science.gov (United States)

    Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.

    2018-05-01

    The article examines methods for the development and multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, in the absence of sufficient knowledge of the parameters and variables for optimizing this structure. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation range. The suggested approach is based on the combined use of methods of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form on the basis of the vector of heterogeneous quality criteria suggests the use of membership functions instead of coefficients weighting the criteria values; the membership functions show the degree of proximity of the realization of the designed system to the efficient, or Pareto-optimal, variants. The study analyzes an example of choosing the optimal variant for a complex system using heterogeneous quality criteria.

  16. Communication: Calculation of interatomic forces and optimization of molecular geometry with auxiliary-field quantum Monte Carlo

    Science.gov (United States)

    Motta, Mario; Zhang, Shiwei

    2018-05-01

    We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
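The geometry-optimization step described above can be sketched generically: steepest descent driven by gradients that carry statistical noise, as stochastic force estimates do. Everything below is a toy stand-in, not the authors' AFQMC method; the quadratic "potential energy surface" and all numerical values are hypothetical.

```python
import random

def steepest_descent(grad, x0, step=0.1, iters=200, noise=0.0, seed=1):
    """Steepest descent with optional Gaussian noise added to the gradient,
    mimicking statistically noisy forces."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * (gi + rng.gauss(0.0, noise)) for xi, gi in zip(x, g)]
    return x

# Toy "potential energy surface": E(x) = sum_i (x_i - r_i)^2, minimum at r.
r = [1.1, 0.0, -0.4]
grad = lambda x: [2.0 * (xi - ri) for xi, ri in zip(x, r)]
x_min = steepest_descent(grad, [0.0, 0.0, 0.0], noise=0.01)
```

With noisy gradients the iterate fluctuates around the minimum instead of converging exactly, which is why statistical error bars matter in the AFQMC setting.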

  17. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    Science.gov (United States)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  18. Basis set expansion for inverse problems in plasma diagnostic analysis

    Science.gov (United States)

    Jones, B.; Ruiz, C. L.

    2013-07-01

    A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)], 10.1063/1.1482156 is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
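The basis set expansion idea can be illustrated in a few lines: expand the unknown source in known basis functions, apply the forward transform to each basis function, fit the coefficients by least squares, and propagate measurement variances through the pseudoinverse (the propagation-of-errors step mentioned above). The 1-D Gaussian basis, the smoothing "instrument" matrix, and all values below are hypothetical stand-ins for the real Abel or time-of-flight kernels.

```python
import numpy as np

# Hypothetical 1-D illustration: source f(x) expanded in Gaussian basis
# functions; the "instrument" applies a known linear forward transform T.
x = np.linspace(0.0, 1.0, 50)
centers = np.linspace(0.1, 0.9, 8)
B = np.exp(-((x[:, None] - centers[None, :]) / 0.08) ** 2)  # basis, (50, 8)

# Stand-in forward transform: a smoothing kernel in place of the real
# measurement operator (Abel transform, time-of-flight kernel, ...).
T = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.05)
T /= T.sum(axis=1, keepdims=True)

true_c = np.array([0, 1, 2, 3, 3, 2, 1, 0], dtype=float)
data = T @ B @ true_c            # noiseless synthetic measurement

A = T @ B                        # forward-transformed basis
P = np.linalg.pinv(A)            # least-squares inversion operator
c = P @ data                     # recovered expansion coefficients

# Propagation of errors: for independent measurement variances sigma_j^2,
# var(c_i) = sum_j P_ij^2 sigma_j^2.
sigma = 0.01 * np.ones_like(data)
var_c = (P ** 2) @ sigma ** 2
```

Noise in `data` limits the usable number of basis functions in practice, which is the resolution limit the abstract discusses.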

  19. Research and Setting the Modified Algorithm "Predator-Prey" in the Problem of the Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2016-01-01

    Full Text Available We consider a class of algorithms for multi-objective optimization - Pareto-approximation algorithms, which presuppose a preliminary construction of a finite-dimensional approximation of the Pareto set, and thereby also of the Pareto front, of the problem. The article gives an overview of population-based and non-population-based Pareto-approximation algorithms, identifies their strengths and weaknesses, presents the canonical "predator-prey" algorithm, and shows its shortcomings. We offer a number of modifications of the canonical "predator-prey" algorithm with the aim of overcoming its drawbacks, and present the results of a broad study of the efficiency of these modifications. A peculiarity of the study is the use of quality indicators of the Pareto-approximation which previous publications have not used. In addition, we present the results of the meta-optimization of the modified algorithm, i.e. determining the optimal values of some of its free parameters. The study of the efficiency of the modified "predator-prey" algorithm has shown that the proposed modifications improve the following indicators of the basic algorithm: the cardinality of the set of archive solutions, the uniformity of the archive solutions, and the computation time. By and large, the research results have shown that the modified and meta-optimized algorithm achieves the same approximation as the basic algorithm, but with an order of magnitude fewer prey; computational costs are proportionally reduced.
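The archive maintained by Pareto-approximation algorithms is the set of non-dominated solutions. A minimal sketch of the dominance filter (assuming minimization of every objective; the sample points are hypothetical):

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (minimization assumed).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_archive(points):
    # Keep only the non-dominated points (the "archive" of the approximation).
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_archive([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

Quality indicators such as archive cardinality and uniformity, mentioned in the abstract, are then computed over this filtered set.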

  20. Will the changes proposed to the conceptual framework’s definitions and recognition criteria provide a better basis for the IASB standard setting?

    NARCIS (Netherlands)

    Brouwer, A.; Hoogendoorn, M.; Naarding, E.

    2015-01-01

    In this paper we evaluate the International Accounting Standards Board’s (IASB) efforts, in a discussion paper (DP) of 2013, to develop a new conceptual framework (CF) in the light of its stated ambition to establish a robust and consistent basis for future standard setting, thereby guiding standard

  1. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
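For the interval (box) uncertainty set, the robust counterpart has a simple closed form: a constraint sum_j a_j x_j <= b with a_j in [a_nom_j - delta_j, a_nom_j + delta_j] holds for every realization iff sum_j a_nom_j x_j + sum_j delta_j |x_j| <= b. A sketch of this worst-case feasibility check (the numbers are hypothetical):

```python
def robust_feasible_box(a_nom, delta, x, b):
    # Robust counterpart of sum_j a_j x_j <= b under interval uncertainty
    # a_j in [a_nom_j - delta_j, a_nom_j + delta_j]: the worst case is
    # sum_j a_nom_j x_j + sum_j delta_j |x_j| <= b.
    worst = (sum(an * xj for an, xj in zip(a_nom, x))
             + sum(d * abs(xj) for d, xj in zip(delta, x)))
    return worst <= b

# x = (1, 2): nominal value 3, worst-case adds 0.5*1 + 0.5*2 = 1.5.
ok = robust_feasible_box([1.0, 1.0], [0.5, 0.5], [1.0, 2.0], 5.0)
bad = robust_feasible_box([1.0, 1.0], [0.5, 0.5], [1.0, 2.0], 4.0)
```

Ellipsoidal and polyhedral sets replace the delta_j |x_j| term with norm-based terms, which is how the different counterparts in the paper arise.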

  2. Population health management as a strategy for creation of optimal healing environments in worksite and corporate settings.

    Science.gov (United States)

    Chapman, Larry S; Pelletier, Kenneth R

    2004-01-01

    This paper provides an overview of a population health management (PHM) approach to the creation of optimal healing environments (OHEs) in worksite and corporate settings. It presents a framework for consideration as the context for potential research projects to examine the health, well-being, and economic effects of a set of newer "virtual" prevention interventions operating in an integrated manner in worksite settings. The main topics discussed are the fundamentals of PHM, with basic terminology and core principles, a description of PHM core technology, and the implications of a PHM approach to creating OHEs.

  3. Basis of valve operator selection for SMART

    International Nuclear Information System (INIS)

    Kang, H. S.; Lee, D. J.; See, J. K.; Park, C. K.; Choi, B. S.

    2000-05-01

    SMART, an integral reactor with enhanced safety and operability, is under development for use of nuclear energy. The valve operators of the SMART system were selected through a data survey and a technical review of potential valve fabrication vendors, and this selection will support the establishment and optimization of the basic system design of SMART. In order to establish and optimize the basic system design of SMART, the basis for selecting the valve operator type was provided based on the basic design requirements. The basis of valve operator selection for SMART will be used as basic technical data for the SMART basic and detailed design and as fundamental material for future reactor development

  4. Basis of valve operator selection for SMART

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H. S.; Lee, D. J.; See, J. K.; Park, C. K.; Choi, B. S

    2000-05-01

    SMART, an integral reactor with enhanced safety and operability, is under development for use of nuclear energy. The valve operators of the SMART system were selected through a data survey and a technical review of potential valve fabrication vendors, and this selection will support the establishment and optimization of the basic system design of SMART. In order to establish and optimize the basic system design of SMART, the basis for selecting the valve operator type was provided based on the basic design requirements. The basis of valve operator selection for SMART will be used as basic technical data for the SMART basic and detailed design and as fundamental material for future reactor development.

  5. Sulcal set optimization for cortical surface registration.

    Science.gov (United States)

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of the manual labeling effort. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates the registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance of the unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
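The subset-selection criterion above can be sketched with a small Gaussian model: for each candidate subset, compute the trace of the covariance of the unconstrained variables conditioned on the chosen ones (Sigma_uu - Sigma_uc Sigma_cc^-1 Sigma_cu), and keep the subset minimizing it. The 3x3 covariance below is a hypothetical toy, not sulcal data.

```python
import numpy as np
from itertools import combinations

def conditional_trace(Sigma, chosen):
    # Trace of the covariance of the unchosen variables conditioned on the
    # chosen ones, for a zero-mean joint Gaussian with covariance Sigma.
    n = Sigma.shape[0]
    rest = [i for i in range(n) if i not in chosen]
    S_uu = Sigma[np.ix_(rest, rest)]
    S_uc = Sigma[np.ix_(rest, chosen)]
    S_cc = Sigma[np.ix_(chosen, chosen)]
    return np.trace(S_uu - S_uc @ np.linalg.solve(S_cc, S_uc.T))

def best_subset(Sigma, nc):
    # Exhaustive search over all size-nc subsets (feasible for small n only).
    n = Sigma.shape[0]
    return min(combinations(range(n), nc),
               key=lambda s: conditional_trace(Sigma, list(s)))

# Toy covariance: variable 0 is strongly correlated with 1 and 2, so
# constraining it explains most of the remaining variance.
Sigma = np.array([[1.0, 0.9, 0.9],
                  [0.9, 1.0, 0.81],
                  [0.9, 0.81, 1.0]])
subset = best_subset(Sigma, 1)
```

Here constraining variable 0 leaves a conditional trace of 0.38, against about 0.53 for either alternative, so the search selects it.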

  6. Optimizing the Bankruptcy Rates of Corporate Enterprises

    Directory of Open Access Journals (Sweden)

    Vasyliev Oleksii V.

    2017-10-01

    Full Text Available An important issue in forecasting the probability of bankruptcy is the formation of an optimal set of financial-economic performance indicators with high forecasting capacity. The article is aimed at optimizing the indicator system that can be used to build a model for diagnosing the probability of corporate failure. The known methods and models for diagnosing bankruptcy were analyzed, and it was found that they are based on financial performance indicators that use empirical data only. A set of financial performance indicators has been formed that can be used to forecast the probability of corporate bankruptcy or to plan anti-crisis measures. The practical significance of the study lies in developing a theoretical basis for solving issues arising in diagnosing the probability of bankruptcy of corporate enterprises. A prospect for further research in this direction is the development of an integrated indicator using fuzzy logic theory, taking into account both qualitative and quantitative performance indicators of an enterprise.

  7. A Method of Forming the Optimal Set of Disjoint Path in Computer Networks

    Directory of Open Access Journals (Sweden)

    As'ad Mahmoud As'ad ALNASER

    2017-04-01

    Full Text Available This work provides a short analysis of multipath routing algorithms. A modified algorithm for forming the maximum set of disjoint paths, taking their metrics into account, is offered. Paths are optimized through reconfiguration with an adjacent deadlocked path. Reconfigurations are realized within subgraphs that include only the vertices of the main path and of an adjacent deadlocked path. This reduces the search space for forming an optimal path and the time complexity of its formation.
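A greedy sketch of extracting disjoint paths: repeatedly find a shortest path by BFS and remove its edges. This only illustrates the idea of a set of edge-disjoint paths; it ignores the paper's metrics and deadlock-path reconfiguration, and a guaranteed maximum would require a max-flow formulation. The toy graph is hypothetical.

```python
from collections import deque

def bfs_path(adj, s, t):
    # Shortest s-t path by BFS; returns a list of nodes or None.
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def greedy_disjoint_paths(edges, s, t):
    # Greedily extract edge-disjoint s-t paths, shortest first.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    paths = []
    while True:
        p = bfs_path(adj, s, t)
        if p is None:
            break
        paths.append(p)
        for u, v in zip(p, p[1:]):   # consume this path's edges
            adj[u].discard(v)
            adj[v].discard(u)
    return paths

# Two obvious disjoint routes from node 0 to node 3.
paths = greedy_disjoint_paths([(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3)
```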

  8. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plan curvature, profile curvature, topographic wetness index, stream power index, and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter at low cost and high efficiency.
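The simulated annealing component can be sketched generically: propose a neighbouring configuration, always accept improvements, and accept deteriorations with probability exp(-delta/T) under a geometric cooling schedule. The 1-D objective below is a hypothetical toy, not the paper's spatial sampling criterion.

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        iters=2000, seed=0):
    # Generic simulated annealing: accept worse moves with probability
    # exp(-delta / T); geometric cooling schedule T <- cooling * T.
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Hypothetical toy objective with its minimum at x = 3.
cost = lambda x: (x - 3.0) ** 2
neighbor = lambda x, rng: x + rng.gauss(0.0, 0.3)
x_best, c_best = simulated_annealing(cost, neighbor, x0=-5.0)
```

In the sampling application, `x` would be a candidate set of sample locations and `cost` a spatial criterion over the road-constrained region.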

  9. Selection of the Bank Investment Strategy on the Basis of the Hierarchy Analysis Method

    Directory of Open Access Journals (Sweden)

    Zhytar Maksym O.

    2014-02-01

    Full Text Available The goal of the article is to identify a methodical approach to selecting the investment strategy of banks on the basis of the factors of its formation, with the use of the hierarchy analysis method. The factors shaping a bank's investment strategy were identified in the course of the study. The article demonstrates that selection of a bank's investment strategy can be efficiently realised on the basis of the hierarchy analysis method, which is the most popular method for multi-criteria assessment in the search for an optimal solution to the set task. The article offers a hierarchical decision-making structure that could serve as a basis for selecting a bank's investment strategy with consideration of institutional flexibility. A prospect for further study in this direction is the development of an optimisation model of the bank's investment portfolio that takes into account not only institutional but also market flexibility of decision making.
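The hierarchy analysis method (the analytic hierarchy process) derives priority weights as the principal eigenvector of a pairwise-comparison matrix. A minimal sketch via power iteration; the 3-criteria comparison matrix is hypothetical and deliberately consistent, so the exact weights are known in advance.

```python
def ahp_weights(M, iters=100):
    # Priority weights as the principal eigenvector of the pairwise-
    # comparison matrix M, via power iteration, normalised to sum to 1.
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical comparisons: criterion 0 is 3x as important as 1 and 5x as
# important as 2; the matrix is consistent (M[i][j] = w_i / w_j).
M = [[1.0,     3.0, 5.0],
     [1 / 3.0, 1.0, 5 / 3.0],
     [0.2,     0.6, 1.0]]
w = ahp_weights(M)
```

For a consistent matrix the weights are exactly (15, 5, 3)/23; for real (inconsistent) judgments one would also check the consistency ratio.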

  10. Optimal stimulation as theoretical basis of hyperactivity.

    Science.gov (United States)

    Zentall, Sydney

    1975-07-01

    Current theory and practice in the clinical and educational management of hyperactive children recommend reduction of environmental stimulation, assuming hyperactive and distractible behaviors to be due to overstimulation. This paper reviews research suggesting that hyperactive behavior may result from a homeostatic mechanism that functions to increase stimulation for a child experiencing insufficient sensory stimulation. It is suggested that the effectiveness of drug and behavior therapies, as well as evidence from the field of sensory deprivation, further supports the theory of a homeostatic mechanism that attempts to optimize sensory input.

  11. Optimal projection of observations in a Bayesian setting

    KAUST Repository

    Giraldi, Loic; Le Maî tre, Olivier P.; Hoteit, Ibrahim; Knio, Omar

    2018-01-01

    , and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using

  12. The use of an integrated variable fuzzy sets in water resources management

    Science.gov (United States)

    Qiu, Qingtai; Liu, Jia; Li, Chuanzhe; Yu, Xinzhe; Wang, Yang

    2018-06-01

    Based on the evaluation of the present situation of water resources and the development of water conservancy projects and social economy, optimal allocation of regional water resources presents an increasing need in the water resources management. Meanwhile it is also the most effective way to promote the harmonic relationship between human and water. In view of the own limitations of the traditional evaluations of which always choose a single index model using in optimal allocation of regional water resources, on the basis of the theory of variable fuzzy sets (VFS) and system dynamics (SD), an integrated variable fuzzy sets model (IVFS) is proposed to address dynamically complex problems in regional water resources management in this paper. The model is applied to evaluate the level of the optimal allocation of regional water resources of Zoucheng in China. Results show that the level of allocation schemes of water resources ranging from 2.5 to 3.5, generally showing a trend of lower level. To achieve optimal regional management of water resources, this model conveys a certain degree of accessing water resources management, which prominently improve the authentic assessment of water resources management by using the eigenvector of level H.

  13. Determining an Estimate of an Equivalence Relation for Moderate and Large Sized Sets

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2017-01-01

    Full Text Available This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP-hard and can be solved with the use of exact algorithms for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing) which enable us to determine the optimal or a near-optimal solution at acceptable computational cost. These include the development of a statistical procedure producing comparisons with low error probabilities and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for any size of set. (original abstract)
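The discrete programming problem can be written down directly for a very small set: enumerate all partitions and pick the one contradicting the fewest comparisons. This brute force stands in for the paper's exact and heuristic algorithms and is feasible only for a handful of elements; the comparison data below, including one deliberate error, are hypothetical.

```python
def partitions(elems):
    # Enumerate all set partitions (Bell-number growth: small sets only).
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield [[first]] + part

def disagreements(part, comp):
    # comp[(i, j)] == 1 means "i and j compared as equivalent", 0 otherwise.
    block = {x: k for k, b in enumerate(part) for x in b}
    return sum(c != (1 if block[i] == block[j] else 0)
               for (i, j), c in comp.items())

def best_partition(n, comp):
    return min(partitions(list(range(n))), key=lambda p: disagreements(p, comp))

# Five elements, true classes {0,1,2} and {3,4}; comparison (0,3) is an error.
comp = {(0, 1): 1, (0, 2): 1, (1, 2): 1, (3, 4): 1, (0, 3): 1,
        (0, 4): 0, (1, 3): 0, (1, 4): 0, (2, 3): 0, (2, 4): 0}
part = best_partition(5, comp)
```

The single erroneous comparison cannot be satisfied without violating transitivity, so the minimizer recovers the true classes with exactly one disagreement.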

  14. Sea ice in the Baltic Sea - revisiting BASIS ice, a historical data set covering the period 1960/1961-1978/1979

    Science.gov (United States)

    Löptien, U.; Dietze, H.

    2014-06-01

    The Baltic Sea is a seasonally ice-covered, marginal sea, situated in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).

  15. Sea ice in the Baltic Sea - revisiting BASIS ice, a historical data set covering the period 1960/1961-1978/1979

    Science.gov (United States)

    Löptien, U.; Dietze, H.

    2014-12-01

    The Baltic Sea is a seasonally ice-covered, marginal sea in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website http://www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).

  16. Basis set effects on the energy and hardness profiles of the ...

    Indian Academy of Sciences (India)

    Unknown

    maximum hardness principle (MHP); spurious stationary points; hydrogen fluoride dimer.

  17. Optimization to the Culture Conditions for Phellinus Production with Regression Analysis and Gene-Set Based Genetic Algorithm

    Science.gov (United States)

    Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui

    2016-01-01

    Phellinus is a fungus known as one of the elemental components in drugs for preventing cancer. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, many single-factor experiments were carried out and a large volume of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis, obtaining a mathematical model for predicting Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized values of these parameters accord with biological experimental results, which indicates that our method has good predictive power for culture condition optimization. PMID:27610365
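The genetic-algorithm step can be sketched generically: evolve real-valued parameter vectors by tournament selection, uniform crossover, and bounded Gaussian mutation against a fitted yield model. The quadratic "fitted model" (optimum at pH 6.0 and 28 degrees C) and the encoding below are hypothetical simplifications, not the paper's gene-set based variant.

```python
import random

def genetic_optimize(fitness, bounds, pop=40, gens=60, mut=0.1, seed=2):
    # Minimal real-coded GA: tournament selection, uniform crossover,
    # Gaussian mutation clipped to the variable bounds.
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    P = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a = max(rng.sample(P, 3), key=fitness)   # tournament of 3
            b = max(rng.sample(P, 3), key=fitness)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(h, max(l, g + rng.gauss(0.0, mut * (h - l))))
                     for g, l, h in zip(child, lo, hi)]
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)

# Hypothetical fitted yield model over (pH, temperature).
fitness = lambda v: -((v[0] - 6.0) ** 2 + 0.1 * (v[1] - 28.0) ** 2)
best = genetic_optimize(fitness, [(4.0, 8.0), (20.0, 35.0)])
```

In the paper's setting, `fitness` would be the regression model fitted to the single-factor experiments, with one gene per culture parameter.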

  18. Structure Optimal Design of Electromagnetic Levitation Load Reduction Device for Hydroturbine Generator Set

    Directory of Open Access Journals (Sweden)

    Qingyan Wang

    2015-01-01

    Full Text Available The thrust bearing is the part with the highest failure rate in a hydroturbine generator set, primarily due to the heavy axial load. Such a heavy load often causes oil-film breakdown, bearing friction, and even burnout, so it is necessary to study the load and methods of reducing it. The dynamic thrust is an important factor influencing the axial load and the reduction design of the electromagnetic device. Therefore, in this paper, combined with the structural features of the vertical turbine, the hydraulic thrust is analyzed accurately. Then, taking the turbine model HL-220-LT-550 as an instance, the electromagnetic levitation load reduction device is designed and its mathematical model is built, with the purpose of minimizing excitation loss and total mass under the constraints of installation space, connection layout, and heat dissipation. Particle swarm optimization (PSO) is employed to search for the optimum solution; finally, the result is verified by the finite element method (FEM), which demonstrates that the optimized structure is more effective.
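Particle swarm optimization as used above can be sketched in its standard form: each particle is pulled toward its own best position and the swarm's global best, with inertia w and acceleration coefficients c1, c2. The two-variable quadratic objective below is a hypothetical stand-in for the excitation-loss/mass objective and its constraints.

```python
import random

def pso(cost, bounds, n=30, iters=100, w=0.7, c1=1.4, c2=1.4, seed=3):
    # Minimal PSO: inertia w, cognitive term c1, social term c2.
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(l, h) for l, h in bounds] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=cost)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(pbest[i]):
                pbest[i] = X[i][:]
                if cost(X[i]) < cost(gbest):
                    gbest = X[i][:]
    return gbest

# Hypothetical smooth objective with its minimum at (2, -1).
cost = lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
best = pso(cost, [(-5.0, 5.0), (-5.0, 5.0)])
```

The design-constraint handling (installation space, heat dissipation) would in practice enter either as bounds or as penalty terms added to `cost`.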

  19. New STO(II)-3Gmag family basis sets for the calculations of the molecules magnetic properties

    Directory of Open Access Journals (Sweden)

    Karina Kapusta

    2015-10-01

    Full Text Available An efficient approach to constructing physically justified STO(II)-3Gmag family basis sets for the calculation of molecular magnetic properties is proposed. The construction procedure takes into account the second order of perturbation theory for the magnetic field case. The analytical form of the correction functions has been obtained using a closed representation of the Green functions via the solution of the nonhomogeneous Schrödinger equation for the model problem of a "one-electron atom in an external uniform magnetic field". Their performance has been evaluated in DFT-level calculations carried out with a number of functionals. Test calculations of magnetic susceptibility and 1H nuclear magnetic shielding tensors demonstrated good agreement between the calculated values and the experimental data.

  20. Topology optimization problems with design-dependent sets of constraints

    DEFF Research Database (Denmark)

    Schou, Marie-Louise Højlund

    Topology optimization is a design tool which is used in numerous fields. It can be used whenever the design is driven by weight and strength considerations. The basic concept of topology optimization is the interpretation of partial differential equation coefficients as effective material...... properties and designing through changing these coefficients. For example, consider a continuous structure. Then the basic concept is to represent this structure by small pieces of material that are coinciding with the elements of a finite element model of the structure. This thesis treats stress constrained...... structural topology optimization problems. For such problems a stress constraint for an element should only be present in the optimization problem when the structural design variable corresponding to this element has a value greater than zero. We model the stress constrained topology optimization problem...

  1. The same number of optimized parameters scheme for determining intermolecular interaction energies

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Ettenhuber, Patrick; Eriksen, Janus Juul

    2015-01-01

    We propose the Same Number Of Optimized Parameters (SNOOP) scheme as an alternative to the counterpoise method for treating basis set superposition errors in calculations of intermolecular interaction energies. The key point of the SNOOP scheme is to enforce that the number of optimized wave...... as numerically. Numerical results for second-order Møller-Plesset perturbation theory (MP2) and coupled-cluster with single, double, and approximate triple excitations (CCSD(T)) show that the SNOOP scheme in general outperforms the uncorrected and counterpoise approaches. Furthermore, we show that SNOOP...

  2. Simulating Metabolite Basis Sets for in vivo MRS Quantification; Incorporating details of the PRESS Pulse Sequence by means of the GAMMA C++ library

    NARCIS (Netherlands)

    Van der Veen, J.W.; Van Ormondt, D.; De Beer, R.

    2012-01-01

    In this work we report on generating/using simulated metabolite basis sets for the quantification of in vivo MRS signals, assuming that they have been acquired by using the PRESS pulse sequence. To that end we have employed the classes and functions of the GAMMA C++ library. By using several

  3. Hybrid RHF/MP2 geometry optimizations with the effective fragment molecular orbital method

    DEFF Research Database (Denmark)

    Christensen, A. S.; Svendsen, Casper Steinmann; Fedorov, D. G.

    2014-01-01

    while the rest of the system is treated at the RHF level. MP2 geometry optimization is found to lower the barrier by up to 3.5 kcal/mol compared to RHF optimizations and ONIOM energy refinement and leads to a smoother convergence with respect to the basis set for the reaction profile. For double zeta...

  4. On the Convergence of the ccJ-pVXZ and pcJ-n Basis Sets in CCSD Calculations of Nuclear Spin-Spin Coupling Constants

    DEFF Research Database (Denmark)

    Faber, Rasmus; Sauer, Stephan P. A.

    2018-01-01

    The basis set convergence of nuclear spin-spin coupling constants (SSCC) calculated at the coupled cluster singles and doubles (CCSD) level has been investigated for ten difficult molecules. Eight of the molecules contain fluorine atoms and nine contain double or triple bonds. Results obtained...

  5. Damage tolerance optimization of composite stringer run-out under tensile load

    DEFF Research Database (Denmark)

    Badalló, Pere; Trias, Daniel; Lindgaard, Esben

    2015-01-01

    . The influence of some geometric variables of the run-out in the interface of the set stringer-panel is crucial to avoid the onset and growth of delamination cracks. In this study, a damage tolerant design of a stringer run-out is achieved by a process of design optimization and surrogate modeling techniques....... A parametric finite element model created with python was used to generate a number of different geometrical designs of the stringer run-out. The relevant information of these models was adjusted using Radial Basis Functions (RBF). Finally, the optimization problem was solved using Quasi-Newton method...

  6. Stochastic Optimization of Wind Turbine Power Factor Using Stochastic Model of Wind Power

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic optimization algorithm that aims to minimize the expectation of the system power losses by controlling wind turbine (WT) power factors. This objective of the optimization is subject to the probability constraints of bus voltage and line current requirements....... The optimization algorithm utilizes the stochastic models of wind power generation (WPG) and load demand to take into account their stochastic variation. The stochastic model of WPG is developed on the basis of a limited autoregressive integrated moving average (LARIMA) model by introducing a crosscorrelation...... structure to the LARIMA model. The proposed stochastic optimization is carried out on a 69-bus distribution system. Simulation results confirm that, under various combinations of WPG and load demand, the system power losses are considerably reduced with the optimal setting of WT power factor as compared...

  7. NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture

    Science.gov (United States)

    Johnson, Wayne

    2010-01-01

    The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.

  8. Adaptive Conflict-Free Optimization of Rule Sets for Network Security Packet Filtering Devices

    Directory of Open Access Journals (Sweden)

    Andrea Baiocchi

    2015-01-01

    Full Text Available Packet filtering and processing rules management in firewalls and security gateways has become commonplace in increasingly complex networks. On one side there is a need to maintain the logic of high level policies, which requires administrators to implement and update a large amount of filtering rules while keeping them conflict-free, that is, avoiding security inconsistencies. On the other side, traffic adaptive optimization of large rule lists is useful for general purpose computers used as filtering devices, without specific designed hardware, to face growing link speeds and to harden filtering devices against DoS and DDoS attacks. Our work joins the two issues in an innovative way and defines a traffic adaptive algorithm to find conflict-free optimized rule sets, by relying on information gathered with traffic logs. The proposed approach suits current technology architectures and exploits available features, like traffic log databases, to minimize the impact of ACO development on the packet filtering devices. We demonstrate the benefit entailed by the proposed algorithm through measurements on a test bed made up of real-life, commercial packet filtering devices.

  9. Evolution in time of an N-atom system. I. A physical basis set for the projection of the master equation

    International Nuclear Information System (INIS)

    Freedhoff, Helen

    2004-01-01

    We study an aggregate of N identical two-level atoms (TLA's) coupled by the retarded interatomic interaction, using the Lehmberg-Agarwal master equation. First, we calculate the entangled eigenstates of the system; then, we use these eigenstates as a basis set for the projection of the master equation. We demonstrate that in this basis the equations of motion for the level populations, as well as the expressions for the emission and absorption spectra, assume a simple mathematical structure and allow for a transparent physical interpretation. To illustrate the use of the general theory in emission processes, we study an isosceles triangle of atoms, and present in the long wavelength limit the (cascade) emission spectrum for a hexagon of atoms fully excited at t=0. To illustrate its use for absorption processes, we tabulate (in the same limit) the biexciton absorption frequencies, linewidths, and relative intensities for polygons consisting of N=2,...,9 TLA's

  10. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Kunpeng; Chai, Yi [College of Automation, Chongqing University, Chongqing 400044 (China); Su, Chunxiao [Research Center of Laser Fusion, CAEP, P. O. Box 919-983, Mianyang 621900 (China)

    2013-08-15

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitudes and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.

  11. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit

    International Nuclear Information System (INIS)

    Wang, Kunpeng; Chai, Yi; Su, Chunxiao

    2013-01-01

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitudes and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.
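
    The reweighted ℓ1 ingredient of the algorithm above can be sketched with plain NumPy. This is not the authors' continuous basis pursuit code: the dictionary-update step is omitted, and the random dictionary, sparsity level, and penalty values below are invented to demonstrate only the iterative-reweighting idea.

```python
import numpy as np

def ista(A, y, w, steps=500):
    """Proximal gradient (ISTA) for min 0.5*||A x - y||^2 + sum_i w_i*|x_i|."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)   # soft threshold
    return x

def reweighted_l1(A, y, lam=0.05, eps=0.01, rounds=4):
    """Iteratively reweighted l1: small coefficients get penalized harder."""
    w = np.full(A.shape[1], lam)
    x = np.zeros(A.shape[1])
    for _ in range(rounds):
        x = ista(A, y, w)
        w = lam / (np.abs(x) + eps)          # reweight toward sparser solutions
    return x

# Synthetic demo: recover a 3-sparse signal from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]
x_hat = reweighted_l1(A, A @ x_true)
```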

  12. Research on optimizing pass schedule of tandem cold mill

    International Nuclear Information System (INIS)

    Lu, C.; Wang, J.S.; Zhao, Q.L.; Liu, X.H.; Wang, G.D.

    2000-01-01

    In this paper, research on the pass schedule of a tandem cold mill (TCM) is carried out. From the load balance (reduction, rolling force, motor power), a set of non-linear equations with the inter-stand thicknesses as variables is constructed, and the pass schedule is optimized by solving this equation set. Traditionally, the Newton-Raphson method is used for this purpose. In this paper a new, simpler method is proposed: on the basis of the monotone relation between thickness and load, the inter-stand thicknesses are adjusted dynamically, and the solution of the non-linear equation set converges by iterative calculation. This method avoids the derivative calculations required by the traditional method, so it is simple and fast, making it suitable for on-line control. (author)
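
    For a single stand, the derivative-free idea of the paper (exploit the monotone thickness-load relation instead of Newton-Raphson derivatives) reduces to a bisection-style search. A minimal sketch, in which the power-law force model is purely illustrative:

```python
def solve_monotone(f, target, lo, hi, tol=1e-8):
    """Solve f(h) == target for a monotonically increasing f by bisection.

    This mirrors the abstract's idea: nudge the variable up or down
    according to whether the computed load is below or above its target,
    without ever forming derivatives.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical single-stand load model: rolling force grows monotonically
# with the reduction r = h_in - h_out (power law chosen for illustration).
load = lambda r: 120.0 * r ** 0.8
r_star = solve_monotone(load, target=60.0, lo=0.0, hi=1.0)
```

    The full pass-schedule problem couples several stands, but each iterative sweep applies the same monotone up/down adjustment per stand.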

  13. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    Science.gov (United States)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  14. A quantum molecular similarity analysis of changes in molecular electron density caused by basis set flotation and electric field application

    Science.gov (United States)

    Simon, Sílvia; Duran, Miquel

    1997-08-01

    Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and insight is given into the changes in bond critical point properties, self-similarity values, and density differences.

  15. Application of Bayesian statistical decision theory to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-07-01

    Statistical decision theory offers an alternative for optimizing preventive maintenance periodicity. This theory concerns situations in which a decision maker must choose among a set of reasonable decisions, where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts given the observed feedback experience, the states of nature are the associated failure probabilities, and the losses are the expectations of the induced cost of maintenance and of the consequences of failures. Since failure probabilities concern rare events at the ultimate state of RCM analysis (failure of a sub-component), and since the expected behaviour of the equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for the failure probabilities is modeled from expert knowledge and combined with the sparse stochastic information provided by feedback experience, giving a posterior distribution of failure probabilities. The optimized decision is the one that minimizes the expected loss over the posterior distribution. This methodology has been applied to the inspection and maintenance optimization of the cylinders of diesel generator engines in 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by two redundant diesel generators which are tested every two weeks for about one hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of the diesel cylinders has been performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures can be critical, so Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when the maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab
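
    The core computation (a conjugate Bayesian update of the failure probability, followed by minimization of expected loss over candidate periodicities) can be sketched as follows. All numbers here (prior, feedback counts, costs, and the scaling of failure probability with periodicity) are invented for illustration and are not taken from the paper.

```python
def posterior_mean(a, b, failures, trials):
    """Beta(a, b) prior + Binomial feedback -> posterior mean failure probability."""
    return (a + failures) / (a + b + trials)

def expected_loss(p_fail, maint_cost, failure_cost):
    """Expected loss of a periodicity: its maintenance cost plus the risk term."""
    return maint_cost + p_fail * failure_cost

# Expert prior with mean 0.02 (Beta(2, 98)); feedback: 1 failure in 50 tests.
p = posterior_mean(2, 98, failures=1, trials=50)

# Candidate periodicities: longer intervals cost less in maintenance but
# carry a higher failure probability (scaling factors are illustrative).
candidates = {"1 year": (10.0, p), "2 years": (5.0, 2 * p), "5 years": (2.0, 5 * p)}
losses = {name: expected_loss(pf, mc, failure_cost=500.0)
          for name, (mc, pf) in candidates.items()}
best = min(losses, key=losses.get)   # the decision minimizing expected loss
```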

  16. A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions

    KAUST Repository

    Fowkes, Jaroslav M.

    2012-06-21

    We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike other branch and bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. For a numerical example, we apply the proposed branch and bound algorithm to radial basis function approximations. © 2012 Springer Science+Business Media, LLC.
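
    The branch/bound/prune loop can be illustrated in one dimension. Note that the lower bound used here is a crude Lipschitz-constant bound, not the cubic (Lipschitz-Hessian) underestimator of the paper; the test function and the constant L are illustrative.

```python
import heapq
import math

def branch_and_bound(f, a, b, L, tol=1e-6):
    """Global minimum of f on [a, b], assuming f has Lipschitz constant L."""
    best_x, best_f = (a + b) / 2, f((a + b) / 2)
    # Heap of (lower bound, interval); lower bound is f(mid) - L * width / 2.
    heap = [(best_f - L * (b - a) / 2, a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:
            continue                            # prune: cannot improve incumbent
        mid = (lo + hi) / 2
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm            # bound: update incumbent
        for c, d in ((lo, mid), (mid, hi)):     # branch: split the interval
            m = (c + d) / 2
            heapq.heappush(heap, (f(m) - L * (d - c) / 2, c, d))
    return best_x, best_f

# Nonconvex test function on [0, 6]; its global minimum lies near x = 1.61.
f = lambda x: math.sin(3 * x) + 0.5 * (x - 2) ** 2
x_star, f_star = branch_and_bound(f, 0.0, 6.0, L=10.0)
```

    The tighter the underestimator, the earlier intervals are pruned, which is precisely the appeal of the cubic-regularisation bounds in the paper.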

  17. Diabatic potential-optimized discrete variable representation: application to photodissociation process of the CO molecule

    International Nuclear Information System (INIS)

    Bitencourt, Ana Carla P; Prudente, Frederico V; Vianna, Jose David M

    2007-01-01

    We propose a new numerically optimized discrete variable representation using eigenstates of diabatic Hamiltonians. This procedure provides an efficient method to solve non-adiabatic coupling problems since the generated basis sets take into account information on the diabatic potentials. The method is applied to the B¹Σ⁺-D′¹Σ⁺ Rydberg-valence predissociation interaction in the CO molecule. Here we give an account of the discrete variable representation and present the procedure for the calculation of its optimized version, which we apply to obtain the total photodissociation cross sections of the CO molecule

  18. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany).

    2017-10-01

    Optimization of the AIR-algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity.
A simple set of parameters for the algorithm is discussed that provides

  19. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    International Nuclear Information System (INIS)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc

    2017-01-01

    Optimization of the AIR-algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity.
A simple set of parameters for the algorithm is discussed that provides

  20. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set

    Directory of Open Access Journals (Sweden)

    Jinshui Zhang

    2017-04-01

    Full Text Available This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructs a tightened hypersphere because of the compact constraint imposed by the outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, tradeoff coefficient (C) and kernel width (s), for mapping homogeneous specific land cover.

  1. Optimal timing for intravascular administration set replacement.

    Science.gov (United States)

    Ullman, Amanda J; Cooke, Marie L; Gillies, Donna; Marsh, Nicole M; Daud, Azlina; McGrail, Matthew R; O'Riordan, Elizabeth; Rickard, Claire M

    2013-09-15

    The tubing (administration set) attached to both venous and arterial catheters may contribute to bacteraemia and other infections. The rate of infection may be increased or decreased by routine replacement of administration sets. This review was originally published in 2005 and was updated in 2012. The objective of this review was to identify any relationship between the frequency with which administration sets are replaced and rates of microbial colonization, infection and death. We searched The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2012, Issue 6), MEDLINE (1950 to June 2012), CINAHL (1982 to June 2012), EMBASE (1980 to June 2012), reference lists of identified trials and bibliographies of published reviews. The original search was performed in February 2004. We also contacted researchers in the field. We applied no language restriction. We included all randomized or controlled clinical trials on the frequency of venous or arterial catheter administration set replacement in hospitalized participants. Two review authors assessed all potentially relevant studies. We resolved disagreements between the two review authors by discussion with a third review author. We collected data for seven outcomes: catheter-related infection; infusate-related infection; infusate microbial colonization; catheter microbial colonization; all-cause bloodstream infection; mortality; and cost. We pooled results from studies that compared different frequencies of administration set replacement; for instance, we pooled studies that compared replacement ≥ every 96 hours versus every 72 hours with studies that compared replacement ≥ every 48 hours versus every 24 hours. We identified 26 studies for this updated review, 10 of which we excluded; six did not fulfil the inclusion criteria and four did not report usable data. We extracted data from the remaining 18 references (16 studies) with 5001 participants: study designs included neonate and adult

  2. Recommendation Sets and Choice Queries

    DEFF Research Database (Denmark)

    Viappiani, Paolo Renato; Boutilier, Craig

    2011-01-01

    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and offer recommendations based on the system's belief about the user's utility function. We analyze the connection between...... the problem of generating optimal recommendation sets and the problem of generating optimal choice queries, considering both Bayesian and regret-based elicitation. Our results show that, somewhat surprisingly, under very general circumstances, the optimal recommendation set coincides with the optimal query....

  3. Optimization and analysis of large chemical kinetic mechanisms using the solution mapping method - Combustion of methane

    Science.gov (United States)

    Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.

    1992-01-01

    A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
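
    The essence of solution mapping (parameterizing model responses by simple algebraic expressions fitted to a factorial design of computer experiments) can be sketched as follows. The "expensive" model is a stand-in; in the paper it would be the full methane kinetic simulation, and the fitted surface would then replace it during the joint optimization.

```python
import numpy as np

def expensive_model(x1, x2):
    """Stand-in for the costly kinetic simulation being mapped."""
    return 3.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2

# Two-level full factorial design in coded units, plus a center point.
design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], dtype=float)
y = np.array([expensive_model(a, b) for a, b in design])

# Fit the algebraic surrogate  response = b0 + b1*x1 + b2*x2 + b12*x1*x2
# by least squares; the surrogate then stands in for the model.
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    Because the stand-in model is itself of this algebraic form, the fitted coefficients recover it exactly; in practice the surrogate only approximates the responses over the design region.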

  4. Application of Multi-Objective Optimization on the Basis of Ratio Analysis (MOORA Method for Bank Branch Location Selection

    Directory of Open Access Journals (Sweden)

    Ali Gorener

    2013-04-01

    Location selection is an important issue for commercial success in a competitive banking environment. There is a strategic fit between the location selection decision and the overall performance of a new branch. Providing physical service in the requested location, as well as alternative distribution channels to meet profitable client needs, is the current challenge in achieving competitive advantage over rivals in the financial system. In this paper, an integrated model has been developed to support the decision of location selection for a new bank branch. The Analytic Hierarchy Process (AHP) technique has been used to prioritize the evaluation criteria, and the multi-objective optimization on the basis of ratio analysis (MOORA) method has been applied to rank the location alternatives.
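    The MOORA ranking itself is a short computation: vector-normalize the decision matrix, apply the (e.g. AHP-derived) weights, and subtract the weighted cost criteria from the weighted benefit criteria. The sketch below uses made-up branch-location data; the criteria, values, and weights are illustrative assumptions, not the paper's.

```python
import numpy as np

def moora_rank(matrix, weights, benefit):
    """Rank alternatives by Multi-Objective Optimization on the basis of
    Ratio Analysis (MOORA).

    matrix  : alternatives x criteria performance values
    weights : criteria weights (e.g. from AHP), summing to 1
    benefit : True for benefit criteria, False for cost criteria
    """
    X = np.asarray(matrix, dtype=float)
    # Vector normalization: divide each column by its Euclidean norm.
    N = X / np.linalg.norm(X, axis=0)
    W = N * np.asarray(weights)
    sign = np.where(benefit, 1.0, -1.0)
    # Assessment value: weighted benefits minus weighted costs.
    scores = (W * sign).sum(axis=1)
    return scores, np.argsort(-scores)  # best alternative first

# Hypothetical example: criteria are [customer density (benefit),
# rent cost (cost), nearby competitors (cost)] for three candidate sites.
scores, ranking = moora_rank(
    matrix=[[120, 40, 5],
            [ 90, 25, 2],
            [150, 60, 8]],
    weights=[0.5, 0.3, 0.2],
    benefit=[True, False, False],
)
```

    With these numbers the second site ranks first: its moderate customer density is outweighed by low rent and little competition.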

  5. Thermodynamic optimization of power plants

    NARCIS (Netherlands)

    Haseli, Y.

    2011-01-01

    Thermodynamic Optimization of Power Plants aims to establish and illustrate comparative multi-criteria optimization of various models and configurations of power plants. It intends to show what optimization objectives one may define on the basis of the thermodynamic laws, and how they can be applied

  6. Unsupervised learning of a steerable basis for invariant image representations

    Science.gov (United States)

    Bethge, Matthias; Gerwinn, Sebastian; Macke, Jakob H.

    2007-02-01

    There are two aspects to unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique, so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect, and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of 'steerability' and Lie groups, which have been developed in the context of filter design. In particular, we show how an anti-symmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibits a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. 'complex cells') from sequences of natural images.

  7. Dispositional Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.

    2014-01-01

    Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. PMID:24630971

  8. Total Positivity of the Cubic Trigonometric Bézier Basis

    Directory of Open Access Journals (Sweden)

    Xuli Han

    2014-01-01

    Within the general framework of Quasi Extended Chebyshev spaces, we prove that the cubic trigonometric Bézier basis with two shape parameters λ and μ given in Han et al. (2009) forms an optimal normalized totally positive basis for λ,μ∈(-2,1]. Moreover, we show that for λ=-2 or μ=-2 the basis is not suited for curve design from the blossom point of view. In order to compute the corresponding cubic trigonometric Bézier curves stably and efficiently, we also develop a new corner cutting algorithm.

  9. Economic communication model set

    Science.gov (United States)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

    This paper details findings from research work targeted at the investigation of economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model is based on the same general concept but has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: the theoretic sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in dynamics and the system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.
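    The theoretic data sets mentioned above rest on the static Leontief equilibrium x = Ax + d, which fixes the gross outputs consistent with a given final demand. A minimal sketch, with a hypothetical three-sector coefficient matrix and demand vector:

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix: A[i, j] is the
# input from sector i required per unit of sector j's output.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])

# Final (external) demand for each sector's output.
d = np.array([100.0, 50.0, 80.0])

# Static Leontief equilibrium: gross outputs x solving x = A x + d,
# i.e. x = (I - A)^{-1} d.
x = np.linalg.solve(np.eye(3) - A, d)

# These equilibrium flows can then seed the agents' exchange volumes.
```

    Because the column sums of A are below one, (I - A) is invertible and the resulting outputs are positive, so every sector's production covers both intermediate use and final demand.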

  10. Thermodynamic basis for effective energy utilization

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, J. T.

    1977-10-15

    A major difficulty in a quantitative assessment of effective energy utilization is that energy is always conserved (the First Law of Thermodynamics). However, the Second Law of Thermodynamics shows that, although energy cannot be destroyed, it can be degraded to a state in which it is of no further use for performing tasks. Thus, in considering the present world energy crisis, we are not really concerned with the conservation of energy but with the conservation of its ability to perform useful tasks. A measure of this ability is thermodynamic availability or, a less familiar term, exergy. In a real sense, we are concerned with an entropy crisis, rather than an energy crisis. Analysis of energy processes on an exergy basis provides significantly different insights into the processes than those obtained from a conventional energy analysis. For example, process steam generation in an industrial boiler may appear quite efficient on the basis of a conventional analysis, but is shown to have very low effective use of energy when analyzed on an exergy basis. Applications of exergy analysis to other systems, such as large fossil and nuclear power stations, are discussed, and the benefits of extraction combined-purpose plants are demonstrated. Other examples of the application of the exergy concept in the industrial and residential energy sectors are also given. The concept is readily adaptable to economic optimization. Examples are given of economic optimization on an availability basis of an industrial heat exchanger and of a combined-purpose nuclear power and heavy-water production plant. Finally, the utility of the concept of exergy in assessing the energy requirements of an industrial society is discussed.
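    The boiler example above can be made concrete with the Carnot factor for a heat stream, Ex = Q(1 - T0/T), where T0 is the dead-state (environment) temperature. The figures below are illustrative assumptions, not taken from the paper:

```python
# First- vs Second-Law view of industrial steam generation (illustrative
# numbers). T0 is the dead-state temperature.
T0 = 298.15        # K, environment
Q_fuel = 1000.0    # kW of heat released by combustion
T_flame = 1800.0   # K, temperature at which that heat is available
Q_steam = 850.0    # kW of heat delivered to the process steam
T_steam = 450.0    # K, steam temperature

def heat_exergy(Q, T):
    # Exergy of a heat stream at temperature T: Ex = Q * (1 - T0 / T).
    return Q * (1.0 - T0 / T)

# Conventional (First Law) efficiency: energy delivered / energy supplied.
eta_energy = Q_steam / Q_fuel

# Exergy (Second Law) efficiency: exergy delivered / exergy supplied.
eta_exergy = heat_exergy(Q_steam, T_steam) / heat_exergy(Q_fuel, T_flame)
```

    Here `eta_energy` is 0.85, while `eta_exergy` is only about 0.34: the boiler looks efficient on a First-Law basis but destroys most of the fuel's ability to perform work, which is exactly the point made in the abstract.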

  11. Scale-up and optimization of biohydrogen production reactor from laboratory-scale to industrial-scale on the basis of computational fluid dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)

    2010-10-15

    The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors is still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of hydrodynamics evaluation, the structure of industrial-scale CSTR was optimized and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)

  12. The Fundamental Solution and Its Role in the Optimal Control of Infinite Dimensional Neutral Systems

    International Nuclear Information System (INIS)

    Liu Kai

    2009-01-01

    In this work, we shall consider standard optimal control problems for a class of neutral functional differential equations in Banach spaces. As the basis of a systematic theory of neutral models, the fundamental solution is constructed and a variation of constants formula for mild solutions is established. We introduce a class of neutral resolvents and show that the Laplace transform of the fundamental solution is its neutral resolvent operator. Necessary conditions in terms of the solutions of neutral adjoint systems are established to deal with the fixed-time integral convex cost problem of optimality. Based on the optimality conditions, the maximum principle for a time-varying control domain is presented. Finally, the time optimal control problem to a target set is investigated.

  13. Estimation of an optimal chemotherapy utilisation rate for cancer: setting an evidence-based benchmark for quality cancer care.

    Science.gov (United States)

    Jacob, S A; Ng, W L; Do, V

    2015-02-01

    There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications. Copyright © 2014 The Royal College of Radiologists.

  14. Antiferromagnetic vs. non-magnetic ε phase of solid oxygen. Periodic density functional theory studies using a localized atomic basis set and the role of exact exchange.

    Science.gov (United States)

    Ramírez-Solís, A; Zicovich-Wilson, C M; Hernández-Lamoneda, R; Ochoa-Calle, A J

    2017-01-25

    The question of the non-magnetic (NM) vs. antiferromagnetic (AF) nature of the ε phase of solid oxygen is a matter of great interest and continuing debate. In particular, it has been proposed that the ε phase is actually composed of two phases, a low-pressure AF ε1 phase and a higher pressure NM ε0 phase [Crespo et al., Proc. Natl. Acad. Sci. U. S. A., 2014, 111, 10427]. We address this problem through periodic spin-restricted and spin-polarized Kohn-Sham density functional theory calculations at pressures from 10 to 50 GPa using calibrated GGA and hybrid exchange-correlation functionals with Gaussian atomic basis sets. The two possible configurations for the antiferromagnetic (AF1 and AF2) coupling of the 0 ≤ S ≤ 1 O2 molecules in the (O2)4 unit cell were studied. Full enthalpy-driven geometry optimizations of the (O2)4 unit cells were done to study the pressure evolution of the enthalpy difference between the non-magnetic and both antiferromagnetic structures. We also address the evolution of structural parameters and the spin-per-molecule vs. pressure. We find that the spin-less solution becomes more stable than both AF structures above 50 GPa and, crucially, the spin-less solution yields lattice parameters in much better agreement with experimental data at all pressures than the AF structures. The optimized AF2 broken-symmetry structures lead to large errors of the a and b lattice parameters when compared with experiments. The results for the NM model are in much better agreement with the experimental data than those found for both AF models and are consistent with a completely non-magnetic (O2)4 unit cell for the low-pressure regime of the ε phase.

  15. Comparison of optimization of loading patterns on the basis of SA and PMA algorithms

    International Nuclear Information System (INIS)

    Beliczai, Botond

    2007-01-01

    Optimization of loading patterns is a very important task from an economic point of view in a nuclear power plant. The optimization algorithms used for this purpose fall into two basic categories: deterministic and stochastic. In the Paks nuclear power plant a deterministic optimization procedure is used to optimize the loading pattern at BOC, so that the core has a maximal reactivity reserve. The group of stochastic optimization procedures mainly comprises simulated annealing (SA) procedures and genetic algorithms (GA). There are newer procedures as well, which try to combine the advantages of SAs and GAs. One of them is called the population mutation annealing algorithm (PMA). In the Paks NPP we would like to introduce fuel assemblies including burnable poison (Gd) in the near future. In order to be able to find the optimal loading pattern (or near-optimal loading patterns) in that case, we have to optimize our core not only for objective functions defined at BOC, but at EOC as well. For this purpose I used stochastic algorithms (SA and PMA) to investigate loading pattern optimization results for different objective functions at BOC. (author)
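    A generic simulated annealing loop of the kind referred to above is short: propose a neighboring configuration, always accept improvements, and accept worse states with Boltzmann probability under a decreasing temperature. The sketch below uses a toy permutation objective as a stand-in for a loading-pattern cost function; the acceptance rule and geometric cooling schedule are the standard textbook ones, not Paks-specific.

```python
import math
import random

def simulated_annealing(energy, state, neighbor, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic SA loop: accepts worse states with Boltzmann probability."""
    rng = random.Random(seed)
    best = current = state
    e_cur = e_best = energy(current)
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        e_cand = energy(cand)
        if e_cand < e_cur or rng.random() < math.exp((e_cur - e_cand) / t):
            current, e_cur = cand, e_cand
            if e_cur < e_best:
                best, e_best = current, e_cur
        t *= cooling                     # geometric cooling schedule
    return best, e_best

# Toy stand-in objective: a "loading pattern" is a permutation and the
# cost penalizes deviation from the identity arrangement.
def cost(perm):
    return sum(abs(p - i) for i, p in enumerate(perm))

def swap_two(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    perm = list(perm)
    perm[i], perm[j] = perm[j], perm[i]
    return tuple(perm)

start = tuple(random.Random(42).sample(range(8), 8))
best, e_best = simulated_annealing(cost, start, swap_two)
```

    A PMA-style method would maintain a population of such states and apply mutation-annealing across it rather than evolving a single trajectory.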

  16. The association between resting functional connectivity and dispositional optimism.

    Science.gov (United States)

    Ran, Qian; Yang, Junyi; Yang, Wenjing; Wei, Dongtao; Qiu, Jiang; Zhang, Dong

    2017-01-01

    Dispositional optimism is an individual characteristic that plays an important role in human experience. Optimists are people who tend to hold positive expectations for their future. Previous studies have focused on the neural basis of optimism, such as task-related neural activity and brain structure volume. However, the functional connectivity between brain regions of dispositional optimists is poorly understood. A previous study suggested that the ventromedial prefrontal cortex (vmPFC) is associated with individual differences in dispositional optimism, but it is unclear whether there are other brain regions that combine with the vmPFC to contribute to dispositional optimism. Thus, the present study used the resting-state functional connectivity (RSFC) approach and set the vmPFC as the seed region to examine whether differences in functional brain connectivity between the vmPFC and other brain regions would be associated with individual differences in dispositional optimism. The results showed that dispositional optimism was significantly positively correlated with the strength of the RSFC between the vmPFC and the middle temporal gyrus (mTG) and negatively correlated with the RSFC between the vmPFC and the inferior frontal gyrus (IFG). These findings suggest that the mTG and IFG, which are associated with emotion processing and emotion regulation, also play an important role in dispositional optimism.

  18. A software complex intended for constructing applied models and meta-models on the basis of mathematical programming principles

    Directory of Open Access Journals (Sweden)

    Михаил Юрьевич Чернышов

    2013-12-01

    A software complex (SC) elaborated by the authors on the basis of the language LMPL, representing a software tool intended for the synthesis of applied software models and meta-models constructed on the basis of mathematical programming (MP) principles, is described. LMPL provides for an explicit form of declarative representation of MP-models, presumes automatic construction and transformation of models and the capability of adding external software packages. The following software versions of the SC have been implemented: (1) a SC intended for representing the process of choosing an optimal hydroelectric power plant model (on the principles of meta-modeling) and (2) a SC intended for representing the logic-sense relations between the models of a set of discourse formations in the discourse meta-model.

  19. Pole Shape Optimization of Permanent Magnet Synchronous Motors Using the Reduced Basis Technique

    Directory of Open Access Journals (Sweden)

    A. Jabbari

    2010-03-01

    In the present work, an integrated method of pole shape design optimization for the reduction of torque pulsation components in permanent magnet synchronous motors is developed. A progressive design process is presented to find feasible optimal shapes. This method is applied to the pole shape optimization of two prototype permanent magnet synchronous motors, i.e., 4-poles/6-slots and 4-poles/12-slots.

  20. Search engine optimization

    OpenAIRE

    Marolt, Klemen

    2013-01-01

    Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolves dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ...

  1. On the Optimal Policy for the Single-product Inventory Problem with Set-up Cost and a Restricted Production Capacity

    NARCIS (Netherlands)

    Foreest, N. D. van; Wijngaard, J.

    2010-01-01

    The single-product, stationary inventory problem with set-up cost is one of the classical problems in stochastic operations research. Theories have been developed to cope with finite production capacity in periodic review systems, and it has been proved that optimal policies for these cases are not

  2. Optimally Stopped Optimization

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one to two orders of magnitude faster than the HFS solver.
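    In a simplified form, the expected-cost figure of merit can be computed from a solver's per-call success probability: with independent calls, the number of calls to first success is geometric with mean 1/p. The sketch below is a stripped-down illustration of that idea under these assumptions, not the authors' full optimal-stopping formulation.

```python
def expected_total_cost(p_success, cost_per_call, run_cost=1.0):
    """Simplified figure of merit: expected cost to first success when each
    independent call succeeds with probability p_success and incurs run_cost
    (e.g. runtime) plus an extra per-call overhead cost_per_call."""
    if not 0.0 < p_success <= 1.0:
        raise ValueError("p_success must be in (0, 1]")
    # Geometric distribution: expected number of calls = 1 / p_success.
    return (run_cost + cost_per_call) / p_success

# Comparing two hypothetical solvers: a fast, low-success heuristic versus
# a slow, reliable one. Which is cheaper depends on the cost-per-call.
fast = expected_total_cost(p_success=0.05, cost_per_call=0.1, run_cost=1.0)
slow = expected_total_cost(p_success=0.90, cost_per_call=0.1, run_cost=15.0)
```

    With these illustrative numbers the slow-but-reliable solver is cheaper in expectation, which is the kind of trade-off the expected-cost benchmark is designed to expose.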

  3. Damping layout optimization for ship's cabin noise reduction based on statistical energy analysis

    Directory of Open Access Journals (Sweden)

    WU Weiguo

    2017-08-01

    An optimization analysis study concerning the damping control of ship's cabin noise was carried out in order to improve the effect and reduce the weight of damping. Based on the Statistical Energy Analysis (SEA) method, a theoretical deduction and numerical analysis of the first-order sensitivity of the A-weighted sound pressure level with respect to the damping loss factor of each subsystem were carried out. On this basis, a mathematical optimization model was proposed and an optimization program developed. Next, the secondary development of the VA One software was implemented through the use of MATLAB, while the cabin noise damping control layout optimization system was established. Finally, the optimization model of the ship was constructed and numerical experiments of damping control optimization conducted. The damping installation region was divided into five parts with different damping thicknesses. The total weight of damping was set as an objective function and the A-weighted sound pressure level of the target cabin was set as a constraint condition. The best damping thickness was obtained through the optimization program, and the total damping weight was reduced by 60.4%. The results show that the damping noise reduction effect per unit weight is significantly improved through the optimization method. This research successfully solves the installation position and thickness selection problems in the acoustic design of damping control, providing a reliable analysis method and guidance for the design.

  4. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of the metamodels and minimum points of the density function. More accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
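    The core of any such scheme is an RBF interpolant that is cheap to evaluate and exact at the sampled points; a sequential method then adds the metamodel's minimizer (and density-based points) and refits. A minimal Gaussian-RBF sketch, where the black-box function and the fixed shape parameter are illustrative assumptions:

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Fit a Gaussian radial-basis-function interpolant to samples (X, y)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * r) ** 2)            # Gaussian kernel matrix
    w = np.linalg.solve(Phi, np.asarray(y, dtype=float))
    return lambda p: np.exp(
        -(eps * np.linalg.norm(X - np.asarray(p, dtype=float), axis=-1)) ** 2
    ) @ w

# Expensive black-box function (stand-in for a simulation).
f = lambda p: (p[0] - 0.3) ** 2

# Initial samples; a sequential scheme would next add the metamodel's
# minimizer (and low-density points) to this set and refit.
X = [[0.0], [0.5], [1.0]]
model = rbf_fit(X, [f(p) for p in X])
grid = np.linspace(0.0, 1.0, 101)
x_new = grid[np.argmin([model([g]) for g in grid])]
```

    The interpolation system is solvable because the Gaussian kernel matrix is positive definite for distinct sample points; `x_new` is the candidate the sequential scheme would evaluate with the true simulation next.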

  5. Many-Body Energy Decomposition with Basis Set Superposition Error Corrections.

    Science.gov (United States)

    Mayer, István; Bakó, Imre

    2017-05-09

    The problem of performing many-body decompositions of energy is considered in the case when BSSE corrections are also performed. It is discussed that the two different schemes that have been proposed go back to the two different interpretations of the original Boys-Bernardi counterpoise correction scheme. It is argued that from the physical point of view the "hierarchical" scheme of Valiron and Mayer should be preferred and not the scheme recently discussed by Ouyang and Bettens, because it permits the energy of the individual monomers and all the two-body, three-body, etc. energy components to be free of unphysical dependence on the arrangement (basis functions) of other subsystems in the cluster.

  6. Optimization and anti-optimization of structures under uncertainty

    National Research Council Canada - National Science Library

    Elishakoff, Isaac; Ohsaki, Makoto

    2010-01-01

    ... The necessity of the anti-optimization approach is first demonstrated; then the anti-optimization techniques are applied to static, dynamic and buckling problems, thus covering the broadest possible set of applications ...

  7. An application of the Proper Orthogonal Decomposition method to the thermo-economic optimization of a dual pressure, combined cycle powerplant

    International Nuclear Information System (INIS)

    Melli, Roberto; Sciubba, Enrico; Toro, Claudia

    2014-01-01

    Highlights: • The CCGT is modelled and simulated in CAMEL-Pro. • Economic costs of the system product are computed. • The POD–RBF procedure is applied to the thermoeconomic optimization of a CCGT power plant. • Economic optimal configuration is identified with POD–RBF procedure. - Abstract: This paper presents a thermo-economic optimization of a combined cycle power plant obtained via the Proper Orthogonal Decomposition–Radial Basis Functions (POD–RBF) procedure. POD, also known as “Karhunen–Loewe decomposition” or as “Method of Snapshots” is a powerful mathematical method for the low-order approximation of highly dimensional processes for which a set of initial data is known in the form of a discrete and finite set of experimental (or simulated) data: the procedure consists in constructing an approximated representation of a matricial operator that optimally “represents” the original data set on the basis of the eigenvalues and eigenvectors of the properly re-assembled data set. By combining POD and RBF it is possible to construct, by interpolation, a functional (parametric) approximation of such a representation. In this paper the set of starting data for the POD–RBF procedure has been obtained by the CAMEL-Pro™ process simulator. The proposed procedure does not require the generation of a complete simulated set of results at each iteration step of the optimization, because POD constructs a very accurate approximation to the function described by a relatively small number of initial simulations, and thus “new” points in design space can be extrapolated without recurring to additional and expensive process simulations. Thus, the often taxing computational effort needed to iteratively generate numerical process simulations of incrementally different configurations is substantially reduced by replacing much of it by easy-to-perform matrix operations. The object of the study was a fossil-fuelled, combined cycle powerplant of
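    The Method of Snapshots mentioned above reduces to a singular value decomposition of the snapshot matrix: the leading left singular vectors form the energy-optimal basis, and projecting the snapshots onto them gives the low-order representation that the RBF step then interpolates across the design space. A self-contained sketch on synthetic rank-two data (not the power-plant data set):

```python
import numpy as np

# Snapshot matrix: each column is one simulated system state (synthetic
# data with an exact two-mode structure standing in for process results).
t = np.linspace(0.0, 1.0, 50)
x = np.linspace(0.0, 1.0, 200)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.2 * np.outer(np.sin(3 * np.pi * x), np.sin(2 * np.pi * t)))

# POD via SVD: the leading left singular vectors are the energy-ranked basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = 2                        # number of retained modes
basis = U[:, :k]

# Project the snapshots onto the reduced basis and reconstruct.
coeffs = basis.T @ snapshots
reconstruction = basis @ coeffs
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

    Because the synthetic data are exactly rank two, two modes reconstruct the snapshots to machine precision; for real simulation data, k is chosen from the singular-value decay.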

  8. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    An optimal control problem for discrete systems is considered. A method of successive improvements is suggested, along with a modernization based on expanding the main structures of the core algorithm with respect to a parameter. The idea of the method rests on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that problem is as follows: starting from the terminal phase point, a path is sought that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function is equal to zero; otherwise it is greater than zero. For this special task the Bellman equation is considered, and the Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation gives nothing, because the Bellman function and its expansion coefficients are zero. A special trick is therefore used: an additional variable characterizing the degree of deviation of the system from the initial state is introduced, yielding an expanded original chain. Nonzero initial conditions are selected for the new variable, so that the resulting trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation to be maintained. As a result of these procedures, algorithms of successive improvements are designed. Relaxation conditions for the algorithms, together with the corresponding necessary conditions of optimality, are also obtained.

  9. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  10. Basis for selecting optimum antibiotic regimens for secondary peritonitis.

    Science.gov (United States)

    Maseda, Emilio; Gimenez, Maria-Jose; Gilsanz, Fernando; Aguilar, Lorenzo

    2016-01-01

    Adequate management of severely ill patients with secondary peritonitis requires supportive therapy for organ dysfunction, source control of the infection and antimicrobial therapy. Since secondary peritonitis is polymicrobial, appropriate empiric therapy requires combination therapy in order to achieve the needed coverage of both common and more unusual organisms. This article reviews etiological agents, resistance mechanisms and their prevalence, how and when to cover them, and guidelines for treatment in the literature. Local surveillance data are the basis for selecting the compounds in antibiotic regimens, which should be further adapted to the increasing number of patients with risk factors for resistance (clinical setting, comorbidities, previous antibiotic treatments, previous colonization, severity…). Inadequate antimicrobial regimens are strongly associated with unfavorable outcomes. Awareness of resistance epidemiology and of the clinical consequences of inadequate therapy against resistant bacteria is crucial for clinicians treating secondary peritonitis, who must strike a delicate balance between optimizing empirical therapy (improving outcomes) and avoiding antimicrobial overuse (which drives the emergence of resistance).

  11. Setting of the Optimal Parameters of Melted Glass

    Czech Academy of Sciences Publication Activity Database

    Luptáková, Natália; Matejíčka, L.; Krečmer, N.

    2015-01-01

    Roč. 10, č. 1 (2015), s. 73-79 ISSN 1802-2308 Institutional support: RVO:68081723 Keywords : Striae * Glass * Glass melting * Regression * Optimal parameters Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass

  12. Optimization of Multipurpose Reservoir Operation with Application Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Elahe Fallah Mehdipour

    2012-12-01

    Full Text Available Optimal operation of multipurpose reservoirs is a complex, and sometimes nonlinear, multi-objective optimization problem. Evolutionary algorithms are optimization tools that search the decision space by simulating natural biological evolution and present a set of points as the optimum solutions of the problem. This research considers the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir with different objectives, including hydropower generation, supply of downstream demands (drinking, industry and agriculture), recreation, and flood control. Solution sets of the MOPSO algorithm for pairwise combinations of objectives were first compared with compromise programming (CP) using different weighting and power coefficients; in all combinations of objectives the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution, and its solutions dominated the CP solutions. Then, the end points of the MOPSO solution set were compared with nonlinear programming (NLP) results. Results showed that the MOPSO algorithm, differing from the NLP results by 0.3 percent, is more capable of presenting optimum solutions at the end points of the solution set.
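
    The particle-swarm mechanics behind MOPSO can be illustrated with a minimal single-objective PSO sketch. The paper's multi-objective variant keeps an archive of non-dominated solutions rather than one global best; the toy objective, bounds, and coefficients below are illustrative and not taken from the Bazoft study:

    ```python
    import random

    def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
        """Minimal single-objective PSO; MOPSO replaces gbest with a Pareto archive."""
        rng = random.Random(seed)
        dim = len(bounds)
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    # inertia + cognitive pull (personal best) + social pull (global best)
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Toy stand-in for a release-schedule objective: squared shortfall from two targets
    best, val = pso_minimize(lambda x: (x[0] - 3.0) ** 2 + (x[1] - 1.0) ** 2,
                             bounds=[(0.0, 10.0), (0.0, 10.0)])
    ```

    On this smooth toy problem the swarm settles near the optimum (3, 1); real reservoir objectives add constraints and conflicting criteria that the archive-based MOPSO handles.
    
    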

  13. Set-Valued Stochastic Equation with Set-Valued Square Integrable Martingale

    Directory of Open Access Journals (Sweden)

    Li Jun-Gang

    2017-01-01

    Full Text Available In this paper, we shall introduce the stochastic integral of a stochastic process with respect to set-valued square integrable martingale. Then we shall give the Aumann integral measurable theorem, and give the set-valued stochastic Lebesgue integral and set-valued square integrable martingale integral equation. The existence and uniqueness of solution to set-valued stochastic integral equation are proved. The discussion will be useful in optimal control and mathematical finance in psychological factors.

  14. Electronic structure of thin films by the self-consistent numerical-basis-set linear combination of atomic orbitals method: Ni(001)

    International Nuclear Information System (INIS)

    Wang, C.S.; Freeman, A.J.

    1979-01-01

    We present the self-consistent numerical-basis-set linear combination of atomic orbitals (LCAO) discrete variational method for treating the electronic structure of thin films. As in the case of bulk solids, this method provides for thin films accurate solutions of the one-particle local density equations with a non-muffin-tin potential. Hamiltonian and overlap matrix elements are evaluated accurately by means of a three-dimensional numerical Diophantine integration scheme. Application of this method is made to the self-consistent solution of one-, three-, and five-layer Ni(001) unsupported films. The LCAO Bloch basis set consists of valence orbitals (3d, 4s, and 4p states for transition metals) orthogonalized to the frozen-core wave functions. The self-consistent potential is obtained iteratively within the superposition of overlapping spherical atomic charge density model with the atomic configurations treated as adjustable parameters. Thus the crystal Coulomb potential is constructed as a superposition of overlapping spherically symmetric atomic potentials and, correspondingly, the local density Kohn-Sham (α = 2/3) potential is determined from a superposition of atomic charge densities. At each iteration in the self-consistency procedure, the crystal charge density is evaluated using a sampling of 15 independent k points in (1/8)th of the irreducible two-dimensional Brillouin zone. The total density of states (DOS) and projected local DOS (by layer plane) are calculated using an analytic linear energy triangle method (presented as an Appendix) generalized from the tetrahedron scheme for bulk systems. Distinct differences are obtained between the surface and central plane local DOS. The central plane DOS is found to converge rapidly to the DOS of bulk paramagnetic Ni obtained by Wang and Callaway. Only a very small surplus charge (0.03 electron/atom) is found on the surface planes, in agreement with jellium model calculations

  15. ANALYSIS AND PARAMETRIC OPTIMIZATION OF ENERGY-AND-TECHNOLOGY UNITS ON THE BASIS OF THE POWER EQUIPMENT OF COMPRESSOR PLANTS OF MAIN GAS PIPELINES

    Directory of Open Access Journals (Sweden)

    V. A. Sednin

    2017-01-01

    Full Text Available On the basis of the gas compressor units of compressor plants of a main gas pipeline, macro-level mathematical models were generated for the analysis and parametric optimization of combined energy-and-technology units. In continuation of the study, these models were applied to obtain regression dependencies. For this purpose a numerical experiment was used, designed with the mathematical tools of regression analysis, which assume that the test results represent independent, normally distributed random variables with approximately equal variance. The dependence of the optimization criterion on the values of the control parameters (factors) is studied. Planning, conducting and processing the results of the experiment proceeded in the following sequence: choice of the optimization criteria, selection of control parameters (factors), encoding of the factors, compiling the experiment matrix, assessing the significance of the regression coefficients, and testing the adequacy of the model and the reproducibility of the experiments. The electric capacity and efficiency of the combined energy-and-technology units were adopted as optimization criteria. For the installation with a gas-expansion-and-generator machine, the adopted control parameters were the temperature of the fuel gas before the expander, the pressure of the fuel gas after the expander, and the temperature of the air supplied to the compressor of the engine; for the steam turbine the control parameters were the compression in the compressor of the engine, the steam consumption for the technology, and the temperature of the air supplied to the compressor of the engine. The outlined methodological approach makes it possible to obtain simple polynomial dependencies, which significantly simplify the procedures of analysis, parametric optimization and evaluation of efficiency in feasibility studies of the options for construction of the energy
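
    The regression step described above — fitting a polynomial dependence of an optimization criterion on coded factors from a designed experiment — can be sketched as ordinary least squares on a factorial design. The 2^2 design and the response coefficients below are hypothetical, not the study's data:

    ```python
    def gauss_jordan(A, b):
        """Solve a small dense linear system by Gauss-Jordan elimination with pivoting."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(n):
                if r != c:
                    f = M[r][c] / M[c][c]
                    M[r] = [a - f * v for a, v in zip(M[r], M[c])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def ols(X, y):
        """Least-squares regression coefficients via the normal equations (X'X)b = X'y."""
        k = len(X[0])
        XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
        Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
        return gauss_jordan(XtX, Xty)

    # 2^2 factorial design in coded units (+1/-1); columns: intercept, factor 1, factor 2
    X = [[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 1, 1]]
    y = [5 + 2 * x1 - 3 * x2 for _, x1, x2 in X]   # noise-free toy response
    coeffs = ols(X, y)                              # recovers intercept and both effects
    ```

    Because the coded design is orthogonal, X'X is diagonal and the factor effects are estimated independently; significance testing and adequacy checks would follow in a real study.
    
    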

  16. Influence of the Training Methods in the Diagnosis of Multiple Sclerosis Using Radial Basis Functions Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ángel Gutiérrez

    2015-04-01

    Full Text Available The amount of data available in the average clinical study of a disease is very often small. This is one of the main obstacles to applying neural networks to the classification of biological signals used for diagnosing diseases. A rule of thumb states that the number of parameters (weights) that can be used for training a neural network should be around 15% of the available data, to avoid overlearning. This condition puts a limit on the dimension of the input space. Different authors have used different approaches to solve this problem, such as eliminating redundancy in the data, preprocessing the data to find centers for the radial basis functions, or extracting a small number of features to be used as inputs. Clearly, the more features we can feed into the network, the better the classification will be. The approach taken in this paper is to increase the number of training elements with randomly expanded training sets. This way the number of original signals does not constrain the dimension of the input set of the radial basis network. We then train the network with two methods: one that minimizes the error function using the gradient descent algorithm, and one that uses the particle swarm optimization technique. A comparison between the two showed that, for the same number of iterations, particle swarm optimization was faster but learned to recognize only the sick people; the gradient method, while not as fast, was in general better at identifying those people.
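
    A minimal sketch of the gradient-descent variant described above: a one-dimensional radial basis function network whose output weights are trained by stochastic gradient descent on squared error. The centers, width, and toy data are invented for illustration; the paper's networks and multiple-sclerosis data are larger:

    ```python
    import math, random

    CENTERS, WIDTH = [0.0, 0.5, 1.0], 0.5   # fixed Gaussian centers and width (assumed)

    def rbf_forward(x, weights, bias):
        """Network output and hidden-layer activations for scalar input x."""
        acts = [math.exp(-((x - c) ** 2) / (2 * WIDTH ** 2)) for c in CENTERS]
        return bias + sum(w * a for w, a in zip(weights, acts)), acts

    def train_rbf(data, lr=0.1, epochs=500, seed=0):
        """Train only the linear output layer by stochastic gradient descent on MSE."""
        rng = random.Random(seed)
        weights = [rng.uniform(-0.1, 0.1) for _ in CENTERS]
        bias = 0.0
        for _ in range(epochs):
            for x, y in data:
                pred, acts = rbf_forward(x, weights, bias)
                err = pred - y
                # gradient of 0.5 * err**2 w.r.t. each output weight is err * activation
                weights = [w - lr * err * a for w, a in zip(weights, acts)]
                bias -= lr * err
        return weights, bias

    # Toy screening task: label 1 (patient) near x = 0, label 0 (control) near x = 1
    data = [(0.0, 1.0), (0.1, 1.0), (0.9, 0.0), (1.0, 0.0)]
    weights, bias = train_rbf(data)
    pred_sick, _ = rbf_forward(0.05, weights, bias)
    pred_healthy, _ = rbf_forward(0.95, weights, bias)
    ```

    Swapping the inner update for a particle swarm search over the same weights gives the paper's second training method; only the optimizer changes, not the network.
    
    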

  17. Simulation of fruit-set and trophic competition and optimization of yield advantages in six Capsicum cultivars using functional-structural plant modelling.

    Science.gov (United States)

    Ma, Y T; Wubs, A M; Mathieu, A; Heuvelink, E; Zhu, J Y; Hu, B G; Cournède, P H; de Reffye, P

    2011-04-01

    Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate the ability of functional-structural plant models to simulate different fruit-set patterns among Capsicum cultivars through source-sink relationships. A greenhouse experiment of six Capsicum cultivars characterized by different fruit weights and fruit-set was conducted. Fruit-set patterns and potential fruit sink strength were determined through measurement. Source and sink strength of other organs were determined via the GREENLAB model, with a description of plant organ weight and dimensions according to plant topological structure established from the measured data as inputs. Parameter optimization was performed using a generalized least squares method for the entire growth cycle. Fruit sink strength differed among cultivars. Vegetative sink strength was generally lower for large-fruited cultivars than for small-fruited ones. The larger the size of the fruit, the larger the variation in fruit-set and fruit yield. Large-fruited cultivars need a higher source-sink ratio for fruit-set, which means higher demand for assimilates. Temporal heterogeneity of fruit-set affected both number and yield of fruit. The simulation study showed that reducing heterogeneity of fruit-set could be achieved by different approaches: for example, increasing source strength; decreasing vegetative sink strength, source-sink ratio for fruit-set and flower appearance rate; and harvesting individual fruits earlier, before full ripeness. Simulation results showed that, when we increased source strength or decreased vegetative sink strength, fruit-set and fruit weight increased. However, no significant differences were found between large-fruited and small-fruited groups of cultivars regarding the effects of source

  18. Oyster Creek cycle 10 nodal model parameter optimization study using PSMS

    International Nuclear Information System (INIS)

    Dougher, J.D.

    1987-01-01

    The power shape monitoring system (PSMS) is an on-line core monitoring system that uses a three-dimensional nodal code (NODE-B) to perform nodal power calculations and compute thermal margins. The PSMS contains a parameter optimization function that improves the ability of NODE-B to accurately monitor core power distributions. This function iterates on the model normalization parameters (albedos and mixing factors) to obtain the best agreement between predicted and measured traversing in-core probe (TIP) readings on a statepoint-by-statepoint basis. Following several statepoint optimization runs, an average set of optimized normalization parameters can be determined and implemented in the current or a subsequent cycle core model for on-line core monitoring. A statistical analysis of 19 high-power steady-state statepoints throughout Oyster Creek cycle 10 operation has shown consistently poor virgin model performance. The normalization parameters used in the cycle 10 NODE-B model were based on a cycle 8 study, which evaluated only Exxon fuel types. The introduction of General Electric (GE) fuel into cycle 10 (172 assemblies) was a significant fuel/core design change that could have altered the optimum set of normalization parameters. Based on the need to evaluate a potential change in the model normalization parameters for cycle 11, and in an attempt to account for the poor cycle 10 model performance, a parameter optimization study was performed

  19. Optimization of a Biometric System Based on Acoustic Images

    Directory of Open Access Journals (Sweden)

    Alberto Izquierdo Fuente

    2014-01-01

    Full Text Available On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis was carried out to determine which images provide more information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing the acquisition time and computational burden, since the number of acoustic images was reduced.

  20. Optimization of a Biometric System Based on Acoustic Images

    Science.gov (United States)

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person at 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis was carried out to determine which images provide more information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system while reducing the acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643

  1. BWR NSSS design basis documentation

    International Nuclear Information System (INIS)

    Vij, R.S.; Bates, R.E.

    2004-01-01

    programs that GE has participated in and describes the different options and approaches that have been used by various utilities in their design basis programs. Some of these variations deal with the scope and depth of coverage of the information, while others are related to the process (how the work is done). Both of these topics can have a significant effect on the program cost. Some insight into these effects is provided. The final section of the paper presents a set of lessons learned and a recommendation for an optimum approach to a design basis information program. The lessons learned reflect the knowledge that GE has gained by participating in design basis programs with nineteen domestic and international BWR owner/operators. The optimum approach described in this paper is GE's attempt to define a set of information and a work process for a utility/GE NSSS Design Basis Information program that will maximize the cost effectiveness of the program for the utility. (author)

  2. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    Science.gov (United States)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
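
    The surrogate idea underlying COBRA can be sketched in one dimension: fit an RBF interpolant to a handful of expensive evaluations and let the cheap surrogate propose the next point. A Gaussian kernel and a toy objective are used here for illustration only; COBRA itself handles black-box constraints, uses other kernels with a polynomial tail, and works in high dimensions:

    ```python
    import math

    WIDTH = 0.3   # Gaussian kernel width (assumed for this toy)

    def gauss_solve(A, b):
        """Gaussian elimination with partial pivoting for small dense systems."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(n):
                if r != c:
                    f = M[r][c] / M[c][c]
                    M[r] = [a - f * v for a, v in zip(M[r], M[c])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def fit_rbf(xs, fs):
        # Gaussian kernel -> symmetric positive definite interpolation matrix
        A = [[math.exp(-((xi - xj) / WIDTH) ** 2) for xj in xs] for xi in xs]
        return gauss_solve(A, fs)

    def surrogate(x, xs, coef):
        return sum(c * math.exp(-((x - xi) / WIDTH) ** 2) for c, xi in zip(coef, xs))

    expensive = lambda x: (x - 0.3) ** 2          # stand-in for a costly black box
    xs = [0.0, 0.25, 0.5, 0.75, 1.0]              # points already evaluated
    coef = fit_rbf(xs, [expensive(x) for x in xs])
    # the cheap surrogate proposes where to spend the next expensive evaluation
    grid = [i / 200 for i in range(201)]
    x_next = min(grid, key=lambda x: surrogate(x, xs, coef))
    ```

    The interpolant reproduces the sampled values exactly, and its minimizer lands near the true optimum, so each expensive evaluation is spent where the model predicts the most progress.
    
    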

  3. PROBLEM OF OPTIMIZATION OF ENTERPRISE FINANCIAL STREAMS: URGENCY UNDER ECONOMIC CRISIS CONDITIONS

    Directory of Open Access Journals (Sweden)

    J. E. Gorbach

    2011-01-01

    Full Text Available The paper considers the problem of structural optimization of financial streams in the economic activities of enterprises. The authors describe the general process of optimizing an enterprise's capital structure, breaking it into stages, and review the most relevant financial stream theories. The paper presents for the first time a «combined optimization model». To develop the model, the most commonly applied methods have been used, namely: optimization on the basis of an average capital price, optimization on the basis of the financial leverage effect, and optimization on the basis of the average price of the managing subject. Alternative calculations of the optimum structure of financial stream sources on the basis of the proposed «combined model» are presented in the corresponding tables. The authors also apply, for the first time, the concepts of «break-even point» and «safety zone» to enterprise financial streams, using a graphic method.
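
    The «average capital price» criterion mentioned above is the weighted average cost of capital (WACC); a toy sketch of choosing the debt share that minimizes it follows. All rates, the tax rate, and the leverage premiums are hypothetical, not figures from the paper:

    ```python
    def wacc(d, cost_equity, cost_debt, tax=0.20):
        """Weighted average cost of capital for debt share d of total financing."""
        return (1 - d) * cost_equity + d * cost_debt * (1 - tax)

    def cost_equity(d):   # hypothetical: equity holders demand a premium as leverage rises
        return 0.15 + 0.30 * d ** 2

    def cost_debt(d):     # hypothetical: lenders also price in growing default risk
        return 0.08 + 0.15 * d ** 2

    candidates = [i / 10 for i in range(10)]           # debt share 0% .. 90%
    best = min(candidates, key=lambda d: wacc(d, cost_equity(d), cost_debt(d)))
    ```

    With these made-up cost curves the WACC is U-shaped in the debt share, so a moderate amount of (tax-shielded) debt is cheaper than either all-equity or heavily leveraged financing.
    
    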

  4. STATISTICAL OPTIMIZATION OF PROCESS VARIABLES FOR ...

    African Journals Online (AJOL)

    2012-11-03

    Nov 3, 2012 ... The osmotic dehydration process was optimized for water loss and solutes gain. ... basis) with safe moisture content for storage (10% wet basis) [3]. Due to ... sucrose, glucose, fructose, corn syrup and sodium chlo- ride have ...

  5. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

    A commonly used solution for the local kernel nonparametric regression problem is polynomial regression. In this study, we demonstrate the estimator and its properties using maximum likelihood estimation for a non-polynomial basis, such as the B-spline, in place of the polynomial basis. This estimator allows for flexibility in the selection of a bandwidth and a knot. The best estimator was selected by finding an optimal bandwidth and knot through minimization of the generalized validation function.
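
    A B-spline basis of the kind substituted for the polynomial basis can be evaluated with the Cox-de Boor recursion. This sketch checks the partition-of-unity property on a clamped quadratic basis; the knot vector is illustrative:

    ```python
    def bspline_basis(i, k, t, knots):
        """Cox-de Boor recursion: value of the i-th B-spline basis of degree k at t."""
        if k == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        value = 0.0
        left = knots[i + k] - knots[i]
        if left > 0:
            value += (t - knots[i]) / left * bspline_basis(i, k - 1, t, knots)
        right = knots[i + k + 1] - knots[i + 1]
        if right > 0:
            value += (knots[i + k + 1] - t) / right * bspline_basis(i + 1, k - 1, t, knots)
        return value

    # Clamped quadratic basis on [0, 3]; interior knots at 1 and 2 are the "knot" choices
    knots = [0, 0, 0, 1, 2, 3, 3, 3]
    degree = 2
    n_basis = len(knots) - degree - 1        # 5 basis functions
    vals = [bspline_basis(i, degree, 1.5, knots) for i in range(n_basis)]
    total = sum(vals)                        # partition of unity inside the interval
    ```

    Each basis function is nonzero on only k+1 knot spans, which is what gives B-spline regression its local character; moving a knot reshapes the fit only locally.
    
    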

  6. Dictionary descent in optimization

    OpenAIRE

    Temlyakov, Vladimir

    2015-01-01

    The problem of convex optimization is studied. Usually in convex optimization the minimization is over a d-dimensional domain. Very often the convergence rate of an optimization algorithm depends on the dimension d. The algorithms studied in this paper utilize dictionaries instead of a canonical basis used in the coordinate descent algorithms. We show how this approach allows us to reduce dimensionality of the problem. Also, we investigate which properties of a dictionary are beneficial for t...

  7. A set of rules for constructing an admissible set of D optimal exact ...

    African Journals Online (AJOL)

    In the search for a D-optimal exact design using the combinatorial iterative technique introduced by Onukogu and Iwundu, 2008, all the support points that make up the experimental region are grouped into H concentric balls according to their distances from the centre. Any selection of N support points from the balls defines ...

  8. What is the best density functional to describe water clusters: evaluation of widely used density functionals with various basis sets for (H2O)n (n = 1-10)

    Czech Academy of Sciences Publication Activity Database

    Li, F.; Wang, L.; Zhao, J.; Xie, J. R. H.; Riley, Kevin Eugene; Chen, Z.

    2011-01-01

    Roč. 130, 2/3 (2011), s. 341-352 ISSN 1432-881X Institutional research plan: CEZ:AV0Z40550506 Keywords : water cluster * density functional theory * MP2 * CCSD(T) * basis set * relative energies Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.162, year: 2011

  9. Reliability-Based Optimization in Structural Engineering

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1994-01-01

    In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: Reliability-based optimal design of structural systems with component or systems reliability constraints, reliability...

  10. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design and in image-guided surgery applications. The complex morphology of thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.

  11. Optimal allocation of the limited oral cholera vaccine supply between endemic and epidemic settings.

    Science.gov (United States)

    Moore, Sean M; Lessler, Justin

    2015-10-06

    The World Health Organization (WHO) recently established a global stockpile of oral cholera vaccine (OCV) to be preferentially used in epidemic response (reactive campaigns) with any vaccine remaining after 1 year allocated to endemic settings. Hence, the number of cholera cases or deaths prevented in an endemic setting represents the minimum utility of these doses, and the optimal risk-averse response to any reactive vaccination request (i.e. the minimax strategy) is one that allocates the remaining doses between the requested epidemic response and endemic use in order to ensure that at least this minimum utility is achieved. Using mathematical models, we find that the best minimax strategy is to allocate the majority of doses to reactive campaigns, unless the request came late in the targeted epidemic. As vaccine supplies dwindle, the case for reactive use of the remaining doses grows stronger. Our analysis provides a lower bound for the amount of OCV to keep in reserve when responding to any request. These results provide a strategic context for the fulfilment of requests to the stockpile, and define allocation strategies that minimize the number of OCV doses that are allocated to suboptimal situations. © 2015 The Authors.
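
    The minimax allocation logic can be sketched with toy numbers: endemic doses carry a guaranteed per-dose utility (the floor the paper describes), while doses sent to a late-arriving reactive campaign may partly go unused; the risk-averse split maximizes the worst case. All utilities and the cap below are hypothetical, not outputs of the paper's models:

    ```python
    def worst_case_utility(to_epidemic, total=100, endemic_per_dose=1.0,
                           epi_per_dose=2.0, usable_cap=60):
        """Worst-case cases averted if `to_epidemic` doses go to the reactive campaign.

        Hypothetical worst case: the request arrives late in the epidemic, so at most
        `usable_cap` doses still avert cases there; doses kept for endemic use provide
        the guaranteed minimum utility.
        """
        epidemic = epi_per_dose * min(to_epidemic, usable_cap)
        endemic = endemic_per_dose * (total - to_epidemic)
        return epidemic + endemic

    # minimax (risk-averse) split over a stockpile of 100 dose-units
    best_split = max(range(101), key=worst_case_utility)
    ```

    With these numbers the guaranteed utility grows until the reactive campaign saturates at 60 doses and falls afterwards, reproducing the paper's qualitative finding: send most doses to the reactive campaign, but hold back a reserve once the request comes late in the epidemic.
    
    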

  12. Optimization of a Solid-State Electron Spin Qubit Using Gate Set Tomography (Open Access, Publisher’s Version)

    Science.gov (United States)

    2016-10-13

    and addressed when the qubit is used within a fault-tolerant quantum computation scheme. 1. Introduction One of the main challenges in the physical...supplied in the supplementary material. Additionally, we have supplied the data files constructed from the experiments, along with the Python notebook used to...New J. Phys. 18 (2016) 103018 doi:10.1088/1367-2630/18/10/103018 PAPER Optimization of a solid-state electron spin qubit using gate set tomography

  13. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets, defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning more accurate classifiers. We define two families of gene sets using information on regulatory interactions and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions thus improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
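
    The set-level transformation described above, in its simplest form, replaces gene-level features by a per-set aggregate (here the mean). The gene sets and expression values are invented for illustration, and the paper builds its sets from regulatory interactions rather than ad hoc index lists:

    ```python
    def set_level_features(expr, gene_sets):
        """Collapse a gene-level expression vector into set-level features (mean per set)."""
        return [sum(expr[g] for g in genes) / len(genes) for genes in gene_sets]

    # Hypothetical regulon-style sets given as gene indices, plus one sample
    gene_sets = [[0, 1, 2], [3, 4]]
    sample = [2.0, 2.2, 1.8, 0.4, 0.6]                 # 5 gene-level features
    features = set_level_features(sample, gene_sets)   # -> 2 set-level features
    ```

    A downstream classifier is then trained on the short set-level vectors instead of the raw gene vectors; when the genes within each set are highly correlated, the mean loses little information while shrinking the feature space.
    
    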

  14. Typed Sets as a Basis for Object-Oriented Database Schemas

    NARCIS (Netherlands)

    Balsters, H.; de By, R.A.; Zicari, R.

    The object-oriented data model TM is a language that is based on the formal theory of FM, a typed language with object-oriented features such as attributes and methods in the presence of subtyping. The general (typed) set constructs of FM allow one to deal with (database) constraints in TM. The

  15. Depression screening optimization in an academic rural setting.

    Science.gov (United States)

    Aleem, Sohaib; Torrey, William C; Duncan, Mathew S; Hort, Shoshana J; Mecchella, John N

    2015-01-01

    Primary care plays a critical role in the screening and management of depression. The purpose of this paper is to focus on leveraging the electronic health record (EHR) as well as workflow redesign to improve the efficiency and reliability of depression screening in two adult primary care clinics of a rural academic institution in the USA. The authors utilized various process improvement tools from lean six sigma methodology, including a project charter, swim lane process maps, a critical-to-quality tree, process control charts, fishbone diagrams, a frequency impact matrix, mistake proofing and a monitoring plan, in Define-Measure-Analyze-Improve-Control format. Interventions included a change in the depression screening tool, optimization of data entry in the EHR, follow-up of positive screens, staff training and EHR redesign. The depression screening rate for office-based primary care visits improved from 17.0 percent at baseline to 75.9 percent in the post-intervention control phase (p<0.001). Follow-up of positive depression screens with Patient Health Questionnaire-9 data collection remained above 90 percent. Duplication of depression screening increased from 0.6 percent initially to 11.7 percent and then decreased to 4.7 percent after optimization of data entry by patients and flow staff. The impact of the interventions on clinical outcomes could not be evaluated. Successful implementation, sustainability and revision of a process improvement initiative to facilitate screening, follow-up and management of depression in primary care requires accounting for the voice of the process (performance metrics), system limitations and the voice of the customer (staff and patients) to overcome various system, customer and human resource constraints.

  16. Basis for NGNP Reactor Design Down-Selection

    Energy Technology Data Exchange (ETDEWEB)

    L.E. Demick

    2010-08-01

    The purpose of this paper is to identify the extent of technology development, design and licensing maturity anticipated to be required to credibly identify differences that could make a technical choice practical between the prismatic and pebble bed reactor designs. This paper does not address a business decision based on the economics, business model and resulting business case since these will vary based on the reactor application. The selection of the type of reactor, the module ratings, the number of modules, the configuration of the balance of plant and other design selections will be made on the basis of optimizing the Business Case for the application. These are not decisions that can be made on a generic basis.

  17. Optimal Design of Rectification Circuit in Electronic Circuit Fault Self-repair Based on EHW and RBT

    Institute of Scientific and Technical Information of China (English)

    ZHANG Junbin; CAI Jinyan; MENG Yafeng

    2018-01-01

    The reliability of traditional electronic circuits is improved mainly by redundant fault-tolerant technology, which consumes substantial hardware resources and offers limited fault self-repair capability. In complicated environments, electronic circuit faults appear easily, and if on-site immediate repair is not implemented, the normal running of the electronic system is directly affected. To solve these problems, Evolvable hardware (EHW) technology is widely used, although conventional EHW has some bottlenecks. The optimal design of the Rectification circuit (RTC) is further researched on the basis of the previously proposed fault self-repair based on EHW and Reparation balance technology (RBT). Fault sets are selected by fault danger degree and fault coverage rate. The optimally designed RTC can completely repair faults in the fault set. Simulation results prove that it has higher self-repair capability and lower hardware resource consumption.

  18. Design of cognitive engine for cognitive radio based on the rough sets and radial basis function neural network

    Science.gov (United States)

    Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli

    2013-03-01

    Cognitive radio (CR) is an intelligent wireless communication system that can dynamically adjust its parameters to improve system performance in response to environmental changes and quality-of-service demands. The core technology of CR is the design of the cognitive engine, which introduces reasoning and learning methods from the field of artificial intelligence to achieve perception, adaptation and learning capability. Considering the dynamic wireless environment and demands, this paper proposes a design of a cognitive engine based on rough sets (RS) and a radial basis function neural network (RBF_NN). The method uses experienced knowledge and environment information processed by the RS module to train the RBF_NN, and the learned model is then used to reconfigure communication parameters to allocate resources rationally and improve system performance. After training the learning model, the performance is evaluated according to two benchmark functions. The simulation results demonstrate the effectiveness of the model, and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.

  19. Molecular Properties by Quantum Monte Carlo: An Investigation on the Role of the Wave Function Ansatz and the Basis Set in the Water Molecule

    Science.gov (United States)

    Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo

    2014-01-01

    Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited for modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929

  20. Topology optimized microbioreactors

    DEFF Research Database (Denmark)

    Schäpper, Daniel; Lencastre Fernandes, Rita; Eliasson Lantz, Anna

    2011-01-01

    This article presents the fusion of two hitherto unrelated fields—microbioreactors and topology optimization. The basis for this study is a rectangular microbioreactor with homogeneously distributed immobilized brewers yeast cells (Saccharomyces cerevisiae) that produce a recombinant protein...

  1. Strong stabilization servo controller with optimization of performance criteria.

    Science.gov (United States)

    Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor

    2011-07-01

    Synthesis of a simple robust controller with a pole placement technique and the H(∞) metric is the method used for the control of a servo mechanism with BLDC and BDC electric motors. The method includes solving a polynomial equation on the basis of the chosen characteristic polynomial, using the Manabe standard polynomial form and parametric solutions. Parametric solutions are introduced directly into the structure of the servo controller. On the basis of the chosen parametric solutions, the robustness of the closed-loop system is assessed through uncertainty models and assessment of the norm ‖•‖(∞). The design procedure and the optimization are performed with a genetic algorithm, differential evolution (DE). The DE optimization method determines a suboptimal solution throughout the optimization on the basis of a spectrally square polynomial and Šiljak's absolute stability test. The stability of the designed controller during the optimization is checked with Lipatov's stability condition. Both approaches, Šiljak's test and Lipatov's condition, check the robustness and stability characteristics on the basis of the polynomial's coefficients, and are very convenient for the automated design of closed-loop control and for application in optimization algorithms such as DE. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
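The differential evolution step mentioned in the record above can be sketched in a few lines. This is a generic DE/rand/1/bin sketch, not the authors' implementation; the function name, parameter values, and the sphere-function test problem are illustrative assumptions.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=1):
    """Minimal DE/rand/1/bin sketch: mutate, crossover, then greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct members other than i for the mutant vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            j_rand = rng.randrange(dim)  # guarantee at least one mutated coordinate
            trial = [mutant[d] if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection keeps the better of trial and parent
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# hypothetical usage: minimize the 3-D sphere function
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

In the record's setting, `f` would instead evaluate the controller's robustness criterion subject to the stability tests.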

  2. Network models for solving the problem of multicriterial adaptive optimization of investment projects control with several acceptable technologies

    Science.gov (United States)

    Shorikov, A. F.; Butsenko, E. V.

    2017-10-01

    This paper discusses the problem of multicriteria adaptive optimization of investment project control in the presence of several technologies. On the basis of network modeling, a new economic-mathematical model and a method for solving this problem are proposed. Network economic-mathematical modeling makes it possible to determine the optimal time and calendar schedule for the implementation of an investment project and serves as an instrument to increase the economic potential and competitiveness of the enterprise. Using a meaningful practical example, the processes of forming network models are shown, including the definition of the sequence of actions of a particular investment planning process, and network-based work schedules are constructed. The parameters of the network models are calculated. Optimal (critical) paths are formed and the optimal time for implementing the chosen technologies of the investment project is calculated. The paper also shows the selection of the optimal technology from a set of possible technologies for project implementation, taking into account the time and cost of the work. The proposed model and method for solving the problem of managing investment projects can serve as a basis for the development, creation and application of appropriate computer information systems to support managerial decision making.
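The critical-path computation that underlies such network schedules can be sketched with a standard forward/backward pass. The task names and durations below are invented for illustration and are not taken from the paper; positive durations are assumed.

```python
def critical_path(tasks):
    """tasks: {name: (duration, [predecessor names])}, durations > 0.
    Returns (project length, list of critical tasks with zero total float)."""
    es, ef = {}, {}  # earliest start / earliest finish

    def forward(t):
        if t in ef:
            return ef[t]
        dur, preds = tasks[t]
        es[t] = max((forward(p) for p in preds), default=0)
        ef[t] = es[t] + dur
        return ef[t]

    for t in tasks:
        forward(t)
    length = max(ef.values())

    # backward pass: latest finish of each predecessor bounded by successors
    lf = {t: length for t in tasks}
    for t in sorted(tasks, key=lambda name: -ef[name]):
        dur, preds = tasks[t]
        for p in preds:
            lf[p] = min(lf[p], lf[t] - dur)
    ls = {t: lf[t] - tasks[t][0] for t in tasks}

    critical = [t for t in tasks if es[t] == ls[t]]  # zero float => critical
    return length, critical

# toy schedule: A precedes B and C; D needs both B and C
length, crit = critical_path({
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
})
# project length 8; critical tasks A, C, D
```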

  3. Stepwise multi-criteria optimization for robotic radiosurgery

    International Nuclear Information System (INIS)

    Schlaefer, A.; Schweikard, A.

    2008-01-01

    Achieving good conformality and a steep dose gradient around the target volume remains a key aspect of radiosurgery. Clearly, this involves a trade-off between target coverage, conformality of the dose distribution, and sparing of critical structures. Yet, image guidance and robotic beam placement have extended highly conformal dose delivery to extracranial and moving targets. The multi-criteria nature of the optimization problem therefore becomes even more apparent, as multiple conflicting clinical goals need to be considered together to obtain an optimal treatment plan. Typically, planning for robotic radiosurgery is based on constrained optimization, namely linear programming. An extension of that approach is presented, such that each of the clinical goals can be addressed separately and in any sequential order. For a set of common clinical goals, the mapping to a mathematical objective and a corresponding constraint is defined. The trade-off among the clinical goals is explored by modifying the constraints and optimizing a simple objective, while retaining feasibility of the solution. Moreover, it becomes immediately obvious whether a desired goal can be achieved and where a trade-off is possible. No importance factors or predefined prioritizations of clinical goals are necessary. The presented framework forms the basis for interactive and automated planning procedures. It is demonstrated for a sample case that the linear programming formulation is suitable to search for a clinically optimal treatment, and that the optimization steps can be performed quickly to establish that a Pareto-efficient solution has been found. Furthermore, it is demonstrated how the stepwise approach is preferable to modifying importance factors.

  4. Topology optimization

    DEFF Research Database (Denmark)

    Bendsøe, Martin P.; Sigmund, Ole

    2007-01-01

    Taking as a starting point a design case for a compliant mechanism (a force inverter), the fundamental elements of topology optimization are described. The basis for the developments is a FEM format for this design problem and emphasis is given to the parameterization of design as a raster image...

  5. Optimization models using fuzzy sets and possibility theory

    CERN Document Server

    Orlovski, S

    1987-01-01

    Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But also in other areas such as engineering design, regional policy, logistics and many others, the search for optimal solutions is one of the prime goals. The methods and models which have been used over the last decades in these areas have primarily been "hard" or "crisp", i.e. the solutions were considered to be either feasible or unfeasible, either above a certain aspiration level or below. This dichotomous structure of methods very often forced the modeller to approximate real problem situations of the more-or-less type by yes-or-no-type models, the solutions of which might turn out not to be the solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, uncertainty due to inconsistent or incomplete evidence, if natural language has to be...

  6. Reliability-based optimization of engineering structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization...... problems are generalized to the following extensions: interactive optimization, inspection and repair costs, systematic reconstruction, re-assessment of existing structures. Illustrative examples are presented including a simple introductory example, a decision problem related to bridge re...

  7. NON-CONVENTIONAL MACHINING PROCESSES SELECTION USING MULTI-OBJECTIVE OPTIMIZATION ON THE BASIS OF RATIO ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    MILOŠ MADIĆ

    2015-11-01

    Full Text Available The role of non-conventional machining processes (NCMPs in today’s manufacturing environment has been well acknowledged. For effective utilization of the capabilities and advantages of different NCMPs, selection of the most appropriate NCMP for a given machining application requires consideration of different conflicting criteria. The right choice of the NCMP is critical to the success and competitiveness of the company. As the NCMP selection problem involves consideration of different conflicting criteria, of different relative importance, the multi-criteria decision making (MCDM methods are very useful in systematical selection of the most appropriate NCMP. This paper presents the application of a recent MCDM method, i.e., the multi-objective optimization on the basis of ratio analysis (MOORA method to solve NCMP selection which has been defined considering different performance criteria of four most widely used NCMPs. In order to determine the relative significance of considered quality criteria a pair-wise comparison matrix of the analytic hierarchy process was used. The results obtained using the MOORA method showed perfect correlation with those obtained by the technique for order preference by similarity to ideal solution (TOPSIS method which proves the applicability and potentiality of this MCDM method for solving complex NCMP selection problems.
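The ratio-system step of MOORA described above is simple enough to sketch directly: each criterion column is normalized by its Euclidean norm, and each alternative is scored as the weighted sum of its beneficial criteria minus its non-beneficial ones. The decision matrix, weights, and criteria below are invented for illustration, not taken from the paper.

```python
import math

def moora(matrix, weights, beneficial):
    """Ratio-system MOORA. matrix: alternatives x criteria; beneficial[j] is True
    for criteria to maximize (e.g. quality) and False for those to minimize (e.g. cost)."""
    m, n = len(matrix), len(matrix[0])
    # normalize each criterion column by its Euclidean norm
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    scores = []
    for row in matrix:
        s = 0.0
        for j, x in enumerate(row):
            r = weights[j] * x / norms[j]
            s += r if beneficial[j] else -r
        scores.append(s)
    return scores  # higher score = better alternative

# hypothetical example: 3 machining processes rated on quality (maximize) and cost (minimize)
scores = moora([[7, 3], [9, 6], [5, 2]], weights=[0.6, 0.4], beneficial=[True, False])
best = max(range(3), key=lambda i: scores[i])
```

A pair-wise AHP comparison, as used in the paper, would supply the `weights` vector.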

  8. Optimal separable bases and molecular collisions

    International Nuclear Information System (INIS)

    Poirier, L.W.

    1997-12-01

    A new methodology is proposed for the efficient determination of Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, are problems of reduced dimensionality for most systems of physical interest. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. These distorted waves give rise to a Born series with optimized convergence properties. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic oscillator system. The primary interest, however, is quantum reactive scattering in molecular systems. For numerical calculations, the use of distorted waves corresponds to numerical preconditioning. The new methodology therefore gives rise to an optimized preconditioning scheme for the efficient calculation of reactive and inelastic scattering amplitudes, especially at intermediate energies. This scheme is particularly suited to the discrete variable representations (DVRs) and iterative sparse matrix methods commonly employed in such calculations. State-to-state and cumulative reactive scattering results obtained via the optimized preconditioner are presented for the two-dimensional collinear H + H2 → H2 + H system. Computational time and memory requirements for this system are drastically reduced in comparison with other methods, and results are obtained for previously prohibitive energy regimes

  9. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
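The single-objective PSO update rule described above (each particle steered by its personal best and the swarm leader) can be sketched as follows. Function names, coefficient values, and the sphere-function misfit are illustrative assumptions; a tomographic application would replace `f` with a data-misfit functional, and the multi-objective Pareto variant discussed in the record requires a different leader-selection rule.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: velocities blend inertia, attraction to the personal best,
    and attraction to the global best (the swarm leader)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:          # update personal best
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:         # update swarm leader
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best, best_f = pso(lambda v: sum(x * x for x in v), [(-10, 10)] * 2)
```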

  10. The optimization of heat supply centralization on the basis of boiler-rooms

    International Nuclear Information System (INIS)

    Arshakian, D.

    1992-01-01

    In this article the problem of finding the optimum of heat supply centralization for towns and industrial districts on the basis of boiler-rooms, using organic and nuclear fuel, in the natural-climatic conditions and town-building transitions of Armenia is considered. (orig.) [de

  11. CRM System Optimization

    OpenAIRE

    Fučík, Ivan

    2015-01-01

    This thesis is focused on CRM solutions in small and medium-sized organizations with respect to the quality of their customer relationships. The main goal of this work is to design an optimal CRM solution in the environment of a real organization. To achieve this goal it is necessary to understand the theoretical basis of several topics, such as organizations and their relationships with customers, CRM systems, their features and trends. On the basis of these theoretical topics it is possible to ...

  12. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

    Thirty-two programs for linear and nonlinear optimization problems with or without constraints have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. SCOOP-I is designed to be an efficient, reliable, useful and also flexible system for general applications. The system enables one to find the global optimum for a wide class of problems by selecting the most appropriate of the optimization methods built into it. (author)

  13. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters

  14. Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.

    Science.gov (United States)

    Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra

    2007-12-01

    The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDSs into archetypes, outline typical problems with CDSs, and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDSs as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In the case of the Paediatric Oncology CDS (which consists of 260 data items), this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDSs as archetypes, we identified nine problems with CDSs that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for the ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous, as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDSs.

  15. Probing community nurses' professional basis

    DEFF Research Database (Denmark)

    Schaarup, Clara; Pape-Haugaard, Louise; Jensen, Merete Hartun

    2017-01-01

    Complicated and long-lasting wound care of diabetic foot ulcers are moving from specialists in wound care at hospitals towards community nurses without specialist diabetic foot ulcer wound care knowledge. The aim of the study is to elucidate community nurses' professional basis for treating...... diabetic foot ulcers. A situational case study design was adopted in an archetypical Danish community nursing setting. Experience is a crucial component in the community nurses' professional basis for treating diabetic foot ulcers. Peer-to-peer training is the prevailing way to learn about diabetic foot...... ulcer, however, this contributes to the risk of low evidence-based practice. Finally, a frequent behaviour among the community nurses is to consult colleagues before treating the diabetic foot ulcers....

  16. The basis property of eigenfunctions in the problem of a nonhomogeneous damped string

    Directory of Open Access Journals (Sweden)

    Łukasz Rzepnicki

    2017-01-01

    Full Text Available The equation which describes the small vibrations of a nonhomogeneous damped string can be rewritten as an abstract Cauchy problem for the densely defined closed operator \(iA\). We prove that the set of root vectors of the operator \(A\) forms a basis of subspaces in a certain Hilbert space \(H\). Furthermore, we give the rate of convergence for the decomposition with respect to this basis. In the second main result we show that with additional assumptions the set of root vectors of the operator \(A\) is a Riesz basis for \(H\).
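For context, the Riesz basis property claimed in the record's second result has the following standard textbook characterization (added here for the reader; it is not quoted from the record): a sequence \((f_k)\) in a Hilbert space \(H\) is a Riesz basis if it is the image of an orthonormal basis under a bounded invertible operator, or equivalently, if it is complete and there exist constants \(0 < c \le C\) such that

```latex
% standard Riesz basis inequality, for every finite coefficient sequence (a_k)
c \sum_k |a_k|^2 \;\le\; \Bigl\| \sum_k a_k f_k \Bigr\|_H^2 \;\le\; C \sum_k |a_k|^2 .
```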

  17. Preventive maintenance basis: Volume 1 -- Air-operated valves. Final report

    International Nuclear Information System (INIS)

    Worledge, D.; Hinchcliffe, G.

    1997-07-01

    US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. This document provides a program of PM tasks suitable for application to Air-Operated Valves (AOVs) in nuclear power plants. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs

  18. Simulation-based robust optimization for signal timing and setting.

    Science.gov (United States)

    2009-12-30

    The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...

  19. Optimization of externalities using DTM measures: a Pareto optimal multi objective optimization using the evolutionary algorithm SPEA2+

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart

    2010-01-01

    Multi objective optimization of externalities of traffic is performed solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto optimal set is determined by employing the SPEA2+ evolutionary algorithm.

  20. Ab initio thermochemistry using optimal-balance models with isodesmic corrections: The ATOMIC protocol

    Science.gov (United States)

    Bakowies, Dirk

    2009-04-01

    A theoretical composite approach, termed ATOMIC for Ab initio Thermochemistry using Optimal-balance Models with Isodesmic Corrections, is introduced for the calculation of molecular atomization energies and enthalpies of formation. Care is taken to achieve optimal balance in accuracy and cost between the various components contributing to high-level estimates of the fully correlated energy at the infinite-basis-set limit. To this end, the energy at the coupled-cluster level of theory including single, double, and quasiperturbational triple excitations is decomposed into Hartree-Fock, low-order correlation (MP2, CCSD), and connected-triples contributions and into valence-shell and core contributions. Statistical analyses for 73 representative neutral closed-shell molecules containing hydrogen and at least three first-row atoms (CNOF) are used to devise basis-set and extrapolation requirements for each of the eight components to maintain a given level of accuracy. Pople's concept of bond-separation reactions is implemented in an ab initio framework, providing for a complete set of high-level precomputed isodesmic corrections which can be used for any molecule for which a valence structure can be drawn. Use of these corrections is shown to lower basis-set requirements dramatically for each of the eight components of the composite model. A hierarchy of three levels is suggested for isodesmically corrected composite models which reproduce atomization energies at the reference level of theory to within 0.1 kcal/mol (A), 0.3 kcal/mol (B), and 1 kcal/mol (C). Large-scale statistical analysis shows that corrections beyond the CCSD(T) reference level of theory, including coupled-cluster theory with fully relaxed connected triple and quadruple excitations, first-order relativistic and diagonal Born-Oppenheimer corrections can normally be dealt with using a greatly simplified model that assumes thermoneutral bond-separation reactions and that reduces the estimate of these
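The infinite-basis-set limit mentioned above is commonly estimated by extrapolating correlation energies over consecutive basis-set cardinal numbers. A widely used two-point inverse-cube scheme, shown below as a sketch, illustrates the idea; it is not necessarily the exact extrapolation used in the ATOMIC protocol, and the energies in the example are synthetic.

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point complete-basis-set extrapolation assuming E(X) = E_CBS + A * X**-3,
    where X and Y are cardinal numbers of consecutive correlation-consistent basis sets
    (e.g. X=3 for triple-zeta, Y=4 for quadruple-zeta)."""
    x3, y3 = x ** 3, y ** 3
    return (y3 * e_y - x3 * e_x) / (y3 - x3)

# synthetic check: energies generated from the assumed model recover E_CBS exactly
e_cbs, a = -76.3, 0.05          # hypothetical limit (hartree) and decay amplitude
e3 = e_cbs + a * 3 ** -3
e4 = e_cbs + a * 4 ** -3
estimate = cbs_two_point(e3, 3, e4, 4)
```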

  1. Thickness optimization of fiber reinforced laminated composites using the discrete material optimization method

    DEFF Research Database (Denmark)

    Sørensen, Søren Nørgaard; Lund, Erik

    2012-01-01

    This work concerns a novel large-scale multi-material topology optimization method for simultaneous determination of the optimum variable integer thickness and fiber orientation throughout laminate structures with fixed outer geometries while adhering to certain manufacturing constraints....... The conceptual combinatorial/integer problem is relaxed to a continuous problem and solved on basis of the so-called Discrete Material Optimization method, explicitly including the manufacturing constraints as linear constraints....

  2. Dire deadlines: coping with dysfunctional family dynamics in an end-of-life care setting.

    Science.gov (United States)

    Holst, Lone; Lundgren, Maren; Olsen, Lutte; Ishøy, Torben

    2009-01-01

    Working in a hospice and being able to focus on individualized, specialized end-of-life care is a privilege for the hospice staff member. However, it also presents the hospice staff with unique challenges. This descriptive study is based upon two cases from an end-of-life care setting in Denmark, where dysfunctional family dynamics presented added challenges to the staff members in their efforts to provide optimal palliative care. The hospice triad--the patient, the staff member and the family member--forms the basis for communication and intervention in a hospice. Higher expectations and demands of younger, more well-informed patients and family members challenge hospice staff in terms of information and communication when planning for care. The inherent risk factors of working with patients in the terminal phase of life become a focal point in the prevention of the development of compassion fatigue among staff members. A series of coping strategies to more optimally manage dysfunctional families in a setting where time is of the essence are then presented in an effort to empower the hospice team, to prevent splitting among staff members, and to improve quality of care.

  3. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    Science.gov (United States)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the distribution of power grid sources, an optimized particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results of the algorithms demonstrates the superiority of the proposed algorithm in convergence and optimization performance, which lays the foundation for the subsequent solution of the micro-grid power optimization configuration problem.

  4. An adaptive control algorithm for optimization of intensity modulated radiotherapy considering uncertainties in beam profiles, patient set-up and internal organ motion

    International Nuclear Information System (INIS)

    Loef, Johan; Lind, Bengt K.; Brahme, Anders

    1998-01-01

    A new general beam optimization algorithm for inverse treatment planning is presented. It utilizes a new formulation of the probability to achieve complication-free tumour control. The new formulation explicitly describes the dependence of the treatment outcome on the incident fluence distribution, the patient geometry, the radiobiological properties of the patient and the fractionation schedule. In order to account for both measured and non-measured positioning uncertainties, the algorithm is based on a combination of dynamic and stochastic optimization techniques. Because of the difficulty in measuring all aspects of the intra- and interfractional variations in the patient geometry, such as internal organ displacements and deformations, these uncertainties are primarily accounted for in the treatment planning process by intensity modulation using stochastic optimization. The information about the deviations from the nominal fluence profiles and the nominal position of the patient relative to the beam that is obtained by portal imaging during treatment delivery, is used in a feedback loop to automatically adjust the profiles and the location of the patient for all subsequent treatments. Based on the treatment delivered in previous fractions, the algorithm furnishes optimal corrections for the remaining dose delivery both with regard to the fluence profile and its position relative to the patient. By dynamically refining the beam configuration from fraction to fraction, the algorithm generates an optimal sequence of treatments that very effectively reduces the influence of systematic and random set-up uncertainties to minimize and almost eliminate their overall effect on the treatment. Computer simulations have shown that the present algorithm leads to a significant increase in the probability of uncomplicated tumour control compared with the simple classical approach of adding fixed set-up margins to the internal target volume. (author)

  5. Learning Mixtures of Truncated Basis Functions from Data

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Pérez-Bernabé, Inmaculada

    2014-01-01

    In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. We then propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, and therefore indicate that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular sub-class of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.

  6. Optimization problem in quantum cryptography

    International Nuclear Information System (INIS)

    Brandt, Howard E

    2003-01-01

    A complete optimization was recently performed, yielding the maximum information gain by a general unitary entangling probe in the four-state protocol of quantum cryptography. A larger set of optimum probe parameters was found than was known previously from an incomplete optimization. In the present work, a detailed comparison is made between the complete and incomplete optimizations. Also, a new set of optimum probe parameters is identified for the four-state protocol

  7. Minimization of Basis Risk in Parametric Earthquake Cat Bonds

    Science.gov (United States)

    Franco, G.

    2009-12-01

    A catastrophe (cat) bond is an instrument used by insurance and reinsurance companies, by governments or by groups of nations to cede catastrophic risk to the financial markets, which are capable of supplying cover for highly destructive events, surpassing the typical capacity of traditional reinsurance contracts. Parametric cat bonds, a specific type of cat bonds, use trigger mechanisms or indices that depend on physical event parameters published by respected third parties in order to determine whether a part or the entire bond principal is to be paid for a certain event. First generation cat bonds, or cat-in-a-box bonds, display a trigger mechanism that consists of a set of geographic zones in which certain conditions need to be met by an earthquake’s magnitude and depth in order to trigger payment of the bond principal. Second generation cat bonds use an index formulation that typically consists of a sum of products of a set of weights by a polynomial function of the ground motion variables reported by a geographically distributed seismic network. These instruments are especially appealing to developing countries with incipient insurance industries wishing to cede catastrophic losses to the financial markets because the payment trigger mechanism is transparent and does not involve the parties ceding or accepting the risk, significantly reducing moral hazard. In order to be successful in the market, however, parametric cat bonds have typically been required to specify relatively simple trigger conditions. The consequence of such simplifications is the increase of basis risk. This risk represents the possibility that the trigger mechanism fails to accurately capture the actual losses of a catastrophic event, namely that it does not trigger for a highly destructive event or vice versa, that a payment of the bond principal is caused by an event that produced insignificant losses. The first case disfavors the sponsor, who was seeking cover for its losses, while the second disfavors the investors, whose principal is paid out for an event that caused little damage.

  8. Optimal decision making on the basis of evidence represented in spike trains.

    Science.gov (United States)

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by the Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
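
    The optimal procedure described in this record can be sketched numerically: for spike counts drawn from one of two Poisson rates, the sequential probability ratio test accumulates the log-likelihood ratio per observation until it crosses a decision threshold. A minimal illustration; the rates and threshold below are arbitrary choices, not values from the paper:

```python
import math

def poisson_llr(k, rate_a, rate_b):
    """Log-likelihood ratio of one spike count k under Poisson(rate_a) vs
    Poisson(rate_b). The k! terms cancel, leaving k*log(ratio) - rate difference."""
    return k * math.log(rate_a / rate_b) - (rate_a - rate_b)

def sprt(counts, rate_a=10.0, rate_b=5.0, threshold=3.0):
    """Accumulate evidence until the cumulative log-likelihood ratio crosses
    +threshold (choose A) or -threshold (choose B)."""
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        llr += poisson_llr(k, rate_a, rate_b)
        if llr >= threshold:
            return "A", n
        if llr <= -threshold:
            return "B", n
    return None, len(counts)

choice, used = sprt([9, 11, 10, 8])   # counts near rate_a = 10 favour hypothesis A
```

Because the Poisson likelihood ratio is linear in the spike count, integrating spikes directly (as cortical integrators are assumed to do) implements this test up to constants.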

  9. Using an optimal CC-PLSR-RBFNN model and NIR spectroscopy for the starch content determination in corn

    Science.gov (United States)

    Jiang, Hao; Lu, Jiangang

    2018-05-01

    Corn starch is an important material which has been traditionally used in the fields of food and chemical industry. In order to enhance the rapidness and reliability of the determination for starch content in corn, a methodology is proposed in this work, using an optimal CC-PLSR-RBFNN calibration model and near-infrared (NIR) spectroscopy. The proposed model was developed based on the optimal selection of crucial parameters and the combination of correlation coefficient method (CC), partial least squares regression (PLSR) and radial basis function neural network (RBFNN). To test the performance of the model, a standard NIR spectroscopy data set was introduced, containing spectral information and chemical reference measurements of 80 corn samples. For comparison, several other models based on the identical data set were also briefly discussed. In this process, the root mean square error of prediction (RMSEP) and coefficient of determination (R_p^2) in the prediction set were used to make evaluations. As a result, the proposed model presented the best predictive performance with the smallest RMSEP (0.0497%) and the highest R_p^2 (0.9968). Therefore, the proposed method combining NIR spectroscopy with the optimal CC-PLSR-RBFNN model can be helpful to determine starch content in corn.
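
    The RBFNN component of such a calibration model can be sketched as Gaussian radial basis features followed by a linear readout fitted by least squares. This toy reconstruction uses invented 1-D data and is not the paper's implementation:

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian radial basis features exp(-(x - c)^2 / (2*width^2)), one per center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Toy data: recover a smooth 1-D function (stand-in for a spectra -> starch mapping).
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x)
centers = np.linspace(0.0, 1.0, 10)

Phi = rbf_features(x, centers, width=0.15)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear readout weights
y_hat = Phi @ w                                # network prediction
```

In the paper's pipeline, correlation-coefficient screening and PLSR would first reduce the spectral inputs before a network like this is trained.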

  10. Authorization basis requirements comparison report

    Energy Technology Data Exchange (ETDEWEB)

    Brantley, W.M.

    1997-08-18

    The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.

  11. Authorization basis requirements comparison report

    International Nuclear Information System (INIS)

    Brantley, W.M.

    1997-01-01

    The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation

  12. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
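
    The core matching step, comparing binary-quantized descriptors by Hamming distance, can be sketched with integer XOR and popcount; the descriptor values below are made up for illustration:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed into integers:
    XOR the words, then count the set bits."""
    return bin(a ^ b).count("1")

def best_match(query: int, database: list) -> int:
    """Index of the database descriptor with the smallest Hamming distance."""
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))

# Hypothetical 8-bit quantized descriptors.
db = [0b10110010, 0b01101100, 0b10110011]
idx = best_match(0b10110110, db)   # nearest descriptor in the toy database
```

In hardware, the same comparison maps to an XOR gate array followed by a popcount, which is why no floating-point support is needed.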

  13. Optimal Load-Tracking Operation of Grid-Connected Solid Oxide Fuel Cells through Set Point Scheduling and Combined L1-MPC Control

    Directory of Open Access Journals (Sweden)

    Siwei Han

    2018-03-01

    An optimal load-tracking operation strategy for a grid-connected tubular solid oxide fuel cell (SOFC) is studied based on the steady-state analysis of the system thermodynamics and electrochemistry. Control of the SOFC is achieved by a two-level hierarchical control system. In the upper level, optimal set points of the output voltage and current corresponding to unit load demand are obtained through a nonlinear optimization that minimizes the SOFC’s internal power waste. In the lower level, a combined L1-MPC control strategy is designed to achieve fast set point tracking under system nonlinearities, while maintaining a constant fuel utilization factor. To prevent fuel starvation during the transient state resulting from output power surges, a fuel flow constraint is imposed on the MPC with direct electron balance calculation. The proposed control schemes are tested on the grid-connected SOFC model.

  14. Assessment of WWER fuel condition in design basis accident

    International Nuclear Information System (INIS)

    Bibilashvili, Yu.; Sokolov, N.; Andreeva-Andrievskaya, L.; Vlasov, Yu.; Nechaeva, O.; Salatov, A.

    1994-01-01

    The fuel behaviour in design basis accidents is assessed by means of the verified code RAPTA-5. The code uses a set of high temperature physico-chemical properties of the fuel components as determined for commercially produced materials, fuel rod simulators and fuel rod bundles. The WWER fuel criteria available in Russia for design basis accidents do not generally differ from the similar criteria adopted for PWR's. 12 figs., 11 refs

  15. Multiple-scattering theory with a truncated basis set

    International Nuclear Information System (INIS)

    Zhang, X.; Butler, W.H.

    1992-01-01

    Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum, l_max, for which the higher angular momentum phase shifts, δ_l (l > l_max), are sufficiently small. Generally, the wave-function coefficients which are calculated from the secular equation are also truncated at l_max. Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentials

  16. Optimization of Wind Turbine Airfoil Using Nondominated Sorting Genetic Algorithm and Pareto Optimal Front

    Directory of Open Access Journals (Sweden)

    Ziaul Huque

    2012-01-01

    A Computational Fluid Dynamics (CFD) and response surface-based multiobjective design optimization were performed for six different 2D airfoil profiles, and the Pareto optimal front of each airfoil is presented. FLUENT, a commercial CFD simulation code, was used to determine the relevant aerodynamic loads. The Lift Coefficient (CL) and Drag Coefficient (CD) data at a range of 0° to 12° angles of attack (α) and at three different Reynolds numbers (Re = 68,459; 479,210; and 958,422) for all six airfoils were obtained. The realizable k-ε turbulence model with a second-order upwind solution method was used in the simulations. The standard least square method was used to generate the response surface with the statistical code JMP. The elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) was used to determine the Pareto optimal set based on the response surfaces. Each Pareto optimal solution represents a different compromise between design objectives. This gives the designer a choice to select a design compromise that best suits the requirements from a set of optimal solutions. The Pareto solution set is presented in the form of a Pareto optimal front.
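
    The Pareto optimal front named in this record is simply the non-dominated subset of the candidate designs. A minimal sketch, with hypothetical lift/drag values and both objectives cast as minimization (lift is negated):

```python
def pareto_front(points):
    """Return the non-dominated subset for a minimization problem.
    A point q dominates p if q is <= p in every objective and < in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(o <= v for o, v in zip(q, p)) and any(o < v for o, v in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Objectives per design: (drag coefficient CD, negative lift -CL), both minimized.
# Values are invented for illustration, not taken from the paper.
designs = [(0.02, -1.1), (0.03, -1.3), (0.025, -1.0), (0.04, -1.35)]
front = pareto_front(designs)
```

Each surviving point is a distinct lift/drag compromise; the third design is dominated (more drag and less lift than the first) and drops out.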

  17. Optimization of parameters of heat exchangers vehicles

    Directory of Open Access Journals (Sweden)

    Andrei MELEKHIN

    2014-09-01

    The relevance of this topic stems from the problem of resource economy in the heating systems of vehicles. To solve it, we developed an integrated research method that makes it possible to optimize the parameters of vehicle heat exchangers. The method solves a multicriteria optimization problem with nonlinear optimization software, using an array of temperatures obtained by thermography as input. The authors developed a mathematical model of the heat exchange process on the heat-exchange surfaces of the apparatus, solved the multicriteria optimization problem, and verified the model's adequacy on an experimental stand with visualization of thermal fields. The work establishes an optimal range of controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger, identifies regularities of the heat exchange process with generalized dependencies for the temperature distribution on the heat-release surface of vehicle heat exchangers, and demonstrates convergence between results calculated from the theoretical dependencies and those of the mathematical model.

  18. Optimal perturbations for nonlinear systems using graph-based optimal transport

    Science.gov (United States)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
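
    For two equal-size uniform discrete measures, the Monge-Kantorovich problem with quadratic cost reduces to finding the cost-minimizing assignment of source points to target points. A brute-force sketch of that reduction (a stand-in for the convex graph-based solver used in the paper, feasible only for tiny n):

```python
import itertools

def optimal_transport_map(sources, targets):
    """Brute-force Monge map between two equal-size uniform discrete measures
    on the line: the permutation of targets minimizing total quadratic cost.
    Exponential in n; illustrative only."""
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(targets))):
        cost = sum((sources[i] - targets[j]) ** 2 for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

perm, cost = optimal_transport_map([0.0, 1.0, 2.0], [2.1, 0.1, 1.1])
```

With quadratic cost on the line the optimal map is monotone, so each source is matched to the nearest target in sorted order.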

  19. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    International Nuclear Information System (INIS)

    Deufel, Christopher L; Furutani, Keith M

    2014-01-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions. (paper)
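
    The idea of a simple algebraic optimization as a cross-check can be sketched as an exact least-squares solve for dwell times given a dose-rate matrix. The matrix and prescription below are invented for illustration and are not the paper's variance-based algorithm:

```python
import numpy as np

# Rows: dose calculation points; columns: dwell positions. Entries are
# hypothetical dose-rate contributions per unit dwell time.
A = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.2, 1.0]])
prescribed = np.array([2.0, 3.0, 2.5])   # target dose at each point

# Exact algebraic solution for the dwell times, then a consistency check
# that the recovered times reproduce the prescription.
t, *_ = np.linalg.lstsq(A, prescribed, rcond=None)
delivered = A @ t
```

Comparing a commercial optimizer's dwell-time output against such a transparent solve is the kind of sanity check the paper advocates.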

  20. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  1. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  2. The Biological Basis of Learning and Individuality.

    Science.gov (United States)

    Kandel, Eric R.; Hawkins, Robert D.

    1992-01-01

    Describes the biological basis of learning and individuality. Presents an overview of recent discoveries that suggest learning engages a simple set of rules that modify the strength of connection between neurons in the brain. The changes are cited as playing an important role in making each individual unique. (MCO)

  3. PGHPF – An Optimizing High Performance Fortran Compiler for Distributed Memory Machines

    Directory of Open Access Journals (Sweden)

    Zeki Bozkus

    1997-01-01

    High Performance Fortran (HPF is the first widely supported, efficient, and portable parallel programming language for shared and distributed memory systems. HPF is realized through a set of directive-based extensions to Fortran 90. It enables application developers and Fortran end-users to write compact, portable, and efficient software that will compile and execute on workstations, shared memory servers, clusters, traditional supercomputers, or massively parallel processors. This article describes a production-quality HPF compiler for a set of parallel machines. Compilation techniques such as data and computation distribution, communication generation, run-time support, and optimization issues are elaborated as the basis for an HPF compiler implementation on distributed memory machines. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved executing HPF code on parallel architectures.

  4. Designing an artificial neural network using radial basis function to model exergetic efficiency of nanofluids in mini double pipe heat exchanger

    Science.gov (United States)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-06-01

    The present study aims at predicting and optimizing the exergetic efficiency of TiO2-Al2O3/water nanofluid at different Reynolds numbers, volume fractions and twist ratios using Artificial Neural Networks (ANN) and experimental data. Central Composite Design (CCD) and a cascade Radial Basis Function (RBF) network were used to assess the significance of the analyzed factors for the exergetic efficiency. The size of the TiO2-Al2O3/water nanocomposite was 20-70 nm. The parameters of the ANN model were adapted by a radial basis function (RBF) training algorithm using a wide range of experimental data. Total mean square error and the correlation coefficient were used to evaluate the results; the best result was obtained from a double-layer perceptron neural network with 30 neurons, for which the total Mean Square Error (MSE) and correlation coefficient (R^2) were 0.002 and 0.999, respectively. This indicates successful prediction by the network. Moreover, the proposed equation for predicting exergetic efficiency was highly accurate. According to the optimal curves, the optimum design parameters of a double pipe heat exchanger with inner twisted tape and nanofluid, under the constraint of an exergetic efficiency of 0.937, are found to be a Reynolds number of 2500, a twist ratio of 2.5 and a volume fraction (v/v%) of 0.05.

  5. Basis expansion model for channel estimation in LTE-R communication system

    Directory of Open Access Journals (Sweden)

    Ling Deng

    2016-05-01

    This paper investigates fast time-varying channel estimation in LTE-R communication systems. The Basis Expansion Model (BEM) is adopted to fit the fast time-varying channel in a high-speed railway communication scenario. The channel impulse response is modeled as the sum of basis functions multiplied by different coefficients. The optimal coefficients are obtained by theoretical analysis. Simulation results show that a Generalized Complex-Exponential BEM (GCE-BEM) outperforms a Complex-Exponential BEM (CE-BEM) and a polynomial BEM in terms of Mean Squared Error (MSE). Besides, the MSE of the CE-BEM decreases gradually as the number of basis functions increases. The GCE-BEM has a satisfactory performance even under severe channel fading.
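
    The BEM fit itself, expressing a time-varying channel tap as a linear combination of basis functions with least-squares coefficients, can be sketched as follows. The basis uses the common CE-BEM form; block sizes and coefficient values are arbitrary illustrations:

```python
import numpy as np

def cebem_basis(n_samples, n_basis):
    """Complex-exponential BEM basis: columns b_q[n] = exp(j*2*pi*q*n/N)
    for q = -Q//2 .. Q//2 over a block of N samples."""
    n = np.arange(n_samples)
    qs = np.arange(n_basis) - n_basis // 2
    return np.exp(1j * 2 * np.pi * np.outer(n, qs) / n_samples)

N, Q = 64, 3
B = cebem_basis(N, Q)
true_coeffs = np.array([0.5, 1.0 + 0.2j, -0.3j])
tap = B @ true_coeffs            # a time-varying channel tap lying in the BEM span

# Least-squares BEM coefficients; estimating Q numbers replaces estimating N samples.
coeffs, *_ = np.linalg.lstsq(B, tap, rcond=None)
```

When the true tap lies outside the span of the chosen basis, the residual of this fit is exactly the modeling error the MSE comparisons in the paper measure.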

  6. Preventive maintenance basis: Volume 21 -- HVAC, air handling equipment. Final report

    International Nuclear Information System (INIS)

    Worledge, D.; Hinchcliffe, G.

    1997-12-01

    US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. Volume 21 of the report provides a program of PM tasks suitable for application to HVAC-Air Handling Equipment. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs

  7. Preventive maintenance basis: Volume 15 -- Rotary screw air compressors. Final report

    International Nuclear Information System (INIS)

    Worledge, D.; Hinchcliffe, G.

    1997-07-01

    US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. Volume 15 of the report provides a program of PM tasks suitable for application to rotary screw air compressors in nuclear power plants. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs

  8. Assessment of WWER fuel condition in design basis accident

    Energy Technology Data Exchange (ETDEWEB)

    Bibilashvili, Yu; Sokolov, N; Andreeva-Andrievskaya, L; Vlasov, Yu; Nechaeva, O; Salatov, A [Vsesoyuznyj Nauchno-Issledovatel'skij Inst. Neorganicheskikh Materialov, Moscow (Russian Federation)]

    1994-12-31

    The fuel behaviour in design basis accidents is assessed by means of the verified code RAPTA-5. The code uses a set of high temperature physico-chemical properties of the fuel components as determined for commercially produced materials, fuel rod simulators and fuel rod bundles. The WWER fuel criteria available in Russia for design basis accidents do not generally differ from the similar criteria adopted for PWRs. 12 figs., 11 refs.

  9. AFRICAN BUFFALO OPTIMIZATION ico-pdf

    Directory of Open Access Journals (Sweden)

    Julius Beneoluchi Odili

    2016-02-01

    This is an introductory paper on the newly designed African Buffalo Optimization (ABO) algorithm for solving combinatorial and other optimization problems. The algorithm is inspired by the behavior of African buffalos, a species of wild cows known for their extensive migrant lifestyle. This paper presents an overview of major metaheuristic algorithms with the aim of providing a basis for the development of the African Buffalo Optimization algorithm, which is a nature-inspired, population-based metaheuristic algorithm. Experimental results obtained from applying the novel ABO to a number of benchmark global optimization test functions, as well as some symmetric and asymmetric Traveling Salesman Problems, show that, compared with other popular optimization methods, the African Buffalo Optimization is a worthy addition to the growing number of swarm intelligence optimization techniques.

  10. Changing pulse-shape basis for molecular learning control

    International Nuclear Information System (INIS)

    Cardoza, David; Langhojer, Florian; Trallero-Herrero, Carlos; Weinacht, Thomas; Monti, Oliver L.A.

    2004-01-01

    We interpret the results of a molecular fragmentation learning control experiment. We show that in the case of a system where control can be related to the structure of the optimal pulse matching the vibrational dynamics of the molecule, a simple change of pulse-shape basis in which the learning algorithm performs the search can reduce the dimensionality of the search space to one or two degrees of freedom

  11. Symmetry Adapted Basis Sets

    DEFF Research Database (Denmark)

    Avery, John Scales; Rettrup, Sten; Avery, James Emil

    automatically with computer techniques. The method has a wide range of applicability, and can be used to solve difficult eigenvalue problems in a number of fields. The book is of special interest to quantum theorists, computer scientists, computational chemists and applied mathematicians....

  12. Accelerating ROP detector layout optimization

    International Nuclear Information System (INIS)

    Kastanya, D.; Fodor, B.

    2012-01-01

    The ADORE (Alternating Detector layout Optimization for REgional overpower protection system) algorithm for optimizing the regional overpower protection (ROP) system of CANDU® reactors has recently been developed. The simulated annealing (SA) stochastic optimization technique is utilized to produce a quasi-optimized detector layout for the ROP systems. Within each simulated annealing history, the objective function is calculated as a function of the trip set point (TSP) corresponding to the detector layout for that particular history. The evaluation of the TSP is done probabilistically using the ROVER-F code. Since thousands of candidate detector layouts are evaluated during each optimization execution, the overall optimization process is time consuming. Since the number of fuelling ripples controls the execution time of each ROVER-F evaluation, reducing the number of fuelling ripples used during the calculation of the TSP will reduce the overall optimization execution time. This approach has been investigated and the results are presented in this paper. The challenge is to construct a set of representative fuelling ripples which will significantly speed up the optimization process while guaranteeing that the resulting detector layout has similar quality to those produced when the complete set of fuelling ripples is employed. Results presented in this paper indicate that a speedup of up to around 40 times is attainable with this approach. (author)
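    As a minimal illustration of the simulated-annealing loop this abstract describes — propose a perturbed candidate layout, evaluate an objective, and accept worse candidates with a temperature-dependent probability — the following sketch uses a toy coverage objective in place of the probabilistic ROVER-F evaluation of the trip set point. All names and parameter values here are illustrative assumptions, not part of ADORE:

```python
import math
import random

def simulated_annealing(objective, initial, neighbor, t0=1.0, cooling=0.95, steps=2000, seed=42):
    """Generic simulated annealing: worse candidates are accepted with
    probability exp(-delta/T), so the search can escape local minima."""
    rng = random.Random(seed)
    current = initial
    f_cur = objective(current)
    best, f_best = current, f_cur
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        f_cand = objective(cand)
        delta = f_cand - f_cur
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
        t *= cooling  # geometric cooling schedule
    return best, f_best

# Toy stand-in objective: place 3 "detectors" on a line to cover positions 0, 5, 10.
targets = [0.0, 5.0, 10.0]

def coverage_cost(layout):
    # each target is penalized by the distance to its nearest detector
    return sum(min(abs(p - t) for p in layout) for t in targets)

def move_one(layout, rng):
    # perturb one randomly chosen detector position
    i = rng.randrange(len(layout))
    new = list(layout)
    new[i] += rng.uniform(-1.0, 1.0)
    return new

best, cost = simulated_annealing(coverage_cost, [2.0, 4.0, 6.0], move_one)
```

Because the acceptance rule depends on the temperature, early iterations explore broadly while late iterations behave like greedy descent, which mirrors how ADORE narrows in on a quasi-optimized layout.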

  13. Topology optimization of microwave waveguide filters

    DEFF Research Database (Denmark)

    Aage, Niels; Johansen, Villads Egede

    2017-01-01

    We present a density based topology optimization approach for the design of metallic microwave insert filters. A two-phase optimization procedure is proposed in which we, starting from a uniform design, first optimize to obtain a set of spectral varying resonators followed by a band gap optimization... The resulting designs show little resemblance to standard filter layouts and hence the proposed design method offers a new design tool in microwave engineering.

  14. Migration and the Wage-Settings Curve

    DEFF Research Database (Denmark)

    Brücker, Herbert; Jahn, Elke

    Germany on the basis of a wage-setting curve. The wage-setting curve relies on the assumption that wages respond to a change in the unemployment rate, albeit imperfectly. This allows one to derive the wage and employment effects of migration simultaneously in a general equilibrium framework. Using...

  15. Abstract sets and finite ordinals an introduction to the study of set theory

    CERN Document Server

    Keene, G B

    2007-01-01

    This text unites the logical and philosophical aspects of set theory in a manner intelligible both to mathematicians without training in formal logic and to logicians without a mathematical background. It combines an elementary level of treatment with the highest possible degree of logical rigor and precision.Starting with an explanation of all the basic logical terms and related operations, the text progresses through a stage-by-stage elaboration that proves the fundamental theorems of finite sets. It focuses on the Bernays theory of finite classes and finite sets, exploring the system's basi

  16. Assessment of electricity demand-supply in health facilities in resource-constrained settings : optimization and evaluation of energy systems for a case in Rwanda

    NARCIS (Netherlands)

    Palacios, S.G.

    2015-01-01

    In health facilities in resource-constrained settings, a lack of access to sustainable and reliable electricity can result in sub-optimal delivery of healthcare services, as the facilities do not have lighting for medical procedures or power to run essential equipment and devices to treat their patients.

  17. Ant colony search algorithm for optimal reactive power optimization

    Directory of Open Access Journals (Sweden)

    Lenin K.

    2006-01-01

    Full Text Available The paper presents an Ant Colony Search Algorithm (ACSA) for optimal reactive power optimization and voltage control of power systems. ACSA is a new co-operative agents' approach inspired by the observation of the behavior of real ant colonies, in particular ant trail formation and foraging methods. Hence, in the ACSA a set of co-operative agents called "ants" co-operates to find good solutions to the reactive power optimization problem. The ACSA applied to optimal reactive power optimization is evaluated on the standard IEEE 30-, 57- and 191-bus (practical) test systems. The proposed approach is tested and compared to the genetic algorithm (GA) and the adaptive genetic algorithm (AGA).
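    The ant-colony mechanics sketched in this abstract — co-operating agents building solutions biased by pheromone trails, with evaporation and reinforcement — can be illustrated on a small symmetric TSP instance. This is a generic Ant System sketch, not the power-system ACSA; all parameter values are illustrative assumptions:

```python
import math
import random

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, seed=1):
    """Minimal Ant System: ants build tours biased by pheromone (tau) and
    inverse distance (eta); trails evaporate and are reinforced by tour quality."""
    n = len(dist)
    rng = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # transition weights: tau^alpha * (1/d)^beta (roulette-wheel choice)
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                r = rng.random() * sum(w for _, w in weights)
                for j, w in weights:
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then pheromone deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# 4 cities on a unit square; the optimal tour is the perimeter, length 4.
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in pts] for (px, py) in pts]
best_tour, best_len = aco_tsp(dist)
```

The same construct-evaporate-reinforce cycle underlies ACSA's search, with the reactive-power objective playing the role of tour length.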

  18. Closed fringe demodulation using phase decomposition by Fourier basis functions.

    Science.gov (United States)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2016-06-01

    We report a new technique for the demodulation of a closed fringe pattern by representing the phase as a weighted linear combination of a certain number of linearly independent Fourier basis functions in a given row/column at a time. A state space model is developed with the weights of the basis functions as the elements of the state vector. The iterative extended Kalman filter is effectively utilized for the robust estimation of the weights. A coarse estimate of the fringe density based on the fringe frequency map is used to determine the initial row/column to start with and subsequently the optimal number of basis functions. The performance of the proposed method is evaluated with several noisy fringe patterns. Experimental results are also reported to support the practical applicability of the proposed method.
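    The core representation in this method — the phase in a given row expressed as a weighted linear combination of Fourier basis functions — can be illustrated with a simplified sketch that estimates the weights by ordinary least squares on noise-free synthetic data. The paper itself uses an iterative extended Kalman filter for robust estimation; the setup below is a hypothetical stand-in for the linear-combination idea only:

```python
import math

def fourier_basis(x, n_terms):
    """Basis functions [1, cos(x), sin(x), cos(2x), sin(2x), ...]."""
    phi = [1.0]
    for k in range(1, n_terms + 1):
        phi += [math.cos(k * x), math.sin(k * x)]
    return phi

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def fit_phase(xs, phases, n_terms=2):
    """Least-squares weights w minimizing sum (phi(x).w - phase)^2
    via the normal equations (Phi^T Phi) w = Phi^T y."""
    phi_rows = [fourier_basis(x, n_terms) for x in xs]
    p = len(phi_rows[0])
    ata = [[sum(r[i] * r[j] for r in phi_rows) for j in range(p)] for i in range(p)]
    atb = [sum(r[i] * y for r, y in zip(phi_rows, phases)) for i in range(p)]
    return solve(ata, atb)

# Synthetic row phase: 0.5 + 2*cos(x) - sin(2x), sampled over one period
xs = [i * 2 * math.pi / 64 for i in range(64)]
truth = [0.5 + 2 * math.cos(x) - math.sin(2 * x) for x in xs]
w = fit_phase(xs, truth, n_terms=2)
```

On noisy fringe data a plain least-squares fit degrades, which is the motivation for the iterative Kalman-filter estimation used in the paper.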

  19. Application of an expert system to optimize reservoir performance

    International Nuclear Information System (INIS)

    Gharbi, Ridha

    2005-01-01

    The main challenge of oil displacement by an injected fluid, such as in Enhanced Oil Recovery (EOR) processes, is to reduce the cost and improve reservoir performance. An optimization methodology, combined with an economic model, is implemented into an expert system to optimize the net present value of full field development with an EOR process. The approach is automated and combines an economic package and existing numerical reservoir simulators to optimize the design of a selected EOR process using sensitivity analysis. The EOR expert system includes three stages of consultation: (1) select an appropriate EOR process on the basis of the reservoir characteristics, (2) prepare appropriate input data sets to design the selected EOR process using existing numerical simulators, and (3) apply the discounted-cash-flow methods to the optimization of the selected EOR process to find out under what conditions, at current oil prices, this EOR process might be profitable. The project profitability measures were used as the decision-making variables in an iterative approach to optimize the design of the EOR process. The economic analysis is based on the estimated recovery, residual oil in place, oil price, and operating costs. Two case studies are presented for two reservoirs that have already been produced to their economic limits and are potential candidates for surfactant/polymer flooding and carbon-dioxide flooding, respectively, or are otherwise subject to abandonment. The effect of several design parameters on the project profitability of these EOR processes was investigated.

  20. Optimization of Ventilation and Alarm Setting During the Process of Ammonia Leak in Refrigeration Machinery Room Based on Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Dongliang Liu

    2017-03-01

    Full Text Available In order to optimize the ventilation effect during ammonia leakage in a refrigeration machinery room, a food processing enterprise is selected as the subject of investigation. The velocity and concentration field distributions during the process of ammonia leakage are discussed through simulation of the refrigeration machinery room using CFD software. The ventilation system of the room is optimized in three respects: air distribution, ventilation volume and discharge outlet. The influence of ventilation on the ammonia alarm system is also analyzed. The results show that it is better to set the discharge outlet at the top of the plant than at the side of the wall, and that the smaller the distance between the air outlet and the ammonia gathering area, the better the ventilation effect will be. The air flow can be improved and the vortex flow can be reduced if the ventilation volume, the number of air vents and the exhaust velocity are reasonably arranged. Not only can the function of the alarm be ensured, but the scope of the detection area can also be enlarged if the detectors are set on the ceiling above the refrigeration units or the ammonia storage vessel.

  1. Optimal ordering quantities for substitutable deteriorating items under joint replenishment with cost of substitution

    Science.gov (United States)

    Mishra, Vinod Kumar

    2017-09-01

    In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model the inventory levels of both items are depleted by demand and deterioration, and when an item is out of stock, its demand is partially fulfilled by the other item; all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and demand and deterioration are considered deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of different parameters on the model. The key observation on the basis of the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over that without substitution.
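    As a hedged illustration of determining joint ordering quantities numerically, the toy model below minimizes a classic shared-cycle cost (one joint ordering cost per cycle plus holding costs) by grid search and checks it against the closed-form optimum. Substitution and deterioration, which are central to the paper's model, are deliberately omitted, and all parameter values are hypothetical:

```python
import math

def joint_cost(t, demands, holds, k=50.0):
    """Cost rate for a shared replenishment cycle of length t:
    ordering cost k once per cycle plus average holding cost,
    with order quantities Q_i = d_i * t."""
    return k / t + (t / 2.0) * sum(h * d for h, d in zip(holds, demands))

demands, holds = [100.0, 80.0], [0.5, 0.4]  # hypothetical demand/holding rates

# grid search for the cost-minimizing joint cycle length
grid = [0.01 * i for i in range(1, 500)]
t_best = min(grid, key=lambda t: joint_cost(t, demands, holds))
q_best = [d * t_best for d in demands]  # joint order quantities

# analytic optimum for comparison: T* = sqrt(2K / sum(h_i * d_i))
t_star = math.sqrt(2 * 50.0 / sum(h * d for h, d in zip(holds, demands)))
```

The paper's solution procedure plays the role of this grid search for the richer cost function that includes deterioration and the cost of substitution.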

  2. Simulation of the transient processes of load rejection under different accident conditions in a hydroelectric generating set

    Science.gov (United States)

    Guo, W. C.; Yang, J. D.; Chen, J. P.; Peng, Z. Y.; Zhang, Y.; Chen, C. C.

    2016-11-01

    The load rejection test is one of the essential tests carried out before a hydroelectric generating set is formally put into operation. The test aims at inspecting the rationality of the design of the water diversion and power generation system of the hydropower station, the reliability of the equipment of the generating set, and the dynamic characteristics of the hydro-turbine governing system. Proceeding from different accident conditions of the hydroelectric generating set, this paper presents the transient processes of load rejection corresponding to different accident conditions and elaborates the characteristics of the different types of load rejection. A numerical simulation method for the different types of load rejection is then established, and an engineering project is calculated to verify the validity of the method. Finally, based on the numerical simulation results, the relationships among the different types of load rejection and their roles in the design of the hydropower station and the operation of the load rejection test are pointed out. The results indicate that load rejection caused by an accident within the hydroelectric generating set is realized by the emergency distributing valve, and it is the basis for the optimization of the closing law of the guide vane and the calculation of regulation and guarantee. Load rejection caused by an accident outside the hydroelectric generating set is realized by the governor. It is the most efficient measure to inspect the dynamic characteristics of the hydro-turbine governing system, and the closure rate of the guide vane set in the governor depends on the optimization result for the former type of load rejection.

  3. The Incompatibility of Pareto Optimality and Dominant-Strategy Incentive Compatibility in Sufficiently-Anonymous Budget-Constrained Quasilinear Settings

    Directory of Open Access Journals (Sweden)

    Rica Gonen

    2013-11-01

    Full Text Available We analyze the space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal combinatorial auctions. We examine a model with multidimensional types, nonidentical items, private values and quasilinear preferences for the players, with one relaxation: the players are subject to publicly-known budget constraints. We show that the space includes dictatorial mechanisms and that if dictatorial mechanisms are ruled out by a natural anonymity property, then an impossibility of design is revealed. The same impossibility naturally extends to other abstract mechanisms with an arbitrary outcome set if one maintains the original assumptions of players with quasilinear utilities, public budgets and nonnegative prices.

  4. Optimized positioning of autonomous surgical lamps

    Science.gov (United States)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions for surgical lamps throughout the whole surgical procedure, on the assumption that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of such robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, partly conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data is available for use for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
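    Selecting a solution from the Pareto set presupposes computing the non-dominated set in the first place. A minimal filter for a finite candidate set, minimizing every objective, might look as follows; the candidate values are hypothetical, not from the surgical-lamp system:

```python
def pareto_front(points):
    """Return the non-dominated subset, minimizing every objective.
    p dominates q if p <= q in all objectives and p < q in at least one."""
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical objectives: (lamp movement, lighting obstruction), both minimized
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(candidates)
```

Here (3.0, 4.0) is dominated by (2.0, 3.0) and is filtered out; picking one point from the remaining front is exactly the preference-weighting step the meta-optimization automates.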

  5. An objective method to optimize the MR sequence set for plaque classification in carotid vessel wall images using automated image segmentation.

    Directory of Open Access Journals (Sweden)

    Ronald van 't Klooster

    Full Text Available A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance, possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions.

  6. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, if there are no reference sound samples, participants usually rate the annoyance of each noise sample according to its relative annoyance among all samples in the experimental sample set, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and the loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected for listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of the linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased obviously after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.
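    The calibration idea — fit a standard curve between the loudness level of the pink-noise references and logarithmic mean annoyance, then map ratings from any sample set onto that curve — can be sketched with an ordinary least-squares line fit and its R². The reference values below are hypothetical, not the study's data:

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a*x + b, plus the determination coefficient R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# hypothetical reference data: pink-noise loudness level vs log mean annoyance
loudness = [60.0, 70.0, 80.0, 90.0]
log_annoyance = [0.30, 0.48, 0.70, 0.90]
a, b, r2 = linear_fit(loudness, log_annoyance)

# use the standard curve to place a rating from another sample set on a common scale
predicted = a * 75.0 + b
```

Because the same pink-noise references appear in every sample set, this curve provides the common scale that makes ratings from different experiments comparable.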

  7. Symmetry-Adapted Ro-vibrational Basis Functions for Variational Nuclear Motion Calculations: TROVE Approach.

    Science.gov (United States)

    Yurchenko, Sergei N; Yachmenev, Andrey; Ovsyannikov, Roman I

    2017-09-12

    We present a general, numerically motivated approach to the construction of symmetry-adapted basis functions for solving ro-vibrational Schrödinger equations. The approach is based on the property of the Hamiltonian operator to commute with the complete set of symmetry operators and, hence, to reflect the symmetry of the system. The symmetry-adapted ro-vibrational basis set is constructed numerically by solving a set of reduced vibrational eigenvalue problems. In order to assign the irreducible representations associated with these eigenfunctions, their symmetry properties are probed on a grid of molecular geometries with the corresponding symmetry operations. The transformation matrices are reconstructed by solving overdetermined systems of linear equations related to the transformation properties of the corresponding wave functions on the grid. Our method is implemented in the variational approach TROVE and has been successfully applied to many problems covering the most important molecular symmetry groups. Several examples are used to illustrate the procedure, which can be easily applied to different types of coordinates, basis sets, and molecular systems.

  8. Optimization of nonlinear wave function parameters

    International Nuclear Information System (INIS)

    Shepard, R.; Minkoff, M.; Chemistry

    2006-01-01

    An energy-based optimization method is presented for our recently developed nonlinear wave function expansion form for electronic wave functions. This expansion form is based on spin eigenfunctions, using the graphical unitary group approach (GUGA). The wave function is expanded in a basis of product functions, allowing application to closed-shell and open-shell systems and to ground and excited electronic states. Each product basis function is itself a multiconfigurational function that depends on a relatively small number of nonlinear parameters called arc factors. The energy-based optimization is formulated in terms of analytic arc factor gradients and orbital-level Hamiltonian matrices that correspond to a specific kind of uncontraction of each of the product basis functions. These orbital-level Hamiltonian matrices give an intuitive representation of the energy in terms of disjoint subsets of the arc factors, they provide for an efficient computation of gradients of the energy with respect to the arc factors, and they allow optimal arc factors to be determined in closed form for subspaces of the full variation problem. Timings for energy and arc factor gradient computations involving expansion spaces of > 10^24 configuration state functions are reported. Preliminary convergence studies and molecular dissociation curves are presented for some small molecules.

  9. EABOT - Energetic analysis as a basis for robust optimization of trigeneration systems by linear programming

    International Nuclear Information System (INIS)

    Piacentino, A.; Cardona, F.

    2008-01-01

    The optimization of synthesis, design and operation in trigeneration systems for building applications is a quite complex task, due to the high number of decision variables, the presence of irregular heat, cooling and electric load profiles and the variable electricity price. Consequently, computer-aided techniques are usually adopted to achieve the optimal solution, based either on iterative techniques, linear or non-linear programming or evolutionary search. Large efforts have been made in improving algorithm efficiency, which have resulted in an increasingly rapid convergence to the optimal solution and in reduced calculation times; robust algorithms have also been formulated, assuming stochastic behaviour of energy loads and prices. This paper is based on the assumption that margins for improvement in the optimization of trigeneration systems still exist, and that exploiting them requires an in-depth understanding of the plant's energetic behaviour. Robustness in the optimization of trigeneration systems has more to do with a 'correct and comprehensive' than with an 'efficient' modelling, larger efforts being required from energy specialists than from experts in efficient algorithms. With reference to a mixed integer linear programming model implemented in MatLab for a trigeneration system including a pressurized (medium temperature) heat storage, the relevant contributions of thermoeconomics and energo-environmental analysis in the phases of mathematical modelling and code testing are shown.

  10. The task of multi-criteria optimization of metal frame structures

    Directory of Open Access Journals (Sweden)

    Alpatov Vadim

    2017-01-01

    Full Text Available Optimal design of a frame structure with a specified geometric scheme consists in finding the control parameters that provide the highest or lowest value of composite functions representing certain quality criteria. The search for optimal parameters is subject to a number of design and calculation constraints. When it is necessary to vary the geometrical scheme, node coordinates are also considered as unknown varied parameters that affect the quality criteria. When designing frames with a specified scheme, the volume of material is typically the primary criterion for solving the optimization task and is written as a function of the control parameters and state variables. In the problem specification it is also important to reduce the deformation of the system. This is accomplished by introducing an additional criterion: maximizing the moments of inertia of the sections of the system. A two-phase design and calculation model exists in design practice. In the first phase, the work is based on experience or an existing prototype; on this basis the stiffnesses of the bars are assigned and a load vector is calculated. In the second phase, the sections are chosen according to the known forces.

  11. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework I: Total energy calculation

    International Nuclear Information System (INIS)

    Lin Lin; Lu Jianfeng; Ying Lexing; Weinan, E

    2012-01-01

    Kohn–Sham density functional theory is one of the most widely used electronic structure theories. In the pseudopotential framework, uniform discretization of the Kohn–Sham Hamiltonian generally results in a large number of basis functions per atom in order to resolve the rapid oscillations of the Kohn–Sham orbitals around the nuclei. Previous attempts to reduce the number of basis functions per atom include the usage of atomic orbitals and similar objects, but the atomic orbitals generally require fine tuning in order to reach high accuracy. We present a novel discretization scheme that adaptively and systematically builds the rapid oscillations of the Kohn–Sham orbitals around the nuclei as well as environmental effects into the basis functions. The resulting basis functions are localized in the real space, and are discontinuous in the global domain. The continuous Kohn–Sham orbitals and the electron density are evaluated from the discontinuous basis functions using the discontinuous Galerkin (DG) framework. Our method is implemented in parallel and the current implementation is able to handle systems with at least thousands of atoms. Numerical examples indicate that our method can reach very high accuracy (less than 1 meV) with a very small number (4–40) of basis functions per atom.

  12. Optimal control of evaporator and washer plants

    International Nuclear Information System (INIS)

    Niemi, A.J.

    1989-01-01

    Tests with radioactive tracers were used for experimental analysis of a multiple-effect evaporator plant. The residence time distribution of the liquor in each evaporator was described by one or two perfect mixers with time delay and by-pass flow terms. The theoretical model of a single evaporator unit was set up on the basis of its instantaneous heat and mass balances and such models were fitted to the test data. The results were interpreted in terms of physical structures of the evaporators. Further model parameters were evaluated by conventional step tests and by measurements of process variables at one or more steady states. Computer simulation and comparison with the experimental results showed that the model produces a satisfactory response to solids concentration input and could be extended to cover the steam feed and liquor flow inputs. An optimal feedforward control algorithm was developed for a two unit, co-current evaporator plant. The control criterion comprised the deviations of the final solids content of liquor and the consumption of fresh steam, from their optimal steady-state values. In order to apply the algorithm, the model of the solids in liquor was reduced to two nonlinear differential equations. (author)

  13. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

    A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, an acronym for Human Error Rate Assessment and Optimizing System, are based on fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and modify basic human error probabilities from the associated databases. This implies a large element of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals, which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant.
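    The key device in HEROS — encoding vague linguistic statements as fuzzy numbers on which mathematical operations can be performed — can be illustrated with triangular fuzzy numbers. The factors and values below are invented for illustration and are not HEROS's actual rules or data:

```python
class TriFuzzy:
    """Triangular fuzzy number (low, mode, high): addition operates
    pointwise on the three defining points."""
    def __init__(self, low, mode, high):
        assert low <= mode <= high
        self.low, self.mode, self.high = low, mode, high

    def __add__(self, other):
        return TriFuzzy(self.low + other.low,
                        self.mode + other.mode,
                        self.high + other.high)

    def membership(self, x):
        """Degree (0..1) to which x belongs to the fuzzy number."""
        if x < self.low or x > self.high:
            return 0.0
        if x <= self.mode:
            return 1.0 if self.mode == self.low else (x - self.low) / (self.mode - self.low)
        return 1.0 if self.high == self.mode else (self.high - x) / (self.high - self.mode)

# hypothetical vague statements about two performance shaping factors
stress = TriFuzzy(1.0, 2.0, 3.0)      # "stress roughly doubles error likelihood"
ergonomics = TriFuzzy(0.5, 1.0, 2.0)  # "ergonomics effect is around neutral"
combined = stress + ergonomics        # fuzzy arithmetic instead of a point estimate
```

Carrying the full triangle through the calculation, rather than a single multiplier picked from a database, is what lets a procedure of this kind keep the vagueness explicit instead of hiding it in a subjective choice.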

  14. Collaborative Emission Reduction Model Based on Multi-Objective Optimization for Greenhouse Gases and Air Pollutants.

    Science.gov (United States)

    Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi

    2016-01-01

    CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world with regard to preserving the ecological environment. Energy consumption of coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and of air pollutants such as SO2 and NOx, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative reduction of the emissions of these three kinds of gases on the basis of their common constraints across different modes of energy consumption, in order to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, collaborative emission reduction for the three kinds of gases, multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study: the Granger causality test, to analyze the causality between air quality and pollutant emission; a function analysis, to determine the quantitative relation between energy consumption and pollutant emission; a multi-objective optimization, to set up the collaborative optimization model that considers energy consumption; and an optimality condition analysis for the multi-objective optimization model, to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, data on pollutant emission and final consumption of energy in Tianjin in 1996-2012 were employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the conclusions drawn are stated.

  15. Multiobjective Optimization Involving Quadratic Functions

    Directory of Open Access Journals (Sweden)

    Oscar Brito Augusto

    2014-01-01

Full Text Available Multiobjective optimization is nowadays a watchword in engineering projects. Although the idea involved is simple, implementing a procedure to solve a general problem is not an easy task. Evolutionary algorithms are widespread as a satisfactory technique for finding a candidate set for the solution. Usually they supply a discrete picture of the Pareto front even if this front is continuous. In this paper we propose three methods for solving unconstrained multiobjective optimization problems involving quadratic functions. In the first, for biobjective optimization defined in the bidimensional space, a continuous Pareto set is found analytically. In the second, applicable to multiobjective optimization, a condition test is proposed to check whether a point in the decision space is Pareto optimal or not and, in the third, with functions defined in n-dimensional space, a direct noniterative algorithm is proposed to find the Pareto set. Simple problems highlight the suitability of the proposed methods.
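The condition test in the second method can be illustrated with a toy sketch (my own construction, not the authors' algorithm): for unconstrained convex quadratics, a point is Pareto optimal exactly when some nonnegative, nontrivial combination of the objective gradients vanishes. In one dimension this reduces to the two gradients having opposite signs or one of them being zero.

```python
def grad(a, b):
    """Gradient of the quadratic f(x) = a*(x - b)**2, i.e. f'(x) = 2*a*(x - b)."""
    return lambda x: 2.0 * a * (x - b)

def is_pareto_1d(x, g1, g2):
    """Condition test for two convex 1-D quadratics: x is Pareto optimal
    iff w1*g1(x) + w2*g2(x) = 0 for some w1, w2 >= 0 not both zero,
    which holds exactly when the gradients have opposite signs or one vanishes."""
    return g1(x) * g2(x) <= 0.0

# f1(x) = (x - 0)^2, f2(x) = (x - 1)^2: the Pareto set is the interval [0, 1].
g1, g2 = grad(1.0, 0.0), grad(1.0, 1.0)
print(is_pareto_1d(0.5, g1, g2))  # → True  (inside [0, 1])
print(is_pareto_1d(1.5, g1, g2))  # → False (both objectives can still improve)
```

For these two objectives the test recovers the closed interval between the two minimizers, which is precisely the continuous Pareto set the first method finds analytically.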

  16. Pareto optimal pairwise sequence alignment.

    Science.gov (United States)

    DeRonne, Kevin W; Karypis, George

    2013-01-01

    Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
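The notion of a Pareto frontier over alignments can be sketched with a generic nondominated filter (an illustrative stand-in, not the authors' dynamic-programming algorithm): given alignments scored under several objectives, keep only those not dominated by any other.

```python
def pareto_frontier(points):
    """Return the nondominated subset of score tuples (higher is better).
    A point p is dominated if some q is >= p in every objective and > in at least one."""
    def dominated(p, q):
        return all(qi >= pi for pi, qi in zip(p, q)) and any(qi > pi for pi, qi in zip(p, q))
    return [p for p in points if not any(dominated(p, q) for q in points)]

# Hypothetical alignments scored under two profile scoring functions.
scores = [(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)]
print(sorted(pareto_frontier(scores)))  # → [(1, 5), (2, 4), (3, 3), (4, 1)]
```

This brute-force filter is quadratic in the number of candidates; the optimal-substructure property established in the paper is what allows the frontier to be built during the alignment itself rather than filtered afterwards.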

  17. The Emotional and Moral Basis of Rationality

    Science.gov (United States)

    Boostrom, Robert

    2013-01-01

    This chapter explores the basis of rationality, arguing that critical thinking tends to be taught in schools as a set of skills because of the failure to recognize that choosing to think critically depends on the prior development of stable sentiments or moral habits that nourish a rational self. Primary among these stable sentiments are the…

  18. Pseudolinear functions and optimization

    CERN Document Server

    Mishra, Shashi Kant

    2015-01-01

Pseudolinear Functions and Optimization is the first book to focus exclusively on pseudolinear functions, a class of generalized convex functions. It discusses the properties, characterizations, and applications of pseudolinear functions in nonlinear optimization problems. The book describes the characterizations of solution sets of various optimization problems. It examines multiobjective pseudolinear, multiobjective fractional pseudolinear, static minmax pseudolinear, and static minmax fractional pseudolinear optimization problems and their results. The authors extend these results to locally

  19. A Novel Optimization Method on Logistics Operation for Warehouse & Port Enterprises Based on Game Theory

    Directory of Open Access Journals (Sweden)

    Junyang Li

    2013-09-01

Full Text Available Purpose: The following investigation aims to deal with the competitive relationship among different warehouses & ports in the same company. Design/methodology/approach: In this paper, Game Theory is used to derive the optimization model, and a Genetic Algorithm is used to solve it. Findings: Unnecessary competition will arise if there is little internal communication among different warehouses & ports in one company. This paper develops a novel optimization method for the warehouse & port logistics operation model. Originality/value: Warehouse logistics business is a combination of warehousing services and terminal services, which port logistics provides through the existing port infrastructure on the basis of a port. The newly proposed method can help to optimize the logistics operation model for warehouse & port enterprises effectively. We take Sinotrans Guangdong Company as an example to illustrate the newly proposed method. Finally, according to the case study, this paper gives some responses and suggestions on logistics operation in the Sinotrans Guangdong warehouse & port for its future development.

  20. Optimization theory with applications

    CERN Document Server

    Pierre, Donald A

    1987-01-01

    Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all typ

  1. An optimal set of features for predicting type IV secretion system effector proteins for a subset of species based on a multi-level feature selection approach.

    Directory of Open Access Journals (Sweden)

    Zhila Esna Ashari

Full Text Available Type IV secretion systems (T4SS) are multi-protein complexes in a number of bacterial pathogens that can translocate proteins and DNA to the host. Most T4SSs function in conjugation and translocate DNA; however, approximately 13% function to secrete proteins, delivering effector proteins into the cytosol of eukaryotic host cells. Upon entry, these effectors manipulate the host cell's machinery for their own benefit, which can result in serious illness or death of the host. For this reason, recognition of T4SS effectors has become an important subject. Much previous work has focused on verifying effectors experimentally, a costly endeavor in terms of money, time, and effort. Having good predictions for effectors will help to focus experimental validations and decrease testing costs. In recent years, several scoring and machine learning-based methods have been suggested for the purpose of predicting T4SS effector proteins. These methods have used different sets of features for prediction, and their predictions have been inconsistent. In this paper, an optimal set of features is presented for predicting T4SS effector proteins using a statistical approach. A thorough literature search was performed to find features that have been proposed. Feature values were calculated for datasets of known effectors and non-effectors for T4SS-containing pathogens for four genera with a sufficient number of known effectors: Legionella pneumophila, Coxiella burnetii, Brucella spp., and Bartonella spp. The features were ranked, and less important features were filtered out. Correlations between remaining features were removed, and dimensional reduction was accomplished using principal component analysis and factor analysis. Finally, the optimal features for each pathogen were chosen by building logistic regression models and evaluating each model. The results based on evaluation of our logistic regression models confirm the effectiveness of our four optimal sets of

  2. OPTIMIZATION OF CONTROL ACTIONS IN THE FORMULATION OF SAUSAGE PRODUCTS IN THE PRESENCE OF TECHNOLOGICAL DEFECTS

    Directory of Open Access Journals (Sweden)

    A. V. Tokarev

    2015-01-01

Full Text Available The article considers the problem of optimizing the set of food additives in sausage formulations to eliminate defects in raw meat during production. A mathematical problem statement and a solution algorithm are proposed. Formally, the task is a combinatorial integer linear programming problem whose purpose is to provide a given set of functional, technological, and taste properties of the final product at minimum cost per unit mass of food additives. The proposed algorithm builds the solution step by step, eliminating unpromising options identified on the basis of recurrence relations. An example determines the optimal set of food additives for a particular case, in which the recipe for the sausage "Stolichnaya" contains large amounts of fat-containing raw materials. To solve the problem of binding and emulsifying oily materials, as shown in the article, supplements are needed which together would provide: phosphate (pH regulator), water-retaining agent, antioxidant, emulsifier, thickener, gelling agent, animal protein (filler), coloring agent, color retainer, and flavor intensifier. As an example, a set of six different food additives was presented with their technological and functional properties, flavor characteristics, and costs. It was necessary to determine which of these food additives to include in the formulation so that, on the one hand, they together provide the predetermined set of specified properties, and, on the other, their total cost is minimal. Solving this problem with the considered algorithm yielded the optimal set of food additives, which completely covers the required set. This kit contains all the necessary ingredients for the task, and its total cost is minimal, unlike other possible combinations. The algorithm is implemented in the "MultiMeat Expert" decision-support software system.
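The combinatorial selection described above can be sketched as an exhaustive search over subsets (a toy illustration with invented additive names, costs, and properties, not the step-by-step recurrence algorithm of the article): choose the cheapest subset of additives whose combined properties cover the required set.

```python
from itertools import combinations

# Hypothetical additives: name -> (cost per unit mass, set of properties provided).
additives = {
    "A": (3.0, {"pH regulator", "water-retaining"}),
    "B": (2.0, {"emulsifier", "thickener"}),
    "C": (4.0, {"gelling", "filler", "emulsifier"}),
    "D": (1.5, {"water-retaining", "flavor"}),
}
required = {"pH regulator", "water-retaining", "emulsifier", "flavor"}

def cheapest_cover(additives, required):
    """Brute-force minimum-cost set cover: try every subset of additives,
    keep the cheapest one whose union of properties covers the requirement."""
    best = None
    for r in range(1, len(additives) + 1):
        for combo in combinations(additives, r):
            props = set().union(*(additives[a][1] for a in combo))
            if required <= props:
                cost = sum(additives[a][0] for a in combo)
                if best is None or cost < best[0]:
                    best = (cost, set(combo))
    return best

print(cheapest_cover(additives, required))  # cheapest covering subset and its cost
```

For realistic numbers of additives this exponential enumeration is exactly what the article's step-by-step construction with pruning of unpromising options avoids.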

  3. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    Directory of Open Access Journals (Sweden)

    M Bilal Shaikh

    2017-06-01

Full Text Available Distributed Machine Learning (DML) has gained its importance more than ever in this era of Big Data. There are a lot of challenges to scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit; however, increasing machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We have conducted an empirical study which uses an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential) paradigm, which is widely used in BCI problems; it helps in translating the intention of a subject while performing any cognitive task. EEG data contains noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Use of machine learning techniques could help in detecting errors made by the P300 Speller. We are solving this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV Classifier. To present a case of optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that could improve the accuracy of distributed machine learners on average. Our results show better average AUC as compared to the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning. It improves the average accuracy of the distributed learner due to the domain-specific intelligent partitioning by the user. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random / uncontrolled data distribution records 0.63 AUC.
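The AUC metric used above to compare partitioning strategies can be computed from ranks alone; a minimal self-contained sketch (not the paper's pipeline) using the Mann-Whitney formulation:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation:
    the probability that a randomly chosen positive is scored above a randomly
    chosen negative, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # → 1.0 (perfect separation)
print(auc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.4]))  # → 0.5 (chance level)
```

An "average AUC" over partitions, as reported in the abstract, would simply be the mean of this quantity computed on each chunk's held-out predictions.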

  4. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    Science.gov (United States)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, R(e+e−) and Γ(H→bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on

  5. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for independent micro-grid with the aim of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grid.
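As a rough sketch of the underlying machinery (a plain single-objective particle swarm optimizer on a toy cost function; the paper's improved, multi-objective variant with a Pareto archive is not reproduced here):

```python
import random

def pso(cost, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimization: each particle keeps a velocity and
    is attracted toward its personal best and the swarm's global best."""
    rnd = random.Random(seed)
    xs = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pval = [cost(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rnd.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rnd.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))  # clamp to bounds
            v = cost(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

sphere = lambda x: sum(xi * xi for xi in x)  # stand-in for a micro-grid cost model
best, val = pso(sphere)
print(val)  # converges toward the minimum at 0
```

A multi-objective version would replace the single global best with an archive of nondominated solutions, which is how the Pareto solution set mentioned in the abstract is accumulated.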

  6. Identifying finite-time coherent sets from limited quantities of Lagrangian data

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Matthew O. [Program in Applied and Computational Mathematics, Princeton University, New Jersey 08544 (United States); Rypina, Irina I. [Department of Physical Oceanography, Woods Hole Oceanographic Institute, Massachusetts 02543 (United States); Rowley, Clarence W. [Department of Mechanical and Aerospace Engineering, Princeton University, New Jersey 08544 (United States)

    2015-08-15

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  7. Identifying finite-time coherent sets from limited quantities of Lagrangian data

    International Nuclear Information System (INIS)

    Williams, Matthew O.; Rypina, Irina I.; Rowley, Clarence W.

    2015-01-01

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea

  8. Identifying finite-time coherent sets from limited quantities of Lagrangian data.

    Science.gov (United States)

    Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W

    2015-08-01

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  9. Physics Mining of Multi-Source Data Sets

    Science.gov (United States)

    Helly, John; Karimabadi, Homa; Sipes, Tamara

    2012-01-01

Powerful new parallel data mining algorithms can produce diagnostic and prognostic numerical models and analyses from observational data. These techniques yield higher-resolution measures than ever before of environmental parameters by fusing synoptic imagery and time-series measurements. These techniques are general and relevant to observational data, including raster, vector, and scalar, and can be applied in all Earth- and environmental science domains. Because they can be highly automated and are parallel, they scale to large spatial domains and are well suited to change and gap detection. This makes it possible to analyze spatial and temporal gaps in information, and facilitates within-mission replanning to optimize the allocation of observational resources. The basis of the innovation is the extension of a recently developed set of algorithms packaged into MineTool to multi-variate time-series data. MineTool is unique in that it automates the various steps of the data mining process, thus making it amenable to autonomous analysis of large data sets. Unlike techniques such as Artificial Neural Nets, which yield a black-box solution, MineTool's outcome is always an analytical model in parametric form that expresses the output in terms of the input variables. This has the advantage that the derived equation can then be used to gain insight into the physical relevance and relative importance of the parameters and coefficients in the model. This is referred to as physics-mining of data. The capabilities of MineTool are extended to include both supervised and unsupervised algorithms, to handle multi-type data sets, and to run in parallel.

  10. Optimization of oil extraction process by the method of mathematical modeling

    Directory of Open Access Journals (Sweden)

    М. V. Kopylov

    2017-01-01

Full Text Available The main objective of the experimental study of any process is the analysis, study, and generalization of all available results. The experiment was carried out in several stages; the number of stages, and the steps within each, depended on the results of the previous stage and the ultimate goal of the research, which is to determine the optimum process conditions. The vegetable oil extraction process was studied in a pilot plant built around a single-screw oil press. To ensure maximum press efficiency without loss of quality in the finished product, it is necessary to ensure a continuous and uniform feed of oilseeds into the hopper. To process the experimental data, the STATISTICA 10 software package was applied, and to obtain the regression equation, the matrix data were processed using MS Excel. It was found that, to provide the required press performance, the angular speed of the screw should be set in the range of 5.5–6.4 s-1, the annular gap between the screw and the barrel chamber at 0.7–0.9 mm, and the barrel chamber gap at 0.16–0.18 mm. To optimize the process, the output values of the parameters were converted to a dimensionless desirability scale d. In the course of the research it was found that the overall desirability function D, which characterizes the adequacy of the obtained values, reaches an extremum in experiment 8, equal to 0.8. On this basis, the following parameter values should be considered optimal: an angular screw speed of 6.6 s-1, an annular gap between the screw and the barrel chamber of 0.92 mm, and the magnitude of the gap.
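The desirability approach referenced above can be sketched generically (a Derringer-style larger-is-better transform with invented bounds; the study's actual targets and transforms are not reproduced here): each response is mapped onto [0, 1], and the overall desirability D is the geometric mean of the individual values.

```python
def desirability(y, lo, hi):
    """Map a larger-is-better response y onto [0, 1]:
    0 below the lower bound, 1 at or above the target, linear in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def overall(ds):
    """Overall desirability D: the geometric mean of the individual d_i,
    so D = 0 whenever any single response is unacceptable."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

d1 = desirability(75.0, 50.0, 100.0)  # → 0.5
d2 = desirability(9.0, 5.0, 10.0)     # → 0.8
print(overall([d1, d2]))              # sqrt(0.5 * 0.8) ≈ 0.632
```

The optimum in the abstract corresponds to the experiment where this D reaches its maximum over the tested parameter settings.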

  11. Binary Cockroach Swarm Optimization for Combinatorial Optimization Problem

    Directory of Open Access Journals (Sweden)

    Ibidun Christiana Obagbuwa

    2016-09-01

Full Text Available The Cockroach Swarm Optimization (CSO) algorithm is inspired by cockroach social behavior. It is a simple and efficient meta-heuristic algorithm and has been applied to solve global optimization problems successfully. The original CSO algorithm and its variants operate mainly in continuous search space and cannot solve binary-coded optimization problems directly. Many optimization problems have their decision variables in binary. Binary Cockroach Swarm Optimization (BCSO) is proposed in this paper to tackle such problems and was evaluated on the popular Traveling Salesman Problem (TSP), which is considered to be an NP-hard Combinatorial Optimization Problem (COP). A transfer function was employed to map the continuous search space of CSO to a binary search space. The performance of the proposed algorithm was first tested on benchmark functions through simulation studies and compared with the performance of existing binary particle swarm optimization and continuous-space versions of CSO. The proposed BCSO was then adapted to the TSP and applied to a set of benchmark instances of the symmetric TSP from the TSP library. The results of the proposed Binary Cockroach Swarm Optimization (BCSO) algorithm on the TSP were compared to other meta-heuristic algorithms.
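The transfer-function step that carries a continuous-space swarm into a binary search space can be sketched as follows (a standard sigmoid mapping, assumed here for illustration; the paper may use a different transfer function):

```python
import math
import random

def sigmoid(v):
    """S-shaped transfer function squashing a continuous velocity into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def binarize(velocities, rnd):
    """Set each decision bit to 1 with probability S(v): a large positive
    velocity pushes the bit toward 1, a large negative one toward 0."""
    return [1 if rnd.random() < sigmoid(v) else 0 for v in velocities]

print(sigmoid(0.0))                                   # → 0.5
print(binarize([10.0, -10.0, 10.0], random.Random(0)))
```

With this mapping, the continuous update rules of CSO stay unchanged; only the position update is replaced by the stochastic bit assignment above.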

  12. Methodology of shell structure reinforcement layout optimization

    Science.gov (United States)

    Szafrański, Tomasz; Małachowski, Jerzy; Damaziak, Krzysztof

    2018-01-01

    This paper presents an optimization process of a reinforced shell diffuser intended for a small wind turbine (rated power of 3 kW). The diffuser structure consists of multiple reinforcement and metal skin. This kind of structure is suitable for optimization in terms of selection of reinforcement density, stringers cross sections, sheet thickness, etc. The optimisation approach assumes the reduction of the amount of work to be done between the optimization process and the final product design. The proposed optimization methodology is based on application of a genetic algorithm to generate the optimal reinforcement layout. The obtained results are the basis for modifying the existing Small Wind Turbine (SWT) design.

  13. Impact of thermal constraints on the optimal design of high-level waste repositories in geologic media

    Energy Technology Data Exchange (ETDEWEB)

    Malbrain, C; Lester, R K [Massachusetts Inst. of Tech., Cambridge (USA). Dept. of Nuclear Engineering

    1982-12-01

    An approximate, semi-analytical heat conduction model for predicting the time-dependent temperature distribution in the region of a high-level waste repository has been developed. The model provides the basis for a systematic, inexpensive examination of the impact of several independent thermal design constraints on key repository design parameters and for determining the optimal set of design parameters which satisfy these constraints. Illustrative calculations have been carried out for conceptual repository designs for spent pressurized water reactor (PWR) fuel and reprocessed PWR high-level waste in salt and granite media.

  14. Bounds on Rates of Variable-Basis and Neural-Network Approximation

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2001-01-01

    Roč. 47, č. 6 (2001), s. 2659-2665 ISSN 0018-9448 R&D Projects: GA ČR GA201/00/1482 Institutional research plan: AV0Z1030915 Keywords : approximation by variable-basis functions * bounds on rates of approximation * complexity of neural networks * high-dimensional optimal decision problems Subject RIV: BA - General Mathematics Impact factor: 2.077, year: 2001

  15. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

Distance-based methods for measuring the spatial concentration of industries have gained increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
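The memory argument can be illustrated with a toy sketch (mine, not the authors' algorithm): rather than materialising the full n×n distance matrix, stream over the pairs and accumulate a fixed-size histogram, so memory stays constant in n even though the pass is still O(n^2) in time.

```python
def distance_histogram(points, bin_width, n_bins):
    """Histogram of pairwise Euclidean distances computed in a single streaming
    pass over pairs: O(n^2) time but only O(n_bins) memory, never an n x n matrix."""
    counts = [0] * n_bins
    for i in range(len(points)):
        xi, yi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            d = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            b = min(int(d / bin_width), n_bins - 1)  # overflow goes to last bin
            counts[b] += 1
    return counts

pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(distance_histogram(pts, 1.5, 2))  # distances 1, 1, 2 → [2, 1]
```

Distance-based indices like the M-function only need such binned pair counts, which is why trading the stored matrix for a streaming histogram suffices in practice.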

  16. Automatic Planning of External Search Engine Optimization

    Directory of Open Access Journals (Sweden)

    Vita Jasevičiūtė

    2015-07-01

Full Text Available This paper describes an investigation of an external search engine optimization (SEO) action planning tool, designed to automatically extract a small set of the most important keywords for each month over a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for every month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to the optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.
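A minimal sketch of such a plan generator (invented keywords and search volumes; ranking purely by monthly search volume, a simplification of the tool's combined criteria):

```python
def monthly_plan(monthly_searches, k=2):
    """For each month pick the k keywords with the highest search volume.
    monthly_searches: {month: {keyword: number of searches}}."""
    return {m: sorted(kw, key=kw.get, reverse=True)[:k]
            for m, kw in monthly_searches.items()}

data = {
    "2015-01": {"ski rental": 900, "beach hotel": 50, "city tour": 300},
    "2015-07": {"ski rental": 40, "beach hotel": 1200, "city tour": 500},
}
plan = monthly_plan(data)
print(plan["2015-01"])  # → ['ski rental', 'city tour']
print(plan["2015-07"])  # → ['beach hotel', 'city tour']
```

A fuller version, closer to the described tool, would weight each keyword's volume by the site's current ranking position for it before sorting.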

  17. Optimal covariate designs theory and applications

    CERN Document Server

    Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar

    2015-01-01

    This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...

  18. Optimization and optimal control in automotive systems

    CERN Document Server

    Kolmanovsky, Ilya; Steinbuch, Maarten; Re, Luigi

    2014-01-01

This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier approaches, based on some degree of heuristics, to the use of more and more common systematic methods. Even systematic methods can be developed and applied in a large number of forms so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems which are also considered. Some of these problems fall into the domain of the traditional multi-disciplinary optimization applie

  19. A new hybrid optimization algorithm CRO-DE for optimal coordination of overcurrent relays in complex power systems

    Directory of Open Access Journals (Sweden)

    Mohamed Zellagui

    2017-09-01

    The paper presents a new hybrid global optimization algorithm based on Chemical Reaction based Optimization (CRO) and Differential Evolution (DE) for nonlinear constrained optimization problems. The approach is applied to the optimal coordination and setting of directional overcurrent relays in complex power systems. In the protection coordination problem, the objective function to be minimized is the sum of the operating times of all main relays. The optimization problem is subject to a number of constraints, mainly on the operation of the backup relay (which should operate if a primary relay fails to respond to a nearby fault), the Time Dial Setting (TDS), the Plug Setting (PS), and the minimum operating time of a relay. The proposed hybrid global optimization algorithm aims to minimize the total operating time of each protection relay. Two systems, the IEEE 4-bus and IEEE 6-bus models, are used as case studies to check the efficiency of the algorithm. Results are obtained and presented for the CRO, DE, and hybrid CRO-DE algorithms, and compared with those obtained using other optimization algorithms: Teaching Learning-Based Optimization (TLBO), the Chaotic Differential Evolution Algorithm (CDEA), the Modified Differential Evolution Algorithm (MDEA), and hybrid algorithms (PSO-DE, IA-PSO, and BFOA-PSO). Analysis of the results shows that the hybrid CRO-DE algorithm provides the most optimal solution with the best convergence rate.
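For readers unfamiliar with the underlying model: directional overcurrent relays are commonly described by the IEC standard-inverse characteristic, and the coordination problem then amounts to choosing TDS and PS values that minimize total primary operating time while keeping each backup at least a coordination time interval (CTI) slower than its primary. A minimal Python sketch of the objective and constraint (relay settings and fault currents are made-up illustrative values, not data from the paper):

```python
import numpy as np

def op_time(tds, ps, i_fault, a=0.14, b=0.02):
    """IEC standard-inverse operating time of an overcurrent relay."""
    return tds * a / ((i_fault / ps) ** b - 1.0)

def total_time(x, fault_currents):
    """Objective: sum of primary-relay operating times.
    x packs [TDS1, PS1, TDS2, PS2, ...]."""
    tds, ps = x[0::2], x[1::2]
    return sum(op_time(t, p, i) for t, p, i in zip(tds, ps, fault_currents))

# Illustrative two-relay instance (currents in A; values are invented).
faults = [2000.0, 1500.0]
x = np.array([0.1, 400.0, 0.15, 300.0])
t_total = total_time(x, faults)

# Coordination constraint: backup must lag primary by at least CTI seconds.
CTI = 0.3
t_primary = op_time(0.1, 400.0, 2000.0)
t_backup = op_time(0.15, 300.0, 2000.0)
feasible = (t_backup - t_primary) >= CTI
```

A metaheuristic such as CRO-DE would search over the TDS/PS vector `x`, penalizing or rejecting settings for which the CTI constraint fails.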

  20. Optimal Investment Control of Macroeconomic Systems

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ke-jie; LIU Chuan-zhe

    2006-01-01

    Economic growth is always accompanied by economic fluctuation. The target of macroeconomic control is to keep a basic balance of economic growth, accelerate the optimization of economic structures and lead a rapid, sustainable and healthy development of national economies, in order to propel society forward. To realize this goal, investment control must be regarded as the most important policy for economic stability. Readjustment and control of investment includes not only control of aggregate investment, but also structural control, which depends on economic-technology relationships between the various industries of a national economy. On the basis of the theory of generalized systems, an optimal investment control model for government has been developed. In order to provide a scientific basis for government to formulate macroeconomic control policy, the model investigates the balance of total supply and aggregate demand and, through adjustments in investment decisions, realizes sustainable and stable growth of the national economy. The optimal investment decision function proposed by this study has a unique and specific expression, high regulating precision and computable characteristics.

  1. A projection-free method for representing plane-wave DFT results in an atom-centered basis

    International Nuclear Information System (INIS)

    Dunnington, Benjamin D.; Schmidt, J. R.

    2015-01-01

    Plane wave density functional theory (DFT) is a powerful tool for gaining accurate, atomic level insight into bulk and surface structures. Yet, the delocalized nature of the plane wave basis set hinders the application of many powerful post-computation analysis approaches, many of which rely on localized atom-centered basis sets. Traditionally, this gap has been bridged via projection-based techniques from a plane wave to atom-centered basis. We instead propose an alternative projection-free approach utilizing direct calculation of matrix elements of the converged plane wave DFT Hamiltonian in an atom-centered basis. This projection-free approach yields a number of compelling advantages, including strict orthonormality of the resulting bands without artificial band mixing and access to the Hamiltonian matrix elements, while faithfully preserving the underlying DFT band structure. The resulting atomic orbital representation of the Kohn-Sham wavefunction and Hamiltonian provides a gateway to a wide variety of analysis approaches. We demonstrate the utility of the approach for a diverse set of chemical systems and example analysis approaches

  2. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform selective ensembling of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, relative to the popular simple average, weighted average, and Bagging methods.
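The core of such a combination step — fitting fusion weights that minimize the ensemble's squared error subject to the weights summing to one — has a closed-form solution via the KKT system of the equality-constrained quadratic program. A hedged sketch (generic predictors stand in for the component RBFNs; the paper's full QP may impose additional bounds on the weights):

```python
import numpy as np

def fusion_weights(preds, target):
    """Least-squares fusion weights constrained to sum to 1.
    preds: (n_models, n_samples) component predictions."""
    # Minimize ||w @ preds - target||^2  s.t.  sum(w) = 1,
    # by solving the KKT system of the equality-constrained QP.
    P = preds @ preds.T          # quadratic term
    q = preds @ target           # linear term
    n = P.shape[0]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2 * P
    K[:n, n] = 1.0               # constraint gradient
    K[n, :n] = 1.0
    rhs = np.concatenate([2 * q, [1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]               # drop the Lagrange multiplier

# Toy example: two imperfect approximators of sin(x).
x = np.linspace(0, np.pi, 50)
target = np.sin(x)
preds = np.vstack([target + 0.1, 0.8 * target])  # biased / scaled models
w = fusion_weights(preds, target)
combined = w @ preds
```

By construction the combined predictor's squared error is no worse than that of any single component, since each component corresponds to a feasible one-hot weight vector.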

  3. Optimal operation of batch membrane processes

    CERN Document Server

    Paulen, Radoslav

    2016-01-01

    This study concentrates on a general optimization of a particular class of membrane separation processes: those involving batch diafiltration. Existing practices are explained and operational improvements based on optimal control theory are suggested. The first part of the book introduces the theory of membrane processes, optimal control and dynamic optimization. Separation problems are defined and mathematical models of batch membrane processes derived. The control theory focuses on problems of dynamic optimization from a chemical-engineering point of view. Analytical and numerical methods that can be exploited to treat problems of optimal control for membrane processes are described. The second part of the text builds on this theoretical basis to establish solutions for membrane models of increasing complexity. Each chapter starts with a derivation of optimal operation and continues with case studies exemplifying various aspects of the control problems under consideration. The authors work their way from th...

  4. Gaussian basis functions for highly oscillatory scattering wavefunctions

    Science.gov (United States)

    Mant, B. P.; Law, M. M.

    2018-04-01

    We have applied a basis set of distributed Gaussian functions within the S-matrix version of the Kohn variational method to scattering problems involving deep potential energy wells. The Gaussian positions and widths are tailored to the potential using the procedure of Bačić and Light (1986 J. Chem. Phys. 85 4594) which has previously been applied to bound-state problems. The placement procedure is shown to be very efficient and gives scattering wavefunctions and observables in agreement with direct numerical solutions. We demonstrate the basis function placement method with applications to hydrogen atom–hydrogen atom scattering and antihydrogen atom–hydrogen atom scattering.

  5. COMPROMISE, OPTIMAL AND TRACTIONAL ACCOUNTS ON PARETO SET

    Directory of Open Access Journals (Sweden)

    V. V. Lahuta

    2010-11-01

    The problem of optimum traction calculations is considered as a problem of optimum distribution of a resource. The dynamic programming solution is based on a step-by-step calculation of the set of Pareto-optimal values of a criterion function (energy expenses) and a resource (time).
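The step-by-step Pareto construction described here can be sketched generically: at each stage the current front is extended by every available option, and dominated (time, energy) pairs are discarded. A toy Python illustration (the two-segment data are invented for the example, not taken from the paper):

```python
def pareto_filter(points):
    """Keep points not dominated in (time, energy); both minimized."""
    pts = sorted(points)                     # sort by time, then energy
    front, best_e = [], float("inf")
    for t, e in pts:
        if e < best_e:                       # strictly better energy
            front.append((t, e))
            best_e = e
    return front

def dp_pareto(stages):
    """stages: list of per-stage options [(dt, de), ...].
    Returns Pareto-optimal cumulative (time, energy) pairs."""
    front = [(0.0, 0.0)]
    for options in stages:
        candidates = [(t + dt, e + de) for t, e in front for dt, de in options]
        front = pareto_filter(candidates)
    return front

# Toy trip: two segments, each drivable fast/expensive or slow/cheap.
stages = [[(1.0, 5.0), (2.0, 3.0)], [(1.5, 4.0), (3.0, 2.0)]]
front = dp_pareto(stages)
```

The final front enumerates every non-dominated trade-off between trip time and energy expense, from which a schedule can be selected.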

  6. Estimation of optimal b-value sets for obtaining apparent diffusion coefficient free from perfusion in non-small cell lung cancer.

    Science.gov (United States)

    Karki, Kishor; Hugo, Geoffrey D; Ford, John C; Olsen, Kathryn M; Saraiya, Siddharth; Groves, Robert; Weiss, Elisabeth

    2015-10-21

    The purpose of this study was to determine optimal sets of b-values in diffusion-weighted MRI (DW-MRI) for obtaining a monoexponential apparent diffusion coefficient (ADC) close to the perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADCIVIM) in non-small cell lung cancer. Ten subjects had 40 DW-MRI scans before and during radiotherapy in a 1.5 T MRI scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, eight b-values of 0-1000 μs μm⁻², pixel size = 1.98 × 1.98 mm², slice thickness = 6 mm, interslice gap = 1.2 mm, 7 axial slices and total acquisition time ≈ 6 min. One or more DW-MRI scans together covered the whole tumour volume. Monoexponential model ADC values using various b-value sets were compared to reference-standard ADCIVIM values using all eight b-values. The intra-scan coefficient of variation (CV) of active tumour volumes was computed to compare the relative noise in ADC maps. ADC values for one pre-treatment DW-MRI scan of each of the 10 subjects were computed using b-value pairs from DW-MRI images synthesized for b-values of 0-2000 μs μm⁻² from the estimated IVIM parametric maps and corrupted by various Rician noise levels. The root mean squared error percentage (RMSE) of the ADC value relative to the corresponding ADCIVIM for the tumour volume of the scan was computed. Monoexponential ADC values for the b-value sets of 250 and 1000; 250, 500 and 1000; 250, 650 and 1000; 250, 800 and 1000; and 250-1000 μs μm⁻² were not significantly different from ADCIVIM values (p > 0.05, paired t-test). Mean errors in ADC values for these sets relative to ADCIVIM were within 3.5%. Intra-scan CVs for these sets were comparable to that for ADCIVIM. The monoexponential ADC values for the other sets (0-1000; 50-1000; 100-1000; 500-1000; and 250 and 800 μs μm⁻²) were significantly different from the ADCIVIM values. From Rician noise
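For context, the monoexponential model referenced above is S(b) = S0·exp(−b·ADC), so the ADC for a given b-value set is the negative slope of log-signal versus b; note also that the record's μs μm⁻² units are numerically identical to the more familiar s mm⁻². A sketch of the fit on noiseless synthetic data (the parameter values are illustrative, not the study's):

```python
import numpy as np

def adc_monoexp(b_values, signals):
    """Log-linear least-squares fit of S(b) = S0 * exp(-b * ADC).
    Returns ADC (in the reciprocal units of b)."""
    b = np.asarray(b_values, dtype=float)
    s = np.log(np.asarray(signals, dtype=float))
    slope, _intercept = np.polyfit(b, s, 1)  # slope of log-signal vs b is -ADC
    return -slope

# Synthetic voxel: ADC = 1.2e-3 mm^2/s, b in s/mm^2 (a typical DWI range).
true_adc, s0 = 1.2e-3, 100.0
b_set = np.array([250.0, 500.0, 1000.0])     # one of the near-optimal sets
signals = s0 * np.exp(-b_set * true_adc)
adc = adc_monoexp(b_set, signals)
```

Starting the set at b = 250 rather than b = 0, as the study's near-optimal sets do, suppresses the perfusion-dominated low-b signal that biases the monoexponential fit.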

  7. Shape optimization of high power centrifugal compressor using multi-objective optimal method

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Soo; Lee, Jeong Min; Kim, Youn Jea [School of Mechanical Engineering, Sungkyunkwan University, Seoul (Korea, Republic of)

    2015-03-15

    In this study, a method for optimal design of impeller and diffuser blades in the centrifugal compressor using response surface method (RSM) and multi-objective genetic algorithm (MOGA) was evaluated. A numerical simulation was conducted using ANSYS CFX with various values of impeller and diffuser parameters, which consist of leading edge (LE) angle, trailing edge (TE) angle, and blade thickness. Each of the parameters was divided into three levels. A total of 45 design points were planned using central composite design (CCD), which is one of the design of experiment (DOE) techniques. Response surfaces that were generated on the basis of the results of DOE were used to determine the optimal shape of impeller and diffuser blade. The entire process of optimization was conducted using ANSYS Design Xplorer (DX). Through the optimization, isentropic efficiency and pressure recovery coefficient, which are the main performance parameters of the centrifugal compressor, were increased by 0.3 and 5, respectively.
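The RSM workflow in this record — sample a central composite design, fit a second-order polynomial surface, then optimize on the surrogate — can be sketched in a few lines. Here a hypothetical analytic response stands in for the CFD runs, and two coded factors stand in for the blade parameters:

```python
import numpy as np

def quad_features(X):
    """Second-order RSM terms for two factors: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Face-centered CCD for two factors: 4 corners, 4 axial points, 1 center.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)

# Hypothetical responses, e.g. isentropic efficiency from CFD runs.
def simulate(x1, x2):
    return 0.85 - 0.02 * (x1 - 0.3) ** 2 - 0.01 * (x2 + 0.2) ** 2

y = simulate(X[:, 0], X[:, 1])

# Least-squares fit of the quadratic response surface.
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Evaluate the surrogate on a grid and pick the best coded point.
g = np.linspace(-1, 1, 201)
G1, G2 = np.meshgrid(g, g)
Xg = np.column_stack([G1.ravel(), G2.ravel()])
pred = quad_features(Xg) @ beta
best = Xg[np.argmax(pred)]
```

In the study itself the grid search is replaced by a multi-objective genetic algorithm over several such surfaces, but the surrogate-fitting step is the same in spirit.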

  8. Shape optimization of high power centrifugal compressor using multi-objective optimal method

    International Nuclear Information System (INIS)

    Kang, Hyun Soo; Lee, Jeong Min; Kim, Youn Jea

    2015-01-01

    In this study, a method for optimal design of impeller and diffuser blades in the centrifugal compressor using response surface method (RSM) and multi-objective genetic algorithm (MOGA) was evaluated. A numerical simulation was conducted using ANSYS CFX with various values of impeller and diffuser parameters, which consist of leading edge (LE) angle, trailing edge (TE) angle, and blade thickness. Each of the parameters was divided into three levels. A total of 45 design points were planned using central composite design (CCD), which is one of the design of experiment (DOE) techniques. Response surfaces that were generated on the basis of the results of DOE were used to determine the optimal shape of impeller and diffuser blade. The entire process of optimization was conducted using ANSYS Design Xplorer (DX). Through the optimization, isentropic efficiency and pressure recovery coefficient, which are the main performance parameters of the centrifugal compressor, were increased by 0.3 and 5, respectively

  9. A topological derivative method for topology optimization

    DEFF Research Database (Denmark)

    Norato, J.; Bendsøe, Martin P.; Haber, RB

    2007-01-01

    We propose a fictitious domain method for topology optimization in which a level set of the topological derivative field for the cost function identifies the boundary of the optimal design. We describe a fixed-point iteration scheme that implements this optimality criterion subject to a volumetric resource constraint. A smooth and consistent projection of the region bounded by the level set onto the fictitious analysis domain simplifies the response analysis and enhances the convergence of the optimization algorithm. Moreover, the projection supports the reintroduction of solid material in void regions, a critical requirement for robust topology optimization. We present several numerical examples that demonstrate compliance minimization of fixed-volume, linearly elastic structures.
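One way to realize the level-set criterion described above is to choose the cut-off of the derivative field by bisection so that the retained region satisfies the volume (resource) constraint; the level set of the field then defines the design boundary. A hedged sketch on a synthetic sensitivity field (not the authors' implementation):

```python
import numpy as np

def threshold_for_volume(field, vol_frac, iters=50):
    """Bisection on a level-set threshold of a derivative field so that
    the retained region matches a target volume fraction."""
    lo, hi = field.min(), field.max()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        frac = np.mean(field < mid)   # keep cells with most negative derivative
        if frac < vol_frac:
            lo = mid                  # threshold too low: too little material
        else:
            hi = mid
    return mid

# Toy "topological derivative" field on a grid, most negative at the center.
n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
field = X**2 + Y**2 - 1.0             # stands in for the cost sensitivity

tau = threshold_for_volume(field, vol_frac=0.3)
design = field < tau                  # the level set bounds the design
```

In a real fixed-point iteration the field would be recomputed from a response analysis of the current design and the thresholding repeated until the design stops changing.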

  10. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. It is difficult to solve them directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.
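As background, the conditional value-at-risk that these models control is commonly computed from samples via the Rockafellar–Uryasev formula, CVaR_α(L) = min_t { t + E[(L − t)⁺]/(1 − α) }. The sketch below shows that plain sample version (not the paper's distributionally robust SDP reformulation), with invented loss data:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample CVaR via the Rockafellar-Uryasev formula:
    min_t  t + mean(max(losses - t, 0)) / (1 - alpha).
    The objective is piecewise linear and convex in t, so the
    minimum is attained at one of the sample losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    candidates = [t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)
                  for t in losses]
    return min(candidates)

# Toy portfolio losses (negative returns); illustrative numbers only.
rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=0.02, size=10_000)
c95 = cvar(losses, alpha=0.95)
# For N(0, 0.02) the analytic CVaR_0.95 is about 0.0412
```

CVaR is always at least as large as the corresponding VaR (the α-quantile of losses), since it averages over the tail beyond that quantile.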

  11. Optimization of Laminated Composite Structures

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup

    ... allows for a higher degree of tailoring of the resulting material. To enable better utilization of the composite materials, optimum design procedures can be used to assist the engineer. This PhD thesis is focused on developing numerical methods for optimization of laminated composite structures... nonlinear analysis of structures, buckling and post-buckling analysis of structures, and formulations for optimization of structures considering stiffness, buckling, and post-buckling criteria. Lastly, descriptions, main findings, and conclusions of the papers are presented. The papers forming the basis of the contributions of the PhD project are included in the second part of the thesis. Paper A presents a framework for free material optimization where commercially available finite element analysis software is used as analysis tool. Robust buckling optimization of laminated composite structures by including...

  12. Preventive maintenance basis: Volume 16 -- Power operated relief valves, solenoid actuated. Final report

    International Nuclear Information System (INIS)

    Worledge, D.; Hinchcliffe, G.

    1997-07-01

    US nuclear plants are implementing preventive maintenance (PM) tasks with little documented basis beyond fundamental vendor information to support the tasks or their intervals. The Preventive Maintenance Basis project provides utilities with the technical basis for PM tasks and task intervals associated with 40 specific components such as valves, electric motors, pumps, and HVAC equipment. This report provides an overview of the PM Basis project and describes use of the PM Basis database. This volume 16 of the report provides a program of PM tasks suitable for application to power operated relief valves (PORV's) that are solenoid actuated. The PM tasks that are recommended provide a cost-effective way to intercept the causes and mechanisms that lead to degradation and failure. They can be used, in conjunction with material from other sources, to develop a complete PM program or to improve an existing program. Users of this information will be utility managers, supervisors, craft technicians, and training instructors responsible for developing, optimizing, or fine-tuning PM programs

  13. The neural basis of task switching changes with skill acquisition

    Directory of Open Access Journals (Sweden)

    Koji eJimura

    2014-05-01

    Learning novel skills involves reorganization and optimization of cognitive processing involving a broad network of brain regions. Previous work has shown asymmetric costs of switching to a well-trained task versus a poorly-trained task, but the neural basis of these differential switch costs is unclear. The current study examined the neural signature of task switching in the context of acquisition of a new skill. Human participants alternated randomly between a novel visual task (mirror-reversed word reading) and a highly practiced one (plain word reading), allowing the isolation of task switching and skill set maintenance. Two scan sessions were separated by two weeks, with behavioral training on the mirror reading task in between the two sessions. Broad cortical regions, including bilateral prefrontal, parietal, and extrastriate cortices, showed decreased activity associated with learning of the mirror reading skill. In contrast, learning to switch to the novel skill was associated with decreased activity in a focal subcortical region in the dorsal striatum. Switching to the highly practiced task was associated with a non-overlapping set of regions, suggesting substantial differences in the neural substrates of switching as a function of task skill. Searchlight multivariate pattern analysis also revealed that learning was associated with decreased pattern information for mirror versus plain reading tasks in fronto-parietal regions. The inferior frontal junction and posterior parietal cortex showed a joint effect of univariate activation and pattern information. These results suggest distinct learning mechanisms for task performance and executive control as a function of learning.

  14. Optimal thermoeconomic performance of an irreversible regenerative ferromagnetic Ericsson refrigeration cycle

    International Nuclear Information System (INIS)

    Xu, Zhichao; Guo, Juncheng; Lin, Guoxing; Chen, Jincan

    2016-01-01

    On the basis of the Langevin theory of classical statistical mechanics, the magnetization, entropy, and iso-field heat capacity of ferromagnetic materials are analyzed and their mathematical expressions are derived. An irreversible regenerative Ericsson refrigeration cycle by using a ferromagnetic material as the working substance is established, in which finite heat capacity rates of low and high temperature reservoirs, non-perfect regenerative heat of the refrigeration cycle, additional regenerative heat loss, etc. are taken into account. Based on the regenerative refrigeration cycle model, a thermoeconomic function is introduced as one objective function and optimized with respect to the temperatures of the working substance in the two iso-thermal processes. By means of numerical calculation, the effects of the effective factor of the heat exchangers in high/low temperature reservoir sides, efficiency of the regenerator, heat capacity rate of the low temperature reservoir, and applied magnetic field on the optimal thermoeconomic function as well as the corresponding cooling rate and coefficient of performance are revealed. The results obtained in this paper can provide some theoretical guidance for the optimal design of actual regenerative magnetic refrigerator cycle. - Highlights: • Thermodynamic performance of ferromagnetic material is analyzed. • An irreversible regenerative ferromagnetic Ericsson refrigeration cycle is set up. • The thermoeconomic objective function is introduced and optimized. • Impacts of the thermoeconomic and other parameters are discussed.

  15. Chemical optimization algorithm for fuzzy controller design

    CERN Document Server

    Astudillo, Leslie; Castillo, Oscar

    2014-01-01

    In this book, a novel optimization method inspired by a paradigm from nature is introduced. The chemical reactions are used as a paradigm to propose an optimization method that simulates these natural processes. The proposed algorithm is described in detail and then a set of typical complex benchmark functions is used to evaluate the performance of the algorithm. Simulation results show that the proposed optimization algorithm can outperform other methods in a set of benchmark functions. This chemical reaction optimization paradigm is also applied to solve the tracking problem for the dynamic model of a unicycle mobile robot by integrating a kinematic and a torque controller based on fuzzy logic theory. Computer simulations are presented confirming that this optimization paradigm is able to outperform other optimization techniques applied to this particular robot application

  16. Educational texts as empirical basis in qualitative research in Physical Education

    DEFF Research Database (Denmark)

    Svendsen, Annemari Munk

    This presentation will focus attention on educational texts as empirical basis in qualitative research in Physical Education (PE). Educational texts may be defined as all kinds of texts used in a pedagogical setting, including textbooks, popular articles, webpages and political reports (Selander...). This makes them fundamental sites for illuminating what counts as knowledge in an educational setting (Selander & Skjeldbred, 2004). This presentation will introduce a qualitative research study based on discourse analysis of educational texts in Physical Education Teacher Education (PETE) in Denmark (Svendsen & Svendsen, 2014). It will present the theoretical and methodological considerations that are tied to the analysis of educational texts and discuss the qualities and challenges related to educational texts as empirical basis in qualitative research in PE. References: Apple, M. W. & Christian...

  17. Olive paste oil content on a dry weight basis (OPDW): an indicator for optimal harvesting time in modern olive orchards

    Energy Technology Data Exchange (ETDEWEB)

    Zipori, I.; Bustan, A.; Kerem, Z.; Dag, A.

    2016-07-01

    In modern oil olive orchards, mechanical harvesting technologies have significantly accelerated harvesting outputs, thereby allowing for careful planning of harvest timing. While optimizing harvest time may have profound effects on oil yield and quality, the necessary tools to precisely determine the best date are rather scarce. For instance, the commonly used indicator, the fruit ripening index, does not necessarily correlate with oil accumulation. Oil content per fruit fresh weight is strongly affected by fruit water content, making the ripening index an unreliable indicator. However, oil in the paste, calculated on a dry weight basis (OPDW), provides a reliable indication of oil accumulation in the fruit. In most cultivars tested here, OPDW never exceeded ca. 0.5 g·g–1 dry weight, making this threshold the best indicator for the completion of oil accumulation and its consequent reduction in quality thereafter. The rates of OPDW and changes in quality parameters strongly depend on local conditions, such as climate, tree water status and fruit load. We therefore propose a fast and easy method to determine and monitor the OPDW in a given orchard. The proposed method is a useful tool for the determination of optimal harvest timing, particularly in large plots under intensive cultivation practices, with the aim of increasing orchard revenues. The results of this research can be directly applied in olive orchards, especially in large-scale operations. By following the proposed method, individual plots can be harvested according to sharp thresholds of oil accumulation status and pre-determined oil quality parameters, thus effectively exploiting the potentials of oil yield and quality. The method can become a powerful tool for scheduling the harvest throughout the season, and at the same time forecasting the flow of olives to the olive mill. (Author)
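The quantity itself is simple to compute from routine paste measurements: OPDW is the oil mass divided by the dry mass of the paste, obtainable from fresh-weight oil percentage and paste moisture. A sketch with hypothetical weekly sample values (the 0.5 g·g⁻¹ threshold is the one reported above; the sample numbers are invented):

```python
def opdw(oil_pct_fresh, moisture_pct):
    """Oil content on a dry-weight basis (g oil per g dry paste),
    from fresh-weight oil percentage and paste moisture percentage."""
    dry_fraction = 1.0 - moisture_pct / 100.0
    return (oil_pct_fresh / 100.0) / dry_fraction

# Hypothetical weekly plot samples: (% oil on fresh weight, % moisture).
samples = [(14.0, 62.0), (17.0, 60.0), (20.5, 58.0)]
values = [opdw(o, m) for o, m in samples]

THRESHOLD = 0.5  # g/g dry weight: oil accumulation essentially complete
ready = values[-1] >= THRESHOLD
```

Because OPDW normalizes out fruit water content, the weekly series rises monotonically toward the plateau even when fresh-weight oil percentages fluctuate with irrigation or rainfall.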

  18. A new approach to nuclear reactor design optimization using genetic algorithms and regression analysis

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.

    2015-01-01

    Highlights: • This paper presents a new method useful for the optimization of complex dynamic systems. • The method uses the strengths of genetic algorithms (GA) and regression splines. • The method is applied to the design of a gas cooled fast breeder reactor. • Tools like Java and R, and codes like MCNP and Matlab, are used in this research. - Abstract: A module-based optimization method using genetic algorithms (GA) and multivariate regression analysis has been developed to optimize a set of parameters in the design of a nuclear reactor. GA simulates natural evolution to perform optimization, and is widely used in recent times by the scientific community. The GA fits a population of random solutions to the optimized solution of a specific problem. In this work, we have developed a genetic algorithm to determine the values of a set of nuclear reactor parameters for the design of a gas cooled fast breeder reactor core, including a basic thermal–hydraulics analysis and energy transfer. Multivariate regression is implemented using regression splines (RS). Reactor designs are usually complex, and a simulation needs a significantly large amount of time to execute, so the direct implementation of GA or any other global optimization technique is not feasible; we therefore present a new method of using RS in conjunction with GA. Owing to the use of RS, we do not need to run the neutronics simulation for every input generated by the GA module; rather, we run the simulations for a predefined set of inputs, build a multivariate regression fit to the input and output parameters, and then use this fit to predict the output parameters for the inputs generated by GA. The reactor parameters are the radius of a fuel pin cell, the isotopic enrichment of the fissile material in the fuel, the mass flow rate of the coolant, and the temperature of the coolant at the core inlet. The optimization objectives for the reactor core are high breeding of U-233 and Pu-239 in
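The surrogate-assisted loop described here can be sketched in miniature: run the expensive simulation only on a predefined design set, fit a cheap regression model to those results, and let the GA evaluate nothing but the surrogate. The sketch below uses a 1-D toy objective and a polynomial fit in place of the paper's multivariate regression splines; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def expensive_sim(x):
    """Stand-in for a costly reactor simulation (1-D toy objective)."""
    return -(x - 0.6) ** 2  # maximized at x = 0.6

# 1) Run the simulation only on a small predefined design set...
X_design = np.linspace(0.0, 1.0, 11)
y_design = expensive_sim(X_design)

# 2) ...and fit a cheap surrogate (a polynomial here; the paper
#    uses multivariate regression splines).
coeffs = np.polyfit(X_design, y_design, deg=2)
surrogate = lambda x: np.polyval(coeffs, x)

# 3) The GA evaluates only the surrogate, never the simulation.
pop = rng.uniform(0.0, 1.0, size=40)
for _ in range(60):
    fitness = surrogate(pop)
    parents = pop[np.argsort(fitness)][-20:]          # truncation selection
    kids = (rng.choice(parents, 20) + rng.choice(parents, 20)) / 2  # crossover
    kids += rng.normal(0.0, 0.02, 20)                 # mutation
    pop = np.concatenate([parents, np.clip(kids, 0.0, 1.0)])

best = pop[np.argmax(surrogate(pop))]
```

Keeping the parents in the next population (elitism) guarantees the best surrogate fitness never degrades from one generation to the next.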

  19. Introduction to optimal control theory

    International Nuclear Information System (INIS)

    Agrachev, A.A.

    2002-01-01

    These are lecture notes of the introductory course in Optimal Control theory treated from the geometric point of view. Optimal Control Problem is reduced to the study of controls (and corresponding trajectories) leading to the boundary of attainable sets. We discuss Pontryagin Maximum Principle, basic existence results, and apply these tools to concrete simple optimal control problems. Special sections are devoted to the general theory of linear time-optimal problems and linear-quadratic problems. (author)

  20. Encyclopedia of optimization

    CERN Document Server

    Pardalos, Panos

    2001-01-01

    Optimization problems are widespread in the mathematical modeling of real world systems and their applications arise in all branches of science, applied science and engineering. The goal of the Encyclopedia of Optimization is to introduce the reader to a complete set of topics in order to show the spectrum of recent research activities and the richness of ideas in the development of theories, algorithms and the applications of optimization. It is directed to a diverse audience of students, scientists, engineers, decision makers and problem solvers in academia, business, industry, and government.