Anacker, Tony; Hill, J Grant; Friedrich, Joachim
2016-04-21
Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.
Energy optimized Gaussian basis sets for the atoms Tl - Rn
International Nuclear Information System (INIS)
Faegri, K. Jr.
1987-01-01
Energy optimized Gaussian basis sets have been derived for the atoms Tl-Rn. Two sets are presented - a (20,16,10,6) set and a (22,17,13,8) set. The smaller sets yield atomic energies 107 to 123 mH above the numerical Hartree-Fock values, while the larger sets give energies 11 mH above the numerical results. Energy trends from the smaller sets indicate that reduced shielding by p-electrons may place a greater demand on the flexibility of the d- and f-orbital description for the lighter elements of the series.
International Nuclear Information System (INIS)
Kari, R.E.; Mezey, P.G.; Csizmadia, I.G.
1975-01-01
Expressions are given for calculating the energy gradient vector in the exponent space of Gaussian basis sets, and a technique to optimize orbital exponents using the method of conjugate gradients is described. The method is tested on the (9s5p) Gaussian basis space and optimum exponents are determined for the carbon atom. The analysis of the results shows that the calculated one-electron properties converge more slowly to their optimum values than the total energy converges to its optimum value. In addition, basis sets approximating the optimum total energy very well can still be markedly improved for the prediction of one-electron properties. For smaller basis sets, this improvement does not warrant the necessary expense.
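As a minimal, hypothetical sketch of this kind of exponent optimization (a hydrogen atom with three s-type Gaussians, not the paper's (9s5p) carbon case), one can minimize the variational ground-state energy over exponent space with a conjugate-gradient routine; the integrals below are the standard analytic ones for s-type Gaussians:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

def ground_state_energy(log_exps):
    """Variational H-atom ground-state energy (hartree) in a basis of
    s-type Gaussians exp(-a r^2), via the generalized eigenproblem
    H c = E S c with analytic overlap/kinetic/nuclear integrals."""
    a = np.exp(log_exps)                          # work in log space: a > 0
    asum = a[:, None] + a[None, :]
    S = (np.pi / asum) ** 1.5                     # overlap matrix
    T = 3.0 * a[:, None] * a[None, :] / asum * S  # kinetic energy
    V = -2.0 * np.pi / asum                       # electron-nucleus attraction
    return eigh(T + V, S, eigvals_only=True)[0]

# Conjugate-gradient optimization of three exponents, as in the abstract
res = minimize(ground_state_energy, np.log([0.1, 0.5, 2.5]), method="CG")
print(np.exp(res.x), res.fun)  # optimized exponents; E close to -0.497 Ha
```

The exact energy is -0.5 hartree; three optimized Gaussians recover roughly -0.497 hartree, while one-electron properties of the resulting orbital converge more slowly, as the abstract notes.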
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
Energy Technology Data Exchange (ETDEWEB)
Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)
2015-12-28
We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculation of bulk materials, namely, containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.
International Nuclear Information System (INIS)
Blanco, M.; Heller, E.J.
1985-01-01
A new Cartesian basis set is defined that is suitable for the representation of molecular vibration-rotation bound states. The Cartesian basis functions are superpositions of semiclassical states generated through the use of classical trajectories that conform to the intrinsic dynamics of the molecule. Although semiclassical input is employed, the method becomes ab initio through the standard matrix diagonalization variational method. Special attention is given to classical-quantum correspondences for angular momentum. In particular, it is shown that the use of semiclassical information preferentially leads to angular momentum eigenstates with magnetic quantum number |M| equal to the total angular momentum J. The present method offers a reliable technique for representing highly excited vibrational-rotational states where perturbation techniques are no longer applicable.
International Nuclear Information System (INIS)
Lazariev, A; Graveron-Demilly, D; Allouche, A-R; Aubert-Frécon, M; Fauvelle, F; Piotto, M; Elbayed, K; Namer, I-J; Van Ormondt, D
2011-01-01
High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS ¹H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular its pH. This hampers the accurate estimation of the metabolite concentrations, mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are no longer correct. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
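The core correction step described here — choosing the lag that maximizes the normalized cross-correlation between a simulated basis-set signal and the measured one — can be sketched in a few lines. This grid search over integer lags only illustrates the principle and is not the actual QM-QUEST implementation:

```python
import numpy as np

def best_shift(reference, signal, max_shift):
    """Return the integer lag (in samples) that maximizes the normalized
    cross-correlation between a basis-set signal and the measured one."""
    best, best_ncc = 0, -np.inf
    for lag in range(-max_shift, max_shift + 1):
        shifted = np.roll(signal, lag)
        ncc = np.dot(reference, shifted) / (
            np.linalg.norm(reference) * np.linalg.norm(shifted))
        if ncc > best_ncc:
            best, best_ncc = lag, ncc
    return best

# Toy example: a Lorentzian "metabolite" line, shifted right by 5 points
x = np.arange(512.0)
ref = 1.0 / (1.0 + ((x - 256) / 4.0) ** 2)
meas = np.roll(ref, 5)
print(best_shift(ref, meas, 20))  # -> -5 (undoes the +5-point shift)
```

In the real method the recovered lags translate into optimized chemical shift values for the metabolites in the basis set.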
Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.
Götz, Andreas W; Kollmar, Christian; Hess, Bernd A
2005-09-01
We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li-F, and Na-Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Varandas, A.J.C.
1980-01-01
A suggestion is made for using the zeroth-order exchange term, at the one-exchange level, in the perturbation development of the interaction energy as a criterion for optimizing the atomic basis sets in interatomic force calculations. The approach is illustrated for the case of two helium atoms. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V [Department of Quantum Chemistry, St. Petersburg State University, University Prospect 26, Stary Peterghof, St. Petersburg, 198504 (Russian Federation)], E-mail: re1973@re1973.spb.edu
2008-06-01
The results of LCAO DFT calculations of lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U₂N₃ and UN₂ are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the uranium atom relativistic effective small-core potential of the Stuttgart-Cologne group (60 electrons in the core). The calculations include the U atom basis set optimization. Powell, Hooke-Jeeves, conjugated gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of UN crystal with the experimental data; the change of the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in LCAO calculations of both the UN and U₂N₃ crystals; the UN₂ crystal has a semiconducting nature.
Energy Technology Data Exchange (ETDEWEB)
Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)
2015-05-15
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
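For reference, the A + B/L^α formula can be solved for the basis-set-limit energy in closed form from any two energies. The numbers below are invented for illustration, and α = 3 is just one common global choice; the paper's system-dependent scheme instead fits α per system from MP2 data:

```python
def cbs_two_point(e_lo, e_hi, l_lo, l_hi, alpha):
    """Invert E(L) = E_CBS + B / L**alpha for E_CBS, given two points
    (l_lo, e_lo) and (l_hi, e_hi) with cardinal numbers l_lo < l_hi."""
    w_lo, w_hi = l_lo ** alpha, l_hi ** alpha
    return (w_hi * e_hi - w_lo * e_lo) / (w_hi - w_lo)

# Made-up DZ (L=2) and TZ (L=3) correlation energies, in hartree
e_cbs = cbs_two_point(-0.270, -0.295, 2, 3, alpha=3.0)
print(e_cbs)  # about -0.3055: the estimate overshoots TZ, as it should
```

With l_lo = 2 and l_hi = 3 this reduces to the familiar E_CBS = (27·E_TZ − 8·E_DZ)/19.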
Accelerating GW calculations with optimal polarizability basis
Energy Technology Data Exchange (ETDEWEB)
Umari, P.; Stenuit, G. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); Qian, X.; Marzari, N. [Department of Materials Science and Engineering, MIT, Cambridge, MA (United States); Giacomazzi, L.; Baroni, S. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); SISSA - Scuola Internazionale Superiore di Studi Avanzati, Trieste (Italy)
2011-03-15
We present a method for accelerating GW quasi-particle (QP) calculations. This is achieved through the introduction of optimal basis sets for representing polarizability matrices. First the real-space products of Wannier-like orbitals are constructed and then optimal basis sets are obtained through singular value decomposition. Our method is validated by calculating the vertical ionization energies of the benzene molecule and the band structure of crystalline silicon. Its potentialities are illustrated by calculating the QP spectrum of a model structure of vitreous silica. Finally, we apply our method for studying the electronic structure properties of a model of quasi-stoichiometric amorphous silicon nitride and of its point defects. (Copyright 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
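The construction — collect real-space orbital products as columns of a matrix and keep the dominant left singular vectors — can be sketched with random stand-in data. The sizes and the truncation threshold here are assumptions; the actual method builds the products from Wannier-like orbitals on a real-space grid:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the matrix of real-space orbital products: rows are grid
# points, columns are products. Built as a rank-6 matrix on purpose.
products = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 40))

# Optimal basis = left singular vectors above a singular-value threshold
u, s, _ = np.linalg.svd(products, full_matrices=False)
keep = s > 1e-8 * s[0]
basis = u[:, keep]
print(basis.shape)  # (200, 6): only the six dominant directions survive
```

The same truncation level then controls the accuracy/cost trade-off of the reduced polarizability representation.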
Conductance calculations with a wavelet basis set
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel
2003-01-01
We present a method based on density functional theory (DFT) for calculating the conductance of a phase-coherent system. The metallic contacts and the central region where the electron scattering occurs are treated on the same footing, taking their full atomic and electronic structure into account. The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way…
Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo
2010-05-01
The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n = 30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred as test endpoint over observer-dependent, discontinuous responses. Control size increase after 48 h incubation at 20 °C must meet an acceptability criterion of 218 µm. In order to avoid false positives, minimum values of 32‰ salinity, pH 7, and 2 mg/L oxygen, and a maximum of 40 µg/L NH₃ (NOEC), are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 °C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
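The degree-day correction mentioned at the end amounts to accumulating temperature excess above the 12 °C developmental threshold; a minimal sketch, assuming one temperature reading per day:

```python
def degree_days(temperatures_c, threshold_c=12.0):
    """Accumulated degree-days above the developmental threshold,
    given one temperature reading (deg C) per day."""
    return sum(max(0.0, t - threshold_c) for t in temperatures_c)

# Two days at 20 deg C accumulate 2 * (20 - 12) = 16 degree-days
print(degree_days([20.0, 20.0]))  # -> 16.0
```

Dividing a measured size increase by the accumulated degree-days (rather than by elapsed time) normalizes in situ growth rates across temperature regimes.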
Some considerations about Gaussian basis sets for electric property calculations
Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.
Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) for the atoms from H to Ar were reported. In this work, with the objective of having a better description of polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions that were chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, electric dipole moment and static polarizability for a sample of molecules were evaluated. Comparison with experimental data and results obtained with a similar size basis set, whose diffuse functions were optimized for the ground state energy of the anion, was done.
Feller, David; Dixon, David A
2018-03-08
Two recent papers in this journal called into question the suitability of the correlation consistent basis sets for density functional theory (DFT) calculations, because the sets were designed for correlated methods such as configuration interaction, perturbation theory, and coupled cluster theory. These papers focused on the ability of the correlation consistent and other basis sets to reproduce total energies, atomization energies, and dipole moments obtained from "quasi-exact" multiwavelet results. Undesirably large errors were observed for the correlation consistent basis sets. One of the papers argued that basis sets specifically optimized for DFT methods were "essential" for obtaining high accuracy. In this work we re-examined the performance of the correlation consistent basis sets by resolving problems with the previous calculations and by making more appropriate basis set choices for the alkali and alkaline-earth metals and second-row elements. When this is done, the statistical errors with respect to the benchmark values and with respect to DFT optimized basis sets are greatly reduced, especially in light of the relatively large intrinsic error of the underlying DFT method. When judged with respect to high-quality Feller-Peterson-Dixon coupled cluster theory atomization energies, the PBE0 DFT method used in the previous studies exhibits a mean absolute deviation more than a factor of 50 larger than the quintuple zeta basis set truncation error.
Setting clear expectations for safety basis development
International Nuclear Information System (INIS)
MORENO, M.R.
2003-01-01
DOE-RL has set clear expectations for a cost-effective approach for achieving compliance with the Nuclear Safety Management requirements (10 CFR 830, Nuclear Safety Rule) which will ensure long-term benefit to Hanford. To facilitate implementation of these expectations, tools were developed to streamline and standardize safety analysis and safety document development, resulting in a shorter and more predictable DOE approval cycle. A Hanford Safety Analysis and Risk Assessment Handbook (SARAH) was issued to standardize methodologies for development of safety analyses. A Microsoft Excel spreadsheet (RADIDOSE) was issued for the evaluation of radiological consequences for accident scenarios often postulated for Hanford. A standard Site Documented Safety Analysis (DSA) detailing the safety management programs was issued for use as a means of compliance with a majority of 3009 Standard chapters. An in-process review was developed between DOE and the Contractor to facilitate DOE approval and provide early course correction. As a result of setting expectations and providing safety analysis tools, the four Hanford Site waste management nuclear facilities were able to integrate into one Master Waste Management Documented Safety Analysis (WM-DSA).
Petruzielo, F R; Toulouse, Julien; Umrigar, C J
2011-02-14
A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.
Groebner basis, resultants and the generalized Mandelbrot set
Energy Technology Data Exchange (ETDEWEB)
Geum, Young Hee [Centre of Research for Computational Sciences and Informatics in Biology, Bioindustry, Environment, Agriculture and Healthcare, University of Malaya, 50603 Kuala Lumpur (Malaysia)], E-mail: conpana@empal.com; Hare, Kevin G. [Department of Pure Mathematics, University of Waterloo, Waterloo, Ont., N2L 3G1 (Canada)], E-mail: kghare@math.uwaterloo.ca
2009-10-30
This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.
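A toy version of both computations, for the quadratic map z → z² + c: eliminating z from the fixed-point condition and the multiplier condition f′(z) = −1 recovers the period-1 to period-2 bifurcation point c = −3/4. This sketch uses SymPy, not the authors' code, and handles only the simplest bifurcation:

```python
import sympy as sp

z, c = sp.symbols("z c")
f = z**2 + c
p = f - z              # fixed-point condition: z^2 + c = z
m = sp.diff(f, z) + 1  # multiplier f'(z) = 2z equals -1 at the bifurcation

# Eliminate z with a resultant: the root of the result is the bifurcation
print(sp.resultant(p, m, z))      # -> 4*c + 3, i.e. c = -3/4
print(sp.groebner([p, m], z, c))  # Groebner basis pins down both z and c
```

The generalized Mandelbrot set z → z^d + c works the same way, with higher-degree polynomials feeding the resultant and Groebner computations.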
Localized atomic basis set in the projector augmented wave method
DEFF Research Database (Denmark)
Larsen, Ask Hjorth; Vanin, Marco; Mortensen, Jens Jørgen
2009-01-01
We present an implementation of localized atomic-orbital basis sets in the projector augmented wave (PAW) formalism within the density-functional theory. The implementation in the real-space GPAW code provides a complementary basis set to the accurate but computationally more demanding grid...
The Neural Basis of Optimism and Pessimism
Hecht, David
2013-01-01
Our survival and wellness require a balance between optimism and pessimism. Undue pessimism makes life miserable; however, excessive optimism can lead to dangerously risky behaviors. A review and synthesis of the literature on the neurophysiology subserving these two worldviews suggests that optimism and pessimism are differentially associated with the two cerebral hemispheres. High self-esteem, a cheerful attitude that tends to look at the positive aspects of a given situation, as well as an...
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Directory of Open Access Journals (Sweden)
Khang Jie Liew
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
Energy Technology Data Exchange (ETDEWEB)
Feller, D.F.
1979-01-01
The behavior of the two exponential parameters in an even-tempered Gaussian basis set is investigated as the set optimally approaches an integral transform representation of the radial portion of atomic and molecular orbitals. This approach permits a highly accurate assessment of the Hartree-Fock limit for atoms and molecules.
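For context, an even-tempered set is the geometric progression ζ_k = α·β^k, which is why only the two parameters α and β need to be optimized; a trivial sketch with assumed parameter values:

```python
import numpy as np

def even_tempered_exponents(alpha, beta, n):
    """Even-tempered exponents zeta_k = alpha * beta**k, k = 0..n-1:
    the whole set is controlled by just two parameters."""
    return alpha * beta ** np.arange(n)

print(even_tempered_exponents(0.05, 3.0, 6))
# geometric progression: 0.05, 0.15, 0.45, 1.35, 4.05, 12.15
```

Enlarging n while retuning α and β is what lets such sets approach the integral-transform (and hence Hartree-Fock) limit systematically.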
Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L
2017-10-07
We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.
Optimal stimulation as theoretical basis of hyperactivity.
Zentall, Sydney
1975-07-01
Current theory and practice in the clinical and educational management of hyperactive children recommend reduction of environmental stimulation, assuming hyperactive and distractible behaviors to be due to overstimulation. This paper reviews research suggesting that hyperactive behavior may result from a homeostatic mechanism that functions to increase stimulation for a child experiencing insufficient sensory stimulation. It is suggested that the effectiveness of drug and behavior therapies, as well as evidence from the field of sensory deprivation, further supports the theory of a homeostatic mechanism that attempts to optimize sensory input.
Plumley, Joshua A; Dannenberg, J J
2011-06-01
We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D, and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-density functional theory (non-DFT) molecular orbital (MO) calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise (CP) corrected PES. The calculated interaction energies (ΔEs) with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, because of error compensation, the smaller basis sets gave the best results (in comparison to experimental and high-level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. As many applications are complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: (1) D95(d,p) with B3LYP, B97D, M06, or MPWB1K; (2) 6-311G(d,p) with B3LYP; (3) D95++(d,p) with B3LYP, B97D, or MPWB1K; (4) 6-311++G(d,p) with B3LYP or B97D; and (5) aug-cc-pVDZ with M05-2X, M06-2X, or X3LYP. Copyright © 2011 Wiley Periodicals, Inc.
Optimization of the variational basis in the three body problem
International Nuclear Information System (INIS)
Simenog, I.V.; Pushkash, O.M.; Bestuzheva, A.B.
1995-01-01
A procedure for variational oscillator basis optimization is proposed for calculating the energy spectra of three-body systems. A hierarchy of basis functions is derived, and the energies of the ground and excited states of three gravitating particles are obtained with high accuracy. 12 refs
Molecular basis sets - a general similarity-based approach for representing chemical spaces.
Raghavendra, Akshay S; Maggiora, Gerald M
2007-01-01
A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal, but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties, including the average squared error and average norm of molecular vectors represented in the space: the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size, and several comparative distance measures such as Spearman rank correlation and Kruskal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent, nonetheless, behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed providing that the appropriate inner products can be directly
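The symmetric orthogonalization step maps the similarity (inner-product) matrix S to S^(-1/2). A minimal numerical sketch, with a hypothetical three-molecule similarity matrix standing in for real similarity data:

```python
import numpy as np

def lowdin_orthogonalize(S):
    """Symmetric (Loewdin) orthogonalization: return X = S^(-1/2).
    The columns of X define orthonormal combinations that stay as close
    as possible to the original nonorthogonal basis vectors."""
    w, V = np.linalg.eigh(S)              # S is symmetric positive definite
    return V @ np.diag(w ** -0.5) @ V.T

# Hypothetical similarity matrix: unit self-similarity on the diagonal,
# nonzero pairwise similarities off the diagonal.
S = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
X = lowdin_orthogonalize(S)
# X.T @ S @ X equals the identity: the transformed basis is orthonormal.
```

Of the orthogonalizations that diagonalize S, the symmetric choice is the one that minimally distorts the original "molecule-like" vectors, which is why it is used here.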
Relativistic double-zeta, triple-zeta, and quadruple-zeta basis sets for the lanthanides La–Lu
Dyall, K.G.; Gomes, A.S.P.; Visscher, L.
2010-01-01
Relativistic basis sets of double-zeta, triple-zeta, and quadruple-zeta quality have been optimized for the lanthanide elements La-Lu. The basis sets include SCF exponents for the occupied spinors and for the 6p shell, exponents of correlating functions for the valence shells (4f, 5d and 6s) and the
Hellweg, Arnim; Rappoport, Dmitrij
2015-01-14
We report optimized auxiliary basis sets for use with the Karlsruhe segmented contracted basis sets including moderately diffuse basis functions (Rappoport and Furche, J. Chem. Phys., 2010, 133, 134105) in resolution-of-the-identity (RI) post-self-consistent field (post-SCF) computations for the elements H-Rn (except lanthanides). The errors of the RI approximation using optimized auxiliary basis sets are analyzed on a comprehensive test set of molecules containing the most common oxidation states of each element and do not exceed those of the corresponding unaugmented basis sets. During these studies, unsatisfactory performance of the def2-SVP and def2-QZVPP auxiliary basis sets for barium was found, and improved sets are provided. We establish the versatility of the def2-SVPD, def2-TZVPPD, and def2-QZVPPD basis sets for RI-MP2 and RI-CC (coupled-cluster) energy and property calculations. The influence of diffuse basis functions on correlation energy, basis set superposition error, atomic electron affinity, dipole moments, and computational timings is evaluated at different levels of theory using benchmark sets and showcase examples.
Huang, Hui; Ning, Jixian
2017-01-01
Prederivatives play an important role in the study of set optimization problems. First, we establish several existence theorems for prederivatives of γ-paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solutions of set optimization problems.
Optimal Piecewise Linear Basis Functions in Two Dimensions
Energy Technology Data Exchange (ETDEWEB)
Brooks III, E D; Szoke, A
2009-01-26
We use a variational approach to optimize the center point coefficients associated with the piecewise linear basis functions introduced by Stone and Adams [1], for polygonal zones in two Cartesian dimensions. Our strategy provides optimal center point coefficients, as a function of the location of the center point, by minimizing the error induced when the basis function interpolation is used for the solution of the time independent diffusion equation within the polygonal zone. By using optimal center point coefficients, one expects to minimize the errors that occur when these basis functions are used to discretize diffusion equations, or transport equations in optically thick zones (where they approach the solution of the diffusion equation). Our optimal center point coefficients satisfy the requirements placed upon the basis functions for any location of the center point. We also find that the location of the center point can be optimized, but this requires numerical calculations. Curiously, the optimum center point location is independent of the values of the dependent variable on the corners only for quadrilaterals.
Basis set convergence on static electric dipole polarizability calculations of alkali-metal clusters
International Nuclear Information System (INIS)
Souza, Fabio A. L. de; Jorge, Francisco E.
2013-01-01
A hierarchical sequence of all-electron segmented contracted basis sets of double, triple and quadruple zeta valence qualities plus polarization functions augmented with diffuse functions for the atoms from H to Ar was constructed. A systematic study of basis sets required to obtain reliable and accurate values of static dipole polarizabilities of lithium and sodium clusters (n = 2, 4, 6 and 8) at their optimized equilibrium geometries is reported. Three methods are examined: Hartree-Fock (HF), second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT). By direct calculations or by fitting the directly calculated values through one extrapolation scheme, estimates of the HF, MP2 and DFT complete basis set limits were obtained. Comparison with experimental and theoretical data reported previously in the literature is made (author)
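The abstract does not name the extrapolation scheme used to estimate the complete basis set (CBS) limits. A common choice for such hierarchical sequences is the two-point inverse-power formula E(X) = E_CBS + A / X^3, sketched here as an illustration (the formula and the sample energies are assumptions, not the paper's data):

```python
def cbs_extrapolate(e_x, e_y, x, y, power=3):
    """Two-point extrapolation assuming E(X) = E_CBS + A / X**power,
    where X is the cardinal number of the basis (2, 3, 4 for double,
    triple, quadruple zeta). Eliminating A between the two equations
    for (x, e_x) and (y, e_y) gives E_CBS."""
    return (x ** power * e_x - y ** power * e_y) / (x ** power - y ** power)

# Hypothetical triple/quadruple zeta energies converging to -1.0 hartree:
e_cbs = cbs_extrapolate(-1.0 + 0.5 / 27, -1.0 + 0.5 / 64, 3, 4)
```

With exact inverse-cubic convergence the two-point formula recovers the limit exactly; with real data it gives an estimate whose quality improves with the cardinal numbers used.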
Basis set convergence on static electric dipole polarizability calculations of alkali-metal clusters
Energy Technology Data Exchange (ETDEWEB)
Souza, Fabio A. L. de; Jorge, Francisco E., E-mail: jorge@cce.ufes.br [Departamento de Fisica, Universidade Federal do Espirito Santo, 29060-900 Vitoria-ES (Brazil)
2013-07-15
A hierarchical sequence of all-electron segmented contracted basis sets of double, triple and quadruple zeta valence qualities plus polarization functions augmented with diffuse functions for the atoms from H to Ar was constructed. A systematic study of basis sets required to obtain reliable and accurate values of static dipole polarizabilities of lithium and sodium clusters (n = 2, 4, 6 and 8) at their optimized equilibrium geometries is reported. Three methods are examined: Hartree-Fock (HF), second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT). By direct calculations or by fitting the directly calculated values through one extrapolation scheme, estimates of the HF, MP2 and DFT complete basis set limits were obtained. Comparison with experimental and theoretical data reported previously in the literature is made (author)
Quiney, H. M.; Glushkov, V. N.; Wilson, S.; Sabin,; Brandas, E
2001-01-01
A comparison is made of the accuracy achieved in finite difference and finite basis set approximations to the Dirac equation for the ground state of the hydrogen molecular ion. The finite basis set calculations are carried out using a distributed basis set of Gaussian functions the exponents and
Massobrio, C
2003-01-01
Density functional theory, in combination with a) a careful choice of the exchange-correlation part of the total energy and b) localized basis sets for the electronic orbitals, has become the method of choice for calculating the exchange couplings in magnetic molecular complexes. Orbital expansion on plane waves can be seen as an alternative basis set especially suited to allow optimization of newly synthesized materials of unknown geometries. However, little is known on the predictive power of this scheme to yield quantitative values for exchange coupling constants J as small as a few hundredths of eV (50-300 cm⁻¹). We have used density functional theory and a plane waves basis set to calculate the exchange couplings J of three homodinuclear Cu-based molecular complexes with experimental values ranging from +40 cm⁻¹ to -300 cm⁻¹. The plane waves basis set proves as accurate as the localized basis set, thereby suggesting that this approach can be reliably employed to predict and r...
International Nuclear Information System (INIS)
Massobrio, C.; Ruiz, E.
2003-01-01
Density functional theory, in combination with a) a careful choice of the exchange-correlation part of the total energy and b) localized basis sets for the electronic orbitals, has become the method of choice for calculating the exchange couplings in magnetic molecular complexes. Orbital expansion on plane waves can be seen as an alternative basis set especially suited to allow optimization of newly synthesized materials of unknown geometries. However, little is known on the predictive power of this scheme to yield quantitative values for exchange coupling constants J as small as a few hundredths of eV (50-300 cm⁻¹). We have used density functional theory and a plane waves basis set to calculate the exchange couplings J of three homodinuclear Cu-based molecular complexes with experimental values ranging from +40 cm⁻¹ to -300 cm⁻¹. The plane waves basis set proves as accurate as the localized basis set, thereby suggesting that this approach can be reliably employed to predict and rationalize the magnetic properties of molecular-based materials. (author)
Optimal choice of basis functions in the linear regression analysis
International Nuclear Information System (INIS)
Khotinskij, A.M.
1988-01-01
The problem of optimal choice of basis functions in linear regression analysis is investigated. A step algorithm is suggested, together with an estimate of its efficiency that holds for a finite number of measurements. Conditions providing a probability of correct choice close to 1 are formulated. Application of the step algorithm to the analysis of decay curves is substantiated. 8 refs
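The abstract gives no details of the step algorithm, but the general idea of stepwise basis selection can be illustrated with a generic greedy forward-selection sketch (this is a textbook scheme, not the paper's specific algorithm):

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy stepwise basis selection: at each step, add the column of X
    (a candidate basis function evaluated at the measurement points) that
    most reduces the least-squares residual."""
    chosen = []
    for _ in range(k):
        best_j, best_res = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = X[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            res = np.sum((cols @ coef - y) ** 2)
            if res < best_res:
                best_j, best_res = j, res
        chosen.append(best_j)
    return chosen

# Noise-free example: y is built from columns 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2]
picked = forward_select(X, y, 2)
```

For decay-curve analysis the columns of X would be candidate exponentials sampled at the measurement times, and the stopping rule (here a fixed k) is where a statistical efficiency estimate of the kind the paper describes would enter.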
Basis set approach in the constrained interpolation profile method
International Nuclear Information System (INIS)
Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.
2003-07-01
We propose a simple polynomial basis set that is easily extendable to any desired higher-order accuracy. The method is based on the Constrained Interpolation Profile (CIP) method, and the profile is chosen so that the subgrid-scale solution approaches the real solution through the constraints derived from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, the solution quickly converges. Third- and fifth-order polynomials are tested on the one-dimensional Schroedinger equation and are shown to give solutions a few orders of magnitude more accurate than conventional methods for the lower-lying eigenstates. (author)
Zhu, Wuming; Trickey, S B
2017-12-28
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
Zhu, Wuming; Trickey, S. B.
2017-12-01
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
International Nuclear Information System (INIS)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin
2016-01-01
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
Basis set expansion for inverse problems in plasma diagnostic analysis
Jones, B.; Ruiz, C. L.
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
Radiobiological basis for setting neutron radiation safety standards
International Nuclear Information System (INIS)
Straume, T.
1985-01-01
Present neutron standards, adopted more than 20 yr ago from a weak radiobiological data base, have been in doubt for a number of years and are currently under challenge. Moreover, recent dosimetric re-evaluations indicate that Hiroshima neutron doses may have been much lower than previously thought, suggesting that direct data for neutron-induced cancer in humans may in fact not be available. These recent developments make it urgent to determine the extent to which neutron cancer risk in man can be estimated from data that are available. Two approaches are proposed here that are anchored in particularly robust epidemiological and experimental data and appear most likely to provide reliable estimates of neutron cancer risk in man. The first approach uses gamma-ray dose-response relationships for human carcinogenesis, available from Nagasaki (Hiroshima data are also considered), together with highly characterized neutron and gamma-ray data for human cytogenetics. When tested against relevant experimental data, this approach either adequately predicts or somewhat overestimates neutron tumorigenesis (and mutagenesis) in animals. The second approach also uses the Nagasaki gamma-ray cancer data, but together with neutron RBEs from animal tumorigenesis studies. Both approaches give similar results and provide a basis for setting neutron radiation safety standards. They appear to be an improvement over previous approaches, including those that rely on highly uncertain maximum neutron RBEs and unnecessary extrapolations of gamma-ray data to very low doses. Results suggest that, at the presently accepted neutron dose limit of 0.5 rad/yr, the cancer mortality risk to radiation workers is not very different from accidental mortality risks to workers in various nonradiation occupations
Basis set expansion for inverse problems in plasma diagnostic analysis
Energy Technology Data Exchange (ETDEWEB)
Jones, B.; Ruiz, C. L. [Sandia National Laboratories, PO Box 5800, Albuquerque, New Mexico 87185 (United States)
2013-07-15
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20–25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
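The structure of the method (though not the paper's specific basis functions or instrument kernels) can be illustrated with a toy linear forward model: expand the unknown source in basis functions, forward-transform each basis function with the instrument operator, then fit the expansion coefficients to the measured data by least squares. Everything below (grid, Gaussian basis, cumulative-integration operator) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid, and a set of Gaussian basis functions for the unknown source.
n_grid, n_basis = 100, 12
r = np.linspace(0.0, 1.0, n_grid)
centers = np.linspace(0.0, 1.0, n_basis)
basis = np.exp(-(((r[:, None] - centers[None, :]) / 0.08) ** 2))

# Toy forward operator: cumulative integration stands in for the
# instrument transform (e.g. an Abel projection or time-of-flight kernel).
F = np.tril(np.ones((n_grid, n_grid))) / n_grid

true = np.exp(-(((r - 0.5) / 0.1) ** 2))          # synthetic source
data = F @ true + rng.normal(0.0, 1e-5, n_grid)   # noisy measurement

# Fit coefficients against the forward-transformed basis, then
# reconstruct the source on the original grid.
A = F @ basis
coef, *_ = np.linalg.lstsq(A, data, rcond=None)
recovered = basis @ coef
```

The number and smoothness of the basis functions act as regularization: fewer, broader functions suppress noise amplification at the cost of resolution, which is exactly the noise-resolution trade-off the abstract discusses.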
First-principle modelling of forsterite surface properties: Accuracy of methods and basis sets.
Demichelis, Raffaella; Bruno, Marco; Massaro, Francesco R; Prencipe, Mauro; De La Pierre, Marco; Nestola, Fabrizio
2015-07-15
The seven main crystal surfaces of forsterite (Mg2SiO4) were modeled using various Gaussian-type basis sets, and several formulations for the exchange-correlation functional within the density functional theory (DFT). The recently developed pob-TZVP basis set provides the best results for all properties that are strongly dependent on the accuracy of the wavefunction. Convergence on the structure and on the basis set superposition error-corrected surface energy can be reached also with poorer basis sets. The effect of adopting different DFT functionals was assessed. All functionals give the same stability order for the various surfaces. Surfaces do not exhibit any major structural differences when optimized with different functionals, except for higher energy orientations where major rearrangements occur around the Mg sites at the surface or subsurface. When dispersion is not accounted for, all functionals provide similar surface energies. The inclusion of empirical dispersion raises the energy of all surfaces by a nearly systematic value proportional to the scaling factor s of the dispersion formulation. An estimate of the surface energy is provided through adopting C6 coefficients that are more suitable than the standard ones to describe O-O interactions in minerals. A 2 × 2 supercell of the most stable surface (010) was optimized. No surface reconstruction was observed. The resulting structure and surface energy show no difference with respect to those obtained when using the primitive cell. This result validates the (010) surface model here adopted, which will serve as a reference for future studies on adsorption and reactivity of water and carbon dioxide at this interface. © 2015 Wiley Periodicals, Inc.
MRD-CI potential surfaces using balanced basis sets. IV. The H2 molecule and the H3 surface
International Nuclear Information System (INIS)
Wright, J.S.; Kruus, E.
1986-01-01
The utility of midbond functions in molecular calculations was tested in two cases where the correct results are known: the H2 potential curve and the collinear H3 potential surface. For H2, a variety of basis sets both with and without bond functions was compared to the exact nonrelativistic potential curve of Kolos and Wolniewicz [J. Chem. Phys. 43, 2429 (1965)]. It was found that optimally balanced basis sets at two levels of quality were the double zeta single polarization plus sp bond function basis (BF1) and the triple zeta double polarization plus two sets of sp bond function basis (BF2). These gave bond dissociation energies De = 4.7341 and 4.7368 eV, respectively (expt. 4.7477 eV). Four basis sets were tested for basis set superposition errors, which were found to be small relative to basis set incompleteness and therefore did not affect any conclusions regarding basis set balance. Basis sets BF1 and BF2 were used to construct potential surfaces for collinear H3, along with the corresponding basis sets DZ*P and TZ*PP which contain no bond functions. Barrier heights of 12.52, 10.37, 10.06, and 9.96 kcal/mol were obtained for basis sets DZ*P, TZ*PP, BF1, and BF2, respectively, compared to an estimated limiting value of 9.60 kcal/mol. Difference maps, force constants, and relative rms deviations show that the bond functions improve the surface shape as well as the barrier height
Optimal timing for intravenous administration set replacement.
Gillies, D; O'Riordan, L; Wallen, M; Morrison, A; Rankin, K; Nagy, S
2005-10-19
Administration of intravenous therapy is a common occurrence within the hospital setting. Routine replacement of administration sets has been advocated to reduce intravenous infusion contamination. If decreasing the frequency of changing intravenous administration sets does not increase infection rates, a change in practice could result in considerable cost savings. The objective of this review was to identify the optimal interval for the routine replacement of intravenous administration sets when infusate or parenteral nutrition (lipid and non-lipid) solutions are administered to people in hospital via central or peripheral venous catheters. We searched The Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, EMBASE: all from inception to February 2004; reference lists of identified trials, and bibliographies of published reviews. We also contacted researchers in the field. We did not have a language restriction. We included all randomized or quasi-randomized controlled trials addressing the frequency of replacing intravenous administration sets when parenteral nutrition (lipid and non-lipid containing solutions) or infusions (excluding blood) were administered to people in hospital via a central or peripheral catheter. Two authors assessed all potentially relevant studies. We resolved disagreements between the two authors by discussion with a third author. We collected data for the following outcomes: infusate contamination; infusate-related bloodstream infection; catheter contamination; catheter-related bloodstream infection; all-cause bloodstream infection; and all-cause mortality. We identified 23 references for review. We excluded eight of these studies: five because they did not fit the inclusion criteria and three because of inadequate data. We extracted data from the remaining 15 references (13 studies) with 4783 participants. We conclude that there is no evidence that changing intravenous administration sets more often than every 96 hours
Dynamical pruning of static localized basis sets in time-dependent quantum dynamics
McCormack, D.A.
2006-01-01
We investigate the viability of dynamical pruning of localized basis sets in time-dependent quantum wave packet methods. Basis functions that have a very small population at any given time are removed from the active set. The basis functions themselves are time independent, but the set of active
Correlation consistent basis sets for actinides. II. The atoms Ac and Np-Lr.
Feng, Rulin; Peterson, Kirk A
2017-08-28
New correlation consistent basis sets optimized using the all-electron third-order Douglas-Kroll-Hess (DKH3) scalar relativistic Hamiltonian are reported for the actinide elements Ac and Np through Lr. These complete the series of sets reported previously for Th-U [K. A. Peterson, J. Chem. Phys. 142, 074105 (2015); M. Vasiliu et al., J. Phys. Chem. A 119, 11422 (2015)]. The new sets range in size from double- to quadruple-zeta and encompass both those optimized for valence (6s6p5f7s6d) and outer-core electron correlations (valence + 5s5p5d). The final sets have been contracted for both the DKH3 and eXact 2-component (X2C) Hamiltonians, yielding cc-pVnZ-DK3/cc-pVnZ-X2C sets for valence correlation and cc-pwCVnZ-DK3/cc-pwCVnZ-X2C sets for outer-core correlation (n = D, T, Q in each case). In order to test the effectiveness of the new basis sets, both atomic and molecular benchmark calculations have been carried out. In the first case, the first three atomic ionization potentials (IPs) of all the actinide elements Ac-Lr have been calculated using the Feller-Peterson-Dixon (FPD) composite approach, primarily with the multireference configuration interaction (MRCI) method. Excellent convergence towards the respective complete basis set (CBS) limits is achieved with the new sets, leading to good agreement with experiment, where available, after accurately accounting for spin-orbit effects using the 4-component Dirac-Hartree-Fock method. For a molecular test, the IP and atomization energy (AE) of PuO2 have been calculated, also using the FPD method, but with a coupled cluster approach and with spin-orbit coupling accounted for using the 4-component MRCI. The present calculations yield an IP0 for PuO2 of 159.8 kcal/mol, which is in excellent agreement with the experimental electron transfer bracketing value of 162 ± 3 kcal/mol. Likewise, the calculated 0 K AE of 305.6 kcal/mol is in very good agreement with the currently accepted experimental value of 303.1 ± 5 kcal
On the performance of atomic natural orbital basis sets: A full configuration interaction study
International Nuclear Information System (INIS)
Illas, F.; Ricart, J.M.; Rubio, J.; Bagus, P.S.
1990-01-01
The performance of atomic natural orbital (ANO) basis sets has been studied by comparing self-consistent field (SCF) and full configuration interaction (CI) results obtained for the first row atoms and hydrides. The ANO results have been compared with those obtained using a segmented basis set containing the same number of contracted basis functions. The total energies obtained with the ANO basis sets are always lower than those obtained using the segmented one. However, for the hydrides, the differential electronic correlation energy obtained with the ANO basis set may be smaller than that recovered with the segmented set. We relate this poorer differential correlation energy for the ANO basis set to the fact that only one contracted d function is used for the ANO and segmented basis sets
Generation of Optimal Basis Functions for Reconstruction of Power Distribution
Energy Technology Data Exchange (ETDEWEB)
Park, Moonghu [Sejong Univ., Seoul (Korea, Republic of)
2014-05-15
This study proposes GMDH (the group method of data handling) to find not only the best functional form but also the optimal parameters that describe the power distribution most accurately. A total of 1,060 cases of axially 1-dimensional, 20-node core power distributions are generated by a 3-dimensional core analysis code, covering BOL to EOL core burnup histories, to validate the method. Axial five-point box powers at in-core detectors are considered as measurements. The reconstructed axial power shapes using the GMDH method are compared to the reference power shapes. The results show that the proposed method is very robust and accurate compared with the spline fitting method. It is shown that GMDH analysis can give optimal basis functions for core power shape reconstruction. The in-core measurements are the 5 detector snapshots, and the 20-node power distribution is successfully reconstructed. The effectiveness of the method is demonstrated by comparing the results of spline fitting for BOL, saddle and top-skewed power shapes.
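The reconstruction step described in the abstract above can be sketched as a least-squares fit of box-averaged detector readings to a small basis expansion. This is a minimal illustration only: a low-order polynomial basis stands in for the GMDH-derived basis functions, and the 20-node geometry, detector windows, and reference power profile are all invented for the example.

```python
import numpy as np

# Reconstruct a 20-node axial power shape from 5 in-core detector
# "box power" readings by least-squares fitting a small set of basis
# functions (polynomials here, as a stand-in for a GMDH-derived basis).
nodes = np.linspace(0.0, 1.0, 20)                        # 20 axial nodes
true_shape = np.sin(np.pi * nodes) * (1 + 0.2 * nodes)   # assumed reference shape
true_shape /= true_shape.mean()                          # normalize to unit mean

# Five detectors, each averaging 4 consecutive nodes (a "box power").
boxes = [slice(4 * i, 4 * i + 4) for i in range(5)]
measured = np.array([true_shape[b].mean() for b in boxes])

# Basis functions evaluated at the nodes (polynomials of degree 0..3).
basis = np.vander(nodes, 4, increasing=True)             # shape (20, 4)
# Detector response matrix: box-average of each basis function.
A = np.array([basis[b].mean(axis=0) for b in boxes])     # shape (5, 4)

coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
reconstructed = basis @ coeffs
rms = np.sqrt(np.mean((reconstructed - true_shape) ** 2))
print(f"RMS reconstruction error: {rms:.4f}")
```

The key point of the sketch is that 5 box measurements suffice to pin down a 4-parameter basis expansion over 20 nodes; the paper's contribution is choosing the basis optimally rather than fixing it a priori.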
On the effects of basis set truncation and electron correlation in conformers of 2-hydroxy-acetamide
Szarecka, A.; Day, G.; Grout, P. J.; Wilson, S.
Ab initio quantum chemical calculations have been used to study the differences in energy between two gas phase conformers of the 2-hydroxy-acetamide molecule that possess intramolecular hydrogen bonding. In particular, rotation around the central C-C bond has been considered as a factor determining the structure of the hydrogen bond and stabilization of the conformer. Energy calculations include full geometry optimization using both the restricted matrix Hartree-Fock model and second-order many-body perturbation theory with a number of commonly used basis sets. The basis sets employed ranged from the minimal STO-3G set to 'split-valence' sets up to 6-31G. The effects of polarization functions were also studied. The results display a strong basis set dependence.
Vibration behavior optimization of planetary gear sets
Directory of Open Access Journals (Sweden)
Farshad Shakeri Aski
2014-12-01
Full Text Available This paper presents a global optimization method focused on planetary gear vibration reduction by means of tip relief profile modifications. A nonlinear dynamic model is used to study the vibration behavior. To find the optimal radius and amplitude, brute force optimization is used: this approach is straightforward but requires considerable computation power, as it evaluates all possible solutions and decides afterwards which one is best. Results show the influence of the optimal profile on planetary gear vibrations.
Multiple-scattering theory with a truncated basis set
International Nuclear Information System (INIS)
Zhang, X.; Butler, W.H.
1992-01-01
Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum l_max for which the higher angular momentum phase shifts, δ_l (l > l_max), are sufficiently small. Generally, the wave-function coefficients which are calculated from the secular equation are also truncated at l_max. Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentials
Optimal trading quantity integration as a basis for optimal portfolio management
Directory of Open Access Journals (Sweden)
Saša Žiković
2005-06-01
Full Text Available The author in this paper points out the reasons behind calculating and using optimal trading quantity in conjunction with Markowitz's Modern portfolio theory. In the opening part the author presents an example of calculating optimal weights using Markowitz's Mean-Variance approach, followed by an explanation of the basic logic behind optimal trading quantity. The use of optimal trading quantity is not limited to systems with Bernoulli outcomes, but can also be used when trading shares, futures, options etc. Optimal trading quantity points out two often-overlooked axioms: (1) a system with negative mathematical expectancy can never be transformed into a system with positive mathematical expectancy; (2) by missing the optimal trading quantity an investor can turn a system with positive expectancy into a negative one. Optimal trading quantity is that quantity which maximizes the geometric mean (growth function) of a particular system. To determine the optimal trading quantity for simpler systems, with a very limited number of outcomes, a set of Kelly's formulas is appropriate. In the conclusion a summary of the paper is presented.
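The "optimal trading quantity" for a Bernoulli system can be illustrated with a short numerical sketch: the growth-maximizing fraction found by a grid search over the geometric-mean growth rate coincides with the closed-form Kelly fraction. The win probability and payoff ratio below are illustrative, not taken from the paper.

```python
import numpy as np

# Optimal bet fraction f for a Bernoulli system: win b per unit staked with
# probability p, lose the stake with probability 1 - p.  The geometric-mean
# growth rate is g(f) = p*log(1 + b*f) + (1-p)*log(1 - f), maximized by the
# Kelly fraction f* = p - (1-p)/b.
p, b = 0.55, 1.0          # illustrative win probability and payoff ratio

def growth(f):
    return p * np.log1p(b * f) + (1 - p) * np.log1p(-f)

fs = np.linspace(0.0, 0.99, 10_000)
f_numeric = fs[np.argmax(growth(fs))]   # grid search over the growth rate
f_kelly = p - (1 - p) / b               # closed-form Kelly fraction
print(f"grid search: {f_numeric:.3f}, closed form: {f_kelly:.3f}")
```

This also makes the two "axioms" concrete: with p ≤ 0.5 and b = 1 the growth rate is negative for every f > 0 (no quantity rescues a negative-expectancy system), and betting well above f* drives g(f) negative even though the expectancy is positive.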
Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.
Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads
2018-06-27
We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparison with plane-wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.
International Nuclear Information System (INIS)
Chen Pingxing; Li Chengzu
2004-01-01
Nonlocality without entanglement is an interesting field. A manifestation of quantum nonlocality without entanglement is the possible local indistinguishability of orthogonal product states. In this paper we analyze the character of operators to distinguish the elements of a full product basis set in a multipartite system, and show that perfectly distinguishing these product bases requires only local projective measurements and classical communication, and that these measurements cannot damage each product basis. Employing these conclusions one can easily discuss the local distinguishability of the elements of any full product basis set. Finally we discuss the generalization of these results to the local distinguishability of the elements of an incomplete product basis set
New basis set for the prediction of the specific rotation in flexible biological molecules
DEFF Research Database (Denmark)
Baranowska-Łączkowska, Angelika; Łączkowski, Krzysztof Z.; Henriksen, Christian
2016-01-01
are compared to those obtained with the (d-)aug-cc-pVXZ (X = D, T and Q) basis sets of Dunning et al. The ORP values are in good overall agreement with the aug-cc-pVTZ results making the ORP a good basis set for routine TD-DFT optical rotation calculations of conformationally flexible molecules. The results...
Sulcal set optimization for cortical surface registration.
Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M
2010-04-15
Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
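The subset-selection criterion above can be sketched directly, assuming a known error covariance: model the per-curve registration errors as jointly Gaussian and pick the constraint subset whose Schur complement (the conditional covariance of the unconstrained curves given the constrained ones) has minimal trace. The covariance matrix below is synthetic; the paper estimates it from traced sulcal data.

```python
import numpy as np
from itertools import combinations

# Pick the N_C constraint curves that minimize the total conditional
# variance of the remaining curves, given a covariance S of per-curve errors.
rng = np.random.default_rng(0)
N, N_C = 6, 2
A = rng.normal(size=(N, N))
S = A @ A.T + N * np.eye(N)        # synthetic SPD covariance (illustrative)

def conditional_trace(S, idx):
    idx = list(idx)
    rest = [i for i in range(S.shape[0]) if i not in idx]
    S_rr = S[np.ix_(rest, rest)]
    S_rc = S[np.ix_(rest, idx)]
    S_cc = S[np.ix_(idx, idx)]
    # Schur complement: covariance of the rest conditioned on the chosen set.
    return np.trace(S_rr - S_rc @ np.linalg.solve(S_cc, S_rc.T))

best = min(combinations(range(N), N_C), key=lambda c: conditional_trace(S, c))
print("optimal constraint subset:", best)
```

For small N an exhaustive search over all subsets is feasible, as here; the exact selection strategy used in the paper may differ.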
Set-valued optimization an introduction with applications
Khan, Akhtar A; Zalinescu, Constantin
2014-01-01
Set-valued optimization is a vibrant and expanding branch of mathematics that deals with optimization problems where the objective map and/or the constraints maps are set-valued maps acting between certain spaces. Since set-valued maps subsumes single valued maps, set-valued optimization provides an important extension and unification of the scalar as well as the vector optimization problems. Therefore this relatively new discipline has justifiably attracted a great deal of attention in recent years. This book presents, in a unified framework, basic properties on ordering relations, solution c
Optimal timing for intravascular administration set replacement.
Ullman, Amanda J; Cooke, Marie L; Gillies, Donna; Marsh, Nicole M; Daud, Azlina; McGrail, Matthew R; O'Riordan, Elizabeth; Rickard, Claire M
2013-09-15
The tubing (administration set) attached to both venous and arterial catheters may contribute to bacteraemia and other infections. The rate of infection may be increased or decreased by routine replacement of administration sets. This review was originally published in 2005 and was updated in 2012. The objective of this review was to identify any relationship between the frequency with which administration sets are replaced and rates of microbial colonization, infection and death. We searched The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2012, Issue 6), MEDLINE (1950 to June 2012), CINAHL (1982 to June 2012), EMBASE (1980 to June 2012), reference lists of identified trials and bibliographies of published reviews. The original search was performed in February 2004. We also contacted researchers in the field. We applied no language restriction. We included all randomized or controlled clinical trials on the frequency of venous or arterial catheter administration set replacement in hospitalized participants. Two review authors assessed all potentially relevant studies. We resolved disagreements between the two review authors by discussion with a third review author. We collected data for seven outcomes: catheter-related infection; infusate-related infection; infusate microbial colonization; catheter microbial colonization; all-cause bloodstream infection; mortality; and cost. We pooled results from studies that compared different frequencies of administration set replacement; for instance, we pooled studies that compared replacement ≥ every 96 hours versus every 72 hours with studies that compared replacement ≥ every 48 hours versus every 24 hours. We identified 26 studies for this updated review, 10 of which we excluded; six did not fulfil the inclusion criteria and four did not report usable data. We extracted data from the remaining 18 references (16 studies) with 5001 participants: study designs included neonate and adult
Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory
Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.
1990-01-01
New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.
Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.
Saller, Maximilian A C; Habershon, Scott
2017-07-11
Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
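The Matching Pursuit step used above to prune the basis can be illustrated with a generic sketch: greedily select the dictionary element most correlated with the residual and subtract its contribution, stopping at a tolerance or a basis-size cap. The random overcomplete dictionary below stands in for the Gaussian wavepacket basis of the paper.

```python
import numpy as np

# Generic Matching Pursuit: represent y with as few dictionary columns as
# possible, as in the paper's periodic minimization of the basis size.
def matching_pursuit(D, y, tol=1e-6, max_atoms=None):
    D = D / np.linalg.norm(D, axis=0)      # unit-norm dictionary columns
    residual, chosen = y.astype(float).copy(), []
    while np.linalg.norm(residual) > tol and len(chosen) != max_atoms:
        # pick the column most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        chosen.append(k)
        residual -= (D[:, k] @ residual) * D[:, k]
    return chosen, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(40, 120))             # overcomplete random dictionary
y = D[:, 7] * 2.0 + D[:, 55] * 0.5         # signal built from two atoms
atoms, res = matching_pursuit(D, y, tol=1e-8, max_atoms=30)
print(len(atoms), np.linalg.norm(res))
```

Because the dictionary is overcomplete and non-orthogonal, the residual decays geometrically rather than terminating exactly; the `max_atoms` cap plays the role of the basis-size budget.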
Zaleśny, Robert; Baranowska-Łączkowska, Angelika; Medveď, Miroslav; Luis, Josep M
2015-09-08
In the present work, we perform an assessment of several property-oriented atomic basis sets in computing (hyper)polarizabilities with a focus on the vibrational contributions. Our analysis encompasses the Pol and LPol-ds basis sets of Sadlej and co-workers, the def2-SVPD and def2-TZVPD basis sets of Rappoport and Furche, and the ORP basis set of Baranowska-Łączkowska and Łączkowski. Additionally, we use the d-aug-cc-pVQZ and aug-cc-pVTZ basis sets of Dunning and co-workers to determine the reference estimates of the investigated electric properties for small- and medium-sized molecules, respectively. We combine these basis sets with ab initio post-Hartree-Fock quantum-chemistry approaches (including the coupled cluster method) to calculate electronic and nuclear relaxation (hyper)polarizabilities of carbon dioxide, formaldehyde, cis-diazene, and a medium-sized Schiff base. The primary finding of our study is that, among all studied property-oriented basis sets, only the def2-TZVPD and ORP basis sets yield nuclear relaxation (hyper)polarizabilities of small molecules with average absolute errors less than 5.5%. A similar accuracy for the nuclear relaxation (hyper)polarizabilities of the studied systems can also be reached using the aug-cc-pVDZ basis set (5.3%), although for more accurate calculations of vibrational contributions, i.e., average absolute errors less than 1%, the aug-cc-pVTZ basis set is recommended. It was also demonstrated that anharmonic contributions to first and second hyperpolarizabilities of a medium-sized Schiff base are particularly difficult to accurately predict at the correlated level using property-oriented basis sets. For instance, the value of the nuclear relaxation first hyperpolarizability computed at the MP2/def2-TZVPD level of theory is roughly 3 times larger than that determined using the aug-cc-pVTZ basis set. We link the failure of the def2-TZVPD basis set with the difficulties in predicting the first-order field
REGION AGRICULTURE DEVELOPMENT ON THE BASIS OF OPTIMIZATION MODELLING
Directory of Open Access Journals (Sweden)
V.P. Neganova
2008-06-01
Full Text Available The scientific substantiation of the location of agricultural production across the territorial divisions of a region is a complex socio-economic problem whose solution requires the definition of market-oriented optimality criteria. The author considers three criteria of optimality: maximum profit; maximum gross output net of production costs and the costs of simple reproduction of soil fertility; and maximum marginal income. The conclusion drawn is that the best criterion for optimizing production is maximum marginal income (income net of constant costs), which will raise the economic and ecological efficiency of agricultural production at all management levels. As a result of such optimization the Republic of Bashkortostan would become a self-sufficient, food-exporting region of Russia, capable of providing food substances (protein, carbohydrates, etc.) for 5.8-6.5 million people, which exceeds the population of the republic by 40-60%.
International Nuclear Information System (INIS)
Kollmar, Christian; Neese, Frank
2014-01-01
The role of the static Kohn-Sham (KS) response function describing the response of the electron density to a change of the local KS potential is discussed in both the theory of the optimized effective potential (OEP) and the so-called inverse Kohn-Sham problem, which involves the task of finding the local KS potential for a given electron density. In a general discussion of the integral equation to be solved in both cases, it is argued that a unique solution of this equation can be found even in the case of finite atomic orbital basis sets. It is shown how a matrix representation of the response function can be obtained if the exchange-correlation potential is expanded in terms of a Schmidt-orthogonalized basis comprising products of occupied and virtual orbitals. The viability of this approach in both OEP theory and the inverse KS problem is illustrated by numerical examples
A Hartree–Fock study of the confined helium atom: Local and global basis set approaches
Energy Technology Data Exchange (ETDEWEB)
Young, Toby D., E-mail: tyoung@ippt.pan.pl [Zakład Metod Komputerowych, Instytut Podstawowych Prolemów Techniki Polskiej Akademia Nauk, ul. Pawińskiego 5b, 02-106 Warszawa (Poland); Vargas, Rubicelia [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico); Garza, Jorge, E-mail: jgo@xanum.uam.mx [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico)
2016-02-15
Two different basis set methods are used to calculate atomic energy within Hartree–Fock theory. The first is a local basis set approach using high-order real-space finite elements and the second is a global basis set approach using modified Slater-type orbitals. These two approaches are applied to the confined helium atom and are compared by calculating one- and two-electron contributions to the total energy. As a measure of the quality of the electron density, the cusp condition is analyzed. - Highlights: • Two different basis set methods for atomic Hartree–Fock theory. • Galerkin finite element method and modified Slater-type orbitals. • Confined atom model (helium) under small-to-extreme confinement radii. • Detailed analysis of the electron wave-function and the cusp condition.
The Bethe Sum Rule and Basis Set Selection in the Calculation of Generalized Oscillator Strengths
DEFF Research Database (Denmark)
Cabrera-Trujillo, Remigio; Sabin, John R.; Oddershede, Jens
1999-01-01
Fulfillment of the Bethe sum rule may be construed as a measure of basis set quality for atomic and molecular properties involving the generalized oscillator strength distribution. It is first shown that, in the case of a complete basis, the Bethe sum rule is fulfilled exactly in the random phase...
A two-center-oscillator-basis as an alternative set for heavy ion processes
International Nuclear Information System (INIS)
Tornow, V.; Reinhard, P.G.; Drechsel, D.
1977-01-01
The two-center-oscillator-basis, which is constructed from harmonic oscillator wave functions developing about two different centers, suffers from numerical problems at small center separations due to the overcompleteness of the set. In order to overcome these problems we admix higher oscillator wave functions before the orthogonalization or antisymmetrization, respectively. This yields a numerically stable basis set at each center separation. The results obtained for the potential energy surface are comparable with the results of more elaborate models. (orig.) [de
Plumley, Joshua A.; Dannenberg, J. J.
2011-01-01
We evaluate the performance of nine functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-DFT molecular orbital calculations and to experimenta...
On sets of vectors of a finite vector space in which every subset of basis size is a basis II
Ball, Simeon; De Beule, Jan
2012-01-01
This article contains a proof of the MDS conjecture for k ≤ 2p − 2. That is, if S is a set of vectors of F_q^k in which every subset of S of size k is a basis, where q = p^h, p is prime and q is not prime, and k ≤ 2p − 2, then |S| ≤ q + 1. It also contains a short proof of the same fact for k ≤ p, for all q.
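The objects in the statement can be checked directly for a small case: the normal rational curve plus a point at infinity gives a set of q + 1 vectors in F_q^k in which every k-subset is a basis (here q = 5, k = 3, a toy instance chosen for illustration).

```python
import numpy as np
from itertools import combinations

# A set S of q + 1 vectors in F_q^k such that every k-subset is a basis:
# the normal rational curve {(1, t, t^2) : t in F_q} plus (0, 0, 1).
p, k = 5, 3
S = [np.array([1, t, t * t]) % p for t in range(p)] + [np.array([0, 0, 1])]

def det_mod_p(vectors, p):
    # integer determinant of the k x k matrix of chosen vectors, reduced mod p
    return round(np.linalg.det(np.stack(vectors))) % p

# every k-subset has nonzero determinant mod p, i.e. is a basis of F_p^3
assert all(det_mod_p(c, p) != 0 for c in combinations(S, k))
print(f"|S| = {len(S)} = q + 1; every {k}-subset is a basis over F_{p}")
```

For curve points the determinant reduces to a Vandermonde determinant, which is nonzero since the parameters t are distinct mod p; the conjecture asserts that no such set can be larger than q + 1 (apart from known exceptions in characteristic 2).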
Collimator setting optimization in intensity modulated radiotherapy
International Nuclear Information System (INIS)
Williams, M.; Hoban, P.
2001-01-01
Full text: The aim of this study was to investigate the role of collimator angle and bixel size settings in IMRT when using the step and shoot method of delivery. Of particular interest is minimisation of the total monitor units delivered. Beam intensity maps with bixel size 10 x 10 mm were segmented into MLC leaf sequences and the collimator angle optimised to minimise the total number of MUs. The monitor units were estimated from the maximum sum of positive-gradient intensity changes along the direction of leaf motion. To investigate the use of low resolution maps at optimum collimator angles, several high resolution maps with bixel size 5 x 5 mm were generated. These were resampled into bixel sizes 5 x 10 mm and 10 x 10 mm and the collimator angle optimised to minimise the RMS error between the original and resampled map. Finally, a clinical IMRT case was investigated with the collimator angle optimised. Both the dose distribution and dose-volume histograms were compared between the standard IMRT plan and the optimised plan. For the 10 x 10 mm bixel maps there was a variation of 5%-40% in monitor units at the different collimator angles. The maps with a high degree of radial symmetry showed little variation. For the resampled 5 x 5 mm maps, a small RMS error was achievable with a 5 x 10 mm bixel size at particular collimator positions. This was most noticeable for maps with an elongated intensity distribution. A comparison between the 5 x 5 mm bixel plan and the 5 x 10 mm plan showed no significant difference in dose distribution. The monitor units required to deliver an intensity modulated field can be reduced by rotating the collimator and aligning the direction of leaf motion with the axis of the fluence map that has the least intensity variation. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
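The monitor-unit estimate described above, the maximum over leaf pairs of the summed positive-gradient intensity increments along the leaf-travel direction, can be sketched as follows. Transposing the map mimics a 90-degree collimator rotation; the fluence map is a toy example chosen so that the two orientations differ.

```python
import numpy as np

# Step-and-shoot MU estimate: for each leaf pair (row), sum the positive
# intensity increments along the leaf-travel direction, then take the
# maximum over leaf pairs.  A leading zero column models the closed field.
def mu_estimate(fluence):
    padded = np.concatenate([np.zeros((fluence.shape[0], 1)), fluence], axis=1)
    steps = np.diff(padded, axis=1)
    return np.maximum(steps, 0.0).sum(axis=1).max()

fluence = np.array([[1., 3., 1., 3.],
                    [1., 3., 1., 3.],
                    [1., 3., 1., 3.]])    # oscillates along rows, flat down columns

mu_0 = mu_estimate(fluence)        # leaves travel along rows
mu_90 = mu_estimate(fluence.T)     # collimator rotated 90 degrees
print(f"MU estimate at 0 deg: {mu_0:.0f}, at 90 deg: {mu_90:.0f}")
```

For this map the estimate at 90 degrees is lower because each leaf pair then sweeps across a constant profile, which is the effect the collimator-angle optimisation exploits.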
Continuum contributions to dipole oscillator-strength sum rules for hydrogen in finite basis sets
DEFF Research Database (Denmark)
Oddershede, Jens; Ogilvie, John F.; Sauer, Stephan P. A.
2017-01-01
Calculations of the continuum contributions to dipole oscillator sum rules for hydrogen are performed using both exact and basis-set representations of the stick spectra of the continuum wave function. We show that the same results are obtained for the sum rules in both cases, but that the convergence towards the final results with increasing excitation energies included in the sum over states is slower in the basis-set cases when we use the best basis. We argue also that this conclusion most likely holds for larger atoms or molecules.
Elitism set based particle swarm optimization and its application
Directory of Open Access Journals (Sweden)
Yanxia Sun
2017-01-01
Full Text Available Topology plays an important role in the ability of Particle Swarm Optimization (PSO) to achieve good optimization performance. It is difficult to find one topology structure that lets the particles outperform all others, since the optimization performance depends not only on the searching abilities of the particles but also on the type of optimization problem. Three elitist-set-based PSO algorithms that use no explicit topology structure are proposed in this paper. An elitist set, which is based on the individual best experience, is used to communicate among the particles. Moreover, to avoid premature convergence of the particles, different statistical methods have been used in the three proposed methods. The performance of the proposed PSOs is compared with the results of the standard PSO 2011 and several PSOs with different topologies, and the simulation results and comparisons demonstrate that the proposed PSO with adaptive probabilistic preference can achieve good optimization performance.
Zhang, Xing; Carter, Emily A.
2018-01-01
We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.
International Nuclear Information System (INIS)
Woon, D.E.; Dunning, T.H. Jr.
1994-01-01
An accurate description of the electrical properties of atoms and molecules is critical for quantitative predictions of the nonlinear properties of molecules and of long-range atomic and molecular interactions between both neutral and charged species. We report a systematic study of the basis sets required to obtain accurate correlated values for the static dipole (α1), quadrupole (α2), and octopole (α3) polarizabilities and the hyperpolarizability (γ) of the rare gas atoms He, Ne, and Ar. Several methods of correlation treatment were examined, including various orders of Møller-Plesset perturbation theory (MP2, MP3, MP4), coupled-cluster theory with and without perturbative treatment of triple excitations [CCSD, CCSD(T)], and singles and doubles configuration interaction (CISD). All of the basis sets considered here were constructed by adding even-tempered sets of diffuse functions to the correlation consistent basis sets of Dunning and co-workers. With multiply-augmented sets we find that the electrical properties of the rare gas atoms converge smoothly to values that are in excellent agreement with the available experimental data and/or previously computed results. As a further test of the basis sets presented here, the dipole polarizabilities of the F− and Cl− anions and of the HCl and N2 molecules are also reported
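The even-tempered augmentation mentioned above, continuing each shell's geometric progression of exponents toward the diffuse end, can be sketched as follows. The starting s exponents are assumed for illustration and are not a published correlation consistent shell.

```python
# Even-tempered augmentation: each added diffuse exponent continues the
# geometric progression set by the two most diffuse exponents in the shell,
# as used to build aug-, d-aug-, t-aug-, ... sets.
def augment(exponents, n_diffuse):
    exps = sorted(exponents, reverse=True)
    ratio = exps[-1] / exps[-2]            # < 1: progression toward diffuse
    extra = [exps[-1] * ratio ** (i + 1) for i in range(n_diffuse)]
    return exps + extra

s_shell = [13.01, 1.962, 0.4446, 0.122]    # illustrative s-shell exponents
print(augment(s_shell, 2))                 # doubly augmented: two diffuse added
```

Each call preserves the ratio between successive diffuse exponents, so singly, doubly, and triply augmented sets differ only in how far the progression is extended.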
Optimal set of selected uranium enrichments that minimizes blending consequences
International Nuclear Information System (INIS)
Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.
1977-01-01
Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required either for nuclear reactor fuel standardization or for potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments.
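The core blending constraint behind such a model is a simple U-235 mass balance. A minimal sketch of the two-ingredient case follows; the enrichment values are illustrative, and the actual model is a mixed-integer program over many products and ingredients.

```python
def blend_two(e_hi, e_lo, e_prod, mass_prod):
    """Masses of high- and low-enrichment ingredients that blend into
    mass_prod of product at enrichment e_prod (U-235 mass balance,
    enrichments given as weight fractions)."""
    f = (e_prod - e_lo) / (e_hi - e_lo)      # lever rule
    return f * mass_prod, (1.0 - f) * mass_prod

# Illustrative numbers: blend 5% and natural (0.7%) uranium to 3.2%.
m_hi, m_lo = blend_two(0.05, 0.007, 0.032, 1000.0)
```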
Optimal Set-Point Synthesis in HVAC Systems
DEFF Research Database (Denmark)
Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik
2007-01-01
This paper presents optimal set-point synthesis for a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system consists of two heat exchangers: an air-to-air heat exchanger and a water-to-air heat exchanger. The objective function is composed of the electrical power for different components, encompassing fans, primary/secondary pump, tertiary pump, and the air-to-air heat exchanger wheel, and a fraction of the thermal power used by the HVAC system. The goals that have to be achieved by the HVAC system appear as constraints in the optimization problem. To solve the optimization problem, a steady-state model of the HVAC system is derived, while different supplying hydronic circuits are studied for the water-to-air heat exchanger. Finally, the optimal set-points and the optimal supplying hydronic circuit are obtained.
A choice of the parameters of NPP steam generators on the basis of vector optimization
International Nuclear Information System (INIS)
Lemeshev, V.U.; Metreveli, D.G.
1981-01-01
The optimization of the parameters of designed systems is considered as a problem of multicriterion optimization. It is proposed to choose non-dominated (Pareto-optimal) parameters. An algorithm is built on the basis of necessary and sufficient non-dominance conditions to find non-dominated solutions. This algorithm has been employed to choose optimal parameters for the counterflow shell-and-tube steam generator of an NPP of the BRGD type.
On the use of Locally Dense Basis Sets in the Calculation of EPR Hyperfine Couplings
DEFF Research Database (Denmark)
Hedegård, Erik D.; Sauer, Stephan P. A.; Milhøj, Birgitte O.
2013-01-01
The usage of locally dense basis sets in the calculation of Electron Paramagnetic Resonance (EPR) hyperfine coupling constants is investigated at the level of Density Functional Theory (DFT) for two model systems of biologically important transition metal complexes: one for the active site in the c...
Strategies for reducing basis set superposition error (BSSE) in O/AU and O/Ni
Shuttleworth, I.G.
2015-01-01
The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔE_PAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. Binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.
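For reference, the standard counterpoise diagnostic for BSSE (the Boys-Bernardi scheme) evaluates each monomer in the full dimer basis; the paper above instead varies the SIESTA energy-shift parameter, but counterpoise remains the conventional point of comparison. A sketch with placeholder energies:

```python
def counterpoise_binding(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
    """Boys-Bernardi counterpoise-corrected binding energy: the dimer
    energy minus each monomer energy computed in the full dimer basis."""
    return e_dimer - e_a_dimer_basis - e_b_dimer_basis

# Placeholder energies in eV (illustrative values, not from the paper).
e_bind = counterpoise_binding(-2010.40, -1005.10, -1004.90)
```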
Global Optimization for Transport Network Expansion and Signal Setting
Liu, Haoxiang; Wang, David Z. W.; Yue, Hao
2015-01-01
This paper proposes a model to address an urban transport planning problem involving combined network design and signal setting in a saturated network. Conventional transport planning models usually treat the network design problem and the signal setting problem separately. However, the network capacity design and the capacity allocation determined by signal setting jointly govern the transport network performance, which requires optimal transport planning to consider the two pr...
Magnetic anisotropy basis sets for epitaxial (110) and (111) REFe2 nanofilms
International Nuclear Information System (INIS)
Bowden, G J; Martin, K N; Fox, A; Rainford, B D; Groot, P A J de
2008-01-01
Magnetic anisotropy basis sets for the cubic Laves phase rare earth intermetallic REFe2 compounds are discussed in some detail. Such compounds can be either free standing or thin films grown in either (110) or (111) mode using molecular beam epitaxy. For the latter, it is useful to rotate to a new coordinate system in which the z-axis coincides with the growth axis of the film. In this paper, three symmetry-adapted basis sets are given, for multipole moments up to n = 12. These sets can be used for free-standing compounds and for (110) and (111) epitaxial films. In addition, the distortion of REFe2 films grown on sapphire substrates is also considered. The distortions are different for the (110) and (111) films. Strain-induced harmonic sets are given for both specific and general distortions. Finally, some predictions are made concerning the preferred direction of easy magnetization in (111) REFe2 films grown by molecular beam epitaxy.
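The rotation to a film-adapted frame can be sketched generically: build an orthonormal frame whose third axis lies along the growth direction, so that r' = R @ r expresses coordinates with z along the film normal. This is a generic Gram-Schmidt construction, not the paper's specific symmetry-adapted transformation.

```python
import numpy as np

def rotation_to_z(axis):
    """Rotation matrix (rows = new x, y, z axes) whose z-axis is the
    unit vector along `axis`; in-plane axes are fixed by Gram-Schmidt."""
    z = np.asarray(axis, float)
    z /= np.linalg.norm(z)
    # seed with any direction not parallel to z
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = seed - z * (seed @ z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                 # completes a right-handed frame
    return np.vstack([x, y, z])

R111 = rotation_to_z([1, 1, 1])        # (111)-grown film
R110 = rotation_to_z([1, 1, 0])        # (110)-grown film
```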
Correlation consistent basis sets for actinides. I. The Th and U atoms
Energy Technology Data Exchange (ETDEWEB)
Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)
2015-02-21
New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilize the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFn (n = 2-4), ThO2, and UFn (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF4, ThF3, ThF2, and ThO2 are all within their experimental uncertainties. Bond dissociation energies of ThF4 and ThF3, as well as UF6 and UF5, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF4 and ThO2. The DKH3 atomization energy of ThO2 was calculated to be smaller than the DKH2
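The standard two-point basis set extrapolation mentioned above assumes the correlation energy converges as E(n) = E_CBS + A/n³ with cardinal number n; eliminating A between two cardinal numbers gives the complete basis set (CBS) estimate. A sketch (the input energies are hypothetical):

```python
def cbs_two_point(e_large, e_small, n_large, n_small):
    """CBS estimate from E(n) = E_CBS + A / n**3 at two cardinal numbers."""
    a, b = n_large ** 3, n_small ** 3
    return (a * e_large - b * e_small) / (a - b)

# Hypothetical correlation energies (hartree) at n = 3 (TZ) and n = 4 (QZ).
e_cbs = cbs_two_point(-1.0500, -1.0200, 4, 3)
```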
Accurate Conformational Energy Differences of Carbohydrates: A Complete Basis Set Extrapolation
Czech Academy of Sciences Publication Activity Database
Csonka, G. I.; Kaminský, Jakub
2011-01-01
Vol. 7, No. 4 (2011), pp. 988-997. ISSN 1549-9618. Institutional research plan: CEZ:AV0Z40550506. Keywords: MP2; basis set extrapolation; saccharides. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 5.215, year: 2011
Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set
Paschoal, D.; Fonseca Guerra, C.; de Oliveira, M.A.L.; Ramalho, T.C.; Dos Santos, H.F.
2016-01-01
Predicting NMR properties is a valuable tool to assist the experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable to calculate the
Incomplete basis-set problem. V. Application of CIBS to many-electron systems
International Nuclear Information System (INIS)
McDowell, K.; Lewis, L.
1982-01-01
Five versions of CIBS (corrections to an incomplete basis set) theory are used to compute first and second corrections to Roothaan-Hartree-Fock energies via expansion of a given basis set. Version one is an order-by-order perturbation approximation which neglects virtual orbitals; version two is a full CIBS expansion which neglects virtual orbitals; version three is an order-by-order perturbation approximation which includes virtual orbitals; version four is a full CIBS expansion which includes orthogonalization to virtual orbitals but neglects virtual orbital coupling terms; and version five is a full CIBS expansion with inclusion of coupling to virtual orbitals. Results are presented for the atomic and molecular systems He, Be, H2, LiH, Li2, and H2O. Version five is shown to produce a corrected Hartree-Fock energy which is essentially in agreement with a comparable SCF result using the same expanded basis set. Versions one through four yield varying degrees of agreement; however, it is evident that the effect of the virtual orbitals must be included. From the results, CIBS version five is shown to be a viable quantitative procedure which can be used to expand or to study the use of basis sets in quantum chemistry.
Energy Technology Data Exchange (ETDEWEB)
Borges, A.; Solomon, G. C. [Department of Chemistry and Nano-Science Center, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø (Denmark)
2016-05-21
Single molecule conductance measurements are often interpreted through computational modeling, but the complexity of these calculations makes it difficult to link them directly to simpler concepts and models. Previous work has attempted to make this connection using maximally localized Wannier functions and symmetry adapted basis sets, but their use can be ambiguous and non-trivial. Starting from a Hamiltonian and overlap matrix written in a hydrogen-like basis set, we demonstrate a simple approach to obtain a new basis set that is chemically more intuitive and allows interpretation in terms of simple concepts and models. By diagonalizing the Hamiltonians corresponding to each atom in the molecule, we obtain a basis set that can be partitioned into pseudo-σ and pseudo-π subsets, allows partitioning of the Landauer-Büttiker transmission, and permits the construction of simple Hückel models that reproduce the key features of the full calculation. This method links complex calculations to simple concepts and models, providing intuition or parameters for more complex model systems.
Global Optimization for Bus Line Timetable Setting Problem
Directory of Open Access Journals (Sweden)
Qun Chen
2014-01-01
This paper defines the bus timetable setting problem over time periods divided according to passenger flow intensity. It is assumed that passengers arrive evenly and that bus runs are evenly spaced; the problem is then to determine the assignment of bus runs to each time period that minimizes the total waiting time of passengers on platforms, given the total number of runs. For this multistage decision problem, a dynamic programming algorithm is designed, and global optimization procedures using dynamic programming are developed. A numerical example on optimizing the assignment of bus runs for a single line demonstrates the efficiency of the proposed methodology, showing that optimizing bus departure times using dynamic programming can save computational time and find the global optimal solution.
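The multistage structure of this problem can be sketched as a textbook dynamic program: the state is (period, runs already used), and under uniform arrivals a period of length L served by k evenly spaced runs accumulates waiting time rate·L²/(2k). This is an illustrative reconstruction under those assumptions, not the paper's exact formulation.

```python
import math

def assign_runs(total_runs, rate, length):
    """Minimize total passenger waiting time by assigning a fixed number
    of bus runs to time periods, via dynamic programming over periods and
    runs used. Waiting in period i with k evenly spaced runs and uniform
    arrivals is rate[i] * length[i]**2 / (2 * k)."""
    periods = len(rate)
    best = {0: 0.0}                       # runs used so far -> min waiting
    for i in range(periods):
        nxt = {}
        remaining = periods - i - 1       # later periods each need >= 1 run
        for used, cost in best.items():
            for k in range(1, total_runs - used - remaining + 1):
                c = cost + rate[i] * length[i] ** 2 / (2 * k)
                if c < nxt.get(used + k, math.inf):
                    nxt[used + k] = c
        best = nxt
    return best[total_runs]

total_wait = assign_runs(8, rate=[12.0, 30.0, 8.0], length=[1.0, 1.0, 1.0])
```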
Vector optimization set-valued and variational analysis
Chen, Guang-ya; Yang, Xiaogi
2005-01-01
This book is devoted to vector or multiple criteria approaches in optimization. Topics covered include: vector optimization, vector variational inequalities, vector variational principles, vector minmax inequalities and vector equilibrium problems. In particular, problems with variable ordering relations and set-valued mappings are treated. The nonlinear scalarization method is extensively used throughout the book to deal with various vector-related problems. The results presented are original and should be interesting to researchers and graduates in applied mathematics and operations research
An editor for the maintenance and use of a bank of contracted Gaussian basis set functions
International Nuclear Information System (INIS)
Taurian, O.E.
1984-01-01
A bank of basis sets to be used in ab-initio calculations has been created. The bases are sets of contracted Gaussian type orbitals to be used as input to any molecular integral package. In this communication we shall describe the organization of the bank and a portable editor program which was designed for its maintenance and use. This program is operated by commands and it may be used to obtain any kind of information about the bases in the bank as well as to produce output to be directly used as input for different integral programs. The editor may also be used to format basis sets in the conventional way utilized in publications, as well as to generate a complete, or partial, manual of the contents of the bank if so desired. (orig.)
Training set optimization under population structure in genomic selection.
Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E
2015-01-01
Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method should capture as much phenotypic variation as possible in the TRS. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all traits except test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS while maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
Heyde, Frank; Löhne, Andreas; Rudloff, Birgit; Schrage, Carola
2015-01-01
This volume presents five surveys with extensive bibliographies and six original contributions on set optimization and its applications in mathematical finance and game theory. The topics range from more conventional approaches that look for minimal/maximal elements with respect to vector orders or set relations, to the new complete-lattice approach that comprises a coherent solution concept for set optimization problems, along with existence results, duality theorems, optimality conditions, variational inequalities and theoretical foundations for algorithms. Modern approaches to scalarization methods can be found as well as a fundamental contribution to conditional analysis. The theory is tailor-made for financial applications, in particular risk evaluation and [super-]hedging for market models with transaction costs, but it also provides a refreshing new perspective on vector optimization. There is no comparable volume on the market, making the book an invaluable resource for researchers working in vector o...
Optimal projection of observations in a Bayesian setting
Giraldi, Loic
2018-03-18
Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in the presence of overabundant data. Three different optimal projections of the observations are proposed based on information theory: the projection that minimizes the Kullback-Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback-Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace, and the solution is therefore computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model; that a basis of the subspace can be computed as the solution to a generalized eigenvalue problem; that an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace needed to exactly conserve the mutual information between the input and the output of the models is less than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches and to highlight their advantages compared to standard approaches based on the principal component analysis of the observations.
Optimal regional biases in ECB interest rate setting
Arnold, I.J.M.
2005-01-01
This paper uses a simple model of optimal monetary policy to consider whether the influence of national output and inflation rates on ECB interest rate setting should equal a country’s weight in the eurozone economy. The findings depend on assumptions regarding interest rate elasticities, exchange
Pseudo-atomic orbitals as basis sets for the O(N) DFT code CONQUEST
Energy Technology Data Exchange (ETDEWEB)
Torralba, A S; Brazdova, V; Gillan, M J; Bowler, D R [Materials Simulation Laboratory, UCL, Gower Street, London WC1E 6BT (United Kingdom); Todorovic, M; Miyazaki, T [National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047 (Japan); Choudhury, R [London Centre for Nanotechnology, UCL, 17-19 Gordon Street, London WC1H 0AH (United Kingdom)], E-mail: david.bowler@ucl.ac.uk
2008-07-23
Various aspects of the implementation of pseudo-atomic orbitals (PAOs) as basis functions for the linear scaling CONQUEST code are presented. Preliminary results for the assignment of a large set of PAOs to a smaller space of support functions are encouraging, and an important related proof on the necessary symmetry of the support functions is shown. Details of the generation and integration schemes for the PAOs are also given.
Level-Set Topology Optimization with Aeroelastic Constraints
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2015-01-01
Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
Current-voltage curves for molecular junctions computed using all-electron basis sets
International Nuclear Information System (INIS)
Bauschlicher, Charles W.; Lawson, John W.
2006-01-01
We present current-voltage (I-V) curves computed using all-electron basis sets on the conducting molecule. The all-electron results are very similar to previous results obtained using effective core potentials (ECP). A hybrid integration scheme is used that keeps the all-electron calculations cost competitive with respect to the ECP calculations. By neglecting the coupling of states to the contacts below a fixed energy cutoff, the density matrix for the core electrons can be evaluated analytically. The full density matrix is formed by adding this core contribution to the valence part that is evaluated numerically. Expanding the definition of the core in the all-electron calculations significantly reduces the computational effort and, up to biases of about 2 V, the results are very similar to those obtained using more rigorous approaches. The convergence of the I-V curves and transmission coefficients with respect to basis set is discussed. The addition of diffuse functions is critical in approaching basis set completeness.
Constructing DNA Barcode Sets Based on Particle Swarm Optimization.
Wang, Bin; Zheng, Xuedong; Zhou, Shihua; Zhou, Changjun; Wei, Xiaopeng; Zhang, Qiang; Wei, Ziqi
2018-01-01
Following the completion of the human genome project, a large amount of high-throughput bio-data was generated. To analyze these data, massively parallel sequencing, namely next-generation sequencing, was rapidly developed. DNA barcodes attached at the beginning or end of sequencing reads are used to identify which sample each sequence belongs to. Constructing DNA barcode sets provides the candidate DNA barcodes for this application. To increase the accuracy of DNA barcode sets, a particle swarm optimization (PSO) algorithm has been modified and used to construct the DNA barcode sets in this paper. Compared with extant results, some lower bounds on the sizes of DNA barcode sets are improved. The results show that the proposed algorithm is effective in constructing DNA barcode sets.
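A simple baseline for barcode set construction is a greedy (lexicode-style) scan that keeps every word at sufficient Hamming distance from all previously kept words; metaheuristics such as the paper's PSO aim to beat the set sizes such baselines produce. A sketch using only a minimum-distance constraint (real designs add GC-content and homopolymer filters):

```python
from itertools import product

def hamming(a, b):
    """Number of positions where two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_barcodes(length, min_dist):
    """Greedily keep every DNA word whose Hamming distance to all kept
    barcodes is at least min_dist (a baseline, not the paper's PSO)."""
    codes = []
    for cand in product("ACGT", repeat=length):
        word = "".join(cand)
        if all(hamming(word, c) >= min_dist for c in codes):
            codes.append(word)
    return codes

codes = greedy_barcodes(length=4, min_dist=3)
```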
International Nuclear Information System (INIS)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook; Kim, Woo Youn
2015-01-01
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula in the Lagrange-sinc basis set demonstrates that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤ 1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes caused by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward the complete basis set limit by simply decreasing the scaling factor, regardless of the system.
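Lagrange-sinc functions are cardinal functions on a uniform grid: φᵢ(x) = sinc((x − xᵢ)/h) equals one at its own grid point and zero at every other, so expansion coefficients are simply function values at the grid points, and the scaling factor h controls accuracy. A minimal sketch of this cardinality property:

```python
import numpy as np

def lagrange_sinc(i, x, h):
    """i-th Lagrange-sinc basis function for grid spacing (scaling
    factor) h: phi_i(x) = sinc((x - x_i)/h), with phi_i(x_j) = delta_ij."""
    return np.sinc((x - i * h) / h)   # np.sinc(t) = sin(pi t) / (pi t)

h = 0.25                               # scaling factor
grid = np.arange(-5, 6) * h            # 11 uniform grid points
vals = lagrange_sinc(2, grid, h)       # basis function centered at x = 2h
```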
Directory of Open Access Journals (Sweden)
O. V. Fomin
2013-10-01
Purpose. To present the features, and an example of the use, of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, implemented on a computer. Methodology. The developed approach searches for optimal geometrical parameters by determining the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operational capacity and efficiency of use. Originality. The determination procedure of optimal geometrical parameters for freight car components on the basis of generalized mathematical models was formalized in the paper. Practical value. Practical introduction of the research results for universal open cars allows one to reduce the tare weight of their design and accordingly to increase the carrying capacity by almost 100 kg, with improved strength characteristics. Given the size of the car fleet, this will provide a considerable economic effect in production and operation. The proposed approach relies on widely distributed software packages (for example, Microsoft Excel) that are already used by the technical services of most enterprises, and does not require additional capital investment (acquisition of specialized programs and the corresponding technical staff training). This confirms the correctness of the research direction. The proposed algorithm can be used for the solution of other optimization tasks on the basis of generalized mathematical models.
Directory of Open Access Journals (Sweden)
GHOLAMIAN, A. S.
2009-06-01
In this paper, a magnet shape optimization method for the reduction of cogging torque and torque ripple in permanent magnet (PM) brushless DC motors is presented, using the reduced basis technique coupled with finite element and design-of-experiments methods. The primary objective of the method is to reduce the enormous number of design variables required to define the magnet shape. The reduced basis technique represents the shape as a weighted combination of several basis shapes, and the aim of the method is to find the best combination using the weights for each shape as the design variables. A multi-level design process is developed to find suitable basis shapes, or trial shapes, at each level for use in the reduced basis technique. Each level is treated as a separate optimization problem until the required objective is achieved. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. The method is demonstrated on the magnet shape optimization of a 6-pole/18-slot PM BLDC motor.
Directory of Open Access Journals (Sweden)
Vesko Drašković
2008-08-01
According to the World Health Organization’s data, obesity is one of the main risk factors for human health, especially in so-called “mature age”, that is, in the forties and fifties of a person’s life. There are many causes of obesity; the most common ones are inadequate or excessive nutrition, low-quality food rich in fats and highly caloric sweeteners, insufficient physical activity (hypokinesia), but also the technical and technological development of the modern world (TV, cell phones, elevators, cars, etc.). The objective of this research is to assess the obesity of adults living in urban settings through BMI (body mass index) and to create, on the basis of these findings, the basis for different applications of the recreational sports programme.
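BMI itself is a one-line computation, weight in kilograms divided by the square of height in meters, with the usual WHO adult cut-offs at 18.5, 25, and 30 kg/m². A sketch:

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def who_class(b):
    """WHO adult BMI categories (cut-offs 18.5, 25, 30 kg/m^2)."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

category = who_class(bmi(95.0, 1.75))
```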
Langhoff, P. W.; Winstead, C. L.
Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.
Gaspari, Roberto; Rapallo, Arnaldo
2008-06-28
In this work a new method is proposed for the choice of basis functions in diffusion theory (DT) calculations. This method, named the hybrid basis approach (HBA), combines two previously adopted techniques, the long time sorting procedure (LTSP) and the maximum correlation approximation (MCA); the former emphasizes contributions from the long time dynamics, while the latter is based on the local correlations along the chain. To fulfill this task, the HBA procedure employs a first order basis set corresponding to a high order MCA one and generates upper order approximations according to LTSP. A test of the method is made first on a melt of cis-1,4-polyisoprene decamers, where HBA and LTSP are compared in terms of efficiency. Both convergence properties and numerical stability are improved by the use of the HBA basis set, whose performance is evaluated on local dynamics, by computing the correlation times of selected bond vectors along the chain, and on global ones, through the eigenvalues of the diffusion operator L. Further use of the DT with a HBA basis set has been made on a 71-mer of syndiotactic trans-1,2-polypentadiene in toluene solution, whose dynamical properties have been computed with a high order calculation and compared to the "numerical experiment" provided by the molecular dynamics (MD) simulation in explicit solvent. The necessary equilibrium averages have been obtained from a vacuum trajectory of the chain where solvent effects on conformational properties have been reproduced with a proper screening of the nonbonded interactions, corresponding to a definite value of the mean radius of gyration of the polymer in vacuum. Results show a very good agreement between DT calculations and the MD numerical experiment. This suggests a further use of DT methods with the necessary input quantities obtained from knowledge of only a few experimental values, i.e., the mean radius of gyration of the chain and the viscosity of the solution, and by a suitable vacuum
Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-07
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock exchange. The orbitals are expanded in Ahlrichs-type valence double-zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods
Topology optimization of hyperelastic structures using a level set method
Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.
2017-12-01
Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.
Correlation consistent basis sets for lanthanides: The atoms La–Lu
Energy Technology Data Exchange (ETDEWEB)
Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)
2016-08-07
Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits is observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd₂, GdF, and GdF₃. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd₂, 151.7 (−36.6) for GdF, and 447.1 (−295.2) for GdF₃.
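A common way to perform the CBS extrapolation mentioned in this record is the two-point inverse-cubic formula for correlation energies, E(n) = E_CBS + A/n³. The abstract does not specify the exact scheme used in the paper, so the sketch below shows only the generic pattern, with synthetic energies invented for illustration:

```python
def cbs_two_point(e_tz, e_qz, n_tz=3, n_qz=4):
    """Two-point CBS extrapolation assuming E(n) = E_CBS + A / n**3.

    Solving the two-equation system for E_CBS eliminates A:
    E_CBS = (n_qz^3 * E(n_qz) - n_tz^3 * E(n_tz)) / (n_qz^3 - n_tz^3).
    """
    return (n_qz**3 * e_qz - n_tz**3 * e_tz) / (n_qz**3 - n_tz**3)

# Demo with synthetic correlation energies that obey the model exactly
# (e_cbs_true and a are made-up numbers, not values from the paper).
e_cbs_true, a = -1.234, 0.5
e_tz = e_cbs_true + a / 3**3
e_qz = e_cbs_true + a / 4**3
e_cbs = cbs_two_point(e_tz, e_qz)   # recovers e_cbs_true
```

The same closed form applies to any consecutive zeta pair by changing `n_tz`/`n_qz`; Hartree-Fock energies typically use a different (exponential) extrapolation form.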
Symmetry-adapted basis sets automatic generation for problems in chemistry and physics
Avery, John Scales; Avery, James Emil
2012-01-01
In theoretical physics, theoretical chemistry and engineering, one often wishes to solve partial differential equations subject to a set of boundary conditions. This gives rise to eigenvalue problems of which some solutions may be very difficult to find. For example, the problem of finding eigenfunctions and eigenvalues for the Hamiltonian of a many-particle system is usually so difficult that it requires approximate methods, the most common of which is expansion of the eigenfunctions in terms of basis functions that obey the boundary conditions of the problem. The computational effort needed
Cluster analysis by optimal decomposition of induced fuzzy sets
Energy Technology Data Exchange (ETDEWEB)
Backer, E
1978-01-01
Nonsupervised pattern recognition is addressed and the concept of fuzzy sets is explored in order to provide the investigator (data analyst) additional information supplied by the pattern class membership values apart from the classical pattern class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature of the fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the method suggested are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)
Laun, Joachim; Vilela Oliveira, Daniel; Bredow, Thomas
2018-02-22
Consistent basis sets of double- and triple-zeta valence with polarization quality for the fifth period have been derived for periodic quantum-chemical solid-state calculations with the crystalline-orbital program CRYSTAL. They are an extension of the pob-TZVP basis sets, and are based on the full-relativistic effective core potentials (ECPs) of the Stuttgart/Cologne group and on the def2-SVP and def2-TZVP valence basis of the Ahlrichs group. We optimized orbital exponents and contraction coefficients to supply robust and stable self-consistent field (SCF) convergence for a wide range of different compounds. The computed crystal structures are compared to those obtained with standard basis sets available from the CRYSTAL basis set database. For the applied hybrid density functional PW1PW, the average deviations of calculated lattice constants from experimental references are smaller with pob-DZVP and pob-TZVP than with standard basis sets. © 2018 Wiley Periodicals, Inc.
DEFF Research Database (Denmark)
Silva-Junior, Mario R.; Sauer, Stephan P. A.; Schreiber, Marko
2010-01-01
Vertical electronic excitation energies and one-electron properties of 28 medium-sized molecules from a previously proposed benchmark set are revisited using the augmented correlation-consistent triple-zeta aug-cc-pVTZ basis set in CC2, CCSDR(3), and CC3 calculations. The results are compared to those obtained previously with the smaller TZVP basis set. For each of the three coupled cluster methods, a correlation coefficient greater than 0.994 is found between the vertical excitation energies computed with the two basis sets. The deviations of the CC2 and CCSDR(3) results from the CC3 reference values are very similar for both basis sets, thus confirming previous conclusions on the intrinsic accuracy of CC2 and CCSDR(3). This similarity justifies the use of CC2- or CCSDR(3)-based corrections to account for basis set incompleteness in CC3 studies of vertical excitation energies. For oscillator...
Chacon-Madrid, Heber J; Murphy, Benjamin N; Pandis, Spyros N; Donahue, Neil M
2012-10-16
We use a two-dimensional volatility basis set (2D-VBS) box model to simulate secondary organic aerosol (SOA) mass yields of linear oxygenated molecules: n-tridecanal, 2- and 7-tridecanone, 2- and 7-tridecanol, and n-pentadecane. A hybrid model with explicit, a priori treatment of the first-generation products for each precursor molecule, followed by a generic 2D-VBS mechanism for later-generation chemistry, results in excellent model-measurement agreement. This strongly confirms that the 2D-VBS mechanism is a predictive tool for SOA modeling but also suggests that certain important first-generation products for major primary SOA precursors should be treated explicitly for optimal SOA predictions.
Optimal Set Anode Potentials Vary in Bioelectrochemical Systems
Wagner, Rachel C.
2010-08-15
In bioelectrochemical systems (BESs), the anode potential can be set to a fixed voltage using a potentiostat, but there is no accepted method for defining an optimal potential. Microbes can theoretically gain more energy by reducing a terminal electron acceptor with a more positive potential, for example oxygen compared to nitrate. Therefore, more positive anode potentials should allow microbes to gain more energy per electron transferred than more negative potentials do, but this can only occur if the microbe has metabolic pathways capable of capturing the available energy. Our review of the literature shows that there is a general trend of improved performance using more positive potentials, but there are several notable cases where biofilm growth and current generation improved or only occurred at more negative potentials. This suggests that even with diverse microbial communities, it is primarily the potential of the terminal respiratory proteins used by certain exoelectrogenic bacteria, and to a lesser extent the anode potential, that determines the optimal growth conditions in the reactor. Our analysis suggests that additional bioelectrochemical investigations of both pure and mixed cultures, over a wide range of potentials, are needed to better understand how to set and evaluate optimal anode potentials for improving BES performance. © 2010 American Chemical Society.
Optimization Settings in the Fuzzy Combined Mamdani PID Controller
Kudinov, Y. I.; Pashchenko, F. F.; Pashchenko, A. F.; Kelina, A. Y.; Kolesnikov, V. A.
2017-11-01
In the present work the actual problem of determining the optimal settings of a fuzzy parallel proportional-integral-derivative (PID) controller is considered for the control of nonlinear plants, a task that cannot always be performed with classical linear PID controllers. In contrast to linear PID controllers, there are no analytical methods for calculating the settings of fuzzy PID controllers. In this paper, we develop a numerical optimization approach to determining the coefficients of a fuzzy PID controller. A decomposition method of optimization is proposed, the essence of which is as follows: all homogeneous coefficients are distributed into relevant groups, for example, the three error coefficients, the three error-change coefficients, and the three coefficients of the output P, I, and D components. For each such group in turn, a search algorithm determines the coefficients under which the transition process satisfies all applicable constraints. Thus, with the help of Matlab and Simulink, the coefficients of a fuzzy PID controller that meet the accepted limitations on the transition process were found in a reasonable time.
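The group-wise decomposition described in this record can be sketched generically: fix all coefficients, grid-search one homogeneous group at a time, and sweep the groups repeatedly. A minimal sketch, assuming a toy separable cost function in place of a Matlab/Simulink closed-loop simulation (the objective, target values, and candidate grid below are invented for illustration):

```python
import itertools

def step_response_cost(gains):
    """Toy surrogate for a transient-response cost of a controller.

    A stand-in objective with a known minimum; a real application would
    simulate the closed loop and score the transition process instead.
    """
    targets = [1.0, 0.5, 0.1, 0.8, 0.4, 0.2, 0.6, 0.3, 0.05]
    return sum((g - t) ** 2 for g, t in zip(gains, targets))

def _with(gains, idx, vals):
    """Return a copy of gains with the coefficients at idx replaced."""
    out = list(gains)
    for i, v in zip(idx, vals):
        out[i] = v
    return out

def group_descent(cost, gains, groups, candidates, sweeps=3):
    """Decomposition method: grid-search each homogeneous group of
    coefficients while all other groups stay fixed, sweeping repeatedly."""
    gains = list(gains)
    for _ in range(sweeps):
        for group in groups:
            best = min(
                itertools.product(candidates, repeat=len(group)),
                key=lambda vals: cost(_with(gains, group, vals)),
            )
            gains = _with(gains, group, best)
    return gains

# Nine coefficients split into three homogeneous groups
# (error gains, error-change gains, output P/I/D gains).
groups = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
candidates = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0]
tuned = group_descent(step_response_cost, [0.5] * 9, groups, candidates)
```

Because the toy cost is separable per coefficient, a single sweep already lands on the candidate values closest to the targets; with a real transient-response objective the groups interact and several sweeps are needed.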
Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets
Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.
2011-10-01
The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit along with an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose the results of Finite Field calculations up to the level of Coupled Cluster theory including Single, Double, Triple, Quadruple and perturbative Pentuple excitations [CCSDTQ(P)] were used, in conjunction with suited extrapolations of energies obtained using augmented and doubly-augmented Dunning's correlation consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels in electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated according to Born-Oppenheimer Molecular Dynamical simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
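The finite-field procedure underlying these results amounts to differentiating the energy with respect to a small uniform electric field: the static polarizability is α = −d²E/dF² at F = 0, recoverable from a three-point central difference of E(F). A minimal sketch with a made-up model energy in place of the CCSDTQ(P)/CBS energies (all parameter values are invented for illustration):

```python
def energy(field, e0=-100.0, mu=0.7, alpha=2.65, beta=0.1):
    """Model field-dependent energy E(F) = E0 - mu*F - (1/2)*alpha*F^2 - (1/6)*beta*F^3.

    In a real finite-field calculation E(F) would be a correlated ab initio
    energy evaluated with a small electric field added to the Hamiltonian;
    e0, mu, alpha, beta here are made-up numbers for illustration only.
    """
    return e0 - mu * field - 0.5 * alpha * field**2 - beta * field**3 / 6.0

def finite_field_polarizability(E, h=1e-3):
    """alpha = -d2E/dF2 at F = 0, via the 3-point central difference."""
    return -(E(h) - 2.0 * E(0.0) + E(-h)) / h**2

alpha = finite_field_polarizability(energy)
```

The odd-order terms (dipole, first hyperpolarizability) cancel exactly in the symmetric difference, so the recovered α is limited only by floating-point cancellation for this model.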
Polarization functions for the modified m6-31G basis sets for atoms Ga through Kr.
Mitin, Alexander V
2013-09-05
The 2df polarization functions for the modified m6-31G basis sets of the third-row atoms Ga through Kr (Int. J. Quantum Chem. 2007, 107, 3028; Int. J. Quantum Chem. 2009, 109, 1158) are proposed. The performance of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets was examined in molecular calculations carried out with the density functional theory (DFT) method using the B3LYP hybrid functional, second-order Møller-Plesset perturbation theory (MP2), and the quadratic configuration interaction method with single and double substitutions, and was compared with that of the known 6-31G basis sets as well as with the similar 641 and 6-311G basis sets with and without polarization functions. The results show that the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets perform better than the known 6-31G, 6-31G(d,p), and 6-31G(2df,p) basis sets. These improvements are mainly achieved through better approximation of the different electrons belonging to different atomic shells in the modified basis sets. The applicability of the modified basis sets in thermochemical calculations is also discussed. © 2013 Wiley Periodicals, Inc.
INDUSTRIAL AREA AS A BASIS FOR SPATIAL OPTIMIZATION OF LAND USE IN KIEV
Directory of Open Access Journals (Sweden)
Tsviakh О.
2017-02-01
The article deals with the problem of urban land use in Kiev, including land under industrial objects, and analyses ways of optimizing urban land use. The efficient use of urban land, including land under non-functioning industrial facilities held as a reserve for the future development of Kyiv, has become a particularly acute problem, calling for an ecological-economic approach to its solution. To ensure sustainable development for the urban population (preserving and improving health, improving working and living conditions, increasing the construction of social and affordable housing, reducing unemployment, creating new jobs, and improving the ecological state of the environment within large cities), ways must be identified to optimize existing urban land use. The complexity of the management decisions stems, above all, from the fact that in most Ukrainian cities territorial resources are exhausted and vacant land plots require significant investment. In addition, a significant proportion of non-functioning industrial enterprises occupying large areas in Kyiv are surrounded by residential development, buffer zones, and technogenically disturbed and contaminated land. Such objects should be moved beyond the settlement boundaries, and the land on which they stand should be re-cultivated and restored to a more ecological, economically feasible, and sustainable use. The rapid development of large cities around the world and their growing impact on the environment and society is accompanied by a set of economic, environmental, and social issues that significantly influence the development of land relations in settlements in general. Today in Kyiv a changing dynamic of land area is observed: the share of agricultural and forestry land is decreasing while the territory of other categories increases. The process of de-industrialization and suburbanization of urban land use is inevitable. They in turn accelerate other processes - "crowding out
2010-10-01
§ 415.170 Conditions for payment on a fee schedule basis for physician services in a teaching setting (Public Health; Centers for Medicare...; ...by Physicians in Providers, Supervising Physicians in Teaching Settings, and Residents in Certain Settings; Physician Services in Teaching Settings).
Energy Technology Data Exchange (ETDEWEB)
Zhang, Gaigong [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Department of Mathematics, University of California, Berkeley, Berkeley, CA 94720 (United States); Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Hu, Wei, E-mail: whu@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Pask, John E., E-mail: pask1@llnl.gov [Physics Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)
2017-04-15
Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H₂ and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.
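The force-accuracy benchmarking described in this record can be illustrated generically: an analytic force −dE/dR should agree with a central finite difference of the energy surface. The Morse potential below is a toy stand-in for the Kohn–Sham total energy, not the adaptive local basis method itself; all parameters are invented for illustration:

```python
import math

def morse_energy(r, de=0.2, a=1.5, r0=2.0):
    """Toy 1D potential energy surface (Morse form); stands in for the
    total energy E(R) as a function of one atomic coordinate."""
    x = math.exp(-a * (r - r0))
    return de * (1.0 - x) ** 2

def analytic_force(r, de=0.2, a=1.5, r0=2.0):
    """F = -dE/dR evaluated in closed form, the analogue of an analytic
    (Hellmann-Feynman plus Pulay) force."""
    x = math.exp(-a * (r - r0))
    return -2.0 * de * a * x * (1.0 - x)

def numeric_force(E, r, h=1e-5):
    """Benchmark force from a central finite difference of the energy."""
    return -(E(r + h) - E(r - h)) / (2.0 * h)
```

Agreement between `analytic_force` and `numeric_force` to many digits is the standard consistency check before trusting forces in geometry optimization or molecular dynamics; a residual gap in a basis-set code signals a missing Pulay contribution.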
Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set
Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David
2018-05-01
The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analyses require running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
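The dimensionality-reduction step can be sketched with a toy stand-in for the landslide model: stack the simulated displacement curves into a matrix, extract principal components by SVD, and keep a few scalar scores per curve as the quantities a meta-model would emulate. The displacement model and parameter ranges below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "landslide model": a displacement time series controlled by two
# scalar properties (an amplitude-like and a rate-like parameter).
t = np.linspace(0.0, 1.0, 200)
def displacement(amp, rate):
    return amp * (1.0 - np.exp(-rate * t))

# A few tens of (here: cheap) simulations -> matrix of output curves.
samples = rng.uniform([0.5, 1.0], [2.0, 8.0], size=(40, 2))
U = np.array([displacement(a, r) for a, r in samples])

# Basis set expansion: principal components of the centered curves.
mean = U.mean(axis=0)
_, s, vt = np.linalg.svd(U - mean, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

# Each 200-point curve is now summarized by a few scores; a meta-model
# of these scores (not of the full curve) feeds the Sobol' estimation.
scores = (U - mean) @ vt[:2].T
```

Because the curves come from a smooth two-parameter family, a handful of components captures nearly all of the variance, which is exactly what makes the subsequent per-component meta-modelling tractable.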
Energy Technology Data Exchange (ETDEWEB)
Hsu, Po Jen; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw [Complex Liquids Laboratory, Department of Physics, National Central University, Chungli 320, Taiwan and Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taipei 115, Taiwan (China); Rapallo, Arnaldo [Istituto per lo Studio delle Macromolecole (ISMAC) Consiglio Nazionale delle Ricerche (CNR), via E. Bassini 15, C.A.P 20133 Milano (Italy)
2014-03-14
Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit
International Nuclear Information System (INIS)
Ryjov, A.; Loginov, D.
1994-01-01
The problem of choosing an optimal set of significances of qualitative attributes for information retrieval in databases is addressed. Given a particular database, a set of significances is called optimal if it minimizes the losses of information and the information noise for information retrieval in that database. Obviously, such a set of significances depends on the statistical parameters of the database. Software is described that calculates, from the statistical parameters of a given database, the losses of information and the information noise for arbitrary sets of significances of qualitative attributes. The software also permits comparison of various sets of significances of qualitative attributes and selection of the optimal set.
International Nuclear Information System (INIS)
Ermakov, A.I.; Belousov, V.V.
2007-01-01
The relaxation effect of the basis set (BS) functions STO-3G and 6-31G* on their balance is considered for the series of isoelectronic molecules LiF, BeO, BN and C2. Values of the basis function parameters in the molecules (exponential scale factors, orbital exponents of the Gaussian primitives and their contraction coefficients) are found by minimizing the energy in unrestricted Hartree-Fock (UHF) calculations using direct optimization of the parameters with the simplex and Rosenbrock methods. Several optimization schemes differing in the number of varied parameters are applied. A relationship between the basis function parameters of the sets considered, expressed through mean values of the Gaussian exponents, is established. The effect of relaxation on the change of the total energy and on the relative errors of calculated interatomic distances, normal vibration frequencies, dissociation energies and other molecular properties is considered. The change in total energy upon relaxation of the basis functions (RBF) amounts to 1100 kJ/mol for STO-3G and 80 kJ/mol for 6-31G*, and must be taken into account when estimating energetic characteristics, especially for systems with highly polar chemical bonds. Relaxation of the STO-3G basis set improves the description of molecular properties in practically all cases considered, whereas relaxation of the 6-31G* set has little effect on its balance.
International Nuclear Information System (INIS)
Dalmasse, Kevin; Nychka, Douglas W.; Gibson, Sarah E.; Fan, Yuhong; Flyer, Natasha
2016-01-01
The Coronal Multichannel Polarimeter (CoMP) routinely performs coronal polarimetric measurements using the Fe XIII 10747 and 10798 Å lines, which are sensitive to the coronal magnetic field. However, inverting such polarimetric measurements into magnetic field data is a difficult task because the corona is optically thin at these wavelengths and the observed signal is therefore the integrated emission of all the plasma along the line of sight. To overcome this difficulty, we take a new approach that combines a parameterized 3D magnetic field model with forward modeling of the polarization signal. For that purpose, we develop a new, fast and efficient optimization method for model-data fitting: the Radial-basis-functions Optimization Approximation Method (ROAM). Model-data fitting is achieved by optimizing a user-specified log-likelihood function that quantifies the differences between the observed polarization signal and its synthetic/predicted analog. Speed and efficiency are obtained by combining sparse evaluation of the magnetic model with radial-basis-function (RBF) decomposition of the log-likelihood function. The RBF decomposition provides an analytical expression for the log-likelihood function that is used to inexpensively estimate the set of parameter values optimizing it. We test and validate ROAM on a synthetic test bed of a coronal magnetic flux rope and show that it performs well with a significantly sparse sample of the parameter space. We conclude that our optimization method is well suited for fast and efficient model-data fitting and can be exploited for converting coronal polarimetric measurements, such as the ones provided by CoMP, into coronal magnetic field data.
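The core numerical idea of ROAM, interpolating an expensive log-likelihood from sparse evaluations with radial basis functions and then optimizing the cheap analytic surrogate, can be sketched in a few lines. The 1D toy objective, the Gaussian kernel and its length scale below are illustrative assumptions, not CoMP-specific choices:

```python
import numpy as np

# Toy stand-in for an expensive log-likelihood over one model parameter.
def log_likelihood(x):
    return -(x - 0.3) ** 2

# Sparse (cheap) sampling of the expensive function.
centers = np.linspace(0.0, 1.0, 10)
y = log_likelihood(centers)

# Gaussian RBF interpolation: solve Phi w = y for the weights.
length = 0.15
phi = np.exp(-((centers[:, None] - centers[None, :]) ** 2) / (2 * length ** 2))
w = np.linalg.solve(phi, y)

# The surrogate is analytic and cheap to evaluate, so its maximizer can be
# located on a dense grid (or with any standard optimizer).
grid = np.linspace(0.0, 1.0, 2001)
surrogate = np.exp(-((grid[:, None] - centers[None, :]) ** 2) / (2 * length ** 2)) @ w
x_best = grid[np.argmax(surrogate)]
```

In the paper's setting the parameter space is multi-dimensional and the samples come from forward-modeled polarization signals, but the surrogate-then-optimize structure is the same.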
Numerical Aspects of Atomic Physics: Helium Basis Sets and Matrix Diagonalization
Jentschura, Ulrich; Noble, Jonathan
2014-03-01
We present a matrix diagonalization algorithm for complex symmetric matrices, which can be used in order to determine the resonance energies of auto-ionizing states of comparatively simple quantum many-body systems such as helium. The algorithm is based on multi-precision arithmetic and proceeds via a tridiagonalization of the complex symmetric (not necessarily Hermitian) input matrix using generalized Householder transformations. Example calculations involving so-called PT-symmetric quantum systems lead to reference values which pertain to the imaginary cubic perturbation (the imaginary cubic anharmonic oscillator). We then proceed to novel basis sets for the helium atom and present results for Bethe logarithms in hydrogen and helium, obtained using the enhanced numerical techniques. Some intricacies of ``canned'' algorithms such as those used in LAPACK will be discussed. Our algorithm, for complex symmetric matrices such as those describing cubic resonances after complex scaling, is faster than LAPACK's built-in routines for specific classes of input matrices. It also offers flexibility in terms of the calculation of the so-called implicit shift, which is used in order to ``pivot'' the system toward convergence to diagonal form. We conclude with a wider overview.
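The structural point behind the generalized Householder transformations is that a complex symmetric (non-Hermitian) matrix can be tridiagonalized with complex-orthogonal reflectors built on the bilinear form v^T v rather than the Hermitian v^† v. A minimal double-precision sketch (without the multi-precision arithmetic or breakdown handling of the paper):

```python
import numpy as np

def tridiagonalize_complex_symmetric(A):
    """Reduce a complex symmetric matrix (A == A.T, generally non-Hermitian) to
    tridiagonal form with 'generalized' Householder reflectors
    P = I - 2 v v^T / (v^T v), which are complex orthogonal (P^T P = I) and
    therefore preserve both the symmetry and the spectrum of A."""
    A = np.array(A, dtype=complex)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k].copy()
        alpha = np.sqrt(np.dot(x, x))       # bilinear "norm": no conjugation
        if abs(alpha) < 1e-14:
            continue
        if abs(x[0] - alpha) < abs(x[0] + alpha):
            alpha = -alpha                  # sign choice avoids cancellation
        v = x
        v[0] -= alpha
        vtv = np.dot(v, v)
        if abs(vtv) < 1e-14:                # quasi-null vector: breakdown,
            continue                        # ignored in this sketch
        P = np.eye(n, dtype=complex)
        P[k + 1:, k + 1:] -= 2.0 * np.outer(v, v) / vtv
        A = P @ A @ P
    return A

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
A = B + B.T                                 # complex symmetric, not Hermitian
T = tridiagonalize_complex_symmetric(A)
```

Because P is symmetric and complex orthogonal, P A P stays complex symmetric and has the same eigenvalues as A; the reflectors can break down when v^T v ≈ 0, which is one reason the paper resorts to multi-precision arithmetic.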
The Raman Spectrum of the Squarate (C4O4)2- Anion: An Ab Initio Basis Set Dependence Study
Directory of Open Access Journals (Sweden)
Miranda Sandro G. de
2002-01-01
Full Text Available The Raman excitation profile of the squarate anion, (C4O4)2-, was calculated using ab initio methods at the Hartree-Fock level with Linear Response Theory (LRT) for six excitation wavelengths: 632.5, 514.5, 488.0, 457.9, 363.8 and 337.1 nm. Five basis sets (6-31G*, 6-31+G*, cc-pVDZ, aug-cc-pVDZ and Sadlej's polarizability basis set) were investigated, aiming to evaluate the performance of the 6-31G* set for numerical convergence and computational cost relative to the larger basis sets. All basis sets reproduce the main spectroscopic features of the Raman spectrum of this anion over the excitation interval investigated. The 6-31G* basis set presented, on average, the same accuracy of numerical results as the larger sets but at a fraction of the computational cost, showing that it is suitable for the theoretical investigation of the squarate dianion and its complexes and derivatives.
Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland
2009-04-01
Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
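The numerical device underlying this family of methods, an incomplete (pivoted) Cholesky decomposition that stops once the residual diagonal falls below a threshold, can be sketched on a generic positive semidefinite matrix standing in for the atomic two-electron integral matrix. The decaying test spectrum is an illustrative assumption:

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Incomplete Cholesky with full pivoting of a positive semidefinite matrix.
    Stops once the largest residual diagonal element drops below tol, so the
    number of returned columns (the 'auxiliary' vectors) adapts to M."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    d = np.diag(M).copy()                  # residual diagonal of M - L L^T
    L = np.zeros((n, 0))
    while d.max() > tol:
        p = int(np.argmax(d))
        col = (M[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        L = np.hstack([L, col[:, None]])
        d -= col ** 2
        d[p] = 0.0                         # guard against round-off
    return L

# Stand-in for an integral matrix: PSD with a fast-decaying spectrum.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
M = (Q * 10.0 ** -np.arange(20)) @ Q.T
L = pivoted_cholesky(M, tol=1e-6)
```

For a PSD residual R = M - L L^T with diagonal below tol, every entry satisfies |R_ij| ≤ sqrt(R_ii R_jj) ≤ tol, so the threshold directly controls the fitting error, which is the property exploited when the decomposition is used to generate auxiliary basis sets.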
Optimization of the nuclear power engineering safety on the basis of social and economic parameters
International Nuclear Information System (INIS)
Kozlov, V.F.; Kuz'min, I.I.; Lystsov, V.N.; Amosova, T.V.; Makhutov, N.A.; Men'shikov, V.F.
1995-01-01
The principle of optimizing nuclear power engineering safety is presented on the basis of estimating risks to human health, taking into account the peculiarities of the socio-economic system and other types of economic activity in the region. The average expected duration of remaining life and the cost of its prolongation serve as units for measuring human safety. It is shown that if the expenditures on NPP technical safety exceed the scientifically substantiated costs for the region obtained with the above principle, then the risk to the population will exceed the minimum achievable level. 8 refs., 2 figs., 1 tab
Model of Optimal Collision Avoidance Manoeuvre on the Basis of Electronic Data Collection
Directory of Open Access Journals (Sweden)
Jelenko Švetak
2005-11-01
Full Text Available The results of the data analyses show that accidents mostly include damages to the ship's hull and collisions. Generally all accidents of ships can be divided into two basic categories. First, accidents in which measures for damage control should be taken immediately, and second, those which require a little more patient reaction. The very fact that collisions belong to the first category provided the incentive for writing the current paper. The proposed model of optimal collision avoidance manoeuvre of ships on the basis of electronic data collection was made by means of the navigation simulator NTPRO-1000, Transas manufacturer, Russian Federation.
Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong
2018-03-01
With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed and flexibility of the relay protection are selected as the four requirements for establishing the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting value results of the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values in the relay protection of the whole power system.
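The search mechanics of a plain particle swarm optimizer, the baseline the paper improves upon, fit in a short sketch. The sphere objective stands in for the protection-setting cost, and the inertia/acceleration coefficients are common textbook defaults, not values from the paper:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Plain particle swarm optimization: each velocity blends inertia, a pull
    toward the particle's own best point, and a pull toward the swarm's best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5              # common textbook coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Toy stand-in for a protection-setting cost: minimum 0 at the origin.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=4)
```

In the relay-protection application the objective would aggregate the reliability, selectivity, speed and flexibility requirements into one cost over the vector of setting values, subject to coordination constraints.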
Optimality Conditions in Differentiable Vector Optimization via Second-Order Tangent Sets
International Nuclear Information System (INIS)
Jimenez, Bienvenido; Novo, Vicente
2004-01-01
We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given
Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2007-12-01
The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDSs into archetypes, outline typical problems with CDSs and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDSs as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In the case of the Paediatric Oncology CDS (which consists of 260 data items), this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDSs as archetypes, we identified nine problems with CDSs that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for the ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDSs.
International Nuclear Information System (INIS)
Keppler, Jan Horst; Meunier, William; Coquentin, Alexandre
2017-01-01
Interconnections for cross-border electricity flows are at the heart of the project to create a common European electricity market. At the same time, increased production from variable renewables, clustered during a limited number of hours, reduces the availability of existing transport infrastructure. This calls for higher levels of optimal interconnection capacity than in the past. In complement to existing scenario-building exercises such as the TYNDP that respond to the challenge of determining optimal levels of infrastructure provision, the present paper proposes a new empirically based methodology for performing cost-benefit analysis to determine optimal interconnection capacity, using the French-German cross-border trade as an example. Using a very fine dataset of hourly supply and demand curves (aggregated auction curves) for the year 2014 from the EPEX Spot market, it constructs linearized net export curves (NEC) and net import demand curves (NIDC) for both countries. This allows assessing, hour by hour, the welfare impacts of incremental increases in interconnection capacity. Summing these welfare increases over the 8 760 hours of the year provides the annual total for each step increase of interconnection capacity. Confronting welfare benefits with the annual cost of augmenting interconnection capacity indicates the socially optimal increase in interconnection capacity between France and Germany on the basis of empirical market micro-data. (authors)
Directory of Open Access Journals (Sweden)
Lagerev I.A.
2016-12-01
Full Text Available This paper presents mathematical models of the main types of rotary hydraulic engines that are at present widely used in the handling systems of domestic and foreign mobile transport-technological machines with wide functionality. The models take into consideration the efficiency criteria most significant for ensuring high technical-economic indicators of hydraulic engines: minimum mass (weight), volume and power losses. On the basis of these mathematical models, the problem of multicriteria constrained optimization of the constructive dimensions of rotary hydraulic engines is formulated, subject to complex constructive, strength and deformation limits. This makes it possible to develop hydraulic engines with an optimized design that takes the efficiency criteria into account in a comprehensive manner. The multicriteria optimization problem is universal in nature, so the design of rotary hydraulic engines allows one-, two- and three-criteria optimization without any changes in the solution algorithm. This is a significant advantage for the development of universal software for the automated design of mobile transport-technological machines.
Optimal projection of observations in a Bayesian setting
Giraldi, Loic; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar
2018-01-01
, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using
Basis set effects on the energy and hardness profiles of the ...
Indian Academy of Sciences (India)
Unknown
Keywords: maximum hardness principle (MHP); spurious stationary points; hydrogen fluoride dimer.
Optimizing Distributed Machine Learning for Large Scale EEG Data Set
Directory of Open Access Journals (Sweden)
M Bilal Shaikh
2017-06-01
Full Text Available Distributed Machine Learning (DML) has gained importance more than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit; however, increasing the number of machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We have conducted an empirical study which uses an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential), which is widely used in BCI problems; it helps in translating the intention of a subject while performing any cognitive task. EEG data contains noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Use of machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV classifier. To present a case for optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that can improve the average accuracy of distributed learners. Our results show better average AUC compared with the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning. The improvement in average accuracy of the distributed learner is due to the domain-specific intelligent partitioning by the user. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.
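The contrast between random and domain-aware partitioning can be illustrated with a toy grouper that keeps all trials of one recording session in the same chunk, the kind of domain knowledge a random shuffle-and-split discards. The session keys and chunk count are illustrative, not the paper's EEG pipeline:

```python
from collections import defaultdict

def partition_by_session(samples, n_chunks):
    """Domain-aware partitioning: all trials of a session stay in one chunk,
    unlike the random shuffle-and-split that distributed frameworks default to."""
    by_session = defaultdict(list)
    for sample in samples:
        by_session[sample["session"]].append(sample)
    chunks = [[] for _ in range(n_chunks)]
    # greedily place whole sessions onto the currently smallest chunk
    for session in sorted(by_session.values(), key=len, reverse=True):
        min(chunks, key=len).extend(session)
    return chunks

# Hypothetical data: 5 recording sessions of 10 trials each.
samples = [{"session": s, "trial": t} for s in "ABCDE" for t in range(10)]
chunks = partition_by_session(samples, n_chunks=2)
```

Each distributed worker then trains on chunks whose trials share session-level characteristics, which is the kind of control the study argues improves average AUC over uncontrolled random distribution.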
International Nuclear Information System (INIS)
Caravaca, M A; Casali, R A
2005-01-01
The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of the P21/c, Pbca, Pnma, Fm3m, P42nmc and Pa3 phases of HfO2. Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and a minimal basis set (single zeta, SZ) improve calculated phase transition pressures with respect to the double-zeta basis set with LDA (DZ-LDA), and show accuracy similar to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state-of-the-art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter in the range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found non-linear behaviour of the lattice parameters with pressure in the P21/c phase, showing a discontinuity in the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give transition pressures of 3.3 and 10.8 GPa for P21/c → Pbca and Pbca → Pnma, respectively, in accordance with different high-pressure experimental values
International Nuclear Information System (INIS)
Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon
2015-01-01
The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF− and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN+, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects.
Hill, J. Grant; Peterson, Kirk A.
2017-12-01
New correlation consistent basis sets based on pseudopotential (PP) Hamiltonians have been developed from double- to quintuple-zeta quality for the late alkali (K-Fr) and alkaline earth (Ca-Ra) metals. These are accompanied by new all-electron basis sets of double- to quadruple-zeta quality that have been contracted for use with both Douglas-Kroll-Hess (DKH) and eXact 2-Component (X2C) scalar relativistic Hamiltonians. Sets for valence correlation (ms), cc-pVnZ-PP and cc-pVnZ-(DK,DK3/X2C), in addition to outer-core correlation [valence + (m-1)sp], cc-p(w)CVnZ-PP and cc-pwCVnZ-(DK,DK3/X2C), are reported. The -PP sets have been developed for use with small-core PPs [I. S. Lim et al., J. Chem. Phys. 122, 104103 (2005) and I. S. Lim et al., J. Chem. Phys. 124, 034107 (2006)], while the all-electron sets utilized second-order DKH Hamiltonians for 4s and 5s elements and third-order DKH for 6s and 7s. The accuracy of the basis sets is assessed through benchmark calculations at the coupled-cluster level of theory for both atomic and molecular properties. Not surprisingly, it is found that outer-core correlation is vital for accurate calculation of the thermodynamic and spectroscopic properties of diatomic molecules containing these elements.
Topology optimization problems with design-dependent sets of constraints
DEFF Research Database (Denmark)
Schou, Marie-Louise Højlund
Topology optimization is a design tool which is used in numerous fields. It can be used whenever the design is driven by weight and strength considerations. The basic concept of topology optimization is the interpretation of partial differential equation coefficients as effective material properties and designing through changing these coefficients. For example, consider a continuous structure. Then the basic concept is to represent this structure by small pieces of material that are coinciding with the elements of a finite element model of the structure. This thesis treats stress constrained structural topology optimization problems. For such problems a stress constraint for an element should only be present in the optimization problem when the structural design variable corresponding to this element has a value greater than zero. We model the stress constrained topology optimization problem
Simulating the oxygen content of ambient organic aerosol with the 2D volatility basis set
Directory of Open Access Journals (Sweden)
B. N. Murphy
2011-08-01
Full Text Available A module predicting the oxidation state of organic aerosol (OA) has been developed using the two-dimensional volatility basis set (2D-VBS) framework. This model is an extension of the 1D-VBS framework and tracks the saturation concentration and oxygen content of organic species during their atmospheric lifetime. The host model, a one-dimensional Lagrangian transport model, is used to simulate air parcels arriving at Finokalia, Greece during the Finokalia Aerosol Measurement Experiment in May 2008 (FAME-08). Extensive observations were collected during this campaign using an aerosol mass spectrometer (AMS) and a thermodenuder to determine the chemical composition and volatility, respectively, of the ambient OA. Although there are several uncertain model parameters, the consistently high oxygen content of OA measured during FAME-08 (O:C = 0.8) can help constrain these parameters and elucidate OA formation and aging processes that are necessary for achieving the high degree of oxygenation observed. The base-case model reproduces observed OA mass concentrations (measured mean = 3.1 μg m⁻³, predicted mean = 3.3 μg m⁻³) and O:C (predicted O:C = 0.78) accurately. A suite of sensitivity studies explores uncertainties due to (1) the anthropogenic secondary OA (SOA) aging rate constant, (2) assumed enthalpies of vaporization, (3) the volatility change and number of oxygen atoms added for each generation of aging, (4) heterogeneous chemistry, (5) the oxidation state of the first generation of compounds formed from SOA precursor oxidation, and (6) biogenic SOA aging. Perturbations in most of these parameters do impact the ability of the model to predict O:C well throughout the simulation period. By comparing measurements of the O:C from FAME-08, several sensitivity cases including a high oxygenation case, a low oxygenation case, and a biogenic SOA aging case are found to unreasonably depict OA aging, keeping in mind that this study does not consider
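The bookkeeping at the heart of the 2D-VBS, tracking each surrogate species by its volatility decade and oxygen number and shifting both with every oxidation generation, reduces to simple arithmetic. The decade shift and oxygens added per generation below are illustrative assumptions, not the paper's fitted values:

```python
def age(species, generations, decades_per_step=1, oxygens_per_step=2):
    """Toy 2D-VBS aging step: each oxidation generation lowers the volatility
    by a fixed number of log10(C*) decades and adds a fixed number of oxygens."""
    log_cstar, n_oxygen, n_carbon = species
    for _ in range(generations):
        log_cstar -= decades_per_step
        n_oxygen += oxygens_per_step
    return log_cstar, n_oxygen, n_carbon

# Hypothetical C10 oxidation product starting at C* = 10^3 ug m-3 with one oxygen.
log_cstar, n_oxygen, n_carbon = age((3, 1, 10), generations=3)
o_to_c = n_oxygen / n_carbon
```

Three generations of this toy aging drive the species to low volatility and an O:C of 0.7, illustrating why the number of oxygens added per generation (sensitivity case 3 above) strongly controls whether the model can reach the observed O:C ≈ 0.8.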
DEFF Research Database (Denmark)
Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.
2016-01-01
The aim is to provide a fast and reliable approach to estimate ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large amount of computationally expensive simulations, which cannot be efficiently run at each cost function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs with each of the two blade designs are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar...
Simulation-based robust optimization for signal timing and setting.
2009-12-30
The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...
Setting of the Optimal Parameters of Melted Glass
Czech Academy of Sciences Publication Activity Database
Luptáková, Natália; Matejíčka, L.; Krečmer, N.
2015-01-01
Roč. 10, č. 1 (2015), s. 73-79 ISSN 1802-2308 Institutional support: RVO:68081723 Keywords : Striae * Glass * Glass melting * Regression * Optimal parameters Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass
Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems
Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric
We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large, and thus allows one to investigate problems that are out of reach of other approaches. First, we test our method on the nonequilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals rich physics, including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.
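The idea behind local basis optimization can be illustrated in a few lines of linear algebra: diagonalize the local reduced density matrix of the boson mode and keep only its dominant eigenvectors. A toy numpy sketch on a random state with decaying boson occupations (illustrative only, not the authors' TEBD implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d_e, d_b = 8, 30   # electronic states x bare boson occupation states
amp = rng.normal(size=(d_e, d_b)) * np.exp(-0.5 * np.arange(d_b))
psi = amp / np.linalg.norm(amp)          # normalized toy wave function

rho_b = psi.T @ psi                      # local reduced density matrix of the boson mode
w, v = np.linalg.eigh(rho_b)             # eigh returns ascending eigenvalues
order = np.argsort(w)[::-1]
w, v = w[order], v[:, order]

k = 4
basis = v[:, :k]                         # optimized local basis (dominant eigenvectors)
coeff = psi @ basis                      # wave function expressed in the truncated basis
weight = float(np.sum(coeff ** 2))       # probability retained by the k optimal states
```

The retained weight equals the sum of the kept density-matrix eigenvalues, which is exactly the quantity LBO maximizes for a given truncated dimension.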
Czech Academy of Sciences Publication Activity Database
Zahradník, Rudolf; Šroubková, Libuše
2005-01-01
Roč. 104, č. 1 (2005), s. 52-63 ISSN 0020-7608 Institutional research plan: CEZ:AV0Z40400503 Keywords : intermolecular complexes * van der Waals species * ab initio calculations * complete basis set values * estimates Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.192, year: 2005
Varandas, António J. C.
2018-04-01
Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
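As a concrete instance of such schemes, the widely used two-point inverse-cubic extrapolation of the correlation energy can be sketched as follows (a generic textbook formula, not specific to the protocols reviewed here; the synthetic energies are illustrative):

```python
def cbs_two_point(e_x, x, e_y, y, power=3):
    """Two-point inverse-power extrapolation, assuming E_X = E_CBS + A / X**power.

    power=3 is the common choice for correlation energies in
    correlation-consistent basis sets of cardinal numbers X and Y.
    """
    px, py = x ** power, y ** power
    return (px * e_x - py * e_y) / (px - py)

# synthetic data obeying the model exactly: E_X = -76.30 + 0.05 / X**3
e_cbs_true, a = -76.30, 0.05
e3 = e_cbs_true + a / 3**3   # "triple-zeta" energy
e4 = e_cbs_true + a / 4**3   # "quadruple-zeta" energy
e_extrap = cbs_two_point(e3, 3, e4, 4)
```

For data that follow the assumed inverse-cubic form, the extrapolation recovers the basis set limit exactly; for real correlation energies it removes most of the residual basis set incompleteness error.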
Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan
2013-09-26
We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.
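The structure of an atom-pairwise dispersion correction of the DFT-D type can be sketched as below. The damping form and every parameter value here are illustrative stand-ins, not the published D3 parameterization:

```python
import numpy as np

def e_disp(coords, c6, r0, s6=1.0):
    """Pairwise London-dispersion correction, E = -s6 * sum_ij C6_ij * f(r_ij) / r_ij^6,
    with a Becke-Johnson-style rational damping f(r) = r^6 / (r^6 + r0_ij^6),
    so each pair contributes -s6 * C6_ij / (r^6 + r0_ij^6)."""
    coords = np.asarray(coords, float)
    n = len(coords)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r6 = np.linalg.norm(coords[i] - coords[j]) ** 6
            c6ij = (c6[i] * c6[j]) ** 0.5      # geometric-mean combination rule
            r0ij6 = (0.5 * (r0[i] + r0[j])) ** 6
            e -= s6 * c6ij / (r6 + r0ij6)
    return e

# hypothetical parameters for a pair of carbon-like atoms (atomic units)
c6 = [46.6, 46.6]
r0 = [3.0, 3.0]
e_near = e_disp([[0, 0, 0], [0, 0, 3.0]], c6, r0)
e_far = e_disp([[0, 0, 0], [0, 0, 6.0]], c6, r0)
```

The correction is always attractive and decays toward the expected -C6/r^6 behavior at large separation, while the damping keeps it finite at short range.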
Hutter, Jürg
2003-03-01
An efficient formulation of time-dependent linear response density functional theory for the use within the plane wave basis set framework is presented. The method avoids the transformation of the Kohn-Sham matrix into the canonical basis and references virtual orbitals only through a projection operator. Using a Lagrangian formulation nuclear derivatives of excited state energies within the Tamm-Dancoff approximation are derived. The algorithms were implemented into a pseudo potential/plane wave code and applied to the calculation of adiabatic excitation energies, optimized geometries and vibrational frequencies of three low lying states of formaldehyde. An overall good agreement with other time-dependent density functional calculations, multireference configuration interaction calculations and experimental data was found.
International Nuclear Information System (INIS)
Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.; Genovese, Luigi
2017-01-01
We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the usage of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.
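For a fragment represented by point charges, the lowest electrostatic multipoles reduce to simple sums. A minimal sketch (point-charge model only, not the first-principles population analysis of the paper):

```python
import numpy as np

def fragment_multipoles(charges, coords):
    """Monopole (net charge) and dipole moment of a fragment of point charges.

    The dipole p = sum_i q_i r_i is taken about the coordinate origin; it is
    origin-independent only when the monopole vanishes.
    """
    q = np.asarray(charges, float)
    r = np.asarray(coords, float)
    monopole = float(q.sum())
    dipole = (q[:, None] * r).sum(axis=0)
    return monopole, dipole

# ideal point dipole: +1 and -1 charges separated by 1 unit along z
mono, dip = fragment_multipoles([1.0, -1.0], [[0, 0, 0.5], [0, 0, -0.5]])
```

Replacing an environmental fragment by such a multipole series is the coarse-graining step described in the abstract; higher moments (quadrupole, ...) follow the same pattern.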
Typed Sets as a Basis for Object-Oriented Database Schemas
Balsters, H.; de By, R.A.; Zicari, R.
The object-oriented data model TM is a language that is based on the formal theory of FM, a typed language with object-oriented features such as attributes and methods in the presence of subtyping. The general (typed) set constructs of FM allow one to deal with (database) constraints in TM. The
Many-Body Energy Decomposition with Basis Set Superposition Error Corrections.
Mayer, István; Bakó, Imre
2017-05-09
The problem of performing many-body decompositions of the energy when BSSE corrections are also applied is considered. The two different schemes that have been proposed are shown to go back to the two different interpretations of the original Boys-Bernardi counterpoise correction scheme. It is argued that, from the physical point of view, the "hierarchical" scheme of Valiron and Mayer should be preferred over the scheme recently discussed by Ouyang and Bettens, because it permits the energy of the individual monomers and all the two-body, three-body, etc. energy components to be free of unphysical dependence on the arrangement (basis functions) of the other subsystems in the cluster.
Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set
Directory of Open Access Journals (Sweden)
M Barezi
2011-03-01
Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, the eigenenergies and eigenfunctions of a hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, which can easily construct trial wave functions with appropriate boundary conditions. The main characteristics of B-splines are their high localization and their flexibility. Besides, these functions are numerically stable and can handle a high volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, extensive convergence tests of the eigenenergies at different cavity sizes have been carried out.
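In any fixed basis, the linear variational method described here reduces to a generalized eigenvalue problem H c = E S c. A minimal sketch for a particle confined to a 1D box (hbar = m = 1), using polynomial functions that vanish at the walls in place of B-splines (illustrative only; the B-spline case differs only in the basis functions and the radial integrals):

```python
import numpy as np
from scipy.linalg import eigh

# Confinement interval [0, 1]; basis f_n(x) = x^n (1 - x) vanishes at both
# walls, enforcing the hard-wall boundary condition just as B-splines would.
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
w = np.full_like(x, dx)
w[0] = w[-1] = 0.5 * dx                  # trapezoidal quadrature weights

nmax = 6
f = np.array([x**n * (1 - x) for n in range(1, nmax + 1)])
df = np.array([n * x**(n - 1) * (1 - x) - x**n for n in range(1, nmax + 1)])

# overlap S_ij = <f_i|f_j>; kinetic energy via integration by parts,
# T_ij = (1/2) <f_i'|f_j'>, valid because f vanishes at the boundary
S = np.array([[np.sum(w * fi * fj) for fj in f] for fi in f])
T = np.array([[0.5 * np.sum(w * dfi * dfj) for dfj in df] for dfi in df])

energies = eigh(T, S, eigvals_only=True)  # generalized eigenproblem H c = E S c
```

The lowest eigenvalue converges to the exact confined ground-state energy pi^2/2 ≈ 4.9348; enlarging the basis (here, nmax) is the convergence test the article describes.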
Efficient G0W0 using localized basis sets: a benchmark for molecules
Koval, Petr; Per Ljungberg, Mathias; Sanchez-Portal, Daniel
Electronic structure calculations within Hedin's GW approximation are becoming increasingly accessible to the community. In particular, as has been shown earlier and as we confirm by calculations using our MBPT_LCAO package, the computational cost of the so-called G0W0 can be made comparable to the cost of a regular Hartree-Fock calculation. In this work, we study the performance of our new implementation of G0W0 in reproducing the ionization potentials of all 117 closed-shell molecules belonging to the G2/97 test set, using a pseudo-potential starting point provided by the popular density-functional package SIESTA. Moreover, the ionization potentials and electron affinities of a set of 24 acceptor molecules are compared to experiment and to reference all-electron calculations. PK: Guipuzcoa Fellow; PK,ML,DSP: Deutsche Forschungsgemeinschaft (SFB1083); PK,DSP: MINECO MAT2013-46593-C6-2-P.
The prefabricated building risk decision research of DM technology on the basis of Rough Set
Guo, Z. L.; Zhang, W. B.; Ma, L. H.
2017-08-01
With resource crises and increasingly serious pollution, green building has been strongly advocated by most countries and has become a new building style in the construction field. Compared with traditional building, prefabricated building has its own irreplaceable advantages but is influenced by many uncertainties. So far, most scholars around the world have studied it through qualitative research. This paper expounds the significance of prefabricated building. Building on existing research methods and combining them with rough set theory, it redefines the factors that affect prefabricated building risk. Moreover, it quantifies the risk factors and establishes an expert knowledge base through assessment. The redundant attributes and attribute values of the risk factors are then reduced, finally forming the simplest decision rules. These decision rules, based on the DM technology of rough set theory, provide prefabricated building with a new, controllable decision-making method.
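The rough-set machinery referred to here (indiscernibility classes, positive region, dispensable attributes) can be sketched on a toy decision table; all attribute names and values below are hypothetical:

```python
from collections import defaultdict

def positive_region(rows, attrs, decision):
    """Objects whose indiscernibility class w.r.t. `attrs` is consistent,
    i.e. every object in the class carries the same decision value."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    pos = set()
    for members in classes.values():
        if len({rows[i][decision] for i in members}) == 1:
            pos.update(members)
    return pos

def dispensable(rows, attrs, attr, decision):
    """An attribute is dispensable (redundant) if dropping it leaves the
    positive region unchanged -- the basis of attribute reduction."""
    reduced = [a for a in attrs if a != attr]
    return positive_region(rows, reduced, decision) == positive_region(rows, attrs, decision)

# hypothetical prefabricated-building risk table (names illustrative)
rows = [
    {"cost": "hi", "sched": "tight", "qual": "lo", "risk": "high"},
    {"cost": "hi", "sched": "tight", "qual": "hi", "risk": "high"},
    {"cost": "lo", "sched": "loose", "qual": "hi", "risk": "low"},
    {"cost": "lo", "sched": "tight", "qual": "lo", "risk": "high"},
]
attrs = ["cost", "sched", "qual"]
```

Dropping every dispensable attribute in turn, and then merging rules with identical condition parts, yields the "simplest decision rules" the paper refers to.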
Nagata, Takeshi; Iwata, Suehiro
2004-02-22
The locally projected self-consistent field molecular orbital method for molecular interaction (LP SCF MI) is reformulated for multifragment systems. For the perturbation expansion, two types of local excited orbitals are defined: one is fully local in the basis set on a fragment, and the other has to be partially delocalized to the basis sets on the other fragments. The perturbation expansion calculations only within single excitations (LP SE MP2) are tested for the water dimer, the hydrogen fluoride dimer, and collinear symmetric ArM+Ar (M = Na and K). The calculated binding energies of LP SE MP2 are all close to the corresponding counterpoise-corrected SCF binding energies. By adding the single excitations, the deficiency in LP SCF MI is thus removed. The results suggest that the exclusion of the charge-transfer effects in LP SCF MI might indeed be the cause of the underestimation of the binding energy. (c) 2004 American Institute of Physics.
Depression screening optimization in an academic rural setting.
Aleem, Sohaib; Torrey, William C; Duncan, Mathew S; Hort, Shoshana J; Mecchella, John N
2015-01-01
Primary care plays a critical role in screening and management of depression. The purpose of this paper is to focus on leveraging the electronic health record (EHR) as well as work flow redesign to improve the efficiency and reliability of the process of depression screening in two adult primary care clinics of a rural academic institution in the USA. The authors utilized various process improvement tools from lean six sigma methodology, including a project charter, swim lane process maps, a critical-to-quality tree, process control charts, fishbone diagrams, a frequency impact matrix, mistake proofing and a monitoring plan, in Define-Measure-Analyze-Improve-Control format. Interventions included a change in the depression screening tool, optimization of EHR data entry, follow-up of positive screens, staff training and EHR redesign. The depression screening rate for office-based primary care visits improved from 17.0 percent at baseline to 75.9 percent in the post-intervention control phase (p<0.001). Follow-up of positive depression screens with Patient Health Questionnaire-9 data collection remained above 90 percent. Duplication of depression screening increased from 0.6 percent initially to 11.7 percent and then decreased to 4.7 percent after optimization of data entry by patients and flow staff. The impact of the interventions on clinical outcomes could not be evaluated. Successful implementation, sustainability and revision of a process improvement initiative to facilitate screening, follow-up and management of depression in primary care requires accounting for the voice of the process (performance metrics), system limitations and the voice of the customer (staff and patients) to overcome various system, customer and human resource constraints.
Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin
2011-06-07
The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics
Compromise, Optimal and Tractional Accounts on Pareto Set
Directory of Open Access Journals (Sweden)
V. V. Lahuta
2010-11-01
The problem of optimum traction calculations is considered as a problem of optimum distribution of a resource. The dynamic programming solution is based on a step-by-step calculation of the set of Pareto-optimal values of a criterion function (energy expenses) and a resource (time).
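The step-by-step construction relies on keeping only Pareto-optimal (non-dominated) pairs of criterion and resource values at each stage. A minimal non-domination filter over hypothetical (energy, time) pairs:

```python
def pareto_front(points):
    """Non-dominated subset when every criterion is minimized.

    q dominates p if q is no worse in every component and differs from p.
    """
    def dominates(q, p):
        return all(qa <= pa for qa, pa in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (energy expense, running time) pairs for candidate traction regimes
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)
```

In a dynamic-programming traction calculation, this filter would be applied after every stage so that only trade-off-optimal (energy, time) states are carried forward.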
International Nuclear Information System (INIS)
Hollauer, E.; Nascimento, M.A.C.
1985-01-01
The photoionization cross-section and dynamic polarizability of the lithium atom are calculated using a discrete basis set to represent both the bound and continuum states of the atom and to construct an approximation to the complex dynamic polarizability. From the imaginary part of the complex dynamic polarizability one extracts the photoionization cross-section, and from its real part the dynamic polarizability. The results are in good agreement with experiment and with other more elaborate calculations. (Author)
Czech Academy of Sciences Publication Activity Database
Kupka, T.; Nieradka, M.; Stachów, M.; Pluta, T.; Nowak, P.; Kjaer, H.; Kongsted, J.; Kaminský, Jakub
2012-01-01
Roč. 116, č. 14 (2012), s. 3728-3738 ISSN 1089-5639 R&D Projects: GA ČR GPP208/10/P356 Institutional research plan: CEZ:AV0Z40550506 Keywords : consistent basis-sets * density-functional methods * ab-inition calculations * polarization propagator approximation Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.771, year: 2012
International Nuclear Information System (INIS)
Bykov, V.P.; Gerasimov, A.V.
1992-08-01
A new variational method without a basis set for calculating the eigenvalues and eigenfunctions of Hamiltonians is suggested. The extension of this method to Coulomb potentials is given. The energy and charge distribution in the two-electron system are calculated for different values of the nuclear charge Z. It is shown that at small Z the Coulomb forces disintegrate the electron cloud into two clots. (author). 3 refs, 4 figs, 1 tab
Optimization models using fuzzy sets and possibility theory
Orlovski, S
1987-01-01
Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But also in other areas such as engineering design, regional policy, logistics and many others, the search for optimal solutions is one of the prime goals. The methods and models which have been used over the last decades in these areas have primarily been "hard" or "crisp", i.e. the solutions were considered to be either feasible or unfeasible, either above a certain aspiration level or below. This dichotomous structure of methods very often forced the modeller to approximate real problem situations of the more-or-less type by yes-or-no-type models, the solutions of which might turn out not to be the solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, uncertainty due to inconsistent or incomplete evidence, if natural language has to be...
Approximating the Pareto set of multiobjective linear programs via robust optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a
Kolmann, Stephen J.; Jordan, Meredith J. T.
2010-02-01
One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol-1 at the CCSD(T)/6-31G∗ level of theory, has a 4 kJ mol-1 dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol-1 lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol-1 lower in energy at the CCSD(T)/6-31G∗ level of theory. Ideally, for sub-kJ mol-1 thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.
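For reference, the harmonic ZPE that the anharmonic values are compared against is just half the sum of the vibrational quanta. A one-liner with an illustrative frequency (not the SSSH values):

```python
CM1_TO_KJ_MOL = 0.0119627  # 1 cm^-1 expressed in kJ/mol (h * c * N_A)

def harmonic_zpe(freqs_cm1):
    """Harmonic zero-point energy, ZPE = (1/2) * sum_i h*nu_i, in kJ/mol,
    given harmonic frequencies in cm^-1."""
    return 0.5 * sum(freqs_cm1) * CM1_TO_KJ_MOL

zpe = harmonic_zpe([1000.0])  # a single hypothetical 1000 cm^-1 mode
```

The 0.3-0.7 kJ/mol harmonic-versus-anharmonic differences quoted above are corrections on top of this simple sum.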
Optimizing distance-based methods for large data sets
Scholl, Tobias; Brenner, Thomas
2015-10-01
Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
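The bounded-memory idea can be sketched by accumulating a histogram of pairwise distances block by block instead of materializing the full n x n distance matrix (a generic sketch, not the published algorithm):

```python
import numpy as np

def distance_histogram(points, bins, block=256):
    """Histogram of all pairwise distances, computed block-by-block so peak
    memory is O(block * n) rather than O(n^2) for the full distance matrix."""
    pts = np.asarray(points, float)
    n = len(pts)
    hist = np.zeros(len(bins) - 1, dtype=np.int64)
    for start in range(0, n, block):
        chunk = pts[start:start + block]
        d = np.linalg.norm(chunk[:, None, :] - pts[None, :, :], axis=-1)
        for k, row in enumerate(d):
            hist += np.histogram(row[start + k + 1:], bins=bins)[0]  # each pair once
    return hist

rng = np.random.default_rng(1)
pts = rng.random((60, 2))
bins = np.linspace(0.0, 1.5, 16)
h_blocked = distance_histogram(pts, bins, block=7)
```

Kernel-density variants of the D&O-Index can be built the same way, replacing the histogram accumulation with a kernel evaluation per block.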
Directory of Open Access Journals (Sweden)
Odintsova Tetiana M.
2017-04-01
The article is aimed at studying the optimal factors in the formation of population savings as the basis for investment resources of the regional economy. A factorial (nonlinear) correlative-regression analysis of the formation of savings of the population of Ukraine was completed. On its basis, a forecast of the optimal structure and volume of population income formation was carried out, taking into consideration the impact of fundamental factors on these incomes. Such an approach makes it possible to identify the marginal volumes of tax burden, population savings, and capital investments directed to economic growth.
Three-body problem in quantum mechanics: Hyperspherical elliptic coordinates and harmonic basis sets
International Nuclear Information System (INIS)
Aquilanti, Vincenzo; Tonzani, Stefano
2004-01-01
Elliptic coordinates within the hyperspherical formalism for three-body problems were proposed some time ago [V. Aquilanti, S. Cavalli, and G. Grossi, J. Chem. Phys. 85, 1362 (1986)] and recently have also found application, for example, in chemical reaction theory [see O. I. Tolstikhin and H. Nakamura, J. Chem. Phys. 108, 8899 (1998)]. Here we consider their role in providing a smooth transition between the known 'symmetric' and 'asymmetric' parametrizations, and focus on the corresponding hyperspherical harmonics. These harmonics, which will be called hyperspherical elliptic, involve products of two associated Lame polynomials. We will provide an expansion of these new sets in a finite series of standard hyperspherical harmonics, producing a powerful tool for future applications in the field of scattering and bound-state quantum-mechanical three-body problems
Perturbing engine performance measurements to determine optimal engine control settings
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2014-12-30
Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
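The perturb-measure-adjust loop in the claim can be sketched as a central-difference extremum-seeking iteration on a hypothetical performance map (all names and numbers below are illustrative, not from the patent):

```python
def tune_parameter(measure, theta0, delta=0.05, gain=0.5, iters=50):
    """Artificially perturb the control parameter, estimate the local slope of
    the measured performance variable, and step toward the optimum."""
    theta = theta0
    for _ in range(iters):
        slope = (measure(theta + delta) - measure(theta - delta)) / (2 * delta)
        theta += gain * slope            # ascend toward peak performance
    return theta

# hypothetical performance map: efficiency peaks at a setting of 1.3 (arbitrary units)
performance = lambda t: -(t - 1.3) ** 2
best = tune_parameter(performance, theta0=0.0)
```

Real extremum-seeking controllers use a continuous sinusoidal perturbation and demodulation rather than paired probes, but the gradient-from-perturbation principle is the same.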
A set of rules for constructing an admissible set of D optimal exact ...
African Journals Online (AJOL)
In the search for a D-optimal exact design using the combinatorial iterative technique introduced by Onukogu and Iwundu, 2008, all the support points that make up the experimental region are grouped into H concentric balls according to their distances from the centre. Any selection of N support points from the balls defines ...
Directory of Open Access Journals (Sweden)
Mahdi Hasanzadeh Golshani
2015-08-01
In this thesis project, optimization of the initial blank shape of a twin elliptical cup to reduce the earring phenomenon in anisotropic sheet deep drawing was studied. The purpose of this study is optimization of the initial blank to reduce the ear height. The optimization was carried out using a finite element method approach coupled with Taguchi design of experiments and reduced basis technique methods. The deep drawing process was simulated in the FEM software ABAQUS 6.12. The results show that the earring height, as well as the number of design variables and the process time, can be reduced using these methods. After the optimization process, the maximum earring height was reduced from 21.08 mm to 0.07 mm, and to 0 in some of the directions. The proposed optimization design allows designers to select practical basis shapes, which leads to better results at the end of the optimization process, reduces the number of design variables, and avoids repeating the optimization steps for indirect shapes.
DEFF Research Database (Denmark)
Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen
2010-01-01
Both the efficient and weakly efficient sets of an affine fractional vector optimization problem are, in general, neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-01
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
Energy Technology Data Exchange (ETDEWEB)
Roehle, I.
1999-11-01
A Doppler Global Velocimeter (DGV) was set up in the frame of a PhD thesis. This velocimeter is optimized to carry out high-accuracy, three-component, time-averaged planar velocity measurements. The anemometer was successfully applied to wind tunnel and test rig flows, and the measurement accuracy was investigated. A volumetric data set of the flow field inside an industrial combustion chamber was measured; this data field contained about 400,000 vectors. DGV measurements in the intake of a jet engine model were carried out using a fibre-bundle borescope. The flow structure of the wake of a car model in a wind tunnel was investigated. The measurement accuracy of the DGV system is ±0.5 m/s when operated under ideal conditions. This study can serve as a basis for evaluating the use of DGV for aerodynamic development experiments.
Directory of Open Access Journals (Sweden)
Sulaymanova D.K.
2017-02-01
In this article, ways of optimizing and enhancing meat production in the Kyrgyz Republic are considered. On the basis of statistical data and economic-mathematical modelling, forecast calculations are performed for the years 2016–2020. Ways to increase the production volume of individual types of meat in the country are also considered.
Kruse, Holger; Grimme, Stefan
2012-04-21
chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 to a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the authors' website.
Cameras and settings for optimal image capture from UAVs
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload UAVs for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can make experiments difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image-quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
Energy Technology Data Exchange (ETDEWEB)
Witte, Jonathon [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Neaton, Jeffrey B. [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Physics, University of California, Berkeley, California 94720 (United States); Kavli Energy Nanosciences Institute at Berkeley, Berkeley, California 94720 (United States); Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)
2016-05-21
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen’s pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
Simon, Sílvia; Duran, Miquel
1997-08-01
Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and an insight on the changes of bond critical point properties, self-similarity values and density differences is performed.
Structural basis for inhibition of the histone chaperone activity of SET/TAF-Iβ by cytochrome c.
González-Arzola, Katiuska; Díaz-Moreno, Irene; Cano-González, Ana; Díaz-Quintana, Antonio; Velázquez-Campoy, Adrián; Moreno-Beltrán, Blas; López-Rivas, Abelardo; De la Rosa, Miguel A
2015-08-11
Chromatin is pivotal for regulation of the DNA damage process insofar as it influences access to DNA and serves as a DNA repair docking site. Recent works identify histone chaperones as key regulators of damaged chromatin's transcriptional activity. However, understanding how chaperones are modulated during DNA damage response is still challenging. This study reveals that the histone chaperone SET/TAF-Iβ interacts with cytochrome c following DNA damage. Specifically, cytochrome c is shown to be translocated into cell nuclei upon induction of DNA damage, but not upon stimulation of the death receptor or stress-induced pathways. Cytochrome c was found to competitively hinder binding of SET/TAF-Iβ to core histones, thereby locking its histone-binding domains and inhibiting its nucleosome assembly activity. In addition, we have used NMR spectroscopy, calorimetry, mutagenesis, and molecular docking to provide an insight into the structural features of the formation of the complex between cytochrome c and SET/TAF-Iβ. Overall, these findings establish a framework for understanding the molecular basis of cytochrome c-mediated blocking of SET/TAF-Iβ, which subsequently may facilitate the development of new drugs to silence the oncogenic effect of SET/TAF-Iβ's histone chaperone activity.
Zhao, Bin; Wang, Shuxiao; Donahue, Neil M; Chuang, Wayne; Hildebrandt Ruiz, Lea; Ng, Nga L; Wang, Yangjun; Hao, Jiming
2015-02-17
We evaluate the one-dimensional volatility basis set (1D-VBS) and two-dimensional volatility basis set (2D-VBS) in simulating the aging of SOA derived from toluene and α-pinene against smog-chamber experiments. If we simulate the first-generation products with empirical chamber fits and the subsequent aging chemistry with a 1D-VBS or a 2D-VBS, the models mostly overestimate the SOA concentrations in the toluene oxidation experiments. This is because the empirical chamber fits include both first-generation oxidation and aging; simulating aging in addition to this results in double counting of the initial aging effects. If the first-generation oxidation is treated explicitly, the base-case 2D-VBS underestimates the SOA concentrations and O:C increase of the toluene oxidation experiments; it generally underestimates the SOA concentrations and overestimates the O:C increase of the α-pinene experiments. With the first-generation oxidation treated explicitly, we could modify the 2D-VBS configuration individually for toluene and α-pinene to achieve good model-measurement agreement. However, we are unable to simulate the oxidation of both toluene and α-pinene with the same 2D-VBS configuration. We suggest that future models should implement parallel layers for anthropogenic (aromatic) and biogenic precursors, and that more modeling studies and laboratory research be done to optimize the "best-guess" parameters for each layer.
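The bookkeeping underlying a 1D-VBS is absorptive gas/particle partitioning over logarithmically spaced saturation-concentration bins. A minimal sketch of that standard calculation (the bin loadings below are illustrative assumptions, not the fitted parameters of the study):

```python
def vbs_partition(c_star, total_mass, c_oa_guess=1.0, tol=1e-9):
    """Equilibrium gas/particle partitioning for a 1-D volatility basis set.

    c_star:     saturation concentrations of the bins (ug/m^3)
    total_mass: total (gas + particle) organic mass in each bin (ug/m^3)

    Solves C_OA = sum_i m_i * (1 + C*_i / C_OA)^-1 by fixed-point iteration,
    where the bracketed term is the particle-phase fraction of bin i.
    """
    c_oa = c_oa_guess
    for _ in range(1000):
        new = sum(m / (1.0 + cs / c_oa) for cs, m in zip(c_star, total_mass))
        if abs(new - c_oa) < tol:
            return new
        c_oa = new
    return c_oa

# Illustrative 1-D VBS: bins at C* = 0.1 ... 1000 ug/m^3 (hypothetical loadings)
c_star = [0.1, 1.0, 10.0, 100.0, 1000.0]
mass = [0.5, 1.0, 2.0, 4.0, 8.0]
c_oa = vbs_partition(c_star, mass)
print(f"C_OA = {c_oa:.2f} ug/m3")
```

Aging chemistry in a 1D-VBS then amounts to moving mass between bins (typically one decade down in C* per oxidation step) before re-solving this equilibrium; the 2D-VBS adds an O:C axis on top of the same partitioning step.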
The 6-31B(d) basis set and the BMC-QCISD and BMC-CCSD multicoefficient correlation methods.
Lynch, Benjamin J; Zhao, Yan; Truhlar, Donald G
2005-03-03
Three new multicoefficient correlation methods (MCCMs) called BMC-QCISD, BMC-CCSD, and BMC-CCSD-C are optimized against 274 data that include atomization energies, electron affinities, ionization potentials, and reaction barrier heights. A new basis set called 6-31B(d) is developed and used as part of the new methods. BMC-QCISD has mean unsigned errors in calculating atomization energies per bond and barrier heights of 0.49 and 0.80 kcal/mol, respectively. BMC-CCSD has mean unsigned errors of 0.42 and 0.71 kcal/mol for the same two quantities. BMC-CCSD-C is an equally effective variant of BMC-CCSD that employs Cartesian rather than spherical harmonic basis sets. The mean unsigned error of BMC-CCSD or BMC-CCSD-C for atomization energies, barrier heights, ionization potentials, and electron affinities is 22% lower than G3SX(MP2) at an order of magnitude less cost for gradients for molecules with 9-13 atoms, and it scales better (N^6 vs. N^7, where N is the number of atoms) when the size of the molecule is increased.
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 E_h) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds, which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
Pole Shape Optimization of Permanent Magnet Synchronous Motors Using the Reduced Basis Technique
Directory of Open Access Journals (Sweden)
A. Jabbari
2010-03-01
In the present work, an integrated method of pole-shape design optimization for reducing torque pulsation components in permanent magnet synchronous motors is developed. A progressive design process is presented to find feasible optimal shapes. The method is applied to the pole-shape optimization of two prototype permanent magnet synchronous motors, i.e., 4-pole/6-slot and 4-pole/12-slot machines.
International Nuclear Information System (INIS)
Atif, Maimoon; Al-Sulaiman, Fahad A.
2015-01-01
Highlights: • A differential evolution optimization model was developed to optimize the heliostat field. • Five optical parameters were considered for the optimization of the optical efficiency. • Optimization approaches using insolation-weighted and un-weighted annual efficiency are developed. • The daily averaged annual optical efficiency was calculated to be 0.5023 while the monthly was 0.5025. • The insolation-weighted daily averaged annual efficiency was 0.5634. - Abstract: Optimization of a heliostat field is an essential task to make a solar central receiver system effective, because major optical losses are associated with the heliostat field. In this study, a mathematical model was developed to effectively optimize the heliostat field on an annual basis using differential evolution, which is an evolutionary algorithm. The heliostat field layout optimization is based on the calculation of five optical performance parameters: the mirror or heliostat reflectivity, the cosine factor, the atmospheric attenuation factor, the shadowing and blocking factor, and the intercept factor. The model calculates all the aforementioned performance parameters at every stage of the optimization, until the best heliostat field layout based on annual performance is obtained. Two different approaches were undertaken to optimize the heliostat field layout: one optimizing the insolation-weighted annual efficiency and the other optimizing the un-weighted annual efficiency. Moreover, an alternative approach was also proposed to efficiently optimize the heliostat field, in which the number of computational time steps was considerably reduced. It was observed that the daily averaged annual optical efficiency was calculated to be 0.5023, as compared to the monthly averaged annual optical efficiency of 0.5025. Moreover, the insolation-weighted daily averaged annual efficiency of the heliostat field was 0.5634 for Dhahran, Saudi Arabia. The code developed can be used for any other
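The per-heliostat figure of merit described above is simply the product of the five optical factors. A minimal sketch of that bookkeeping (the factor values are illustrative assumptions, not the paper's data; the attenuation correlation is one commonly published clear-day fit):

```python
import math

def heliostat_optical_efficiency(reflectivity, cos_theta, f_attenuation,
                                 f_shadow_block, f_intercept):
    """Instantaneous optical efficiency of one heliostat: the product of the
    five loss factors used in heliostat-field layout optimization."""
    return reflectivity * cos_theta * f_attenuation * f_shadow_block * f_intercept

def attenuation_factor(slant_range_m):
    # A widely used empirical clear-day attenuation fit (coefficients vary by
    # source; treat these as an assumption). Range converted to km.
    d = slant_range_m / 1000.0
    if d <= 1.0:
        return 0.99321 - 0.1176 * d + 0.0197 * d**2
    return math.exp(-0.1106 * d)

# Hypothetical heliostat 500 m from the receiver with a 30-degree cosine angle
eta = heliostat_optical_efficiency(
    reflectivity=0.90,
    cos_theta=math.cos(math.radians(30.0)),
    f_attenuation=attenuation_factor(500.0),
    f_shadow_block=0.95,
    f_intercept=0.97,
)
print(f"optical efficiency = {eta:.3f}")
```

An annual-efficiency objective, weighted or un-weighted by insolation, is then a time average of this product over the sun positions of the year for every heliostat in the candidate layout.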
Comparison of optimization of loading patterns on the basis of SA and PMA algorithms
International Nuclear Information System (INIS)
Beliczai, Botond
2007-01-01
Optimization of loading patterns is a very important task from an economic point of view in a nuclear power plant. The optimization algorithms used for this purpose fall basically into two categories: deterministic and stochastic. In the Paks nuclear power plant a deterministic optimization procedure is used to optimize the loading pattern at BOC, so that the core has maximal reactivity reserve. The group of stochastic optimization procedures mainly comprises simulated annealing (SA) procedures and genetic algorithms (GA). There are also newer procedures which try to combine the advantages of SA and GA; one of them is called the population mutation annealing algorithm (PMA). In the Paks NPP we would like to introduce fuel assemblies including burnable poison (Gd) in the near future. In order to find the optimal loading pattern (or near-optimal loading patterns) in that case, we have to optimize the core not only for objective functions defined at BOC, but at EOC as well. For this purpose I used stochastic algorithms (SA and PMA) to investigate loading pattern optimization results for different objective functions at BOC. (author)
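As a toy illustration of the SA branch discussed above, the following sketch anneals a permutation-type "loading pattern" against a stand-in objective. Everything below is an assumption for illustration; a real run would evaluate each candidate pattern with a core physics code (e.g. computing the reactivity reserve at BOC or EOC):

```python
import math
import random

def simulated_annealing(objective, pattern, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Minimize objective(pattern) over permutations using random swap moves
    and a geometric cooling schedule."""
    rng = random.Random(seed)
    best = current = list(pattern)
    f_cur = f_best = objective(current)
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(current)), 2)
        cand = list(current)
        cand[i], cand[j] = cand[j], cand[i]
        f_cand = objective(cand)
        # Metropolis criterion: always accept improvements, sometimes accept
        # worse candidates while the temperature is high.
        if f_cand < f_cur or rng.random() < math.exp((f_cur - f_cand) / t):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = list(current), f_cur
        t *= cooling
    return best, f_best

# Stand-in objective: penalize high-"burnup" assemblies placed near position 0.
burnup = [5, 1, 4, 2, 8, 3, 7, 6]
obj = lambda p: sum(burnup[a] / (pos + 1) for pos, a in enumerate(p))
best, f_best = simulated_annealing(obj, range(len(burnup)))
print(best, round(f_best, 3))
```

For this toy objective the optimum is the pattern that orders assemblies by increasing burnup, which the annealer reliably finds; the PMA variant mentioned in the abstract combines such annealing moves with GA-style population mutation.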
DEFF Research Database (Denmark)
Avery, John Scales; Rettrup, Sten; Avery, James Emil
automatically with computer techniques. The method has a wide range of applicability, and can be used to solve difficult eigenvalue problems in a number of fields. The book is of special interest to quantum theorists, computer scientists, computational chemists and applied mathematicians....
A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM
Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese
2011-04-01
In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.
International Nuclear Information System (INIS)
Yao, Y.X.; Wang, C.Z.; Ho, K.M.
2010-01-01
A chemical bonding scheme is presented for the analysis of solid-state systems. The scheme is based on the intrinsic oriented quasiatomic minimal-basis-set orbitals (IO-QUAMBOs) previously developed by Ivanic and Ruedenberg for molecular systems. In the solid-state scheme, IO-QUAMBOs are generated by a unitary transformation of the quasiatomic orbitals located at each site of the system with the criteria of maximizing the sum of the fourth power of interatomic orbital bond order. Possible bonding and antibonding characters are indicated by the single particle matrix elements, and can be further examined by the projected density of states. We demonstrate the method by applications to graphene and the (6,0) zigzag carbon nanotube. The oriented-orbital scheme automatically describes the system in terms of sp2 hybridization. The effect of curvature on the electronic structure of the zigzag carbon nanotube is also manifested in the deformation of the intrinsic oriented orbitals as well as a breaking of symmetry leading to nonzero single particle density matrix elements. In an additional study, the analysis is performed on the Al3V compound. The main covalent bonding characters are identified in a straightforward way without resorting to the symmetry analysis. Our method provides a general way for chemical bonding analysis of ab initio electronic structure calculations with any type of basis sets.
Pappas, Iosif
2016-01-01
PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task and frequently optimization-based design is preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, present close to optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...
A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization
Suguna, N.; Thanushkodi, K.
2010-01-01
Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, the feature selection task is involved in datasets containing huge numbers of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on Rough set theory hybridized with Bee Colony Optimization (BCO) in an attempt...
Rabli, Djamal; McCarroll, Ronald
2018-02-01
This review surveys the different theoretical approaches used to describe inelastic and rearrangement processes in collisions involving atoms and ions. For a range of energies from a few meV up to about 1 keV, the adiabatic representation is expected to be valid, and under these conditions inelastic and rearrangement processes take place via a network of avoided crossings of the potential energy curves of the collision system. In general, such avoided crossings are finite in number. The non-adiabatic coupling, due to the breakdown of the Born-Oppenheimer separation of the electronic and nuclear variables, depends on the ratio of the electron mass to the nuclear mass terms in the total Hamiltonian. By retaining terms in the total Hamiltonian correct to first order in the electron-to-nuclear mass ratio, a system of reaction coordinates is found which allows for a correct description of both inelastic and rearrangement channels. The connection between the use of reaction coordinates in the quantum description and the electron translation factors of the impact parameter approach is established. A major result is that only when reaction coordinates are used is it possible to introduce the notion of a minimal basis set. Such a set must include all avoided crossings, including both radial coupling and long-range Coriolis coupling, and only when reaction coordinates are used can such a basis set be considered complete. In particular, when the centre of nuclear mass is used as the centre of coordinates, rather than the correct reaction coordinates, it is shown that erroneous results are obtained. A few results to illustrate this important point are presented: one concerning a simple two-state Landau-Zener type avoided crossing, the other concerning a network of multiple crossings in a typical electron capture process involving a highly charged ion and a neutral atom.
Setting the optimal type of equipment to be adopted and the optimal time to replace it
Albici, Mihaela
2009-01-01
The mathematical models of equipment wear and replacement theory aim at deciding on the purchase of a certain equipment type, the optimal exploitation time of the equipment, the time and ways to replace or repair it or to ensure its spare parts, the equipment's performance in the context of technical progress, the opportunities to modernize it, etc.
DEFF Research Database (Denmark)
Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders
2013-01-01
This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally applied magnetic field. A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic cloak.
A multilevel, level-set method for optimizing eigenvalues in shape design problems
International Nuclear Information System (INIS)
Haber, E.
2004-01-01
In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation obtained by using inverse iteration. Second, we use a level set method, but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas, we obtain a robust and rapid method for the solution of the optimal design problem.
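The inverse-iteration idea mentioned above replaces a full eigensolve with a few linear solves per design step. A generic dense sketch of inverse iteration with a Rayleigh-quotient estimate (not the paper's multilevel constrained formulation; the matrix is an illustrative stand-in for a stiffness operator):

```python
import numpy as np

def inverse_iteration(A, shift=0.0, iters=50, tol=1e-12):
    """Approximate the eigenvalue of symmetric A closest to `shift` (and its
    eigenvector) by repeatedly solving (A - shift*I) y = x and normalizing."""
    n = A.shape[0]
    B = A - shift * np.eye(n)
    x = np.ones(n) / np.sqrt(n)
    lam = x @ A @ x
    for _ in range(iters):
        y = np.linalg.solve(B, x)   # one linear solve replaces an eigensolve
        x = y / np.linalg.norm(y)
        new_lam = x @ A @ x         # Rayleigh quotient estimate
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return lam, x

# Small SPD example; with shift=0 this converges to the smallest eigenvalue.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = inverse_iteration(A, shift=0.0)
print(round(lam, 6))
```

In a shape-design loop, the factorization of B can be reused across nearby designs, which is what makes the approximation cheap compared with re-solving the full eigenvalue problem at every iteration.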
Aerostructural Level Set Topology Optimization for a Common Research Model Wing
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2014-01-01
The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.
Optimization of ultrasonic arrays design and setting using a differential evolution
International Nuclear Information System (INIS)
Puel, B.; Chatillon, S.; Calmon, P.; Lesselier, D.
2011-01-01
Optimization of both the design and the settings of phased arrays can be difficult when performed manually via parametric studies. An optimization method based on an evolutionary algorithm and numerical simulation is proposed and evaluated. The Randomized Adaptive Differential Evolution algorithm has been adapted to meet the specificities of non-destructive testing applications. In particular, multi-objective problems are addressed through the implementation of the concept of Pareto-optimal sets of solutions. The algorithm has been implemented and connected to the ultrasonic simulation modules of the CIVA software, used as the forward model. The efficiency of the method is illustrated on two realistic cases of application: optimization of the position and delay laws of a flexible array inspecting a nozzle, considered as a mono-objective problem; and optimization of the design of a surrounded array and its delay laws, considered as a constrained bi-objective problem. (authors)
CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization
DEFF Research Database (Denmark)
Borges, Pedro Manuel F. C.
2000-01-01
This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...
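However the approximation set A is built, checking it against candidate points ultimately requires a Pareto-dominance filter over objective vectors. A minimal sketch of that filter (the scalarizing function D(A, z) itself is specific to the paper and not reproduced here; the candidate vectors are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical bi-objective candidates produced by single-objective subproblems
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 6)]
front = non_dominated(candidates)
print(sorted(front))
```

Each single-objective minimization in the CHESS scheme contributes one candidate; filtering the accumulated candidates like this yields the evolving approximation of the non-dominated set.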
Optimal Interest-Rate Setting in a Dynamic IS/AS Model
DEFF Research Database (Denmark)
Jensen, Henrik
2011-01-01
This note deals with interest-rate setting in a simple dynamic macroeconomic setting. The purpose is to present some basic and central properties of an optimal interest-rate rule. The model framework predates the New-Keynesian paradigm of the late 1990s and onwards (it is accordingly dubbed “Old...
van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée
1988-01-01
Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple
Utilization of reduced fuelling ripple set in ROP detector layout optimization
International Nuclear Information System (INIS)
Kastanya, Doddy
2012-01-01
Highlights: ► ADORE is an ROP detector layout optimization algorithm for CANDU reactors. ► The effect of using a reduced set of fuelling ripples in ADORE is assessed. ► Significant speedup can be realized by adopting this approach. ► The quality of the results is comparable to results from the full set of ripples. - Abstract: The ADORE (Alternative Detector layout Optimization for REgional overpower protection system) algorithm for performing the optimization of regional overpower protection (ROP) systems for CANDU® reactors has been recently developed. This algorithm utilizes the simulated annealing (SA) stochastic optimization technique to come up with an optimized detector layout for the ROP systems. For each history in the SA iteration where a particular detector layout is evaluated, the goodness of this detector layout is measured in terms of its trip set point value, which is obtained by performing a probabilistic trip set point calculation using the ROVER-F code. Since during each optimization execution thousands of candidate detector layouts are evaluated, the overall optimization process is time consuming. Since for each ROVER-F evaluation the number of fuelling ripples controls the execution time, reducing the number of fuelling ripples will reduce the overall execution time. This approach has been investigated and the results are presented in this paper. The challenge is to construct a set of representative fuelling ripples which will significantly speed up the optimization process while guaranteeing that the resulting detector layout has similar quality to the ones produced when the complete set of fuelling ripples is employed.
New STO(II)-3Gmag family basis sets for the calculation of molecular magnetic properties
Directory of Open Access Journals (Sweden)
Karina Kapusta
2015-10-01
An efficient approach for the construction of physically justified STO(II)-3Gmag family basis sets for the calculation of molecular magnetic properties has been proposed. The construction procedure takes into account the second order of perturbation theory for the magnetic-field case. The analytical form of the correction functions has been obtained using the closed representation of the Green functions via the solution of the nonhomogeneous Schrödinger equation for the model problem of a one-electron atom in an external uniform magnetic field. Their performance has been evaluated in DFT-level calculations carried out with a number of functionals. Test calculations of magnetic susceptibilities and ¹H nuclear magnetic shielding tensors demonstrated good agreement of the calculated values with the experimental data.
International Nuclear Information System (INIS)
Piacentino, A.; Cardona, F.
2008-01-01
The optimization of synthesis, design and operation in trigeneration systems for building applications is a quite complex task, due to the high number of decision variables, the presence of irregular heat, cooling and electric load profiles, and the variable electricity price. Consequently, computer-aided techniques are usually adopted to achieve the optimal solution, based either on iterative techniques, linear or non-linear programming, or evolutionary search. Large efforts have been made in improving algorithm efficiency, which have resulted in an increasingly rapid convergence to the optimal solution and in reduced calculation times; robust algorithms have also been formulated, assuming stochastic behaviour for energy loads and prices. This paper is based on the assumption that margins for improvement in the optimization of trigeneration systems still exist, and that exploiting them requires an in-depth understanding of the plant's energetic behaviour. Robustness in the optimization of trigeneration systems has more to do with 'correct and comprehensive' than with 'efficient' modelling, demanding larger efforts from energy specialists rather than from experts in efficient algorithms. With reference to a mixed integer linear programming model implemented in MatLab for a trigeneration system including a pressurized (medium temperature) heat storage, the relevant contributions of thermoeconomics and energo-environmental analysis in the phases of mathematical modelling and code testing are shown.
The optimization of heat supply centralization on the basis of boiler-rooms
International Nuclear Information System (INIS)
Arshakian, D.
1992-01-01
In this article the problem of finding the optimal degree of heat supply centralization for towns and industrial districts on the basis of boiler rooms, using organic and nuclear fuel, under the natural-climatic and town-building conditions of Armenia is considered. (orig.) [de
Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman
2008-04-24
We investigated the effect of several computational variables, including the choice of basis set, the application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and of the B97-1, OLYP, and TPSS functionals with the 6-31G and 6-31G* basis sets. Only the hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of basis set caused up to 2.7 kcal/mol variation in the quintet-triplet electronic energy gap (ΔEel), in several cases resulting in an inversion of the sign of ΔEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state when the electronic energy of the triplet state was approximately 1 kcal/mol below that of the quintet. Within a given model chemistry, the effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with the small 6-31G and 6-31G* basis sets. The deviation of the computed frequency of the Fe-Im stretching mode from the experimental value decreased with the basis set in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema of the metamodels and minima of a density function; progressively more accurate metamodels are thereby constructed. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
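The loop described above (fit a metamodel, locate its minimum, sample there, refit) can be sketched with a simple Gaussian RBF interpolant. This is a minimal one-dimensional illustration, not the authors' implementation: the kernel width eps, the ridge term, the test function and the grid search are all illustrative assumptions.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Fit a Gaussian-RBF model through 1-D samples (X, y)."""
    Phi = np.exp(-(eps * np.abs(X[:, None] - X[None, :])) ** 2)
    Phi += 1e-8 * np.eye(len(X))                # tiny ridge for numerical stability
    w = np.linalg.solve(Phi, y)
    return lambda x: np.exp(-(eps * np.abs(np.asarray(x)[:, None] - X[None, :])) ** 2) @ w

def sequential_sample(f, X, n_new=3, grid=None, eps=1.0):
    """Refit the metamodel and sample at its current minimum, repeatedly."""
    grid = np.linspace(0.0, 1.0, 201) if grid is None else grid
    y = f(X)
    for _ in range(n_new):
        model = rbf_fit(X, y, eps)
        cand = grid[~np.isin(grid, X)]           # avoid duplicate sample points
        x_new = cand[np.argmin(model(cand))]     # candidate: metamodel minimum
        X, y = np.append(X, x_new), np.append(y, f(np.array([x_new]))[0])
    return X, y

f = lambda x: (x - 0.3) ** 2                     # cheap stand-in for an expensive simulation
X, y = sequential_sample(f, np.array([0.0, 0.5, 1.0]))
print(X[3:])                                     # new samples cluster near the minimum at 0.3
```

In a real metamodel-based design study, f would be the expensive simulation and the candidate set would also include density-based points to keep the sampling space-filling.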
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values drawn from Gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
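For Poisson spike counts, the optimal sequential test reduces to accumulating a count-weighted log-rate ratio, because the rate terms cancel when the two hypotheses simply swap the same pair of rates. The sketch below illustrates this reduction only; it is not the authors' neural-circuit model, and the rates, bin width and threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
r_hi, r_lo, dt = 40.0, 20.0, 0.01      # firing rates (Hz) and bin width (s)
theta = 3.0                            # decision threshold on the log-likelihood ratio

def sprt(counts_a, counts_b):
    """Accumulate the Poisson log-likelihood ratio for H1 (A fires at r_hi, B at r_lo)
    versus H2 (rates swapped); stop when |LLR| crosses theta."""
    llr = 0.0
    for na, nb in zip(counts_a, counts_b):
        # Per-bin Poisson LLR; the (rate * dt) terms cancel between the two hypotheses.
        llr += (na - nb) * np.log(r_hi / r_lo)
        if llr >= theta:
            return 1
        if llr <= -theta:
            return 2
    return 0  # undecided within the observation window

# Simulate 2 s of spiking with neuron A truly firing at the high rate.
n_bins = 200
counts_a = rng.poisson(r_hi * dt, n_bins)
counts_b = rng.poisson(r_lo * dt, n_bins)
print(sprt(counts_a, counts_b))        # typically 1: A identified as the high-rate neuron
```

With threshold theta, the classical SPRT error bound is roughly exp(-theta), so raising theta trades decision speed for accuracy.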
International Nuclear Information System (INIS)
Kim, D.S.; Seong, P.H.
1994-01-01
This paper describes the optimal testing input sets required for fault diagnosis of nuclear power plant digital electronic circuits. For complicated systems such as very large scale integration (VLSI) circuits, nuclear power plants (NPPs), and aircraft, testing is the major factor in system maintenance, and diagnosis time grows quickly with the complexity of the component. In this research, to reduce diagnosis time, the authors derived optimal testing sets, i.e., the minimal testing sets required for detecting a failure and locating the failed component. Among many conventional approaches to testing set generation, the technique presented by Hayes fits this goal best. However, that method has two disadvantages: (a) it considers only simple networks, and (b) it determines only whether the system is in a failed state and does not provide a way to locate the failed component. The authors therefore derived optimal testing input sets that resolve these problems while preserving the advantages of Hayes' method. When the optimal testing sets were applied to the automatic fault diagnosis system (AFDS), which incorporates an advanced artificial-intelligence fault diagnosis method, testing of the digital electronic circuits was much faster than with exhaustive testing input sets; when they were applied to testing the Universal (UV) Card, a nuclear power plant digital input/output solid state protection system card, the testing time was reduced by a factor of up to about 100
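The notion of a minimal testing set can be illustrated with a greedy set-cover selection over a hypothetical fault dictionary mapping each test input to the faults it detects. This is neither Hayes' technique nor the authors' derivation, just a common baseline heuristic for the same goal:

```python
# Hypothetical fault dictionary: test input -> set of faults it detects.
fault_dict = {
    "t1": {"f1", "f2"},
    "t2": {"f2", "f3", "f4"},
    "t3": {"f1", "f4"},
    "t4": {"f5"},
}
all_faults = set().union(*fault_dict.values())

def greedy_test_set(fault_dict, targets):
    """Greedy set cover: repeatedly pick the test detecting the most
    still-uncovered faults, until every target fault is detectable."""
    uncovered, chosen = set(targets), []
    while uncovered:
        best = max(fault_dict, key=lambda t: len(fault_dict[t] & uncovered))
        if not fault_dict[best] & uncovered:
            raise ValueError("some faults are undetectable by any test")
        chosen.append(best)
        uncovered -= fault_dict[best]
    return chosen

print(greedy_test_set(fault_dict, all_faults))  # → ['t2', 't1', 't4']
```

Greedy cover is not guaranteed minimal in general, but it shows why a small, well-chosen test set can replace exhaustive testing for fault detection.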
International Nuclear Information System (INIS)
Kovrizhkin, Yu.L.; Skalozubov, V.I.; Kochneva, V.Yu.
2009-01-01
The main results of developments aimed at increasing the production efficiency of NPPs with WWER reactors through optimized planning of maintenance, testing and monitoring of equipment and systems are presented. Attention is paid to metal inspection during the maintenance period of a power unit. The methods for realizing the concept of transition to condition-based repair are also presented
Optimal combination of marketing instruments as a basis for tourist destination strategic management
Zupanovic, Ivo
2007-01-01
The marketing mix concept is understood as a certain combination of the instruments that form a tourist offer. The marketing mix enables the integration of marketing activities in order to satisfy the needs of tourist clients, but also to achieve the business goals of tourism enterprises. Optimizing the marketing instruments is necessary when adapting to particular tourist segments. A suitable marketing mix is not just a simple sum of different instruments but an optimal combination of certain ...
OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS
Directory of Open Access Journals (Sweden)
G. M. Levin
2016-01-01
A mathematical model and a method for the problem of optimizing the aggregation and the sequential-parallel execution modes of intersecting operation sets are proposed. The proposed method is based on a two-level decomposition scheme: at the top level the aggregation variant for groups of operations is selected, and at the lower level the execution modes of the operations are optimized for a fixed aggregation variant.
The FERMI (at) Elettra Technical Optimization Study: Preliminary Parameter Set and Initial Studies
International Nuclear Information System (INIS)
Byrd, John; Corlett, John; Doolittle, Larry; Fawley, William; Lidia, Steven; Penn, Gregory; Ratti, Alex; Staples, John; Wilcox, Russell; Wurtele, Jonathan; Zholents, Alexander
2005-01-01
The goal of the FERMI (at) Elettra Technical Optimization Study is to produce a machine design and layout consistent with user needs for radiation in the approximate ranges 100 nm to 40 nm and 40 nm to 10 nm, using seeded FELs. The Study will involve collaboration between Italian and US physicists and engineers, and will form the basis for the engineering design and cost estimation
Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding
Directory of Open Access Journals (Sweden)
Hudan Studiawan
2010-11-01
Image thresholding is one of the preprocessing techniques used to provide a high quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor thresholding output. By treating an image as a fuzzy set, several different fuzzy thresholding techniques have been proposed to overcome these obstacles during threshold selection. In this paper, we propose an algorithm for image thresholding that uses ultrafuzziness optimization to decrease the uncertainty of ordinary fuzzy sets by means of type II fuzzy sets. Optimization is conducted by measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed image thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.
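A minimal sketch of this kind of threshold selection: for each candidate threshold, build a primary membership from each gray level's distance to its class mean, bound it with the type II memberships mu**(1/alpha) and mu**alpha, and score the threshold by the histogram-weighted area between the bounds (the ultrafuzziness). The membership construction and the optimization direction (minimization here) are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def threshold_ultrafuzziness(img, alpha=2.0):
    """Pick the gray level that minimizes ultrafuzziness: the histogram-weighted
    area between the upper (mu**(1/alpha)) and lower (mu**alpha) type II bounds."""
    levels = np.arange(256)
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    best_t, best_u = 0, np.inf
    for t in levels[1:-1]:
        m0 = (levels[:t+1] * hist[:t+1]).sum() / max(hist[:t+1].sum(), 1e-12)  # background mean
        m1 = (levels[t+1:] * hist[t+1:]).sum() / max(hist[t+1:].sum(), 1e-12)  # object mean
        ref = np.where(levels <= t, m0, m1)
        mu = 1.0 / (1.0 + np.abs(levels - ref) / 255.0)       # primary membership in [0.5, 1]
        u = (hist * (mu ** (1 / alpha) - mu ** alpha)).sum()  # ultrafuzziness at threshold t
        if u < best_u:
            best_t, best_u = t, u
    return best_t

# Synthetic bimodal image: background around gray 60, object around gray 180.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 8, 500), rng.normal(180, 8, 500)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(20, 50)
print(threshold_ultrafuzziness(img))   # a threshold between the two modes
```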
Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli
2013-03-01
Cognitive radio (CR) is an intelligent wireless communication system that can dynamically adjust its parameters to improve system performance depending on environmental changes and quality of service requirements. The core technology of CR is the design of the cognitive engine, which introduces reasoning and learning methods from the field of artificial intelligence to achieve perception, adaptation and learning capabilities. Considering the dynamic wireless environment and demands, this paper proposes a design of a cognitive engine based on rough sets (RS) and a radial basis function neural network (RBF_NN). The method uses experienced knowledge and environment information processed by the RS module to train the RBF_NN, and the learned model is then used to reconfigure communication parameters so as to allocate resources rationally and improve system performance. After training the learning model, the performance is evaluated on two benchmark functions. The simulation results demonstrate the effectiveness of the model, and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.
Morgan, W James; Matthews, Devin A; Ringholm, Magnus; Agarwal, Jay; Gong, Justin Z; Ruud, Kenneth; Allen, Wesley D; Stanton, John F; Schaefer, Henry F
2018-03-13
Geometric energy derivatives which rely on core-corrected focal-point energies extrapolated to the complete basis set (CBS) limit of coupled cluster theory with iterative and noniterative quadruple excitations, CCSDTQ and CCSDT(Q), are used as elements of molecular gradients and, in the case of CCSDT(Q), expansion coefficients of an anharmonic force field. These gradients are used to determine the CCSDTQ/CBS and CCSDT(Q)/CBS equilibrium structure of the S0 ground state of H2CO, where excellent agreement is observed with previous work and experimentally derived results. A fourth-order expansion about this CCSDT(Q)/CBS reference geometry using the same level of theory produces an exceptional level of agreement with spectroscopically observed vibrational band origins, with a MAE of 0.57 cm^-1. Second-order vibrational perturbation theory (VPT2) and variational discrete variable representation (DVR) results are contrasted and discussed. Vibration-rotation, anharmonicity, and centrifugal distortion constants from the VPT2 analysis are reported and compared to previous work. Additionally, an initial application of a sum-over-states fourth-order vibrational perturbation theory (VPT4) formalism is employed herein, utilizing quintic and sextic derivatives obtained with a recursive algorithmic approach for response theory.
Directory of Open Access Journals (Sweden)
HU Qi-guo
2017-01-01
For reducing low-frequency noise in the vehicle compartment, the optimal Latin hypercube sampling method was applied to the experimental design for sampling in the factorial design space. The thickness parameters of the panels with the largest acoustic contributions were taken as factors, and the vehicle mass, the seventh body modal frequency, the peak sound pressure at the test point and the root-mean-square sound pressure as responses. Using the RBF (radial basis function) neural network method, an approximation model of the four responses over the six factors was established, and an error analysis of the approximation model was performed. To optimize the panel thickness parameters, the adaptive simulated annealing algorithm was implemented. Optimization results show that the peak sound pressure at the driver's head was reduced by 4.45 dB and 5.47 dB at 158 Hz and 134 Hz, respectively; the test point sound pressure was significantly reduced at other frequencies as well. The results indicate that through the optimization the vehicle interior cavity noise was reduced effectively, and the acoustic comfort of the vehicle was improved significantly.
Sensitivity of the optimal parameter settings for a LTE packet scheduler
Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.
Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various
A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization
Directory of Open Access Journals (Sweden)
Lizhi Cui
2014-01-01
This paper proposes a separation method for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO). First, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principles. Then, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters into scalar values indicating the fitness of all parameters. Third, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. The Particle Swarm Optimization (PSO) algorithm is then adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
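The PSO stage of such a pipeline can be sketched as a standard global-best particle swarm minimizer. The sphere function below is a stand-in for the GRCM fitness, and the hyperparameters (inertia w, pulls c1 and c2) are assumed values, not those of the paper.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Global-best PSO: inertia plus cognitive (pbest) and social (gbest) pulls."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Sphere function as a stand-in for the GRCM fitness.
best_x, best_f = pso(lambda p: float((p ** 2).sum()), dim=3)
print(best_f)   # close to 0
```

In the GRCM-PSO setting, the particle position would hold the reference-curve parameters and the fitness would be the GRCM score rather than the sphere function.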
Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
Abstract: The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives cannot be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust
Blankman, P; Groot Jebbink, E; Preis, C; Bikker, I.; Gommers, D.
2012-01-01
INTRODUCTION. Electrical Impedance Tomography (EIT) is a non-invasive imaging technique, which can be used to visualize ventilation. Ventilation will be measured by impedance changes due to ventilation. OBJECTIVES. The aim of this study was to optimize PEEP settings based on the ventilation area of
Energy Technology Data Exchange (ETDEWEB)
Keller, J.; Blarigan, P. Van [Sandia National Labs., Livermore, CA (United States)
1998-08-01
In this manuscript the authors report on two projects, the goal of each being to produce cost-effective hydrogen utilization technologies. These projects are: (1) the development of an electrical generation system using a conventional four-stroke spark-ignited internal combustion engine-generator combination (SI-GenSet) optimized for maximum efficiency and minimum emissions, and (2) the development of a novel internal combustion engine concept. The SI-GenSet will be optimized to run on either hydrogen or hydrogen blends. The novel concept seeks to develop an engine that optimizes the Otto cycle in a free-piston configuration while minimizing all emissions. To this end the authors are developing a rapid-combustion homogeneous charge compression ignition (HCCI) engine using a linear alternator for both power take-off and engine control. Targeted applications include stationary electrical power generation, stationary shaft power generation, hybrid vehicles, and nearly any other application now being accomplished with internal combustion engines.
Physiological basis of barley yield under near optimal and stress conditions
Directory of Open Access Journals (Sweden)
Pržulj Novo
2004-01-01
Average barley yields fall below the potential due to the incidence of stresses, water stress being the main environmental factor limiting yield. The component a priori most sensitive to stress is the amount of radiation absorbed: stresses reduce both the total amount of radiation absorbed by the barley crop during its vegetation period and the photosynthetic efficiency of radiation conversion. Growth inhibition is accompanied by reductions in leaf and cell wall extensibility. Grain yield under drought conditions is source limited, and the supply of assimilates to the developing inflorescence plays a critical role in establishing final grain number and grain size. Grain weight is negatively affected by drought, high temperature, and any other factor that reduces grain filling duration or grain filling rate. Awns and glaucousness confer better performance on barley under drought stress. Barley responds with increased accumulation of a number of proteins when subjected to different stresses inducing cell dehydration. Screening techniques able to identify desirable genotypes through the evaluation of physiological traits related to stress evasion and stress resistance may be useful in breeding barley for resistance to stress, particularly drought. Crop management and breeding can reduce the incidence of stress on yield, and the effect of these practices is sustained by an understanding of their physiology. In this paper the physiological basis of the processes determining barley yield and the incidence of stresses on the photosynthetic metabolism that determines grain yield is discussed.
Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun
2017-08-07
This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this class and yet can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To accommodate the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure; problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. In addition, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
Wang, Yang; Wu, Lin
2018-07-01
Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for Multi-view spectral clustering, which elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, to yield a better graph partition than their single-view counterparts. In this paper we revisit it with a fundamentally different perspective by discovering LRR as essentially a latent clustered orthogonal projection based representation winged with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis orthogonal to others to indicate its members, which intuitively projects the view-specific feature representation to be the one spanned by all orthogonal basis to characterize the cluster structures. Upon this finding, we propose our technique with the following: (1) We decompose LRR into latent clustered orthogonal representation via low-rank matrix factorization, to encode the more flexible cluster structures than LRR over primal data objects; (2) We convert the problem of LRR into that of simultaneously learning orthogonal clustered representation and optimized local graph structure for each view; (3) The learned orthogonal clustered representations and local graph structures enjoy the same magnitude for multi-view, so that the ideal multi-view consensus can be readily achieved. The experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models. Copyright © 2018 Elsevier Ltd. All rights reserved.
Trace element analysis in an optimized set-up for total reflection PIXE (TPIXE)
International Nuclear Information System (INIS)
Van Kan, J.A.; Vis, R.D.
1996-01-01
A newly constructed chamber for measurements with MeV proton beams at small incidence angles (0 to 35 mrad) is used to analyse trace elements on flat surfaces such as Si wafers, quartz substrates and perspex. The set-up is constructed in such a way that the X-ray detector can reach very large solid angles, larger than 1 sr. Using these large solid angles in combination with the reduction of the bremsstrahlung background, lower limits of detection (LODs) can be obtained with TPIXE than with PIXE in the conventional geometry. Standard solutions, containing traces of As and Sr at concentrations down to 20 ppb in an insulin solution, are used to determine the LODs obtainable with TPIXE in the optimized set-up. The limits of detection found are compared with earlier ones obtained with TPIXE in a non-optimized set-up and with TXRF results. (author)
International Nuclear Information System (INIS)
Huang, Yanjun; Khajepour, Amir; Ding, Haitao; Bagheri, Farshid; Bahrami, Majid
2017-01-01
Highlights: • A novel two-layer energy-saving controller for automotive A/C-R systems is developed. • A set-point optimizer in the outer loop is designed based on the steady-state model. • A sliding mode controller in the inner loop is built. • Extensive experimental studies show that about 9% of the energy can be saved by this controller. - Abstract: This paper presents an energy-saving controller for automotive air-conditioning/refrigeration (A/C-R) systems. With their extensive application in homes, industry, and vehicles, A/C-R systems consume considerable amounts of energy. The proposed controller consists of two layers operating on different time scales. The outer, slow time-scale layer, called a set-point optimizer, finds energy-efficient set points using the steady-state model, whereas the inner, fast time-scale layer tracks the obtained set points. In the inner loop, thanks to its robustness, a sliding mode controller (SMC) is utilized to track the set point of the cargo temperature. The currently used on/off controller is presented and employed as a basis for comparison with the proposed controller. More importantly, real experimental results under several disturbance scenarios are analysed to demonstrate how the proposed controller improves performance while reducing energy consumption by 9% compared with the on/off controller. The controller is suitable for any type of A/C-R system, even though it is applied to an automotive A/C-R system in this paper.
DEFF Research Database (Denmark)
Kupka, Teobald; Stachów, Michal; Kaminsky, Jakub
2013-01-01
A linear correlation between isotropic nuclear magnetic shielding constants for seven model molecules (CH2O, H2O, HF, F2, HCN, SiH4 and H2S) calculated with 37 methods (34 density functionals, RHF, MP2 and CCSD(T)) with the affordable pcS-2 basis set and the corresponding complete basis set (CBS) results, estimated from calculations with the family of polarization-consistent pcS-n basis sets, is reported. This dependence was also supported by inspection of the profiles of deviation between the CBS-estimated nuclear shieldings and those obtained with the significantly smaller basis sets pcS-2 and aug-cc-pVTZ-J for the selected set of 37 calculation methods. It was possible to formulate a practical approach for estimating the values of isotropic nuclear magnetic shielding constants at the CCSD(T)/CBS and MP2/CBS levels from affordable CCSD(T)/pcS-2, MP2/pcS-2 and DFT/CBS calculations with pcS-n basis sets. The proposed method...
Cameron, David; Ubels, Jasper; Norström, Fredrik
2018-01-01
The amount a government should be willing to invest in adopting new medical treatments has long been under debate. With many countries using formal cost-effectiveness (C/E) thresholds when examining potential new treatments and ever-growing medical costs, accurately setting the level of a C/E threshold can be essential for an efficient healthcare system. The aim of this systematic review is to describe the prominent approaches to setting a C/E threshold, compile available national-level C/E threshold data and willingness-to-pay (WTP) data, and to discern whether associations exist between these values, gross domestic product (GDP) and health-adjusted life expectancy (HALE). This review further examines current obstacles faced with the presently available data. A systematic review was performed to collect articles which have studied national C/E thresholds and willingness-to-pay (WTP) per quality-adjusted life year (QALY) in the general population. Associations between GDP, HALE, WTP, and C/E thresholds were analyzed with correlations. Seventeen countries were identified from nine unique sources to have formal C/E thresholds within our inclusion criteria. Thirteen countries from nine sources were identified to have WTP per QALY data within our inclusion criteria. Two possible associations were identified: C/E thresholds with HALE (quadratic correlation of 0.63), and C/E thresholds with GDP per capita (polynomial correlation of 0.84). However, these results are based on few observations and therefore firm conclusions cannot be made. Most national C/E thresholds identified in our review fall within the WHO's recommended range of one-to-three times GDP per capita. However, the quality and quantity of data available regarding national average WTP per QALY, opportunity costs, and C/E thresholds is poor in comparison to the importance of adequate investment in healthcare. There exists an obvious risk that countries might either over- or underinvest in healthcare if they
Ramírez-Solís, A; Zicovich-Wilson, C M; Hernández-Lamoneda, R; Ochoa-Calle, A J
2017-01-25
The question of the non-magnetic (NM) vs. antiferromagnetic (AF) nature of the ε phase of solid oxygen is a matter of great interest and continuing debate. In particular, it has been proposed that the ε phase is actually composed of two phases, a low-pressure AF ε1 phase and a higher pressure NM ε0 phase [Crespo et al., Proc. Natl. Acad. Sci. U. S. A., 2014, 111, 10427]. We address this problem through periodic spin-restricted and spin-polarized Kohn-Sham density functional theory calculations at pressures from 10 to 50 GPa using calibrated GGA and hybrid exchange-correlation functionals with Gaussian atomic basis sets. The two possible configurations for the antiferromagnetic (AF1 and AF2) coupling of the 0 ≤ S ≤ 1 O2 molecules in the (O2)4 unit cell were studied. Full enthalpy-driven geometry optimizations of the (O2)4 unit cells were done to study the pressure evolution of the enthalpy difference between the non-magnetic and both antiferromagnetic structures. We also address the evolution of structural parameters and the spin per molecule vs. pressure. We find that the spin-less solution becomes more stable than both AF structures above 50 GPa and, crucially, the spin-less solution yields lattice parameters in much better agreement with experimental data at all pressures than the AF structures. The optimized AF2 broken-symmetry structures lead to large errors in the a and b lattice parameters when compared with experiments. The results for the NM model are in much better agreement with the experimental data than those found for both AF models and are consistent with a completely non-magnetic (O2)4 unit cell for the low-pressure regime of the ε phase.
Energy Technology Data Exchange (ETDEWEB)
Urquhart, B.; Sengupta, M.; Keller, J.
2012-09-01
A multi-objective optimization was performed to allocate 2 MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability in the form of ramp rates was minimized. This resulted in an optimal solution set which provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while only decreasing annual energy generation by 3% relative to the maximum-generation allocation. To quantify the selected allotment mix, a metric called the ramp ratio was developed, which compares the ramping magnitude when all capacity is allotted to a single location with the aggregate ramping magnitude in a distributed scenario. The ramp ratio simultaneously quantifies how much smoothing a distributed scenario would experience over single-site allotment and how much a single site is being under-utilized in its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
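The ramp ratio described above, single-site ramping magnitude relative to the aggregate ramping of a distributed mix, can be sketched as follows; the synthetic clear-sky profile, the multiplicative cloud-noise model and the total-variation measure of ramping are illustrative assumptions, not the paper's data or exact definition.

```python
import numpy as np

def ramp_ratio(single_site_power, distributed_power):
    """Ramp ratio as described: total ramping magnitude when all capacity sits at
    one site, divided by the aggregate ramping magnitude of the distributed mix.
    (The paper's exact normalization may differ; this is an illustrative form.)"""
    ramp_single = np.abs(np.diff(single_site_power)).sum()
    ramp_dist = np.abs(np.diff(distributed_power)).sum()
    return ramp_single / ramp_dist

t = np.linspace(0, 2 * np.pi, 500)
rng = np.random.default_rng(0)
clear_sky = np.clip(np.sin(t), 0, None)
# Same total capacity (2 units): one noisy site vs. four sites with independent cloud noise.
single = 2.0 * clear_sky * (1 + 0.3 * rng.standard_normal(t.size))
sites = [0.5 * clear_sky * (1 + 0.3 * rng.standard_normal(t.size)) for _ in range(4)]
distributed = np.sum(sites, axis=0)
print(ramp_ratio(single, distributed))  # > 1: geographic distribution smooths ramps
```

A ratio well above 1 indicates the distributed mix experiences substantially less aggregate ramping than concentrating all capacity at one site.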
Topology optimization in acoustics and elasto-acoustics via a level-set method
Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.
2018-04-01
Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary condition optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.
Energy Technology Data Exchange (ETDEWEB)
Boermans, T.; Bettgenhaeuser, K.; Hermelink, A.; Schimschar, S. [Ecofys, Utrecht (Netherlands)]
2011-05-15
On the European level, the principles for the requirements for the energy performance of buildings are set by the Energy Performance of Buildings Directive (EPBD). Dating from December 2002, the EPBD has set a common framework from which the individual Member States in the EU developed or adapted their individual national regulations. The EPBD underwent a recast procedure in 2008 and 2009, with final political agreement reached in November 2009. The new Directive was then formally adopted on May 19, 2010. Among other clarifications and new provisions, the EPBD recast introduces a benchmarking mechanism for national energy performance requirements, for the purpose of determining cost-optimal levels to be used by Member States for comparing and setting these requirements. The previous EPBD set out a general framework to assess the energy performance of buildings and required Member States to define maximum values for energy delivered to meet the energy demand associated with the standardised use of the building. However, it did not contain requirements or guidance related to the ambition level of such requirements. As a consequence, building regulations in the various Member States have been developed using different approaches (influenced by different building traditions, political processes and individual market conditions), resulting in different ambition levels, where in many cases cost optimality principles could justify higher ambitions. The EPBD recast now requires Member States to ensure that minimum energy performance requirements for buildings are set 'with a view to achieving cost-optimal levels'. The cost-optimal level shall be calculated in accordance with a comparative methodology. The objective of this report is to contribute to the ongoing discussion in Europe around the details of such a methodology by describing possible details on how to calculate cost optimal levels and pointing towards important factors and
Security Optimization for Distributed Applications Oriented on Very Large Data Sets
Directory of Open Access Journals (Sweden)
Mihai DOINEA
2010-01-01
The paper presents the main characteristics of applications that work with very large data sets and the related security issues. The first section addresses the optimization process and how it is approached when dealing with security. The second section describes the concept of very large data set management, while in the third section the related risks are identified and classified. Finally, a security optimization schema is presented together with a cost-efficiency analysis of its feasibility. Conclusions are drawn and future approaches are identified.
Wang, Feng; Pang, Wenning; Duffy, Patrick
2012-12-01
Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP and X3LYP, as well as the Hartree-Fock (HF) method) has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained from a Fourier transform into momentum space from single point electronic calculations employing the above models, are compared with experimental measurement of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions for this orbital occur in the small momentum region (i.e. large r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to newer models such as X3LYP and SAOP. The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the
SETTING OF TASK OF OPTIMIZATION OF THE ACTIVITY OF A MACHINE-BUILDING CLUSTER COMPANY
Directory of Open Access Journals (Sweden)
A. V. Romanenko
2014-01-01
The work develops methodological approaches to the management of a machine-building enterprise on the basis of cost reduction, optimization of the order portfolio, and capacity utilization in operational management. The economic efficiency of such entities in the real sector of the economy is determined by, among other things, the timing of orders, which depends on the configuration of the production facility and on maintaining fixed assets at a given level. The key components of an economic-mathematical model of production activity are formulated and an optimization criterion is defined. The proposed criterion is a formula for accumulated profit that accounts for production capacity and technology, current direct variable costs, the amount of property tax, and the expenses arising from variances when production tasks are replaced within a single time period. The main component of optimizing the production activity of the enterprise on the basis of this criterion is the vector of direct variable costs. It depends on the number of product types in the current order portfolio, the production schedules, the normative time for the release of a particular product, the available time fund of production positions, the current valuation of certain groups of technological operations, and the current priority of operations by the degree of readiness of internal orders. Modeling production activity on the basis of the proposed provisions would allow enterprises of a machine-building cluster that pursue active innovation to improve the efficient use of available production resources by optimizing current operations under high uncertainty in demand planning and in carrying out maintenance and routine repairs.
A Binary Cat Swarm Optimization Algorithm for the Non-Unicost Set Covering Problem
Directory of Open Access Journals (Sweden)
Broderick Crawford
2015-01-01
The Set Covering Problem consists in finding a subset of columns in a zero-one matrix such that they cover all the rows of the matrix at minimum cost. To solve the Set Covering Problem we use a metaheuristic called Binary Cat Swarm Optimization, a recent swarm technique based on cat behavior. Domestic cats show the ability to hunt and are curious about moving objects; based on this, the cats have two modes of behavior: seeking mode and tracing mode. We are the first to use this metaheuristic to solve this problem; our algorithm solves a set of 65 Set Covering Problem instances from OR-Library.
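Whatever metaheuristic is used, a Set Covering candidate must be evaluated for feasibility and cost; this is the fitness function a binary algorithm such as Binary Cat Swarm Optimization repeatedly calls. A generic sketch, not the paper's implementation:

```python
def is_feasible(matrix, selected):
    """matrix[i][j] == 1 if column j covers row i; selected is a 0/1 vector.
    Feasible when every row is covered by at least one selected column."""
    return all(any(row[j] and selected[j] for j in range(len(selected)))
               for row in matrix)

def cover_cost(costs, selected):
    """Total cost of the selected columns."""
    return sum(c for c, s in zip(costs, selected) if s)

# Toy instance: 3 rows, 3 columns.
matrix = [[1, 0, 1],
          [0, 1, 0],
          [1, 1, 0]]
costs = [3, 2, 4]
candidate = [1, 1, 0]       # select columns 0 and 1
print(is_feasible(matrix, candidate), cover_cost(costs, candidate))
```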
Optimal wage setting for an export oriented firm under labor taxes and labor mobility
Directory of Open Access Journals (Sweden)
Raúl Ponce Rodríguez
2005-01-01
In this paper a theoretical model is developed to study the incentives that a labor tax might induce in the optimal wage setting of an export-oriented firm. In particular, we analyze a labor tax that tends to reduce the wage, because the firm is induced to shift the tax burden backwards to its employees, minimizing the increase in payroll costs and the fall in profits. However, a lower wage might not be an optimal response to the establishment of a labor tax, because it increases labor turnover; as a result the firm faces both an output opportunity cost and a labor turnover cost. The firm thus optimally responds to the qualification and labor taxes by increasing the after-tax wage.
International Nuclear Information System (INIS)
Lin Lin; Lu Jianfeng; Ying Lexing; Weinan, E
2012-01-01
Kohn–Sham density functional theory is one of the most widely used electronic structure theories. In the pseudopotential framework, uniform discretization of the Kohn–Sham Hamiltonian generally results in a large number of basis functions per atom in order to resolve the rapid oscillations of the Kohn–Sham orbitals around the nuclei. Previous attempts to reduce the number of basis functions per atom include the usage of atomic orbitals and similar objects, but the atomic orbitals generally require fine tuning in order to reach high accuracy. We present a novel discretization scheme that adaptively and systematically builds the rapid oscillations of the Kohn–Sham orbitals around the nuclei as well as environmental effects into the basis functions. The resulting basis functions are localized in the real space, and are discontinuous in the global domain. The continuous Kohn–Sham orbitals and the electron density are evaluated from the discontinuous basis functions using the discontinuous Galerkin (DG) framework. Our method is implemented in parallel and the current implementation is able to handle systems with at least thousands of atoms. Numerical examples indicate that our method can reach very high accuracy (less than 1 meV) with a very small number (4–40) of basis functions per atom.
A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks
De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio
2016-05-01
This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
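The improvisation loop the abstract describes (memory consideration, pitch adjustment, random selection) can be sketched on a toy one-dimensional problem. Parameter names (HMCR, PAR) follow common harmony search usage; this is a minimal illustration, not the authors' EPANET-coupled model.

```python
import random

def harmony_search(f, lo, hi, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    """Minimize f on [lo, hi] with a minimal harmony search."""
    rng = random.Random(seed)
    memory = [rng.uniform(lo, hi) for _ in range(hms)]   # harmony memory
    for _ in range(iters):
        if rng.random() < hmcr:                  # memory consideration
            x = rng.choice(memory)
            if rng.random() < par:               # pitch adjustment
                x = min(hi, max(lo, x + rng.uniform(-0.1, 0.1)))
        else:                                    # random improvisation
            x = rng.uniform(lo, hi)
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(x) < f(memory[worst]):              # replace worst harmony
            memory[worst] = x
    return min(memory, key=f)

best = harmony_search(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
print(best)
```

In a valve-setting application, the harmony vector would hold one pressure setting per valve, and f would call the hydraulic simulator and add penalties for constraint violations, as the abstract outlines.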
Directory of Open Access Journals (Sweden)
Ali Gorener
2013-04-01
The location selection problem in banking is an important issue for commercial success in a competitive environment. There is a strategic fit between the location selection decision and the overall performance of a new branch. Providing physical service at the requested location, as well as alternative distribution channels, to meet the needs of profitable clients is the current challenge in achieving competitive advantage over rivals in the financial system. In this paper, an integrated model has been developed to support the decision of branch location selection for a new bank branch. The Analytic Hierarchy Process (AHP) technique has been used to prioritize the evaluation criteria, and the multi-objective optimization on the basis of ratio analysis (MOORA) method has been applied to rank the location alternatives.
Searching for optimal integer solutions to set partitioning problems using column generation
Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael
2007-01-01
We describe a new approach to produce integer-feasible columns for a set partitioning problem directly while solving the linear programming (LP) relaxation using column generation. Traditionally, column generation aims to solve the LP relaxation as quickly as possible without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re...
A Method of Forming the Optimal Set of Disjoint Path in Computer Networks
Directory of Open Access Journals (Sweden)
As'ad Mahmoud As'ad ALNASER
2017-04-01
This work provides a short analysis of multipath routing algorithms. A modified algorithm for forming the maximum set of disjoint paths, taking their metrics into account, is proposed. Paths are optimized by reconfiguring them with an adjacent dead-end path. Reconfigurations are carried out within subgraphs that include only the vertices of the main path and an adjacent dead-end path. This reduces the search space for forming an optimal path and the time complexity of its formation.
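The basic building block behind such multipath schemes is constructing a set of edge-disjoint paths. A simple greedy sketch: repeatedly find a shortest path with BFS and remove its edges. This illustrates the notion of a disjoint path set only; it is not the modified reconfiguration algorithm described above, and greedy removal is not guaranteed to find the maximum set (that requires a flow-based method).

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Shortest path from src to dst by breadth-first search, or None."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def disjoint_paths(edges, src, dst):
    """Greedily collect edge-disjoint src-dst paths in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    paths = []
    while True:
        p = bfs_path(adj, src, dst)
        if p is None:
            return paths
        paths.append(p)
        for a, b in zip(p, p[1:]):          # remove used edges
            adj[a].discard(b)
            adj[b].discard(a)

edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")]
print(len(disjoint_paths(edges, "s", "t")))
```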
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2015-01-01
Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.
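The core of any Pareto-based multi-objective selection such as NSGA-II is keeping only non-dominated candidates. Here each candidate signature is scored on three objectives mirroring the abstract (prediction error to minimize, signature size to minimize, network distance to the drug target to minimize); the candidate values are illustrative, not the study's data.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in one.
    Convention: all objectives are to be minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (1 - accuracy, n_features, distance_to_target) -- all minimized.
candidates = [(0.17, 5, 2), (0.17, 8, 2), (0.20, 3, 1), (0.15, 9, 4)]
print(sorted(pareto_front(candidates)))
```

Only (0.17, 8, 2) is dominated here (by (0.17, 5, 2)); the other three form the Pareto set of trade-offs between the objectives.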
Optimization of the primary collimator settings for fractionated IMRT stereotactic radiotherapy
International Nuclear Information System (INIS)
Tobler, Matt; Leavitt, Dennis D.; Watson, Gordon
2004-01-01
Advances in field-shaping techniques for stereotactic radiosurgery/radiotherapy have allowed dynamic adjustment of field shape with gantry rotation (dynamic conformal arc) in an effort to minimize dose to critical structures. Recent work evaluated the potential for increased sparing of normal tissues when the primary collimator setting is optimized to only the size necessary to cover the largest shape of the dynamic micro-multileaf field. Intensity-modulated radiotherapy (IMRT) is now a treatment option for patients receiving stereotactic radiotherapy treatments. This multisegmentation of the dose delivered through multiple fixed treatment fields provides for delivery of a uniform dose to the tumor volume while allowing sparing of critical structures, particularly for patients whose tumor volumes are less suited to rotational treatment. For these segmented fields, the total number of monitor units (MUs) delivered may be much greater than the number of MUs required if dose delivery occurred through an unmodulated treatment field. As a result, undesired dose delivered as leakage through the leaves to tissues outside the area of interest will be proportionally increased. This work evaluates the role of optimization of the primary collimator setting for these IMRT treatment fields and compares the results to treatment fields where the primary collimator settings have not been optimized.
Application of HGSO to security based optimal placement and parameter setting of UPFC
International Nuclear Information System (INIS)
Tarafdar Hagh, Mehrdad; Alipour, Manijeh; Teimourzadeh, Saeed
2014-01-01
Highlights: • A new method for solving the security based UPFC placement and parameter setting problem is proposed. • The proposed method is a global method for all mixed-integer problems. • The proposed method has the ability of parallel search in binary and continuous space. • By using the proposed method, most of the problems due to line contingencies are solved. • Comparison studies are done to compare the performance of the proposed method. - Abstract: This paper presents a novel method to solve the security based optimal placement and parameter setting of unified power flow controller (UPFC) problem, based on the hybrid group search optimization (HGSO) technique. Firstly, HGSO is introduced in order to solve mixed-integer type problems. Afterwards, the proposed method is applied to the security based optimal placement and parameter setting of UPFC problem. The focus of the paper is to enhance power system security by eliminating or minimizing overloaded lines and bus voltage limit violations under single line contingencies. Simulation studies are carried out on the IEEE 6-bus, IEEE 14-bus and IEEE 30-bus systems in order to verify the accuracy and robustness of the proposed method. The results indicate that by using the proposed method, the power system remains secure under single line contingencies.
International Nuclear Information System (INIS)
Frisch, M.J.; Binkley, J.S.; Schaefer, H.F. III
1984-01-01
The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F + H2 → FH + H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol⁻¹ of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.
Directory of Open Access Journals (Sweden)
Yu Zhou
2017-01-01
The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, the TCPP involves additional complexity due to the maintenance constraint on train-sets: train-sets must undergo maintenance after running for a certain time and distance. The TCPP is nondeterministic polynomial hard (NP-hard); no available algorithm can guarantee the global optimum, and many factors such as the utilization mode and the maintenance mode impact the solution. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and then a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.
Directory of Open Access Journals (Sweden)
Soumya Banerjee
2011-03-01
Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking in office and shopping mall complexes during peak transaction times. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough sets. Rough sets are used for the extraction of the uncertain rules that exist in databases of parking situations. The inclusion of rough set theory provides the accuracy and roughness measures used to characterize the uncertainty of the parking lot. Approximation accuracy is employed to describe the accuracy of a rough classification [1] under different dynamic parking scenarios. As such, the proposed hybrid metaphor comprising Tabu Search and rough sets could provide substantial research directions for other similarly hard optimization problems.
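The rough-set approximation accuracy the abstract relies on is the size of the lower approximation of a target set divided by the size of its upper approximation, under a partition of the records into indiscernibility classes. The partition and target set below are toy values standing in for parking data.

```python
def approximations(partition, target):
    """Lower approximation: union of blocks fully inside target.
    Upper approximation: union of blocks intersecting target."""
    lower = set().union(*(b for b in partition if b <= target))
    upper = set().union(*(b for b in partition if b & target))
    return lower, upper

def accuracy(partition, target):
    """Rough-set approximation accuracy: |lower| / |upper| (<= 1)."""
    lower, upper = approximations(partition, target)
    return len(lower) / len(upper)

# Blocks of indiscernible parking records; X = records where a lot was free.
blocks = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5})]
X = {1, 2, 3}
print(accuracy(blocks, X))
```

An accuracy of 1 means the target set is exactly expressible in the available attributes; lower values quantify the roughness (uncertainty) of the classification.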
A parametric level-set approach for topology optimization of flow domains
DEFF Research Database (Denmark)
Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton
2010-01-01
… of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study …
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)
2014-11-21
We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2) with and without a spin-component scaling (SCS or spin-opposite scaling (SOS)). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to a slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order and their spin-component scaled variants.
Brouwer, A.; Hoogendoorn, M.; Naarding, E.
2015-01-01
In this paper we evaluate the International Accounting Standards Board’s (IASB) efforts, in a discussion paper (DP) of 2013, to develop a new conceptual framework (CF) in the light of its stated ambition to establish a robust and consistent basis for future standard setting, thereby guiding standard
DEFF Research Database (Denmark)
Faber, Rasmus; Sauer, Stephan P. A.
2018-01-01
The basis set convergence of nuclear spin-spin coupling constants (SSCC) calculated at the coupled cluster singles and doubles (CCSD) level has been investigated for ten difficult molecules. Eight of the molecules contain fluorine atoms and nine contain double or triple bonds. Results obtained...
Van der Veen, J.W.; Van Ormondt, D.; De Beer, R.
2012-01-01
In this work we report on generating/using simulated metabolite basis sets for the quantification of in vivo MRS signals, assuming that they have been acquired by using the PRESS pulse sequence. To that end we have employed the classes and functions of the GAMMA C++ library. By using several
Seldner, K.
1977-01-01
An algorithm was developed to optimally control the traffic signals at each intersection using a discrete time traffic model applicable to heavy or peak traffic. Offline optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.
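A common baseline for cycle splits, against which queue-minimizing optimizations like the one above are compared, is to divide the effective green time of a fixed cycle in proportion to each approach's flow ratio, in the spirit of Webster's classical method. This is a generic sketch, not the paper's discrete-time queue model.

```python
def green_splits(cycle, lost_time, flows, saturation):
    """Allocate effective green time (cycle minus lost time) to each
    approach in proportion to its flow ratio flow/saturation."""
    ratios = [f / s for f, s in zip(flows, saturation)]
    effective = cycle - lost_time
    total = sum(ratios)
    return [effective * r / total for r in ratios]

# 90 s cycle, 10 s lost time, two approaches with equal saturation flows.
splits = green_splits(90.0, 10.0, flows=[600.0, 200.0],
                      saturation=[1800.0, 1800.0])
print(splits)
```

With these illustrative numbers, the busier approach receives three times the green time of the lighter one, and the splits sum to the 80 s of effective green.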
Directory of Open Access Journals (Sweden)
Qingyan Wang
2015-01-01
The thrust bearing is the part with the highest failure rate in a hydroturbine generator set, primarily due to heavy axial load. Such heavy loads often cause oil film failure, bearing friction, and even burning. It is therefore necessary to study the load and methods to reduce it. The dynamic thrust is an important factor influencing the axial load and the design of an electromagnetic load reduction device. In this paper, the hydraulic thrust is analyzed accurately in combination with the structural features of a vertical turbine. Then, taking the turbine model HL-220-LT-550 as an example, an electromagnetic levitation load reduction device is designed and its mathematical model is built, with the purpose of minimizing excitation loss and total mass under the constraints of installation space, connection layout, and heat dissipation. Particle swarm optimization (PSO) is employed to search for the optimum solution; finally, the result is verified by the finite element method (FEM), which demonstrates that the optimized structure is more effective.
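The PSO loop used in such design optimization can be sketched on a toy one-dimensional objective. Standard PSO coefficients (inertia w, cognitive c1, social c2) are used; this is an illustration of the technique, not the authors' FEM-coupled code.

```python
import random

def pso(f, lo, hi, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimize f on [lo, hi] with a minimal particle swarm."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]   # positions
    vs = [0.0] * n                                 # velocities
    pbest = xs[:]                                  # personal bests
    gbest = min(xs, key=f)                         # global best
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 1.5) ** 2, -5.0, 5.0)
print(best)
```

In the paper's setting, the particle would be a vector of device design variables and f would combine excitation loss and mass, with penalties for the space, layout, and heat dissipation constraints.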
Level set method for optimal shape design of MRAM core. Micromagnetic approach
International Nuclear Information System (INIS)
Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van
2008-01-01
We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ, which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ.
On the optimal identification of tag sets in time-constrained RFID configurations.
Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel
2011-01-01
In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. Besides, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
The surrogate-based simulation-optimization technique is an effective approach for optimizing surfactant-enhanced aquifer remediation (SEAR) strategies for removing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key to such research. However, previous studies have generally been based on a stand-alone surrogate model and have rarely made efforts to sufficiently improve the surrogate's approximation accuracy by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and we conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy.
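An ensemble surrogate combines several models' predictions with performance-based weights. The SPA weighting itself is not reproduced here; as a generic stand-in, the sketch weights each surrogate inversely to its validation error, which captures the same idea of rewarding the better-performing models. All numbers are illustrative.

```python
def ensemble_weights(errors):
    """Weights inversely proportional to each surrogate's validation error."""
    inv = [1.0 / e for e in errors]
    s = sum(inv)
    return [v / s for v in inv]

def ensemble_predict(predictions, weights):
    """Weighted combination of the individual surrogates' predictions."""
    return sum(p * w for p, w in zip(predictions, weights))

# Hypothetical validation RMSE of three surrogates (RBFANN, SVR, Kriging).
errors = [0.04, 0.02, 0.01]
w = ensemble_weights(errors)
# Their predictions for one candidate remediation strategy:
preds = [10.2, 10.0, 9.9]
print(ensemble_predict(preds, w))
```

The Kriging model, with the smallest error, dominates the combination, mirroring how the paper's set pair weights favor the best-performing surrogate.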
Energy Technology Data Exchange (ETDEWEB)
Zipori, I.; Bustan, A.; Kerem, Z.; Dag, A.
2016-07-01
In modern oil olive orchards, mechanical harvesting technologies have significantly accelerated harvesting outputs, thereby allowing for careful planning of harvest timing. While optimizing harvest time may have profound effects on oil yield and quality, the necessary tools to precisely determine the best date are rather scarce. For instance, the commonly used indicator, the fruit ripening index, does not necessarily correlate with oil accumulation. Oil content per fruit fresh weight is strongly affected by fruit water content, making the ripening index an unreliable indicator. However, oil in the paste, calculated on a dry weight basis (OPDW), provides a reliable indication of oil accumulation in the fruit. In most cultivars tested here, OPDW never exceeded ca. 0.5 g·g⁻¹ dry weight, making this threshold the best indicator for the completion of oil accumulation and the consequent reduction in quality thereafter. The rates of OPDW and changes in quality parameters strongly depend on local conditions, such as climate, tree water status and fruit load. We therefore propose a fast and easy method to determine and monitor the OPDW in a given orchard. The proposed method is a useful tool for the determination of optimal harvest timing, particularly in large plots under intensive cultivation practices, with the aim of increasing orchard revenues. The results of this research can be directly applied in olive orchards, especially in large-scale operations. By following the proposed method, individual plots can be harvested according to sharp thresholds of oil accumulation status and pre-determined oil quality parameters, thus effectively exploiting the potentials of oil yield and quality. The method can become a powerful tool for scheduling the harvest throughout the season, and at the same time forecasting the flow of olives to the olive mill.
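The OPDW indicator is a simple ratio: oil mass divided by the dry mass of the paste, which removes the confounding effect of fruit water content described above. The sample numbers below are illustrative, not measured data.

```python
def opdw(oil_mass, fresh_mass, water_fraction):
    """Oil in paste on a dry weight basis: g oil per g dry paste."""
    dry_mass = fresh_mass * (1.0 - water_fraction)
    return oil_mass / dry_mass

# 100 g of paste at 55% moisture containing 20 g of oil:
value = opdw(oil_mass=20.0, fresh_mass=100.0, water_fraction=0.55)
print(value)
# The abstract's threshold: oil accumulation is essentially complete
# once OPDW approaches ~0.5 g/g dry weight.
print(value >= 0.5)
```

Tracking this value over the season and harvesting as it approaches the ~0.5 g·g⁻¹ threshold is the scheduling rule the abstract proposes.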
Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.
2013-03-01
Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machine (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) as the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and kNN had the highest average AUC. The lower standard deviation and the flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.
Directory of Open Access Journals (Sweden)
MILOŠ MADIĆ
2015-11-01
The role of non-conventional machining processes (NCMPs) in today's manufacturing environment is well acknowledged. For effective utilization of the capabilities and advantages of different NCMPs, selection of the most appropriate NCMP for a given machining application requires consideration of different conflicting criteria. The right choice of NCMP is critical to the success and competitiveness of the company. As the NCMP selection problem involves different conflicting criteria of different relative importance, multi-criteria decision making (MCDM) methods are very useful for systematically selecting the most appropriate NCMP. This paper presents the application of a recent MCDM method, multi-objective optimization on the basis of ratio analysis (MOORA), to an NCMP selection problem defined over different performance criteria of the four most widely used NCMPs. To determine the relative significance of the considered quality criteria, a pair-wise comparison matrix of the analytic hierarchy process was used. The results obtained using the MOORA method showed perfect correlation with those obtained by the technique for order preference by similarity to ideal solution (TOPSIS), which proves the applicability and potential of this MCDM method for solving complex NCMP selection problems.
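The MOORA ratio system itself is compact: each criterion column is vector-normalized, and an alternative's score is the weighted sum of its benefit criteria minus that of its cost criteria. A minimal sketch (the AHP-derived weights are assumed given):

```python
import numpy as np

def moora(matrix, weights, benefit):
    """MOORA ratio-system ranking.
    matrix: (m alternatives x n criteria); weights: (n,);
    benefit: boolean mask, True for benefit criteria, False for cost criteria."""
    X = np.asarray(matrix, float)
    norm = X / np.sqrt((X ** 2).sum(axis=0))       # vector normalization per criterion
    signed = np.where(benefit, weights, -np.asarray(weights, float))
    scores = (norm * signed).sum(axis=1)           # benefit minus cost contributions
    ranking = np.argsort(-scores)                  # best alternative first
    return scores, ranking
```

A dominated alternative (worse on every criterion) always ranks last under this scheme, which makes the method easy to sanity-check on small matrices.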
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
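The surrogate family used by these methods, a cubic RBF interpolant with a linear polynomial tail, can be fitted by solving one augmented linear system. A minimal sketch:

```python
import numpy as np

def fit_cubic_rbf(X, y):
    """Cubic RBF interpolant s(x) = sum_i lam_i * ||x - x_i||^3 + c0 + c.x,
    fitted by solving the standard augmented system with a linear tail."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = r ** 3
    P = np.hstack([np.ones((n, 1)), X])            # linear tail basis [1, x]
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    coef = np.linalg.solve(A, np.concatenate([y, np.zeros(d + 1)]))
    lam, c = coef[:n], coef[n:]
    def predict(Xq):
        rq = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
        return rq ** 3 @ lam + np.hstack([np.ones((len(Xq), 1)), Xq]) @ c
    return predict
```

The linear tail guarantees exact reproduction of linear functions, which is one reason this family is popular for expensive black-box objectives and constraints.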
Optimal structural inference of signaling pathways from unordered and overlapping gene sets.
Acharya, Lipi R; Judeh, Thair; Wang, Guangdi; Zhu, Dongxiao
2012-02-15
A plethora of bioinformatics analyses has led to the discovery of numerous gene sets, which can be interpreted as discrete measurements emitted from latent signaling pathways. Their potential to infer signaling pathway structures, however, has not been sufficiently exploited. Existing methods accommodating discrete data do not explicitly consider the signal cascading mechanisms that characterize a signaling pathway. Novel computational methods are thus needed to fully utilize gene sets and to broaden the scope from pairwise interactions to the more general cascading events in the inference of signaling pathway structures. We propose a gene set based simulated annealing (SA) algorithm for the reconstruction of signaling pathway structures. A signaling pathway structure is a directed graph containing up to a few hundred nodes and many overlapping signal cascades, where each cascade represents a chain of molecular interactions from the cell surface to the nucleus. Gene sets in our context refer to discrete sets of genes participating in signal cascades, the basic building blocks of a signaling pathway, with no prior information about gene orderings in the cascades. From a compendium of gene sets related to a pathway, SA aims to search for the signal cascades that characterize the optimal signaling pathway structure. In the search process, the extent of overlap among signal cascades is used to measure the optimality of a structure. Throughout, we treat gene sets as random samples from a first-order Markov chain model. We evaluated the performance of SA in three case studies. In the first study, conducted on 83 KEGG pathways, SA demonstrated significantly better performance than Bayesian network methods. Since both SA and Bayesian network methods accommodate discrete data, use a 'search and score' network learning strategy, and output a directed network, they can be compared in terms of performance and computational time. In the second study, we compared SA and
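The 'search and score' loop at the heart of such an SA algorithm is generic; only the score (here, the paper's cascade-overlap measure) and the neighborhood move are problem-specific. A minimal sketch with both supplied by the caller:

```python
import math
import random

def simulated_annealing(state, score, neighbor, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic 'search and score' simulated annealing loop, maximizing score().
    The objective and the neighborhood move are supplied by the caller; for the
    pathway problem the score would be the extent of cascade overlap."""
    rng = random.Random(seed)
    cur, cur_s = state, score(state)
    best, best_s = cur, cur_s
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_s = score(cand)
        # Accept improvements always, and worse moves with Boltzmann probability.
        if cand_s >= cur_s or rng.random() < math.exp((cand_s - cur_s) / max(t, 1e-12)):
            cur, cur_s = cand, cand_s
        if cur_s > best_s:
            best, best_s = cur, cur_s
        t *= cooling
    return best, best_s
```

As a toy usage, one can recover an ordering (analogous to gene order in a cascade) by scoring correctly ordered adjacent pairs and using swap moves.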
DEFF Research Database (Denmark)
Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro
2012-01-01
This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated… optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas… are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative…
COMPARING INTRA- AND INTERENVIRONMENTAL PARAMETERS OF OPTIMAL SETTING IN BREEDING EXPERIMENTS
Directory of Open Access Journals (Sweden)
Domagoj Šimić
2004-06-01
A series of biometrical and quantitative-genetic parameters, not well known in Croatia, are used for the most important agronomic traits to determine the optimal genotype setting within a location as well as among locations. The objectives of the study were to estimate and compare (1) parameters of intra-environment setting, namely the effective mean square error (EMSE) in lattice design, the relative efficiency (RE) of the lattice design (LD) compared to the randomized complete block design (RCBD), and the repeatability (Rep) of a plot value, and (2) the operative heritability (h²) as a parameter of inter-environment setting, in an experiment with 72 maize hybrids. Trials were set up in four environments (two locations in two years), evaluating grain yield and stalk rot. EMSE values corresponded across environments for both traits, while the estimates of the RE of the LD varied inconsistently over environments and traits. Rep estimates differed more across environments than across traits. Rep values did not correspond with h² estimates: Rep estimates for stalk rot were higher than those for grain yield, while h² for grain yield was higher than for stalk rot in all instances. Our results suggest that, owing to the importance of genotype × environment interaction, multi-environment trials are needed for both traits. If the experimental framework must be reduced for economic or other reasons, decreasing the number of locations per year rather than the number of years of investigation is recommended.
Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search
Directory of Open Access Journals (Sweden)
Simon Fong
2013-01-01
Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force is computationally infeasible, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing Swarm Search over several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experimental results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
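The wrapper structure described here, any classifier inside the fitness function and any metaheuristic driving the subset search, can be sketched compactly. Below, a simple bit-flip random search stands in for the swarm metaheuristics and a nearest-centroid classifier stands in for the plug-in classifier; both are illustrative choices, not the paper's.

```python
import numpy as np

def centroid_accuracy(Xtr, ytr, Xte, yte, mask):
    """Fitness: nearest-centroid accuracy using only the selected features."""
    if not mask.any():
        return 0.0
    Xtr, Xte = Xtr[:, mask], Xte[:, mask]
    classes = np.unique(ytr)
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = ((Xte[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
    return float((classes[d.argmin(1)] == yte).mean())

def stochastic_feature_search(Xtr, ytr, Xte, yte, iters=200, seed=0):
    """Wrapper-style subset search: flip one feature bit per step and keep
    the mask whenever the fitness does not get worse."""
    rng = np.random.default_rng(seed)
    mask = rng.random(Xtr.shape[1]) < 0.5
    best = centroid_accuracy(Xtr, ytr, Xte, yte, mask)
    for _ in range(iters):
        cand = mask.copy()
        flip = rng.integers(Xtr.shape[1])
        cand[flip] = ~cand[flip]
        acc = centroid_accuracy(Xtr, ytr, Xte, yte, cand)
        if acc >= best:
            mask, best = cand, acc
    return mask, best
```

Swapping the bit-flip step for a population-based metaheuristic, or the centroid classifier for any other model, changes nothing in the overall wrapper structure.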
Application of Fuzzy Sets for the Improvement of Routing Optimization Heuristic Algorithms
Directory of Open Access Journals (Sweden)
Mattas Konstantinos
2016-12-01
The determination of the optimal circular path has become widely known for the difficulty of producing a solution and for its numerous applications in the organization and management of passenger and freight transport. It is a combinatorial optimization problem for which several deterministic and heuristic models have been developed in recent years, applicable to route organization issues, passenger and freight transport, storage and distribution of goods, waste collection, supply and control of terminals, as well as human resource management. The scope of the present paper is the development, with the use of fuzzy sets, of a practical, comprehensible and fast heuristic algorithm that improves the ability of the classical deterministic algorithms to identify optimal circular routes, symmetrical or non-symmetrical. The proposed fuzzy heuristic algorithm is compared to the corresponding deterministic ones with regard to the deviation of the proposed solution from the best known solution and the complexity of the calculations needed to obtain it. It is shown that the use of fuzzy sets reduced by up to 35% the deviation of the solution identified by the classical deterministic algorithms from the best known solution.
International Nuclear Information System (INIS)
Kojima, Akihiro; Watanabe, Hiroyuki; Arao, Yuichi; Kawasaki, Masaaki; Takaki, Akihiro; Matsumoto, Masanori
2007-01-01
In this study, we examined whether the optimal energy window (EW) setting that we previously proposed, which depends on the energy resolution of the gamma camera, is valid for planar scintigraphic imaging using Tl-201, Ga-67, Tc-99m, and I-123. Image acquisitions for line sources and paper-sheet phantoms containing each radionuclide were performed in air and with scattering materials. For the six photopeaks, excluding that of the Hg-201 characteristic x-rays, the conventional 20%-width energy window (EW20%) setting and the optimal EW setting (15% width below 100 keV and 13% width above 100 keV) were compared. For the photopeak of the Hg-201 characteristic x-rays, the conventional on-peak EW20% setting was compared with the off-peak EW setting (73 keV-25%) and the wider off-peak EW setting (77 keV-29%). The image-count ratio (defined as the ratio of the image counts obtained with an EW to the total image counts obtained with the EW covering the whole photopeak for a line source in air), image quality, spatial resolutions (full width at half maximum (FWHM) and full width at tenth maximum (FWTM) values), count-profile curves, and defect-contrast values were compared between the conventional EW setting and the optimal EW setting. Except for the Hg-201 characteristic x-rays, the image-count ratios were 94-99% for the EW20% setting but 78-89% for the optimal EW setting. However, the optimal EW setting reduced the scatter fraction (defined as the scattered-to-primary counts ratio) effectively compared with the EW20% setting. Consequently, all the images with the optimal EW setting gave better image quality than those with the EW20% setting. For the Hg-201 characteristic x-rays, the off-peak EW setting showed great improvement in image quality in comparison with the EW20% setting, and the wider off-peak EW setting gave the best results. In conclusion, our planar imaging study showed that although the optimal EW setting proposed by us gives less image-count ratio by
Assessing the optimality of ASHRAE climate zones using high resolution meteorological data sets
Fils, P. D.; Kumar, J.; Collier, N.; Hoffman, F. M.; Xu, M.; Forbes, W.
2017-12-01
Energy consumed by built infrastructure constitutes a significant fraction of the nation's energy budget. According to a 2015 US Energy Information Administration report, 41% of the energy used in the US went to residential and commercial buildings. Additional research has shown that 32% of commercial building energy goes into heating and cooling. The American National Standards Institute and the American Society of Heating, Refrigerating and Air-Conditioning Engineers Standard 90.1 provides the climate zones used in current practice, since heating and cooling demands are strongly influenced by spatio-temporal weather variations. For this reason, we have been assessing the optimality of the climate zones using high resolution daily climate data from NASA's DAYMET database. We analyzed time series of meteorological data sets for all ASHRAE climate zones from 1980 to 2016 inclusive. We computed the mean, standard deviation, and other statistics for a set of meteorological variables (solar radiation, maximum and minimum temperature) within each zone. By plotting all the zonal statistics, we analyzed patterns and trends in those data over the past 36 years. We compared the mean of each zone to its standard deviation to determine the range of spatial variability that exists within each zone. If the band around the mean is too large, it indicates that regions in the zone experience a wide range of weather conditions, and perhaps a common set of building design guidelines would lead to a non-optimal energy consumption scenario. In this study we have observed strong variation among the different climate zones. Some have shown consistent patterns over the past 36 years, indicating that the zone was well constructed, while others have deviated greatly from their mean, indicating that the zone needs to be reconstructed. We also looked at redesigning the climate zones based on high resolution climate data. We are using building simulation models like EnergyPlus to develop
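The zone-level screening described, comparing each zone's mean to its spread to flag zones that group dissimilar climates, reduces to a small grouped computation. A sketch with an illustrative coefficient-of-variation threshold (the threshold value is an assumption, not from the study):

```python
import numpy as np

def zone_variability(values, zone_ids, cv_threshold=0.5):
    """Per-zone mean and standard deviation of a meteorological variable, and a
    flag for zones whose spatial spread (std relative to |mean|) suggests the
    zone groups dissimilar climates. The 0.5 threshold is illustrative."""
    zones = np.unique(zone_ids)
    mean = np.array([values[zone_ids == z].mean() for z in zones])
    std = np.array([values[zone_ids == z].std() for z in zones])
    too_wide = std > cv_threshold * np.abs(mean)
    return zones, mean, std, too_wide
```

With gridded DAYMET data, `values` would be one variable's long-term station or cell averages and `zone_ids` the ASHRAE zone each cell falls in.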
International Nuclear Information System (INIS)
Procaccia, H.; Cordier, R.; Muller, S.
1994-07-01
Statistical decision theory can be an alternative for optimizing preventive maintenance periodicity. This theory concerns the situation in which a decision maker has to choose among a set of reasonable decisions, and where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts given the observed feedback experience, the states of nature are the associated failure probabilities, and the losses are the expectations of the induced cost of maintenance and of the consequences of failures. As the failure probabilities concern rare events, at the ultimate state of RCM analysis (failure of a sub-component), and as the expected foreseeable behaviour of equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for the failure probabilities is modeled from expert knowledge and is combined with the sparse statistical information provided by feedback experience, giving a posterior distribution of failure probabilities. The optimized decision is the one that minimizes the expected loss over the posterior distribution. This methodology has been applied to the inspection and maintenance optimization of cylinders of diesel generator engines of 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by 2 redundant diesel generators, which are tested every 2 weeks for about 1 hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of the diesel cylinders is performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures could be critical. So Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when the maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab
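For a rare failure event, the conjugate Beta-binomial update and the expected-loss comparison can be written in a few lines. The costs and prior below are illustrative, not the study's values; since the loss is linear in the failure probability, only the posterior mean is needed:

```python
def optimal_periodicity(decisions, prior_a, prior_b, failures, trials):
    """Pick the maintenance periodicity minimizing posterior expected loss.
    decisions: {name: (maintenance_cost, failure_cost)} per period (illustrative);
    Beta(prior_a, prior_b) prior on the failure probability, updated with the
    observed failures out of trials (conjugate Beta-binomial update)."""
    a = prior_a + failures
    b = prior_b + trials - failures
    p_mean = a / (a + b)                       # posterior mean failure probability
    losses = {name: cm + cf * p_mean for name, (cm, cf) in decisions.items()}
    return min(losses, key=losses.get), losses
```

A longer periodicity typically trades lower maintenance cost for a higher exposure to failure cost, which is exactly the comparison the expected loss resolves.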
Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.
2012-10-01
We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
International Nuclear Information System (INIS)
Buemi, Giuseppe
2004-01-01
Ab initio calculations of hydrogen bridge energies (E(HB)) of 2-halophenols were carried out at various levels of sophistication using a variety of basis sets in order to verify their ability to reproduce the experimentally determined gas-phase ordering and the related experimental frequencies of the O-H vibration stretching mode. The semiempirical AM1 and PM3 approaches were also adopted. Calculations were extended to the O-H...X bridge of a particular conformation of 2,4-dihalo-malonaldehyde. The results and their trend with respect to the electronegativity of the halogen series are highly dependent on the basis set. The less sophisticated 3-21G, CEP121G and LANL2DZ basis sets (with and without correlation energy inclusion) predict E(HB) decreasing with decreasing electronegativity, whilst the opposite is generally found when more extended bases are used. However, all high-level calculations confirm the nearly negligible energy differences between the examined O-H...X bridges
Directory of Open Access Journals (Sweden)
Zhenyu Mei
2012-01-01
The ongoing controversy about the conditions under which curb parking should be provided has few definitive answers because comprehensive research in this area has been lacking. Our goal is to present a set of heuristic urban street speed functions under mixed traffic flow that take into account the impacts of curb parking. Two impacts have been defined to classify and quantify the phenomena of motor vehicles' speed dynamics in terms of curb parking. The first is the space impact, which is caused by the curb parking type. The other is the time impact, which results from drivers maneuvering into or out of a parking space. In this paper, based on empirical data collected from six typical urban streets in Nanjing, China, two models have been proposed to describe these phenomena for one-way and two-way traffic, respectively. An intensive experiment has been conducted to calibrate and validate these models, taking into account the complexity of the model parameters. We also provide guidelines on how to cluster and calculate the models' parameters. Results from these models demonstrate promising performance in modeling motor vehicle speeds for mixed traffic flow under the influence of curb parking.
Adaptive Conflict-Free Optimization of Rule Sets for Network Security Packet Filtering Devices
Directory of Open Access Journals (Sweden)
Andrea Baiocchi
2015-01-01
Packet filtering and processing rules management in firewalls and security gateways has become commonplace in increasingly complex networks. On one side, there is a need to maintain the logic of high level policies, which requires administrators to implement and update a large number of filtering rules while keeping them conflict-free, that is, avoiding security inconsistencies. On the other side, traffic-adaptive optimization of large rule lists is useful for general purpose computers used as filtering devices without specifically designed hardware, to face growing link speeds and to harden filtering devices against DoS and DDoS attacks. Our work joins the two issues in an innovative way and defines a traffic-adaptive algorithm to find conflict-free optimized rule sets, relying on information gathered from traffic logs. The proposed approach suits current technology architectures and exploits available features, like traffic log databases, to minimize the impact of adaptive conflict-free optimization (ACO) on the packet filtering devices. We demonstrate the benefit of the proposed algorithm through measurements on a test bed made up of real-life, commercial packet filtering devices.
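One way to realize "traffic-adaptive but conflict-free" reordering is a weighted topological sort: rules move up by hit count, except that any order-dependent pair (overlapping matches with different actions) keeps its original relative order. This is a simplified sketch of the general idea, not the paper's exact algorithm:

```python
import heapq

def reorder_rules(rules):
    """Reorder filter rules by descending hit count while keeping every
    order-dependent pair (overlapping match sets, different actions) in its
    original relative order, so the rule set's semantics are unchanged.
    rules: list of (match_set, action, hits); match_set is a set of packet keys."""
    n = len(rules)
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            mi, ai, _ = rules[i]
            mj, aj, _ = rules[j]
            if mi & mj and ai != aj:          # order-dependent pair: i must stay before j
                succ[i].append(j)
                indeg[j] += 1
    heap = [(-rules[i][2], i) for i in range(n) if indeg[i] == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        _, i = heapq.heappop(heap)
        order.append(i)
        for j in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                heapq.heappush(heap, (-rules[j][2], j))
    return order
```

Real devices match on field ranges rather than explicit key sets, but the constraint graph and the hit-count priority carry over unchanged.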
International Nuclear Information System (INIS)
Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.
2007-01-01
Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered solved when the Pareto optimal set, i.e., the set of non-dominated solutions, is found. Multi-objective evolutionary algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from it. Thus, a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a decision maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the region of the Pareto frontier that best accords with these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. The methodology is able to efficiently select the best Pareto-frontier region for the specified relative importance of the criteria
Directory of Open Access Journals (Sweden)
Ling Li
Polycomb repressive complex 2 (PRC2), a histone H3 lysine 27 methyltransferase, plays a key role in gene regulation and is a known epigenetic drug target for cancer therapy. The WD40 domain-containing protein EED is the regulatory subunit of PRC2. It binds to the tri-methylated lysine 27 of histone H3 (H3K27me3) and thereby stimulates the activity of PRC2 allosterically. Recently, we disclosed a novel PRC2 inhibitor, EED226, which binds to the K27me3 pocket on EED and showed strong antitumor activity in a mouse xenograft model. Here, we further report the identification and validation of four other EED binders along with EED162, the parental compound of EED226. The crystal structures of all five compounds in complex with EED revealed a common deep pocket induced by the binding of this diverse set of compounds. This pocket is created after a significant conformational rearrangement of the aromatic cage residues (Y365, Y148 and F97) in the H3K27me3 binding pocket of EED, its width delineated by the side chains of the rearranged residues. In addition, all five compounds interact with Arg367 at the bottom of the pocket. Each compound also displays unique features in its interaction with EED, suggesting the dynamics of the H3K27me3 pocket in accommodating the binding of different compounds. Our results provide structural insights for the rational design of novel EED binders for the inhibition of PRC2 complex activity.
Optimal allocation of the limited oral cholera vaccine supply between endemic and epidemic settings.
Moore, Sean M; Lessler, Justin
2015-10-06
The World Health Organization (WHO) recently established a global stockpile of oral cholera vaccine (OCV) to be preferentially used in epidemic response (reactive campaigns) with any vaccine remaining after 1 year allocated to endemic settings. Hence, the number of cholera cases or deaths prevented in an endemic setting represents the minimum utility of these doses, and the optimal risk-averse response to any reactive vaccination request (i.e. the minimax strategy) is one that allocates the remaining doses between the requested epidemic response and endemic use in order to ensure that at least this minimum utility is achieved. Using mathematical models, we find that the best minimax strategy is to allocate the majority of doses to reactive campaigns, unless the request came late in the targeted epidemic. As vaccine supplies dwindle, the case for reactive use of the remaining doses grows stronger. Our analysis provides a lower bound for the amount of OCV to keep in reserve when responding to any request. These results provide a strategic context for the fulfilment of requests to the stockpile, and define allocation strategies that minimize the number of OCV doses that are allocated to suboptimal situations. © 2015 The Authors.
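The minimax allocation logic can be illustrated with a toy dose-allocation search over epidemic scenarios; the utility functions below are hypothetical stand-ins for the paper's transmission models:

```python
def minimax_allocation(total_doses, endemic_utility, epidemic_utility, scenarios, step=1000):
    """Choose how many doses go to the reactive (epidemic) campaign so that the
    worst case over epidemic scenarios is as good as possible.
    endemic_utility(d) and epidemic_utility(d, s) map doses to cases averted in
    the endemic setting and in epidemic scenario s; both are illustrative."""
    best_x, best_worst = 0, float("-inf")
    for x in range(0, total_doses + 1, step):
        worst = min(endemic_utility(total_doses - x) + epidemic_utility(x, s)
                    for s in scenarios)
        if worst > best_worst:
            best_x, best_worst = x, worst
    return best_x, best_worst
```

With per-dose epidemic utility above the endemic baseline even in the worst (late-request) scenario, the minimax choice sends the doses to the reactive campaign, mirroring the abstract's qualitative finding.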
Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set
Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.
2017-12-01
In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered grid finite difference method was applied to simulate the OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project(17-3312) of the Korea Institute of
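The l2-norm objective and gradient step at the core of FWI can be illustrated with a linearized toy problem, travel times that are linear in slowness, in place of a wave-equation solver:

```python
import numpy as np

def invert_slowness(G, d_obs, s0, lr=0.01, iters=500):
    """Toy linearized inversion loop: minimize the l2 residual between observed
    and modeled travel times, d = G @ s, by gradient descent on slowness s = 1/v.
    Real FWI obtains this gradient by back-propagating residuals through a wave
    solver; the l2 objective and update structure are the same."""
    s = s0.astype(float).copy()
    for _ in range(iters):
        r = G @ s - d_obs                 # residual: modeled minus observed data
        s -= lr * G.T @ r                 # gradient of 0.5 * ||r||^2 w.r.t. s
    return s
```

Here G plays the role of ray-path lengths through two layers; recovering the slownesses also recovers the layer velocities as their reciprocals.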
International Nuclear Information System (INIS)
Freedhoff, Helen
2004-01-01
We study an aggregate of N identical two-level atoms (TLA's) coupled by the retarded interatomic interaction, using the Lehmberg-Agarwal master equation. First, we calculate the entangled eigenstates of the system; then, we use these eigenstates as a basis set for the projection of the master equation. We demonstrate that in this basis the equations of motion for the level populations, as well as the expressions for the emission and absorption spectra, assume a simple mathematical structure and allow for a transparent physical interpretation. To illustrate the use of the general theory in emission processes, we study an isosceles triangle of atoms, and present in the long wavelength limit the (cascade) emission spectrum for a hexagon of atoms fully excited at t=0. To illustrate its use for absorption processes, we tabulate (in the same limit) the biexciton absorption frequencies, linewidths, and relative intensities for polygons consisting of N=2,...,9 TLA's
International Nuclear Information System (INIS)
Kotasidis, Fotis A.; Zaidi, Habib
2014-01-01
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's use of overlapping, spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis, and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm), while the radial component varied modestly from 4 to 6.7 mm. Through a systematic investigation of the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and a small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function
Energy Technology Data Exchange (ETDEWEB)
NONE
2012-09-19
Within the 2nd Bayreuth expert meeting on biomass on 6 June 2012 in Bayreuth (Federal Republic of Germany), the following lectures were held: (1) Presentation of the activities in the bioenergy sector of the Landwirtschaftliche Lehranstalt Bayreuth (Rainer Prischenk); (2) State of the art of utilizing biogas in Oberfranken from the viewpoint of FVB e.V. (Wolfgang Holland Goetz); (3) Optimization of plant operation by means of an intelligent control (Christian Seier); (4) Process optimization by means of identification of biogas losses and evaluation of the load and emission behaviour of gas engines (Wolfgang Schreier); (5) Data acquisition and implementation of optimization measures from the point of view of an environmental verifier (Thorsten Grantner); (6) Economic analysis and optimization by means of the LfL program BZA Biogas (Josef Winkler); (7) Detailed data acquisition as a necessary basis of process optimization (Timo Herfter); (8) Case examples of the biological support of biogas plants and their correct evaluation (Birgit Pfeifer); (9) Systematic acquisition of operational data as a basis for increasing efficiency using the Praxisforschungsbiogasanlage of the University of Hohenheim (Hans-Joachim Naegele); (10) Practical report: the biogas plant Sochenberg towards 100% utilization of energy (Uli Bader).
Handbook of Gaussian basis sets
International Nuclear Information System (INIS)
Poirier, R.; Kari, R.; Csizmadia, I.G.
1985-01-01
A large body of information useful for chemists involved in molecular Gaussian computations is collected and presented. Every effort has been made by the authors to collect all available data for cartesian Gaussians as found in the literature up to July of 1984. The data in this text include a large collection of polarization function exponents, although in this case the collection is not complete. Exponents for Slater-type orbitals (STO) were included for completeness. This text offers a collection of Gaussian exponents primarily without criticism. (Auth.)
Uvarova, Svetlana; Kutsygina, Olga; Smorodina, Elena; Gumba, Khuta
2018-03-01
The effectiveness and sustainability of an enterprise are based on the effectiveness and sustainability of its portfolio of projects. When creating a production program for a construction company based on a portfolio of projects and related to the planning and implementation of initiated organizational and economic changes, the problem of finding the optimal "risk-return" ratio of the program (portfolio of projects) is solved. The article proposes and validates a methodology for forming an enterprise's portfolio of projects on the basis of the correspondence principle. Optimizing the portfolio of projects against the "risk-return" criterion also contributes to the company's sustainability.
Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review
Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin
2015-12-01
A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, R(e+e-) and Γ(H → bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on
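The PMS stationarity condition in the abstract above (choose the scale where the slope of the truncated approximant vanishes) can be illustrated with a toy NLO series under one-loop running. All numbers below (`b0`, `Lam`, `Q`, `r1`) are invented for illustration and are not taken from the review.

```python
import numpy as np

b0, Lam, Q, r1 = 0.61, 0.2, 10.0, 1.5  # toy beta coefficient, Lambda-like scale, process scale, NLO coefficient

def a(mu):
    """One-loop running coupling (toy normalization)."""
    return 1.0 / (b0 * np.log(mu**2 / Lam**2))

def R_nlo(mu):
    """NLO approximant of a toy observable; the explicit ln(mu^2/Q^2) term
    compensates the running of a(mu) order by order, so the exact observable
    would be mu-independent while the truncation is not."""
    L = np.log(mu**2 / Q**2)
    return a(mu) + (r1 + b0 * L) * a(mu) ** 2

# PMS: pick the scale at which the truncated series is locally scale independent,
# i.e. where dR/dmu vanishes; here located by a dense numerical scan.
mus = np.linspace(2.0, 50.0, 5000)
dR = np.gradient(R_nlo(mus), mus)
mu_pms = mus[np.argmin(np.abs(dR))]
```

For this toy series the stationary point can also be found analytically (it sits where r1 + b0·ln(mu²/Q²) = 0), which makes the scan easy to sanity-check.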
Directory of Open Access Journals (Sweden)
Zipori, I.
2016-06-01
Full Text Available In modern oil olive orchards, mechanical harvesting technologies have significantly accelerated harvesting outputs, thereby allowing for careful planning of harvest timing. While optimizing harvest time may have profound effects on oil yield and quality, the necessary tools to precisely determine the best date are rather scarce. For instance, the commonly used indicator, the fruit ripening index, does not necessarily correlate with oil accumulation. Oil content per fruit fresh weight is strongly affected by fruit water content, making the ripening index an unreliable indicator. However, oil in the paste, calculated on a dry weight basis (OPDW), provides a reliable indication of oil accumulation in the fruit. In most cultivars tested here, OPDW never exceeded ca. 0.5 g.g–1 dry weight, making this threshold the best indicator of the completion of oil accumulation and the consequent reduction in quality thereafter. The rates of OPDW and changes in quality parameters strongly depend on local conditions, such as climate, tree water status and fruit load. We therefore propose a fast and easy method to determine and monitor the OPDW in a given orchard. The proposed method is a useful tool for the determination of optimal harvest timing, particularly in large plots under intensive cultivation practices, with the aim of increasing orchard revenues. The results of this research can be directly applied in olive orchards, especially in large-scale operations. By following the proposed method, individual plots can be harvested according to sharp thresholds of oil accumulation status and pre-determined oil quality parameters, thus effectively exploiting the potential of oil yield and quality. The method can become a powerful tool for scheduling the harvest throughout the season, and at the same time forecasting the flow of olives to the olive mill.
Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo
2014-01-01
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice-regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929
International Nuclear Information System (INIS)
Hao, W; Jinji, G
2012-01-01
Compressing the vibration signal of a rolling bearing is of great significance to the wireless monitoring and remote diagnosis of the fans and pumps widely used in the petrochemical industry. In this paper, according to the characteristics of the vibration signal of a rolling bearing, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of the vibration signal of a rolling bearing using the wavelet packet transform at various compression ratios, and propose a method to precisely select a wavelet packet basis. From an actual signal, we conclude that an orthogonal wavelet packet basis with low vanishing moment should be used to compress the vibration signal of a rolling bearing in order to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' wavelet packet basis obtains the best signal-to-noise ratio at the same compression ratio owing to its superior symmetry.
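The compression scheme described above (wavelet packet transform, then discarding small coefficients) can be sketched in a few lines. For self-containment this uses the simplest orthogonal wavelet (Haar) rather than the 'coif' family the abstract recommends, and a synthetic tone-plus-noise signal as a stand-in for a real bearing vibration record.

```python
import numpy as np

def haar_wp(x, levels):
    """Full Haar wavelet-packet decomposition: returns the list of
    leaf-node coefficient arrays after `levels` binary splits.
    The transform is orthogonal, so total energy is preserved."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for n in nodes:
            a, b = n[0::2], n[1::2]
            nxt.append((a + b) / np.sqrt(2))  # low-pass (approximation) child
            nxt.append((a - b) / np.sqrt(2))  # high-pass (detail) child
        nodes = nxt
    return nodes

def compress(nodes, keep_ratio=0.1):
    """Zero all but the largest-magnitude fraction of packet coefficients."""
    flat = np.concatenate(nodes)
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.sort(np.abs(flat))[-k]
    return [np.where(np.abs(n) >= thresh, n, 0.0) for n in nodes]

# Toy 'vibration signal': a tone plus noise, length a power of two.
t = np.arange(256)
signal = np.sin(2 * np.pi * t / 16) + 0.05 * np.random.default_rng(0).normal(size=256)
kept = compress(haar_wp(signal, 3), keep_ratio=0.1)
nonzero = sum(int(np.count_nonzero(n)) for n in kept)
```

A real implementation would swap in a higher-vanishing-moment basis (e.g. the 'coif' family via a wavelet library) and entropy-code the surviving coefficients; the selection criterion in the paper concerns exactly which basis preserves the energy proportions between feature bands best.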
Directory of Open Access Journals (Sweden)
Olga Kostyukova
2017-11-01
Full Text Available The paper is devoted to the study of a special class of semi-infinite problems arising in nonlinear parametric semi-infinite programming when the differential properties of the solutions are studied. These problems are convex and possess noncompact index sets. We present conditions guaranteeing the existence of optimal solutions and prove a new optimality criterion. An example illustrating the obtained results is presented.
Löptien, U.; Dietze, H.
2014-06-01
The Baltic Sea is a seasonally ice-covered, marginal sea, situated in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).
Löptien, U.; Dietze, H.
2014-12-01
The Baltic Sea is a seasonally ice-covered, marginal sea in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website http://www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).
Aduda, K.O.; Zeiler, W.; Boxem, G.; Sayigh, Ali
2015-01-01
Electricity energy generation and its supply through electricity networks are mainly organized in a top-down, centralized manner. Energy consumption can be predicted quite accurately at a high level, and this forms the basis for prescheduling the production by large power plants. Only few actors are
Aduda, Kennedy; Zeiler, Wim; Boxem, Gert; Sayigh, Ali
2016-01-01
Electricity energy generation and its supply through electricity networks are mainly organized in a top-down, centralized manner. Energy consumption can be predicted quite accurately at a high level, and this forms the basis for prescheduling the production by large power plants. Only few actors are
Optimization of GEANT4 settings for Proton Pencil Beam Scanning simulations using GATE
Energy Technology Data Exchange (ETDEWEB)
Grevillot, Loic, E-mail: loic.grevillot@gmail.co [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); IBA, B-1348 Louvain-la-Neuve (Belgium); Frisson, Thibault [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Zahra, Nabil [Universite de Lyon, F-69622 Lyon (France); IPNL, CNRS UMR 5822, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Bertrand, Damien; Stichelbaut, Frederic [IBA, B-1348 Louvain-la-Neuve (Belgium); Freud, Nicolas [Universite de Lyon, F-69622 Lyon (France); CNDRI, INSA-Lyon, F-69621 Villeurbanne Cedex (France); Sarrut, David [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France)
2010-10-15
This study reports the investigation of different GEANT4 settings for proton therapy applications in the context of Treatment Planning System comparisons. The GEANT4.9.2 release was used through the GATE platform. We focused on the Pencil Beam Scanning delivery technique, which allows for intensity modulated proton therapy applications. The most relevant options and parameters (range cut, step size, database binning) for the simulation that influence the dose deposition were investigated, in order to determine a robust, accurate and efficient simulation environment. In this perspective, simulations of depth-dose profiles and transverse profiles at different depths and energies between 100 and 230 MeV have been assessed against reference measurements in water and PMMA. These measurements were performed in Essen, Germany, with the IBA dedicated Pencil Beam Scanning system, using Bragg-peak chambers and radiochromic films. GEANT4 simulations were also compared to the PHITS.2.14 and MCNPX.2.5.0 Monte Carlo codes. Depth-dose simulations reached 0.3 mm range accuracy compared to NIST CSDA ranges, with a dose agreement of about 1% over a set of five different energies. The transverse profiles simulated using the different Monte Carlo codes showed discrepancies, with up to 15% difference in beam widening between GEANT4 and MCNPX in water. An 8% difference between the GEANT4 multiple scattering and single scattering algorithms was observed. The simulations showed the inability of reproducing the measured transverse dose spreading with depth in PMMA, corroborating the fact that GEANT4 underestimates the lateral dose spreading. GATE was found to be a very convenient simulation environment to perform this study. A reference physics-list and an optimized parameters-list have been proposed. Satisfactory agreement against depth-dose profiles measurements was obtained. The simulation of transverse profiles using different Monte Carlo codes showed significant deviations. This point
Optimization of super-resolution processing using incomplete image sets in PET imaging.
Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2008-12-01
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of
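The core idea above (combining low-resolution images acquired at shifted points of view into one high-resolution image) can be sketched with the simplest "shift-and-add" scheme. This is an illustration of the general principle only, not the authors' reconstruction pipeline; the toy 4 × 4 "truth" image and half-pixel offsets are invented for the demo.

```python
import numpy as np

def shift_and_add(lowres, offsets, up=2):
    """Naive super-resolution 'shift-and-add': each low-resolution image is
    placed on the up-sampled grid at its known sub-pixel offset and the
    overlapping contributions are averaged."""
    h, w = lowres[0].shape
    acc = np.zeros((h * up, w * up))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(lowres, offsets):
        acc[dy::up, dx::up] += img
        cnt[dy::up, dx::up] += 1
    # Grid positions no low-res image covers stay zero (cnt clamped to 1).
    return acc / np.maximum(cnt, 1)

# A complete 2 x 2 set of quarter-resolution views of a ramp image.
truth = np.arange(16.0).reshape(4, 4)
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
views = [truth[dy::2, dx::2] for dy, dx in offsets]
sr = shift_and_add(views, offsets, up=2)
```

With the complete set (analogous to CSR above), every high-resolution pixel is covered; dropping views to a subset (the ISR-1/ISR-2 idea) leaves uncovered grid positions that a practical method must interpolate, which is where the contrast/SNR trade-off investigated in the paper arises.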
Optimally setting up directed searches for continuous gravitational waves in Advanced LIGO O1 data
Ming, Jing; Papa, Maria Alessandra; Krishnan, Badri; Prix, Reinhard; Beer, Christian; Zhu, Sylvia J.; Eggenstein, Heinz-Bernd; Bock, Oliver; Machenschalk, Bernd
2018-02-01
In this paper we design a search for continuous gravitational waves from three supernova remnants: Vela Jr., Cassiopeia A (Cas A) and G347.3. These systems might harbor rapidly rotating neutron stars emitting quasiperiodic gravitational radiation detectable by the advanced LIGO detectors. Our search is designed to use the volunteer computing project Einstein@Home for a few months and assumes the sensitivity and duty cycles of the advanced LIGO detectors during their first science run. For all three supernova remnants, the sky positions of their central compact objects are well known but the frequency and spin-down rates of the neutron stars are unknown which makes the searches computationally limited. In a previous paper we have proposed a general framework for deciding on what target we should spend computational resources and in what proportion, what frequency and spin-down ranges we should search for every target, and with what search setup. Here we further expand this framework and apply it to design a search directed at detecting continuous gravitational wave signals from the most promising three supernova remnants identified as such in the previous work. Our optimization procedure yields broad frequency and spin-down searches for all three objects, at an unprecedented level of sensitivity: The smallest detectable gravitational wave strain h0 for Cas A is expected to be 2 times smaller than the most sensitive upper limits published to date, and our proposed search, which was set up and ran on the volunteer computing project Einstein@Home, covers a much larger frequency range.
A two-level strategy to realize life-cycle production optimization in an operational setting
Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.
2012-01-01
We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles
A two-level strategy to realize life-cycle production optimization in an operational setting
Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.
2013-01-01
We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles
Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias
2007-11-01
The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.
International Nuclear Information System (INIS)
Wang, C.S.; Freeman, A.J.
1979-01-01
We present the self-consistent numerical-basis-set linear combination of atomic orbitals (LCAO) discrete variational method for treating the electronic structure of thin films. As in the case of bulk solids, this method provides for thin films accurate solutions of the one-particle local density equations with a non-muffin-tin potential. Hamiltonian and overlap matrix elements are evaluated accurately by means of a three-dimensional numerical Diophantine integration scheme. Application of this method is made to the self-consistent solution of one-, three-, and five-layer Ni(001) unsupported films. The LCAO Bloch basis set consists of valence orbitals (3d, 4s, and 4p states for transition metals) orthogonalized to the frozen-core wave functions. The self-consistent potential is obtained iteratively within the superposition of overlapping spherical atomic charge density model with the atomic configurations treated as adjustable parameters. Thus the crystal Coulomb potential is constructed as a superposition of overlapping spherically symmetric atomic potentials and, correspondingly, the local density Kohn-Sham (α = 2/3) potential is determined from a superposition of atomic charge densities. At each iteration in the self-consistency procedure, the crystal charge density is evaluated using a sampling of 15 independent k points in (1/8)th of the irreducible two-dimensional Brillouin zone. The total density of states (DOS) and projected local DOS (by layer plane) are calculated using an analytic linear energy triangle method (presented as an Appendix) generalized from the tetrahedron scheme for bulk systems. Distinct differences are obtained between the surface and central plane local DOS. The central plane DOS is found to converge rapidly to the DOS of bulk paramagnetic Ni obtained by Wang and Callaway. Only a very small surplus charge (0.03 electron/atom) is found on the surface planes, in agreement with jellium model calculations
Simulation of neuro-fuzzy model for optimization of combine header setting
Directory of Open Access Journals (Sweden)
S Zareei
2016-09-01
of reel tine bar from cutter bar and vertical distance of reel tine bar from cutter bar could be recommended so as to minimize header loss. Conclusions In the final step, the designed controller was simulated in SIMULINK. The controller can change the settings of the header components according to their impact on gathering loss; at each step it compares the gathering loss with the optimal value and, if the loss exceeds the optimum, changes the settings again. The simulation results were judged satisfactory.
International Nuclear Information System (INIS)
Ma, Huan; Si, Fengqi; Kong, Yu; Zhu, Kangping; Yan, Wensheng
2017-01-01
Highlights: • Aerodynamic field around dry cooling tower is presented with numerical model. • Performances of cooling deltas are figured out by air inflow velocity analysis. • Setting angles of wind-break walls are optimized to improve cooling performance. • Optimized walls can reduce the interference on air inflow at low wind speeds. • Optimized walls create stronger outside secondary flow at high wind speeds. - Abstract: To achieve a larger cooling performance enhancement for a natural draft dry cooling tower with vertical cooling deltas under crosswind, the setting angles of wind-break walls were optimized. Considering the specific structure of each cooling delta, an efficient numerical model was established and validated against some published results. Aerodynamic fields around the cooling deltas under various crosswind speeds were presented, and the outlet water temperatures of the two columns of each cooling delta were exported as well. It was found that for each cooling delta there was a difference in cooling performance between the two columns, which is closely related to the characteristics of the main airflow outside the tower. Using the present model, air inflow deviation angles at the cooling deltas’ inlet were calculated, and the effects of air inflow deviation on the outlet water temperatures of the two columns of the corresponding cooling delta were explained in detail. Subsequently, at the cooling deltas’ inlet along the radial direction of the tower, the setting angles of wind-break walls were optimized to equal the air inflow deviation angles when no airflow separation appeared outside the tower, and to zero when outside airflow separation occurred. In addition, wind-break walls with optimized setting angles were verified to be extremely effective compared to the previous radial walls.
Alhamwi, Alaa; Kleinhans, David; Weitemeyer, Stefan; Vogt, Thomas
2014-12-01
Renewable energy sources are gaining importance in the Middle East and North Africa (MENA) region. The purpose of this study is to quantify the optimal mix of renewable power generation in the MENA region, taking Morocco as a case study. Based on hourly meteorological data and load data, a 100% solar-plus-wind-only scenario for Morocco is investigated. For the optimal mix analyses, a mismatch energy modelling approach is adopted with the objective of minimising the required storage capacities. For a hypothetical Moroccan energy supply system entirely based on renewable energy sources, our results show that the minimum storage capacity is achieved at a share of 63% solar and 37% wind power generation.
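The mismatch-energy approach described above can be sketched as a one-dimensional scan over the solar share: for each share, integrate the generation-minus-load mismatch and size the storage as the range of the running balance. The hourly profiles below are synthetic placeholders for the measured Moroccan data, and the storage metric is a common simplification (lossless storage, no curtailment), so the resulting optimum is illustrative only.

```python
import numpy as np

def storage_needed(solar, wind, load, a):
    """Required storage for solar share `a`: generation is normalised so its
    total matches total demand, the mismatch is integrated over time, and the
    storage size is the range of that running balance."""
    gen = a * solar / solar.mean() + (1 - a) * wind / wind.mean()
    gen *= load.mean()  # scale total generation to total demand
    balance = np.cumsum(gen - load)
    return balance.max() - balance.min()

# Synthetic hourly profiles over one year (placeholders for real data).
rng = np.random.default_rng(1)
t = np.arange(8760)
solar = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + 0.01  # daytime-only
wind = 1.0 + 0.5 * np.sin(2 * np.pi * t / (24 * 5)) + 0.2 * rng.random(8760)
load = 1.0 + 0.3 * np.sin(2 * np.pi * (t - 18) / 24)

shares = np.linspace(0, 1, 101)
best = shares[np.argmin([storage_needed(solar, wind, load, a) for a in shares])]
```

With real Moroccan profiles, the paper reports the minimum at roughly 63% solar / 37% wind; with the synthetic profiles here the optimum will land elsewhere, but the shape of the scan (a single minimum between the all-wind and all-solar extremes) is the same.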
Chapman, Larry S; Pelletier, Kenneth R
2004-01-01
This paper provides an overview of a population health management (PHM) approach to the creation of optimal healing environments (OHEs) in worksite and corporate settings. It presents a framework for consideration as the context for potential research projects to examine the health, well-being, and economic effects of a set of newer "virtual" prevention interventions operating in an integrated manner in worksite settings. The main topics discussed are the fundamentals of PHM, with basic terminology and core principles, a description of PHM core technology, and the implications of a PHM approach to creating OHEs.
Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui
2016-01-01
Phellinus is a genus of fungi known as a key component of drugs used to prevent cancer. To find optimized culture conditions for Phellinus production in the laboratory, numerous single-factor experiments were carried out, generating a large amount of experimental data. In this work, we use the collected experimental data for regression analysis, yielding a mathematical model that predicts Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values are in accordance with biological experimental results, indicating that our method has good predictive power for culture condition optimization. PMID:27610365
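The optimization loop described above (a genetic algorithm searching the seven culture parameters against a fitted regression model) can be sketched as follows. The parameter bounds and the `fitness` function are invented stand-ins: the real method would evaluate the regression model fitted to the single-factor experimental data, and the paper's "gene-set" encoding is replaced here by a plain real-valued chromosome.

```python
import random

# Bounds for the seven culture parameters named in the abstract (inoculum size,
# pH, initial liquid volume, temperature, seed age, fermentation time, rotation
# speed); the ranges are illustrative, not the paper's.
BOUNDS = [(1, 10), (4.0, 8.0), (50, 200), (20, 35), (3, 10), (3, 12), (100, 220)]

def fitness(ind):
    """Stand-in for the fitted yield model: a smooth peak (value 0) at the
    midpoint of each parameter range, negative elsewhere."""
    return -sum(((x - (lo + hi) / 2) / (hi - lo)) ** 2
                for x, (lo, hi) in zip(ind, BOUNDS))

def evolve(pop_size=40, gens=60, mut=0.2, rng=random.Random(0)):
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [ai if rng.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]   # uniform crossover
            for i, (lo, hi) in enumerate(BOUNDS):
                if rng.random() < mut:          # bounded Gaussian mutation
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping the stand-in `fitness` for the regression model recovers the paper's workflow: the GA merely needs a callable mapping the seven parameters to predicted yield.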
Directory of Open Access Journals (Sweden)
A. Riccio
2007-12-01
Full Text Available In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides.
We first introduce the theoretical basis (with its roots in Bayes' theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b).
This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
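The "median model" heuristic referenced above is simple to state: at every grid point, take the median of the ensemble members' predictions, which is robust to a single badly wrong model. The sketch below uses invented concentration fields as stand-ins for the dispersion-model outputs analysed in the paper.

```python
import numpy as np

# Hypothetical concentration fields from four dispersion models on a common
# grid: a 'truth' field perturbed by multiplicative lognormal model error.
rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 1.0, size=(10, 10))
models = [truth * rng.lognormal(0.0, 0.3, size=truth.shape) for _ in range(4)]

# The 'median model': the pointwise median across ensemble members.
stack = np.stack(models)
median_model = np.median(stack, axis=0)

err_median = np.abs(median_model - truth).mean()
err_single = np.abs(models[0] - truth).mean()
```

The Bayesian analysis in the paper goes further (weighting members and quantifying uncertainty), but the pointwise median is the baseline the "median model" name refers to.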
Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A
2012-09-01
Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5 %. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates for the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal mol(-1)) was found for SiH(4).
Energy Technology Data Exchange (ETDEWEB)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, 9700 RB (Netherlands)
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom-made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis
Energy Technology Data Exchange (ETDEWEB)
Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi [State Key Laboratory of Urban Water Resource and Environment, Harbin Institute of Technology, 202 Haihe Road, Nangang District, Harbin, Heilongjiang 150090 (China)
2010-10-15
The objective of conducting experiments in a laboratory is to gain data that helps in designing and operating large-scale biological processes. However, the scale-up and design of industrial-scale biohydrogen production reactors are still uncertain. In this paper, an established and proven Eulerian-Eulerian computational fluid dynamics (CFD) model was employed to perform hydrodynamics assessments of an industrial-scale continuous stirred-tank reactor (CSTR) for biohydrogen production. The merits of the laboratory-scale CSTR and industrial-scale CSTR were compared and analyzed on the basis of CFD simulation. The outcomes demonstrated that there are many parameters that need to be optimized in the industrial-scale reactor, such as the velocity field and stagnation zone. According to the results of the hydrodynamics evaluation, the structure of the industrial-scale CSTR was optimized, and the results are positive in terms of advancing the industrialization of biohydrogen production. (author)
The Bayesian statistical decision theory applied to the optimization of generating set maintenance
International Nuclear Information System (INIS)
Procaccia, H.; Cordier, R.; Muller, S.
1994-11-01
The difficulty in RCM methodology is the allocation of a new periodicity of preventive maintenance for a piece of equipment when a critical failure has been identified: until now this new allocation has been based on the engineer's judgment, and one must wait for a full cycle of feedback experience before validating it. Statistical decision theory could be a more rational alternative for the optimization of preventive maintenance periodicity. This methodology has been applied to the inspection and maintenance optimization of the cylinders of diesel generator engines in 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs
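The decision-theoretic trade-off behind such an optimization can be illustrated with the classical age-replacement model: balance the cost of frequent preventive maintenance against the expected cost of in-service failure, and pick the interval minimizing expected cost per unit time. A minimal sketch with invented Weibull parameters and relative costs (not the diesel engine data of the paper):

```python
import numpy as np

# Assumed (illustrative) failure model and costs -- not the paper's data.
beta, eta = 2.5, 10_000.0            # Weibull shape and scale, hours
c_preventive, c_failure = 1.0, 10.0  # relative costs of PM vs failure

def reliability(t):
    """Weibull survival function R(t)."""
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Expected cost per unit time under age replacement at interval T:
    (cp*R(T) + cf*(1 - R(T))) / E[min(failure time, T)]."""
    t = np.linspace(0.0, T, n)
    r = reliability(t)
    mean_cycle = np.sum((r[1:] + r[:-1]) * np.diff(t)) / 2.0  # trapezoid rule
    return (c_preventive * r[-1] + c_failure * (1.0 - r[-1])) / mean_cycle

# Grid search for the periodicity that minimizes the cost rate.
grid = np.linspace(500.0, 20_000.0, 400)
rates = [cost_rate(T) for T in grid]
best_T = grid[int(np.argmin(rates))]
print(f"cost-optimal preventive interval ~ {best_T:.0f} h")
```

A full Bayesian treatment would additionally place a prior on the Weibull parameters and update it with feedback experience; the grid search above only shows the cost-rate trade-off being optimized.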
Directory of Open Access Journals (Sweden)
V. A. Sednin
2017-01-01
On the basis of the gas compressor units of compressor plants of a main gas pipeline, mathematical models at the macro level were generated for the analysis and parametric optimization of combined energy-and-technology units. In continuation of the study, these models were applied to obtain regression dependencies. For this purpose a numerical experiment was designed using the mathematical tools of regression analysis, which assume that the test results represent independent, normally distributed random variables with approximately equal variance. Herewith we study the dependence of the optimization criterion on the values of the control parameters (factors). Planning, conducting and processing the results of the experiment proceeded in the following sequence: choice of the optimization criteria, selection of control parameters (factors), encoding of factors, compilation of the experiment matrix, assessment of the significance of the regression coefficients, and testing of the adequacy of the model and the reproducibility of the experiments. The electricity capacity and efficiency of the combined energy-technology units were adopted as the optimization criteria. As control parameters for the installation with a gas-expansion-and-generator machine, the temperature of the fuel gas before the expander, the pressure of the fuel gas after the expander and the temperature of the air supplied to the compressor of the engine were adopted, while for the steam turbine the adopted control parameters were the compression in the compressor of the engine, the steam consumption for the technology and the temperature of the air supplied to the compressor of the engine. The application of the outlined methodological approach makes it possible to obtain simple polynomial dependencies, which significantly simplify the procedures of analysis, parametric optimization and evaluation of efficiency in feasibility studies of the options for construction of the energy
Social welfare and the Affordable Care Act: is it ever optimal to set aside comparative cost?
Mortimer, Duncan; Peacock, Stuart
2012-10-01
The creation of the Patient-Centered Outcomes Research Institute (PCORI) under the Affordable Care Act has set comparative effectiveness research (CER) at centre stage of US health care reform. Comparative cost analysis has remained marginalised, and it now appears unlikely that the PCORI will require comparative cost data to be collected as an essential component of CER. In this paper, we review the literature to identify ethical and distributional objectives that might motivate calls to set priorities without regard to comparative cost. We then present argument and evidence to consider whether there is any plausible set of objectives and constraints against which priorities can be set without reference to comparative cost. We conclude that setting aside comparative cost, even after accounting for ethical and distributional constraints, would truly be to act as if money were no object. Copyright © 2012 Elsevier Ltd. All rights reserved.
Tauber, Sean; Navarro, Daniel J; Perfors, Amy; Steyvers, Mark
2017-07-01
Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Jacob, S A; Ng, W L; Do, V
2015-02-01
There is wide variation in the proportion of newly diagnosed cancer patients who receive chemotherapy, indicating the need for a benchmark rate of chemotherapy utilisation. This study describes an evidence-based model that estimates the proportion of new cancer patients in whom chemotherapy is indicated at least once (defined as the optimal chemotherapy utilisation rate). The optimal chemotherapy utilisation rate can act as a benchmark for measuring and improving the quality of care. Models of optimal chemotherapy utilisation were constructed for each cancer site based on indications for chemotherapy identified from evidence-based treatment guidelines. Data on the proportion of patient- and tumour-related attributes for which chemotherapy was indicated were obtained, using population-based data where possible. Treatment indications and epidemiological data were merged to calculate the optimal chemotherapy utilisation rate. Monte Carlo simulations and sensitivity analyses were used to assess the effect of controversial chemotherapy indications and variations in epidemiological data on our model. Chemotherapy is indicated at least once in 49.1% (95% confidence interval 48.8-49.6%) of all new cancer patients in Australia. The optimal chemotherapy utilisation rates for individual tumour sites ranged from a low of 13% in thyroid cancers to a high of 94% in myeloma. The optimal chemotherapy utilisation rate can serve as a benchmark for planning chemotherapy services on a population basis. The model can be used to evaluate service delivery by comparing the benchmark rate with patterns of care data. The overall estimate for other countries can be obtained by substituting the relevant distribution of cancer types. It can also be used to predict future chemotherapy workload and can be easily modified to take into account future changes in cancer incidence, presentation stage or chemotherapy indications. Copyright © 2014 The Royal College of Radiologists.
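The model's core arithmetic is a sum over guideline-derived indication branches, with Monte Carlo perturbation of the epidemiological inputs for sensitivity analysis. A toy sketch with hypothetical proportions for a single cancer site (not the paper's site-specific data):

```python
import random

# Hypothetical, simplified indication tree for ONE cancer site (the paper
# builds such trees per site from treatment guidelines): each branch is
# (proportion of new patients, chemotherapy indicated?).
branches = [
    (0.30, True),   # e.g. advanced stage -> chemotherapy indicated
    (0.25, True),   # e.g. earlier stage with high-risk features
    (0.25, False),  # e.g. early stage, low risk
    (0.20, False),  # e.g. unfit for chemotherapy
]

def utilisation_rate(branches):
    """Optimal utilisation rate: share of patients with >= 1 indication."""
    return sum(p for p, indicated in branches if indicated)

def perturbed_rate(branches, rng, rel_sd=0.1):
    """Monte Carlo sensitivity: perturb the epidemiological proportions
    and renormalise, mimicking the paper's uncertainty analysis."""
    weights = [max(1e-9, rng.gauss(p, rel_sd * p)) for p, _ in branches]
    total = sum(weights)
    return sum(w / total for w, (_, ind) in zip(weights, branches) if ind)

rng = random.Random(42)
samples = [perturbed_rate(branches, rng) for _ in range(2000)]
mean_rate = sum(samples) / len(samples)
print(f"point estimate {utilisation_rate(branches):.2f}, "
      f"Monte Carlo mean {mean_rate:.2f}")
```

A population-level benchmark would then weight each site's rate by its share of cancer incidence, which is how the overall estimate transfers to other countries.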
Tańczuk, Mariusz; Radziewicz, Wojciech; Olszewski, Eligiusz; Skorek, Janusz
2017-10-01
District heating technologies should be efficient, effective and environmentally friendly. The majority of the communal heating systems in Poland produce district hot water in coal-fired boilers. A large number of them are considerably worn out, low-efficient in the summer time and will not comply with forthcoming regulations. One of the possible solutions for such plants is repowering with new CHP systems or new boilers fuelled with fuels alternative to coal. An optimisation analysis of the target configuration of a municipal heat generating plant is presented in the paper. The work concerns repowering the existing conventional heat generating plant according to eight different scenarios of plant configuration meeting the technical and environmental requirements forecast for the year 2035. The maximum demand for heat of the system supplied by the plant is 185 MW. Taking into account different technical configurations on one side, and different energy and fuel prices on the other, a comparative cost-benefit analysis of the assumed scenarios has been made. The basic economic index NPV (net present value) has been derived for each analysed scenario and the results have been compared and discussed. It was concluded that the scenario with CHP based on ICE engines is optimal.
2016-10-13
Optimization of a solid-state electron spin qubit using gate set tomography. New J. Phys. 18 (2016) 103018; doi:10.1088/1367-2630/18/10/103018. [Fragmentary record: the surviving text notes error mechanisms addressed when the qubit is used within a fault-tolerant quantum computation scheme, and that the experimental data files, along with the Python notebook used for analysis, are supplied in the supplementary material.]
Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel
Energy Technology Data Exchange (ETDEWEB)
Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk
2015-04-01
Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle. These variables are the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2016-01-01
We consider a class of algorithms for multi-objective optimization - Pareto-approximation algorithms, which presuppose the preliminary construction of a finite-dimensional approximation of the Pareto set, and thereby also of the Pareto front, of the problem. The article gives an overview of population and non-population Pareto-approximation algorithms, identifies their strengths and weaknesses, and presents the canonical "predator-prey" algorithm, showing its shortcomings. We offer a number of modifications of the canonical "predator-prey" algorithm with the aim of overcoming its drawbacks, and present the results of a broad study of the efficiency of these modifications. A peculiarity of the study is the use of Pareto-approximation quality indicators, which previous publications have not used. In addition, we present the results of the meta-optimization of the modified algorithm, i.e., determining the optimal values of some of its free parameters. The study of the efficiency of the modified "predator-prey" algorithm has shown that the proposed modifications improve the following indicators of the basic algorithm: the cardinality of the set of archive solutions, the uniformity of the archive solutions, and the computation time. By and large, the research results have shown that the modified and meta-optimized algorithm achieves exactly the same approximation as the basic algorithm, but with the number of preys one order of magnitude smaller. Computational costs are reduced proportionally.
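The finite Pareto-set approximation that such algorithms maintain rests on the dominance relation: a point belongs to the archive only if no other point dominates it. A minimal, self-contained sketch for bi-objective minimisation with invented points (not tied to the predator-prey mechanics themselves):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_archive(points):
    """Non-dominated subset: the kind of finite Pareto-set approximation
    the surveyed algorithms maintain as their archive."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 2), (3, 1), (4, 4), (2, 3)]
print(pareto_archive(pts))  # [(1, 5), (2, 2), (3, 1)]
```

Quality indicators such as archive cardinality and uniformity, mentioned in the abstract, are then computed over exactly this kind of non-dominated archive.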
Energy Technology Data Exchange (ETDEWEB)
Rodriguez-Bautista, Mariano; Díaz-García, Cecilia; Navarrete-López, Alejandra M.; Vargas, Rubicelia; Garza, Jorge, E-mail: jgo@xanum.uam.mx [Departamento de Química, División de Ciencias Básicas e Ingeniería, Universidad Autónoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa C. P. 09340, México D. F., México (Mexico)
2015-07-21
In this report, we use a new basis set for Hartree-Fock calculations on many-electron atoms confined by soft walls. One- and two-electron integrals were programmed in a code based on parallel programming techniques. The results obtained with this proposal for the hydrogen and helium atoms were contrasted with other proposals for studying one- and two-electron confined atoms, where we have reproduced or improved upon previously reported results. Usually, an atom enclosed by hard walls has been used as a model to study confinement effects on orbital energies; the main conclusion reached with this model is that orbital energies always go up when the confinement radius is reduced. However, such an observation is not necessarily valid for atoms confined by penetrable walls. The main reason behind this result is that for atoms with large polarizability, like beryllium or potassium, the external orbitals are delocalized when the confinement is imposed and consequently the internal orbitals behave as if they were in an ionized atom. Naturally, the shell structure of these atoms is modified drastically when they are confined. Delocalization was an argument proposed for atoms confined by hard walls, but it was never verified. In this work, the confinement imposed by soft walls allows us to analyze the delocalization concept in many-electron atoms.
Richard, Ryan M.
2016-01-05
© 2016 American Chemical Society. In designing organic materials for electronics applications, particularly for organic photovoltaics (OPV), the ionization potential (IP) of the donor and the electron affinity (EA) of the acceptor play key roles. This makes OPV design an appealing application for computational chemistry, since IPs and EAs are readily calculable with most electronic structure methods. Unfortunately, reliable, high-accuracy wave function methods, such as coupled cluster theory with single, double, and perturbative triples [CCSD(T)] in the complete basis set (CBS) limit, are too expensive for routine application to this problem for any but the smallest of systems. One solution is to calibrate approximate, less computationally expensive methods against a database of high-accuracy IP/EA values; however, to our knowledge, no such database exists for systems related to OPV design. The present work is the first of a multipart study whose overarching goal is to determine which computational methods can be used to reliably compute IPs and EAs of electron acceptors. This part introduces a database of 24 known organic electron acceptors and provides high-accuracy vertical IP and EA values expected to be within ±0.03 eV of the true non-relativistic, vertical CCSD(T)/CBS limit. Convergence of the IP and EA values toward the CBS limit is studied systematically for the Hartree-Fock, MP2 correlation, and beyond-MP2 coupled cluster contributions to the focal point estimates.
Optimal Switching Control of Burner Setting for a Compact Marine Boiler Design
DEFF Research Database (Denmark)
Solberg, Brian; Andersen, Palle; Maciejowski, Jan M.
2010-01-01
This paper discusses optimal control strategies for switching between different burner modes in a novel compact marine boiler design. The ideal behaviour is defined in a performance index the minimisation of which defines an ideal trade-off between deviations in boiler pressure and water level...... approach is based on a generalisation of hysteresis control. The strategies are verified on a simulation model of the compact marine boiler for control of low/high burner load switches. ...
Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.
Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan
2018-04-01
The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm is degenerated into a special scenario of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have
International Nuclear Information System (INIS)
Won, Byung Hee; Kim, Kyung O; Kim, Jong Kyung; Kim, Soon Young
2012-01-01
The Core Protection Calculator System (CPCS) is an automated device adopted to monitor safety parameters such as the Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD) during normal operation. One function of the CPCS is to predict the axial power distributions using function sets in a cubic spline method. Another function is to impose a penalty when the distribution estimated by the spline method disagrees with the data embedded in the CPCS (i.e., by over 8%). In the conventional CPCS, restricted function sets are used to synthesize the axial power shape, which can occasionally produce a disagreement between the synthesized data and the embedded data. For this reason, studies on improving the power distribution synthesis in the CPCS have been conducted in many countries. In this study, many function sets (more than 18,000 types) differing from the conventional ones were evaluated for each power shape. Matlab code was used for calculating and arranging the numerous cases of function sets. Their synthesis performance was also evaluated through the error between the conventional data and the results calculated with the new function sets.
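The synthesis-and-penalty mechanism described above can be illustrated schematically: fit a small set of smooth basis functions to a few detector signals, reconstruct the axial shape, and flag a penalty when the reconstruction disagrees with a reference shape by more than 8%. In this hedged sketch, sine modes stand in for the CPCS cubic-spline function sets, and all shapes and detector positions are invented:

```python
import numpy as np

z = np.linspace(0.0, 1.0, 41)  # normalised core height
# Assumed "embedded" reference axial power shape (illustrative only).
reference = np.sin(np.pi * z) + 0.1 * np.sin(2 * np.pi * z)

detectors_z = np.array([0.125, 0.375, 0.625, 0.875])  # 4 detector levels
signals = np.interp(detectors_z, z, reference)        # measured samples

def design(zz, n_modes=4):
    """Basis matrix: low-order sine modes standing in for the CPCS
    cubic-spline function sets."""
    return np.column_stack([np.sin(k * np.pi * zz) for k in range(1, n_modes + 1)])

# Least-squares fit of basis coefficients to the detector signals,
# then synthesis of the full axial shape.
coeffs, *_ = np.linalg.lstsq(design(detectors_z), signals, rcond=None)
synthesized = design(z) @ coeffs

# CPCS-style check: penalise disagreement above the 8% threshold.
max_rel_error = np.max(np.abs(synthesized - reference)) / np.max(reference)
penalty = bool(max_rel_error > 0.08)
print(f"max relative error = {max_rel_error:.3%}, penalty = {penalty}")
```

With a richer function set the fit matches the reference and no penalty fires; restricting the modes (e.g. `n_modes=2` for a skewed shape) is the kind of mismatch that triggers the penalty in the conventional system.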
Directory of Open Access Journals (Sweden)
A. P. Tsimpidi
2010-01-01
New primary and secondary organic aerosol modules have been added to PMCAMx, a three-dimensional chemical transport model (CTM), for use with the SAPRC99 chemistry mechanism, based on recent smog chamber studies. The new modelling framework is based on the volatility basis-set approach: both primary and secondary organic components are assumed to be semivolatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. This new framework, with the use of the new volatility basis parameters for low-NO_{x} and high-NO_{x} conditions, tends to predict 4–6 times higher anthropogenic SOA concentrations than those predicted with the older generation of models. The resulting PMCAMx-2008 was applied in the Mexico City Metropolitan Area (MCMA) for approximately a week during April 2003, a period of very low regional biomass burning impact. The emission inventory, which uses the MCMA 2004 official inventory as a starting point, is modified so that the primary organic aerosol (POA) emissions are distributed by volatility based on dilution experiments. The predicted organic aerosol (OA) concentrations peak in the center of Mexico City, reaching values above 40 μg m^{−3}. The model predictions are compared with the results of the Positive Matrix Factorization (PMF) analysis of the Aerosol Mass Spectrometry (AMS) observations. The model reproduces both the Hydrocarbon-like Organic Aerosol (HOA) and Oxygenated Organic Aerosol (OOA) concentrations and diurnal profiles. The small OA underprediction during the rush-hour periods and the overprediction in the afternoon suggest potential improvements to the description of fresh primary organic emissions and the formation of the oxygenated organic aerosols, respectively, although they may also be due to errors in the simulation of dispersion and vertical mixing. However, the AMS OOA data are not specific enough to prove that the model reproduces the organic aerosol
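The volatility basis-set bookkeeping can be sketched in a few lines: organic mass sits in logarithmically spaced saturation-concentration (C*) bins, each bin's particle fraction depends on the total organic aerosol mass, and the total is solved for self-consistently. The bin values below are illustrative only, not the PMCAMx-2008 inventory:

```python
import numpy as np

# Logarithmically spaced volatility bins (saturation concentration C*)
# and assumed total (gas + particle) organic mass per bin, ug/m3.
c_star = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])
totals = np.array([0.2, 0.4, 0.6, 1.0, 2.0, 4.0])

# Absorptive-partitioning closure: the particle fraction of bin i is
# 1 / (1 + C*_i / C_OA), and C_OA is the sum of particle-phase mass,
# so we iterate to a self-consistent C_OA.
c_oa = 1.0  # initial guess, ug/m3
for _ in range(200):
    particle_fraction = 1.0 / (1.0 + c_star / c_oa)
    c_oa_new = float(np.sum(totals * particle_fraction))
    if abs(c_oa_new - c_oa) < 1e-12:
        break
    c_oa = c_oa_new

print(f"equilibrium organic aerosol mass ~ {c_oa:.3f} ug/m3")
```

In the full model each bin also ages photochemically (moving mass toward lower volatility), which is what makes both POA and SOA "semivolatile and photochemically reactive" in the abstract's terms.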
Alcaraz, Kassandra I.; Kreuter, Matthew W.; Bryan, Rebecca P.
2009-01-01
Objective: Rarely have Geographic Information Systems (GIS) been used to inform community-based outreach and intervention planning. This study sought to identify the community settings most likely to reach individuals from geographically localized areas. Method: An observational study conducted in an urban city in Missouri during 2003-2007 placed computerized breast cancer education kiosks in seven types of community settings: beauty salons, churches, health fairs, neighborhood health centers, Laundromats, public libraries and social service agencies. We used GIS to measure the distance between kiosk users' (n=7,297) home ZIP codes and the location where they used the kiosk. Mean distances were compared across settings. Results: The mean distance between individuals' home ZIP codes and the location where they used the kiosk varied significantly across settings: it was smallest among kiosk users at Laundromats (2.3 miles) and public libraries (2.8 miles) and greatest among kiosk users at health fairs (7.6 miles). Conclusion: Some community settings are more likely than others to reach highly localized populations. A better understanding of how and where to reach specific populations can complement the progress already being made in identifying populations at increased disease risk. PMID:19422844
Training a whole-book LSTM-based recognizer with an optimal training set
Soheili, Mohammad Reza; Yousefi, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2018-04-01
Despite recent progress in OCR technologies, whole-book recognition is still a challenging task, in particular in the case of old and historical books, where unknown font faces or the low quality of paper and print contribute to the challenge. Therefore, pre-trained recognizers and generic methods do not usually perform up to the required standards, and performance usually degrades for larger-scale recognition tasks, such as that of a book. Such reportedly low-error-rate methods turn out to require a great deal of manual correction. Generally, such methodologies do not make effective use of concepts such as redundancy in whole-book recognition. In this work, we propose to train Long Short-Term Memory (LSTM) networks on a minimal training set obtained from the book to be recognized. We show that by clustering all the sub-words in the book and using the sub-word cluster centers as the training set for the LSTM network, we can train models that outperform any identical network trained on randomly selected pages of the book. In our experiments, we also show that although the sub-word cluster centers are equivalent to about 8 pages of text for a 101-page book, an LSTM network trained on such a set performs competitively compared with an identical network trained on a set of 60 randomly selected pages of the book.
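The training-set selection step can be sketched independently of the LSTM itself: cluster all sub-word descriptors, then keep the one real sub-word nearest each cluster center as a representative training sample. The features below are toy random vectors standing in for sub-word images, and the tiny k-means is a stand-in for whatever clustering the pipeline actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for sub-word image descriptors (e.g. flattened bitmaps):
# three underlying shapes with many noisy repetitions each -- the kind of
# redundancy a whole book provides.
prototypes = rng.normal(size=(3, 16))
subwords = np.vstack([p + 0.05 * rng.normal(size=(40, 16)) for p in prototypes])

def kmeans(x, k, iters=50):
    """Minimal Lloyd's k-means; keeps a center in place if its cluster
    empties out."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

centers, labels = kmeans(subwords, k=3)

# Training set = the actual sub-word nearest each cluster center,
# instead of randomly selected pages.
train_idx = sorted(int(((subwords - c) ** 2).sum(-1).argmin()) for c in centers)
print(f"selected {len(train_idx)} of {len(subwords)} sub-words for training")
```

The point of the selection is coverage: every recurring shape contributes one representative, so the compact training set spans the book's glyph variety the way 8 chosen pages can rival 60 random ones.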
Setting Optimal Bounds on Risk in Asset Allocation - a Convex Program
Directory of Open Access Journals (Sweden)
James E. Falk
2002-10-01
The 'Portfolio Selection Problem' is traditionally viewed as selecting a mix of investment opportunities that maximizes the expected return subject to a bound on risk. However, in reality, portfolios are made up of a few 'asset classes' that consist of similar opportunities. The asset classes are managed by individual 'sub-managers', under guidelines set by an overall portfolio manager. Once a benchmark (the 'strategic' allocation) has been set, an overall manager may choose to allow the sub-managers some latitude in which opportunities make up the classes. He may choose some overall bound on risk (as measured by the variance) and wish to set bounds that constrain the sub-managers. Mathematically, we show that the problem is equivalent to finding a hyper-rectangle of maximal volume within an ellipsoid. It is a convex program, albeit with potentially a large number of constraints. We suggest a cutting plane algorithm to solve the problem and include computational results on a set of randomly generated problems as well as a real-world problem taken from the literature.
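For intuition, the axis-aligned special case of the maximal-volume inscribed hyper-rectangle has a closed form: for the ellipsoid sum_i (x_i/a_i)^2 <= 1, the optimal box corner is at x_i = a_i/sqrt(n). A small sketch verifying this (the paper's convex program and cutting-plane method handle the general, constrained case, which this does not attempt):

```python
import math

def max_volume_box(semi_axes):
    """Maximal-volume axis-aligned box inscribed in the ellipsoid
    sum_i (x_i / a_i)^2 <= 1: corner at x_i = a_i / sqrt(n), a standard
    result from maximising log-volume with a Lagrange multiplier."""
    n = len(semi_axes)
    corner = [a / math.sqrt(n) for a in semi_axes]
    volume = math.prod(2.0 * c for c in corner)  # full side lengths are 2*x_i
    return corner, volume

semi_axes = [3.0, 2.0, 1.0]  # illustrative risk "semi-axes"
corner, vol = max_volume_box(semi_axes)

# The optimal corner lies exactly on the ellipsoid boundary:
boundary = sum((c / a) ** 2 for c, a in zip(corner, semi_axes))
print(f"corner={corner}, volume={vol:.4f}, boundary check={boundary:.6f}")
```

In the portfolio reading, each semi-axis reflects how much variance headroom a sub-manager's class has, and the box half-widths are the symmetric latitude bounds handed to the sub-managers.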
An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets
DEFF Research Database (Denmark)
Nielsen, Michael Bang; Museth, Ken
2004-01-01
…enforced by the convex boundaries of an underlying Cartesian computational grid. Here we present a novel, very memory-efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid-independent high-resolution level sets. The key features of our new data structure are…
Role of pharmacists in optimizing the use of anticancer drugs in the clinical setting
Directory of Open Access Journals (Sweden)
Ma CSJ
2014-02-01
Full Text Available Carolyn SJ Ma Department of Pharmacy Practice, Daniel K. Inouye College of Pharmacy, University of Hawaii at Hilo, Honolulu, HI, USA Abstract: Oncology pharmacists, also known as oncology pharmacy specialists (OPSs), have specialized knowledge of anticancer medications and their role in cancer. As essential members of the interdisciplinary team, OPSs optimize the benefits of drug therapy, help to minimize toxicities, and work with patients on supportive care issues. The OPSs' expanded role as experts in drug therapy extends to seven major key elements of medication management: selection, procurement, storage, preparation/dispensing, prescribing/dosing/transcribing, administration, and monitoring/evaluation/education. As front-line caregivers in hospital, ambulatory care, long-term care facilities, and community specialty pharmacies, the OPS also helps patients in areas of supportive care including nausea and vomiting, hematologic support, nutrition, and infection control. This role helps the patient in the recovery phase between treatment cycles and supports adherence to the chemotherapy treatment schedules essential for optimal treatment and outcome. Keywords: oncology pharmacist, oncology pharmacy specialist, medication management, chemotherapy
Optimizing the Nutritional Support of Adult Patients in the Setting of Cirrhosis.
Perumpail, Brandon J; Li, Andrew A; Cholankeril, George; Kumari, Radhika; Ahmed, Aijaz
2017-10-13
The aim of this work is to develop a pragmatic approach in the assessment and management strategies of patients with cirrhosis in order to optimize the outcomes in this patient population. A systematic review of literature was conducted through 8 July 2017 on the PubMed Database looking for key terms, such as malnutrition, nutrition, assessment, treatment, and cirrhosis. Articles and studies looking at associations between nutrition and cirrhosis were reviewed. An assessment of malnutrition should be conducted in two stages: the first, to identify patients at risk for malnutrition based on the severity of liver disease, and the second, to perform a complete multidisciplinary nutritional evaluation of these patients. Optimal management of malnutrition should focus on meeting recommended daily goals for caloric intake and inclusion of various nutrients in the diet. The nutritional goals should be pursued by encouraging and increasing oral intake or using other measures, such as oral supplementation, enteral nutrition, or parenteral nutrition. Although these strategies to improve nutritional support have been well established, current literature on the topic is limited in scope. Further research should be implemented to test if this enhanced approach is effective.
Optimizing the Nutritional Support of Adult Patients in the Setting of Cirrhosis
Directory of Open Access Journals (Sweden)
Brandon J. Perumpail
2017-10-01
Full Text Available Aim: The aim of this work is to develop a pragmatic approach in the assessment and management strategies of patients with cirrhosis in order to optimize the outcomes in this patient population. Method: A systematic review of literature was conducted through 8 July 2017 on the PubMed Database looking for key terms, such as malnutrition, nutrition, assessment, treatment, and cirrhosis. Articles and studies looking at associations between nutrition and cirrhosis were reviewed. Results: An assessment of malnutrition should be conducted in two stages: the first, to identify patients at risk for malnutrition based on the severity of liver disease, and the second, to perform a complete multidisciplinary nutritional evaluation of these patients. Optimal management of malnutrition should focus on meeting recommended daily goals for caloric intake and inclusion of various nutrients in the diet. The nutritional goals should be pursued by encouraging and increasing oral intake or using other measures, such as oral supplementation, enteral nutrition, or parenteral nutrition. Conclusions: Although these strategies to improve nutritional support have been well established, current literature on the topic is limited in scope. Further research should be implemented to test if this enhanced approach is effective.
Tsimpidi, A. P.; Karydis, V. A.; Pandis, S. N.; Zavala, M.; Lei, W.; Molina, L. T.
2007-12-01
Anthropogenic air pollution is an increasingly serious problem for public health, agriculture, and global climate. Organic material (OM) contributes ~20-50% of the total fine aerosol mass at continental mid-latitudes. Although OM accounts for a large fraction of PM2.5 concentration worldwide, the contributions of primary and secondary organic aerosol have been difficult to quantify. In this study, new primary and secondary organic aerosol modules were added to PMCAMx, a three-dimensional chemical transport model (Gaydos et al., 2007), for use with the SAPRC99 chemistry mechanism (Carter, 2000; ENVIRON, 2006) based on recent smog chamber studies (Robinson et al., 2007). The new modeling framework is based on the volatility basis-set approach (Lane et al., 2007): both primary and secondary organic components are assumed to be semivolatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. The emission inventory, which uses as its starting point the MCMA 2004 official inventory (CAM, 2006), is modified, and the primary organic aerosol (POA) emissions are distributed by volatility based on dilution experiments (Robinson et al., 2007). Sensitivity tests in which POA is considered nonvolatile, and POA and SOA chemically reactive, are also described. In all cases PMCAMx is applied to the Mexico City Metropolitan Area during March 2006. The modeling domain covers a 180x180x6 km region in the MCMA with 3x3 km grid resolution. The model predictions are compared with Aerodyne Aerosol Mass Spectrometer (AMS) observations from the MILAGRO Campaign. References Robinson, A. L.; Donahue, N. M.; Shrivastava, M. K.; Weitkamp, E. A.; Sage, A. M.; Grieshop, A. P.; Lane, T. E.; Pandis, S. N.; Pierce, J. R., 2007. Rethinking organic aerosols: semivolatile emissions and photochemical aging. Science 315, 1259-1262. Gaydos, T. M.; Pinder, R. W.; Koo, B.; Fahey, K. M.; Pandis, S. N., 2007. Development and application of a three-dimensional aerosol
Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi
2015-10-01
One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, although it is a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Q. J. Zhang
2013-06-01
Full Text Available Simulations with the chemistry transport model CHIMERE are compared to measurements performed during the MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation) summer campaign in the Greater Paris region in July 2009. The volatility-basis-set approach (VBS) is implemented into this model, taking into account the volatility of primary organic aerosol (POA) and the chemical aging of semi-volatile organic species. Organic aerosol is the main focus and is simulated with three different configurations with a modified treatment of POA volatility and modified secondary organic aerosol (SOA) formation schemes. In addition, two types of emission inventories are used as model input in order to test the uncertainty related to the emissions. Predictions of basic meteorological parameters and primary and secondary pollutant concentrations are evaluated, and four pollution regimes are defined according to the air mass origin. Primary pollutants are generally overestimated, while ozone is consistent with observations. Sulfate is generally overestimated, while ammonium and nitrate levels are well simulated with the refined emission data set. As expected, the simulation with non-volatile POA and a single-step SOA formation mechanism largely overestimates POA and underestimates SOA. Simulation of organic aerosol with the VBS approach taking into account the aging of semi-volatile organic compounds (SVOC) shows the best correlation with measurements. High-concentration events observed mostly after long-range transport are well reproduced by the model. Depending on the emission inventory used, simulated POA levels are either reasonable or underestimated, while SOA levels tend to be overestimated. Several uncertainties related to the VBS scheme (POA volatility, SOA yields, the aging parameterization), to emission input data, and to simulated OH levels can be responsible for
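The equilibrium gas-particle partitioning that underlies the volatility basis set can be illustrated with a minimal sketch (illustrative values, not the CHIMERE implementation): material in each logarithmically spaced volatility bin with saturation concentration C*_i resides in the particle phase with fraction 1/(1 + C*_i/C_OA), where C_OA is the total organic aerosol concentration.

```python
def vbs_partition(emissions, c_star, c_oa):
    """Particle-phase mass in each volatility bin under equilibrium
    partitioning: fraction 1 / (1 + C*_i / C_OA) of each bin's material
    condenses.  Units of c_star and c_oa must match (e.g. ug/m3)."""
    return [e / (1.0 + c / c_oa) for e, c in zip(emissions, c_star)]

# Illustrative bins: log-spaced saturation concentrations, 1 unit of
# material per bin, at an assumed organic aerosol loading of 10 ug/m3.
particle = vbs_partition([1.0, 1.0, 1.0, 1.0],
                         [1.0, 10.0, 100.0, 1000.0], 10.0)
```

In a full scheme such as the one evaluated here, the bins also exchange material through photochemical aging, which shifts mass toward lower volatility; that step is omitted from this sketch.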
The Role of eHealth in Optimizing Preventive Care in the Primary Care Setting.
Carey, Mariko; Noble, Natasha; Mansfield, Elise; Waller, Amy; Henskens, Frans; Sanson-Fisher, Rob
2015-05-22
Modifiable health risk behaviors such as smoking, overweight and obesity, risky alcohol consumption, physical inactivity, and poor nutrition contribute to a substantial proportion of the world's morbidity and mortality burden. General practitioners (GPs) play a key role in identifying and managing modifiable health risk behaviors. However, these are often underdetected and undermanaged in the primary care setting. We describe the potential of eHealth to help patients and GPs to overcome some of the barriers to managing health risk behaviors. In particular, we discuss (1) the role of eHealth in facilitating routine collection of patient-reported data on lifestyle risk factors, and (2) the role of eHealth in improving clinical management of identified risk factors through provision of tailored feedback, point-of-care reminders, tailored educational materials, and referral to online self-management programs. Strategies to harness the capacity of the eHealth medium, including the use of dynamic features and tailoring to help end users engage with, understand, and apply information, need to be considered and maximized. Finally, the potential challenges in implementing eHealth solutions in the primary care setting are discussed. In conclusion, there is significant potential for innovative eHealth solutions to make a contribution to improving preventive care in the primary care setting. However, attention to issues such as data security and designing eHealth interfaces that maximize engagement from end users will be important to moving this field forward.
Using a Robust Design Approach to Optimize Chair Set-up in Wheelchair Sport
Directory of Open Access Journals (Sweden)
David S. Haydon
2018-02-01
Full Text Available Optimisation of wheelchairs for court sports is currently a difficult and time-consuming process due to the broad range of impairments across athletes, difficulties in monitoring on-court performance, and the trade-offs that set-up parameters impose on key performance variables. A robust design approach to this problem can potentially reduce the amount of testing required, and therefore allow for individual on-court assessments. This study used an orthogonal design with four set-up factors (seat height, depth, and angle, as well as tyre pressure) at three levels (current, decreased, and increased) for three elite wheelchair rugby players. Each player performed two maximal-effort sprints from a stationary position in nine different set-ups, allowing for detailed analysis of each factor and level. Whilst statistical significance is difficult to obtain due to the small sample size, meaningful differences aligning with previous research findings were identified and provide support for the use of this approach.
Wright, Dannen D; Wright, Alex J; Boulter, Tyler D; Bernhisel, Ashlie A; Stagg, Brian C; Zaugg, Brian; Pettey, Jeff H; Ha, Larry; Ta, Brian T; Olson, Randall J
2017-09-01
To determine the optimum bottle height, vacuum, aspiration rate, and power settings in the peristaltic mode of the Whitestar Signature Pro machine with Ellips FX tip action (transversal). John A. Moran Eye Center Laboratories, University of Utah, Salt Lake City, Utah, USA. Experimental study. Porcine lens nuclei were hardened with formalin and cut into 2.0 mm cubes. Lens cubes were emulsified using transversal ultrasound; fragment removal time (efficiency) and fragment bounces off the tip (chatter) were measured to determine the optimum aspiration rate, bottle height, vacuum, and power settings in the peristaltic mode. Efficiency increased in a linear fashion with increasing bottle height and vacuum. The most efficient aspiration rate was 50 mL/min, with 60 mL/min statistically similar. Increasing power increased efficiency up to 90%, with increased chatter at 100%. The most efficient values for the settings tested were a bottle height of 100 cm, vacuum of 600 mm Hg, an aspiration rate of 50 or 60 mL/min, and power at 90%. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Community-based interventions to optimize early childhood development in low resource settings.
Maulik, P K; Darmstadt, G L
2009-08-01
Interventions targeting the early childhood period (0 to 3 years) help to improve neuro-cognitive functioning throughout life. Some of the more low-cost, low resource-intensive community practices for this age group are play, reading, music, and tactile stimulation. This research was conducted to summarize the evidence regarding the effectiveness of such strategies on child development, with particular focus on techniques that may be transferable to developing countries and to children at risk of developing secondary impairments. PubMed, PsycInfo, Embase, ERIC, CINAHL and Cochrane were searched for studies involving the above strategies for early intervention. Reference lists of these studies were scanned and further studies were incorporated by snowballing. Overall, 76 articles corresponding to 53 studies, 24 of which were randomized controlled trials, were identified. Sixteen of those studies were from low- and middle-income countries. Play and reading were the two commonest interventions and showed positive impact on intellectual development of the child. Music was evaluated primarily in intensive care settings. Kangaroo Mother Care, and to a lesser extent massage, also showed beneficial effects. Improvement in parent-child interaction was common to all the interventions. Play and reading were effective interventions for early childhood development in low- and middle-income countries. More research is needed to judge the effectiveness of music. Kangaroo Mother Care is effective for low birth weight babies in resource-poor settings, but further research is needed in community settings. Massage is useful, but needs more rigorous research prior to being advocated for community-level interventions.
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf
2013-01-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484
Deptuła, A.; Partyka, M. A.
2014-08-01
The method of minimization of complex partial multi-valued logical functions determines the degree of importance of construction and exploitation parameters, which play the role of logical decision variables. Such logical functions are taken into consideration in the modelling of machine sets. For multi-valued logical functions with weighting products, it is possible to use a modified Quine-McCluskey algorithm for multi-valued function minimization. Taking weighting coefficients into account in the logical tree minimization reflects a physical model of the analysed object much better.
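A minimal sketch of the classical binary Quine-McCluskey prime-implicant step may help fix ideas; the paper's modified multi-valued, weighted variant is not reproduced here. Implicants are strings over {'0', '1', '-'}, and two implicants merge when they differ in exactly one specified bit:

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants differing in exactly one non-wildcard bit,
    or return None if they cannot be merged."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def prime_implicants(minterms, nbits):
    """All prime implicants of the boolean function given by its minterms."""
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used   # anything that never merged is prime
        terms = merged
    return primes
```

A full minimizer would follow this with a prime-implicant chart (set cover) step; in the weighted multi-valued setting of the paper, that cover step is where the weighting coefficients enter.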
Crop Evaluation System Optimization: Attribute Weights Determination Based on Rough Sets Theory
Directory of Open Access Journals (Sweden)
Ruihong Wang
2017-01-01
Full Text Available The present study is mainly a continuation of our previous study, which concerned the development of a crop evaluation system based on grey relational analysis. In that system, the determination of attribute weights affects the evaluation result directly. Attribute weights are usually ascertained from the decision-maker's experience and knowledge. In this paper, we utilize rough sets theory to calculate attribute significance and then combine it with the weight given by the decision-maker. This method comprehensively considers both subjective experience and the objective situation, and thus can acquire more reasonable results. Finally, based on this method, we improve the system using ASP.NET technology.
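Attribute significance in rough sets theory is commonly measured as the drop in the dependency degree γ(C, D) when an attribute is removed from the condition set C, where γ is the fraction of objects in the positive region (objects whose condition-class is consistent in the decision attribute). A minimal sketch under that standard definition; the paper's exact combination with subjective weights is not reproduced:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Indiscernibility classes: group row indices by their values on attrs."""
    blocks = defaultdict(list)
    for i, r in enumerate(rows):
        blocks[tuple(r[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, cond, dec):
    """gamma(C, D): fraction of objects whose C-class has a single D value."""
    pos = 0
    for block in partition(rows, cond):
        if len({rows[i][dec] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

def significance(rows, cond, dec, a):
    """Significance of attribute a: drop in dependency when a is removed."""
    rest = [c for c in cond if c != a]
    return dependency(rows, cond, dec) - dependency(rows, rest, dec)
```

The rough-set significance could then be normalized and blended with the decision-maker's subjective weight, as the paper proposes.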
OPTIMIZATION-BASED APPROACH TO TILING OF FINITE AREAS WITH ARBITRARY SETS OF WANG TILES
Directory of Open Access Journals (Sweden)
Marek Tyburec
2017-11-01
Full Text Available Wang tiles have proved to be a convenient tool for the design of aperiodic tilings in computer graphics and in materials engineering. While there are several algorithms for the generation of finite-sized tilings, they exploit the specific structure of individual tile sets, which prevents their general usage. In this contribution, we reformulate the NP-complete tiling generation problem as a binary linear program, together with its linear and semidefinite relaxations suitable for the branch and bound method. Finally, we assess the performance of the established formulations on the generation of several aperiodic tilings reported in the literature, and conclude that the linear relaxation is better suited for the problem.
Directory of Open Access Journals (Sweden)
V. A. Baturin
2017-03-01
Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements is suggested, along with a modernization based on expanding the main structures of the core algorithm in the parameter. The idea of the method rests on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that special problem is as follows: from the end point of the trajectory, a path is sought that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function is zero; otherwise it is greater than zero. The Bellman equation for this special problem is considered, and a supporting approximation is selected: the Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation gives nothing, because the Bellman function and its expansion coefficients are zero. A special trick is therefore used: an additional variable is introduced that characterizes the degree of deviation of the system from the initial state, yielding an expanded original chain. A nonzero initial condition is selected for the new variable, so that the obtained trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation. As a result of these procedures, algorithms of successive improvement are designed. Relaxation conditions for the algorithms, and the necessary conditions of optimality, are also obtained.
Radenkovic, Dina; Kobayashi, Hisataka; Remsey-Semmelweis, Ernö; Seifalian, Alexander M
2016-08-01
Breast cancer is the most common cancer in the world. Sentinel lymph node (SLN) biopsy is used for staging of axillary lymph nodes. Organic dyes and radiocolloid are currently used for SLN mapping, but expose patients to ionizing radiation, are unstable during surgery, and cause local tissue damage. Quantum dots (QD) could be used for SLN mapping without the need for biopsy. Surgical resection of the primary tumor is the optimal treatment for early-diagnosed breast cancer, but due to difficulties in defining tumor margins, cancer cells often remain, leading to recurrences. Functionalized QD could be used for image-guided tumor resection to allow visualization of cancer cells. Near-infrared QD are photostable and have improved deep tissue penetration. Slow elimination of QD raises concerns of potential accumulation. Nevertheless, promising findings with cadmium-free QD in recent in vivo studies and a first in-human trial suggest huge potential for cancer diagnostics and therapy. Copyright © 2016 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Rica Gonen
2013-11-01
Full Text Available We analyze the space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal combinatorial auctions. We examine a model with multidimensional types, nonidentical items, private values and quasilinear preferences for the players with one relaxation; the players are subject to publicly-known budget constraints. We show that the space includes dictatorial mechanisms and that if dictatorial mechanisms are ruled out by a natural anonymity property, then an impossibility of design is revealed. The same impossibility naturally extends to other abstract mechanisms with an arbitrary outcome set if one maintains the original assumptions of players with quasilinear utilities, public budgets and nonnegative prices.
Optimal set of agri-environmental indicators for the agricultural sector of Czech Republic
Directory of Open Access Journals (Sweden)
Jiří Hřebíček
2013-01-01
Full Text Available Current trends in the evaluation of agri-environmental indicators (i.e., the measurement of environmental performance and farm reporting) are discussed in this paper, focusing on the agriculture sector. From the perspective of agricultural policy, there are two broad decisions to make: which indicators to recommend and promote to farmers, and which indicators to collect to assist in agricultural policy-making. In the first part of the paper we introduce several general approaches to indicators collected to assist in policy-making (European Union, Organisation for Economic Co-operation and Development, and Food and Agriculture Organization of the United Nations), given the differences in the decision-making problems faced by these sets of decision makers. In the second part of the paper we propose indicators to recommend and promote to farmers in the Czech Republic.
International Nuclear Information System (INIS)
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf
2013-01-01
A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein
Energy Technology Data Exchange (ETDEWEB)
Foadi, James [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Aller, Pierre [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Alguel, Yilmaz; Cameron, Alex [Imperial College, London SW7 2AZ (United Kingdom); Axford, Danny; Owen, Robin L. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Armour, Wes [Oxford e-Research Centre (OeRC), Keble Road, Oxford OX1 3QG (United Kingdom); Waterman, David G. [Research Complex at Harwell (RCaH), Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0FA (United Kingdom); Iwata, So [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Evans, Gwyndaf, E-mail: gwyndaf.evans@diamond.ac.uk [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)
2013-08-01
A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
Myosin-II sets the optimal response time scale of chemotactic amoeba
Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Bodenschatz, Eberhard; Beta, Carsten
2014-03-01
The response dynamics of the actin cytoskeleton to external chemical stimuli plays a fundamental role in numerous cellular functions. One of the key players that governs the dynamics of the actin network is the motor protein myosin-II. Here we investigate the role of myosin-II in the response of the actin system to external stimuli. We used a microfluidic device in combination with a photoactivatable chemoattractant to apply stimuli to individual cells with high temporal resolution. We directly compare the actin dynamics in Dictyostelium discoideum wild type (WT) cells to a knockout mutant that is deficient in myosin-II (MNL). Similar to the WT, a small population of MNL cells showed self-sustained oscillations even in the absence of external stimuli. The actin response of MNL cells to a short pulse of chemoattractant resembles the WT during the first 15 sec but is significantly delayed afterward. The amplitude of the dominant peak in the power spectrum of the response time series of MNL cells to periodic stimuli with varying period showed a clear resonance at a forcing period of 36 sec, significantly delayed compared with the resonance at 20 sec found for the WT. This shift indicates an important role of myosin-II in setting the response time scale of motile amoeba. Institute of Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam, Germany.
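Extracting the dominant period from the power spectrum of a response time series, as done above to locate the resonance, can be sketched generically as follows (an illustration, not the authors' analysis pipeline):

```python
import numpy as np

def dominant_period(signal, dt):
    """Period (in units of dt) of the largest nonzero-frequency peak in the
    power spectrum of a uniformly sampled, detrended signal."""
    x = np.asarray(signal, dtype=float)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + np.argmax(power[1:])   # skip the DC bin at index 0
    return 1.0 / freqs[k]
```

Applied to the periodically forced responses, such a peak picker would return ~36 s for the mutant and ~20 s for the wild type, per the values reported in the abstract.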
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.
Directory of Open Access Journals (Sweden)
Vasanthan Maruthapillai
Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
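The pipeline described above — statistical features computed over virtual-marker distances, then a nearest-neighbor vote — can be sketched as follows. All data here are synthetic (hypothetical marker-to-centre distances for two emotions), and the optical-flow marker tracking itself is omitted; this is an illustration of the feature-plus-KNN stage only, not the authors' implementation.

```python
import numpy as np

def marker_features(distances):
    """Mean, variance and RMS of a sequence of marker-to-centre distances."""
    d = np.asarray(distances, dtype=float)
    return np.array([d.mean(), d.var(), np.sqrt(np.mean(d ** 2))])

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote of its k nearest neighbours."""
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    votes = [train_y[i] for i in idx]
    return max(set(votes), key=votes.count)

# Hypothetical training data: two emotions with distinct distance statistics.
rng = np.random.default_rng(0)
happy = [marker_features(rng.normal(10.0, 1.0, 50)) for _ in range(20)]
sad = [marker_features(rng.normal(14.0, 2.0, 50)) for _ in range(20)]
X = np.vstack(happy + sad)
y = ["happiness"] * 20 + ["sadness"] * 20

probe = marker_features(rng.normal(10.0, 1.0, 50))
print(knn_predict(X, y, probe))  # expected: happiness
```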
Directory of Open Access Journals (Sweden)
Dongliang Liu
2017-03-01
Full Text Available In order to optimize the ventilation effect after an ammonia leak in a refrigeration machinery room, a food processing enterprise was selected as the subject of investigation. The velocity and concentration field distributions during the ammonia leakage process are examined through CFD simulation of the refrigeration machinery room. The ventilation system of the room is optimized in three respects: air distribution, ventilation volume and discharge outlet placement. The influence of ventilation on the ammonia alarm system is also analyzed. The results show that it is better to set the discharge outlet at the top of the plant than at the side wall, and that the smaller the distance between the air outlet and the ammonia accumulation area, the better the ventilation effect. The air flow can be improved and vortex flow reduced if the ventilation volume, the number of air vents and the exhaust velocity are arranged appropriately. If the detectors are placed on the ceiling above the refrigeration units or the ammonia storage vessel, not only is the alarm function ensured but the detection area is also enlarged.
Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M
2013-01-01
Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design and in image-guided surgery applications. The complex morphology of thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected-components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.
DEFF Research Database (Denmark)
Brandbyge, Mads
2014-01-01
, different from what would be obtained by using an orthogonal basis, and dividing surfaces defined in real-space. We argue that this assumption is not required to be fulfilled to get exact results. We show how the current/transmission calculated by the standard Greens function method is independent...
Czech Academy of Sciences Publication Activity Database
Li, F.; Wang, L.; Zhao, J.; Xie, J. R. H.; Riley, Kevin Eugene; Chen, Z.
2011-01-01
Roč. 130, 2/3 (2011), s. 341-352 ISSN 1432-881X Institutional research plan: CEZ:AV0Z40550506 Keywords: water cluster * density functional theory * MP2 * CCSD(T) * basis set * relative energies Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.162, year: 2011
DEFF Research Database (Denmark)
Christensen, Niels Johan; Kepp, Kasper Planeta
2014-01-01
, very high (R2∼0.99) correlation was observed between logKm (ABTS) and binding-pocket charge due to sites 157, 161, 269, 271, and 333, i.e. laccases optimal for ABTS turnover have positively charged anchor points in their pockets. Our work also demonstrates how activity-constraints can markedly improve...
International Nuclear Information System (INIS)
Guerra, Fabio A.; Coelho, Leandro dos S.
2008-01-01
An important problem in engineering is the identification of nonlinear systems, among them radial basis function neural network (RBF-NN) models using Gaussian activation functions, which have received particular attention due to their potential to approximate nonlinear behavior. Several design methods have been proposed for choosing the centers and spreads of the Gaussian functions and for training the RBF-NN. The selection of RBF-NN parameters such as centers, spreads, and weights can be understood as a system identification problem. This paper presents a hybrid training approach based on clustering methods (k-means and c-means) to tune the centers of the Gaussian functions used in the hidden layer of RBF-NNs. This design also uses particle swarm optimization (PSO) for center (local clustering search method) and spread tuning, and the Moore-Penrose pseudoinverse for the adjustment of the RBF-NN output weights. Simulations involving this RBF-NN design to identify Lorenz's chaotic system indicate that the performance of the proposed method is superior to that of a conventional RBF-NN trained with k-means and the Moore-Penrose pseudoinverse for multi-step-ahead forecasting
Perkins, Rosie; Reid, Helen; Araújo, Liliana S; Clark, Terry; Williamon, Aaron
2017-01-01
Student health and wellbeing within higher education has been documented as poor in relation to the general population. This is a particular problem among students at music conservatoires, who are studying within a unique educational context that is known to generate both physical and psychological challenges. This article examines how conservatoire students experience health and wellbeing within their institutional context, using a framework from health promotion to focus attention on perceived enablers and barriers to optimal health in relation to three levels: lifestyle, support services, and conservatoire environment. In order to respond to the individuality of students' experiences, a qualitative approach was taken based on semi-structured interviews with 20 current or recent conservatoire students in the United Kingdom. Thematic analysis revealed a complex set of enablers and barriers: (i) lifestyle enablers included value placed on the importance of optimal health and wellbeing for musicians and daily practices to enable this; lifestyle barriers included struggling to maintain healthy lifestyles within the context of musical practice and learning; (ii) support enablers included accessible support sources within and beyond the conservatoire; support barriers included a perceived lack of availability or awareness of appropriate support; (iii) environmental enablers included positive and enjoyable experiences of performance as well as strong relationships and communities; environmental barriers included experiences of comparison and competition, pressure and stress, challenges with negative performance feedback, psychological distress, and perceived overwork. The findings reveal a need for health promotion to focus not only on individuals but also on the daily practices and routines of conservatoires. Additionally, they suggest that continued work is required to embed health and wellbeing support as an integral component of conservatoire education, raising
Directory of Open Access Journals (Sweden)
Rosie Perkins
2017-06-01
Full Text Available Student health and wellbeing within higher education has been documented as poor in relation to the general population. This is a particular problem among students at music conservatoires, who are studying within a unique educational context that is known to generate both physical and psychological challenges. This article examines how conservatoire students experience health and wellbeing within their institutional context, using a framework from health promotion to focus attention on perceived enablers and barriers to optimal health in relation to three levels: lifestyle, support services, and conservatoire environment. In order to respond to the individuality of students’ experiences, a qualitative approach was taken based on semi-structured interviews with 20 current or recent conservatoire students in the United Kingdom. Thematic analysis revealed a complex set of enablers and barriers: (i) lifestyle enablers included value placed on the importance of optimal health and wellbeing for musicians and daily practices to enable this; lifestyle barriers included struggling to maintain healthy lifestyles within the context of musical practice and learning; (ii) support enablers included accessible support sources within and beyond the conservatoire; support barriers included a perceived lack of availability or awareness of appropriate support; (iii) environmental enablers included positive and enjoyable experiences of performance as well as strong relationships and communities; environmental barriers included experiences of comparison and competition, pressure and stress, challenges with negative performance feedback, psychological distress, and perceived overwork. The findings reveal a need for health promotion to focus not only on individuals but also on the daily practices and routines of conservatoires. Additionally, they suggest that continued work is required to embed health and wellbeing support as an integral component of conservatoire
Hamdi, Basma; Mabrouk, Mohamed Tahar; Kairouani, Lakdar; Kheiri, Abdelhamid
2017-06-01
Different configurations of organic Rankine cycle (ORC) systems are potential thermodynamic concepts for power generation from low-grade heat. The aim of this work is to investigate and optimize the performance of the three main ORC system configurations: basic ORC, ORC with internal heat exchange (IHE) and regenerative ORC. The evaluation of those configurations was performed using seven working fluids with markedly different thermodynamic behaviours (R245fa, R601a, R600a, R227ea, R134a, R1234ze and R1234yf). The optimization has been performed using a genetic algorithm over a comprehensive set of operative parameters such as the fluid evaporating temperature, the fraction of flow rate or the pressure at the steam extraction point in the turbine. Results show that there is no single best ORC configuration for all those fluids; rather, there is a suitable configuration for each fluid. Contribution to the topical issue "Materials for Energy harvesting, conversion and storage II (ICOME 2016)", edited by Jean-Michel Nunzi, Rachid Bennacer and Mohammed El Ganaoui
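A genetic algorithm of the kind used above can be sketched on a one-parameter toy problem. The `efficiency` function below is a made-up stand-in for an ORC objective (rising with evaporating temperature, then falling), not a thermodynamic model; the population size, mutation scale and temperature bounds are arbitrary assumptions.

```python
import random

def efficiency(t_evap):
    """Toy stand-in for ORC net efficiency vs evaporating temperature (degC):
    rises with temperature, then falls as losses dominate. Not physical."""
    return -((t_evap - 120.0) / 60.0) ** 2 + 0.14

def genetic_search(lo, hi, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA: elitist selection, blend crossover, Gaussian
    mutation, clamped to the parameter bounds."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=efficiency, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, 2.0)  # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=efficiency)

best = genetic_search(60.0, 180.0)
print(round(best, 1))  # ≈ 120, the temperature maximizing the toy efficiency
```

In the paper's setting the chromosome would hold several parameters at once (evaporating temperature, extraction pressure, flow fraction) rather than a single scalar.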
International Nuclear Information System (INIS)
Zhidkov, P.E.
1998-01-01
We consider the problem u″ = f(u²)u (0 < x < 1), u(0) = u(1) = 0, where f(r²) → −∞ as r → ∞. It is known that this problem possesses a sequence of solutions {u_n}, n = 0, 1, 2, ..., such that the nth solution u_n(x) has precisely n roots in the interval (0, 1). We prove the existence of a constant s₀ > 0 such that, for s < s₀, an arbitrary above-described sequence of solutions of our problem is a basis of the space H^s(0, 1)
Kleidon, Axel; Renner, Maik
2016-04-01
, which then links this thermodynamic approach to optimality in vegetation. We also contrast this approach to common, semi-empirical approaches of surface-atmosphere exchange and discuss how thermodynamics may set a broader range of transport limitations and optimality in the soil-plant-atmosphere system.
International Nuclear Information System (INIS)
Lee, Chieh-Hsiu Jason; Aleman, Dionne M; Sharpe, Michael B
2011-01-01
The beam orientation optimization (BOO) problem in intensity modulated radiation therapy (IMRT) treatment planning is a nonlinear problem, and existing methods to obtain solutions to the BOO problem are time consuming due to the complex nature of the objective function and size of the solution space. These issues become even more difficult in total marrow irradiation (TMI), where many more beams must be used to cover a vastly larger treatment area than typical site-specific treatments (e.g., head-and-neck, prostate, etc). These complications result in excessively long computation times to develop IMRT treatment plans for TMI, so we attempt to develop methods that drastically reduce treatment planning time. We transform the BOO problem into the classical set cover problem (SCP) and use existing methods to solve SCP to obtain beam solutions. Although SCP is NP-Hard, our methods obtain beam solutions that result in quality treatments in minutes. We compare our approach to an integer programming solver for the SCP to illustrate the speed advantage of our approach.
Energy Technology Data Exchange (ETDEWEB)
Lee, Chieh-Hsiu Jason; Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, ON M5S 3G8 (Canada); Sharpe, Michael B, E-mail: chjlee@mie.utoronto.ca, E-mail: aleman@mie.utoronto.ca, E-mail: michael.sharpe@rmp.uhn.on.ca [Princess Margaret Hospital, Department of Radiation Oncology, University of Toronto, 610 University Avenue, Toronto, ON M5G 2M9 (Canada)
2011-09-07
The beam orientation optimization (BOO) problem in intensity modulated radiation therapy (IMRT) treatment planning is a nonlinear problem, and existing methods to obtain solutions to the BOO problem are time consuming due to the complex nature of the objective function and size of the solution space. These issues become even more difficult in total marrow irradiation (TMI), where many more beams must be used to cover a vastly larger treatment area than typical site-specific treatments (e.g., head-and-neck, prostate, etc). These complications result in excessively long computation times to develop IMRT treatment plans for TMI, so we attempt to develop methods that drastically reduce treatment planning time. We transform the BOO problem into the classical set cover problem (SCP) and use existing methods to solve SCP to obtain beam solutions. Although SCP is NP-Hard, our methods obtain beam solutions that result in quality treatments in minutes. We compare our approach to an integer programming solver for the SCP to illustrate the speed advantage of our approach.
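The set-cover reduction above can be illustrated with the classical greedy SCP heuristic, which repeatedly picks the set covering the most still-uncovered elements and carries a ln(n) approximation guarantee. The beams and the target voxels they cover are hypothetical, and the paper's own SCP solution method may differ from this greedy sketch.

```python
def greedy_set_cover(universe, subsets):
    """Greedy set-cover heuristic: pick the set covering the most
    still-uncovered elements until everything is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe not coverable by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical beams, each covering a set of target voxels (1..8).
beams = {
    "b1": {1, 2, 3, 4},
    "b2": {3, 4, 5},
    "b3": {5, 6, 7, 8},
    "b4": {1, 8},
}
print(greedy_set_cover(range(1, 9), beams))  # ['b1', 'b3']
```

For TMI-scale problems the appeal of this reduction is speed: each greedy pass is linear in the number of candidate beams, versus the combinatorial search of a full nonlinear BOO formulation.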
Ha, Linh Khanh; Krüger, Jens; Dihl Comba, João Luiz; Silva, Cláudio T; Joshi, Sarang
2012-06-01
Image population analysis is the class of statistical methods that plays a central role in understanding the development, evolution, and disease of a population. However, these techniques often require excessive computational power and memory, demands that are compounded by a large number of volumetric inputs. Restricted access to supercomputing power limits their influence in general research and practical applications. In this paper we introduce ISP, an Image-Set Processing streaming framework that harnesses the processing power of commodity heterogeneous CPU/GPU systems and attempts to solve this computational problem. In ISP, we introduce specially designed streaming algorithms and data structures that provide an optimal solution for out-of-core multi-image processing problems both in terms of memory usage and computational efficiency. ISP makes use of the asynchronous execution mechanism supported by parallel heterogeneous systems to efficiently hide the inherent latency of the processing pipeline of out-of-core approaches. Consequently, with computationally intensive problems, the ISP out-of-core solution can achieve the same performance as the in-core solution. We demonstrate the efficiency of the ISP framework on synthetic and real datasets.
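The out-of-core idea — touching only a bounded slice of each volume at a time — can be sketched with a chunked voxel-wise mean over an image population. This is a minimal CPU-only illustration; ISP's GPU streaming, asynchronous scheduling and latency hiding are not modelled, and the tiny arrays stand in for volumes too large to hold in memory at once.

```python
import numpy as np

def streaming_mean(volumes, chunk_voxels=4):
    """Accumulate a voxel-wise mean over an image population one chunk at a
    time, so at most `chunk_voxels` voxels per volume are resident at once."""
    n_vox = volumes[0].size
    mean = np.zeros(n_vox)
    for start in range(0, n_vox, chunk_voxels):
        sl = slice(start, start + chunk_voxels)
        for vol in volumes:  # in an ISP-like system this loop would stream
            mean[sl] += vol.ravel()[sl]
    return mean / len(volumes)

# Three tiny stand-in "volumes" with constant intensities 1, 2 and 6.
vols = [np.full((2, 3), v, dtype=float) for v in (1.0, 2.0, 6.0)]
print(streaming_mean(vols))  # [3. 3. 3. 3. 3. 3.]
```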
International Nuclear Information System (INIS)
Barashenkov, V.S.; Pogodaev, G.N.; Polanski, A.; Popov, Yu.P.; Puzynin, I.V.; Sisakyan, A.N.; Sosnin, A.N.
1998-01-01
Mathematical modeling and thermal flux estimations show that a combination of installations available at present at JINR - the plutonium reactor IBR-30 and the 660 MeV proton phasotron with an extracted beam current of 0.25 μA, i.e. 10% of its average value - allows one to construct an air-cooled electronuclear set-up with the multiplication coefficient K_eff ≅ 0.94, a neutron yield N_tot ≅ 10^14 - 10^15 and a heat generation of about 10 kW. This set-up will demonstrate the possibility of constructing subcritical transmutation-power-generating electronuclear systems that are safe and stable in operation and applicable to the utilization of weapon-grade and technical plutonium. The kinetics of the electronuclear system will be investigated, in particular fluctuations of the value of K_eff for various parameters of the proton beam. Cross sections of nuclear reactions which are important for estimating the efficiency of various nuclear waste transmutation regimes, as well as the neutron fluxes together with the heat distributions inside and outside the plutonium core, will be measured. A comparison of these data with theoretical calculations allows one to check and to develop significantly the methods of mathematical modeling of electronuclear systems. We also plan to estimate the possibilities of the ARC-method for the burning of radioactive wastes and to study the influence of various reflectors and of multipartitioning of the core, which increases the neutron yield. (author)
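For a subcritical set-up like the one above, the steady neutron population driven by an external (spallation) source is amplified by the standard point-kinetics source-multiplication factor M = 1/(1 − k_eff). A quick check for the quoted K_eff ≅ 0.94 (the relation is standard accelerator-driven-system theory, not a formula from this abstract):

```python
def source_multiplication(k_eff):
    """Steady-state amplification M = 1/(1 - k_eff) of an external neutron
    source in a subcritical multiplying medium."""
    if not 0 <= k_eff < 1:
        raise ValueError("subcritical operation requires 0 <= k_eff < 1")
    return 1.0 / (1.0 - k_eff)

# For k_eff = 0.94, each source neutron is worth about 16.7 fission-chain
# neutrons, which is why a modest 0.25 uA proton beam can drive ~10 kW.
print(round(source_multiplication(0.94), 1))  # 16.7
```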
Aschepkov, Leonid T; Kim, Taekyun; Agarwal, Ravi P
2016-01-01
This book is based on lectures from a one-year course at the Far Eastern Federal University (Vladivostok, Russia) as well as on workshops on optimal control offered to students at various mathematical departments at the university level. The main themes of the theory of linear and nonlinear systems are considered, including the basic problem of establishing the necessary and sufficient conditions of optimal processes. In the first part of the course, the theory of linear control systems is constructed on the basis of the separation theorem and the concept of a reachability set. The authors prove the closure of a reachability set in the class of piecewise continuous controls, and the problems of controllability, observability, identification, performance and terminal control are also considered. The second part of the course is devoted to nonlinear control systems. Using the method of variations and the Lagrange multipliers rule of nonlinear problems, the authors prove the Pontryagin maximum principle for prob...
Foreest, N. D. van; Wijngaard, J.
2010-01-01
The single-product, stationary inventory problem with set-up cost is one of the classical problems in stochastic operations research. Theories have been developed to cope with finite production capacity in periodic review systems, and it has been proved that optimal policies for these cases are not
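For the uncapacitated single-product problem with set-up cost, (s, S) policies are classically known to be optimal: order up to S whenever inventory falls to s or below, paying the fixed set-up cost per order (the finite-capacity case discussed above is harder, which is the paper's point). A minimal lost-sales simulation of such a policy, with invented costs and demand distribution:

```python
import random

def simulate_sS(s, S, demand, periods=10000,
                setup=50.0, hold=1.0, seed=2):
    """Average per-period cost of an (s, S) policy under i.i.d. demand.
    Costs and demand values are illustrative, not from the paper."""
    rng = random.Random(seed)
    inv, cost = S, 0.0
    for _ in range(periods):
        if inv <= s:          # review: reorder point reached
            cost += setup     # fixed set-up cost per order
            inv = S           # order-up-to level
        inv -= rng.choice(demand)
        inv = max(inv, 0)     # unmet demand is lost, for simplicity
        cost += hold * inv    # holding cost on ending inventory
    return cost / periods

print(simulate_sS(s=5, S=20, demand=[0, 1, 2, 3, 4]))
```

Sweeping s and S over a grid of such simulations is a simple (if brute-force) way to locate a good policy numerically.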
Palacios, S.G.
2015-01-01
In health facilities in resource-constrained settings, a lack of access to sustainable and reliable electricity can result in sub-optimal delivery of healthcare services, as they do not have lighting for medical procedures or power to run essential equipment and devices to treat their patients.
Hallowell, Nina; Snowdon, Claire; Morrow, Susan; Norman, Jane E; Denison, Fiona C; Lawton, Julia
2016-06-01
Hope has therapeutic value because it enables people to cope with uncertainty about their future health. Indeed, hope, or therapeutic optimism (TO), is seen as an essential aspect of the provision and experience of medical care. The role of TO in clinical research has been briefly discussed, but the concept, and whether it can be transferred from care to research and from patients to clinicians, has not been fully investigated. The role played by TO in research emerged during interviews with staff involved in a peripartum trial. This paper unpacks the concept of TO in this setting and considers the role it may play in the wider delivery of clinical trials. The Got-it trial is a UK-based, randomised placebo-controlled trial that investigates the use of sublingual glyceryl trinitrate (GTN) spray to treat retained placenta. Qualitative data were collected in open-ended interviews with obstetricians, research and clinical midwives (n =27) involved in trial recruitment. Data were analysed using the method of constant comparison. TO influenced staff engagement with Got-it at different points in the trial and in different ways. Prior knowledge of, and familiarity with, GTN meant that from the outset staff perceived the trial as low risk. TO facilitated staff involvement in the trial; staff who already understood GTN's effects were optimistic that it would work, and staff collaborated because they hoped that the trial would address what they identified as an important clinical need. TO could fluctuate over the course of the trial, and was sustained or undermined by unofficial observation of clinical outcomes and speculations about treatment allocation. Thus, TO appeared to be influenced by key situational factors: prior knowledge and experience, clinical need and observed participant outcomes. Situational TO plays a role in facilitating staff engagement with clinical research. TO may affect trial recruitment by enabling staff to sustain the levels of uncertainty, or
International Nuclear Information System (INIS)
Osgouee, Ahmad
2010-01-01
Despite the many advanced control methods proposed for the control of nuclear steam generator (SG) water level, operators are still experiencing difficulties, especially at low powers. Therefore, it seems that a suitable controller to replace the manual operations is still needed. In this paper, the optimization of SG level (SGL) set-points and the design of a robust controller for the SGL control system are discussed
Energy Technology Data Exchange (ETDEWEB)
Pollak, J.; Wozniak, A.W.; Dynia, Z.; Lipanowicz, T.
2004-07-01
Modern methods referred to as 'artificial intelligence' have been applied to combustion optimization and implementation of selected diagnostic functions for the milling system of a pulverized lignite-fired boiler. The results of combustion optimization have shown significant improvement of efficiency and reduction of NOx emissions. Fuzzy logic has been used to develop, among other things, a fan mill overload detection system.
Sousa, Vitor; Dias-Ferreira, Celia; Vaz, João M; Meireles, Inês
2018-05-01
Extensive research has been carried out on waste collection costs mainly to differentiate costs of distinct waste streams and spatial optimization of waste collection services (e.g. routes, number, and location of waste facilities). However, waste collection managers also face the challenge of optimizing assets in time, for instance deciding when to replace and how to maintain, or which technological solution to adopt. These issues require a more detailed knowledge about the waste collection services' cost breakdown structure. The present research adjusts the methodology for buildings' life-cycle cost (LCC) analysis, detailed in the ISO 15686-5:2008, to the waste collection assets. The proposed methodology is then applied to the waste collection assets owned and operated by a real municipality in Portugal (Cascais Ambiente - EMAC). The goal is to highlight the potential of the LCC tool in providing a baseline for time optimization of the waste collection service and assets, namely assisting on decisions regarding equipment operation and replacement.
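The ISO 15686-5-style life-cycle cost used above boils down to discounting acquisition, operation/maintenance and replacement cash flows to a single present value. The figures below (vehicle price, O&M, refit year, discount rate) are invented for illustration and are not taken from the EMAC case study.

```python
def life_cycle_cost(acquisition, annual_om, replacements, horizon, rate):
    """Discounted life-cycle cost: acquisition now, plus O&M each year,
    plus scheduled replacements. `replacements` maps year -> cost."""
    lcc = acquisition
    for year in range(1, horizon + 1):
        cash = annual_om + replacements.get(year, 0.0)
        lcc += cash / (1.0 + rate) ** year
    return lcc

# Hypothetical waste-collection vehicle: 150 kEUR purchase, 20 kEUR/yr O&M,
# a 30 kEUR mid-life refit in year 5, 10-year horizon, 4% discount rate.
cost = life_cycle_cost(150e3, 20e3, {5: 30e3}, horizon=10, rate=0.04)
print(round(cost))  # total discounted cost in EUR
```

Comparing such totals across replacement timings (e.g. refit in year 5 vs replace in year 7) is exactly the kind of time-optimization decision the abstract describes.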
Directory of Open Access Journals (Sweden)
Jinshui Zhang
2017-04-01
Full Text Available This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint by the outlier pixels which were located neighboring to the target class in the spectral feature space. The overall accuracies for wheat and bare land achieved were as high as 89.25% and 83.65%, respectively. However, target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. The different window sizes were then tested to acquire more wheat pixels for validation set. The results showed that classification accuracy increased with the increasing window size and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to the untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits using the optimal parameters, tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
Ma, Y T; Wubs, A M; Mathieu, A; Heuvelink, E; Zhu, J Y; Hu, B G; Cournède, P H; de Reffye, P
2011-04-01
Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate the ability of functional-structural plant models to simulate different fruit-set patterns among Capsicum cultivars through source-sink relationships. A greenhouse experiment of six Capsicum cultivars characterized with different fruit weight and fruit-set was conducted. Fruit-set patterns and potential fruit sink strength were determined through measurement. Source and sink strength of other organs were determined via the GREENLAB model, with a description of plant organ weight and dimensions according to plant topological structure established from the measured data as inputs. Parameter optimization was determined using a generalized least squares method for the entire growth cycle. Fruit sink strength differed among cultivars. Vegetative sink strength was generally lower for large-fruited cultivars than for small-fruited ones. The larger the size of the fruit, the larger variation there was in fruit-set and fruit yield. Large-fruited cultivars need a higher source-sink ratio for fruit-set, which means higher demand for assimilates. Temporal heterogeneity of fruit-set affected both number and yield of fruit. The simulation study showed that reducing heterogeneity of fruit-set was obtained by different approaches: for example, increasing source strength; decreasing vegetative sink strength, source-sink ratio for fruit-set and flower appearance rate; and harvesting individual fruits earlier before full ripeness. Simulation results showed that, when we increased source strength or decreased vegetative sink strength, fruit-set and fruit weight increased. However, no significant differences were found between large-fruited and small-fruited groups of cultivars regarding the effects of source
Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan
2018-03-08
Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance degree among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected for listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased obviously after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.
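Calibration of this general kind amounts to fitting a linear map between a session's ratings of the pink-noise references and the standard-curve values for those same references, then applying the map to every sample in the set. The sketch below uses invented numbers and a plain least-squares fit; the paper's actual standard curve relates logarithmic mean annoyance to loudness level.

```python
import numpy as np

def calibrate(ratings, ref_ratings, ref_standard):
    """Map raw annoyance ratings onto a common scale via a least-squares
    linear fit between this session's reference ratings and the
    standard-curve values for the same pink-noise references."""
    a, b = np.polyfit(ref_ratings, ref_standard, 1)
    return a * np.asarray(ratings) + b

# Session ratings of three pink-noise references vs hypothetical
# standard-curve annoyance values for the same references.
ref_session = [2.0, 4.0, 6.0]
ref_standard = [3.0, 5.0, 7.0]
print(calibrate([3.0, 5.0], ref_session, ref_standard))  # ~ [4. 6.]
```

Because every experimental sample set carries the same references, ratings from different sets land on one shared scale after this step.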
Directory of Open Access Journals (Sweden)
N.J. Timofeeva
2011-05-01
Full Text Available This article examines the financial planning of an organization's working capital. In particular, it presents a software implementation of an algorithm that analyzes the working-capital budget forecast, identifies temporarily free funds, and makes use of them through a model for selecting an optimal bond portfolio consistent with the enterprise's free liquidity flow.
Directory of Open Access Journals (Sweden)
Aliya Nurfaizovna Gabdulakhatova
2017-06-01
Full Text Available The relevance of this topic stems from the fact that inventory management is an important part of enterprise management policy in the service industry. The article summarizes the importance of stock optimization at the enterprise, presents the stages of inventory management, provides a horizontal analysis of the main results of the enterprise's activities and, based on the problem-tree and decision-tree methods, identifies problems and ways to optimize the business processes for inventory management. As one way to improve the financial condition of an enterprise by improving its inventory-management business processes, which will require certain investments, the use of logistics center services is suggested. The purpose of this work is to identify ways to optimize the enterprise's inventory-management activities. Methodology: methods of financial and economic activity analysis were used, as well as economic and mathematical methods. Results: the most informative parameters showing the efficiency of optimizing the enterprise's activities. Practical implications: it is expedient for economic entities that produce goods for subsequent sale to apply the results obtained.
Directory of Open Access Journals (Sweden)
Elena B. Krylova
2015-01-01
Full Text Available The article highlights problems associated with maintaining the financial stability of municipalities. It examines the process of improving the regulatory framework in the public sector of the economy, which supports the development of financial relations among economic entities at the municipal level and contributes to achieving the financial sustainability of municipalities. A number of theses that constitute the theoretical basis of the financial sustainability of municipalities are also considered.
Directory of Open Access Journals (Sweden)
Akanksha Mishra
2017-05-01
Full Text Available In a deregulated electricity market, it may at times become difficult to dispatch all the power scheduled to flow due to congestion in transmission lines. An Interline Power Flow Controller (IPFC) can be used to reduce system losses and the power flow in heavily loaded lines, and to improve the stability and loadability of the system. This paper proposes the Disparity Line Utilization Factor (DLUF) for optimal placement and Gravitational Search algorithm-based optimal tuning of an IPFC to control congestion in transmission lines. DLUF ranks the transmission lines in terms of relative line congestion. The IPFC is accordingly placed in the most congested and the least congested line connected to the same bus. Optimal sizing of the IPFC is carried out using the Gravitational Search algorithm, with a multi-objective function chosen for tuning its parameters. The proposed method is implemented on an IEEE 30-bus test system. Graphical representations included in the paper show the reduction in the LUF of the transmission lines after placement of the IPFC. A reduction in the active and reactive power losses of the system by about 6% is observed after an optimally tuned IPFC is included in the power system. The effectiveness of the proposed tuning method is also shown through the reduction in the values of the objective functions.
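The DLUF-based placement step described above can be sketched as follows. The exact DLUF definition is assumed here to be the gap in Line Utilization Factor between two lines sharing a bus, and the line data are made up for illustration:

```python
# Illustrative line data: name -> (from_bus, to_bus, MVA_flow, MVA_rating)
lines = {
    "L1": (1, 2, 95.0, 100.0),
    "L2": (2, 3, 40.0, 130.0),
    "L3": (2, 4, 110.0, 120.0),
    "L4": (3, 4, 20.0, 65.0),
}

def luf(flow, rating):
    """Line Utilization Factor: fraction of thermal capacity in use."""
    return flow / rating

def best_ipfc_pair(lines):
    """Pick the line pair sharing a bus with the largest LUF disparity:
    the IPFC couples the most and least utilized lines at that bus."""
    best = None
    names = list(lines)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            fa, ta, flow_a, rate_a = lines[a]
            fb, tb, flow_b, rate_b = lines[b]
            if not ({fa, ta} & {fb, tb}):   # no common bus
                continue
            dluf = abs(luf(flow_a, rate_a) - luf(flow_b, rate_b))
            if best is None or dluf > best[0]:
                best = (dluf, a, b)
    return best
```

With the data above, the heavily loaded L1 and lightly loaded L2, which share bus 2, are selected as the IPFC pair.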
Walsh, David; McCartney, Gerry; McCullough, Sarah; van der Pol, Marjon; Buchanan, Duncan; Jones, Russell
2015-09-01
Many theories have been proposed to explain the high levels of 'excess' mortality (i.e. higher mortality over and above that explained by differences in socio-economic circumstances) shown in Scotland-and, especially, in its largest city, Glasgow-compared with elsewhere in the UK. One such proposal relates to differences in optimism, given previously reported evidence of the health benefits of an optimistic outlook. A representative survey of Glasgow, Liverpool and Manchester was undertaken in 2011. Optimism was measured by the Life Orientation Test (Revised) (LOT-R), and compared between the cities by means of multiple linear regression models, adjusting for any differences in sample characteristics. Unadjusted analyses showed LOT-R scores to be similar in Glasgow and Liverpool (mean score (SD): 14.7 (4.0) for both), but lower in Manchester (13.9 (3.8)). This was consistent in analyses by age, gender and social class. Multiple regression confirmed the city results: compared with Glasgow, optimism was either similar (Liverpool: adjusted difference in mean score: -0.16 (95% CI -0.45 to 0.13)) or lower (Manchester: -0.85 (-1.14 to -0.56)). The reasons for high levels of Scottish 'excess' mortality remain unclear. However, differences in psychological outlook such as optimism appear to be an unlikely explanation.
Directory of Open Access Journals (Sweden)
Siwei Han
2018-03-01
Full Text Available An optimal load-tracking operation strategy for a grid-connected tubular solid oxide fuel cell (SOFC) is studied based on steady-state analysis of the system's thermodynamics and electrochemistry. Control of the SOFC is achieved by a two-level hierarchical control system. In the upper level, optimal set-points of output voltage and current corresponding to the unit load demand are obtained through a nonlinear optimization that minimizes the SOFC's internal power waste. In the lower level, a combined L1-MPC control strategy is designed to achieve fast set-point tracking under system nonlinearities while maintaining a constant fuel utilization factor. To prevent fuel starvation during transients caused by output power surges, a fuel flow constraint is imposed on the MPC via direct electron balance calculation. The proposed control schemes are tested on the grid-connected SOFC model.
Uvarova Svetlana; Kutsygina Olga; Smorodina Elena; Gumba Khuta
2018-01-01
The effectiveness and sustainability of an enterprise are based on the effectiveness and sustainability of its portfolio of projects. When creating a production program for a construction company based on a portfolio of projects and connected with planning and implementing initiated organizational and economic changes, the problem of finding the optimal "risk-return" ratio of the program (portfolio of projects) is solved. The article proposes and tests a methodology of forming a por...
Directory of Open Access Journals (Sweden)
Zhila Esna Ashari
Full Text Available Type IV secretion systems (T4SS are multi-protein complexes in a number of bacterial pathogens that can translocate proteins and DNA to the host. Most T4SSs function in conjugation and translocate DNA; however, approximately 13% function to secrete proteins, delivering effector proteins into the cytosol of eukaryotic host cells. Upon entry, these effectors manipulate the host cell's machinery for their own benefit, which can result in serious illness or death of the host. For this reason, recognition of T4SS effectors has become an important subject. Much previous work has focused on verifying effectors experimentally, a costly endeavor in terms of money, time, and effort. Having good predictions for effectors will help to focus experimental validations and decrease testing costs. In recent years, several scoring and machine learning-based methods have been suggested for the purpose of predicting T4SS effector proteins. These methods have used different sets of features for prediction, and their predictions have been inconsistent. In this paper, an optimal set of features is presented for predicting T4SS effector proteins using a statistical approach. A thorough literature search was performed to find features that have been proposed. Feature values were calculated for datasets of known effectors and non-effectors for T4SS-containing pathogens for four genera with a sufficient number of known effectors: Legionella pneumophila, Coxiella burnetii, Brucella spp., and Bartonella spp. The features were ranked, and less important features were filtered out. Correlations between remaining features were removed, and dimensional reduction was accomplished using principal component analysis and factor analysis. Finally, the optimal features for each pathogen were chosen by building logistic regression models and evaluating each model. The results based on evaluation of our logistic regression models confirm the effectiveness of our four optimal sets of
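The correlation-removal step described above can be sketched as follows. The threshold, the ranking, and the toy feature values are illustrative assumptions; the study's actual pipeline also applies PCA and factor analysis afterwards:

```python
# Drop any feature highly correlated with a better-ranked one.
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def filter_correlated(features, ranking, threshold=0.9):
    """features: dict name -> list of values per protein.
    ranking: feature names, most important first."""
    kept = []
    for name in ranking:
        if all(abs(pearson(features[name], features[k])) < threshold
               for k in kept):
            kept.append(name)
    return kept
```

A redundant feature (one that is nearly a linear rescaling of a kept feature) is discarded, while weakly correlated features survive.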
Metzger-Filho, Otto; Azambuja, Evandro de; Bradbury, Ian; Saini, Kamal S.; Bines, Jose; Simon, Sergio D. [UNIFESP; Van Dooren, Veerle; Aktan, Gursel; Pritchard, Kathleen I.; Wolff, Antonio C.; Smith, Ian; Jackisch, Christian; Lang, Istvan; Untch, Michael; Boyle, Frances
2013-01-01
Purpose. This study measured the time taken for setting up the different facets of Adjuvant Lapatinib and/or Trastuzumab Treatment Optimization (ALTTO), an international phase III study being conducted in 44 participating countries.Methods. Time to regulatory authority (RA) approval, time to ethics committee/institutional review board (EC/IRB) approval, time from study approval by EC/IRB to first randomized patient, and time from first to last randomized patient were prospectively collected i...
DEFF Research Database (Denmark)
Sessarego, Matias; Shen, Wen Zhong
2018-01-01
Modern wind turbine aero-structural blade design codes generally use a smaller fraction of the full design load base (DLB) or neglect turbulent inflow as defined by the International Electrotechnical Commission standards. The current article describes an automated blade design optimization method...... based on surrogate modeling that includes a very large number of design load cases (DLCs) including turbulence. In the present work, 325 DLCs representative of the full DLB are selected based on the message-passing-interface (MPI) limitations in Matlab. Other methods are currently being investigated, e.......g. a Python MPI implementation, to overcome the limitations in Matlab MPI and ultimately achieve a full DLB optimization framework. The reduced DLB and the annual energy production are computed using the state-of-the-art aero-servo-elastic tool HAWC2. Furthermore, some of the interior dimensions of the blade...
International Nuclear Information System (INIS)
Jin, L.; Huang, G.H.; Fan, Y.R.; Wang, L.; Wu, T.
2015-01-01
Highlights: • Propose a new energy PIS-IT2FSLP model for Xiamen City under uncertainties. • Analyze the energy supply, demand, and flow structure of the city. • Use real energy statistics to demonstrate the superiority of the PIS-IT2FSLP method. • Obtain optimal solutions that reflect environmental requirements. • Help local authorities devise an optimal energy strategy for the area. - Abstract: In this study, a new Pseudo-optimal Inexact Stochastic Interval Type-2 Fuzzy Sets Linear Programming (PIS-IT2FSLP) energy model is developed to support energy system planning and environmental requirements under uncertainty for Xiamen City. The PIS-IT2FSLP model is based on an integration of interval Type-2 (T2) Fuzzy Sets (FS) boundary programming and stochastic linear programming techniques, which enables it to robustly tackle uncertainties expressed as T2 FS intervals and probabilistic distributions within a general optimization framework. The new model can facilitate systematic analysis of energy supply, energy conversion processes, and environmental requirements, as well as provide capacity expansion options over multiple periods. The PIS-IT2FSLP model was applied to a real case study of the Xiamen energy system. Based on a robust two-step solution algorithm, reasonable solutions have been obtained that reflect trade-offs between economic and environmental requirements, and among the seasonally volatile energy demands in the right-hand-side constraints of the Xiamen energy system. The lower and upper solutions of the PIS-IT2FSLP can thus help local energy authorities adjust current energy patterns and devise an optimal energy strategy for the development of Xiamen City.
International Nuclear Information System (INIS)
Loef, Johan; Lind, Bengt K.; Brahme, Anders
1998-01-01
A new general beam optimization algorithm for inverse treatment planning is presented. It utilizes a new formulation of the probability to achieve complication-free tumour control. The new formulation explicitly describes the dependence of the treatment outcome on the incident fluence distribution, the patient geometry, the radiobiological properties of the patient and the fractionation schedule. In order to account for both measured and non-measured positioning uncertainties, the algorithm is based on a combination of dynamic and stochastic optimization techniques. Because of the difficulty in measuring all aspects of the intra- and interfractional variations in the patient geometry, such as internal organ displacements and deformations, these uncertainties are primarily accounted for in the treatment planning process by intensity modulation using stochastic optimization. The information about the deviations from the nominal fluence profiles and the nominal position of the patient relative to the beam that is obtained by portal imaging during treatment delivery, is used in a feedback loop to automatically adjust the profiles and the location of the patient for all subsequent treatments. Based on the treatment delivered in previous fractions, the algorithm furnishes optimal corrections for the remaining dose delivery both with regard to the fluence profile and its position relative to the patient. By dynamically refining the beam configuration from fraction to fraction, the algorithm generates an optimal sequence of treatments that very effectively reduces the influence of systematic and random set-up uncertainties to minimize and almost eliminate their overall effect on the treatment. Computer simulations have shown that the present algorithm leads to a significant increase in the probability of uncomplicated tumour control compared with the simple classical approach of adding fixed set-up margins to the internal target volume. (author)
Results on Parity-Check Matrices With Optimal Stopping And/Or Dead-End Set Enumerators
Weber, J.H.; Abdel-Ghaffar, K.A.S.
2008-01-01
The performance of iterative decoding techniques for linear block codes correcting erasures depends very much on the sizes of the stopping sets associated with the underlying Tanner graph, or, equivalently, the parity-check matrix representing the code. In this correspondence, we introduce the
Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar
2017-09-01
The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
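A classical-MDS-style stand-in for the decomposition described above can be sketched as follows: double-centre the squared pair-wise dissimilarities, then extract the leading eigenvector by power iteration. The data are synthetic and the paper's exact decomposition may differ in detail:

```python
def gram_from_distances(samples):
    """Double-centred Gram matrix from squared Euclidean distances
    (the classical multidimensional-scaling construction)."""
    n = len(samples)
    d2 = [[sum((a - b) ** 2 for a, b in zip(s, t)) for t in samples]
          for s in samples]
    rm = [sum(row) / n for row in d2]
    tot = sum(rm) / n
    return [[-0.5 * (d2[i][j] - rm[i] - rm[j] + tot)
             for j in range(n)] for i in range(n)]

def leading_eigenvector(m, iters=200):
    """Power iteration for a symmetric matrix; the start vector must
    not be orthogonal to the leading eigenvector."""
    v = [float(i + 1) for i in range(len(m))]
    for _ in range(iters):
        w = [sum(mij * vj for mij, vj in zip(row, v)) for row in m]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two intrinsic groups of "spectra": the sign of the first principal
# coordinate separates them.
samples = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]
scores = leading_eigenvector(gram_from_distances(samples))
```

Decomposing the pair-wise (dis)similarity structure rather than the raw covariance is what lets the leading factor align with the group separation.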
Energy Technology Data Exchange (ETDEWEB)
Libra, J.A.; Biskup, M.; Wiesmann, U. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Sahlmann, C.; Gnirss, R. [Berliner Wasserbetriebe, Berlin (Germany)
1999-07-01
The largest single energy consumer at sewage treatment plants is the aeration system of the activated sludge tanks. Controlling and optimizing the aeration system is therefore the most appropriate approach to cutting energy costs. The present paper reports on measurements of dynamic oxygen transfer by means of the off-gas method under operating conditions at the Berlin-Ruhleben sewage treatment plant. (orig.)
Directory of Open Access Journals (Sweden)
Ronald van 't Klooster
Full Text Available A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated against histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which reduced scan time by more than 60%. The presented approach can help others optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance, possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions.
Directory of Open Access Journals (Sweden)
V. A. Skvortsova
2012-01-01
Full Text Available The article presents the results of a study of clinical and biochemical blood markers of iron metabolism in infants. The study is part of research aimed at scientifically substantiating the statements on the introduction of complementary food set out in the «National program of the infants feeding optimization in the Russian Federation». Under controlled conditions the children were divided into 2 main groups: breast-fed and fed with artificial milk formulas. Each group was divided into sub-groups according to the age at introduction of complementary food: 4, 5 or 6 months. The data obtained suggest that iron status was adequate in both groups at the age of 4 months, before the introduction of complementary food; after that there was a gradual decrease in several values, most marked in breast-fed children with later introduction of complementary food. Comparative analysis showed that at the age of 9 months the lowest values were found in breast-fed children whose complementary food was introduced at the age of 6 months. This can be associated not only with late introduction of complementary food, but also with the difficulties of starting it at this age, leading to insufficient supply of certain nutrients, including iron. A detailed analysis of the diets of children in the different sub-groups will be discussed in the next article.
An Optimized Spline-Based Registration of a 3D CT to a Set of C-Arm Images
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.
Barr Fritcher, Emily G; Voss, Jesse S; Brankley, Shannon M; Campion, Michael B; Jenkins, Sarah M; Keeney, Matthew E; Henry, Michael R; Kerr, Sarah M; Chaiteerakij, Roongruedee; Pestova, Ekaterina V; Clayton, Amy C; Zhang, Jun; Roberts, Lewis R; Gores, Gregory J; Halling, Kevin C; Kipp, Benjamin R
2015-12-01
Pancreatobiliary cancer is detected by fluorescence in situ hybridization (FISH) of pancreatobiliary brush samples with UroVysion probes, originally designed to detect bladder cancer. We designed a set of new probes to detect pancreatobiliary cancer and compared its performance with that of UroVysion and routine cytology analysis. We tested a set of FISH probes on tumor tissues (cholangiocarcinoma or pancreatic carcinoma) and non-tumor tissues from 29 patients. We identified 4 probes that had high specificity for tumor vs non-tumor tissues; we called this set of probes pancreatobiliary FISH. We performed a retrospective analysis of brush samples from 272 patients who underwent endoscopic retrograde cholangiopancreatography for evaluation of malignancy at the Mayo Clinic; results were available from routine cytology and FISH with UroVysion probes. Archived residual specimens were retrieved and used to evaluate the pancreatobiliary FISH probes. Cutoff values for FISH with the pancreatobiliary probes were determined using 89 samples and validated in the remaining 183 samples. Clinical and pathologic evidence of malignancy in the pancreatobiliary tract within 2 years of brush sample collection was used as the standard; samples from patients without malignancies were used as negative controls. The validation cohort included 85 patients with malignancies (46.4%) and 114 patients with primary sclerosing cholangitis (62.3%). Samples containing cells above the cutoff for polysomy (copy number gain of ≥2 probes) were classified as positive in FISH with the UroVysion and pancreatobiliary probes. Multivariable logistic regression was used to estimate associations between clinical and pathology findings and results from FISH. The combination of FISH probes 1q21, 7p12, 8q24, and 9p21 identified cancer cells with 93% sensitivity and 100% specificity in pancreatobiliary tissue samples and were therefore included in the pancreatobiliary probe set. In the validation cohort of
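The polysomy rule above (copy-number gain of ≥2 probes) can be sketched as follows. The per-probe gain threshold (>2 signals per cell) and the sample-level cutoff are illustrative assumptions, not the study's validated cutoff values:

```python
# The four probes of the pancreatobiliary FISH set named in the abstract.
PROBES = ("1q21", "7p12", "8q24", "9p21")

def is_polysomic(cell):
    """cell: dict probe -> signal count; polysomic if at least two
    probes show copy-number gain (more than the disomic two signals)."""
    return sum(1 for p in PROBES if cell[p] > 2) >= 2

def sample_positive(cells, cutoff=5):
    """A brush sample is called positive when the number of polysomic
    cells reaches the cutoff (cutoff value here is made up)."""
    return sum(1 for c in cells if is_polysomic(c)) >= cutoff
```

In the study itself the cutoff was derived from 89 training samples and then validated on the remaining 183.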
Directory of Open Access Journals (Sweden)
Mustafa Akpinar
2017-06-01
Full Text Available The worldwide increase in energy consumption is reflected in the consumption of natural gas, and this increase requires additional investment. It also creates imbalances in demand forecasting, with penalties applied when error rates exceed acceptable limits; as forecasting errors increase, penalties grow exponentially. The optimal use of natural gas as a scarce resource is therefore important. There are various forecast horizons for natural gas demand, and the most difficult among them is day-ahead forecasting, since it is hard to implement with low error rates. The objective of this study is to stabilize day-ahead demand forecasts of natural gas using low-consumption subscriber data, minimizing error with univariate artificial bee colony-based artificial neural networks (ANN-ABC). For this purpose, four years (2011-2014) of daily consumption data from households and low-consumption commercial users were gathered. Previous consumption values are used to forecast day-ahead consumption with a sliding window technique; other independent variables are not taken into account. The dataset is divided into two parts: the first three years of daily consumption values are used with a seven-day window for training the networks, while the last year is used for day-ahead demand forecasting. Results show that ANN-ABC is a strong, stable, and effective method, achieving a low training error of 14.9 mean absolute percentage error (MAPE) with the univariate sliding window technique.
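The seven-day sliding-window setup and the MAPE metric described above can be sketched as follows; the consumption series here is synthetic:

```python
def sliding_windows(series, width=7):
    """Build (inputs, target) pairs: `width` past days -> next day.
    These pairs would feed the ANN-ABC training in the study."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))
```

For a daily series of length n this yields n - 7 training pairs; the last year's pairs are held out and scored with MAPE.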
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp
2012-01-01
We consider the optimization of power set-points to a large number of wind turbines arranged within close vicinity of each other in a wind farm. The goal is to maximize the total electric power extracted from the wind, taking the wake effects that couple the individual turbines in the farm into a...... is far superior to, a more naive distribution scheme. We employ a fast convex quadratic programming solver to carry out the iterations in the range of microseconds for even large wind farms....
International Nuclear Information System (INIS)
Phan Trong Phuc; Luu Anh Tuyen; La Ly Nguyen; Nguyen Thi Ngoc Hue; Pham Thi Hue; Do Duy Khiem
2015-01-01
For the purpose of operating and optimizing analyses on the wavelength-dispersive X-ray fluorescence (WDXRF) spectrometer (model S8 TIGER) from the Enhancing Equipment Project (TCTTB) 2011-2012, we set up sampling and analytical procedures for different sample types; we constructed a multi-element calibration curve for clay samples; and we analyzed the elemental concentrations of 5 clay samples by XRF and compared the results with those given by NAA. Equipment sensitivity was tested by analyzing the elemental concentrations of 2 kaolin standard samples. The results show that the S8 TIGER equipment is in good condition and is able to analyze powder clay samples accurately. (author)
Bergeest, Jan-Philip; Rohr, Karl
2012-10-01
In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches.
Directory of Open Access Journals (Sweden)
Lekha Pandit
2013-01-01
Full Text Available Background: In resource-poor settings, the management of neuromyelitis optica (NMO) and NMO spectrum (NMOS) disorders is limited by delayed diagnosis and financial constraints. Aim: To devise a cost-effective strategy for the management of NMO and related disorders in India. Materials and Methods: A cost-effective and disease-specific protocol was used to evaluate the course and treatment outcome of 70 consecutive patients. Results: Forty-five patients (65%) had a relapse from onset and included NMO (n = 20), recurrent transverse myelitis (RTM; n = 10), and recurrent optic neuritis (ROPN; n = 15). In 38 (84.4%) patients presenting after multiple attacks, the diagnosis was made clinically. Only 7 patients with a relapsing course were seen at onset, comprising ROPN (n = 5), NMO (n = 1), and RTM (n = 1). They had a second attack after a median interval of 1 ± 0.9 years, which was captured through our dedicated review process. Twenty-five patients had isolated longitudinally extensive transverse myelitis (LETM), of whom 20 (80%) remained ambulant at a follow-up of 3 ± 1.9 years. Twelve patients (17%) with a median expanded disability status scale (EDSS) score of 8.5 at entry had a fatal outcome. Serum NMO-IgG testing was done in selected patients and was positive in 7 of 18 (39%). Irrespective of NMO-IgG status, treatment-compliant patients (44.4%) showed significant improvement in EDSS (P ≤ 0.001). Conclusions: Early clinical diagnosis and treatment compliance were important for a good outcome. Isolated LETM was most likely a post-infectious demyelinating disorder in our setting. NMO and NMOS disorders contributed 14.9% (45/303) of all demyelinating disorders in our registry.
Doubly stochastic radial basis function methods
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameter as a stochastic variable whose distribution is determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant-shape-parameter formulation (in terms of accuracy at comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
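The LOOCV baseline that the DSRBF method is compared against can be sketched as follows: a brute-force leave-one-out error for a Gaussian RBF interpolant on 1D toy data, minimized over candidate shape parameters. The DSRBF method itself instead samples the shape parameter stochastically; the data and kernel here are illustrative:

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def rbf_interpolant(xs, ys, eps):
    """Gaussian RBF interpolant with shape parameter eps."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    a = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(a, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

def loocv_error(xs, ys, eps):
    """Sum of squared leave-one-out prediction errors for this eps."""
    err = 0.0
    for k in range(len(xs)):
        f = rbf_interpolant(xs[:k] + xs[k + 1:], ys[:k] + ys[k + 1:], eps)
        err += (f(xs[k]) - ys[k]) ** 2
    return err
```

Minimizing `loocv_error` over a grid of candidate `eps` values gives the "optimal LOOCV" shape parameter; treating `eps` as a random variable with a LOOCV-derived distribution is the doubly stochastic twist.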
Energy Technology Data Exchange (ETDEWEB)
Chavan, A.; Kirchhoff, T.; Baus, S.; Galanski, M. [Medizinische Hochschule Hannover (Germany). Abt. Diagnostische Radiologie 1; Pichlmaier, M. [Medizinische Hochschule Hannover (Germany). Leibniz Forschungslab. fuer Biotechnologie und Kuenstliche Organe an der Klinik fuer Thorax-, Herz- und Gefaesschirurgie
2001-08-01
Purpose. In the endoluminal therapy of abdominal aortic aneurysms, a short proximal aneurysm neck, endoleaks, and the large size and stiffness of the introducer systems are responsible for many of the complications and sub-optimal outcomes. The purpose of the present review article is to suggest strategies to minimize these complications based on the results of experimental studies in animals. Material and methods. After implanting various types of stents across the renal artery origins, the functional and morphological changes in the kidneys and renal vessels were studied by various authors. In order to prevent progressive widening of the proximal aneurysm neck and graft dislocation, Sonesson et al. performed laparoscopic banding around the proximal neck in pigs. To study the effects of endoleaks, Marty, Schurink and Pitton carried out pressure measurements in experimental aneurysms with and without endoleaks. Sakaguchi and Pavcnik developed the 'Twin-tube endografts' (TTEG) and the 'Bifurcated drum occluder endografts' (BDOEG) and tested them in dogs. Results. Up to 3 months after suprarenal stent placement, Chavan et al. detected no significant fall in the mean inulin clearance in sheep (140±46 ml/min before, 137±58 ml/min after). Nasim et al. and Malina et al. reported similar observations with respect to renal function. Suprarenal fixation may result in isolated thrombotic occlusions of the renal arteries and microinfarcts in the kidneys. Mean aortic diameters at the level of banding were significantly smaller in the animals with aortic banding than in the control group without banding (8 mm vs 11 mm, p=0.004); the banding provided secure proximal fixation of the stent-graft. Persistent endoleaks resulted in significantly higher intra-aneurysmal pressures. Although the TTEG and BDOEG stent-grafts required smaller sheaths, occlusions were observed in 8% (TTEG) and 60% (BDOEG) of the graft limbs. (orig.)
Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame
Le Bail, Karine; Gordon, David
2010-01-01
Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they are a good compromise of different criteria, such as statistical stability and sky distribution, as well as having a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ by up to 20% of the sources. Improvements in observing, recording, and networks are some of the causes, with the CRF showing better stability over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which does not necessarily hold for some sources.
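The Allan variance used above as a stability metric is simple to compute. A minimal sketch (not the authors' code; the synthetic coordinate series, toy units, and the non-overlapping estimator are our assumptions) for one source coordinate sampled at regular intervals:

```python
import numpy as np

def allan_variance(y, tau):
    """Non-overlapping Allan variance of a time series `y` for an
    averaging window of `tau` samples: half the mean squared
    difference of consecutive window averages."""
    n = len(y) // tau  # number of complete windows
    if n < 2:
        raise ValueError("need at least two complete windows")
    means = y[:n * tau].reshape(n, tau).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# A statistically stable (white-noise) coordinate series versus one
# with a slow linear drift, as an unstable source might show.
rng = np.random.default_rng(0)
stable = rng.normal(0.0, 0.1, 2000)          # toy position scatter
drifting = stable + 1e-3 * np.arange(2000)   # add a linear drift
for tau in (1, 10, 100):
    print(tau, allan_variance(stable, tau), allan_variance(drifting, tau))
```

For white noise the Allan variance falls roughly as 1/tau, while drift makes it flatten or rise at long tau; that contrast is what makes the statistic useful for flagging unstable defining sources.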
Roper, Ian P E; Besley, Nicholas A
2016-03-21
The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.
Energy Technology Data Exchange (ETDEWEB)
Odette, G Robert; Cunningham, Nicholas J.; Wu, Yuan; Etienne, Auriane; Stergar, Erich; Yamamoto, Takuya
2012-02-21
The broad objective of this NEUP was to further develop a class of 12-15Cr ferritic alloys that are dispersion strengthened and made radiation tolerant by an ultrahigh density of Y-Ti-O nanofeatures (NFs) in the size range of less than 5 nm. We call these potentially transformable materials nanostructured ferritic alloys (NFAs). NFAs are typically processed by ball milling pre-alloyed rapidly solidified powders and yttria (Y2O3) powders. Proper milling effectively dissolves the Ti, Y and O solutes that precipitate as NFs during hot consolidation. The tasks in the present study included examining alternative processing paths, characterizing and optimizing the NFs and investigating solid state joining. Alternative processing paths involved rapid solidification by gas atomization of Fe, 14% Cr, 3% W, and 0.4% Ti powders that are also pre-alloyed with 0.2% Y (14YWT), where the compositions are in wt.%. The focus is on exploring the possibility of minimizing, or even eliminating, the milling time, as well as producing alloys with more homogeneous distributions of NFs and a more uniform, fine grain size. Three atomization environments were explored: Ar, Ar plus O (Ar/O) and He. The characterization of powders and alloys occurred through each processing step: powder production by gas atomization; powder milling; and powder annealing or hot consolidation by hot isostatic pressing (HIPing) or hot extrusion. The characterization studies of the materials described here include various combinations of: a) bulk chemistry; b) electron probe microanalysis (EPMA); c) atom probe tomography (APT); d) small angle neutron scattering (SANS); e) various types of scanning and transmission electron microscopy (SEM and TEM); and f) microhardness testing. The bulk chemistry measurements show that preliminary batches of gas-atomized powders could be produced within specified composition ranges. However, EPMA and TEM showed that the Y is heterogeneously distributed and phase separated, but
Karki, Kishor; Hugo, Geoffrey D; Ford, John C; Olsen, Kathryn M; Saraiya, Siddharth; Groves, Robert; Weiss, Elisabeth
2015-10-21
The purpose of this study was to determine optimal sets of b-values in diffusion-weighted MRI (DW-MRI) for obtaining monoexponential apparent diffusion coefficient (ADC) close to perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADCIVIM) in non-small cell lung cancer. Ten subjects had 40 DW-MRI scans before and during radiotherapy in a 1.5 T MRI scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, eight b-values of 0-1000 μs μm(-2), pixel size = 1.98 × 1.98 mm(2), slice thickness = 6 mm, interslice gap = 1.2 mm, 7 axial slices and total acquisition time ≈ 6 min. One or more DW-MRI scans together covered the whole tumour volume. Monoexponential model ADC values using various b-value sets were compared to reference-standard ADCIVIM values using all eight b-values. Intra-scan coefficient of variation (CV) of active tumour volumes was computed to compare the relative noise in ADC maps. ADC values for one pre-treatment DW-MRI scan of each of the 10 subjects were computed using b-value pairs from DW-MRI images synthesized for b-values of 0-2000 μs μm(-2) from the estimated IVIM parametric maps and corrupted by various Rician noise levels. The square root of mean of squared error percentage (RMSE) of the ADC value relative to the corresponding ADCIVIM for the tumour volume of the scan was computed. Monoexponential ADC values for the b-value sets of 250 and 1000; 250, 500 and 1000; 250, 650 and 1000; 250, 800 and 1000; and 250-1000 μs μm(-2) were not significantly different from ADCIVIM values (p > 0.05, paired t-test). Mean error in ADC values for these sets relative to ADCIVIM was within 3.5%. Intra-scan CVs for these sets were comparable to that for ADCIVIM. The monoexponential ADC values for the other sets (0-1000; 50-1000; 100-1000; 500-1000; and 250 and 800 μs μm(-2)) were significantly different from the ADCIVIM values. From Rician noise
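The monoexponential fit behind these comparisons is a log-linear regression of signal against b-value. A hedged sketch (synthetic signals, an invented ADC value, and an arbitrary reduced b-value set; not the study's pipeline), showing that a reduced set reproduces the all-b-value ADC when the decay really is monoexponential:

```python
import numpy as np

def monoexp_adc(b_values, signals):
    """Monoexponential ADC via least-squares slope of ln(S) vs b,
    assuming S(b) = S0 * exp(-b * ADC)."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, _intercept = np.polyfit(b, log_s, 1)  # highest power first
    return -slope

# Synthetic voxel: ADC in units of 1/b, signals generated from the model.
true_adc = 1.2e-3
b_all = np.array([0, 50, 100, 250, 500, 650, 800, 1000], dtype=float)
s_all = 100.0 * np.exp(-b_all * true_adc)

# A reduced 3-b-value set recovers the same ADC on this noiseless,
# purely monoexponential data.
b_sub = np.array([250.0, 650.0, 1000.0])
s_sub = 100.0 * np.exp(-b_sub * true_adc)
print(monoexp_adc(b_all, s_all), monoexp_adc(b_sub, s_sub))
```

On real data the low-b points also carry the perfusion (IVIM) contribution, which is consistent with the study's finding that sets starting at b = 250 tracked ADCIVIM best.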
Energy Technology Data Exchange (ETDEWEB)
Barreto, E.B.; Fiuza, R.A.; Jose, N.M.; Boaventura, J.S.; Carvalho, L.F.V. [Universidade Federal da Bahia (IQ/UFBA), Salvador, BA (Brazil). Inst. de Quimica. Grupo de Energia e Ciencia dos Materiais
2008-07-01
With the growing concern over the emission of polluting gases into the atmosphere and the search for alternative sources of clean energy that can meet the future shortage of oil, fuel cells have become a target of scientific research worldwide. Among the various types of fuel cells is the PEMFC (polymer electrolyte membrane fuel cell), a device with high efficiency and no emission of pollutants. The aim of this work was to produce and optimize membranes based on S-PEEK (sulfonated poly-ether-ether-ketone) with varying degrees of sulfonation, to be applied as electrolytes in fuel cells of the PEMFC type. The membranes were characterized chemically, thermally, and electrochemically. (author)
International Nuclear Information System (INIS)
Pujol, L.; Suarez-Navarro, J.A.; Montero, M.
1998-01-01
The determination of radiological water quality is useful for a wide range of environmental studies. In these cases, the gross alpha activity is one of the parameters to determine. This parameter makes it possible to decide whether further radiological analyses are necessary in order to identify and quantify the alpha emitters present in water. The usual method for monitoring gross alpha activity includes sample evaporation to dryness on a disk and counting with a ZnS(Ag) scintillation detector. The detector electronics has two user-adjustable components: the high voltage applied to the photomultiplier tubes and the low-level discriminator used to eliminate electronic noise. Optimization of the high voltage and the low-level discriminator is needed to reach the best counting conditions. This paper is a preliminary study of the procedure followed for setting up and optimizing the detector electronics in the laboratories of CEDEX for the measurement of gross alpha activity. (Author)
Fuzzy resource optimization for safeguards
International Nuclear Information System (INIS)
Zardecki, A.; Markin, J.T.
1991-01-01
Authorization, enforcement, and verification -- three key functions of safeguards systems -- form the basis of a hierarchical description of the system risk. When formulated in terms of linguistic rather than numeric attributes, the risk can be computed through an algorithm based on the notion of fuzzy sets. Similarly, this formulation allows one to analyze the optimal resource allocation by maximizing the overall detection probability, regarded as a linguistic variable. After summarizing the necessary elements of fuzzy set theory, we outline the basic algorithm. This is followed by a sample computation of the fuzzy optimization. 10 refs., 1 tab
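The flavour of such a linguistic optimization can be sketched in a few lines. This toy example is ours, not the authors' algorithm: the linguistic scale, the min-aggregation over the three safeguards functions, and the centroid defuzzification are all assumptions.

```python
# Linguistic grades as triangular fuzzy numbers (low, peak, high).
LINGUISTIC = {
    "low":    (0.0, 0.2, 0.4),
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.6, 0.8, 1.0),
}

def fuzzy_min(a, b):
    # Componentwise minimum: the fuzzy "weakest link" of two grades.
    return tuple(min(x, y) for x, y in zip(a, b))

def centroid(t):
    # Centroid defuzzification of a triangular fuzzy number.
    return sum(t) / 3.0

def best_allocation(options):
    """options: allocation name -> linguistic detection grade for each
    safeguards function (authorization, enforcement, verification).
    Returns the allocation maximizing the defuzzified system grade."""
    def system_grade(grades):
        acc = (1.0, 1.0, 1.0)
        for g in grades:
            acc = fuzzy_min(acc, LINGUISTIC[g])
        return acc
    return max(options, key=lambda k: centroid(system_grade(options[k])))

allocations = {
    "A": ("high", "low", "medium"),    # weakest link: enforcement
    "B": ("medium", "medium", "medium"),
}
print(best_allocation(allocations))
```

Allocation A is dragged down by its weak enforcement grade, so the balanced allocation B wins; that weakest-link behaviour is exactly what min-aggregation encodes.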
International Nuclear Information System (INIS)
Nielsen, Joseph; Tokuhiro, Akira; Khatry, Jivan; Hiromoto, Robert
2014-01-01
Traditional probabilistic risk assessment (PRA) methods have been developed to evaluate risk associated with complex systems; however, PRA methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. In order to address this combinatorial complexity, a branch-and-bound optimization technique is applied to the DPRA formalism to control the combinatorial state explosion. In addition, a new characteristic scaling metric (LENDIT – length, energy, number, distribution, information and time) is proposed as linear constraints that are used to guide the branch-and-bound algorithm to limit the number of possible states to be analyzed. The LENDIT characterization is divided into four groups or sets – 'state, system, resource and response' (S2R2) – describing reactor operations (normal and off-normal). In this paper we introduce the branch-and-bound DPRA approach and the application of LENDIT scales and S2R2 sets to a station blackout (SBO) transient. (author)
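The combinatorial-explosion control described above follows the standard branch-and-bound pattern: walk the dynamic event tree depth-first and discard any subtree whose bound already violates a constraint. A minimal sketch, with invented branch costs and a single budget standing in for the LENDIT-style linear constraints (not the authors' implementation):

```python
def branch_and_bound(branching_factors, cost_per_branch, budget):
    """Enumerate event sequences depth-first, pruning any branch whose
    accumulated cost exceeds `budget` (the bound), and return the
    surviving leaf sequences plus the number of tree nodes visited."""
    leaves, visited = [], 0

    def expand(depth, seq, cost):
        nonlocal visited
        visited += 1
        if cost > budget:  # bound violated: prune the whole subtree
            return
        if depth == len(branching_factors):
            leaves.append(tuple(seq))
            return
        for b in range(branching_factors[depth]):
            expand(depth + 1, seq + [b], cost + cost_per_branch[depth][b])

    expand(0, [], 0.0)
    return leaves, visited

# Three transitions with two branches each: 8 sequences unpruned.
leaves, visited = branch_and_bound(
    [2, 2, 2],
    [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
    budget=1.0,
)
print(len(leaves), visited)
```

Only 4 of the 8 event sequences survive the budget, and 13 of the 15 possible tree nodes are visited: the bound removes subtrees without enumerating their descendants, which is the mechanism that keeps the DPRA state space tractable.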
International Nuclear Information System (INIS)
Reissberg, S.; Hoeschen, C.; Redlich, U.; Scherlach, C.; Preuss, H.; Kaestner, A.; Doehring, W.; Woischneck, D.; Schuetze, M.; Reichardt, K.; Firsching, R.
2002-01-01
Purpose: To improve the diagnostic quality of lateral radiographs of the cervical spine by pre-processing the image data sets produced by a transparent imaging plate with both-side reading and to evaluate any possible impact on minimizing the number of additional radiographs and supplementary investigations. Material and Methods: One hundred lateral digital radiographs of the cervical spine were processed with two different methods: processing of each data set using the system-immanent parameters and using the manual mode. The difference between the two types of processing is the level of the latitude value. Hard copies of the processed images were judged by five radiologists and three neurosurgeons. The evaluation applied the image criteria score (ICS) without conventional reference images. Results: In 99% of the lateral radiographs of the cervical spine, all vertebral bodies could be completely delineated using the manual mode, but only 76% of the images processed with the system-immanent parameters showed all vertebral bodies. Thus, the manual mode enabled the evaluation of up to two additional more caudal vertebral bodies. The manual mode processing was significantly better concerning object size and processing artifacts. This optimized image processing and the resultant minimization of supplementary investigations was calculated to correspond to a theoretical dose reduction of about 50%. (orig.)
Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.
2017-06-01
In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ˜ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ˜ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30 % after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10-11 to 4.0 × 10-11 cm3 molec-1 s-1. The average enthalpy of vaporization of secondary organic aerosol
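The partitioning step at the core of any volatility basis set treatment reduces to a small fixed-point problem. A self-contained sketch with four invented bins, using the simple equilibrium-absorption form rather than the paper's hybrid-VBS aging scheme:

```python
import numpy as np

def partition(c_total, c_star, seed_oa=1e-3, tol=1e-9):
    """Equilibrium gas/particle partitioning over a volatility basis
    set: bin i condenses with fraction xi_i = 1 / (1 + C*_i / C_OA),
    where C_OA is the total condensed mass, so the system is solved
    by fixed-point iteration. Concentrations in μg m^-3."""
    c_total = np.asarray(c_total, dtype=float)
    c_star = np.asarray(c_star, dtype=float)
    c_oa = seed_oa
    for _ in range(1000):
        xi = 1.0 / (1.0 + c_star / c_oa)       # condensed fraction per bin
        new_oa = float(np.sum(c_total * xi))   # condensed mass feeds back
        if abs(new_oa - c_oa) < tol:
            break
        c_oa = new_oa
    return xi, c_oa

# Four-bin VBS with C* = 0.1, 1, 10, 100 μg m^-3 and 1 μg m^-3 per bin.
xi, c_oa = partition([1.0, 1.0, 1.0, 1.0], [0.1, 1.0, 10.0, 100.0])
print(xi.round(3), round(c_oa, 3))
```

Low-volatility bins (small C*) end up almost entirely in the particle phase and high-volatility bins mostly in the gas phase, while the condensed mass C_OA feeds back into every bin's fraction, hence the iteration.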
Energy Technology Data Exchange (ETDEWEB)
Nolte-Ernsting, C.C.A.; Krombach, G.; Staatz, G.; Kilbinger, M.; Adam, G.B.; Guenther, R.W. [RWTH Aachen (Germany). Klinik fuer Radiologische Diagnostik
1999-06-01
Purpose: To investigate the feasibility of reconstructing a virtual endoscopy from MR imaging data sets of the upper urinary tract. Method: The data obtained from 28 contrast-enhanced MR urographic examinations (5 normal; 23 pathologic) were post-processed to reconstruct a virtual ureterorenoscopy (VURS) using a threshold image segmentation. The visualization of the upper urinary tract was based on the acquisition of T1-weighted 3D gradient-echo sequences after intravenous administration of gadolinium-DTPA and a prior injection of low-dose furosemide. Results: The employed MR urography technique created in all 28 cases a complete and strong contrast enhancement of the urinary tract. These 3D sequence data allowed the reconstruction of a VURS, even when the collecting system was not dilated. The best accuracy was provided by the MR urography sequences with the smallest voxel size. Moreover, the data acquisition based on a breath-hold technique has proved superior to that using respiratory gating. Inside the renal pelvis, all calices could be assessed by turning the virtual endoscope in the appropriate direction. The visualization of the ureteral orifices in the bladder was also possible. All filling defects that were diagnosed by MR urography could be evaluated from the endoluminal view using the VURS. The exact characterization of the lesions based only on the assessment of the surface structure was difficult. Conclusion: A virtual endoscopy of the upper urinary tract can be successfully reconstructed using the data sets of high-resolution 3D MR urography sequences. (orig.)
Optimally Stopped Optimization
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
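A toy version of the expected-cost figure of merit makes the idea concrete. Assume (our assumptions, not the paper's setup) that each call to a randomized solver independently draws a value V at cost-per-call c, and that we stop at the first V ≤ θ; the expected total cost is then E[V | V ≤ θ] + c / P(V ≤ θ), and the best θ trades solution quality against the number of calls:

```python
import random

def expected_cost(samples, theta, c):
    """Empirical expected total cost of the rule "stop when V <= theta":
    mean accepted value plus cost-per-call over acceptance probability."""
    accepted = [v for v in samples if v <= theta]
    if not accepted:
        return float("inf")
    p = len(accepted) / len(samples)
    return sum(accepted) / len(accepted) + c / p

random.seed(1)
# Toy solver: each call returns an exponentially distributed value.
samples = [random.expovariate(1.0) for _ in range(100_000)]
thetas = [0.05 * k for k in range(1, 60)]
best = min(thetas, key=lambda t: expected_cost(samples, t, c=0.1))
print(best, expected_cost(samples, best, c=0.1))
```

A loose θ accepts poor solutions, while a tight θ inflates the expected number of calls; the empirical minimizer lands in between, which is the trade-off the optimal-stopping formulation captures without needing the optimal solution.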
Directory of Open Access Journals (Sweden)
Pesole Graziano
2005-10-01
Full Text Available Abstract Background: Currently available methods to predict splice sites are mainly based on the independent and progressive alignment of transcript data (mostly ESTs) to the genomic sequence. Apart from often being computationally expensive, this approach is vulnerable to several problems – hence the need to develop novel strategies. Results: We propose a method, based on a novel multiple genome-EST alignment algorithm, for the detection of splice sites. To avoid limitations of splice site prediction (mainly, over-predictions) due to independent single EST alignments to the genomic sequence, our approach performs a multiple alignment of transcript data to the genomic sequence based on the combined analysis of all available data. We recast the problem of predicting constitutive and alternative splicing as an optimization problem, where the optimal multiple transcript alignment minimizes the number of exons and hence of splice site observations. We have implemented a splice site predictor based on this algorithm in the software tool ASPIC (Alternative Splicing PredICtion). It is distinguished from other methods based on BLAST-like tools by the incorporation of entirely new ad hoc procedures for accurate and computationally efficient transcript alignment and adopts dynamic programming for the refinement of intron boundaries. ASPIC also provides the minimal set of non-mergeable transcript isoforms compatible with the detected splicing events. The ASPIC web resource is dynamically interconnected with the Ensembl and Unigene databases and also implements an upload facility. Conclusion: Extensive benchmarking shows that ASPIC outperforms other existing methods in the detection of novel splicing isoforms and in the minimization of over-predictions. ASPIC also requires a lower computation time for processing a single gene and an EST cluster. The ASPIC web resource is available at http://aspic.algo.disco.unimib.it/aspic-devel/.
Metzger-Filho, Otto; de Azambuja, Evandro; Bradbury, Ian; Saini, Kamal S; Bines, José; Simon, Sergio D; Dooren, Veerle Van; Aktan, Gursel; Pritchard, Kathleen I; Wolff, Antonio C; Smith, Ian; Jackisch, Christian; Lang, Istvan; Untch, Michael; Boyle, Frances; Xu, Binghe; Baselga, Jose; Perez, Edith A; Piccart-Gebhart, Martine
2013-01-01
This study measured the time taken for setting up the different facets of adjuvant lapatinib and/or trastuzumab treatment optimization (ALTTO), an international phase III study being conducted in 44 participating countries. Time to regulatory authority (RA) approval, time to ethics committee/institutional review board (EC/IRB) approval, time from study approval by EC/IRB to first randomized patient, and time from first to last randomized patient were prospectively collected in the ALTTO study. Analyses were conducted by grouping countries into either geographic regions or economic classes as per the World Bank's criteria. South America had a significantly longer time to RA approval (median: 236 days, range: 21-257 days) than Europe (median: 52 days, range: 0-151 days), North America (median: 26 days, range: 22-30 days), and Asia-Pacific (median: 62 days, range: 37-75 days). Upper-middle-income economies had longer times to RA approval (median: 123 days, range: 21-257 days) than high-income (median: 47 days, range: 0-112 days) and lower-middle-income economies (median: 57 days, range: 37-62 days). No significant difference was observed for time to EC/IRB approval across the studied regions (median: 59 days, range: 0-174 days). Overall, the median time from EC/IRB approval to first recruited patient was 169 days (range: 26-412 days). This study highlights the long time intervals required to activate a global phase III trial. Collaborative research groups, pharmaceutical industry sponsors, and regulatory authorities should analyze the current system and enter into dialogue for optimizing local policies. This would enable faster access of patients to innovative therapies and enhance the efficiency of clinical research.
Meza, James M; Hickey, Edward J; Blackstone, Eugene H; Jaquiss, Robert D B; Anderson, Brett R; Williams, William G; Cai, Sally; Van Arsdell, Glen S; Karamlou, Tara; McCrindle, Brian W
2017-10-31
In infants requiring 3-stage single-ventricle palliation for hypoplastic left heart syndrome, attrition after the Norwood procedure remains significant. The effect of the timing of stage 2 palliation (S2P), a physician-modifiable factor, on long-term survival is not well understood. We hypothesized that an optimal interval between the Norwood and S2P that both minimizes pre-S2P attrition and maximizes post-S2P survival exists and is associated with individual patient characteristics. The National Institutes of Health/National Heart, Lung, and Blood Institute Pediatric Heart Network Single Ventricle Reconstruction Trial public data set was used. Transplant-free survival (TFS) was modeled from (1) Norwood to S2P and (2) S2P to 3 years by using parametric hazard analysis. Factors associated with death or heart transplantation were determined for each interval. To account for staged procedures, risk-adjusted, 3-year, post-Norwood TFS (the probability of TFS at 3 years given survival to S2P) was calculated using parametric conditional survival analysis. TFS from the Norwood to S2P was first predicted. TFS after S2P to 3 years was then predicted and adjusted for attrition before S2P by multiplying by the estimate of TFS to S2P. The optimal timing of S2P was determined by generating nomograms of risk-adjusted, 3-year, post-Norwood, TFS versus the interval from the Norwood to S2P. Of 547 included patients, 399 survived to S2P (73%). Of the survivors to S2P, 349 (87%) survived to 3-year follow-up. The median interval from the Norwood to S2P was 5.1 (interquartile range, 4.1-6.0) months. The risk-adjusted, 3-year, TFS was 68±7%. A Norwood-S2P interval of 3 to 6 months was associated with greatest 3-year TFS overall and in patients with few risk factors. In patients with multiple risk factors, TFS was severely compromised, regardless of the timing of S2P and most severely when S2P was performed early. No difference in the optimal timing of S2P existed when stratified by
Energy Technology Data Exchange (ETDEWEB)
Karki, K; Hugo, G; Ford, J; Saraiya, S; Weiss, E [Radiation Oncology, Virginia Commonwealth University, Richmond, VA (United States); Olsen, K; Groves, R [Radiology, Virginia Commonwealth University, Richmond, VA (United States)
2014-06-15
Purpose: Diffusion-weighted MRI (DW-MRI) is increasingly being investigated for radiotherapy planning and response assessment. Selection of a limited number of b-values in DW-MRI is important to keep geometrical variations low and imaging time short. We investigated various b-value sets to determine an optimal set for obtaining monoexponential apparent diffusion coefficient (ADC) close to perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADC_IVIM) in non-small cell lung cancer. Methods: Seven patients had 27 DW-MRI scans before and during radiotherapy in a 1.5 T scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR ≈ 4500 ms, TE = 74 ms, pixel size = 1.98 × 1.98 mm², slice thickness = 4–6 mm and 7 axial slices. Diffusion gradients were applied to all three axes, producing trace-weighted images with eight b-values of 0–1000 μs/μm². Monoexponential model ADC values using various b-value sets were compared to ADC_IVIM using all b-values. To compare the relative noise in ADC maps, intra-scan coefficient of variation (CV) of active tumor volumes was computed. Results: ADC_IVIM, perfusion coefficient and perfusion fraction for tumor volumes were in the range of 880–1622 μm²/s, 8119–33834 μm²/s and 0.104–0.349, respectively. ADC values using sets of 250, 800 and 1000; 250, 650 and 1000; and 250–1000 μs/μm² only were not significantly different from ADC_IVIM (p > 0.05, paired t-test). Errors in ADC values for 0–1000, 50–1000, 100–1000, 250–1000, 500–1000, and the three b-value sets 250, 500 and 1000; 250, 650 and 1000; and 250, 800 and 1000 μs/μm² were 15.0, 9.4, 5.6, 1.4, 11.7, 3.7, 2.0 and 0.2% relative to the reference-standard ADC_IVIM, respectively. Mean intra-scan CV was 20.2, 20.9, 21.9, 24.9, 32.6, 25.8, 25.4 and 24.8%, respectively, whereas that for ADC_IVIM was 23.3%. Conclusion: ADC values of two 3-b-value sets
Stapelberg, S.; Jerg, M.; Stengel, M.; Hollmann, R.
2014-12-01
In 2010 the ESA Climate Change Initiative (CCI) Cloud project was started with the objective of generating a long-term coherent data set of cloud properties. The cloud properties considered are cloud mask, cloud top estimates, cloud optical thickness, cloud effective radius and post-processed parameters such as cloud liquid and ice water path. During the first phase of the project 3 years of data spanning 2007 to 2009 have been produced on a global gridded daily and monthly mean basis. Next to the processing, an extended evaluation study was started in order to gain a first understanding of the quality of the retrieved data. The critical discussion of the results of the evaluation plays a key role in the further development and improvement of the dataset's quality. The presentation will give a short overview of the evaluation study undertaken in the Cloud_cci project. The focus will be on the evaluation of gridded, monthly mean cloud fraction and cloud top data from the Cloud_cci AVHRR-heritage dataset with CLARA-A1, MODIS-Coll5, PATMOS-X and ISCCP data. Exemplary results will be shown. Strengths and shortcomings of the retrieval scheme as well as possible impacts of averaging approaches on the evaluation will be discussed. An overview of Cloud_cci Phase 2 will be given.
The pointer basis and the feedback stabilization of quantum systems
International Nuclear Information System (INIS)
Li, L; Chia, A; Wiseman, H M
2014-01-01
The dynamics for an open quantum system can be ‘unravelled’ in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states, evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere (Atkins et al 2005 Europhys. Lett. 69 163) that the ‘pointer basis’ as introduced by Zurek et al (1993 Phys. Rev. Lett. 70 1187), should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However if the feedback control is weak compared to the decoherence, this is not the case. (paper)
Villanueva, Matthew G; Lane, Christianne Joy; Schroeder, E Todd
2015-02-01
To determine whether 8 weeks of periodized strength resistance training (RT) utilizing relatively short rest interval lengths (RI) between sets (SS) would induce greater improvements in body composition and muscular performance, compared to the same RT program utilizing extended RI (SL). 22 male volunteers (SS: n = 11, 65.6 ± 3.4 years; SL: n = 11, 70.3 ± 4.9 years) were assigned to one of two strength RT groups, following 4 weeks of periodized hypertrophic RT (PHRT): strength RT with 60-s RI (SS) or strength RT with 4-min RI (SL). Prior to randomization, all 22 study participants trained 3 days/week, for 4 weeks, targeting hypertrophy; from week 4 to week 12, SS and SL followed the same periodized strength RT program for 8 weeks, with RI the only difference in their RT prescription. Following PHRT, all study participants experienced significant increases in lean body mass (LBM) and total body strength, and significant decreases in body fat. High-intensity strength RT with shortened RI induced significantly greater enhancements in body composition, muscular performance, and functional performance, compared to the same RT prescription with extended RI, in older men. Applied professionals may optimize certain RT-induced adaptations by incorporating shortened RI.
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2017-09-01
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
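A bare-bones particle swarm optimizer illustrates the population-based search that the methodology builds on. The sketch below is generic: the swarm size, inertia and acceleration weights, and the sphere test function are illustrative choices, and the paper's consensus mechanism and Trust-Tech stages are not reproduced.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimization: track personal and global bests,
    update velocities toward both, and return the best point found."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights (typical values)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=2)
```

In the three-stage methodology, points found by such a swarm would then seed local optimization and the Trust-Tech exploration of neighboring local optimal solutions.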
A geometric approach to multiperiod mean variance optimization of assets and liabilities
Leippold, Markus; Trojani, Fabio; Vanini, Paolo
2005-01-01
We present a geometric approach to discrete-time multiperiod mean-variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean-variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean-variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...
Energy Technology Data Exchange (ETDEWEB)
Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy
2011-07-01
The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5°N, 68.8°W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100 ± 20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles
Blank, L. Aaron; Sharma, Amit R.; Weeks, David E.
2018-03-01
The X ²Σ⁺₁/₂, A ²Π₁/₂, A ²Π₃/₂, and B ²Σ⁺₁/₂ potential-energy curves for Rb+He are computed at the spin-orbit multireference configuration interaction level of theory using a hierarchy of Gaussian basis sets at the double-zeta (DZ), triple-zeta (TZ), and quadruple-zeta (QZ) levels of valence quality. Counterpoise and Davidson-Silver corrections are employed to remove basis-set superposition error and ameliorate size-consistency error. An extrapolation is performed to obtain a final set of potential-energy curves in the complete basis-set (CBS) limit. This yields four sets of systematically improved X ²Σ⁺₁/₂, A ²Π₁/₂, A ²Π₃/₂, and B ²Σ⁺₁/₂ potential-energy curves that are used to compute the A ²Π₃/₂ bound vibrational energies, the position of the D2 blue satellite peak, and the D1 and D2 pressure broadening and shifting coefficients, at the DZ, TZ, QZ, and CBS levels. Results are compared with previous calculations and experimental observation.
International Nuclear Information System (INIS)
Illuminati, Silvia; Annibaldi, Anna; Truzzi, Cristina; Finale, Carolina; Scarponi, Giuseppe
2013-01-01
For the first time, square-wave anodic-stripping voltammetry (SWASV) was set up and optimized for the determination of Cd, Pb and Cu in white wine after UV photo-oxidative digestion of the sample. The best procedure for the sample pre-treatment consisted in a 6-h UV irradiation of diluted, acidified wine, with the addition of ultrapure H2O2 (three sequential additions during the irradiation). Due to metal concentration differences, separate measurements were carried out for Cd (deposition potential Ed = −950 mV vs. Ag/AgCl/3 M KCl; deposition time td = 15 min) and simultaneously for Pb and Cu (Ed = −750 mV, td = 30 s). The optimum set-up of the main instrumental parameters, evaluated also in terms of the signal-to-noise ratio, was as follows: ESW = 20 mV, f = 100 Hz, ΔEstep = 8 mV, tstep = 100 ms, twait = 60 ms, tdelay = 2 ms, tmeas = 3 ms. The electrochemical behaviour was reversible bielectronic for Cd and Pb, and kinetically controlled monoelectronic for Cu. Good accuracy was found both when the recovery procedure was used and when the results were compared with data obtained by differential pulse anodic stripping voltammetry. The linearity of the response was verified up to ~4 μg L⁻¹ for Cd and Pb and ~15 μg L⁻¹ for Cu. The detection limits for td = 5 min in the 10-times diluted, UV-digested sample were (ng L⁻¹): Cd 7.0, Pb 1.2 and Cu 6.6, which are well below those of currently applied methods. Application to a Verdicchio dei Castelli di Jesi white wine revealed concentration levels of Cd ~0.2, Pb ~10 and Cu ~30 μg L⁻¹, with repeatabilities (±RSD%) of Cd ±6%, Pb ±5% and Cu ±10%.
Multivariate optimization of production systems
International Nuclear Information System (INIS)
Carroll, J.A.; Horne, R.N.
1992-01-01
This paper reports that, mathematically, optimization involves finding the extreme values of a function. Given a function of several variables, Z = f(x₁, x₂, x₃, …, xₙ), an optimization scheme will find the combination of these variables that produces an extreme value in the function, whether it is a minimum or a maximum value. Many examples of optimization exist. For instance, if a function gives an investor's expected return on the basis of different investments, numerical optimization of the function will determine the mix of investments that will yield the maximum expected return. This is the basis of modern portfolio theory. If a function gives the difference between a set of data and a model of the data, numerical optimization of the function will produce the best fit of the model to the data. This is the basis for nonlinear parameter estimation. Similar examples can be given for network analysis, queuing theory, decision analysis, etc.
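As a minimal one-variable illustration of finding an extreme value of a function numerically, a golden-section search can locate the minimum of a unimodal objective; the quadratic objective below is invented for the example and is not a production-system model from the paper.

```python
import math

def golden_min(f, a, b, tol=1e-6):
    """Golden-section search: shrink the bracket [a, b] around the minimum
    of a unimodal function until it is narrower than tol."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                 # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                 # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# a misfit-style objective Z = f(x); its extreme value sits at x = 2
x_min = golden_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

The same idea, generalized to many variables and gradient-based or heuristic search, underlies the portfolio-mix and parameter-estimation examples in the abstract.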
Higuchi, Kazuhide; Miyaji, Kousuke; Johguchi, Koh; Takeuchi, Ken
2012-02-01
This paper proposes a verify-programming method for the resistive random access memory (ReRAM) cell which achieves 50-times-higher endurance and fast set and reset compared with the conventional method. The proposed verify-programming method uses an incremental pulse width with turnback (IPWWT) for the reset and an incremental voltage with turnback (IVWT) for the set. With the combination of IPWWT reset and IVWT set, the endurance increases from 48×10³ to 2444×10³ cycles. Furthermore, the measured data retention time after 20×10³ set/reset cycles is estimated to be 10 years. Additionally, a filament-based physical model is proposed to explain the set/reset failure mechanism with various set/reset pulse shapes: the reset pulse width and set voltage correspond to the width and length of the conductive filament, respectively. Consequently, since the proposed IPWWT and IVWT recover set and reset failures of ReRAM cells, the endurance is improved.
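The incremental-with-turnback idea can be sketched as a generic verify-program loop: escalate the pulse level until the cell verifies, then "turn back" toward a gentler level for the next cycle. The cell model, thresholds and pulse parameters below are toy stand-ins, not the paper's measured ReRAM behavior.

```python
def verify_program(apply_pulse, verify, start, step, limit, turnback):
    """Apply increasingly strong pulses until verify() passes, then return a
    turned-back starting level for the next program cycle (toy sketch)."""
    level = start
    while level <= limit:
        apply_pulse(level)
        if verify():
            return max(start, level - turnback)
        level += step
    raise RuntimeError("set/reset failure: verify limit reached")

# toy filament model: each reset pulse widens the filament gap a little,
# and verify passes once the gap (i.e. resistance) exceeds a threshold
state = {"gap": 0.0}
def pulse(width):                 # IPWWT analogue: longer pulse -> wider gap
    state["gap"] += 0.02 * width
def verified():
    return state["gap"] >= 1.0

next_start = verify_program(pulse, verified, start=10, step=10, limit=200, turnback=10)
```

For the set operation the same loop would escalate voltage instead of pulse width (the IVWT case); the turnback step is what limits overshoot and, per the paper, stress on the cell.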
International Nuclear Information System (INIS)
1993-09-01
This report forms part of the supporting documentation for the low- and intermediate-level waste repository site selection procedure. The aim of the report is to present the site-specific geological data, and the geosphere database derived therefrom, which were used as a basis for evaluating the long-term safety of a repository at Wellenberg. These data also form a key component of other reports appearing simultaneously with the present one: first, on the intercomparison of the four potential sites (NTB 93-02) and, second, on the safety assessment of the Wellenberg site itself (NTB 93-26). The level of detail of the present report is determined by the requirements of the other two reports mentioned, which include presenting, discussing and justifying the geosphere dataset used in the performance assessment model calculations. The introductory chapter discusses procedures and goals. The second chapter provides an overview of the geographical and geological situation at Wellenberg. Chapter 3 then discusses the planning and progress of the field programme, and the current status of investigations is presented. The fourth chapter presents the geological situation at the Wellenberg site and describes the concept and models formulated on the basis of this information. Chapter 5 derives the performance assessment and engineering datasets, based on the investigations, concepts and modelling exercises described in chapter 4. In summary, it can be said that, to date, the investigation results from Wellenberg have confirmed predictions in all relevant respects and, in some cases, have even exceeded expectations (e.g. in relation to the available volume of host rock). (author) figs., tabs., 141 refs
Quadratic Hedging of Basis Risk
Directory of Open Access Journals (Sweden)
Hardy Hulley
2015-02-01
This paper examines a simple basis risk model based on correlated geometric Brownian motions. We apply quadratic criteria to minimize basis risk and hedge in an optimal manner. Initially, we derive the Föllmer–Schweizer decomposition for a European claim. This allows pricing and hedging under the minimal martingale measure, corresponding to the local risk-minimizing strategy. Furthermore, since the mean-variance tradeoff process is deterministic in our setup, the minimal martingale- and variance-optimal martingale measures coincide. Consequently, the mean-variance optimal strategy is easily constructed. Simple pricing and hedging formulae for put and call options are derived in terms of the Black–Scholes formula. Due to market incompleteness, these formulae depend on the drift parameters of the processes. By making a further equilibrium assumption, we derive an approximate hedging formula, which does not require knowledge of these parameters. The hedging strategies are tested using Monte Carlo experiments, and are compared with results achieved using a utility maximization approach.
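The model's two correlated geometric Brownian motions (the hedging instrument and the imperfectly correlated underlying) can be simulated directly for Monte Carlo experiments of the kind the paper runs; all parameter values below are illustrative, not taken from the paper.

```python
import math
import random

def correlated_gbm(s0, u0, mu_s, mu_u, sig_s, sig_u, rho, T=1.0, steps=252, seed=7):
    """Simulate one terminal pair (S_T, U_T) of two correlated geometric
    Brownian motions via the standard Cholesky trick for 2 factors."""
    rng = random.Random(seed)
    dt = T / steps
    s, u = s0, u0
    for _ in range(steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        s *= math.exp((mu_s - 0.5 * sig_s ** 2) * dt + sig_s * math.sqrt(dt) * z1)
        u *= math.exp((mu_u - 0.5 * sig_u ** 2) * dt + sig_u * math.sqrt(dt) * z2)
    return s, u

spot, proxy = correlated_gbm(100.0, 95.0, 0.05, 0.04, 0.20, 0.25, rho=0.8)
```

Averaging the hedging error of a candidate strategy over many such paths is how the quadratic (risk-minimizing vs. mean-variance optimal) strategies would be compared numerically.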
International Nuclear Information System (INIS)
Goncharov, V.K.; Krekoten', O.V.; Makarov, V.V.
2015-01-01
The main aim of this article is to assess experimentally the feasibility of developing and manufacturing a high-power pulsed X-ray source on the basis of a high-current electron accelerator of the diode type. This task was realized using a vacuum diode with an explosive plasma cathode of brass and an anode of aluminum foil 850 microns thick. The experiments performed show that, for this anode metal, the component of X-rays propagating along the direction of electron beam motion carries more energy than the reflected component. Photographic paper placed in a holder of black dense paper was used as a sensor. It should be noted that the present investigations are purely qualitative in character. At the same time, the authors succeeded in defining an angle of divergence (~90°) of the radiation generated after an aluminum target. The possibility of generating bremsstrahlung, together with the energy estimates, indicates the applicability of this installation for both pure research and applied purposes, for example, monitoring the radiation stability of different electronic products. (authors)
DEFF Research Database (Denmark)
Bendsøe, Martin P.; Sigmund, Ole
2007-01-01
Taking as a starting point a design case for a compliant mechanism (a force inverter), the fundamental elements of topology optimization are described. The basis for the developments is a FEM format for this design problem and emphasis is given to the parameterization of design as a raster image...
Multilevel geometry optimization
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
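The distinction between unit-coefficient and multicoefficient level combinations can be shown with placeholder energies; every number below (component energies and coefficients alike) is invented for illustration and is not a result from the paper.

```python
# hypothetical component energies (hartree) from two levels of theory
# in a small and a large basis set
E_mp2_small, E_mp2_big, E_qci_small = -76.200, -76.260, -76.230

# Gaussian-x-style additivity: high-level/small-basis result plus a
# basis-set correction, combined with unit coefficients
e_additive = E_qci_small + (E_mp2_big - E_mp2_small)

# multicoefficient combination: the same components weighted by tuned,
# noninteger coefficients (values here are hypothetical)
c1, c2, c3 = 1.05, 0.98, -1.02
e_multi = c1 * E_qci_small + c2 * E_mp2_big + c3 * E_mp2_small
```

In the multicoefficient methods the weights are fitted against accurate data rather than fixed at ±1, which is what buys the accuracy gains reported above.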
Multilevel geometry optimization
Energy Technology Data Exchange (ETDEWEB)
Rodgers, Jocelyn M. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Fast, Patton L. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Truhlar, Donald G. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States)
2000-02-15
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol. (c) 2000 American Institute of Physics.
Tractable Pareto Optimization of Temporal Preferences
Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent
2003-01-01
This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
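In a toy setting with explicitly enumerated candidates, re-applying WLO level by level amounts to comparing the sorted preference profiles lexicographically (a leximin comparison). The candidates and preference values below are invented, and the paper's algorithm operates on temporal CSPs rather than explicit candidate lists.

```python
def wlo_key(values):
    """Sorted-ascending profile: comparing these lexicographically applies
    WLO to the weakest value first, then the next-weakest, and so on."""
    return sorted(values)

# candidate schedules with local preference values per constraint (illustrative)
prefs = {"A": [3, 3, 9], "B": [3, 7, 8], "C": [2, 9, 9]}

# plain WLO: maximize the minimum component -> A and B tie at 3
wlo_best = max(prefs.values(), key=min)

# iterated WLO: break the tie on the next-weakest value -> picks B,
# whose profile [3, 7, 8] leximin-dominates A's [3, 3, 9]
best = max(prefs, key=lambda s: wlo_key(prefs[s]))
```

This matches the paper's point: B is Pareto-preferred among the WLO winners, and iterating the weakest-link criterion is what surfaces it.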
Carver, Charles S.; Scheier, Michael F.
2014-01-01
Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. PMID:24630971
International Nuclear Information System (INIS)
Yang, Zhi; Luo, Xiaochuan
2016-01-01
Highlights: • The adjoint equation is introduced to the PDE optimal control problem. • Lipschitz continuity for the gradient of the cost functional is derived. • The simulation time and iterations reduce by a large margin in the simulations. • The model validation and comparison are made to verify the proposed math model. - Abstract: In this paper, a new method is proposed to solve the PDE optimal control problem by introducing the adjoint problem into the optimization model; it is used to obtain reference values for the optimal furnace-zone temperatures and the optimal temperature distribution of steel slabs in the reheating furnace in the steady-state operating regime. It is proved that the gradient of the cost functional can be written via the weak solution of this adjoint problem, and Lipschitz continuity of the gradient is then derived. Model validation and comparison between the mathematical model and the experimental results indicate that the present heat transfer model works well for predicting the thermal behavior of a slab in the reheating furnace. Iterations and simulation time show a significant decline in the simulations of a 20MnSi slab, and numerical simulations for 0.4 m thick slabs show that the proposed method is well suited to medium and heavy plate plants, leading to better performance in terms of productivity, energy efficiency and other features of reheating furnaces.
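The adjoint trick can be shown on a scalar stand-in for the furnace model: for dx/dt = −x + u with constant control u, the gradient of J(u) = ½(x(T) − x_target)² follows from one forward solve plus one backward adjoint sweep, instead of one extra forward solve per control parameter. The ODE, discretization and step sizes below are invented for illustration and are far simpler than the paper's PDE.

```python
def simulate(u, x0=0.0, T=1.0, n=100):
    """Forward Euler for dx/dt = -x + u (toy scalar stand-in for the slab model)."""
    dt = T / n
    xs = [x0]
    for _ in range(n):
        xs.append((1 - dt) * xs[-1] + dt * u)
    return xs, dt

def adjoint_gradient(u, target, x0=0.0, T=1.0, n=100):
    """Gradient of J(u) = 0.5*(x(T) - target)**2 via the backward adjoint sweep."""
    xs, dt = simulate(u, x0, T, n)
    lam = xs[-1] - target              # terminal condition of the adjoint problem
    grad = 0.0
    for _ in range(n):
        grad += dt * lam               # contribution of the control at each step
        lam *= (1 - dt)                # backward adjoint recursion
    return grad, xs[-1]

# plain gradient descent on the constant control u, steering x(T) to 0.5
u, target = 0.0, 0.5
for _ in range(200):
    g, xT = adjoint_gradient(u, target)
    u -= 2.0 * g
```

For the discretized scheme above the adjoint gradient is exact, which is the property that makes adjoint-based descent cheap for the PDE case regardless of how many control parameters (furnace-zone temperatures) there are.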
Ma, Y.T.; Wubs, A.M.; Mathieu, A.; Heuvelink, E.; Zhu, J.Y.; Hu, B.G.; Cournede, P.H.; Reffye, de P.
2011-01-01
Background and aims - Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate
Optimizing the structure of Tetracyanoplatinate(II)
DEFF Research Database (Denmark)
Dohn, Asmus Ougaard; Møller, Klaus Braagaard; Sauer, Stephan P. A.
2013-01-01
The geometry of tetracyanoplatinate(II) (TCP) has been optimized with density functional theory (DFT) calculations in order to compare different computational strategies. Two approximate scalar relativistic methods, i.e. the scalar zeroth-order regular approximation (ZORA) and non... is almost quantitatively reproduced in the ZORA and ECP calculations. For the C-N bond these trends are reversed and an order of magnitude smaller. With respect to the basis set dependence we observed that a triple zeta basis set with polarization functions gives in general sufficiently converged results, but while for the Pt-C bond it is advantageous to include extra diffuse... In addition, the effect of the exchange-correlation functional and one-electron basis set was studied by employing the two generalized gradient approximation (GGA) functionals, BLYP and PBE, as well as their hybrid versions B3LYP and PBE0...
Institute of Scientific and Technical Information of China (English)
朱国俊; 冯建军; 郭鹏程; 罗兴锜
2014-01-01
In order to reduce the current dependence on fossil and nuclear-fueled power plants and cope with the growing demand for electrical energy, ocean energy technologies must be improved. Several types of ocean energy are feasible to exploit: wave energy, marine-current energy, tidal barrages, ocean thermal energy and so on, but the most promising in the short term may be wave and marine-current energy. Marine-current energy can be exploited by a marine-current turbine, so how to improve the energy-capture efficiency of marine-current turbines is a key research subject in ocean energy development, and the key to improving their energy performance lies in the hydrofoil from which the blade geometry is built. This paper proposes a multi-operating-point optimization design method for hydrofoils. In this method, the foil is parameterized with Bezier curves; Latin hypercube experimental design is used to obtain sample points in the design space for training a radial basis function (RBF) neural network; the performance parameters of each sample foil are obtained by computational fluid dynamics, after which the network is trained; finally, the multi-operating-point optimization problem is solved numerically by coupling the trained RBF network with the NSGA-II genetic algorithm. The NACA63-815 foil was optimized with this method, focusing on three angle-of-attack operating points (0°, 6° and 12°). The optimized foil shows a better lift-to-drag ratio at all three operating points and better suppresses stall, verifying the theoretical correctness and feasibility of the proposed optimization method.
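The surrogate-based step of such a method can be sketched in miniature: fit a Gaussian RBF interpolant to sampled objective values and then optimize the cheap surrogate instead of the expensive solver. The 1-D "design variable" and the lift-to-drag-style objective below are toy stand-ins; the paper trains an RBF network on CFD samples and couples it with NSGA-II over many shape parameters.

```python
import math

def rbf_fit(xs, ys, eps=5.0):
    """Fit a Gaussian RBF interpolant through (xs, ys) by solving the kernel
    system with Gaussian elimination (partial pivoting)."""
    n = len(xs)
    M = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for c in range(n):                                  # forward elimination
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):                      # back substitution
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return lambda x: sum(wi * math.exp(-(eps * (x - xi)) ** 2)
                         for wi, xi in zip(w, xs))

lift_drag = lambda x: 1.0 - (x - 0.3) ** 2     # toy objective, peak at x = 0.3
xs = [i / 10 for i in range(11)]               # design-of-experiments samples
surrogate = rbf_fit(xs, [lift_drag(x) for x in xs])
best = max((i / 1000 for i in range(1001)), key=surrogate)
```

In the full method the surrogate is evaluated millions of times inside NSGA-II at several angles of attack at once, which is only affordable because each call avoids a CFD run.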
International Nuclear Information System (INIS)
R.J. Garrett
2002-01-01
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities
Energy Technology Data Exchange (ETDEWEB)
R.J. Garrett
2002-01-14
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities.
BWR NSSS design basis documentation
International Nuclear Information System (INIS)
Vij, R.S.; Bates, R.E.
2004-01-01
programs that GE has participated in and describes the different options and approaches that have been used by various utilities in their design basis programs. Some of these variations deal with the scope and depth of coverage of the information, while others are related to the process (how the work is done). Both of these topics can have a significant effect on the program cost. Some insight into these effects is provided. The final section of the paper presents a set of lessons learned and a recommendation for an optimum approach to a design basis information program. The lessons learned reflect the knowledge that GE has gained by participating in design basis programs with nineteen domestic and international BWR owner/operators. The optimum approach described in this paper is GE's attempt to define a set of information and a work process for a utility/GE NSSS Design Basis Information program that will maximize the cost effectiveness of the program for the utility. (author)
Diffusion Forecasting Model with Basis Functions from QR-Decomposition
Harlim, John; Yang, Haizhao
2017-12-01
Diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing them is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and can be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm are shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case forecasting accuracy is improved relative to using only a small number of eigenvectors. Supporting arguments are provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained from applying Nonlinear Laplacian Spectral Analysis to the measured Outgoing Longwave Radiation.
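A small NumPy sketch of the construction: orthonormalize a few leading eigenvectors together with selected columns of the diffusion matrix via QR. The matrix below is a random row-stochastic toy, and the number of eigenvectors and the column selection are illustrative choices, not the paper's selection rule.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy row-stochastic "diffusion matrix" standing in for the N x N operator
A = rng.random((50, 50))
A /= A.sum(axis=1, keepdims=True)

# a few leading eigenvectors (real parts) plus selected columns of A itself
vals, vecs = np.linalg.eig(A.T)
order = np.argsort(-np.abs(vals))
lead = np.real(vecs[:, order[:3]])
cols = A[:, :7]                        # column choice here is illustrative

# orthonormalize the combined set with Householder QR (np.linalg.qr is unpivoted)
Q, _ = np.linalg.qr(np.hstack([lead, cols]))
basis = Q[:, :10]                      # orthonormal basis functions
```

Only the few eigenvectors require an (expensive) eigensolve; the extra columns come for free from the matrix itself, and one QR factorization of a tall thin matrix makes the combined set orthonormal.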
Probing community nurses' professional basis
DEFF Research Database (Denmark)
Schaarup, Clara; Pape-Haugaard, Louise; Jensen, Merete Hartun
2017-01-01
Complicated and long-lasting wound care of diabetic foot ulcers is moving from specialists in wound care at hospitals towards community nurses without specialist diabetic foot ulcer wound care knowledge. The aim of the study is to elucidate community nurses' professional basis for treating diabetic foot ulcers. A situational case study design was adopted in an archetypical Danish community nursing setting. Experience is a crucial component of the community nurses' professional basis for treating diabetic foot ulcers. Peer-to-peer training is the prevailing way to learn about diabetic foot ulcers; however, this contributes to the risk of low evidence-based practice. Finally, a frequent behaviour among the community nurses is to consult colleagues before treating the diabetic foot ulcers.
Authorization basis requirements comparison report
Energy Technology Data Exchange (ETDEWEB)
Brantley, W.M.
1997-08-18
The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.
Authorization basis requirements comparison report
International Nuclear Information System (INIS)
Brantley, W.M.
1997-01-01
The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.
The optimal design of UAV wing structure
Długosz, Adam; Klimek, Wiktor
2018-01-01
The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. Adequacy of the numerical model is verified against results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.
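The Pareto-optimal filtering step described in the abstract can be sketched as follows; the two objectives (e.g. mass and maximum stress, both minimized) and the candidate values are hypothetical, not results from the paper.

```python
# Minimal sketch of extracting a Pareto-optimal set from candidate designs
# evaluated on two minimization objectives; all numbers are invented.
def pareto_front(designs):
    """Return the non-dominated subset of objective-value tuples."""
    front = []
    for d in designs:
        # d is dominated if some other design is at least as good everywhere
        dominated = any(all(o <= p for o, p in zip(e, d)) and e != d
                        for e in designs)
        if not dominated:
            front.append(d)
    return front

# hypothetical (mass [kg], max stress [MPa]) pairs for five wing designs
candidates = [(2.0, 90.0), (2.5, 70.0), (3.0, 60.0), (2.6, 95.0), (3.1, 60.0)]
front = pareto_front(candidates)
```

The dominated designs (2.6, 95.0) and (3.1, 60.0) are filtered out, leaving the trade-off curve the designer chooses from.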
International Nuclear Information System (INIS)
Mubayi, V.
1995-05-01
The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the ''long-term interdiction limit,'' is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health effect costs increase as the limit is relaxed and the protective action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
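The trade-off described in the abstract can be illustrated with a toy calculation; the cost functions and coefficients below are invented for illustration and are not the NUREG-1150 consequence models.

```python
# Toy total-cost minimization over an interdiction limit: monetized health
# cost grows as the limit is relaxed, protective-action cost falls.
def total_cost(limit, health_coeff=2.0, protect_coeff=50.0):
    health = health_coeff * limit        # health-effect cost (increasing)
    protect = protect_coeff / limit      # protective-action cost (decreasing)
    return health + protect

limits = [l / 10 for l in range(1, 101)]   # candidate limits on a grid
optimal = min(limits, key=total_cost)      # minimum of the total cost curve
```

For these toy coefficients the minimum lies at the limit where the two marginal costs balance (here at 5.0 in arbitrary units).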
Cherni, Yosra; Begon, Mickael; Chababe, Hicham; Moissenet, Florent
2017-09-01
While generic protocols exist for gait rehabilitation using robotic orthotics such as the Lokomat®, several settings (guidance, body-weight support (BWS) and velocity) may be adjusted to individualize patient training. However, no systematic approach has yet emerged. Our objective was to assess the feasibility and effects of a systematic approach based on electromyography to determine subject-specific settings, with application to the strengthening of the gluteus maximus muscle in post-stroke hemiparetic patients. Two male patients (61 and 65 years) with post-stroke hemiparesis performed up to 9 Lokomat® trials with varying guidance and BWS while electromyography of the gluteus maximus was measured. For each subject, the settings that maximized gluteus maximus activity were used in 20 sessions of Lokomat® training. Modified Functional Ambulation Classification (mFAC), 6-minute walking test (6-MWT), and extensor strength were measured before and after training. The greatest gluteus maximus activity was observed at (guidance: 70%, BWS: 20%) for Patient 1 and (guidance: 80%, BWS: 30%) for Patient 2. In both patients, the mFAC score increased from 4 to 7. The additional distance in the 6-MWT increased beyond the minimal clinically important difference (MCID = 34.4 m) reported for post-stroke patients. The isometric strength of the hip extensors increased by 43% and 114%. Defining subject-specific settings for Lokomat® training was feasible and simple to implement. These two case reports suggest a benefit of this approach for muscle strengthening. It remains to demonstrate the superiority of such an approach for a wider population, compared to the use of a generic protocol.
CLOPW: a mixed basis set full potential electronic structure method
Bekker, H.G.; Bekker, Hermie Gerhard
1997-01-01
This thesis is about the development of the full potential CLOPW package for electronic structure calculations. Chapter 1 provides the necessary background in the theory of solid state physics. It gives a short overview of the effective one particle model as commonly used in solid state physics.
Directory of Open Access Journals (Sweden)
G.Chaitanya
2013-12-01
The present work aims at maximizing the overall heat transfer rate of an automobile radiator using a Genetic Algorithm approach. The design specifications and empirical data pertaining to a rally car radiator obtained from the literature are considered in the present work. The mathematical function describing the objective for the problem is formulated using the radiator core design equations and the heat transfer relations governing the radiator. The overall heat transfer rate obtained from the present optimization technique is found to be 9.48 percent higher than the empirical value present in the literature. Also, the enhancement in the overall heat transfer rate is achieved with a marginal reduction in the radiator dimensions, indicating a better spacing ratio compared to the existing design.
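A minimal sketch of the genetic-algorithm idea, assuming a stand-in one-variable objective in place of the radiator heat-transfer model (the operators shown are generic textbook choices, not necessarily those used in the paper):

```python
import random

# A minimal GA with tournament selection, blend crossover, Gaussian
# mutation and elitism; the objective is a toy function with a known
# maximum at x = 3, standing in for the radiator heat-transfer rate.
def ga_maximize(f, lo, hi, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=f)
    for _ in range(generations):
        # tournament selection of parents
        parents = [max(rng.sample(pop, 3), key=f) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            w = rng.random()
            child = w * a + (1 - w) * b                 # blend crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))   # Gaussian mutation
            children.append(min(max(child, lo), hi))    # clamp to bounds
        pop = children + [best]                         # elitism keeps best-so-far
        best = max(pop, key=f)
    return best

best_x = ga_maximize(lambda x: 10.0 - (x - 3.0) ** 2, 0.0, 6.0)
```

The elitism step makes the best objective value monotonically non-decreasing across generations, a common safeguard in engineering GA implementations.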
Theoretical basis for dosimetry
International Nuclear Information System (INIS)
Carlsson, G.A.
1985-01-01
Radiation dosimetry is fundamental to all fields of science dealing with radiation effects and is concerned with problems which are often intricate as hinted above. A firm scientific basis is needed to face increasing demands on accurate dosimetry. This chapter is an attempt to review and to elucidate the elements for such a basis. Quantities suitable for radiation dosimetry have been defined in the unique work to coordinate radiation terminology and usage by the International Commission on Radiation Units and Measurements, ICRU. Basic definitions and terminology used in this chapter conform with the recent ''Radiation Quantities and Units, Report 33'' of the ICRU
Economic communication model set
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research targeted at investigating economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, being based on the general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in dynamics and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.
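A minimal agent-based sketch of currency-mediated exchange of the kind described; the agents, endowments, fixed price and exchange rule are invented for illustration and are far simpler than the paper's models.

```python
import random

# Toy agent-based exchange: agents hold an internal currency and goods;
# random pairwise trades move one good per step at a fixed price.
class Agent:
    def __init__(self, money, goods):
        self.money, self.goods = money, goods

def simulate(agents, steps, price=1.0, seed=7):
    rng = random.Random(seed)
    for _ in range(steps):
        buyer, seller = rng.sample(agents, 2)
        if buyer.money >= price and seller.goods > 0:
            buyer.money -= price; seller.money += price
            buyer.goods += 1; seller.goods -= 1
    return agents

agents = simulate([Agent(10.0, 5) for _ in range(20)], steps=500)
total_money = sum(a.money for a in agents)   # conserved by construction
total_goods = sum(a.goods for a in agents)   # conserved by construction
```

Conservation of the internal currency and of goods is the basic macroparameter check such simulations typically start from.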
Directory of Open Access Journals (Sweden)
Manusov Vadim
2016-01-01
In this article the basic principles and classifications of small hydroelectric power stations are provided, depending on the power of the system in which they operate. This is especially relevant in the conditions of Tajikistan, where in some regions all small hydroelectric power stations carry out the functions of big hydroelectric power stations. We suggest a new classification that uses fuzzy logic membership functions. A new concept of a power complex (HUB) on the basis of renewable energy sources (RES) is also offered.
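A toy sketch of fuzzy-membership classification of stations by installed power; the class names and power breakpoints (in kW) are invented, not taken from the article.

```python
# Triangular fuzzy membership functions assign each station a degree of
# membership in several size classes; the class with the highest degree wins.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(power_kw):
    classes = {
        "micro": triangular(power_kw, -1.0, 0.0, 200.0),
        "mini": triangular(power_kw, 100.0, 500.0, 1500.0),
        "small": triangular(power_kw, 1000.0, 5000.0, 10000.0),
    }
    return max(classes, key=classes.get)

label = classify(150.0)
```

Unlike crisp thresholds, overlapping memberships let a 150 kW station belong partly to both "micro" and "mini", with the final label taken from the larger degree.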
DEFF Research Database (Denmark)
Tsapatsaris, Nikolaos; Willendrup, Peter Kjær; E. Lechner, Ruep
2015-01-01
Results based on virtual instrument models for the first high-flux, high-resolution, spallation-based backscattering spectrometer, BASIS, are presented in this paper. These were verified using the Monte Carlo instrument simulation packages McStas and VITESS. Excellent agreement of the neutron count...... are pivotal to the conceptual design of the next-generation backscattering spectrometer, MIRACLES, at the European Spallation Source....
Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong
2018-07-15
The main objective of the present study was to utilize Genetic Algorithms (GA) to obtain the optimal combination of forest fire related variables and apply data mining methods for constructing a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, which is located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest fire related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network and distance to road network. The Natural Break and Certainty Factor methods were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide on their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding aspect, land use, heat load index, distance to river network and mean annual rainfall from the analysis. The performance of the forest fire models was evaluated using the area under the Receiver Operating Characteristic curve (ROC-AUC) based on the validation dataset. Overall, the RF models gave higher AUC values. The results also showed that the proposed optimized models outperform the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, though higher than the original SVM (0.7148). The study highlights the significance of feature selection techniques in forest fire susceptibility, and data mining methods can be considered a valid approach for forest fire susceptibility modeling.
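The GA-based variable selection step can be sketched as follows; the bitmask encoding of a feature subset is standard, but the toy fitness below is a stand-in for the model AUC, and the "informative" feature indices are invented for illustration.

```python
import random

# Toy GA feature selection: candidate solutions are bitmasks over features,
# fitness rewards truly informative features and penalizes subset size
# (a stand-in for cross-validated AUC of an RF/SVM model).
N_FEATURES = 6
INFORMATIVE = {0, 2}   # pretend these are the truly useful variables

def fitness(mask):
    selected = {i for i in range(N_FEATURES) if mask >> i & 1}
    return len(selected & INFORMATIVE) - 0.1 * len(selected)

def ga_select(pop_size=20, generations=40, seed=3):
    rng = random.Random(seed)
    pop = [rng.randrange(1 << N_FEATURES) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, N_FEATURES)          # one-point crossover
            child = (a & ((1 << cut) - 1)) | (b & ~((1 << cut) - 1))
            child ^= 1 << rng.randrange(N_FEATURES)     # bit-flip mutation
            children.append(child & ((1 << N_FEATURES) - 1))
        pop = children + [best]                         # elitism
        best = max(pop, key=fitness)
    return best

best_mask = ga_select()
best_features = {i for i in range(N_FEATURES) if best_mask >> i & 1}
```

In the study the fitness would be replaced by a validation AUC and the bitmask would range over the thirteen candidate variables.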
Energy Technology Data Exchange (ETDEWEB)
Haeck, M. [Dr. Bruno Lange GmbH Berlin, Duesseldorf (Germany)
1999-07-01
The spectral absorption coefficient (SAC) is a sum parameter for describing the organic pollutant load of waste water. It is based on a purely physical measuring technique and can be monitored continuously and directly in the medium by means of the described UV process probe. From this arise numerous opportunities for optimizing waste water discharge and cleaning. (orig.)
Ly, Trang T; Weinzimer, Stuart A; Maahs, David M; Sherr, Jennifer L; Roy, Anirban; Grosman, Benyamin; Cantwell, Martin; Kurtz, Natalie; Carria, Lori; Messer, Laurel; von Eyben, Rie; Buckingham, Bruce A
2017-08-01
Automated insulin delivery systems, utilizing a control algorithm to dose insulin based upon subcutaneous continuous glucose sensor values and insulin pump therapy, will soon be available for commercial use. The objective of this study was to determine the preliminary safety and efficacy of initialization parameters with the Medtronic hybrid closed-loop controller by comparing percentage of time in range, 70-180 mg/dL (3.9-10 mmol/L), mean glucose values, as well as percentage of time above and below target range between sensor-augmented pump therapy and hybrid closed-loop, in adults and adolescents with type 1 diabetes. We studied an initial cohort of 9 adults followed by a second cohort of 15 adolescents, using the Medtronic hybrid closed-loop system with the proportional-integral-derivative with insulin feedback (PID-IFB) algorithm. Hybrid closed-loop was tested in supervised hotel-based studies over 4-5 days. The overall mean percentage of time in range (70-180 mg/dL, 3.9-10 mmol/L) during hybrid closed-loop was 71.8% in the adult cohort and 69.8% in the adolescent cohort. The overall percentage of time spent under 70 mg/dL (3.9 mmol/L) was 2.0% in the adult cohort and 2.5% in the adolescent cohort. Mean glucose values were 152 mg/dL (8.4 mmol/L) in the adult cohort and 153 mg/dL (8.5 mmol/L) in the adolescent cohort. Closed-loop control using the Medtronic hybrid closed-loop system enables adaptive, real-time basal rate modulation. Initializing hybrid closed-loop in clinical practice will involve individualizing initiation parameters to optimize overall glucose control.
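As background, the proportional-integral-derivative idea underlying such controllers can be sketched generically; this is NOT Medtronic's PID-IFB algorithm, and the gains, units and clamping rule below are invented for illustration only.

```python
# Generic textbook PID controller: output responds to the current error
# (proportional), its accumulation (integral) and its rate of change
# (derivative); delivery is clamped at zero since insulin cannot be removed.
class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement):
        error = measurement - self.setpoint          # dose more when glucose is high
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        dose = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(dose, 0.0)                        # delivery cannot be negative

pid_high = PID(kp=0.01, ki=0.001, kd=0.05, setpoint=120.0, dt=5.0)
dose_high = pid_high.update(180.0)   # above target: positive dose

pid_low = PID(kp=0.01, ki=0.001, kd=0.05, setpoint=120.0, dt=5.0)
dose_low = pid_low.update(100.0)     # below target: dose clamped to zero
```

The real PID-IFB algorithm adds an insulin-feedback term that accounts for insulin already delivered, which this generic sketch omits.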
Energy Technology Data Exchange (ETDEWEB)
Larsen, G.; Soerensen, P. [Risoe National Lab., Roskilde (Denmark)
1996-09-01
Design Basis Program 2 (DBP2) is a comprehensive, fully coupled code which has the capability to operate in the time domain as well as in the frequency domain. The code was developed during the period 1991-93 and succeeds Design Basis 1, which is a one-blade model presuming a stiff tower, transmission system and hub. The package is designed for use on a personal computer and offers a user-friendly environment based on menu-driven editing and control facilities, with graphics used extensively for data presentation. Moreover, input data as well as results are written to files in ASCII format. The input data is organized in a database with a structure that easily allows for arbitrary combinations of defined structural components and load cases. (au)
Optimal resource states for local state discrimination
Bandyopadhyay, Somshubhro; Halder, Saronath; Nathanson, Michael
2018-02-01
We study the problem of locally distinguishing pure quantum states using shared entanglement as a resource. For a given set of locally indistinguishable states, we define a resource state to be useful if it can enhance local distinguishability and optimal if it can distinguish the states as well as global measurements and is also minimal with respect to a partial ordering defined by entanglement and dimension. We present examples of useful resources and show that an entangled state need not be useful for distinguishing a given set of states. We obtain optimal resources with explicit local protocols to distinguish multipartite Greenberger-Horne-Zeilinger and graph states and also show that a maximally entangled state is an optimal resource under one-way local operations and classical communication to distinguish any bipartite orthonormal basis which contains at least one entangled state of full Schmidt rank.
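The role of a maximally entangled resource under one-way LOCC can be sketched with the standard teleportation argument (a textbook argument restated here, not a derivation from the paper):

```latex
% Maximally entangled resource state in local dimension d
|\Phi_d\rangle_{A'B'} \;=\; \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} |i\rangle_{A'}\,|i\rangle_{B'}
```

Using $|\Phi_d\rangle$ with $d$ at least the dimension of Alice's share, Alice teleports her subsystem to Bob with one round of one-way classical communication; Bob then holds the entire state and can perform the global measurement that perfectly distinguishes any orthonormal basis. This is why a maximally entangled state of sufficient dimension is the natural candidate resource, and the paper's contribution includes showing it is also minimal (optimal) in the stated partial order for the bases considered.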
Study of integrated optimization design of wind farm in complex terrain
DEFF Research Database (Denmark)
Xu, Chang; Chen, Dandan; Han, Xingxing
2017-01-01
wind farm design in complex terrain and setting up integrated optimization mathematical model for micro-site selection, power lines and road maintenance design, etc. Based on the existing 1-year wind measurement data in the wind farm area, the genetic algorithm was used to optimize the micro......-site selection. On the basis of location optimization of wind turbines, optimization algorithms such as the single-source shortest path algorithm and minimum spanning tree algorithm were used to optimize electric lines and maintenance roads. The practice shows that the research results can provide important...
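The minimum-spanning-tree step for electric-line layout can be sketched with Prim's algorithm over turbine positions; the coordinates below are invented for illustration.

```python
import heapq
import math

# Lazy Prim's algorithm: grow a tree from turbine 0, always adding the
# cheapest edge from the tree to an unvisited turbine.
def mst_length(points):
    """Total length of a Euclidean minimum spanning tree over the points."""
    n, visited, total = len(points), {0}, 0.0
    heap = [(math.dist(points[0], p), i) for i, p in enumerate(points) if i]
    heapq.heapify(heap)
    while len(visited) < n:
        d, i = heapq.heappop(heap)
        if i in visited:              # stale entry from an earlier push
            continue
        visited.add(i)
        total += d
        for j, p in enumerate(points):
            if j not in visited:
                heapq.heappush(heap, (math.dist(points[i], p), j))
    return total

turbines = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (3.0, 1.0)]
length = mst_length(turbines)
```

In a real layout problem the edge weights would be cable or road costs over terrain rather than straight-line distances, but the MST structure of the solution is the same.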
Siedner, Mark J; Bwana, Mwebesa B; Moosa, Mahomed-Yunus S; Paul, Michelle; Pillay, Selvan; McCluskey, Suzanne; Aturinda, Isaac; Ard, Kevin; Muyindike, Winnie; Moodley, Pravikrishnen; Brijkumar, Jaysingh; Rautenberg, Tamlyn; George, Gavin; Johnson, Brent; Gandhi, Rajesh T; Sunpath, Henry; Marconi, Vincent C
2017-07-01
In sub-Saharan Africa, rates of sustained HIV virologic suppression remain below international goals. HIV resistance testing, while common in resource-rich settings, has not gained traction due to concerns about cost and sustainability. We designed a randomized clinical trial to determine the feasibility, effectiveness, and cost-effectiveness of routine HIV resistance testing in sub-Saharan Africa. We describe challenges common to intervention studies in resource-limited settings, and strategies used to address them, including: (1) optimizing generalizability and cost-effectiveness estimates to promote transition from study results to policy; (2) minimizing bias due to patient attrition; and (3) addressing ethical issues related to enrollment of pregnant women. The study randomizes people in Uganda and South Africa with virologic failure on first-line therapy to standard-of-care virologic monitoring or immediate resistance testing. To strengthen external validity, study procedures are conducted within publicly supported laboratory and clinical facilities using local staff. To optimize cost estimates, we collect primary data on quality of life and medical resource utilization. To minimize losses from observation, we collect locally relevant contact information, including WhatsApp account details, for field-based tracking of missing participants. Finally, pregnant women are followed with an adapted protocol which includes an increased visit frequency to minimize risk to them and their fetuses. REVAMP is a pragmatic randomized clinical trial designed to test the effectiveness and cost-effectiveness of HIV resistance testing versus standard of care in sub-Saharan Africa. We anticipate the results will directly inform HIV policy in sub-Saharan Africa to optimize care for HIV-infected patients.
Topology optimized microbioreactors
DEFF Research Database (Denmark)
Schäpper, Daniel; Lencastre Fernandes, Rita; Eliasson Lantz, Anna
2011-01-01
This article presents the fusion of two hitherto unrelated fields—microbioreactors and topology optimization. The basis for this study is a rectangular microbioreactor with homogeneously distributed immobilized brewers yeast cells (Saccharomyces cerevisiae) that produce a recombinant protein...
Radioactive Waste Management Basis
International Nuclear Information System (INIS)
Perkins, B.K.
2009-01-01
The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Molecular basis for mitochondrial signaling
2017-01-01
This book covers recent advances in the study of structure, function, and regulation of metabolite, protein and ion translocating channels, and transporters in mitochondria. A wide array of cutting-edge methods are covered, ranging from electrophysiology and cell biology to bioinformatics, as well as structural, systems, and computational biology. At last, the molecular identity of two important channels in the mitochondrial inner membrane, the mitochondrial calcium uniporter and the mitochondrial permeability transition pore have been established. After years of work on the physiology and structure of VDAC channels in the mitochondrial outer membrane, there have been multiple discoveries on VDAC permeation and regulation by cytosolic proteins. Recent breakthroughs in structural studies of the mitochondrial cholesterol translocator reveal a set of novel unexpected features and provide essential clues for defining therapeutic strategies. Molecular Basis for Mitochondrial Signaling covers these and many more re...
Basis for selecting optimum antibiotic regimens for secondary peritonitis.
Maseda, Emilio; Gimenez, Maria-Jose; Gilsanz, Fernando; Aguilar, Lorenzo
2016-01-01
Adequate management of severely ill patients with secondary peritonitis requires supportive therapy of organ dysfunction, source control of infection and antimicrobial therapy. Since secondary peritonitis is polymicrobial, appropriate empiric therapy requires combination therapy in order to achieve the needed coverage for both common and more unusual organisms. This article reviews etiological agents, resistance mechanisms and their prevalence, how and when to cover them, and guidelines for treatment in the literature. Local surveillance data are the basis for the selection of compounds in antibiotic regimens, which should be further adapted to the increasing number of patients with risk factors for resistance (clinical setting, comorbidities, previous antibiotic treatments, previous colonization, severity…). Inadequate antimicrobial regimens are strongly associated with unfavorable outcomes. Awareness of resistance epidemiology and of the clinical consequences of inadequate therapy against resistant bacteria is crucial for clinicians treating secondary peritonitis, with a delicate balance between optimization of empirical therapy (improving outcomes) and antimicrobial overuse (increasing resistance emergence).
Harman, Nate
2016-01-01
We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.
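The counting problem can be stated concretely with a brute-force check for small decks (my own illustration, not the paper's algebraic method): cards are vectors in $\mathbb{F}_3^n$, and three distinct cards form a SET exactly when their coordinatewise sum is $0 \bmod 3$.

```python
from itertools import combinations, product

# Brute-force count of k-element SET-free subsets of the 3^n-card deck
# (feasible only for small n and k).
def setfree_count(n, k):
    deck = list(product(range(3), repeat=n))

    def is_set(a, b, c):
        # three distinct cards form a SET iff every coordinate sums to 0 mod 3
        return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

    return sum(
        1
        for hand in combinations(deck, k)
        if not any(is_set(*t) for t in combinations(hand, 3))
    )

count = setfree_count(2, 3)   # 84 triples minus the 12 SETs in the 9-card deck
```

For $n = 2$ this gives $\binom{9}{3} - 12 = 72$ SET-free triples; the paper's polynomiality results concern how such counts behave as functions of $n$ and $k$.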
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-01-01
Following on from the Final Report of the EDA (DS/21), and the summary of the ITER Final Design Report (DS/22), the technical basis gives further details of the design of ITER. It is in two parts. The first, the Plant Design Specification, summarises the main constraints on the plant design and operation from the viewpoint of engineering and physics assumptions, compliance with safety regulations, and siting requirements and assumptions. The second, the Plant Description Document, describes the physics performance and engineering characteristics of the plant design, illustrates the potential operational consequences for the locality of a generic site, gives the construction, commissioning, exploitation and decommissioning schedule, and reports the estimated lifetime costing based on data from the industry of the EDA parties.
Basis of valve operator selection for SMART
International Nuclear Information System (INIS)
Kang, H. S.; Lee, D. J.; See, J. K.; Park, C. K.; Choi, B. S.
2000-05-01
SMART, an integral reactor with enhanced safety and operability, is under development for use of nuclear energy. The valve operators of the SMART system were selected through a data survey and technical review of potential valve fabrication vendors, which will support the establishment and optimization of the basic system design of SMART. In order to establish and optimize the basic system design of SMART, the basis for selecting the valve operator type was provided based on the basic design requirements. The basis of valve operator selection for SMART will be used as basic technical data for the SMART basic and detailed design and as fundamental material for new reactor development in the future.
The Biological Basis of Learning and Individuality.
Kandel, Eric R.; Hawkins, Robert D.
1992-01-01
Describes the biological basis of learning and individuality. Presents an overview of recent discoveries that suggest learning engages a simple set of rules that modify the strength of connection between neurons in the brain. The changes are cited as playing an important role in making each individual unique. (MCO)
The Emotional and Moral Basis of Rationality
Boostrom, Robert
2013-01-01
This chapter explores the basis of rationality, arguing that critical thinking tends to be taught in schools as a set of skills because of the failure to recognize that choosing to think critically depends on the prior development of stable sentiments or moral habits that nourish a rational self. Primary among these stable sentiments are the…
Totally optimal decision trees for Boolean functions
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2016-01-01
We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters
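Two representative complexity parameters, depth and number of leaves, can each be minimized by a small brute-force recursion over restrictions of the truth table (an illustration of the setting, not the authors' algorithm); a totally optimal tree is then one attaining both minima simultaneously.

```python
from functools import lru_cache

# Minimum depth and minimum leaf count of a decision tree computing a
# Boolean function given as a truth table tt (entry j is f(x) where bit i
# of j is the value of variable i); brute force, small n only.
def restrict(tt, i, b):
    """Truth table of f with variable i fixed to b (remaining vars renumbered)."""
    return tuple(v for j, v in enumerate(tt) if (j >> i) & 1 == b)

@lru_cache(maxsize=None)
def min_depth(tt):
    if len(set(tt)) == 1:          # constant function: a single leaf
        return 0
    n = len(tt).bit_length() - 1
    return 1 + min(max(min_depth(restrict(tt, i, 0)),
                       min_depth(restrict(tt, i, 1))) for i in range(n))

@lru_cache(maxsize=None)
def min_leaves(tt):
    if len(set(tt)) == 1:
        return 1
    n = len(tt).bit_length() - 1
    return min(min_leaves(restrict(tt, i, 0)) +
               min_leaves(restrict(tt, i, 1)) for i in range(n))

xor2 = (0, 1, 1, 0)   # x0 XOR x1
and2 = (0, 0, 0, 1)   # x0 AND x1
```

A tree is totally optimal for {depth, leaves} if a single tree achieves both `min_depth(tt)` and `min_leaves(tt)`; the two recursions above give the individual optima that such a tree must match.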
Verifying optimal depth settings for LFAS
Lam, F.P.A.; Beerens, S.P.; Ainslie, M.A.
2006-01-01
Naval operations in coastal waters are challenging the modelling support in several disciplines. An important instrument for undersea defence in the littoral is the LFAS sonar. To adapt to the local acoustic environment, LFAS sonars can adjust their operation depth to increase the coverage of the
Optimizing anesthesia techniques in the ambulatory setting
E. Galvin (Eilish)
2007-01-01
Ambulatory surgery refers to the process of admitting patients, administering anesthesia and surgical care, and discharging patients home following an appropriate level of recovery on the same day. The word ambulatory is derived from the Latin word ambulare, which means ''to walk''. This