Feller, David; Dixon, David A
2018-03-08
Two recent papers in this journal called into question the suitability of the correlation consistent basis sets for density functional theory (DFT) calculations, because the sets were designed for correlated methods such as configuration interaction, perturbation theory, and coupled cluster theory. These papers focused on the ability of the correlation consistent and other basis sets to reproduce total energies, atomization energies, and dipole moments obtained from "quasi-exact" multiwavelet results. Undesirably large errors were observed for the correlation consistent basis sets. One of the papers argued that basis sets specifically optimized for DFT methods were "essential" for obtaining high accuracy. In this work we re-examined the performance of the correlation consistent basis sets by resolving problems with the previous calculations and by making more appropriate basis set choices for the alkali and alkaline-earth metals and second-row elements. When this is done, the statistical errors with respect to the benchmark values and with respect to DFT optimized basis sets are greatly reduced, especially in light of the relatively large intrinsic error of the underlying DFT method. When judged with respect to high-quality Feller-Peterson-Dixon coupled cluster theory atomization energies, the PBE0 DFT method used in the previous studies exhibits a mean absolute deviation more than a factor of 50 larger than the quintuple zeta basis set truncation error.
Correlation consistent basis sets for actinides. I. The Th and U atoms
Energy Technology Data Exchange (ETDEWEB)
Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)
2015-02-21
New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilize the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFn (n = 2-4), ThO2, and UFn (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF4, ThF3, ThF2, and ThO2 are all within their experimental uncertainties. Bond dissociation energies of ThF4 and ThF3, as well as UF6 and UF5, were similarly accurate. The derived enthalpies of formation for these species also showed very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF4 and ThO2. The DKH3 atomization energy of ThO2 was calculated to be smaller than the DKH2 value.
Correlation consistent basis sets for actinides. II. The atoms Ac and Np-Lr.
Feng, Rulin; Peterson, Kirk A
2017-08-28
New correlation consistent basis sets optimized using the all-electron third-order Douglas-Kroll-Hess (DKH3) scalar relativistic Hamiltonian are reported for the actinide elements Ac and Np through Lr. These complete the series of sets reported previously for Th-U [K. A. Peterson, J. Chem. Phys. 142, 074105 (2015); M. Vasiliu et al., J. Phys. Chem. A 119, 11422 (2015)]. The new sets range in size from double- to quadruple-zeta and encompass both those optimized for valence (6s6p5f7s6d) and outer-core electron correlation (valence + 5s5p5d). The final sets have been contracted for both the DKH3 and eXact 2-component (X2C) Hamiltonians, yielding cc-pVnZ-DK3/cc-pVnZ-X2C sets for valence correlation and cc-pwCVnZ-DK3/cc-pwCVnZ-X2C sets for outer-core correlation (n = D, T, Q in each case). In order to test the effectiveness of the new basis sets, both atomic and molecular benchmark calculations have been carried out. In the first case, the first three atomic ionization potentials (IPs) of all the actinide elements Ac-Lr have been calculated using the Feller-Peterson-Dixon (FPD) composite approach, primarily with the multireference configuration interaction (MRCI) method. Excellent convergence towards the respective complete basis set (CBS) limits is achieved with the new sets, leading to good agreement with experiment, where experimental values exist, after accurately accounting for spin-orbit effects using the 4-component Dirac-Hartree-Fock method. As a molecular test, the IP and atomization energy (AE) of PuO2 have been calculated, also with the FPD method, but using a coupled cluster approach with spin-orbit coupling accounted for using 4-component MRCI. The present calculations yield an IP0 for PuO2 of 159.8 kcal/mol, in excellent agreement with the experimental electron-transfer bracketing value of 162 ± 3 kcal/mol. Likewise, the calculated 0 K AE of 305.6 kcal/mol is in very good agreement with the currently accepted experimental value of 303.1 ± 5 kcal/mol.
Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-07
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock exchange. The orbitals are expanded in Ahlrichs-type valence-double-zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods.
Correlation consistent basis sets for lanthanides: The atoms La–Lu
Energy Technology Data Exchange (ETDEWEB)
Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)
2016-08-07
Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits is observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd2, GdF, and GdF3. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd2, 151.7 (−36.6) for GdF, and 447.1 (−295.2) for GdF3.
Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L
2017-10-07
We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.
Laun, Joachim; Vilela Oliveira, Daniel; Bredow, Thomas
2018-02-22
Consistent basis sets of double- and triple-zeta valence with polarization quality for the fifth period have been derived for periodic quantum-chemical solid-state calculations with the crystalline-orbital program CRYSTAL. They are an extension of the pob-TZVP basis sets, and are based on the fully relativistic effective core potentials (ECPs) of the Stuttgart/Cologne group and on the def2-SVP and def2-TZVP valence basis sets of the Ahlrichs group. We optimized orbital exponents and contraction coefficients to supply robust and stable self-consistent field (SCF) convergence for a wide range of different compounds. The computed crystal structures are compared to those obtained with standard basis sets available from the CRYSTAL basis set database. For the applied hybrid density functional PW1PW, the average deviations of calculated lattice constants from experimental references are smaller with pob-DZVP and pob-TZVP than with standard basis sets. © 2018 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)
2014-11-21
We present scaling factors for vibrational frequencies calculated within the harmonic approximation using the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2), with and without spin-component scaling (SCS) or spin-opposite scaling (SOS). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets, as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order, as well as their spin-component-scaled variants.
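A frequency scaling factor of this kind is conventionally determined by a one-parameter least-squares fit of scaled harmonic frequencies to reference values. The sketch below illustrates that fit with invented placeholder frequencies (the numbers are assumptions for illustration, not data from the study):

```python
import numpy as np

# Hypothetical harmonic frequencies and reference fundamentals in cm^-1
# (placeholder values, not taken from the study).
harmonic = np.array([3050.0, 1650.0, 1210.0, 980.0])
reference = np.array([2940.0, 1605.0, 1178.0, 955.0])

# Closed-form least-squares scaling factor minimizing sum_i (c*w_i - v_i)^2:
c = np.sum(harmonic * reference) / np.sum(harmonic ** 2)

scaled = c * harmonic
rmsd = np.sqrt(np.mean((scaled - reference) ** 2))
print(round(c, 4), round(rmsd, 1))
```

The closed form follows from setting the derivative of the sum of squared deviations with respect to c to zero; the remaining RMSD is the "remaining deviation" the abstract refers to.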
International Nuclear Information System (INIS)
Blanco, M.; Heller, E.J.
1985-01-01
A new Cartesian basis set is defined that is suitable for the representation of molecular vibration-rotation bound states. The Cartesian basis functions are superpositions of semiclassical states generated through the use of classical trajectories that conform to the intrinsic dynamics of the molecule. Although semiclassical input is employed, the method becomes ab initio through the standard matrix-diagonalization variational method. Special attention is given to classical-quantum correspondences for angular momentum. In particular, it is shown that the use of semiclassical information preferentially leads to angular momentum eigenstates with magnetic quantum number |M| equal to the total angular momentum J. The present method offers a reliable technique for representing highly excited vibrational-rotational states where perturbation techniques are no longer applicable.
International Nuclear Information System (INIS)
Wang, C.S.; Freeman, A.J.
1979-01-01
We present the self-consistent numerical-basis-set linear combination of atomic orbitals (LCAO) discrete variational method for treating the electronic structure of thin films. As in the case of bulk solids, this method provides for thin films accurate solutions of the one-particle local density equations with a non-muffin-tin potential. Hamiltonian and overlap matrix elements are evaluated accurately by means of a three-dimensional numerical Diophantine integration scheme. Application of this method is made to the self-consistent solution of one-, three-, and five-layer Ni(001) unsupported films. The LCAO Bloch basis set consists of valence orbitals (3d, 4s, and 4p states for transition metals) orthogonalized to the frozen-core wave functions. The self-consistent potential is obtained iteratively within the superposition of overlapping spherical atomic charge density model with the atomic configurations treated as adjustable parameters. Thus the crystal Coulomb potential is constructed as a superposition of overlapping spherically symmetric atomic potentials and, correspondingly, the local density Kohn-Sham (α = 2/3) potential is determined from a superposition of atomic charge densities. At each iteration in the self-consistency procedure, the crystal charge density is evaluated using a sampling of 15 independent k points in (1/8)th of the irreducible two-dimensional Brillouin zone. The total density of states (DOS) and projected local DOS (by layer plane) are calculated using an analytic linear energy triangle method (presented as an Appendix) generalized from the tetrahedron scheme for bulk systems. Distinct differences are obtained between the surface and central plane local DOS. The central plane DOS is found to converge rapidly to the DOS of bulk paramagnetic Ni obtained by Wang and Callaway. Only a very small surplus charge (0.03 electron/atom) is found on the surface planes, in agreement with jellium model calculations
Energy Technology Data Exchange (ETDEWEB)
Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)
2015-05-15
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis-set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-square deviations are obtained for the 140 basis-set-limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set-limit CCSD atomization energies of larger molecules, including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
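The two approaches compared above can be sketched in a few lines. The correlation energies below are invented placeholders (not results from the paper), and α = 3 is used as a generic global exponent:

```python
# Hypothetical correlation energies in hartree for cardinal numbers
# L = 2 (DZ) and L = 3 (TZ); placeholder values, not data from the paper.
E_ccsd = {2: -0.300, 3: -0.340}
E_mp2 = {2: -0.280, 3: -0.315}
E_mp2_cbs = -0.331           # assumed MP2/CBS estimate from larger basis sets

def extrapolate(e_lo, e_hi, l_lo, l_hi, alpha=3.0):
    """Two-point extrapolation assuming E(L) = E_CBS + B / L**alpha."""
    b = (e_hi - e_lo) / (l_hi ** -alpha - l_lo ** -alpha)
    return e_hi - b * l_hi ** -alpha

# Global two-point extrapolation of the CCSD correlation energy:
e_cbs_extrap = extrapolate(E_ccsd[2], E_ccsd[3], 2, 3)

# MP2-based additivity: CCSD/TZ plus the MP2 basis-set increment.
e_cbs_additive = E_ccsd[3] + (E_mp2_cbs - E_mp2[3])

print(e_cbs_extrap, e_cbs_additive)
```

In the system-dependent variant described in the abstract, α itself would be fitted to the MP2 convergence behavior of each molecule before being reused for CCSD.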
Conductance calculations with a wavelet basis set
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel
2003-01-01
We present a method based on density functional theory (DFT) for calculating the conductance of a phase-coherent system. The metallic contacts and the central region, where the electron scattering occurs, are treated on the same footing, taking their full atomic and electronic structure into account. The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way.
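The Green's-function route to the linear-response conductance can be illustrated with a minimal non-equilibrium Green's function sketch: a 1D tight-binding chain coupled to wide-band-limit leads. The toy model and all its parameters are assumptions for illustration, not the wavelet implementation of the paper:

```python
import numpy as np

N, t = 4, 1.0                                  # chain length and hopping (toy values)
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))    # central-region Hamiltonian

gamma = 0.5                                    # wide-band-limit lead broadening
Sigma_L = np.zeros((N, N), complex); Sigma_L[0, 0] = -0.5j * gamma
Sigma_R = np.zeros((N, N), complex); Sigma_R[-1, -1] = -0.5j * gamma
Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)    # broadening matrices
Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)

def transmission(E):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger]."""
    G = np.linalg.inv(E * np.eye(N) - H - Sigma_L - Sigma_R)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

# Linear-response conductance in units of the quantum 2e^2/h is T at the
# Fermi energy (taken as E = 0 in this toy model).
print(transmission(0.0))
```

The point of a system-independent, compact-support basis in this context is that the self-energies Sigma_L/Sigma_R couple only to basis functions at the boundaries, which is exactly what makes the central region rigorously separable from the contacts.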
Physically self-consistent basis for modern cosmology
International Nuclear Information System (INIS)
Khlopov, M.Yu.
2000-01-01
Cosmoparticle physics appeared as a natural result of the internal development of cosmology, seeking physical grounds for inflation, baryosynthesis, and nonbaryonic dark matter, and of particle physics going beyond the Standard Model of particle interactions. Its aim is to study the foundations of particle physics and cosmology, and their fundamental relationship, through the combination of the respective indirect cosmological, astrophysical, and physical effects. The ideas on new particles and fields predicted by particle theory, and on their cosmological impact, as well as the methods of cosmoparticle physics used to probe these ideas, are considered, with special analysis of physical mechanisms for inflation, baryosynthesis, and nonbaryonic dark matter. These mechanisms are shown to reflect the main principle of modern cosmology: putting, in place of the formal parameters of cosmological models, the physical processes governing the evolution of the big-bang universe. Their realization on the basis of particle theory induces additional model-dependent predictions accessible to various methods of non-accelerator particle physics. Probes of such predictions, using astrophysical data, are the aim of cosmoarcheology, which studies the astrophysical effects of new physics. The possibility of finding quantitatively definite relationships between cosmological and laboratory effects on the basis of the cosmoparticle approach, as well as of obtaining a unique solution to the problem of physical candidates for inflation, mechanisms of baryogenesis, and multicomponent dark matter, is exemplified in terms of a gauge model with broken family symmetry, underlying horizontal unification and possessing quantitatively definite physical grounds for inflation, baryosynthesis, and effectively multicomponent dark-matter scenarios.
Anacker, Tony; Hill, J Grant; Friedrich, Joachim
2016-04-21
Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.
Setting clear expectations for safety basis development
International Nuclear Information System (INIS)
MORENO, M.R.
2003-01-01
DOE-RL has set clear expectations for a cost-effective approach to achieving compliance with the Nuclear Safety Management requirements (10 CFR 830, Nuclear Safety Rule) that will ensure long-term benefit to Hanford. To facilitate implementation of these expectations, tools were developed to streamline and standardize safety analysis and safety document development, resulting in a shorter and more predictable DOE approval cycle. A Hanford Safety Analysis and Risk Assessment Handbook (SARAH) was issued to standardize methodologies for the development of safety analyses. A Microsoft Excel spreadsheet (RADIDOSE) was issued for the evaluation of radiological consequences for accident scenarios often postulated for Hanford. A standard Site Documented Safety Analysis (DSA) detailing the safety management programs was issued for use as a means of compliance with a majority of the chapters of the 3009 Standard. An in-process review was developed between DOE and the contractor to facilitate DOE approval and provide early course correction. As a result of setting expectations and providing safety analysis tools, the four Hanford Site waste management nuclear facilities were able to integrate into one Master Waste Management Documented Safety Analysis (WM-DSA).
Groebner basis, resultants and the generalized Mandelbrot set
Energy Technology Data Exchange (ETDEWEB)
Geum, Young Hee [Centre of Research for Computational Sciences and Informatics in Biology, Bioindustry, Environment, Agriculture and Healthcare, University of Malaya, 50603 Kuala Lumpur (Malaysia)], E-mail: conpana@empal.com; Hare, Kevin G. [Department of Pure Mathematics, University of Waterloo, Waterloo, Ont., N2L 3G1 (Canada)], E-mail: kghare@math.uwaterloo.ca
2009-10-30
This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.
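The resultant route to bifurcation points can be sketched with SymPy for the simplest, textbook cases of z → z² + c (the values recovered below are standard facts about the Mandelbrot set, used here only to illustrate the elimination step, not results transcribed from the paper):

```python
import sympy as sp

z, c = sp.symbols('z c')
f = z**2 + c

# Tangent (fold) bifurcation: fixed point f(z) = z with multiplier f'(z) = +1.
# The resultant eliminates z, leaving a polynomial condition in c alone.
fold = sp.resultant(f - z, sp.diff(f, z) - 1, z)
print(sp.solve(fold, c))        # cusp of the main cardioid: c = 1/4

# Period-doubling (flip) bifurcation: fixed point with multiplier f'(z) = -1.
flip = sp.resultant(f - z, sp.diff(f, z) + 1, z)
print(sp.solve(flip, c))        # root of the period-2 disk: c = -3/4
```

For higher periods the same elimination applies to the iterated map, which is where a Groebner basis becomes preferable to pairwise resultants as the polynomial systems grow.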
Localized atomic basis set in the projector augmented wave method
DEFF Research Database (Denmark)
Larsen, Ask Hjorth; Vanin, Marco; Mortensen, Jens Jørgen
2009-01-01
We present an implementation of localized atomic-orbital basis sets in the projector augmented wave (PAW) formalism within the density-functional theory. The implementation in the real-space GPAW code provides a complementary basis set to the accurate but computationally more demanding grid...
On the performance of atomic natural orbital basis sets: A full configuration interaction study
International Nuclear Information System (INIS)
Illas, F.; Ricart, J.M.; Rubio, J.; Bagus, P.S.
1990-01-01
The performance of atomic natural orbital (ANO) basis sets has been studied by comparing self-consistent field (SCF) and full configuration interaction (CI) results obtained for the first-row atoms and hydrides. The ANO results have been compared with those obtained using a segmented basis set containing the same number of contracted basis functions. The total energies obtained with the ANO basis sets are always lower than those obtained using the segmented set. However, for the hydrides, the differential electronic correlation energy obtained with the ANO basis set may be smaller than that recovered with the segmented set. We relate this poorer differential correlation energy of the ANO basis set to the fact that only one contracted d function is used in both the ANO and segmented basis sets.
EVALUATION OF CONSISTENCY AND SETTING TIME OF IRANIAN DENTAL STONES
Directory of Open Access Journals (Sweden)
F GOL BIDI
2000-09-01
Introduction: Dental stones are widely used in dentistry, and the success or failure of many dental treatments depends on the accuracy of these gypsum products. The purpose of this study was the evaluation of Iranian dental stones and a comparison between Iranian and foreign ones. In this investigation, consistency and setting time were compared between Pars Dandan, Almas, and Hinrizit stones; the latter is accepted by the ADA (American Dental Association). Consistency and setting time are 2 of the 5 properties required by both ADA specification No. 25 and Iranian Standard Organization specification No. 2569 for the evaluation of dental stones. Methods: The number and preparation of specimens and the test conditions followed ADA specification No. 25, and all measurements were made with a Vicat apparatus. Results: The standard consistency of Almas stone was obtained with 42 ml water per 100 g powder, and the setting time of this stone was 11 ± 0.03 min, which was within the limits of the ADA specification (12 ± 4 min). The standard consistency of Pars Dandan stone was obtained with 31 ml water per 100 g powder, but the setting time of this stone was 5 ± 0.16 min, which was not within the limits of the ADA specification. Discussion: Comparison of the properties of the Iranian and Hinrizit stones suggests two probable problems with the Iranian stones: (1) inhomogeneity of the Iranian stone powder, caused by uncontrolled temperature, pressure, and humidity in the production process; and (2) impurities such as sodium chloride, responsible for the shortened setting time of Pars Dandan.
Basis set approach in the constrained interpolation profile method
International Nuclear Information System (INIS)
Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.
2003-07-01
We propose a simple polynomial basis set that is easily extendable to any desired higher-order accuracy. The method is based on the Constrained Interpolation Profile (CIP) method: the profile is chosen so that the subgrid-scale solution approaches the real solution through constraints from the spatial derivative of the original equation. Thus the solution becomes consistent with the master equation even on the subgrid scale. As the order of the polynomial is increased, this solution quickly converges. Third- and fifth-order polynomials are tested on the one-dimensional Schroedinger equation and are shown to give solutions a few orders of magnitude more accurate than conventional methods for the lower-lying eigenstates. (author)
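The constraint idea — fixing the subgrid polynomial by both the function values and the spatial derivatives at the grid points — can be seen in the third-order (cubic) case. A minimal sketch on a unit cell, not the authors' scheme; the interpolated function is illustrative:

```python
import numpy as np

def cip_cubic(f0, f1, d0, d1):
    """Cubic profile on the unit cell [0, 1], constrained to match the
    function values (f0, f1) and spatial derivatives (d0, d1) at both
    grid points -- the constraint idea underlying CIP-type schemes."""
    df = f1 - f0
    a = 3.0 * df - 2.0 * d0 - d1   # coefficient of x**2
    b = -2.0 * df + d0 + d1        # coefficient of x**3
    return lambda x: f0 + x * (d0 + x * (a + x * b))

# Example: interpolate sin(x) on [0, 1] from endpoint values and slopes only.
p = cip_cubic(np.sin(0.0), np.sin(1.0), np.cos(0.0), np.cos(1.0))
```

Because the derivative information is built in, the cubic already reproduces the function inside the cell to a few parts in a thousand; raising the polynomial order tightens this further.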
GRACE L1b inversion through a self-consistent modified radial basis function approach
Yang, Fan; Kusche, Juergen; Rietbroek, Roelof; Eicker, Annette
2016-04-01
Implementing a regional geopotential representation such as mascons or, more generally, RBFs (radial basis functions) has been widely accepted as an efficient and flexible approach to recovering the gravity field from GRACE (Gravity Recovery and Climate Experiment), especially over higher-latitude regions such as Greenland, because RBFs allow regionally specific regularizations over areas with sufficiently dense GRACE observations. Although existing RBF solutions show better resolution than classical spherical harmonic solutions, the applied regularizations cause spatial leakage, which must be dealt with carefully. Leakage has been shown to be a main error source leading to an evident underestimation of the yearly ice-melting trend over Greenland. Unlike popular post-processing techniques that mitigate leakage signals, this study, for the first time, attempts to reduce the leakage directly in the GRACE L1b inversion by constructing a modified RBF (MRBF) basis in place of the standard RBFs to retrieve a more realistic temporal gravity signal along the coastline. Our point of departure is that the surface mass loading associated with a standard RBF is smooth but disregards physical consistency between continental mass and the passive ocean response. In this contribution, based on earlier work by Clarke et al. (2007), a physically self-consistent MRBF representation is constructed from standard RBFs with the help of the sea level equation: for a given standard RBF basis, the corresponding MRBF basis is obtained by keeping the surface load over the continent unchanged while imposing global mass conservation and an equilibrium response of the oceans. The updated set of MRBFs and the standard RBFs are then individually employed as basis functions to determine the temporal gravity field from GRACE L1b data. In this way, in the MRBF GRACE solution, the passive (e.g. ice melting and land hydrology response) sea level is automatically
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Directory of Open Access Journals (Sweden)
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue with point set models generated by 3D scanning devices, and hence point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest-neighbour search and then projects the point set onto the approximated thin-plate spline surface. The denoising process is thereby achieved and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
Petruzielo, F R; Toulouse, Julien; Umrigar, C J
2011-02-14
A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.
Some considerations about Gaussian basis sets for electric property calculations
Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.
Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) were reported for the atoms from H to Ar. In this work, with the objective of better describing polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, electric dipole moments and static polarizabilities were evaluated for a sample of molecules. Comparisons were made with experimental data and with results obtained using a basis set of similar size whose diffuse functions were optimized for the ground-state energy of the anion.
A consistent set of thermodynamic constants for americium (III) species with hydroxyl and carbonate
International Nuclear Information System (INIS)
Kerrisk, J.F.; Silva, R.J.
1986-01-01
A consistent set of thermodynamic constants for aqueous species and compounds of Am(III) with hydroxyl and carbonate ligands has been developed. The procedure used to develop these constants involved establishing a value for one formation constant at a time in sequential order, starting with the hydrolysis products and hydroxide solids and then proceeding to carbonate species. The EQ3NR chemical-equilibrium model was used to test the constants developed. These constants are consistent with most of the experimental data that form their basis; however, considerable uncertainty still exists in some aspects of the Am(III) data.
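Once such a formation constant is fixed, it feeds directly into a mass-action speciation model of the kind EQ3NR evaluates. A minimal sketch for the first hydrolysis step; the value log β₁ = 6.4 is an illustrative placeholder, not a constant from the paper:

```python
def hydrolysis_fraction(pH, log_beta1=6.4, pKw=14.0):
    """Fraction of dissolved Am(III) present as the first hydrolysis
    product AmOH(2+), from the mass-action law
        beta1 = [AmOH2+] / ([Am3+] [OH-]).
    log_beta1 is an illustrative value, not one derived in the paper."""
    oh = 10.0 ** (pH - pKw)            # [OH-] in mol/L from Kw
    ratio = (10.0 ** log_beta1) * oh   # [AmOH2+] / [Am3+]
    return ratio / (1.0 + ratio)
```

Testing a candidate constant then amounts to checking that the predicted speciation (and derived solubility) reproduces the experimental data before moving on to the next constant in the sequence.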
Determining a Consistent Set of Accounting and Financial Reporting Standards
Anne Le Manh-Béna; Olivier Ramond
2011-01-01
Following the debate on the Conceptual Framework revision undertaken by the IASB and the FASB, this paper discusses three major concerns about the way financial reporting standards should be determined: (1) What is the role of a Conceptual Framework? (2) For whom and for which needs are accounting and financial reporting standards made? (3) What information set should financial reporting provide? We show that the perceived need for a Framework has in practice resulted in weak usefulness. We ...
Quiney, H. M.; Glushkov, V. N.; Wilson, S.; Sabin,; Brandas, E
2001-01-01
A comparison is made of the accuracy achieved in finite difference and finite basis set approximations to the Dirac equation for the ground state of the hydrogen molecular ion. The finite basis set calculations are carried out using a distributed basis set of Gaussian functions, the exponents and
International Nuclear Information System (INIS)
Woon, D.E.; Dunning, T.H. Jr.
1994-01-01
An accurate description of the electrical properties of atoms and molecules is critical for quantitative predictions of the nonlinear properties of molecules and of long-range atomic and molecular interactions between both neutral and charged species. We report a systematic study of the basis sets required to obtain accurate correlated values for the static dipole (α₁), quadrupole (α₂), and octopole (α₃) polarizabilities and the hyperpolarizability (γ) of the rare gas atoms He, Ne, and Ar. Several methods of correlation treatment were examined, including various orders of Møller-Plesset perturbation theory (MP2, MP3, MP4), coupled-cluster theory with and without perturbative treatment of triple excitations [CCSD, CCSD(T)], and singles and doubles configuration interaction (CISD). All of the basis sets considered here were constructed by adding even-tempered sets of diffuse functions to the correlation consistent basis sets of Dunning and co-workers. With multiply-augmented sets we find that the electrical properties of the rare gas atoms converge smoothly to values that are in excellent agreement with the available experimental data and/or previously computed results. As a further test of the basis sets presented here, the dipole polarizabilities of the F⁻ and Cl⁻ anions and of the HCl and N₂ molecules are also reported.
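"Even-tempered" means the exponents form a geometric progression ζ_{k+1} = β ζ_k, so augmentation simply continues the progression, fixed by the two most diffuse exponents already present, toward smaller values. A small sketch of that recipe; the sample exponents are illustrative, not actual cc-pVXZ values:

```python
def even_tempered_diffuse(exponents, n_extra):
    """Extend a set of Gaussian exponents with n_extra diffuse functions
    whose exponents continue the geometric (even-tempered) progression
    defined by the two smallest existing exponents."""
    zetas = sorted(exponents, reverse=True)
    beta = zetas[-1] / zetas[-2]   # ratio < 1 between the two most diffuse
    extra = []
    z = zetas[-1]
    for _ in range(n_extra):
        z *= beta                  # next, even more diffuse exponent
        extra.append(z)
    return extra

# Doubly augmenting an illustrative s-exponent set:
new_diffuse = even_tempered_diffuse([33.87, 5.095, 1.159, 0.3258, 0.1027], 2)
```

Singly, doubly, and triply augmented sets (aug-, d-aug-, t-aug-) correspond to `n_extra` = 1, 2, 3 per angular momentum in this scheme.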
Hellweg, Arnim; Rappoport, Dmitrij
2015-01-14
We report optimized auxiliary basis sets for use with the Karlsruhe segmented contracted basis sets including moderately diffuse basis functions (Rappoport and Furche, J. Chem. Phys., 2010, 133, 134105) in resolution-of-the-identity (RI) post-self-consistent field (post-SCF) computations for the elements H-Rn (except lanthanides). The errors of the RI approximation using optimized auxiliary basis sets are analyzed on a comprehensive test set of molecules containing the most common oxidation states of each element and do not exceed those of the corresponding unaugmented basis sets. During these studies an unsatisfying performance of the def2-SVP and def2-QZVPP auxiliary basis sets for barium was found, and improved sets are provided. We establish the versatility of the def2-SVPD, def2-TZVPPD, and def2-QZVPPD basis sets for RI-MP2 and RI-CC (coupled-cluster) energy and property calculations. The influence of diffuse basis functions on correlation energy, basis set superposition error, atomic electron affinity, dipole moments, and computational timings is evaluated at different levels of theory using benchmark sets and showcase examples.
International Nuclear Information System (INIS)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook; Kim, Woo Youn
2015-01-01
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with results from Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set demonstrates that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene with increasing basis size than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes induced by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward the complete basis set limit by simply decreasing the scaling factor, regardless of the system.
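The uniform-grid structure shared by Lagrange-sinc and finite-difference methods can be illustrated with a one-dimensional sinc-DVR (discrete variable representation) toy problem. This is a generic sketch using the Colbert-Miller kinetic-energy matrix for a harmonic oscillator, not the authors' Kohn-Sham code:

```python
import numpy as np

def sinc_dvr_levels(npts=101, dx=0.2, nlevels=3):
    """Lowest eigenvalues of H = -0.5 d2/dx2 + 0.5 x**2 (atomic units)
    in a Lagrange-sinc (sinc-DVR) basis on a uniform grid of spacing dx.
    Kinetic matrix elements follow Colbert & Miller (1992):
      T_ij = (1 / (2 dx**2)) * (pi**2/3        if i == j
                                2 (-1)**(i-j) / (i-j)**2  otherwise)."""
    n = np.arange(npts)
    x = (n - npts // 2) * dx                     # symmetric grid about 0
    i, j = np.meshgrid(n, n, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        T = np.where(i == j,
                     np.pi ** 2 / 3.0,
                     2.0 * (-1.0) ** (i - j) / (i - j) ** 2.0)
    T /= 2.0 * dx ** 2
    H = T + np.diag(0.5 * x ** 2)                # potential is diagonal in DVR
    return np.linalg.eigvalsh(H)[:nlevels]

levels = sinc_dvr_levels()
```

With this modest grid the three lowest levels already reproduce the exact 0.5, 1.5, 2.5 hartree, and accuracy improves systematically as the spacing (the analogue of the scaling factor) is decreased.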
Basis set expansion for inverse problems in plasma diagnostic analysis
Jones, B.; Ruiz, C. L.
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002); doi:10.1063/1.1482156] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
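The underlying pattern in both examples is the same: expand the unknown source in a basis, push each basis function through the known forward transform, and fit the expansion coefficients to the measured data by least squares. A schematic sketch with a Gaussian basis and an illustrative blur-type instrument matrix (not the actual Abel or time-of-flight kernels):

```python
import numpy as np

# Unknown source f(x) expanded in Gaussian basis functions; the "instrument"
# applies a known linear transform K (here a normalized blur matrix).
x = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.0, 1.0, 15)
B = np.exp(-((x[:, None] - centers[None, :]) / 0.05) ** 2)   # basis matrix

K = np.exp(-((x[:, None] - x[None, :]) / 0.03) ** 2)         # forward transform
K /= K.sum(axis=1, keepdims=True)

true_coeffs = np.zeros(15)
true_coeffs[[4, 9]] = [1.0, 0.5]               # two emission features
data = K @ (B @ true_coeffs)                   # forward-transformed "measurement"

A = K @ B                                      # each basis function, transformed
coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
recon = B @ coeffs                             # recovered source profile
```

In practice the measurement carries noise, so the number and width of basis functions set the practical resolution of the inversion, and propagating the data uncertainties through the least-squares solve yields error bars on `recon`.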
Radiobiological basis for setting neutron radiation safety standards
International Nuclear Information System (INIS)
Straume, T.
1985-01-01
Present neutron standards, adopted more than 20 yr ago from a weak radiobiological data base, have been in doubt for a number of years and are currently under challenge. Moreover, recent dosimetric re-evaluations indicate that Hiroshima neutron doses may have been much lower than previously thought, suggesting that direct data for neutron-induced cancer in humans may in fact not be available. These recent developments make it urgent to determine the extent to which neutron cancer risk in man can be estimated from data that are available. Two approaches are proposed here that are anchored in particularly robust epidemiological and experimental data and appear most likely to provide reliable estimates of neutron cancer risk in man. The first approach uses gamma-ray dose-response relationships for human carcinogenesis, available from Nagasaki (Hiroshima data are also considered), together with highly characterized neutron and gamma-ray data for human cytogenetics. When tested against relevant experimental data, this approach either adequately predicts or somewhat overestimates neutron tumorigenesis (and mutagenesis) in animals. The second approach also uses the Nagasaki gamma-ray cancer data, but together with neutron RBEs from animal tumorigenesis studies. Both approaches give similar results and provide a basis for setting neutron radiation safety standards. They appear to be an improvement over previous approaches, including those that rely on highly uncertain maximum neutron RBEs and unnecessary extrapolations of gamma-ray data to very low doses. Results suggest that, at the presently accepted neutron dose limit of 0.5 rad/yr, the cancer mortality risk to radiation workers is not very different from accidental mortality risks to workers in various nonradiation occupations
Multiple-scattering theory with a truncated basis set
International Nuclear Information System (INIS)
Zhang, X.; Butler, W.H.
1992-01-01
Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum l_max for which the higher angular momentum phase shifts, δ_l (l > l_max), are sufficiently small. Generally, the wave-function coefficients which are calculated from the secular equation are also truncated at l_max. Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentials.
DEFF Research Database (Denmark)
Silva-Junior, Mario R.; Sauer, Stephan P. A.; Schreiber, Marko
2010-01-01
Vertical electronic excitation energies and one-electron properties of 28 medium-sized molecules from a previously proposed benchmark set are revisited using the augmented correlation-consistent triple-zeta aug-cc-pVTZ basis set in CC2, CCSDR(3), and CC3 calculations. The results are compared to those obtained previously with the smaller TZVP basis set. For each of the three coupled cluster methods, a correlation coefficient greater than 0.994 is found between the vertical excitation energies computed with the two basis sets. The deviations of the CC2 and CCSDR(3) results from the CC3 reference values are very similar for both basis sets, thus confirming previous conclusions on the intrinsic accuracy of CC2 and CCSDR(3). This similarity justifies the use of CC2- or CCSDR(3)-based corrections to account for basis set incompleteness in CC3 studies of vertical excitation energies. For oscillator...
International Nuclear Information System (INIS)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin
2016-01-01
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
Dynamical pruning of static localized basis sets in time-dependent quantum dynamics
McCormack, D.A.
2006-01-01
We investigate the viability of dynamical pruning of localized basis sets in time-dependent quantum wave packet methods. Basis functions that have a very small population at any given time are removed from the active set. The basis functions themselves are time independent, but the set of active
Geboy, Nicholas J.; Engle, Mark A.; Hower, James C.
2013-01-01
Several standard methods require coal to be ashed prior to geochemical analysis. Researchers, however, are commonly interested in the compositional nature of the whole coal, not its ash. Coal geochemical data for any given sample can, therefore, be reported on the ash basis on which they are analyzed or on the whole-coal basis to which the ash-basis data are back-calculated. Basic univariate (mean, variance, distribution, etc.) and bivariate (correlation coefficients, etc.) measures of the same suite of samples can be very different depending on which reporting basis the researcher uses. These differences are not real but an artifact resulting from the compositional nature of most geochemical data; the technical term for this artifact is subcompositional incoherence. Since compositional data are forced to a constant sum, such as 100% or 1,000,000 ppm, they possess curvilinear properties which make the Euclidean principles on which most statistical tests rely inappropriate, leading to erroneous results. Applying the isometric logratio (ilr) transformation to compositional data allows them to be represented in Euclidean space and evaluated using traditional tests without fear of producing mathematically inconsistent results. When applied to coal geochemical data, the issues related to differences between the two reporting bases are resolved, as demonstrated in this paper using major oxide and trace metal data from the Pennsylvanian-age Pond Creek coal of eastern Kentucky, USA. Following ilr transformation, univariate statistics, such as mean and variance, still differ between the ash basis and the whole-coal basis, but in predictable and calculable ways. Further, the stability between two different components, a bivariate measure, is identical regardless of the reporting basis. The application of ilr transformations addresses both the erroneous results of Euclidean-based measurements on compositional data and the inconsistencies observed in coal geochemical data.
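The ilr transform itself is short once an orthonormal basis of the log-ratio space is chosen. A minimal sketch using pivot (Helmert-style) coordinates; this illustrates the transform generically rather than reproducing the authors' analysis:

```python
import numpy as np

def ilr(parts):
    """Isometric log-ratio transform of a composition (positive parts).
    Uses pivot (Helmert-style) balances, an orthonormal basis of the
    log-ratio space, so the result is scale-invariant:
    ilr(c * x) == ilr(x) for any constant c > 0 -- which is exactly why
    the ash basis and whole-coal basis give consistent answers."""
    z = np.log(np.asarray(parts, dtype=float))
    d = z.size
    out = np.empty(d - 1)
    for k in range(1, d):
        # balance between the geometric mean of the first k parts and part k+1
        out[k - 1] = np.sqrt(k / (k + 1.0)) * (z[:k].mean() - z[k])
    return out
```

Because a whole-coal composition is just the ash composition rescaled by the ash yield (for non-volatile components), its ilr coordinates are unchanged, so correlations and stability measures computed on them agree across reporting bases.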
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
Energy Technology Data Exchange (ETDEWEB)
Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)
2015-12-28
We introduce an efficient method to construct optimal and system-adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, those containing a large number of electrons and atoms. We present applications to the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.
International Nuclear Information System (INIS)
Vasin, M.F
1999-01-01
The consistency of the classification of prophylactic antiradiation drugs is considered in light of the history of their discovery, the theory of radioprotection mechanisms, and their use in applied medicine. Prophylactic drugs consist of radioprotectors with short-term or long-term action, drugs stimulating radioresistance, drugs suppressing symptoms of the primary radiation reaction, drugs for early detoxication, and drugs for the adsorption and elimination of radionuclides from the organism [ru]
Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data
Directory of Open Access Journals (Sweden)
Tintle Nathan L
2012-08-01
Background. Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. Results. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Conclusions. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.
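As a concrete illustration of what a consistency metric can look like, one simple option is the mean pairwise correlation of the expression profiles of genes in a set across arrays: co-regulated sets score high, arbitrary groupings score near zero. This is a generic sketch, not one of the specific metrics defined in the paper:

```python
import numpy as np

def mean_pairwise_correlation(expr):
    """Mean pairwise Pearson correlation across the rows (genes) of an
    expression matrix (genes x arrays): a simple measure of how
    consistently a gene set behaves across experiments."""
    r = np.corrcoef(expr)
    iu = np.triu_indices_from(r, k=1)   # unique gene pairs only
    return r[iu].mean()

# Synthetic example: a co-regulated set vs an unrelated set of 8 "genes".
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 3.0 * np.pi, 40))
coherent = signal + 0.1 * rng.normal(size=(8, 40))
unrelated = rng.normal(size=(8, 40))
score_coherent = mean_pairwise_correlation(coherent)
score_unrelated = mean_pairwise_correlation(unrelated)
```

A gene-set resource whose sets systematically score higher by such a metric will, on average, yield more powerful set-level statistical tests.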
Scribano, Yohann; Lauvergnat, David M; Benoit, David M
2010-09-07
In this paper, we couple a numerical kinetic-energy operator approach to the direct vibrational self-consistent field (VSCF)/vibrational configuration interaction (VCI) method for the calculation of anharmonic vibrational frequencies. By combining this with fast-VSCF, an efficient direct evaluation of the ab initio potential-energy surface (PES), we introduce a general formalism for the computation of vibrational bound states of molecular systems exhibiting large-amplitude motion such as methyl-group torsion. We validate our approach on an analytical two-dimensional model and apply it to the methanol molecule. We show that curvilinear coordinates lead to a significant improvement in the VSCF/VCI description of the torsional frequency in methanol, even for a simple two-mode coupling expansion of the PES. Moreover, we demonstrate that a curvilinear formulation of the fast-VSCF/VCI scheme improves its speed by a factor of two and its accuracy by a factor of three.
Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.
Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads
2018-06-27
We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparison with plane-wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp² (graphene) and curved carbon (C₆₀). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.
International Nuclear Information System (INIS)
Chen Pingxing; Li Chengzu
2004-01-01
Nonlocality without entanglement is an interesting field. One manifestation of quantum nonlocality without entanglement is the possible local indistinguishability of orthogonal product states. In this paper we analyze the character of the operators needed to distinguish the elements of a full product basis set in a multipartite system, and show that perfectly distinguishing these product bases requires only local projective measurements and classical communication, and that these measurements do not damage the product basis states. Employing these conclusions, one can easily discuss the local distinguishability of the elements of any full product basis set. Finally, we discuss the generalization of these results to the local distinguishability of the elements of an incomplete product basis set.
New basis set for the prediction of the specific rotation in flexible biological molecules
DEFF Research Database (Denmark)
Baranowska-Łączkowska, Angelika; Łączkowski, Krzysztof Z.; Henriksen, Christian
2016-01-01
are compared to those obtained with the (d-)aug-cc-pVXZ (X = D, T and Q) basis sets of Dunning et al. The ORP values are in good overall agreement with the aug-cc-pVTZ results making the ORP a good basis set for routine TD-DFT optical rotation calculations of conformationally flexible molecules. The results...
Plumley, Joshua A; Dannenberg, J J
2011-06-01
We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D, and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer, with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-density functional theory (non-DFT) molecular orbital (MO) calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise (CP) corrected PES. The calculated interaction energies (ΔEs) with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, because of error compensation, the smaller basis sets gave the best results (in comparison to experimental and high-level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. As many applications are complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: (1) D95(d,p) with B3LYP, B97D, M06, or MPWB1K; (2) 6-311G(d,p) with B3LYP; (3) D95++(d,p) with B3LYP, B97D, or MPWB1K; (4) 6-311++G(d,p) with B3LYP or B97D; and (5) aug-cc-pVDZ with M05-2X, M06-2X, or X3LYP. Copyright © 2011 Wiley Periodicals, Inc.
Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets
Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.
2011-10-01
The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit in an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose, the results of Finite Field calculations up to the level of Coupled Cluster theory including single, double, triple, quadruple, and perturbative pentuple excitations [CCSDTQ(P)] were used, in conjunction with suitable extrapolations of energies obtained using augmented and doubly augmented Dunning correlation consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels of electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated from Born-Oppenheimer molecular dynamics simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
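The extrapolations mentioned above are typically two-point inverse-cubic fits in the cardinal number of the correlation consistent basis. A minimal sketch of that Helgaker-style formula (the correlation energies below are hypothetical, not values from the paper):

```python
def cbs_extrapolate(e_x: float, e_y: float, x: int, y: int) -> float:
    """Two-point extrapolation assuming E(n) = E_CBS + A / n**3,
    where n is the cardinal number of the basis (T = 3, Q = 4, ...)."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical correlation energies (hartree) in TZ- and QZ-quality bases:
e_cbs = cbs_extrapolate(-0.2750, -0.2810, 3, 4)
# The CBS estimate lies below (is more negative than) the QZ value.
```

The extrapolation is applied to correlation energies only; Hartree-Fock energies converge much faster and are usually taken at the largest basis or extrapolated separately.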
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background: Robust point matching (RPM) has been extensively used in non-rigid image registration to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images, because RPM is a unidirectional matching approach. Improving image registration based on RPM is therefore an important issue. Methods: In our work, a consistent image registration approach based on point-set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently. Inverse-consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse-consistency errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is preserved well by our algorithm even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend overall, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated; again, our algorithm achieves lower errors for the same number of iterations.
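The inverse-consistency property discussed above can be quantified as the distance between each point and its image under the composed forward-then-backward map. A toy sketch with an affine pair (the actual method uses thin-plate splines; all names and numbers here are illustrative):

```python
import numpy as np

def inverse_consistency_error(points, forward, backward):
    """Mean distance between each point and backward(forward(point)).
    A perfectly inverse-consistent transform pair gives zero error."""
    mapped = backward(forward(points))
    return float(np.mean(np.linalg.norm(mapped - points, axis=1)))

# Hypothetical affine forward map and its exact inverse as the backward map:
A = np.array([[1.1, 0.2], [0.0, 0.9]])
b = np.array([3.0, -1.0])
fwd = lambda p: p @ A.T + b
bwd = lambda p: (p - b) @ np.linalg.inv(A).T

pts = np.random.default_rng(0).normal(size=(50, 2))
err = inverse_consistency_error(pts, fwd, bwd)  # ~0 for exact inverses
```

In the paper's setting the two transforms are estimated independently, so this error is generally nonzero and is penalized in the RPM cost function.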
Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin
2011-06-07
The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of local periodic second-order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics.
Factors Influencing the Degree of Intrajudge Consistency during the Standard Setting Process.
Plake, Barbara S.; And Others
The accuracy of standards obtained from judgmental methods is dependent on the quality of the judgments made by experts throughout the standard setting process. One important dimension of the quality of these judgments is the consistency of the judges' perceptions with item performance of minimally competent candidates. Several interrelated…
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
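The selection procedure described above combines an information criterion with hydrologic-consistency constraints: structures failing the consistency tests are excluded before complexity is judged. A schematic sketch (the model structures, parameter counts, likelihoods, and the bias-based consistency test are all hypothetical):

```python
def aic(n_params: int, log_likelihood: float) -> float:
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def select_model(candidates, consistency_test):
    """Keep only candidates passing the consistency test, then pick the
    minimum-AIC structure among the survivors (None if none survive)."""
    ok = [c for c in candidates if consistency_test(c)]
    return min(ok, key=lambda c: aic(c["k"], c["logL"])) if ok else None

models = [  # hypothetical structures: parameter count, fitted log-likelihood, flow bias
    {"name": "2-bucket", "k": 4, "logL": -120.0, "bias": 0.02},
    {"name": "3-bucket", "k": 7, "logL": -118.5, "bias": 0.01},
    {"name": "leaky",    "k": 5, "logL": -119.0, "bias": 0.30},
]
# Consistency test: reject structures with more than 10% water-balance bias.
best = select_model(models, lambda c: abs(c["bias"]) < 0.1)
```

Note how the "leaky" structure, despite a competitive likelihood, is rejected by the consistency filter before the AIC comparison is made.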
Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory
Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.
1990-01-01
New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.
Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.
Saller, Maximilian A C; Habershon, Scott
2017-07-11
Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
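Matching Pursuit, used above to prune the basis, greedily selects the dictionary element most correlated with the current residual and subtracts its contribution. A generic numpy sketch (not the authors' code; their Gaussian-wavepacket basis is replaced here by an arbitrary orthonormal dictionary):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse approximation: repeatedly pick the dictionary column
    (unit-norm) with the largest projection onto the residual."""
    residual = signal.copy()
    chosen, coeffs = [], []
    for _ in range(n_atoms):
        projections = dictionary.T @ residual
        k = int(np.argmax(np.abs(projections)))
        chosen.append(k)
        coeffs.append(float(projections[k]))
        residual = residual - projections[k] * dictionary[:, k]
    return chosen, coeffs, residual

rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.normal(size=(20, 8)))  # orthonormal dictionary columns
sig = 2.0 * D[:, 0] + 1.0 * D[:, 3]            # signal built from two "atoms"
atoms, c, res = matching_pursuit(sig, D, 2)    # recovers atoms 0 and 3
```

With an orthonormal dictionary the two contributing atoms are recovered exactly; for the nonorthogonal Gaussian wavepackets of the paper the residual decays more slowly and more iterations are needed.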
Molecular basis sets - a general similarity-based approach for representing chemical spaces.
Raghavendra, Akshay S; Maggiora, Gerald M
2007-01-01
A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal; but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties, including the average squared error and average norm of molecular vectors represented in the space; the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size, and several comparative distance measures such as Spearman rank correlation and Kruskal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent nonetheless behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed, providing that the appropriate inner products can be directly
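The symmetric orthogonalization referred to above is Löwdin's S^(-1/2) construction, which yields an orthonormal set maximally similar (in a least-squares sense) to the original nonorthogonal vectors. A minimal sketch with a hypothetical 3x3 similarity (overlap) matrix:

```python
import numpy as np

def lowdin_orthogonalize(S):
    """Symmetric (Loewdin) orthogonalization: return X = S^(-1/2) via
    eigendecomposition of the symmetric positive-definite matrix S."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

# Hypothetical similarity matrix: unit self-similarity on the diagonal,
# nonzero similarities between distinct molecules off the diagonal.
S = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
X = lowdin_orthogonalize(S)
# X @ S @ X is the identity: the transformed basis is orthonormal.
```

The same S^(-1/2) transformation is the standard way to orthogonalize atomic-orbital overlap matrices in electronic structure codes.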
Zaleśny, Robert; Baranowska-Łączkowska, Angelika; Medveď, Miroslav; Luis, Josep M
2015-09-08
In the present work, we perform an assessment of several property-oriented atomic basis sets in computing (hyper)polarizabilities, with a focus on the vibrational contributions. Our analysis encompasses the Pol and LPol-ds basis sets of Sadlej and co-workers, the def2-SVPD and def2-TZVPD basis sets of Rappoport and Furche, and the ORP basis set of Baranowska-Łączkowska and Łączkowski. Additionally, we use the d-aug-cc-pVQZ and aug-cc-pVTZ basis sets of Dunning and co-workers to determine the reference estimates of the investigated electric properties for small- and medium-sized molecules, respectively. We combine these basis sets with ab initio post-Hartree-Fock quantum-chemistry approaches (including the coupled cluster method) to calculate electronic and nuclear relaxation (hyper)polarizabilities of carbon dioxide, formaldehyde, cis-diazene, and a medium-sized Schiff base. The primary finding of our study is that, among all studied property-oriented basis sets, only the def2-TZVPD and ORP basis sets yield nuclear relaxation (hyper)polarizabilities of small molecules with average absolute errors less than 5.5%. A similar accuracy for the nuclear relaxation (hyper)polarizabilities of the studied systems can also be reached using the aug-cc-pVDZ basis set (5.3%), although for more accurate calculations of vibrational contributions, i.e., average absolute errors less than 1%, the aug-cc-pVTZ basis set is recommended. It was also demonstrated that anharmonic contributions to the first and second hyperpolarizabilities of a medium-sized Schiff base are particularly difficult to predict accurately at the correlated level using property-oriented basis sets. For instance, the value of the nuclear relaxation first hyperpolarizability computed at the MP2/def2-TZVPD level of theory is roughly 3 times larger than that determined using the aug-cc-pVTZ basis set. We link the failure of the def2-TZVPD basis set with the difficulties in predicting the first-order field
Recommended ALIs and DACs for 10 CFR Part 20: A consistent numerical set
Energy Technology Data Exchange (ETDEWEB)
Eckerman, K.F.
1996-05-01
Appendix B to 10 CFR Part 20 contains numerical data for controlling the intake of radionuclides in the workplace or in the environment. These data, derived from the recommendations of the International Commission on Radiological Protection (ICRP), do not provide a numerically consistent basis for demonstrating compliance with the limitation on dose stated in the regulation. This situation is largely a consequence of the numerical procedures used by the ICRP, which did not maintain, in a strict numerical sense, the hierarchical relationship among the radiation protection quantities. In this work, recommended values of the quantities in Appendix B to 10 CFR Part 20 are developed using the dose coefficients of the applicable ICRP publications and a numerical procedure which ensures that the tabulated quantities are numerically consistent.
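Numerical consistency here means deriving each secondary limit directly from the dose coefficient rather than rounding each quantity independently. A schematic sketch using the standard occupational conventions (0.05 Sv annual limit; 2400 m³ of air breathed per working year, i.e. 2000 h at 1.2 m³/h); the dose coefficient below is hypothetical:

```python
def ali_bq(dose_limit_sv: float, dose_per_bq_sv: float) -> float:
    """Annual Limit on Intake (Bq): the intake delivering the annual
    committed effective dose limit for a given dose coefficient."""
    return dose_limit_sv / dose_per_bq_sv

def dac_bq_per_m3(ali: float, breathing_volume_m3: float = 2400.0) -> float:
    """Derived Air Concentration (Bq/m^3): the ALI spread over the air a
    reference worker inhales in a working year."""
    return ali / breathing_volume_m3

# Hypothetical inhalation dose coefficient e(50) = 1.0e-8 Sv/Bq:
ali = ali_bq(0.05, 1.0e-8)   # 5.0e6 Bq
dac = dac_bq_per_m3(ali)     # ALI / 2400 m^3
```

Because the DAC is computed from the ALI, which is computed from the dose coefficient, the three tabulated quantities remain in a strict hierarchical relationship, which is the consistency property the report seeks.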
A Hartree–Fock study of the confined helium atom: Local and global basis set approaches
Energy Technology Data Exchange (ETDEWEB)
Young, Toby D., E-mail: tyoung@ippt.pan.pl [Zakład Metod Komputerowych, Instytut Podstawowych Prolemów Techniki Polskiej Akademia Nauk, ul. Pawińskiego 5b, 02-106 Warszawa (Poland); Vargas, Rubicelia [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico); Garza, Jorge, E-mail: jgo@xanum.uam.mx [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico)
2016-02-15
Two different basis set methods are used to calculate atomic energy within Hartree–Fock theory. The first is a local basis set approach using high-order real-space finite elements and the second is a global basis set approach using modified Slater-type orbitals. These two approaches are applied to the confined helium atom and are compared by calculating one- and two-electron contributions to the total energy. As a measure of the quality of the electron density, the cusp condition is analyzed. - Highlights: • Two different basis set methods for atomic Hartree–Fock theory. • Galerkin finite element method and modified Slater-type orbitals. • Confined atom model (helium) under small-to-extreme confinement radii. • Detailed analysis of the electron wave-function and the cusp condition.
The Bethe Sum Rule and Basis Set Selection in the Calculation of Generalized Oscillator Strengths
DEFF Research Database (Denmark)
Cabrera-Trujillo, Remigio; Sabin, John R.; Oddershede, Jens
1999-01-01
Fulfillment of the Bethe sum rule may be construed as a measure of basis set quality for atomic and molecular properties involving the generalized oscillator strength distribution. It is first shown that, in the case of a complete basis, the Bethe sum rule is fulfilled exactly in the random phase...
Energy optimized Gaussian basis sets for the atoms Tl - Rn
International Nuclear Information System (INIS)
Faegri, K. Jr.
1987-01-01
Energy optimized Gaussian basis sets have been derived for the atoms Tl-Rn. Two sets are presented: a (20,16,10,6) set and a (22,17,13,8) set. The smaller sets yield atomic energies 107 to 123 mH above the numerical Hartree-Fock values, while the larger sets give energies 11 mH above the numerical results. Energy trends from the smaller sets indicate that reduced shielding by p-electrons may place a greater demand on the flexibility of the d- and f-orbital description for the lighter elements of the series.
A two-center-oscillator-basis as an alternative set for heavy ion processes
International Nuclear Information System (INIS)
Tornow, V.; Reinhard, P.G.; Drechsel, D.
1977-01-01
The two-center-oscillator basis, which is constructed from harmonic oscillator wave functions developed about two different centers, suffers from numerical problems at small center separations due to the overcompleteness of the set. In order to overcome these problems we admix higher oscillator wave functions before the orthogonalization or antisymmetrization, respectively. This yields a numerically stable basis set at each center separation. The results obtained for the potential energy surface are comparable with the results of more elaborate models. (orig.)
On sets of vectors of a finite vector space in which every subset of basis size is a basis II
Ball, Simeon; De Beule, Jan
2012-01-01
This article contains a proof of the MDS conjecture for k ≤ 2p - 2. That is, if S is a set of vectors of F_q^k in which every subset of S of size k is a basis, where q = p^h, p is prime, q is not prime, and k ≤ 2p - 2, then |S| ≤ q + 1. It also contains a short proof of the same fact for k ≤ p, for all q.
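The property in question ("every k-subset is a basis") can be checked by brute force for small parameters. A sketch below verifies it for the normal rational curve over GF(5), a classical construction (chosen here for illustration, not taken from the abstract) that attains the bound |S| = q + 1:

```python
from itertools import combinations

def det_mod_p(M, p):
    """Determinant of a 3x3 integer matrix modulo p, via cofactor expansion."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % p

p, k = 5, 3
# Normal rational curve {(1, t, t^2)} plus the point (0, 0, 1):
# q + 1 = 6 vectors in F_5^3 in which every 3-subset is a basis
# (Vandermonde determinants are nonzero for distinct t).
S = [(1, t, t * t % p) for t in range(p)] + [(0, 0, 1)]
assert all(det_mod_p(list(sub), p) != 0 for sub in combinations(S, k))
```

Adding any seventh vector breaks the property, which is exactly the content of the bound |S| ≤ q + 1 for these parameters.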
International Nuclear Information System (INIS)
Kari, R.E.; Mezey, P.G.; Csizmadia, I.G.
1975-01-01
Expressions are given for calculating the energy gradient vector in the exponent space of Gaussian basis sets, and a technique to optimize orbital exponents using the method of conjugate gradients is described. The method is tested on the (9s/5p) Gaussian basis space and optimum exponents are determined for the carbon atom. The analysis of the results shows that the calculated one-electron properties converge more slowly to their optimum values than the total energy converges to its optimum value. In addition, basis sets approximating the optimum total energy very well can still be markedly improved for the prediction of one-electron properties. For smaller basis sets, this improvement does not warrant the necessary expense.
Continuum contributions to dipole oscillator-strength sum rules for hydrogen in finite basis sets
DEFF Research Database (Denmark)
Oddershede, Jens; Ogilvie, John F.; Sauer, Stephan P. A.
2017-01-01
Calculations of the continuum contributions to dipole oscillator sum rules for hydrogen are performed using both exact and basis-set representations of the stick spectra of the continuum wave function. We show that the same results are obtained for the sum rules in both cases, but that the convergence towards the final results with increasing excitation energies included in the sum over states is slower in the basis-set cases when we use the best basis. We argue also that this conclusion most likely holds also for larger atoms or molecules.
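A dipole oscillator-strength sum rule S(μ) over a stick spectrum is simply a weighted moment of the sticks. In the sketch below, the discrete 1s→np oscillator strengths and energies are the familiar hydrogen values, while the "continuum" sticks are invented so that the Thomas-Reiche-Kuhn sum S(0) closes to the electron count (1 for hydrogen):

```python
import numpy as np

def sum_rule(energies, strengths, mu=0):
    """Moment S(mu) = sum_i f_i * E_i**mu of a stick spectrum
    (energies in hartree, f_i dimensionless oscillator strengths)."""
    return float(np.sum(np.asarray(strengths) * np.asarray(energies) ** mu))

# Hydrogen discrete lines 1s->2p, 3p, 4p, then hypothetical continuum sticks:
E = [0.375, 0.444, 0.469, 0.6, 0.9, 1.5]
f = [0.4162, 0.0791, 0.0290, 0.20, 0.15, 0.1257]
s0 = sum_rule(E, f)  # approaches 1 (TRK) as more sticks are included
```

Roughly half the total strength sits in the continuum, which is why truncating the sum over states converges slowly, the point the abstract makes about basis-set stick spectra.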
A consistent data set of Antarctic ice sheet topography, cavity geometry, and global bathymetry
Directory of Open Access Journals (Sweden)
R. Timmermann
2010-12-01
Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic topography data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional surveys and maps into a global data set. We use the S-2004 global 1-min bathymetry as the backbone and add an improved version of the BEDMAP bedrock topography (ALBMAP) for an area that roughly coincides with the Antarctic continental shelf. The position of the merging line is individually chosen in different sectors in order to capture the best of both data sets. High-resolution gridded data for ice shelf topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier are carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of the Alfred Wegener Institute (AWI), the British Antarctic Survey (BAS) and the Lamont-Doherty Earth Observatory (LDEO), gridded, and blended into the existing bathymetry map. The resulting global 1-min Refined Topography data set (RTopo-1) contains self-consistent maps for upper and lower ice surface heights, bedrock topography, and surface type (open ocean, grounded ice, floating ice, bare land surface). The data set is available in NetCDF format from the PANGAEA database at doi:10.1594/pangaea.741917.
Energy Technology Data Exchange (ETDEWEB)
Feller, D.F.
1979-01-01
The behavior of the two exponential parameters in an even-tempered gaussian basis set is investigated as the set optimally approaches an integral transform representation of the radial portion of atomic and molecular orbitals. This approach permits a highly accurate assessment of the Hartree-Fock limit for atoms and molecules.
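An even-tempered set fixes all exponents with just the two parameters discussed above, as the geometric progression ζ_k = α·β^k. A minimal sketch with hypothetical α and β (real sets optimize both per atom and angular momentum):

```python
import numpy as np

def even_tempered_exponents(alpha: float, beta: float, n: int):
    """Even-tempered Gaussian exponents zeta_k = alpha * beta**k,
    k = 0..n-1: an n-term geometric progression fixed by two parameters."""
    return alpha * beta ** np.arange(n)

# Hypothetical parameters for illustration only:
zetas = even_tempered_exponents(0.02, 2.5, 10)
ratios = zetas[1:] / zetas[:-1]  # constant ratio beta between neighbors
```

As n grows (with α → 0 and β → 1 in a coordinated way), the discrete sum over Gaussians approaches the integral-transform representation of the orbital that the abstract refers to.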
Energy Technology Data Exchange (ETDEWEB)
Witte, Jonathon [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Neaton, Jeffrey B. [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Physics, University of California, Berkeley, California 94720 (United States); Kavli Energy Nanosciences Institute at Berkeley, Berkeley, California 94720 (United States); Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)
2016-05-21
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen’s pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
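The counterpoise correction applied above evaluates both monomer energies in the full dimer basis (the Boys-Bernardi scheme), so basis set superposition error cancels out of the binding energy. A schematic sketch with hypothetical hartree values:

```python
def cp_binding_energy(e_dimer_ab: float, e_mono1_ab: float, e_mono2_ab: float) -> float:
    """Counterpoise-corrected binding energy: the dimer energy minus the
    monomer energies, all evaluated in the same full dimer basis AB."""
    return e_dimer_ab - e_mono1_ab - e_mono2_ab

# Hypothetical energies for a noncovalent dimer (hartree):
e_bind = cp_binding_energy(-152.700, -76.348, -76.344)  # about -0.008 Eh
```

Without the correction, each monomer "borrows" the partner's basis functions only in the dimer calculation, artificially deepening the well; the effect shrinks as the basis approaches completeness, consistent with the convergence trends reported above.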
Hill, J. Grant; Peterson, Kirk A.
2017-12-01
New correlation consistent basis sets based on pseudopotential (PP) Hamiltonians have been developed from double- to quintuple-zeta quality for the late alkali (K-Fr) and alkaline earth (Ca-Ra) metals. These are accompanied by new all-electron basis sets of double- to quadruple-zeta quality that have been contracted for use with both Douglas-Kroll-Hess (DKH) and eXact 2-Component (X2C) scalar relativistic Hamiltonians. Sets for valence correlation (ms), cc-pVnZ-PP and cc-pVnZ-(DK,DK3/X2C), in addition to outer-core correlation [valence + (m-1)sp], cc-p(w)CVnZ-PP and cc-pwCVnZ-(DK,DK3/X2C), are reported. The -PP sets have been developed for use with small-core PPs [I. S. Lim et al., J. Chem. Phys. 122, 104103 (2005) and I. S. Lim et al., J. Chem. Phys. 124, 034107 (2006)], while the all-electron sets utilized second-order DKH Hamiltonians for 4s and 5s elements and third-order DKH for 6s and 7s. The accuracy of the basis sets is assessed through benchmark calculations at the coupled-cluster level of theory for both atomic and molecular properties. Not surprisingly, it is found that outer-core correlation is vital for accurate calculation of the thermodynamic and spectroscopic properties of diatomic molecules containing these elements.
International Nuclear Information System (INIS)
Massobrio, C.; Ruiz, E.
2003-01-01
Density functional theory, in combination with a) a careful choice of the exchange-correlation part of the total energy and b) localized basis sets for the electronic orbitals, has become the method of choice for calculating the exchange couplings in magnetic molecular complexes. Orbital expansion on plane waves can be seen as an alternative basis set especially suited to allow optimization of newly synthesized materials of unknown geometries. However, little is known on the predictive power of this scheme to yield quantitative values for exchange coupling constants J as small as a few hundredths of an eV (50-300 cm⁻¹). We have used density functional theory and a plane waves basis set to calculate the exchange couplings J of three homodinuclear Cu-based molecular complexes with experimental values ranging from +40 cm⁻¹ to -300 cm⁻¹. The plane waves basis set proves as accurate as the localized basis set, thereby suggesting that this approach can be reliably employed to predict and rationalize the magnetic properties of molecular-based materials. (author)
Czech Academy of Sciences Publication Activity Database
Kupka, T.; Nieradka, M.; Stachów, M.; Pluta, T.; Nowak, P.; Kjaer, H.; Kongsted, J.; Kaminský, Jakub
2012-01-01
Roč. 116, č. 14 (2012), s. 3728-3738 ISSN 1089-5639 R&D Projects: GA ČR GPP208/10/P356 Institutional research plan: CEZ:AV0Z40550506 Keywords : consistent basis-sets * density-functional methods * ab-initio calculations * polarization propagator approximation Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.771, year: 2012
Zhu, Wuming; Trickey, S. B.
2017-12-01
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields, which allows identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and from a few hundredths of a millihartree to a few millihartree for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
Basis set convergence on static electric dipole polarizability calculations of alkali-metal clusters
International Nuclear Information System (INIS)
Souza, Fabio A. L. de; Jorge, Francisco E.
2013-01-01
A hierarchical sequence of all-electron segmented contracted basis sets of double, triple and quadruple zeta valence qualities plus polarization functions augmented with diffuse functions for the atoms from H to Ar was constructed. A systematic study of the basis sets required to obtain reliable and accurate values of static dipole polarizabilities of lithium and sodium clusters (n = 2, 4, 6 and 8) at their optimized equilibrium geometries is reported. Three methods are examined: Hartree-Fock (HF), second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT). By direct calculations or by fitting the directly calculated values through an extrapolation scheme, estimates of the HF, MP2 and DFT complete basis set limits were obtained. Comparison with experimental and theoretical data reported previously in the literature is made. (author)
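The record above does not name its extrapolation scheme. A common choice for correlation energies, shown here only as a hedged sketch, is the two-point inverse-cubic formula of Helgaker and coworkers, E_CBS = (X³E_X − Y³E_Y)/(X³ − Y³) for cardinal numbers X > Y; the energies below are made-up illustrative values, not data from the paper.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point inverse-cubic (X^-3) extrapolation of correlation
    energies to the complete-basis-set (CBS) limit, assuming
    E(X) = E_CBS + A / X^3 for cardinal numbers x > y."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical MP2 correlation energies (hartree) for TZ (X=3) and QZ (X=4):
e_tz, e_qz = -0.35210, -0.36425
e_cbs = cbs_two_point(e_qz, e_tz, 4, 3)  # lies below the QZ value, as expected
```

Note that the extrapolated energy always lies beyond the larger-basis value in the direction of convergence, which makes the formula a cheap check on whether a basis set sequence is behaving systematically.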
On the use of Locally Dense Basis Sets in the Calculation of EPR Hyperfine Couplings
DEFF Research Database (Denmark)
Hedegård, Erik D.; Sauer, Stephan P. A.; Milhøj, Birgitte O.
2013-01-01
The usage of locally dense basis sets in the calculation of Electron Paramagnetic Resonance (EPR) hyperfine coupling constants is investigated at the level of Density Functional Theory (DFT) for two model systems of biologically important transition metal complexes: one for the active site in the c...
Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni
Shuttleworth, I.G.
2015-01-01
The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔE_PAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. Binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.
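The BSSE these strategies target is conventionally quantified with the Boys-Bernardi counterpoise scheme, in which each fragment is re-evaluated in the full dimer basis (ghost functions on the partner). A minimal sketch, with all energies invented purely for illustration:

```python
def binding_energies(e_dimer, e_a, e_b, e_a_ghost, e_b_ghost):
    """Return (raw, counterpoise-corrected) binding energies.
    e_a, e_b: fragment energies in their own basis sets;
    e_a_ghost, e_b_ghost: fragment energies in the full dimer basis
    (ghost functions on the partner site) per Boys-Bernardi."""
    raw = e_dimer - e_a - e_b
    corrected = e_dimer - e_a_ghost - e_b_ghost
    return raw, corrected

# Hypothetical SCF energies in hartree (illustrative, not from the paper):
raw, cp = binding_energies(-152.100, -76.040, -76.045, -76.043, -76.047)
bsse = cp - raw  # positive: BSSE artificially strengthens the raw binding
```

Since ghost functions can only lower each fragment energy, the corrected binding energy is always weaker (less negative) than the raw supermolecular value.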
Magnetic anisotropy basis sets for epitaxial (110) and (111) REFe2 nanofilms
International Nuclear Information System (INIS)
Bowden, G J; Martin, K N; Fox, A; Rainford, B D; Groot, P A J de
2008-01-01
Magnetic anisotropy basis sets for the cubic Laves phase rare earth intermetallic REFe2 compounds are discussed in some detail. Such compounds can be either free-standing, or thin films grown in either (110) or (111) mode using molecular beam epitaxy. For the latter, it is useful to rotate to a new coordinate system where the z-axis coincides with the growth axis of the film. In this paper, three symmetry-adapted basis sets are given, for multipole moments up to n = 12. These sets can be used for free-standing compounds and for (110) and (111) epitaxial films. In addition, the distortion of REFe2 films, grown on sapphire substrates, is also considered. The distortions are different for the (110) and (111) films. Strain-induced harmonic sets are given for both specific and general distortions. Finally, some predictions are made concerning the preferred direction of easy magnetization in (111) molecular-beam-epitaxy-grown REFe2 films.
Hyperspectral band selection based on consistency-measure of neighborhood rough set theory
International Nuclear Information System (INIS)
Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen
2016-01-01
Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)
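A forward greedy search of the kind described above can be sketched generically. The toy `significance` function below (simple set coverage) merely stands in for the neighborhood consistency-measure of the paper and is not taken from it:

```python
def forward_greedy_select(bands, significance, k):
    """Forward greedy search: at each step, add the band whose inclusion
    maximizes the subset evaluation function; stop at k bands or when no
    candidate improves the measure (a crude stand-in for post-pruning)."""
    selected, remaining = [], list(bands)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda b: significance(selected + [b]))
        if significance(selected + [best]) <= significance(selected):
            break  # no remaining band improves the measure
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy evaluation: number of classes "covered" by the chosen bands.
coverage = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1}}
def significance(subset):
    return len(set().union(*(coverage[b] for b in subset))) if subset else 0

chosen = forward_greedy_select([0, 1, 2, 3], significance, k=3)
```

Greedy selection is not guaranteed optimal, which is why the paper follows it with a post-pruning step to trim redundant bands.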
Accurate Conformational Energy Differences of Carbohydrates: A Complete Basis Set Extrapolation
Czech Academy of Sciences Publication Activity Database
Csonka, G. I.; Kaminský, Jakub
2011-01-01
Roč. 7, č. 4 (2011), s. 988-997 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords : MP2 * basis set extrapolation * saccharides Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.215, year: 2011
Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set
Paschoal, D.; Fonseca Guerra, C.; de Oliveira, M.A.L.; Ramalho, T.C.; Dos Santos, H.F.
2016-01-01
Predicting NMR properties is a valuable tool to assist the experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable to calculate the
Incomplete basis-set problem. V. Application of CIBS to many-electron systems
International Nuclear Information System (INIS)
McDowell, K.; Lewis, L.
1982-01-01
Five versions of CIBS (corrections to an incomplete basis set) theory are used to compute first and second corrections to Roothaan-Hartree-Fock energies via expansion of a given basis set. Version one is an order-by-order perturbation approximation which neglects virtual orbitals; version two is a full CIBS expansion which neglects virtual orbitals; version three is an order-by-order perturbation approximation which includes virtual orbitals; version four is a full CIBS expansion which includes orthogonalization to virtual orbitals but neglects virtual-orbital coupling terms; and version five is a full CIBS expansion with inclusion of coupling to virtual orbitals. Results are presented for the atomic and molecular systems He, Be, H2, LiH, Li2, and H2O. Version five is shown to produce a corrected Hartree-Fock energy which is essentially in agreement with a comparable SCF result using the same expanded basis set. Versions one through four yield varying degrees of agreement; however, it is evident that the effect of the virtual orbitals must be included. From the results, CIBS version five is shown to be a viable quantitative procedure which can be used to expand or to study the use of basis sets in quantum chemistry.
Energy Technology Data Exchange (ETDEWEB)
Borges, A.; Solomon, G. C. [Department of Chemistry and Nano-Science Center, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø (Denmark)
2016-05-21
Single molecule conductance measurements are often interpreted through computational modeling, but the complexity of these calculations makes it difficult to directly link them to simpler concepts and models. Previous work has attempted to make this connection using maximally localized Wannier functions and symmetry adapted basis sets, but their use can be ambiguous and non-trivial. Starting from a Hamiltonian and overlap matrix written in a hydrogen-like basis set, we demonstrate a simple approach to obtain a new basis set that is chemically more intuitive and allows interpretation in terms of simple concepts and models. By diagonalizing the Hamiltonians corresponding to each atom in the molecule, we obtain a basis set that can be partitioned into pseudo-σ and pseudo-π, allows partitioning of the Landauer-Büttiker transmission, and allows the creation of simple Hückel models that reproduce the key features of the full calculation. This method provides a link between complex calculations and simple concepts and models to provide intuition or extract parameters for more complex model systems.
Nagata, Takeshi; Iwata, Suehiro
2004-02-22
The locally projected self-consistent field molecular orbital method for molecular interaction (LP SCF MI) is reformulated for multifragment systems. For the perturbation expansion, two types of local excited orbitals are defined; one is fully local in the basis set on a fragment, and the other has to be partially delocalized to the basis sets on the other fragments. The perturbation expansion calculations only within single excitations (LP SE MP2) are tested for the water dimer, the hydrogen fluoride dimer, and collinear symmetric Ar-M+-Ar (M = Na and K). The calculated binding energies of LP SE MP2 are all close to the corresponding counterpoise-corrected SCF binding energy. By adding the single excitations, the deficiency in LP SCF MI is thus removed. The results suggest that the exclusion of the charge-transfer effects in LP SCF MI might indeed be the cause of the underestimation of the binding energy. (c) 2004 American Institute of Physics.
Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan
2013-09-26
We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.
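The D3 scheme referenced above is, at its core, an atom-pairwise −C6/r⁶ sum damped at short range. The sketch below is a toy version of that idea only; the damping form and every numerical parameter are invented for illustration and are not Grimme's published D3 values:

```python
import math

def pairwise_dispersion(coords, c6, s6=1.0, r0=3.0):
    """Toy atom-pairwise dispersion correction:
    E = -s6 * sum_{i<j} C6_ij / (r_ij^6 + r0^6),
    a rational damping that stays finite as r -> 0.
    All parameters here are illustrative, not real D3 data."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = math.dist(coords[i], coords[j])
            e -= s6 * c6[i][j] / (r**6 + r0**6)
    return e

# Two "atoms" with a made-up C6 of 20 a.u., at 4 and 8 bohr separation:
c6 = [[0.0, 20.0], [20.0, 0.0]]
e_near = pairwise_dispersion([(0, 0, 0), (0, 0, 4.0)], c6)
e_far = pairwise_dispersion([(0, 0, 0), (0, 0, 8.0)], c6)
```

The correction is always attractive and decays rapidly with separation, which is why it matters most for the van der Waals crystals in the X23 benchmark set.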
An editor for the maintenance and use of a bank of contracted Gaussian basis set functions
International Nuclear Information System (INIS)
Taurian, O.E.
1984-01-01
A bank of basis sets to be used in ab-initio calculations has been created. The bases are sets of contracted Gaussian type orbitals to be used as input to any molecular integral package. In this communication we shall describe the organization of the bank and a portable editor program which was designed for its maintenance and use. This program is operated by commands and it may be used to obtain any kind of information about the bases in the bank as well as to produce output to be directly used as input for different integral programs. The editor may also be used to format basis sets in the conventional way utilized in publications, as well as to generate a complete, or partial, manual of the contents of the bank if so desired. (orig.)
Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2007-12-01
The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDSs into archetypes, and outline typical problems with CDSs and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDSs as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In the case of the Paediatric Oncology CDS (which consists of 260 data items) this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDSs as archetypes, we identified nine problems with CDSs that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for the ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDSs.
Kolmann, Stephen J.; Jordan, Meredith J. T.
2010-02-01
One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol⁻¹ at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol⁻¹ dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol⁻¹ lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol⁻¹ lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol⁻¹ thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.
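The harmonic ZPEs used as the reference in this comparison follow directly from the harmonic wavenumbers, ZPE = ½hc Σ ν̃ᵢ. A small sketch; the wavenumbers below are hypothetical stand-ins, not the actual SSSH values:

```python
H = 6.62607015e-34   # Planck constant, J s (CODATA)
C = 2.99792458e10    # speed of light, cm/s
NA = 6.02214076e23   # Avogadro constant, mol^-1

def harmonic_zpe_kj_mol(freqs_cm1):
    """Harmonic zero-point energy, ZPE = (1/2) h c sum(nu_i),
    taking harmonic wavenumbers nu_i in cm^-1 and returning kJ/mol."""
    return 0.5 * H * C * sum(freqs_cm1) * NA / 1000.0

# Hypothetical wavenumbers for a 4-atom radical's six normal modes:
zpe = harmonic_zpe_kj_mol([2550, 880, 650, 420, 250, 200])  # roughly 30 kJ/mol
```

One useful conversion to remember: a single mode at 1000 cm⁻¹ contributes about 5.98 kJ/mol of ZPE, so the ~4 kJ/mol correlation-treatment spread quoted above corresponds to several hundred cm⁻¹ of total frequency shift.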
First-principle modelling of forsterite surface properties: Accuracy of methods and basis sets.
Demichelis, Raffaella; Bruno, Marco; Massaro, Francesco R; Prencipe, Mauro; De La Pierre, Marco; Nestola, Fabrizio
2015-07-15
The seven main crystal surfaces of forsterite (Mg2 SiO4 ) were modeled using various Gaussian-type basis sets, and several formulations for the exchange-correlation functional within the density functional theory (DFT). The recently developed pob-TZVP basis set provides the best results for all properties that are strongly dependent on the accuracy of the wavefunction. Convergence on the structure and on the basis set superposition error-corrected surface energy can be reached also with poorer basis sets. The effect of adopting different DFT functionals was assessed. All functionals give the same stability order for the various surfaces. Surfaces do not exhibit any major structural differences when optimized with different functionals, except for higher energy orientations where major rearrangements occur around the Mg sites at the surface or subsurface. When dispersions are not accounted for, all functionals provide similar surface energies. The inclusion of empirical dispersions raises the energy of all surfaces by a nearly systematic value proportional to the scaling factor s of the dispersion formulation. An estimation for the surface energy is provided through adopting C6 coefficients that are more suitable than the standard ones to describe O-O interactions in minerals. A 2 × 2 supercell of the most stable surface (010) was optimized. No surface reconstruction was observed. The resulting structure and surface energy show no difference with respect to those obtained when using the primitive cell. This result validates the (010) surface model here adopted, that will serve as a reference for future studies on adsorption and reactivity of water and carbon dioxide at this interface. © 2015 Wiley Periodicals, Inc.
Pseudo-atomic orbitals as basis sets for the O(N) DFT code CONQUEST
Energy Technology Data Exchange (ETDEWEB)
Torralba, A S; Brazdova, V; Gillan, M J; Bowler, D R [Materials Simulation Laboratory, UCL, Gower Street, London WC1E 6BT (United Kingdom); Todorovic, M; Miyazaki, T [National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047 (Japan); Choudhury, R [London Centre for Nanotechnology, UCL, 17-19 Gordon Street, London WC1H 0AH (United Kingdom)], E-mail: david.bowler@ucl.ac.uk
2008-07-23
Various aspects of the implementation of pseudo-atomic orbitals (PAOs) as basis functions for the linear scaling CONQUEST code are presented. Preliminary results for the assignment of a large set of PAOs to a smaller space of support functions are encouraging, and an important related proof on the necessary symmetry of the support functions is shown. Details of the generation and integration schemes for the PAOs are also given.
Current-voltage curves for molecular junctions computed using all-electron basis sets
International Nuclear Information System (INIS)
Bauschlicher, Charles W.; Lawson, John W.
2006-01-01
We present current-voltage (I-V) curves computed using all-electron basis sets on the conducting molecule. The all-electron results are very similar to previous results obtained using effective core potentials (ECP). A hybrid integration scheme is used that keeps the all-electron calculations cost competitive with respect to the ECP calculations. By neglecting the coupling of states to the contacts below a fixed energy cutoff, the density matrix for the core electrons can be evaluated analytically. The full density matrix is formed by adding this core contribution to the valence part that is evaluated numerically. Expanding the definition of the core in the all-electron calculations significantly reduces the computational effort and, up to biases of about 2 V, the results are very similar to those obtained using more rigorous approaches. The convergence of the I-V curves and transmission coefficients with respect to basis set is discussed. The addition of diffuse functions is critical in approaching basis set completeness.
Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.
Götz, Andreas W; Kollmar, Christian; Hess, Bernd A
2005-09-01
We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li-F, and Na-Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Vesko Drašković
2008-08-01
According to the World Health Organization's data, obesity is one of the main risk factors for human health, especially in the so-called "mature age", that is, in the forties and fifties of life. There are many causes of obesity; the most common are inadequate or excessive nutrition, low-quality food rich in fats and highly caloric sweeteners, and insufficient physical activity (hypokinesia), but also the technical and technological development of the modern world (TV, cell phones, elevators, cars, etc.). The objective of this research is to define the obesity of adults living in urban settings through BMI (body mass index) and, on the basis of these findings, to create the basis for different applications of the recreational sports programme.
Langhoff, P. W.; Winstead, C. L.
Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.
Gaspari, Roberto; Rapallo, Arnaldo
2008-06-28
In this work a new method is proposed for the choice of basis functions in diffusion theory (DT) calculations. This method, named hybrid basis approach (HBA), combines the two previously adopted long time sorting procedure (LTSP) and maximum correlation approximation (MCA) techniques; the first emphasizing contributions from the long time dynamics, the latter being based on the local correlations along the chain. In order to fulfill this task, the HBA procedure employs a first order basis set corresponding to a high order MCA one and generates upper order approximations according to LTSP. A test of the method is made first on a melt of cis-1,4-polyisoprene decamers where HBA and LTSP are compared in terms of efficiency. Both convergence properties and numerical stability are improved by the use of the HBA basis set whose performance is evaluated on local dynamics, by computing the correlation times of selected bond vectors along the chain, and on global ones, through the eigenvalues of the diffusion operator L. Further use of the DT with a HBA basis set has been made on a 71-mer of syndiotactic trans-1,2-polypentadiene in toluene solution, whose dynamical properties have been computed with a high order calculation and compared to the "numerical experiment" provided by the molecular dynamics (MD) simulation in explicit solvent. The necessary equilibrium averages have been obtained by a vacuum trajectory of the chain where solvent effects on conformational properties have been reproduced with a proper screening of the nonbonded interactions, corresponding to a definite value of the mean radius of gyration of the polymer in vacuum. Results show a very good agreement between DT calculations and the MD numerical experiment. This suggests a further use of DT methods with the necessary input quantities obtained by the only knowledge of some experimental values, i.e., the mean radius of gyration of the chain and the viscosity of the solution, and by a suitable vacuum
Symmetry-adapted basis sets automatic generation for problems in chemistry and physics
Avery, John Scales; Avery, James Emil
2012-01-01
In theoretical physics, theoretical chemistry and engineering, one often wishes to solve partial differential equations subject to a set of boundary conditions. This gives rise to eigenvalue problems of which some solutions may be very difficult to find. For example, the problem of finding eigenfunctions and eigenvalues for the Hamiltonian of a many-particle system is usually so difficult that it requires approximate methods, the most common of which is expansion of the eigenfunctions in terms of basis functions that obey the boundary conditions of the problem. The computational effort needed
Reformulating classical and quantum mechanics in terms of a unified set of consistency conditions
International Nuclear Information System (INIS)
Bordley, R.F.
1983-01-01
This paper imposes consistency conditions on the path of a particle and shows that they imply Hamilton's principle in classical contexts and Schroedinger's equation in quantum mechanical contexts. Thus this paper provides a common, intuitive foundation for classical and quantum mechanics. It also provides a very new perspective on quantum mechanics. (author)
Simulating the oxygen content of ambient organic aerosol with the 2D volatility basis set
Directory of Open Access Journals (Sweden)
B. N. Murphy
2011-08-01
A module predicting the oxidation state of organic aerosol (OA) has been developed using the two-dimensional volatility basis set (2D-VBS) framework. This model is an extension of the 1D-VBS framework and tracks the saturation concentration and oxygen content of organic species during their atmospheric lifetime. The host model, a one-dimensional Lagrangian transport model, is used to simulate air parcels arriving at Finokalia, Greece during the Finokalia Aerosol Measurement Experiment in May 2008 (FAME-08). Extensive observations were collected during this campaign using an aerosol mass spectrometer (AMS) and a thermodenuder to determine the chemical composition and volatility, respectively, of the ambient OA. Although there are several uncertain model parameters, the consistently high oxygen content of OA measured during FAME-08 (O:C = 0.8) can help constrain these parameters and elucidate OA formation and aging processes that are necessary for achieving the high degree of oxygenation observed. The base-case model reproduces observed OA mass concentrations (measured mean = 3.1 μg m^{−3}, predicted mean = 3.3 μg m^{−3}) and O:C (predicted O:C = 0.78) accurately. A suite of sensitivity studies explores uncertainties due to (1) the anthropogenic secondary OA (SOA) aging rate constant, (2) assumed enthalpies of vaporization, (3) the volatility change and number of oxygen atoms added for each generation of aging, (4) heterogeneous chemistry, (5) the oxidation state of the first generation of compounds formed from SOA precursor oxidation, and (6) biogenic SOA aging. Perturbations in most of these parameters do impact the ability of the model to predict O:C well throughout the simulation period. By comparing measurements of the O:C from FAME-08, several sensitivity cases including a high oxygenation case, a low oxygenation case, and a biogenic SOA aging case are found to unreasonably depict OA aging, keeping in mind that this study does not consider
MRD-CI potential surfaces using balanced basis sets. IV. The H2 molecule and the H3 surface
International Nuclear Information System (INIS)
Wright, J.S.; Kruus, E.
1986-01-01
The utility of midbond functions in molecular calculations was tested in two cases where the correct results are known: the H2 potential curve and the collinear H3 potential surface. For H2, a variety of basis sets both with and without bond functions was compared to the exact nonrelativistic potential curve of Kolos and Wolniewicz [J. Chem. Phys. 43, 2429 (1965)]. It was found that the optimally balanced basis sets at two levels of quality were the double zeta single polarization plus sp bond function basis (BF1) and the triple zeta double polarization plus two sets of sp bond functions basis (BF2). These gave bond dissociation energies D_e = 4.7341 and 4.7368 eV, respectively (expt. 4.7477 eV). Four basis sets were tested for basis set superposition errors, which were found to be small relative to basis set incompleteness and therefore did not affect any conclusions regarding basis set balance. Basis sets BF1 and BF2 were used to construct potential surfaces for collinear H3, along with the corresponding basis sets DZ*P and TZ*PP which contain no bond functions. Barrier heights of 12.52, 10.37, 10.06, and 9.96 kcal/mol were obtained for basis sets DZ*P, TZ*PP, BF1, and BF2, respectively, compared to an estimated limiting value of 9.60 kcal/mol. Difference maps, force constants, and relative rms deviations show that the bond functions improve the surface shape as well as the barrier height
Polarization functions for the modified m6-31G basis sets for atoms Ga through Kr.
Mitin, Alexander V
2013-09-05
The 2df polarization functions for the modified m6-31G basis sets of the third-row atoms Ga through Kr (Int J Quantum Chem, 2007, 107, 3028; Int J Quantum Chem, 2009, 109, 1158) are proposed. The performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets were examined in molecular calculations carried out with the density functional theory (DFT) method using the B3LYP hybrid functional, second-order Møller-Plesset perturbation theory (MP2), and the quadratic configuration interaction method with single and double substitutions, and were compared with those for the known 6-31G basis sets as well as with the other similar 641 and 6-311G basis sets with and without polarization functions. The obtained results show that the performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets are better than those of the known 6-31G, 6-31G(d,p), and 6-31G(2df,p) basis sets. These improvements are mainly reached due to better approximations of the different electrons belonging to the different atomic shells in the modified basis sets. Applicability of the modified basis sets in thermochemical calculations is also discussed. © 2013 Wiley Periodicals, Inc.
Relativistic double-zeta, triple-zeta, and quadruple-zeta basis sets for the lanthanides La–Lu
Dyall, K.G.; Gomes, A.S.P.; Visscher, L.
2010-01-01
Relativistic basis sets of double-zeta, triple-zeta, and quadruple-zeta quality have been optimized for the lanthanide elements La-Lu. The basis sets include SCF exponents for the occupied spinors and for the 6p shell, exponents of correlating functions for the valence shells (4f, 5d and 6s) and the
2010-10-01
... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE... BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a fee schedule basis...
Brouwer, A.; Hoogendoorn, M.; Naarding, E.
2015-01-01
In this paper we evaluate the International Accounting Standards Board’s (IASB) efforts, in a discussion paper (DP) of 2013, to develop a new conceptual framework (CF) in the light of its stated ambition to establish a robust and consistent basis for future standard setting, thereby guiding standard
Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set
Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David
2018-05-01
The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analyses require running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
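The "basis set expansion" step described above can be sketched in a few lines: reduce an ensemble of time-series model outputs to a handful of principal-component scores, which a cheap meta-model can then map to Sobol' indices. The two-parameter "landslide" response below is entirely hypothetical; only the PCA reduction is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: 40 model runs, each a 200-point displacement time series whose
# shape depends on two scalar parameters (all values hypothetical).
n_runs, n_t = 40, 200
t = np.linspace(0.0, 1.0, n_t)
params = rng.uniform(0.5, 1.5, size=(n_runs, 2))
Y = params[:, :1] * t + params[:, 1:2] * np.sin(2 * np.pi * t)  # (n_runs, n_t)

# Basis set expansion: PCA via SVD of the centered output matrix.
Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
explained = s**2 / np.sum(s**2)
n_comp = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)

# Each run is now summarized by a few scalar scores; a cheap meta-model of
# scores vs. parameters would then feed the Sobol' index estimation instead
# of the full time series.
scores = (Y - Y_mean) @ Vt[:n_comp].T
print(n_comp, f"components capture {explained[:n_comp].sum():.3f} of the variance")
```

Since the toy response is built from exactly two temporal modes, two components capture essentially all of the output variance.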
Energy Technology Data Exchange (ETDEWEB)
Hsu, Po Jen; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw [Complex Liquids Laboratory, Department of Physics, National Central University, Chungli 320, Taiwan and Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taipei 115, Taiwan (China); Rapallo, Arnaldo [Istituto per lo Studio delle Macromolecole (ISMAC) Consiglio Nazionale delle Ricerche (CNR), via E. Bassini 15, C.A.P 20133 Milano (Italy)
2014-03-14
Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit
Scudese, Estevão; Willardson, Jeffrey M; Simão, Roberto; Senna, Gilmar; de Salles, Belmiro F; Miranda, Humberto
2015-11-01
The purpose of this study was to compare different rest intervals between sets on repetition consistency and ratings of perceived exertion (RPE) during consecutive bench press sets with an absolute 3RM (3 repetition maximum) load. Sixteen trained men (23.75 ± 4.21 years; 74.63 ± 5.36 kg; 175 ± 4.64 cm; bench press relative strength: 1.44 ± 0.19 kg/kg of body mass) attended 4 randomly ordered sessions during which 5 consecutive sets of the bench press were performed with an absolute 3RM load and 1, 2, 3, or 5 minutes of rest interval between sets. The results indicated that significantly greater bench press repetitions were completed with 2, 3, and 5 minutes vs. 1-minute rest between sets (p ≤ 0.05); no significant differences were noted between the 2, 3, and 5 minutes rest conditions. For the 1-minute rest condition, performance reductions (relative to the first set) were observed commencing with the second set; whereas for the other conditions (2, 3, and 5 minutes rest), performance reductions were not evident until the third and fourth sets. The RPE values before each of the successive sets were significantly greater, commencing with the second set for the 1-minute vs. the 3 and 5 minutes rest conditions. Significant increases were also evident in RPE immediately after each set between the 1 and 5 minutes rest conditions from the second through fifth sets. These findings indicate that when utilizing an absolute 3RM load for the bench press, practitioners may prescribe a time-efficient minimum of 2 minutes rest between sets without significant impairments in repetition performance. However, lower perceived exertion levels may necessitate prescription of a minimum of 3 minutes rest between sets.
Numerical Aspects of Atomic Physics: Helium Basis Sets and Matrix Diagonalization
Jentschura, Ulrich; Noble, Jonathan
2014-03-01
We present a matrix diagonalization algorithm for complex symmetric matrices, which can be used to determine the resonance energies of auto-ionizing states of comparatively simple quantum many-body systems such as helium. The algorithm is based on multi-precision arithmetic and proceeds via a tridiagonalization of the complex symmetric (not necessarily Hermitian) input matrix using generalized Householder transformations. Example calculations involving so-called PT-symmetric quantum systems lead to reference values which pertain to the imaginary cubic perturbation (the imaginary cubic anharmonic oscillator). We then proceed to novel basis sets for the helium atom and present results for Bethe logarithms in hydrogen and helium, obtained using the enhanced numerical techniques. Some intricacies of "canned" algorithms such as those used in LAPACK will be discussed. Our algorithm, for complex symmetric matrices such as those describing cubic resonances after complex scaling, is faster than LAPACK's built-in routines for specific classes of input matrices. It also offers flexibility in terms of the calculation of the so-called implicit shift, which is used to "pivot" the system toward convergence to diagonal form. We conclude with a wider overview.
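As a point of reference for the kind of input the abstract describes, the snippet below builds a small complex symmetric (non-Hermitian) matrix and obtains its complex eigenvalues with a general-purpose LAPACK-backed solver; the matrix is random and illustrative only, and the specialized Householder tridiagonalization of the paper is not reproduced here.

```python
import numpy as np

# Hypothetical 4x4 complex symmetric (NOT Hermitian) matrix, the kind of
# input targeted by the Householder-based tridiagonalization described above.
rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (B + B.T) / 2  # complex symmetric: H == H.T, but H != H.conj().T

# A general-purpose solver (LAPACK zgeev under the hood) recovers the complex
# eigenvalues; the specialized algorithm in the abstract instead exploits the
# symmetry and offers multi-precision control.
evals = np.linalg.eig(H)[0]
print(np.sort_complex(evals))
```

Because H is not Hermitian, its eigenvalues are genuinely complex; in a complex-scaling calculation their imaginary parts would encode resonance widths.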
On the effects of basis set truncation and electron correlation in conformers of 2-hydroxy-acetamide
Szarecka, A.; Day, G.; Grout, P. J.; Wilson, S.
Ab initio quantum chemical calculations have been used to study the differences in energy between two gas-phase conformers of the 2-hydroxy-acetamide molecule that possess intramolecular hydrogen bonding. In particular, rotation around the central C-C bond has been considered as a factor determining the structure of the hydrogen bond and stabilization of the conformer. Energy calculations include full geometry optimization using both the restricted matrix Hartree-Fock model and second-order many-body perturbation theory with a number of commonly used basis sets. The basis sets employed ranged from the minimal STO-3G set to 'split-valence' sets up to 6-31G. The effects of polarization functions were also studied. The results display a strong basis set dependence.
The Raman Spectrum of the Squarate (C4O4-2) Anion: An Ab Initio Basis Set Dependence Study
Directory of Open Access Journals (Sweden)
Miranda Sandro G. de
2002-01-01
The Raman excitation profile of the squarate anion, C4O4-2, was calculated using ab initio methods at the Hartree-Fock level using Linear Response Theory (LRT) for six excitation frequencies: 632.5, 514.5, 488.0, 457.9, 363.8 and 337.1 nm. Five basis sets (6-31G*, 6-31+G*, cc-pVDZ, aug-cc-pVDZ and Sadlej's polarizability basis set) were investigated aiming to evaluate the performance of the 6-31G* set for numerical convergence and computational cost in relation to the larger basis sets. All basis sets reproduce the main spectroscopic features of the Raman spectrum of this anion for the excitation interval investigated. The 6-31G* basis set presented, on average, the same accuracy of numerical results as the larger sets but at a fraction of the computational cost, showing that it is suitable for the theoretical investigation of the squarate dianion and its complexes and derivatives.
Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland
2009-04-01
Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
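The numerical kernel behind this procedure, a pivoted (incomplete) Cholesky decomposition that stops once the residual diagonal drops below a threshold, can be sketched as follows; the matrix here is a random low-rank toy, not an actual two-electron integral matrix.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Incomplete Cholesky with full pivoting: stops once the largest
    remaining diagonal element of the residual falls below tol, so the
    number of columns of L equals the numerical rank of M."""
    d = np.diag(M).astype(float).copy()
    L = np.zeros((M.shape[0], 0))
    piv = []
    while d.max() > tol:
        p = int(np.argmax(d))          # pivot on the largest residual diagonal
        piv.append(p)
        col = (M[:, p] - L @ L[p]) / np.sqrt(d[p])
        L = np.hstack([L, col[:, None]])
        d -= col**2                     # update residual diagonal
    return L, piv

# Random rank-3 SPD toy matrix standing in for the atomic two-electron
# integral matrix of the paper's procedure.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))
M = A @ A.T
L, piv = pivoted_cholesky(M)
print(len(piv), "pivots; max reconstruction error:", np.max(np.abs(M - L @ L.T)))
```

For a rank-3 matrix the loop terminates after exactly three pivots, and M ≈ L Lᵀ to machine precision; in the paper's context the pivot indices select which functions enter the auxiliary basis.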
Basis set effects on the energy and hardness profiles of the ...
Indian Academy of Sciences (India)
Unknown
Keywords: maximum hardness principle (MHP); spurious stationary points; hydrogen fluoride dimer.
International Nuclear Information System (INIS)
Kollmar, Christian; Neese, Frank
2014-01-01
The role of the static Kohn-Sham (KS) response function, describing the response of the electron density to a change of the local KS potential, is discussed in both the theory of the optimized effective potential (OEP) and the so-called inverse Kohn-Sham problem, involving the task of finding the local KS potential for a given electron density. In a general discussion of the integral equation to be solved in both cases, it is argued that a unique solution of this equation can be found even in the case of finite atomic orbital basis sets. It is shown how a matrix representation of the response function can be obtained if the exchange-correlation potential is expanded in terms of a Schmidt-orthogonalized basis comprising products of occupied and virtual orbitals. The viability of this approach in both OEP theory and the inverse KS problem is illustrated by numerical examples.
International Nuclear Information System (INIS)
Grofulović, Marija; Alves, Luís L; Guerra, Vasco
2016-01-01
This work proposes a complete and consistent set of cross sections for electron collisions with carbon dioxide (CO2) molecules to be published in the IST-Lisbon database with LXCat. The set is validated from the comparison between swarm parameters calculated using a two-term Boltzmann solver and the available experimental data. The importance of superelastic collisions with CO2(010) molecules at low values of the reduced electric field is discussed. Due to significant uncertainties, there are ongoing debates regarding the deconvolution of cross sections that describe generic energy losses at specific energy thresholds into cross sections that describe individual processes. An important example of these uncertainties is the dissociation of CO2, for which the total electron impact dissociation cross section has not yet been unambiguously identified. The available dissociation cross sections are evaluated and discussed, and a strategy to obtain electron-impact dissociation rate coefficients is suggested. (paper)
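The final step mentioned in the abstract, turning a dissociation cross section and an electron energy distribution into a rate coefficient, follows the standard two-term convention k = sqrt(2e/m_e) ∫ ε σ(ε) f0(ε) dε with the EEDF normalized so that ∫ √ε f0(ε) dε = 1. The sketch below uses a Maxwellian EEDF and an entirely made-up threshold cross section, not the IST-Lisbon data.

```python
import numpy as np

def trap(y, x):
    """Plain trapezoidal rule (avoids NumPy trapz/trapezoid naming changes)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

E = np.linspace(0.0, 50.0, 2000)          # electron energy grid [eV]
kT = 2.0                                   # assumed Maxwellian temperature [eV]
f0 = np.exp(-E / kT)                       # isotropic EEDF, shape only
f0 /= trap(np.sqrt(E) * f0, E)             # normalize: integral sqrt(E) f0 dE = 1

# Made-up dissociation cross section [m^2] with a 7 eV threshold, standing
# in for one deconvoluted channel of the CO2 set.
sigma = np.where(E > 7.0, 1e-20 * (1.0 - 7.0 / np.maximum(E, 7.0)), 0.0)

gamma = np.sqrt(2 * 1.602e-19 / 9.109e-31)  # sqrt(2e/m_e): eV -> speed factor
k = gamma * trap(E * sigma * f0, E)         # rate coefficient [m^3 s^-1]
print(f"k = {k:.3e} m^3/s")
```

In a swarm-validation workflow f0 would come from the two-term Boltzmann solver at a given reduced field rather than being assumed Maxwellian.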
International Nuclear Information System (INIS)
Caravaca, M A; Casali, R A
2005-01-01
The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of the P21/c, Pbca, Pnma, Fm3m, P42nmc and Pa3 phases of HfO2. Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and using a minimal basis set (single zeta, SZ) improve calculated phase transition pressures with respect to the double-zeta basis set and LDA (DZ-LDA), and show similar accuracy to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state-of-the-art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter in the range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found a non-linear behaviour of the lattice parameters with pressure in the P21/c phase, showing a discontinuity of the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give pressure transitions of 3.3 and 10.8 GPa for P21/c → Pbca and Pbca → Pnma, respectively, in accordance with different high pressure experimental values
International Nuclear Information System (INIS)
Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon
2015-01-01
The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF− and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN+, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects.
International Nuclear Information System (INIS)
Varandas, A.J.C.
1980-01-01
A suggestion is made for using the zeroth-order exchange term, at the one-exchange level, in the perturbation development of the interaction energy as a criterion for optimizing the atomic basis sets in interatomic force calculations. The approach is illustrated for the case of two helium atoms. (orig.)
Czech Academy of Sciences Publication Activity Database
Zahradník, Rudolf; Šroubková, Libuše
2005-01-01
Vol. 104, No. 1 (2005), pp. 52-63, ISSN 0020-7608. Institutional research plan: CEZ:AV0Z40400503. Keywords: intermolecular complexes; van der Waals species; ab initio calculations; complete basis set values; estimates. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 1.192, year: 2005
Varandas, António J. C.
2018-04-01
Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
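A common concrete instance of the extrapolation schemes discussed here is the two-point inverse-cube formula for the correlation energy. The sketch below uses the textbook Helgaker-type X⁻³ form as an assumption; it is not a reproduction of any specific protocol from the review:

```python
def cbs_extrapolate(e_small, x_small, e_large, x_large):
    """Two-point extrapolation of correlation energies assuming
    E(X) = E_CBS + A / X**3, where X is the cardinal number of the
    correlation consistent basis set (D=2, T=3, Q=4, ...)."""
    x3, y3 = float(x_large) ** 3, float(x_small) ** 3
    # Eliminate A between the two equations and solve for E_CBS.
    return (x3 * e_large - y3 * e_small) / (x3 - y3)
```

For example, extrapolating CCSD(T)/cc-pVTZ and cc-pVQZ correlation energies would use `x_small=3, x_large=4`.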
Kitamura, Aya; Kawai, Yasuhiko
2015-01-01
Laminated alginate impression for edentulous patients is simple and time-efficient compared to the border molding technique. The purpose of this study was to examine the clinical applicability of the laminated alginate impression by measuring the effects of different water/powder (W/P) ratios, mixing methods, and bonding methods in the secondary impression of the alginate impression. Three W/P ratios: the manufacturer-designated mixing water amount (standard), 1.5-fold (1.5×), and 1.75-fold (1.75×) water amounts were mixed by manual and automatic mixing methods. Initial and complete setting time, permanent and elastic deformation, and consistency of the secondary impression were investigated (n=10). Additionally, tensile bond strength between the primary and secondary impression was measured for the following surface treatments: air blow only (A), surface baking (B), and alginate impression material bonding agent (ALGI-BOND: AB) (n=12). Initial setting times were significantly shortened with automatic mixing for all W/P ratios (p<0.05). Permanent deformation decreased and elastic deformation increased at high W/P, regardless of the mixing method. Elastic deformation was significantly reduced in 1.5× and 1.75× with automatic mixing (p<0.05). All of these properties were within JIS standards. For all W/P ratios, AB showed significantly higher bonding strength than A and B (p<0.01). The increased mixing water, 1.5× and 1.75×, kept setting times within JIS standards, suggesting applicability in the clinical setting. The use of an automatic mixing device decreased elastic strain and shortened the setting time. For the secondary impression, application of adhesive to the primary impression gives secure adhesion. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Using XCO2 retrievals for assessing the long-term consistency of NDACC/FTIR data sets
Barthlott, S.; Schneider, M.; Hase, F.; Wiegele, A.; Christner, E.; González, Y.; Blumenstock, T.; Dohe, S.; García, O. E.; Sepúlveda, E.; Strong, K.; Mendonca, J.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Mahieu, E.; Jones, N.; Griffith, D. W. T.; Velazco, V. A.; Smale, D.; Robinson, J.; Kivi, R.; Heikkinen, P.; Raffalski, U.
2015-03-01
Within the NDACC (Network for the Detection of Atmospheric Composition Change), more than 20 FTIR (Fourier-transform infrared) spectrometers, spread worldwide, provide long-term data records of many atmospheric trace gases. We present a method that uses measured and modelled XCO2 for assessing the consistency of these NDACC data records. Our XCO2 retrieval setup is kept simple so that it can easily be adopted for any NDACC/FTIR-like measurement made since the late 1950s. By a comparison to coincident TCCON (Total Carbon Column Observing Network) measurements, we empirically demonstrate the useful quality of this suggested NDACC XCO2 product (the empirically obtained scatter between TCCON and NDACC is about 4‰ for daily mean as well as monthly mean comparisons, and the bias is 25‰). Our XCO2 model is a simple regression model fitted to CarbonTracker results and the Mauna Loa CO2 in situ records. A comparison to TCCON data suggests a model uncertainty for monthly mean data of below 3‰. We apply the method to the NDACC/FTIR spectra that are used within the project MUSICA (multi-platform remote sensing of isotopologues for investigating the cycle of atmospheric water) and demonstrate that there is good consistency for this globally representative set of spectra measured since 1996: the scatter between the modelled and measured XCO2 on a yearly time scale is only 3‰.
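The general shape of such a "simple regression model" can be illustrated as an offset plus a secular trend plus an annual harmonic, fitted by least squares. This is a minimal sketch of the idea only; the authors' actual functional form and the coefficients fitted to CarbonTracker and Mauna Loa data are not reproduced here:

```python
import numpy as np

def fit_xco2_model(t_years, xco2):
    """Fit offset + linear trend + one annual harmonic by least squares.
    Illustrative stand-in for a simple XCO2 regression model; `t_years`
    is decimal time in years, `xco2` the column-averaged CO2 series."""
    t = np.asarray(t_years, dtype=float)
    design = np.column_stack([
        np.ones_like(t),            # offset
        t,                          # secular trend
        np.sin(2 * np.pi * t),      # annual cycle (sin component)
        np.cos(2 * np.pi * t),      # annual cycle (cos component)
    ])
    coef, *_ = np.linalg.lstsq(design, np.asarray(xco2, dtype=float), rcond=None)
    return coef, design @ coef      # fitted coefficients and model values
```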
Typed Sets as a Basis for Object-Oriented Database Schemas
Balsters, H.; de By, R.A.; Zicari, R.
The object-oriented data model TM is a language that is based on the formal theory of FM, a typed language with object-oriented features such as attributes and methods in the presence of subtyping. The general (typed) set constructs of FM allow one to deal with (database) constraints in TM. The
International Nuclear Information System (INIS)
Lazariev, A; Graveron-Demilly, D; Allouche, A-R; Aubert-Frécon, M; Fauvelle, F; Piotto, M; Elbayed, K; Namer, I-J; Van Ormondt, D
2011-01-01
High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
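The core operation described — sliding a simulated basis signal to maximize its normalized cross-correlation with the measured signal — can be sketched for integer-point shifts as follows. This is a toy illustration of the matching criterion only, not the QM-QUEST algorithm itself, which works on quantum-mechanically simulated NMR basis signals:

```python
import numpy as np

def best_shift(simulated, measured, max_shift):
    """Return the integer shift (in points) of `simulated` that maximizes
    the normalized cross-correlation with `measured`."""
    best_s, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(simulated, s)          # candidate chemical-shift offset
        corr = np.dot(shifted, measured) / (
            np.linalg.norm(shifted) * np.linalg.norm(measured))
        if corr > best_corr:
            best_s, best_corr = s, corr
    return best_s
```

In a real quantitation workflow the optimal shift would be applied to the basis-set signal before fitting, restoring the correct metabolite fingerprint.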
Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.
2011-11-01
High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
Many-Body Energy Decomposition with Basis Set Superposition Error Corrections.
Mayer, István; Bakó, Imre
2017-05-09
The problem of performing many-body decompositions of the energy is considered for the case when BSSE corrections are also performed. The two different schemes that have been proposed are shown to go back to the two different interpretations of the original Boys-Bernardi counterpoise correction scheme. It is argued that, from the physical point of view, the "hierarchical" scheme of Valiron and Mayer should be preferred over the scheme recently discussed by Ouyang and Bettens, because it permits the energy of the individual monomers and all the two-body, three-body, etc. energy components to be free of unphysical dependence on the arrangement (basis functions) of the other subsystems in the cluster.
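The bookkeeping of a many-body decomposition itself is simple arithmetic; the competing schemes differ in which basis each subsystem energy is computed in. The sketch below assumes every subsystem energy has been evaluated in the same full-cluster basis (a simplified counterpoise-style choice, not the hierarchical Valiron-Mayer basis assignment) and peels off one-body, two-body, three-body, ... components:

```python
from itertools import combinations

def many_body_components(energies):
    """energies: dict mapping frozenset of monomer labels -> total energy.
    All energies are assumed computed in the SAME (full-cluster) basis so
    that BSSE cancels in the differences. Returns the n-body component
    for every subsystem."""
    comps = {}
    for subset in sorted(energies, key=len):     # monomers first, then pairs, ...
        e = energies[subset]
        for k in range(1, len(subset)):
            for sub in combinations(subset, k):  # subtract all lower-order terms
                e -= comps[frozenset(sub)]
        comps[subset] = e
    return comps
```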
Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set
Directory of Open Access Journals (Sweden)
M Barezi
2011-03-01
Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, the eigenenergies and eigenfunctions of a hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions; they easily produce trial wave functions with the appropriate boundary conditions. The main characteristics of B-splines are their high localization and flexibility. In addition, these functions are numerically stable and can handle a high volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, extensive convergence tests of the eigenenergies for different cavity sizes have been carried out.
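A minimal sketch of this kind of linear variational scheme, for the l = 0 radial equation with a hard-wall cavity: expand u(r) in B-splines that vanish at both walls, assemble overlap, kinetic, and potential matrices, and solve the generalized eigenproblem. The knot layout, grid, and quadrature below are illustrative choices, not the article's:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import trapezoid
from scipy.linalg import eigh

def confined_levels(radius=5.0, n_intervals=30, degree=3, z=0.0, n_grid=4000):
    """Lowest l = 0 eigenvalues of -(1/2) u'' - z/r u with u(0) = u(R) = 0,
    expanded in B-splines vanishing at both walls (atomic units)."""
    # Open uniform knot vector: endpoint knots repeated degree+1 times.
    knots = np.concatenate([[0.0] * degree,
                            np.linspace(0.0, radius, n_intervals + 1),
                            [radius] * degree])
    n_basis = len(knots) - degree - 1
    r = np.linspace(1e-8, radius, n_grid)
    vals, ders = [], []
    for i in range(1, n_basis - 1):       # drop edge splines -> u(0) = u(R) = 0
        c = np.zeros(n_basis)
        c[i] = 1.0
        b = BSpline(knots, c, degree)
        vals.append(b(r))
        ders.append(b.derivative()(r))
    B, dB = np.array(vals), np.array(ders)
    S = trapezoid(B[:, None, :] * B[None, :, :], r, axis=2)            # overlap
    T = 0.5 * trapezoid(dB[:, None, :] * dB[None, :, :], r, axis=2)    # kinetic
    U = trapezoid(B[:, None, :] * (-z / r) * B[None, :, :], r, axis=2) # Coulomb
    return eigh(T + U, S, eigvals_only=True)
```

Setting z = 0 reduces this to a particle in a spherical box, whose exact l = 0 levels n²π²/(2R²) give a convergence check.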
Efficient G0W0 using localized basis sets: a benchmark for molecules
Koval, Petr; Per Ljungberg, Mathias; Sanchez-Portal, Daniel
Electronic structure calculations within Hedin's GW approximation are becoming increasingly accessible to the community. In particular, as it has been shown earlier and we confirm by calculations using our MBPT_LCAO package, the computational cost of the so-called G0W0 can be made comparable to the cost of a regular Hartree-Fock calculation. In this work, we study the performance of our new implementation of G0W0 to reproduce the ionization potentials of all 117 closed-shell molecules belonging to the G2/97 test set, using a pseudo-potential starting point provided by the popular density-functional package SIESTA. Moreover, the ionization potentials and electron affinities of a set of 24 acceptor molecules are compared to experiment and to reference all-electron calculations. PK: Guipuzcoa Fellow; PK,ML,DSP: Deutsche Forschungsgemeinschaft (SFB1083); PK,DSP: MINECO MAT2013-46593-C6-2-P.
The prefabricated building risk decision research of DM technology on the basis of Rough Set
Guo, Z. L.; Zhang, W. B.; Ma, L. H.
2017-08-01
With resource crises and increasingly serious pollution, green building has been strongly advocated by most countries and has become a new building style in the construction field. Compared with traditional building, prefabricated building has its own irreplaceable advantages but is influenced by many uncertainties. So far, most studies worldwide have been qualitative. This paper expounds the significance of prefabricated building. Building on existing research methods and combining them with rough set theory, this paper redefines the factors that affect prefabricated building risk. Moreover, it quantifies the risk factors and establishes an expert knowledge base through assessment. Redundant attributes and attribute values among the risk factors are then reduced to form the simplest decision rules. These rules, obtained with the DM (data mining) technology of rough set theory, provide prefabricated building with a controllable new decision-making method.
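The rough-set machinery behind such an attribute reduction can be sketched generically: partition objects by indiscernibility on a chosen attribute set, then compute lower and upper approximations of a decision class. The toy attribute table in the test is hypothetical, not the paper's actual risk factors:

```python
def indiscernibility(table, attrs):
    """Partition object ids into equivalence classes of objects that are
    indistinguishable on the given attributes."""
    classes = {}
    for oid, row in table.items():
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(oid)
    return list(classes.values())

def lower_approximation(partition, target):
    """Objects certainly in `target`: classes fully contained in it."""
    return {o for cls in partition if cls <= target for o in cls}

def upper_approximation(partition, target):
    """Objects possibly in `target`: classes that intersect it."""
    return {o for cls in partition if cls & target for o in cls}
```

An attribute is dispensable if dropping it leaves the lower approximation of every decision class unchanged; this is the basis of reduct construction and rule simplification.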
International Nuclear Information System (INIS)
Lee, Dong Ah; Lee, Jong Hoon; Yoo, Jun Beom
2011-01-01
Controllers in safety-critical systems such as nuclear power plants often use Function Block Diagrams (FBDs) to design embedded software. The design is implemented in a programming language such as C and compiled for the particular target hardware. The implementation must have the same behavior as the design, and this behavior should be verified explicitly. For example, the pSET (POSAFE-Q Software Engineering Tool) is loader software to program the POSAFE-Q PLC (Programmable Logic Controller) and was developed as a part of the KNICS (Korea Nuclear Instrumentation and Control System R and D Center) project. It uses FBDs to design PLC software and generates ANSI-C code, which is compiled into machine code. To verify the equivalence between the FBDs and the ANSI-C code, a mathematical proof of the code generator or a verification tool such as RETRANS can help guarantee the equivalence. Mathematical proof, however, has the weakness of requiring high expenditure and repeated effort whenever the translator is modified. RETRANS, on the other hand, reconstructs the generated source code without consideration of the generator; its weakness is that the reconstruction of the generated code needs additional analysis. This paper introduces a process for verifying behavioral consistency between the design and its implementation in pSET using the HW-CBMC. The HW-CBMC is a formal verification tool that verifies equivalence between hardware and software descriptions. It requires two inputs for checking equivalence: Verilog for the hardware and ANSI-C for the software. In this approach, the FBDs are translated into a semantically equivalent Verilog program, and the HW-CBMC verifies equivalence between the Verilog program and the ANSI-C program generated from the FBDs
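HW-CBMC performs SAT-based equivalence checking of a Verilog description against ANSI-C; the behavioral-consistency idea can be illustrated at toy scale by exhaustively comparing a function-block "design" against its "generated code" over all inputs. The set-dominant latch semantics here is a hypothetical example, not an actual POSAFE-Q block:

```python
from itertools import product

def fbd_design(set_, reset, prev):
    # Design semantics of a hypothetical SR block: SET dominates RESET.
    return True if set_ else (False if reset else prev)

def c_implementation(set_, reset, prev):
    # "Generated code" to be checked against the design.
    return bool(set_ or (prev and not reset))

def equivalent(f, g, n_inputs):
    """Exhaustively check f == g over all Boolean input vectors
    (what a model checker does symbolically via SAT)."""
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))
```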
Energy Technology Data Exchange (ETDEWEB)
Lee, Dong Ah; Lee, Jong Hoon; Yoo, Jun Beom [Konkuk University, Seoul (Korea, Republic of)
2011-05-15
Controllers in safety-critical systems such as nuclear power plants often use Function Block Diagrams (FBDs) to design embedded software. The design is implemented in a programming language such as C and compiled for the particular target hardware. The implementation must have the same behavior as the design, and this behavior should be verified explicitly. For example, the pSET (POSAFE-Q Software Engineering Tool) is loader software to program the POSAFE-Q PLC (Programmable Logic Controller) and was developed as a part of the KNICS (Korea Nuclear Instrumentation and Control System R and D Center) project. It uses FBDs to design PLC software and generates ANSI-C code, which is compiled into machine code. To verify the equivalence between the FBDs and the ANSI-C code, a mathematical proof of the code generator or a verification tool such as RETRANS can help guarantee the equivalence. Mathematical proof, however, has the weakness of requiring high expenditure and repeated effort whenever the translator is modified. RETRANS, on the other hand, reconstructs the generated source code without consideration of the generator; its weakness is that the reconstruction of the generated code needs additional analysis. This paper introduces a process for verifying behavioral consistency between the design and its implementation in pSET using the HW-CBMC. The HW-CBMC is a formal verification tool that verifies equivalence between hardware and software descriptions. It requires two inputs for checking equivalence: Verilog for the hardware and ANSI-C for the software. In this approach, the FBDs are translated into a semantically equivalent Verilog program, and the HW-CBMC verifies equivalence between the Verilog program and the ANSI-C program generated from the FBDs
Zhang, Xing; Carter, Emily A.
2018-01-01
We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.
International Nuclear Information System (INIS)
Hollauer, E.; Nascimento, M.A.C.
1985-01-01
The photoionization cross-section and dynamic polarizability of the lithium atom are calculated using a discrete basis set to represent both the bound and continuum states of the atom, from which an approximation to the complex dynamic polarizability is constructed. From the imaginary part of the complex dynamic polarizability one extracts the photoionization cross-section, and from its real part the dynamic polarizability. The results are in good agreement with experiment and with other more elaborate calculations. (Author) [pt
International Nuclear Information System (INIS)
Bykov, V.P.; Gerasimov, A.V.
1992-08-01
A new variational method without a basis set for calculating the eigenvalues and eigenfunctions of Hamiltonians is suggested. The extension of this method to Coulomb potentials is given. The energy and charge distribution in the two-electron system are calculated for different values of the nuclear charge Z. It is shown that at small Z the Coulomb forces disintegrate the electron cloud into two clots. (author). 3 refs, 4 figs, 1 tab
International Nuclear Information System (INIS)
Fischer, U.; Wiese, H.W.
1983-01-01
For the safe handling, processing, and storage of spent nuclear fuel, a reliable, experimentally validated method is needed to determine fuel and waste characteristics: composition, radioactivity, heat, and radiation. For PWRs, a cell-burnup procedure has been developed which can calculate the inventory consistently with cell geometry, initial enrichment, and reactor control. Routine calculations can be performed with KORIGEN using consistent cross-section sets - burnup-dependent and based on the latest Karlsruhe evaluations for actinides - which were calculated previously with the cell-burnup procedure. Extensive comparisons between calculations and experiments validate the presented procedure. For use of the KORIGEN code, the input description and sample problems are included. Improvements in the calculational method and in the data are described, and results from KORIGEN, ORIGEN, and ORIGEN2 calculations are compared. Fuel and waste inventories are given for BIBLIS-type fuel at different burnups. (orig.) [de
Three-body problem in quantum mechanics: Hyperspherical elliptic coordinates and harmonic basis sets
International Nuclear Information System (INIS)
Aquilanti, Vincenzo; Tonzani, Stefano
2004-01-01
Elliptic coordinates within the hyperspherical formalism for three-body problems were proposed some time ago [V. Aquilanti, S. Cavalli, and G. Grossi, J. Chem. Phys. 85, 1362 (1986)] and have recently also found application, for example, in chemical reaction theory [see O. I. Tolstikhin and H. Nakamura, J. Chem. Phys. 108, 8899 (1998)]. Here we consider their role in providing a smooth transition between the known "symmetric" and "asymmetric" parametrizations, and focus on the corresponding hyperspherical harmonics. These harmonics, which will be called hyperspherical elliptic, involve products of two associated Lamé polynomials. We provide an expansion of these new sets in a finite series of standard hyperspherical harmonics, producing a powerful tool for future applications in the field of scattering and bound-state quantum-mechanical three-body problems.
International Nuclear Information System (INIS)
Freedhoff, Helen
2004-01-01
We study an aggregate of N identical two-level atoms (TLA's) coupled by the retarded interatomic interaction, using the Lehmberg-Agarwal master equation. First, we calculate the entangled eigenstates of the system; then, we use these eigenstates as a basis set for the projection of the master equation. We demonstrate that in this basis the equations of motion for the level populations, as well as the expressions for the emission and absorption spectra, assume a simple mathematical structure and allow for a transparent physical interpretation. To illustrate the use of the general theory in emission processes, we study an isosceles triangle of atoms, and present in the long wavelength limit the (cascade) emission spectrum for a hexagon of atoms fully excited at t=0. To illustrate its use for absorption processes, we tabulate (in the same limit) the biexciton absorption frequencies, linewidths, and relative intensities for polygons consisting of N=2,...,9 TLA's
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-01
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
Skakun, Sergii; Justice, Christopher O; Vermote, Eric; Roger, Jean-Claude
2018-01-01
The Visible/Infrared Imager/Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite was launched in 2011, in part to provide continuity with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument aboard National Aeronautics and Space Administration's (NASA) Terra and Aqua remote sensing satellites. The VIIRS will eventually replace MODIS for both land science and applications and add to the coarse-resolution, long-term data record. It is, therefore, important to provide the user community with an assessment of the consistency of equivalent products from the two sensors. For this study, we do this in the context of example agricultural monitoring applications. Surface reflectance that is routinely delivered within the M{O,Y}D09 and VNP09 series of products provides critical input for generating downstream products. Given the range of applications utilizing the normalized difference vegetation index (NDVI) generated from M{O,Y}D09 and VNP09 products and the inherent differences between the MODIS and VIIRS sensors in calibration, spatial sampling, and spectral bands, the main objective of this study is to quantify the uncertainties related to transitioning from MODIS- to VIIRS-based NDVIs. In particular, we compare NDVIs derived from two sets of Level 3 MYD09 and VNP09 products with various spatial-temporal characteristics, namely 8-day composites at 500 m spatial resolution and daily Climate Modelling Grid (CMG) images at 0.05° spatial resolution. Spectral adjustment of VIIRS I1 (red) and I2 (near infra-red - NIR) bands to match MODIS/Aqua b1 (red) and b2 (NIR) bands is performed to remove a bias between MODIS- and VIIRS-based red, NIR, and NDVI estimates. Overall, the red reflectance, NIR reflectance, and NDVI uncertainties were 0.014, 0.029 and 0.056 respectively for the 500 m product and 0.013, 0.016 and 0.032 for the 0.05° product. The study shows that MODIS and VIIRS NDVI data can be used interchangeably for
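Both quantities involved here are simple pixel-wise operations: the NDVI itself, and a linear spectral band adjustment. The sketch below shows their form only; the actual adjustment coefficients in the study are fitted empirically and are not reproduced:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR reflectance."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

def adjust_band(viirs_reflectance, slope, offset):
    """Linear spectral adjustment of a VIIRS band toward its MODIS
    counterpart; `slope` and `offset` are placeholders for empirically
    fitted coefficients."""
    return slope * np.asarray(viirs_reflectance, dtype=float) + offset
```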
Simon, Sílvia; Duran, Miquel
1997-08-01
Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and insight into the changes of bond critical point properties, self-similarity values, and density differences is provided.
Structural basis for inhibition of the histone chaperone activity of SET/TAF-Iβ by cytochrome c.
González-Arzola, Katiuska; Díaz-Moreno, Irene; Cano-González, Ana; Díaz-Quintana, Antonio; Velázquez-Campoy, Adrián; Moreno-Beltrán, Blas; López-Rivas, Abelardo; De la Rosa, Miguel A
2015-08-11
Chromatin is pivotal for regulation of the DNA damage process insofar as it influences access to DNA and serves as a DNA repair docking site. Recent works identify histone chaperones as key regulators of damaged chromatin's transcriptional activity. However, understanding how chaperones are modulated during DNA damage response is still challenging. This study reveals that the histone chaperone SET/TAF-Iβ interacts with cytochrome c following DNA damage. Specifically, cytochrome c is shown to be translocated into cell nuclei upon induction of DNA damage, but not upon stimulation of the death receptor or stress-induced pathways. Cytochrome c was found to competitively hinder binding of SET/TAF-Iβ to core histones, thereby locking its histone-binding domains and inhibiting its nucleosome assembly activity. In addition, we have used NMR spectroscopy, calorimetry, mutagenesis, and molecular docking to provide an insight into the structural features of the formation of the complex between cytochrome c and SET/TAF-Iβ. Overall, these findings establish a framework for understanding the molecular basis of cytochrome c-mediated blocking of SET/TAF-Iβ, which subsequently may facilitate the development of new drugs to silence the oncogenic effect of SET/TAF-Iβ's histone chaperone activity.
Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias
2007-11-01
The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies 0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
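The three error metrics quoted above are worth keeping distinct, since the F12b bias shows up only in the signed one. Their definitions, in a short sketch:

```python
import numpy as np

def deviation_metrics(computed, reference):
    """Standard error metrics of computed values against reference values."""
    d = np.asarray(computed, dtype=float) - np.asarray(reference, dtype=float)
    return {"MSD": d.mean(),                # mean signed deviation (bias)
            "MAD": np.abs(d).mean(),        # mean absolute deviation
            "RMS": np.sqrt((d ** 2).mean())}  # root-mean-square deviation
```

A method can have a small MAD and RMS yet a systematic bias that only the MSD exposes, which is the pattern reported for the (VTZ-F12/VQZ-F12) combination.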
DEFF Research Database (Denmark)
Avery, John Scales; Rettrup, Sten; Avery, James Emil
automatically with computer techniques. The method has a wide range of applicability, and can be used to solve difficult eigenvalue problems in a number of fields. The book is of special interest to quantum theorists, computer scientists, computational chemists and applied mathematicians....
Permina, Elizaveta A.; Medvedeva, Yulia; Baeck, Pia M.; Hegde, Shubhada R.; Mande, Shekhar C.; Makeev, Vsevolod J.
2013-01-01
interactions helps to evaluate parameters for regulatory subnetwork inference. We suggest a procedure for modulon construction where a seed regulon is iteratively updated with genes having expression patterns similar to those for regulon member genes. A set
International Nuclear Information System (INIS)
Yao, Y.X.; Wang, C.Z.; Ho, K.M.
2010-01-01
A chemical bonding scheme is presented for the analysis of solid-state systems. The scheme is based on the intrinsic oriented quasiatomic minimal-basis-set orbitals (IO-QUAMBOs) previously developed by Ivanic and Ruedenberg for molecular systems. In the solid-state scheme, IO-QUAMBOs are generated by a unitary transformation of the quasiatomic orbitals located at each site of the system with the criterion of maximizing the sum of the fourth power of the interatomic orbital bond order. Possible bonding and antibonding characters are indicated by the single particle matrix elements, and can be further examined by the projected density of states. We demonstrate the method by applications to graphene and the (6,0) zigzag carbon nanotube. The oriented-orbital scheme automatically describes the system in terms of sp{sup 2} hybridization. The effect of curvature on the electronic structure of the zigzag carbon nanotube is also manifested in the deformation of the intrinsic oriented orbitals as well as a breaking of symmetry leading to nonzero single particle density matrix elements. In an additional study, the analysis is performed on the Al{sub 3}V compound. The main covalent bonding characters are identified in a straightforward way without resorting to the symmetry analysis. Our method provides a general way for chemical bonding analysis of ab initio electronic structure calculations with any type of basis set.
NSGIC Local Govt | GIS Inventory — Election Districts and Precincts dataset current as of 1991. PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct...
Rabli, Djamal; McCarroll, Ronald
2018-02-01
This review surveys the different theoretical approaches used to describe inelastic and rearrangement processes in collisions involving atoms and ions. For a range of energies from a few meV up to about 1 keV, the adiabatic representation is expected to be valid, and under these conditions inelastic and rearrangement processes take place via a network of avoided crossings of the potential energy curves of the collision system. In general, such avoided crossings are finite in number. The non-adiabatic coupling, due to the breakdown of the Born-Oppenheimer separation of the electronic and nuclear variables, depends on the ratio of the electron mass to the nuclear mass terms in the total Hamiltonian. By retaining terms in the total Hamiltonian correct to first order in the electron-to-nuclear mass ratio, a system of reaction coordinates is found which allows for a correct description of both inelastic and rearrangement channels. The connection between the use of reaction coordinates in the quantum description and the electron translation factors of the impact parameter approach is established. A major result is that only when reaction coordinates are used is it possible to introduce the notion of a minimal basis set. Such a set must include all avoided crossings, including both radial coupling and long-range Coriolis coupling. But only when reaction coordinates are used can such a basis set be considered complete. In particular, when the centre of nuclear mass is used as the centre of coordinates, rather than the correct reaction coordinates, it is shown that erroneous results are obtained. A few results to illustrate this important point are presented: one concerning a simple two-state Landau-Zener type avoided crossing, the other concerning a network of multiple crossings in a typical electron capture process involving a highly charged ion and a neutral atom.
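The two-state Landau-Zener avoided crossing mentioned at the end of the abstract has a closed-form single-passage transition probability, P = exp(-2π H12² / (ħ v |ΔF|)), where H12 is the diabatic coupling, v the radial velocity and |ΔF| the difference of diabatic slopes at the crossing. A minimal numerical sketch (all parameter values below are hypothetical, in atomic units):

```python
import math

def landau_zener_probability(h12, slope_diff, velocity, hbar=1.0):
    """Single-passage Landau-Zener probability of remaining on the
    diabatic curve through a two-state avoided crossing.

    h12        : off-diagonal coupling at the crossing
    slope_diff : |d(E1 - E2)/dR| of the diabatic curves at the crossing
    velocity   : radial velocity of the nuclei at the crossing
    """
    return math.exp(-2.0 * math.pi * h12**2 / (hbar * velocity * slope_diff))

# Hypothetical parameters in atomic units
p = landau_zener_probability(h12=0.01, slope_diff=0.05, velocity=0.002)
```

Weaker coupling or faster passage pushes p toward 1 (the crossing is traversed diabatically), which is the limit where the avoided crossing becomes ineffective for electron capture.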
Prull, Matthew W
2015-01-01
The present study examined age-related differences in the inconsistency effect, in which memory is enhanced for schema-inconsistent information compared to schema-consistent information. Young and older adults studied schema-consistent and schema-inconsistent objects in an academic office under either intentional or incidental encoding instructions, and were given two recognition tests either immediately or after 48 hr: A yes/no item recognition test that included modified remember/know judgments and a token recognition test that required determining whether an original object was replaced with a different object with the same name. Young and older adults showed equivalent inconsistency effects in both item and token recognition tests, although older adults reported phenomenologically less rich memories of schema-inconsistent objects relative to young adults. These findings run counter to previous reports suggesting that aging is associated with processing declines at encoding that impair memory for details of schema-inconsistent or distinctive events. The results are consistent with a retrieval-based account in which age-related difficulties in retrieving contextual details can be offset by environmental support.
Chacon-Madrid, Heber J; Murphy, Benjamin N; Pandis, Spyros N; Donahue, Neil M
2012-10-16
We use a two-dimensional volatility basis set (2D-VBS) box model to simulate secondary organic aerosol (SOA) mass yields of linear oxygenated molecules: n-tridecanal, 2- and 7-tridecanone, 2- and 7-tridecanol, and n-pentadecane. A hybrid model with explicit, a priori treatment of the first-generation products for each precursor molecule, followed by a generic 2D-VBS mechanism for later-generation chemistry, results in excellent model-measurement agreement. This strongly confirms that the 2D-VBS mechanism is a predictive tool for SOA modeling but also suggests that certain important first-generation products for major primary SOA precursors should be treated explicitly for optimal SOA predictions.
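The partitioning step underlying any volatility-basis-set model like the 2D-VBS above follows absorptive equilibrium: the particle-phase fraction of bin i is 1/(1 + C*_i/C_OA), solved self-consistently for the total organic aerosol mass C_OA. A one-dimensional sketch with hypothetical bin values (the paper's second, oxidation-state dimension and the aging chemistry are omitted):

```python
def partition_vbs(c_sat, c_total, c_oa_guess=1.0, iterations=100):
    """Equilibrium gas/particle partitioning over a 1-D volatility basis set.

    c_sat   : saturation concentration C* of each bin (ug/m3)
    c_total : total (gas + particle) concentration in each bin (ug/m3)
    Returns the particle-phase concentration per bin, found by fixed-point
    iteration on the total organic aerosol mass C_OA.
    """
    c_oa = c_oa_guess
    for _ in range(iterations):
        particle = [ct / (1.0 + cs / c_oa) for cs, ct in zip(c_sat, c_total)]
        c_oa = sum(particle)  # note: ignores any absorbing seed mass
    return particle

# Hypothetical four-bin example spanning C* = 0.1 to 100 ug/m3
particle = partition_vbs([0.1, 1.0, 10.0, 100.0], [0.2, 0.5, 1.0, 2.0])
```

Low-volatility bins (C* well below C_OA) end up almost entirely in the particle phase, high-volatility bins almost entirely in the gas phase.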
New STO(II)-3Gmag family basis sets for the calculation of molecular magnetic properties
Directory of Open Access Journals (Sweden)
Karina Kapusta
2015-10-01
Full Text Available An efficient approach for the construction of physically justified STO(II)-3Gmag family basis sets for the calculation of molecular magnetic properties has been proposed. The construction procedure takes into account the second order of perturbation theory for the magnetic-field case. The analytical form of the correction functions has been obtained using the closed representation of the Green functions via the solution of the nonhomogeneous Schrödinger equation for the model problem of a "one-electron atom in an external uniform magnetic field". Their performance has been evaluated in DFT-level calculations carried out with a number of functionals. Test calculations of magnetic susceptibilities and 1H nuclear magnetic shielding tensors demonstrated good agreement of the calculated values with the experimental data.
Permina, Elizaveta A.
2013-01-01
Identification of bacterial modulons from series of gene expression measurements on microarrays is a principal problem, especially relevant for inadequately studied but practically important species. Usage of a priori information on regulatory interactions helps to evaluate parameters for regulatory subnetwork inference. We suggest a procedure for modulon construction where a seed regulon is iteratively updated with genes having expression patterns similar to those for regulon member genes. A set of genes essential for a regulon is used to control modulon updating. Essential genes for a regulon were selected as a subset of regulon genes highly related by different measures to each other. Using Escherichia coli as a model, we studied how modulon identification depends on the data, including the microarray experiments set, the adopted relevance measure and the regulon itself. We have found that results of modulon identification are highly dependent on all parameters studied and thus the resulting modulon varies substantially depending on the identification procedure. Yet, modulons that were identified correctly displayed higher stability during iterations, which allows developing a procedure for reliable modulon identification in the case of less studied species where the known regulatory interactions are sparse. Copyright © 2013 Taylor & Francis.
Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman
2008-04-24
We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and B97-1, OLYP, and TPSS functionals with 6-31G and 6-31G* basis sets. Only the hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (ΔEel), in several cases resulting in the inversion of the sign of ΔEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, the effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with the small 6-31G and 6-31G* basis sets. The deviation of the computed frequency of the Fe-Im stretching mode from the experimental value decreased with the basis set in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results
Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli
2013-03-01
Cognitive radio (CR) is an intelligent wireless communication system which can dynamically adjust the parameters to improve system performance depending on the environmental change and quality of service. The core technology for CR is the design of cognitive engine, which introduces reasoning and learning methods in the field of artificial intelligence, to achieve the perception, adaptation and learning capability. Considering the dynamical wireless environment and demands, this paper proposes a design of cognitive engine based on the rough sets (RS) and radial basis function neural network (RBF_NN). The method uses experienced knowledge and environment information processed by RS module to train the RBF_NN, and then the learning model is used to reconfigure communication parameters to allocate resources rationally and improve system performance. After training learning model, the performance is evaluated according to two benchmark functions. The simulation results demonstrate the effectiveness of the model and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.
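The RBF_NN learning model described above can be sketched in miniature: Gaussian radial basis functions over fixed centers, with output weights fitted by regularized linear least squares. The rough-set preprocessing module is omitted, and a toy 1-D target function stands in for the radio-environment training data (all names and hyperparameters here are illustrative assumptions, not the paper's):

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    """Gaussian RBF activations for 1-D inputs x against fixed centers."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width**2))

def train_rbf(x, y, centers, width, reg=1e-8):
    """Fit the output weights by regularized linear least squares."""
    phi = rbf_design_matrix(x, centers, width)
    a = phi.T @ phi + reg * np.eye(len(centers))
    return np.linalg.solve(a, phi.T @ y)

def predict_rbf(x, centers, width, w):
    return rbf_design_matrix(x, centers, width) @ w

# Toy benchmark: learn sin(x) on [0, 2*pi] from 200 samples
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)
centers = np.linspace(0.0, 2.0 * np.pi, 20)
w = train_rbf(x, y, centers, width=0.5)
err = np.max(np.abs(predict_rbf(x, centers, width=0.5, w=w) - y))
```

Because only the output layer is trained, fitting reduces to one linear solve, which is what makes RBF networks attractive for the fast online reconfiguration a cognitive engine needs.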
Morgan, W James; Matthews, Devin A; Ringholm, Magnus; Agarwal, Jay; Gong, Justin Z; Ruud, Kenneth; Allen, Wesley D; Stanton, John F; Schaefer, Henry F
2018-03-13
Geometric energy derivatives which rely on core-corrected focal-point energies extrapolated to the complete basis set (CBS) limit of coupled cluster theory with iterative and noniterative quadruple excitations, CCSDTQ and CCSDT(Q), are used as elements of molecular gradients and, in the case of CCSDT(Q), expansion coefficients of an anharmonic force field. These gradients are used to determine the CCSDTQ/CBS and CCSDT(Q)/CBS equilibrium structure of the S0 ground state of H2CO, where excellent agreement is observed with previous work and experimentally derived results. A fourth-order expansion about this CCSDT(Q)/CBS reference geometry using the same level of theory produces an exceptional level of agreement to spectroscopically observed vibrational band origins with a MAE of 0.57 cm-1. Second-order vibrational perturbation theory (VPT2) and variational discrete variable representation (DVR) results are contrasted and discussed. Vibration-rotation, anharmonicity, and centrifugal distortion constants from the VPT2 analysis are reported and compared to previous work. Additionally, an initial application of a sum-over-states fourth-order vibrational perturbation theory (VPT4) formalism is employed herein, utilizing quintic and sextic derivatives obtained with a recursive algorithmic approach for response theory.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Gaigong [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Department of Mathematics, University of California, Berkeley, Berkeley, CA 94720 (United States); Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Hu, Wei, E-mail: whu@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Pask, John E., E-mail: pask1@llnl.gov [Physics Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)
2017-04-15
Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H{sub 2} and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.
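The force-accuracy check described above, benchmarking computed forces against a converged reference, can be illustrated in miniature by verifying an analytic force against a central-difference derivative of the energy. The 1-D model potential below is purely hypothetical and stands in for a total-energy surface:

```python
def energy(x):
    """Toy 1-D 'total energy' surface (hypothetical model potential)."""
    return (x - 1.0) ** 2 + 0.1 * x ** 3

def analytic_force(x):
    """Analytic force F = -dE/dx for the model potential above."""
    return -(2.0 * (x - 1.0) + 0.3 * x ** 2)

def finite_difference_force(x, h=1e-5):
    """Central-difference force used to validate the analytic expression."""
    return -(energy(x + h) - energy(x - h)) / (2.0 * h)

# Agreement to ~h**2 confirms the analytic derivative is consistent
err = abs(analytic_force(0.7) - finite_difference_force(0.7))
```

In the paper's setting the analogous comparison is between Hellmann-Feynman forces in the adaptive local basis and forces from converged planewave calculations; the residual there includes the (small) neglected Pulay term rather than just finite-difference error.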
DEFF Research Database (Denmark)
Kupka, Teobald; Stachów, Michal; Kaminsky, Jakub
2013-01-01
A linear correlation between isotropic nuclear magnetic shielding constants for seven model molecules (CH2O, H2O, HF, F2, HCN, SiH4 and H2S) calculated with 37 methods (34 density functionals, RHF, MP2 and CCSD(T)), with the affordable pcS-2 basis set, and the corresponding complete basis set results, estimated from calculations with the family of polarization-consistent pcS-n basis sets, is reported. This dependence was also supported by inspection of profiles of deviation between CBS-estimated nuclear shieldings and those obtained with the significantly smaller basis sets pcS-2 and aug-cc-pVTZ-J for the selected set of 37 calculation methods. It was possible to formulate a practical approach for estimating the values of isotropic nuclear magnetic shielding constants at the CCSD(T)/CBS and MP2/CBS levels from affordable CCSD(T)/pcS-2, MP2/pcS-2 and DFT/CBS calculations with pcS-n basis sets. The proposed method...
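The linear-correlation idea above amounts to fitting σ_CBS ≈ a·σ_pcS-2 + b on a training set and then using the line to predict CBS-quality shieldings from cheap small-basis calculations. A sketch with made-up shielding values (the numbers below are hypothetical placeholders, not data from the paper):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical shieldings (ppm): small-basis pcS-2 values vs CBS estimates
sigma_pcs2 = [31.1, 30.5, 328.9, -232.8, 82.3]
sigma_cbs = [31.0, 30.3, 327.5, -235.1, 81.9]
a, b = fit_line(sigma_pcs2, sigma_cbs)

# Estimate a CBS-quality shielding from a new pcS-2 value of 100 ppm
predicted_cbs = a * 100.0 + b
```

A slope close to 1 with a small intercept, as in this toy data, is what makes the single-point small-basis calculation a useful stand-in for the full CBS extrapolation.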
Cameron, David; Ubels, Jasper; Norström, Fredrik
2018-01-01
The amount a government should be willing to invest in adopting new medical treatments has long been under debate. With many countries using formal cost-effectiveness (C/E) thresholds when examining potential new treatments and ever-growing medical costs, accurately setting the level of a C/E threshold can be essential for an efficient healthcare system. The aim of this systematic review is to describe the prominent approaches to setting a C/E threshold, compile available national-level C/E threshold data and willingness-to-pay (WTP) data, and to discern whether associations exist between these values, gross domestic product (GDP) and health-adjusted life expectancy (HALE). This review further examines current obstacles faced with the presently available data. A systematic review was performed to collect articles which have studied national C/E thresholds and willingness-to-pay (WTP) per quality-adjusted life year (QALY) in the general population. Associations between GDP, HALE, WTP, and C/E thresholds were analyzed with correlations. Seventeen countries were identified from nine unique sources to have formal C/E thresholds within our inclusion criteria. Thirteen countries from nine sources were identified to have WTP per QALY data within our inclusion criteria. Two possible associations were identified: C/E thresholds with HALE (quadratic correlation of 0.63), and C/E thresholds with GDP per capita (polynomial correlation of 0.84). However, these results are based on few observations and therefore firm conclusions cannot be made. Most national C/E thresholds identified in our review fall within the WHO's recommended range of one-to-three times GDP per capita. However, the quality and quantity of data available regarding national average WTP per QALY, opportunity costs, and C/E thresholds is poor in comparison to the importance of adequate investment in healthcare. There exists an obvious risk that countries might either over- or underinvest in healthcare if they
Shin, SeungJun; Yu, YunSeop; Choi, JungBum
2008-10-01
New multi-valued logic (MVL) families using hybrid circuits consisting of three-gate single-electron transistors (TG-SETs) and a metal-oxide-semiconductor field-effect transistor (MOSFET) are proposed. The use of SETs offers periodic literal characteristics due to the Coulomb oscillation of the SET, which allows a realization of binary logic (BL) circuits as well as multi-valued logic (MVL) circuits. The basic operations of the proposed MVL families are successfully confirmed through SPICE circuit simulation based on the physical device model of a TG-SET. The proposed MVL circuits are found to be much faster, but to have much larger power consumption, than a previously reported MVL family; they thus present a trade-off between speed and power consumption. As an example applying the newly developed MVL families, a half-adder is introduced.
Wang, Feng; Pang, Wenning; Duffy, Patrick
2012-12-01
Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP and X3LYP) and the Hartree-Fock (HF) method has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained from a Fourier transform into momentum space from single-point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties, and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions for this orbital occur in the small-momentum region (i.e. the large-r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to newer models such as X3LYP and SAOP. The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the
International Nuclear Information System (INIS)
Lin Lin; Lu Jianfeng; Ying Lexing; Weinan, E
2012-01-01
Kohn–Sham density functional theory is one of the most widely used electronic structure theories. In the pseudopotential framework, uniform discretization of the Kohn–Sham Hamiltonian generally results in a large number of basis functions per atom in order to resolve the rapid oscillations of the Kohn–Sham orbitals around the nuclei. Previous attempts to reduce the number of basis functions per atom include the usage of atomic orbitals and similar objects, but the atomic orbitals generally require fine tuning in order to reach high accuracy. We present a novel discretization scheme that adaptively and systematically builds the rapid oscillations of the Kohn–Sham orbitals around the nuclei as well as environmental effects into the basis functions. The resulting basis functions are localized in the real space, and are discontinuous in the global domain. The continuous Kohn–Sham orbitals and the electron density are evaluated from the discontinuous basis functions using the discontinuous Galerkin (DG) framework. Our method is implemented in parallel and the current implementation is able to handle systems with at least thousands of atoms. Numerical examples indicate that our method can reach very high accuracy (less than 1 meV) with a very small number (4–40) of basis functions per atom.
International Nuclear Information System (INIS)
Frisch, M.J.; Binkley, J.S.; Schaefer, H.F. III
1984-01-01
The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F + H2 → FH + H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol-1 of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.
International Nuclear Information System (INIS)
Chechev, V.P.
2001-01-01
To avoid underestimating the uncertainty of the evaluated values for sets of consistent data the following rule is proposed: if the smallest of the input measurement uncertainties (σmin) is larger than the uncertainty obtained from statistical data processing, then σmin should be used as the final uncertainty of the evaluated value. This rule is justified by the fact that almost any measurement is indirect, and the total uncertainty of any precise measurement consists mainly of the systematic error of the measurement method. Exceptions can be made only for measured data obtained by essentially different methods (for example, half-life measurements by calorimetry and by specific activity determination).
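The rule above translates directly into code: combine consistent measurements with an inverse-variance weighted mean, then report the larger of the statistical uncertainty and the smallest input uncertainty. A minimal sketch (the numerical half-life values are hypothetical):

```python
import math

def evaluate_weighted_mean(values, uncertainties):
    """Inverse-variance weighted mean with the proposed rule: the final
    uncertainty is never smaller than the smallest input uncertainty."""
    weights = [1.0 / u**2 for u in uncertainties]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma_stat = math.sqrt(1.0 / sum(weights))
    sigma_final = max(sigma_stat, min(uncertainties))
    return mean, sigma_final

# Three consistent (hypothetical) half-life measurements, in days
mean, sigma = evaluate_weighted_mean([5.271, 5.274, 5.270],
                                     [0.003, 0.002, 0.004])
```

Here the purely statistical uncertainty (about 0.0015) would understate the likely shared systematic error, so the rule returns 0.002, the smallest input uncertainty, instead.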
DEFF Research Database (Denmark)
Faber, Rasmus; Sauer, Stephan P. A.
2018-01-01
The basis set convergence of nuclear spin-spin coupling constants (SSCC) calculated at the coupled cluster singles and doubles (CCSD) level has been investigated for ten difficult molecules. Eight of the molecules contain fluorine atoms and nine contain double or triple bonds. Results obtained...
Van der Veen, J.W.; Van Ormondt, D.; De Beer, R.
2012-01-01
In this work we report on generating/using simulated metabolite basis sets for the quantification of in vivo MRS signals, assuming that they have been acquired by using the PRESS pulse sequence. To that end we have employed the classes and functions of the GAMMA C++ library. By using several
International Nuclear Information System (INIS)
Ermakov, A.I.; Belousov, V.V.
2007-01-01
The relaxation effect of the basis functions of the STO-3G and 6-31G* basis sets (BS) on their equilibration is considered for the series of isoelectronic molecules LiF, BeO, BN and C2. Values of the basis-function parameters (exponential scale factors, orbital exponents of the Gaussian primitives, and their contraction coefficients) in the molecules are found from the criterion of minimum energy in unrestricted Hartree-Fock (UHF) calculations, using direct optimization of the parameters by the simplex and Rosenbrock methods. Several optimization schemes differing in the number of varied parameters have been examined. A coupling between the basis-function parameters of the considered sets through the mean values of the Gaussian exponents is established. The effects of relaxation on the change of the total energy and on the relative errors in calculated interatomic distances, normal-mode frequencies, dissociation energies and other molecular properties are considered. The change of the total energy upon relaxation of the basis functions (RBF) of STO-3G and 6-31G* amounts to 1100 and 80 kJ/mol, respectively, and must be taken into account when estimating energetic characteristics, especially for systems with highly polar chemical bonds. Relaxation of the STO-3G set improves the description of molecular properties in practically all considered cases, whereas relaxation of the 6-31G* set has only a slight effect on its equilibration.
Ramírez-Solís, A; Zicovich-Wilson, C M; Hernández-Lamoneda, R; Ochoa-Calle, A J
2017-01-25
The question of the non-magnetic (NM) vs. antiferromagnetic (AF) nature of the ε phase of solid oxygen is a matter of great interest and continuing debate. In particular, it has been proposed that the ε phase is actually composed of two phases, a low-pressure AF ε1 phase and a higher-pressure NM ε0 phase [Crespo et al., Proc. Natl. Acad. Sci. U. S. A., 2014, 111, 10427]. We address this problem through periodic spin-restricted and spin-polarized Kohn-Sham density functional theory calculations at pressures from 10 to 50 GPa using calibrated GGA and hybrid exchange-correlation functionals with Gaussian atomic basis sets. The two possible configurations for the antiferromagnetic (AF1 and AF2) coupling of the 0 ≤ S ≤ 1 O2 molecules in the (O2)4 unit cell were studied. Full enthalpy-driven geometry optimizations of the (O2)4 unit cells were done to study the pressure evolution of the enthalpy difference between the non-magnetic and both antiferromagnetic structures. We also address the evolution of the structural parameters and the spin per molecule vs. pressure. We find that the spin-less solution becomes more stable than both AF structures above 50 GPa and, crucially, the spin-less solution yields lattice parameters in much better agreement with experimental data at all pressures than the AF structures. The optimized AF2 broken-symmetry structures lead to large errors in the a and b lattice parameters when compared with experiments. The results for the NM model are in much better agreement with the experimental data than those found for both AF models and are consistent with a completely non-magnetic (O2)4 unit cell for the low-pressure regime of the ε phase.
Directory of Open Access Journals (Sweden)
Q. J. Zhang
2013-06-01
Full Text Available Simulations with the chemistry transport model CHIMERE are compared to measurements performed during the MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation summer campaign in the Greater Paris region in July 2009. The volatility-basis-set approach (VBS is implemented into this model, taking into account the volatility of primary organic aerosol (POA and the chemical aging of semi-volatile organic species. Organic aerosol is the main focus and is simulated with three different configurations with a modified treatment of POA volatility and modified secondary organic aerosol (SOA formation schemes. In addition, two types of emission inventories are used as model input in order to test the uncertainty related to the emissions. Predictions of basic meteorological parameters and primary and secondary pollutant concentrations are evaluated, and four pollution regimes are defined according to the air mass origin. Primary pollutants are generally overestimated, while ozone is consistent with observations. Sulfate is generally overestimated, while ammonium and nitrate levels are well simulated with the refined emission data set. As expected, the simulation with non-volatile POA and a single-step SOA formation mechanism largely overestimates POA and underestimates SOA. Simulation of organic aerosol with the VBS approach taking into account the aging of semi-volatile organic compounds (SVOC shows the best correlation with measurements. High-concentration events observed mostly after long-range transport are well reproduced by the model. Depending on the emission inventory used, simulated POA levels are either reasonable or underestimated, while SOA levels tend to be overestimated. Several uncertainties related to the VBS scheme (POA volatility, SOA yields, the aging parameterization, to emission input data, and to simulated OH levels can be responsible for
Mor, Vincent; Intrator, Orna; Unruh, Mark Aaron; Cai, Shubing
2011-04-15
The Minimum Data Set (MDS) for nursing home resident assessment has been required in all U.S. nursing homes since 1990 and has been universally computerized since 1998. Initially intended to structure clinical care planning, uses of the MDS expanded to include policy applications such as case-mix reimbursement, quality monitoring and research. The purpose of this paper is to summarize a series of analyses examining the internal consistency and predictive validity of the MDS data as used in the "real world" in all U.S. nursing homes between 1999 and 2007. We used person level linked MDS and Medicare denominator and all institutional claim files including inpatient (hospital and skilled nursing facilities) for all Medicare fee-for-service beneficiaries entering U.S. nursing homes during the period 1999 to 2007. We calculated the sensitivity and positive predictive value (PPV) of diagnoses taken from Medicare hospital claims and from the MDS among all new admissions from hospitals to nursing homes and the internal consistency (alpha reliability) of pairs of items within the MDS that logically should be related. We also tested the internal consistency of commonly used MDS based multi-item scales and examined the predictive validity of an MDS based severity measure viz. one year survival. Finally, we examined the correspondence of the MDS discharge record to hospitalizations and deaths seen in Medicare claims, and the completeness of MDS assessments upon skilled nursing facility (SNF) admission. Each year there were some 800,000 new admissions directly from hospital to US nursing homes and some 900,000 uninterrupted SNF stays. Comparing Medicare enrollment records and claims with MDS records revealed reasonably good correspondence that improved over time (by 2006 only 3% of deaths had no MDS discharge record, only 5% of SNF stays had no MDS, but over 20% of MDS discharges indicating hospitalization had no associated Medicare claim). The PPV and sensitivity levels of
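The sensitivity and positive predictive value computed in the validation study above come from a standard 2x2 agreement table between a diagnosis flagged in the MDS and the same diagnosis in Medicare claims. A minimal sketch (the counts below are hypothetical, not values from the study):

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity (recall) and positive predictive value (precision)
    for a diagnosis flagged in both MDS and claims data.

    tp : flagged in both sources
    fp : flagged in MDS but not in claims
    fn : flagged in claims but missed by the MDS
    """
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

# Hypothetical agreement counts for one diagnosis
sens, ppv = sensitivity_ppv(tp=850, fp=150, fn=200)
```

Tracking these two quantities separately matters here: a diagnosis can have a high PPV (MDS flags are usually confirmed by claims) while still missing many claims-documented cases, i.e. low sensitivity.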
Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.
2012-10-01
We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
International Nuclear Information System (INIS)
Buemi, Giuseppe
2004-01-01
Ab initio calculations of the hydrogen bridge energies (E_HB) of 2-halophenols were carried out at various levels of sophistication, using a variety of basis sets, in order to verify their ability to reproduce the experimentally determined gas-phase ordering and the related experimental O-H stretching frequencies. The semiempirical AM1 and PM3 approaches were adopted, too. Calculations were extended to the O-H...X bridge of a particular conformation of 2,4-dihalo-malonaldehyde. The results and their trend with respect to the electronegativity of the halogen series are highly dependent on the basis set. The less sophisticated 3-21G, CEP121G and LANL2DZ basis sets (with and without correlation energy inclusion) predict E_HB to decrease with decreasing electronegativity, whilst the opposite is generally found when more extended bases are used. However, all high-level calculations confirm the nearly negligible energy differences between the examined O-H...X bridges.
Directory of Open Access Journals (Sweden)
Zhenyu Mei
2012-01-01
Full Text Available The ongoing controversy over the conditions under which curb parking should be permitted has few definitive answers because comprehensive research in this area has been lacking. Our goal is to present a set of heuristic urban street speed functions for mixed traffic flow that take into account the impacts of curb parking. Two impacts have been defined to classify and quantify motor vehicles' speed dynamics in the presence of curb parking. The first, the space impact, is caused by the type of curb parking; the second, the time impact, results from drivers maneuvering into or out of parking spaces. Based on empirical data collected from six typical urban streets in Nanjing, China, two models are proposed to describe these phenomena for one-way and two-way traffic, respectively. An intensive experiment was conducted to calibrate and validate the proposed models, taking into account the complexity of the model parameters. We also provide guidelines on how to cluster and calculate the models' parameters. Results from these models demonstrate promising performance in modeling motor vehicle speeds for mixed traffic flow under the influence of curb parking.
Directory of Open Access Journals (Sweden)
Piet Swanepoel
2011-10-01
Full Text Available
ABSTRACT: This article focuses on some of the problems raised by Atkins and Rundell's (2008) approach to the design of lexicographic definitions for members of lexical sets. The questions raised are how to define and identify lexical sets, how lexical conceptual models (LCMs) can support definitional consistency and coherence in defining members of lexical sets, and what the ideal content and structure of LCMs could be. Although similarity of meaning is proposed as the defining feature of lexical sets, similarity of meaning is only one dimension of the broader concept of lexical coherence. The argument is presented that numerous conceptual lexical models (e.g. taxonomies, folk models, frames, etc.) in fact indicate, justify or explain how lexical items cohere (and thus form sets). In support of Fillmore's (2003) suggestion that definitions of the lexical items of cohering sets should be linked to such explanatory models, additional functionally-orientated arguments are presented for the incorporation of conceptual lexical models in electronic monolingual learners' dictionaries. Numerous resources exist to support the design of LCMs which can improve the functionality of definitions of members of lexical sets. A few examples are discussed of how such resources can be used to design functionally justified LCMs.
SUMMARY: Improving the functionality of dictionary definitions for lexical sets: The role of definition matrices, definition consistency, definition coherence and the incorporation of lexical conceptual models. This article focuses on some of the problems raised by Atkins and Rundell's (2008) approach to the design of lexicographic definitions for members of lexical sets. The questions posed are how lexical sets should be defined and identified, and how lexical conceptual models (LCMs) can support definitional consistency and coherence in the defining of members
Kruse, Holger; Grimme, Stefan
2012-04-21
A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This geometrical counterpoise (gCP) scheme depends only on the molecular geometry, i.e., no input from the electronic wave function is required, and it is hence applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method targets small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability to biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme in estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model
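The key idea above is that a BSSE correction can be built from geometry alone: each atom carries a "missing basis" energy that is damped by a function of the interatomic distance. A much-simplified sketch in that spirit; the per-atom terms and global parameters below are invented for illustration and are NOT the fitted gCP parameters:

```python
import math

# Geometry-only, atom pair-wise BSSE correction in the spirit of gCP: each
# atom A contributes a "missing basis" energy e_miss damped by an exponential
# of the distance to every other atom B. All numbers are hypothetical.

E_MISS = {"H": 0.008, "O": 0.045}          # hypothetical per-atom terms (hartree)
SIGMA, ALPHA, BETA = 0.20, 0.70, 1.30      # hypothetical global fit parameters

def gcp_like_correction(atoms):
    """atoms: list of (element, (x, y, z)) with coordinates in bohr."""
    e = 0.0
    for i, (el_a, ra) in enumerate(atoms):
        for j, (el_b, rb) in enumerate(atoms):
            if i == j:
                continue
            r = math.dist(ra, rb)
            e += E_MISS[el_a] * math.exp(-ALPHA * r**BETA)
    return SIGMA * e

# Toy "dimer" geometry (bohr): only the O atoms of two monomers.
toy = [("O", (0.0, 0.0, 0.0)), ("O", (5.5, 0.0, 0.0))]
print(f"{gcp_like_correction(toy):.6f}")   # prints a small positive correction
```

The published method additionally scales each pair term by an overlap-derived factor and the number of virtual orbitals on the partner atom; the sketch keeps only the distance damping.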
Energy Technology Data Exchange (ETDEWEB)
Roehle, I.
1999-11-01
A Doppler Global Velocimeter was set up in the framework of a PhD thesis. This velocimeter is optimized to carry out high-accuracy, three-component, time-averaged planar velocity measurements. The anemometer was successfully applied to wind tunnel and test rig flows, and the measurement accuracy was investigated. A volumetric data set of the flow field inside an industrial combustion chamber was measured; this data field contained about 400,000 vectors. DGV measurements in the intake of a jet engine model were carried out using a fibre-bundle borescope. The flow structure of the wake of a car model in a wind tunnel was investigated. The measurement accuracy of the DGV system is ±0.5 m/s when operated under ideal conditions. This study can serve as a basis for evaluating the use of DGV for aerodynamic development experiments. (orig.)
Directory of Open Access Journals (Sweden)
Ling Li
Full Text Available Polycomb repressive complex 2 (PRC2, a histone H3 lysine 27 methyltransferase, plays a key role in gene regulation and is a known epigenetics drug target for cancer therapy. The WD40 domain-containing protein EED is the regulatory subunit of PRC2. It binds to the tri-methylated lysine 27 of the histone H3 (H3K27me3, and through which stimulates the activity of PRC2 allosterically. Recently, we disclosed a novel PRC2 inhibitor EED226 which binds to the K27me3-pocket on EED and showed strong antitumor activity in xenograft mice model. Here, we further report the identification and validation of four other EED binders along with EED162, the parental compound of EED226. The crystal structures for all these five compounds in complex with EED revealed a common deep pocket induced by the binding of this diverse set of compounds. This pocket was created after significant conformational rearrangement of the aromatic cage residues (Y365, Y148 and F97 in the H3K27me3 binding pocket of EED, the width of which was delineated by the side chains of these rearranged residues. In addition, all five compounds interact with the Arg367 at the bottom of the pocket. Each compound also displays unique features in its interaction with EED, suggesting the dynamics of the H3K27me3 pocket in accommodating the binding of different compounds. Our results provide structural insights for rational design of novel EED binder for the inhibition of PRC2 complex activity.
Energy Technology Data Exchange (ETDEWEB)
Golovko, Yury E. [FSUE ' SSC RF-IPPE' , 249033, Bondarenko Square 1, Obninsk (Russian Federation)
2008-07-01
Experiments with plutonium, low-enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those were selected which seem most relevant to the evaluation of the uncertainty of the critical mass of mixtures of plutonium, low-enriched uranium or uranium-233 with light water. All selected experiments were examined, covariance matrices of the criticality uncertainties were developed, and some uncertainties were revised. Statistical analysis of these experiments was performed, and some contradictions were discovered and eliminated. The accuracy of criticality predictions was then evaluated using the internally consistent set of experiments with plutonium, low-enriched uranium and uranium-233 that remained after the statistical analyses. The application objects for this evaluation were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium, low-enriched uranium or uranium-233 at different concentrations, which are simplified models of external fuel cycle apparatus. It is shown that the procedure considerably reduces the uncertainty in k_eff caused by the uncertainties in neutron cross-sections. It is also shown that the results are practically independent of the initial covariance matrices of nuclear data uncertainties. (authors)
Hutter, Jürg
2003-03-01
An efficient formulation of time-dependent linear-response density functional theory for use within the plane wave basis set framework is presented. The method avoids the transformation of the Kohn-Sham matrix into the canonical basis and references virtual orbitals only through a projection operator. Using a Lagrangian formulation, nuclear derivatives of excited-state energies within the Tamm-Dancoff approximation are derived. The algorithms were implemented in a pseudopotential/plane wave code and applied to the calculation of adiabatic excitation energies, optimized geometries, and vibrational frequencies of three low-lying states of formaldehyde. Overall good agreement with other time-dependent density functional calculations, multireference configuration interaction calculations, and experimental data was found.
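Within the Tamm-Dancoff approximation mentioned above, excitation energies are eigenvalues of the Hermitian matrix A with elements A_{ia,jb} = δ_ij δ_ab (ε_a − ε_i) + K_{ia,jb}. A toy numpy sketch of that eigenproblem; the orbital-energy differences and coupling matrix are invented numbers, not formaldehyde data:

```python
import numpy as np

# Tamm-Dancoff approximation in a nutshell: diagonalize
#   A_{ia,jb} = delta_ij * delta_ab * (eps_a - eps_i) + K_{ia,jb}
# for excitation energies omega. Toy values in hartree.

de = np.array([0.30, 0.35, 0.50])          # eps_a - eps_i for three excitations
K = np.array([[0.02, 0.01, 0.00],          # symmetric toy coupling matrix
              [0.01, 0.03, 0.01],
              [0.00, 0.01, 0.02]])

A = np.diag(de) + K
omega, X = np.linalg.eigh(A)               # excitation energies and eigenvectors
print(np.round(omega, 4))
```

The paper's point is that in a plane wave basis one never builds A explicitly over canonical virtual orbitals; instead its action on trial vectors is evaluated through a projection operator, but the eigenproblem being solved is the same.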
Handbook of Gaussian basis sets
International Nuclear Information System (INIS)
Poirier, R.; Kari, R.; Csizmadia, I.G.
1985-01-01
A large body of information useful for chemists involved in molecular Gaussian computations is presented. Every effort has been made by the authors to collect all available data for cartesian Gaussians as found in the literature up to July of 1984. The data in this text include a large collection of polarization function exponents, though in this case the collection is not complete. Exponents for Slater-type orbitals (STO) are included for completeness. This text offers a collection of Gaussian exponents largely without criticism. (Auth.)
Zhao, Bin; Wang, Shuxiao; Donahue, Neil M; Chuang, Wayne; Hildebrandt Ruiz, Lea; Ng, Nga L; Wang, Yangjun; Hao, Jiming
2015-02-17
We evaluate the one-dimensional volatility basis set (1D-VBS) and two-dimensional volatility basis set (2D-VBS) in simulating the aging of SOA derived from toluene and α-pinene against smog-chamber experiments. If we simulate the first-generation products with empirical chamber fits and the subsequent aging chemistry with a 1D-VBS or a 2D-VBS, the models mostly overestimate the SOA concentrations in the toluene oxidation experiments. This is because the empirical chamber fits include both first-generation oxidation and aging; simulating aging in addition to this results in double counting of the initial aging effects. If the first-generation oxidation is treated explicitly, the base-case 2D-VBS underestimates the SOA concentrations and O:C increase of the toluene oxidation experiments; it generally underestimates the SOA concentrations and overestimates the O:C increase of the α-pinene experiments. With the first-generation oxidation treated explicitly, we could modify the 2D-VBS configuration individually for toluene and α-pinene to achieve good model-measurement agreement. However, we are unable to simulate the oxidation of both toluene and α-pinene with the same 2D-VBS configuration. We suggest that future models should implement parallel layers for anthropogenic (aromatic) and biogenic precursors, and that more modeling studies and laboratory research be done to optimize the "best-guess" parameters for each layer.
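In a one-dimensional VBS, organic mass in each volatility bin partitions to the particle phase with fraction ξ_i = 1/(1 + C*_i/C_OA), where C_OA (the total organic aerosol mass) must be found self-consistently. A minimal sketch with illustrative bin concentrations, not the paper's fitted values:

```python
# 1D volatility basis set partitioning: fixed-point iteration for the total
# organic aerosol mass C_OA, where each bin's particle fraction is
#   xi_i = 1 / (1 + Cstar_i / C_OA).
# Bin concentrations below are illustrative toy numbers.

def vbs_partition(c_total, c_star, c_oa=1.0, tol=1e-10):
    """c_total, c_star: per-bin organic mass and saturation conc. (ug/m3)."""
    for _ in range(1000):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        new_c_oa = sum(c * x for c, x in zip(c_total, xi))
        if abs(new_c_oa - c_oa) < tol:
            break
        c_oa = new_c_oa
    return c_oa, xi

c_star = [0.1, 1.0, 10.0, 100.0]       # volatility bins (ug/m3)
c_total = [2.0, 4.0, 6.0, 8.0]         # toy total mass per bin (ug/m3)
c_oa, xi = vbs_partition(c_total, c_star)
print(round(c_oa, 3))                  # converged aerosol mass loading
```

Aging chemistry then moves mass between bins (and, in the 2D-VBS, along an O:C axis) before repartitioning.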
Energy Technology Data Exchange (ETDEWEB)
Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V [Department of Quantum Chemistry, St. Petersburg State University, University Prospect 26, Stary Peterghof, St. Petersburg, 198504 (Russian Federation)], E-mail: re1973@re1973.spb.edu
2008-06-01
The results of LCAO DFT calculations of the lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U₂N₃ and UN₂ are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the uranium atom relativistic effective small-core potential of the Stuttgart-Cologne group (60 electrons in the core). The calculations include optimization of the U atom basis set. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data; the change in the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in the LCAO calculations of both UN and U₂N₃ crystals; the UN₂ crystal has a semiconducting nature.
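Derivative-free searches such as Hooke-Jeeves are a natural fit for exponent optimization because the energy comes from an external code. A self-contained sketch of a Hooke-Jeeves-style pattern search, applied here (so it can be checked analytically) to the textbook problem of one Gaussian 1s exponent for the hydrogen atom, where E(α) = 3α/2 − 2√(2α/π) has its minimum at α = 8/(9π) ≈ 0.2829 with E ≈ −0.4244 hartree:

```python
import math

# Pattern search (Hooke-Jeeves flavour, a sketch): try +/- step moves, keep
# strict improvements, halve the step when no move helps. The objective is
# the variational energy of a single normalized Gaussian for the H atom.

def energy(alpha):
    return 1.5 * alpha - 2.0 * math.sqrt(2.0 * alpha / math.pi)

def pattern_search(f, x, step=0.1, shrink=0.5, tol=1e-10):
    fx = f(x)
    while step > tol:
        moved = False
        for trial in (x + step, x - step):
            if trial > 0 and f(trial) < fx:   # exponents must stay positive
                x, fx, moved = trial, f(trial), True
                break
        if not moved:
            step *= shrink                    # no improvement: refine the mesh
    return x, fx

alpha, e = pattern_search(energy, 1.0)
print(round(alpha, 4), round(e, 4))  # → 0.2829 -0.4244
```

In a real basis set optimization, `energy` would call the external LCAO code and the search would run over many exponents at once, but the control logic is the same.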
The 6-31B(d) basis set and the BMC-QCISD and BMC-CCSD multicoefficient correlation methods.
Lynch, Benjamin J; Zhao, Yan; Truhlar, Donald G
2005-03-03
Three new multicoefficient correlation methods (MCCMs) called BMC-QCISD, BMC-CCSD, and BMC-CCSD-C are optimized against 274 data points that include atomization energies, electron affinities, ionization potentials, and reaction barrier heights. A new basis set called 6-31B(d) is developed and used as part of the new methods. BMC-QCISD has mean unsigned errors in calculating atomization energies per bond and barrier heights of 0.49 and 0.80 kcal/mol, respectively. BMC-CCSD has mean unsigned errors of 0.42 and 0.71 kcal/mol for the same two quantities. BMC-CCSD-C is an equally effective variant of BMC-CCSD that employs Cartesian rather than spherical harmonic basis sets. The mean unsigned error of BMC-CCSD or BMC-CCSD-C for atomization energies, barrier heights, ionization potentials, and electron affinities is 22% lower than that of G3SX(MP2) at an order of magnitude less cost for gradients for molecules with 9-13 atoms, and it scales better (N^6 vs N^7, where N is the number of atoms) as the size of the molecule increases.
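The idea behind a multicoefficient correlation method is to approximate an accurate energy as a linear combination E ≈ Σ_k c_k E_k of cheaper energy components and to fit the coefficients against benchmark data. A minimal sketch with invented component energies (the columns stand in for, e.g., an HF energy and correlation increments; they are not BMC-CCSD data):

```python
import numpy as np

# Least-squares fit of multicoefficient-method coefficients. Rows are toy
# "molecules", columns are toy energy components (hartree). The reference
# energies are constructed from known coefficients, so the fit recovers them.

components = np.array([
    [-1.10, -0.030, -0.008],
    [-2.05, -0.055, -0.012],
    [-0.95, -0.020, -0.005],
    [-3.20, -0.080, -0.020],
    [-1.70, -0.045, -0.010],
])
true_coeffs = np.array([1.0, 1.4, 2.1])
reference = components @ true_coeffs        # pretend benchmark energies

coeffs, *_ = np.linalg.lstsq(components, reference, rcond=None)
print(np.round(coeffs, 2))
```

In practice the fit is against hundreds of heterogeneous data (atomization energies, barrier heights, etc.), so the system is overdetermined and the residual measures the method's intrinsic error.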
Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo
2014-01-01
Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice-regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal Power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929
Löptien, U.; Dietze, H.
2014-06-01
The Baltic Sea is a seasonally ice-covered, marginal sea, situated in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).
Löptien, U.; Dietze, H.
2014-12-01
The Baltic Sea is a seasonally ice-covered, marginal sea in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website http://www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).
NSGIC Local Govt | GIS Inventory — Road and Street Centerlines dataset current as of 1989. Street: The data set is a line feature consisting of 13,948 line segments representing streets. It was created...
DEFF Research Database (Denmark)
Staunstrup, Jørgen
1998-01-01
This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications is possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.
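The "simplest form" mentioned above, a syntactical check of parameter types, can be sketched in a few lines. The interface and components below are invented for illustration:

```python
import inspect

# Syntactic interface-consistency check: a separately developed component is
# consistent with the agreed interface iff parameter names and annotated
# types match. Interface and components are hypothetical examples.

def interface_send(channel: int, payload: bytes) -> bool: ...

def component_send(channel: int, payload: bytes) -> bool:
    return channel >= 0 and len(payload) > 0

def bad_send(channel: str, payload: bytes) -> bool:   # wrong channel type
    return False

def consistent(iface, impl):
    """True iff parameter names, annotations and return type all agree."""
    si, sm = inspect.signature(iface), inspect.signature(impl)
    if list(si.parameters) != list(sm.parameters):
        return False
    return all(si.parameters[p].annotation == sm.parameters[p].annotation
               for p in si.parameters) and si.return_annotation == sm.return_annotation

print(consistent(interface_send, component_send))  # → True
print(consistent(interface_send, bad_send))        # → False
```

Semantic checks, the more sophisticated forms the paper alludes to, would additionally constrain behavior (pre/postconditions, protocols), not just signatures.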
Directory of Open Access Journals (Sweden)
Dr. Fee-Alexandra HAASE
2012-11-01
Full Text Available In this article we apply a method of proof for conceptual consistency over a long historical range, taking the example of rhetoric and persuasion. We analyze the evidentially present linguistic features of this concept within three linguistic areas: the Indo-European languages, the Semitic languages, and the Afro-Asiatic languages. We have chosen the concept 'rhetoric'/'persuasion' as a paradigm for this study. With the phenomenon of 'linguistic dispersion' we can explain the development of language as undirected, but with linguistic consistency across the borders of language families. We will prove that the Semitic and Indo-European languages are related. As a consequence, the strict differentiation between the Semitic and the Indo-European language families is outdated, following the research positions of Starostin. In contrast to this, we propose a theory of cultural exchange between the two language families.
Le Manh-Béna, Anne; Ramond, Olivier,
2011-01-01
Following the debate on the Conceptual Framework revision undertaken by the IASB and the FASB, this paper discusses three major concerns about the way financial reporting standards should be determined: (1) What is the role of a Conceptual Framework? (2) For whom and for which needs are accounting and financial reporting standards made? (3) What information set should financial reporting provide? We show that the perceived need for a Framework has in practice resulted in weak usefulness. We ...
Energy Technology Data Exchange (ETDEWEB)
Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States)
2015-12-01
In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for 14 simulated high-level waste glasses fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions. The measured chemical composition data are reported and compared with the targeted values for each component of each glass. All of the measured sums of oxides for the study glasses fell within the interval of 96.9 to 100.8 wt %, indicating recovery of all components. Comparisons of the targeted and measured chemical compositions showed that the measured values met the targeted concentrations within 10% for those components present at more than 5 wt %. The PCT results were normalized to both the targeted and measured compositions of the study glasses. Several of the glasses exhibited increases in normalized concentrations (NC_i) after the canister centerline cooling (CCC) heat treatment. Five of the glasses, after the CCC heat treatment, had NC_B values that exceeded that of the Environmental Assessment (EA) benchmark glass. These results can be combined with additional characterization, including X-ray diffraction, to determine the cause of the higher release rates.
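PCT results are reported as normalized concentrations, NC_i = C_i / f_i, where C_i is the leachate concentration of element i and f_i its mass fraction in the glass. A minimal sketch of that normalization for boron; the glass composition and leachate concentration below are invented, not values from this study:

```python
# Normalized PCT concentration for boron: NC_B = C_B / f_B, with f_B derived
# from the B2O3 weight percent of the glass. Numbers are illustrative only.

M_B, M_O = 10.811, 15.999                              # atomic masses (g/mol)
B_FRACTION_IN_B2O3 = 2 * M_B / (2 * M_B + 3 * M_O)     # mass of B per mass of B2O3

def normalized_concentration(c_leachate_g_per_l, oxide_wt_pct, elem_frac):
    f_i = (oxide_wt_pct / 100.0) * elem_frac           # element mass fraction in glass
    return c_leachate_g_per_l / f_i                    # g/L

# Toy glass: 8 wt% B2O3; measured boron leachate concentration 0.020 g/L.
nc_b = normalized_concentration(0.020, 8.0, B_FRACTION_IN_B2O3)
print(f"NC_B = {nc_b:.3f} g/L")  # → NC_B = 0.805 g/L
```

Normalizing to the measured rather than targeted composition simply swaps which oxide wt % feeds into f_i.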
Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo
2010-05-01
The sea-urchin embryo test (SET) has frequently been used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as the test endpoint. Control size increase after 48 h incubation at 20 °C must meet an acceptability criterion of 218 µm. In order to avoid false positives, minimums of 32‰ salinity, pH 7, and 2 mg/L oxygen, and a maximum of 40 µg/L NH3 (NOEC), are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 °C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
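The degree-day correction above means that only degrees above the 12 °C developmental threshold accumulate as effective exposure. A minimal sketch; the temperature record and size increase are invented:

```python
# Degree-day correction of size-increase rates with a 12 degC developmental
# threshold: a day at 11 degC contributes zero effective degree-days.
# Temperatures and growth value below are invented for illustration.

THRESHOLD_C = 12.0

def degree_days(daily_mean_temps):
    """Accumulated degC-days above the developmental threshold."""
    return sum(max(0.0, t - THRESHOLD_C) for t in daily_mean_temps)

def corrected_growth_rate(size_increase_um, daily_mean_temps):
    """Size increase (micrometres) per degree-day above threshold."""
    dd = degree_days(daily_mean_temps)
    return size_increase_um / dd if dd else float("nan")

temps = [18.0, 17.5, 16.0, 11.0]   # degC; the 11.0 degC day contributes nothing
print(round(corrected_growth_rate(150.0, temps), 2))  # → 9.68
```

This makes in situ deployments at different field temperatures comparable to the 20 °C laboratory control.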
Energy Technology Data Exchange (ETDEWEB)
Rodriguez-Bautista, Mariano; Díaz-García, Cecilia; Navarrete-López, Alejandra M.; Vargas, Rubicelia; Garza, Jorge, E-mail: jgo@xanum.uam.mx [Departamento de Química, División de Ciencias Básicas e Ingeniería, Universidad Autónoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa C. P. 09340, México D. F., México (Mexico)
2015-07-21
In this report, we use a new basis set for Hartree-Fock calculations on many-electron atoms confined by soft walls. One- and two-electron integrals were programmed in a code based on parallel programming techniques. The results obtained with this proposal for the hydrogen and helium atoms were contrasted with other proposals for studying one- and two-electron confined atoms, where we reproduced or improved the previously reported results. Usually, an atom enclosed by hard walls has been used as a model to study confinement effects on orbital energies; the main conclusion reached with this model is that orbital energies always go up when the confinement radius is reduced. However, such an observation is not necessarily valid for atoms confined by penetrable walls. The main reason behind this result is that for atoms with large polarizability, like beryllium or potassium, the external orbitals are delocalized when the confinement is imposed and, consequently, the internal orbitals behave as if they were in an ionized atom. Naturally, the shell structure of these atoms is modified drastically when they are confined. Delocalization was an argument proposed for atoms confined by hard walls, but it was never verified. In this work, the confinement imposed by soft walls allows us to analyze the delocalization concept in many-electron atoms.
Richard, Ryan M.
2016-01-05
© 2016 American Chemical Society. In designing organic materials for electronics applications, particularly for organic photovoltaics (OPV), the ionization potential (IP) of the donor and the electron affinity (EA) of the acceptor play key roles. This makes OPV design an appealing application for computational chemistry, since IPs and EAs are readily calculable with most electronic structure methods. Unfortunately, reliable high-accuracy wave function methods, such as coupled cluster theory with single, double, and perturbative triple excitations [CCSD(T)] in the complete basis set (CBS) limit, are too expensive for routine application to this problem for any but the smallest of systems. One solution is to calibrate approximate, less computationally expensive methods against a database of high-accuracy IP/EA values; however, to our knowledge, no such database exists for systems related to OPV design. The present work is the first of a multipart study whose overarching goal is to determine which computational methods can be used to reliably compute IPs and EAs of electron acceptors. This part introduces a database of 24 known organic electron acceptors and provides high-accuracy vertical IP and EA values expected to be within ±0.03 eV of the true non-relativistic, vertical CCSD(T)/CBS limit. Convergence of IP and EA values toward the CBS limit is studied systematically for the Hartree-Fock, MP2 correlation, and beyond-MP2 coupled cluster contributions to the focal point estimates.
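A focal point estimate of the kind described combines a CBS-limit Hartree-Fock energy, a CBS-limit MP2 correlation contribution, and a beyond-MP2 correction evaluated in a smaller basis under an additivity assumption. A minimal sketch, with illustrative function names and energies in hartree:

```python
HARTREE_TO_EV = 27.211386  # CODATA conversion factor

def focal_point_energy(e_hf_cbs, e_mp2_corr_cbs,
                       e_ccsdt_corr_small, e_mp2_corr_small):
    """Additivity assumption: the CCSD(T)-MP2 difference converges
    quickly with basis size, so it is evaluated in a smaller basis."""
    return e_hf_cbs + e_mp2_corr_cbs + (e_ccsdt_corr_small - e_mp2_corr_small)

def vertical_ip_ev(e_neutral_hartree, e_cation_hartree):
    """Vertical IP from total energies at the fixed neutral geometry."""
    return (e_cation_hartree - e_neutral_hartree) * HARTREE_TO_EV
```

A vertical EA follows the same pattern with the anion energy in place of the cation energy.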
Directory of Open Access Journals (Sweden)
A. P. Tsimpidi
2010-01-01
New primary and secondary organic aerosol modules have been added to PMCAMx, a three-dimensional chemical transport model (CTM), for use with the SAPRC99 chemistry mechanism, based on recent smog chamber studies. The new modelling framework is based on the volatility basis-set approach: both primary and secondary organic components are assumed to be semivolatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. This new framework, with the use of the new volatility basis parameters for low-NO_{x} and high-NO_{x} conditions, tends to predict 4–6 times higher anthropogenic SOA concentrations than those predicted with the older generation of models. The resulting PMCAMx-2008 was applied in the Mexico City Metropolitan Area (MCMA) for approximately a week during April 2003, a period of very low regional biomass burning impact. The emission inventory, which uses the MCMA 2004 official inventory as a starting point, is modified so that the primary organic aerosol (POA) emissions are distributed by volatility based on dilution experiments. The predicted organic aerosol (OA) concentrations peak in the center of Mexico City, reaching values above 40 μg m^{−3}. The model predictions are compared with the results of the Positive Matrix Factorization (PMF) analysis of the Aerosol Mass Spectrometry (AMS) observations. The model reproduces both Hydrocarbon-like Organic Aerosol (HOA) and Oxygenated Organic Aerosol (OOA) concentrations and diurnal profiles. The small OA underprediction during the rush-hour periods and overprediction in the afternoon suggest potential improvements to the description of fresh primary organic emissions and the formation of the oxygenated organic aerosols, respectively, although they may also be due to errors in the simulation of dispersion and vertical mixing. However, the AMS OOA data are not specific enough to prove that the model reproduces the organic aerosol
Tsimpidi, A. P.; Karydis, V. A.; Pandis, S. N.; Zavala, M.; Lei, W.; Molina, L. T.
2007-12-01
Anthropogenic air pollution is an increasingly serious problem for public health, agriculture, and global climate. Organic material (OM) contributes ~20-50% of the total fine aerosol mass at continental mid-latitudes. Although OM accounts for a large fraction of PM2.5 concentration worldwide, the contributions of primary and secondary organic aerosol have been difficult to quantify. In this study, new primary and secondary organic aerosol modules were added to PMCAMx, a three-dimensional chemical transport model (Gaydos et al., 2007), for use with the SAPRC99 chemistry mechanism (Carter, 2000; ENVIRON, 2006), based on recent smog chamber studies (Robinson et al., 2007). The new modeling framework is based on the volatility basis-set approach (Lane et al., 2007): both primary and secondary organic components are assumed to be semivolatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. The emission inventory, which uses the MCMA 2004 official inventory (CAM, 2006) as a starting point, is modified so that the primary organic aerosol (POA) emissions are distributed by volatility based on dilution experiments (Robinson et al., 2007). Sensitivity tests in which POA is considered nonvolatile, and POA and SOA chemically reactive, are also described. In all cases PMCAMx is applied in the Mexico City Metropolitan Area during March 2006. The modeling domain covers a 180x180x6 km region in the MCMA with 3x3 km grid resolution. The model predictions are compared with Aerodyne's Aerosol Mass Spectrometry (AMS) observations from the MILAGRO Campaign. References: Robinson, A. L.; Donahue, N. M.; Shrivastava, M. K.; Weitkamp, E. A.; Sage, A. M.; Grieshop, A. P.; Lane, T. E.; Pandis, S. N.; Pierce, J. R., 2007. Rethinking organic aerosols: semivolatile emissions and photochemical aging. Science 315, 1259-1262. Gaydos, T. M.; Pinder, R. W.; Koo, B.; Fahey, K. M.; Pandis, S. N., 2007. Development and application of a three-dimensional aerosol
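The volatility basis-set partitioning used in both PMCAMx studies can be sketched as absorptive equilibrium over logarithmically spaced C* bins, iterated to self-consistency. This is a schematic with hypothetical names and bin loadings, concentrations in µg m^{-3}:

```python
def organic_aerosol_mass(c_star, c_total, c_oa_guess=1.0, n_iter=100):
    """Fixed-point iteration for the condensed organic aerosol mass C_OA:
    each volatility bin i partitions to the particle phase with
    fraction 1 / (1 + C*_i / C_OA)."""
    c_oa = c_oa_guess
    for _ in range(n_iter):
        c_oa = sum(c / (1.0 + cs / c_oa)
                   for cs, c in zip(c_star, c_total))
    return c_oa

# Logarithmically spaced volatility bins, C* = 0.01 ... 1000 ug m^-3,
# with hypothetical total (gas + particle) concentrations per bin.
c_star = [10.0 ** k for k in range(-2, 4)]
c_total = [0.5, 0.5, 1.0, 2.0, 4.0, 8.0]
c_oa = organic_aerosol_mass(c_star, c_total)
```

Low-C* bins end up almost entirely in the particle phase, high-C* bins almost entirely in the gas phase, which is the behavior the volatility distribution encodes.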
Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi
2015-10-01
One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined so as to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.
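The scaled combination of dispersion and gCP terms described above can be illustrated as a two-parameter least-squares fit against reference binding energies. The helper functions and the fit data are hypothetical stand-ins, not the published parameterization:

```python
def corrected_binding(e_hf, e_disp, e_gcp, w_disp, w_gcp):
    """Binding energy with scaled dispersion and BSSE corrections."""
    return e_hf + w_disp * e_disp + w_gcp * e_gcp

def fit_weights(samples, references):
    """Solve the 2x2 normal equations for the scale factors that best
    reproduce reference binding energies (e.g. CCSD(T)/CBS values).
    samples: list of (e_hf, e_disp, e_gcp) tuples."""
    sxx = sxy = syy = sxb = syb = 0.0
    for (e_hf, e_d, e_g), ref in zip(samples, references):
        b = ref - e_hf
        sxx += e_d * e_d
        sxy += e_d * e_g
        syy += e_g * e_g
        sxb += e_d * b
        syb += e_g * b
    det = sxx * syy - sxy * sxy
    return ((syy * sxb - sxy * syb) / det,
            (sxx * syb - sxy * sxb) / det)
```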
NSGIC Local Govt | GIS Inventory — Parcels and Land Ownership dataset current as of unknown. This data set consists of digital map files containing parcel-level cadastral information obtained from...
Blank, L. Aaron; Sharma, Amit R.; Weeks, David E.
2018-03-01
The X ^{2}Σ^{+}_{1/2}, A ^{2}Π_{1/2}, A ^{2}Π_{3/2}, and B ^{2}Σ^{+}_{1/2} potential-energy curves for Rb+He are computed at the spin-orbit multireference configuration interaction level of theory using a hierarchy of Gaussian basis sets at the double-zeta (DZ), triple-zeta (TZ), and quadruple-zeta (QZ) levels of valence quality. Counterpoise and Davidson-Silver corrections are employed to remove basis-set superposition error and ameliorate size-consistency error. An extrapolation is performed to obtain a final set of potential-energy curves in the complete basis-set (CBS) limit. This yields four sets of systematically improved X ^{2}Σ^{+}_{1/2}, A ^{2}Π_{1/2}, A ^{2}Π_{3/2}, and B ^{2}Σ^{+}_{1/2} potential-energy curves that are used to compute the A ^{2}Π_{3/2} bound vibrational energies, the position of the D2 blue satellite peak, and the D1 and D2 pressure broadening and shifting coefficients at the DZ, TZ, QZ, and CBS levels. Results are compared with previous calculations and experimental observation.
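The CBS extrapolation step can be sketched with the common two-point 1/n^{3} formula for correlation energies. This is one of several published schemes; the abstract does not specify which form was used, so treat the function below as an illustrative assumption:

```python
def cbs_two_point(e_small, e_large, n_small=3, n_large=4):
    """Two-point 1/n^3 extrapolation of correlation energies from basis
    sets with cardinal numbers n_small (e.g. TZ) and n_large (e.g. QZ)."""
    a, b = n_small ** 3, n_large ** 3
    return (b * e_large - a * e_small) / (b - a)

# Synthetic check: energies obeying E_n = E_CBS + A / n^3 exactly
e_cbs, amp = -1.0, 0.5
e_tz = e_cbs + amp / 3 ** 3
e_qz = e_cbs + amp / 4 ** 3
```

Applied point by point along the DZ/TZ/QZ curves, this recovers a CBS-limit potential-energy curve.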
DEFF Research Database (Denmark)
Brandbyge, Mads
2014-01-01
..., different from what would be obtained by using an orthogonal basis, and dividing surfaces defined in real space. We argue that this assumption need not be fulfilled to get exact results. We show how the current/transmission calculated by the standard Green's function method is independent...
Czech Academy of Sciences Publication Activity Database
Li, F.; Wang, L.; Zhao, J.; Xie, J. R. H.; Riley, Kevin Eugene; Chen, Z.
2011-01-01
Roč. 130, 2/3 (2011), s. 341-352 ISSN 1432-881X Institutional research plan: CEZ:AV0Z40550506 Keywords : water cluster * density functional theory * MP2 . CCSD(T) * basis set * relative energies Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.162, year: 2011
International Nuclear Information System (INIS)
Zhidkov, P.E.
1998-01-01
We consider the problem u'' = f(u^{2})u (0 < x < 1), u(0) = u(1) = 0, where f(r) → -∞ for r → ∞. It is known that this problem possesses a sequence of solutions {u_{n}}, n = 0, 1, 2, ..., such that the nth solution u_{n}(x) has precisely n roots in the interval (0, 1). We prove the existence of a constant s_{0} > 0 such that, for 0 ≤ s < s_{0}, an arbitrary above-described sequence of solutions of our problem is a basis of the space H^{s}(0, 1)
International Nuclear Information System (INIS)
Barashenkov, V.S.; Pogodaev, G.N.; Polanski, A.; Popov, Yu.P.; Puzynin, I.V.; Sisakyan, A.N.; Sosnin, A.N.
1998-01-01
Mathematical modeling and thermal flux estimations show that a combination of installations available at present at JINR - the plutonium reactor IBR-30 and the 660 MeV proton phasotron with an extracted beam current of 0.25 μA, i.e., 10% of its average value - allows one to construct an air-cooled electronuclear set-up with a multiplication coefficient K_{eff} ≅ 0.94, a neutron yield N_{tot} ≅ 10^{14}-10^{15}, and a heat generation of about 10 kW. This set-up will demonstrate the possibility of constructing subcritical transmutation-power-generating electronuclear systems that are safe and stable in operation and applicable to the utilization of weapon-grade and technical plutonium. The kinetics of the electronuclear system will be investigated, in particular fluctuations of the value of K_{eff} for various parameters of the proton beam. Cross sections of nuclear reactions important for estimating the efficiency of various conditions of nuclear waste transmutation will be measured, together with the neutron fluxes and the heat distributions inside and outside the plutonium core. A comparison of these data with theoretical calculations allows one to check and to develop significantly the methods of mathematical modeling of electronuclear systems. We also intend to estimate the possibilities of the ARC method for the burning of radioactive wastes and to study the influence of various reflectors and multipartitioning of the core, which increases the neutron yield. (author)
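For a source-driven subcritical assembly like the one described, the total neutron multiplication follows directly from K_{eff}; a one-line sketch with an illustrative function name:

```python
def neutron_multiplication(k_eff):
    """Total multiplication M = 1 / (1 - k_eff) of a subcritical,
    source-driven system; diverges as k_eff approaches criticality."""
    if not 0.0 <= k_eff < 1.0:
        raise ValueError("a subcritical system requires 0 <= k_eff < 1")
    return 1.0 / (1.0 - k_eff)

m = neutron_multiplication(0.94)  # roughly 17-fold multiplication
```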
Mitsumori, K
1993-01-01
Maximum residue level (MRL) for veterinary drugs in food of animal origin has been proposed by FAO/WHO as a new evaluation procedure, taking into account the presence of metabolites, for the regulation of veterinary drug residues. The MRL is the maximum concentration of residue resulting from the use of a veterinary drug that is recommended to be legally permitted as acceptable in a food. It is established from the Acceptable Daily Intake (ADI) obtained from the data of toxicological studies, the residue concentration of the drug when used according to good practice in the use of veterinary drugs, and the lowest level consistent with the practical analytical methods available for routine residue analysis. Among veterinary drugs, some chemicals leave a large amount of bound residues that are not extractable from tissues by the analytical methods used for the parent chemicals. In particular, the bioavailable residues, which are probably absorbed when the food is ingested, are of great toxicological concern. In this case, the FAO/WHO recommends that the MRL can be established after calculating the daily intake of residues of toxicological concern as the sum of the extractable and bioavailable bound residues.
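The intake check behind an MRL decision can be sketched as comparing the estimated daily residue intake (extractable plus bioavailable bound residues) against the ADI budget. The food basket, default body weight, and function names below are illustrative assumptions, not FAO/WHO values:

```python
def daily_residue_intake_ug(food_basket):
    """food_basket: iterable of (grams eaten per day, residue in ug/g),
    where the residue concentration includes extractable and
    bioavailable bound residues of toxicological concern."""
    return sum(grams * conc for grams, conc in food_basket)

def within_adi(food_basket, adi_ug_per_kg_bw, body_weight_kg=60.0):
    """True if the estimated intake stays within the ADI budget."""
    return (daily_residue_intake_ug(food_basket)
            <= adi_ug_per_kg_bw * body_weight_kg)

# 300 g of tissue at 0.1 ug/g against an ADI of 1 ug/kg bw (60 kg person)
ok = within_adi([(300.0, 0.1)], 1.0)
```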
Authorization basis requirements comparison report
Energy Technology Data Exchange (ETDEWEB)
Brantley, W.M.
1997-08-18
The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.
Structural Consistency, Consistency, and Sequential Rationality.
Kreps, David M; Ramey, Garey
1987-01-01
Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of structural...
International Nuclear Information System (INIS)
Wang Xiaomin; Tegmark, Max; Zaldarriaga, Matias
2002-01-01
We perform a detailed analysis of the latest cosmic microwave background (CMB) measurements (including BOOMERaNG, DASI, Maxima and CBI), both alone and jointly with other cosmological data sets involving, e.g., galaxy clustering and the Lyman Alpha Forest. We first address the question of whether the CMB data are internally consistent once calibration and beam uncertainties are taken into account, performing a series of statistical tests. With a few minor caveats, our answer is yes, and we compress all data into a single set of 24 bandpowers with associated covariance matrix and window functions. We then compute joint constraints on the 11 parameters of the 'standard' adiabatic inflationary cosmological model. Our best fit model passes a series of physical consistency checks and agrees with essentially all currently available cosmological data. In addition to sharp constraints on the cosmic matter budget in good agreement with those of the BOOMERaNG, DASI and Maxima teams, we obtain a heaviest neutrino mass range 0.04-4.2 eV and the sharpest constraints to date on gravity waves which (together with preference for a slight red-tilt) favor 'small-field' inflation models
Lagrangian multiforms and multidimensional consistency
Energy Technology Data Exchange (ETDEWEB)
Lobb, Sarah; Nijhoff, Frank [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)
2009-10-30
We show that well-chosen Lagrangians for a class of two-dimensional integrable lattice equations obey a closure relation when embedded in a higher dimensional lattice. On the basis of this property we formulate a Lagrangian description for such systems in terms of Lagrangian multiforms. We discuss the connection of this formalism with the notion of multidimensional consistency, and the role of the lattice from the point of view of the relevant variational principle.
Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A
2012-09-01
Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5%. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates of the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal mol^{−1}) was found for SiH_{4}.
NSGIC Local Govt | GIS Inventory — Road and Street Centerlines dataset current as of 1989. StreetLabels-The data set is a text feature consisting of 6329 label points representing street names. It was...
NSGIC Local Govt | GIS Inventory — Parcels and Land Ownership dataset current as of 2005. ParcelView-The data set is a view of the parcel polygon consisting of more than 93,000 tax parcel boundaries...
Baker, Claude D.; And Others
The importance of experiential aspects of biological study is addressed using multi-dimensional classroom and field classroom approaches to student learning. This document includes a guide to setting up this style of field experience. Several teaching innovations are employed to introduce undergraduate students to the literature, techniques, and…
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models in a way that restricts human interaction to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Bitcoin Meets Strong Consistency
Decker, Christian; Seidel, Jochen; Wattenhofer, Roger
2014-01-01
The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state...
Consistent classical supergravity theories
International Nuclear Information System (INIS)
Muller, M.
1989-01-01
This book offers a presentation of both conformal and Poincaré supergravity. The consistent four-dimensional supergravity theories are classified. The formulae needed for further modelling are included.
Roper, Ian P E; Besley, Nicholas A
2016-03-21
The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.
Consistency of orthodox gravity
Energy Technology Data Exchange (ETDEWEB)
Bellucci, S. [INFN, Frascati (Italy). Laboratori Nazionali di Frascati; Shiekh, A. [International Centre for Theoretical Physics, Trieste (Italy)
1997-01-01
A recent proposal for quantizing gravity is investigated for self-consistency. The existence of a fixed-point all-order solution is found, corresponding to a consistent quantum gravity. A criterion for unifying couplings is suggested by applying the argument to more complex systems.
Quasiparticles and thermodynamical consistency
International Nuclear Information System (INIS)
Shanenko, A.A.; Biro, T.S.; Toneev, V.D.
2003-01-01
A brief and simple introduction to the problem of thermodynamical consistency is given. The thermodynamical consistency relations, which should be taken into account when constructing a quasiparticle model, are derived in a general manner from the finite-temperature extension of the Hellmann-Feynman theorem. Restrictions following from these relations are illustrated by simple physical examples. (author)
Consistency and Communication in Committees
Inga Deimen; Felix Ketelaar; Mark T. Le Quement
2013-01-01
This paper analyzes truth-telling incentives in pre-vote communication in heterogeneous committees. We generalize the classical Condorcet jury model by introducing a new informational structure that captures consistency of information. In contrast to the impossibility result shown by Coughlan (2000) for the classical model, full pooling of information followed by sincere voting is an equilibrium outcome of our model for a large set of parameter values, implying the possibility of ex post confli...
Bergantiños, Gustavo; Valencia-Toledo, Alfredo; Vidal-Puga, Juan
2016-01-01
The program evaluation review technique (PERT) is a tool used to schedule and coordinate activities in a complex project. In assigning the cost of a potential delay, we characterize the Shapley rule as the only rule that satisfies consistency and other desirable properties.
Simplified DFT methods for consistent structures and energies of large systems
Caldeweyher, Eike; Gerit Brandenburg, Jan
2018-05-01
Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems, with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview of the design of the methods, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.
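Methods of this "3c" family are exposed as single keywords in some quantum chemistry codes. As an illustration, an ORCA-style input sketch for a geometry optimization; keyword availability depends on the program and version, so check the manual before use:

```text
# ORCA-style input sketch (illustrative; verify keywords in your manual)
! HF-3c Opt TightSCF
* xyzfile 0 1 molecule.xyz
```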
DEFF Research Database (Denmark)
Thomsen, Christa; Nielsen, Anne Ellerup
2006-01-01
This chapter first outlines theory and literature on CSR and Stakeholder Relations, focusing on the different perspectives and the contextual and dynamic character of the CSR concept. CSR reporting challenges are discussed and a model of analysis is proposed. Next, our paper presents the results of a case study showing that companies use different and not necessarily consistent strategies for reporting on CSR. Finally, the implications for managerial practice are discussed. The chapter concludes by highlighting the value and awareness of the discourse and the discourse types adopted in the reporting material. By implementing consistent discourse strategies that interact according to a well-defined pattern or order, it is possible to communicate a strong social commitment on the one hand, and to take into consideration the expectations of the shareholders and the other stakeholders...
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.
2017-06-01
In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ~7 m^{3} smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low-volatility and semi-volatile primary organic material (OMsv), which is partitioned based on currently published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O:C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as the enthalpies of vaporization and the O:C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ~4.75. The mass yields of these compounds, determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations, were predicted to vary between 8 and 30% after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10^{−11} to 4.0 × 10^{−11} cm^{3} molec^{−1} s^{−1}. The average enthalpy of vaporization of secondary organic aerosol
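The temperature dependence of the saturation concentrations underlying the enthalpy-of-vaporization fit is commonly modelled with a Clausius-Clapeyron correction. A sketch, in which the function name and 298 K reference temperature are assumptions:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def c_star_at_temperature(c_star_ref, t_k, dh_vap_j_mol, t_ref_k=298.0):
    """Shift a saturation concentration C* from t_ref_k to t_k with the
    Clausius-Clapeyron relation; colder air lowers C* (less volatile)."""
    return (c_star_ref * (t_ref_k / t_k)
            * math.exp(-(dh_vap_j_mol / R) * (1.0 / t_k - 1.0 / t_ref_k)))
```

Evaluating the same bin at 263 and 288 K, as in the chamber experiments above, shifts material toward the particle phase at the lower temperature.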
Energy Technology Data Exchange (ETDEWEB)
Nolte-Ernsting, C.C.A.; Krombach, G.; Staatz, G.; Kilbinger, M.; Adam, G.B.; Guenther, R.W. [RWTH Aachen (Germany). Klinik für Radiologische Diagnostik]
1999-06-01
Purpose: To investigate the feasibility of reconstructing a virtual endoscopy from MR imaging data sets of the upper urinary tract. Method: The data obtained from 28 contrast-enhanced MR urographic examinations (5 normal; 23 pathologic) were post-processed to reconstruct a virtual ureterorenoscopy (VURS) using threshold image segmentation. The visualization of the upper urinary tract was based on the acquisition of T{sub 1}-weighted 3D gradient-echo sequences after intravenous administration of gadolinium-DTPA and a prior injection of low-dose furosemide. Results: In all 28 cases, the MR urography technique produced complete, strong contrast enhancement of the urinary tract. These 3D sequence data allowed the reconstruction of a VURS even when the collecting system was not dilated. The best accuracy was provided by the MR urography sequences with the smallest voxel size. Moreover, data acquisition based on a breath-hold technique proved superior to that using respiratory gating. Inside the renal pelvis, all calices could be assessed by turning the virtual endoscope in the appropriate direction. The visualization of the ureteral orifices in the bladder was also possible. All filling defects diagnosed by MR urography could be evaluated from the endoluminal view using the VURS. The exact characterization of the lesions based only on the assessment of the surface structure was difficult. Conclusion: A virtual endoscopy of the upper urinary tract can be successfully reconstructed using the data sets of high-resolution 3D MR urography sequences. (orig.)
Serfon, Cedric; The ATLAS collaboration
2016-01-01
One of the biggest challenges for a large-scale data management system is ensuring consistency between the global file catalogue and what is physically stored on the storage elements. To tackle this issue, the Rucio software used by the ATLAS Distributed Data Management system has been extended to automatically handle lost or unregistered files (so-called dark data). The system automatically detects these inconsistencies and takes actions, such as recovery or deletion of unneeded files, in a central manner. In this talk we present the system, explain its internals, and give some results.
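Conceptually, the detection step reduces to set differences between the file catalogue and a dump of each storage element. A minimal sketch under that reading (hypothetical file names; this is not Rucio's actual API):

```python
def find_inconsistencies(catalog, storage):
    """Compare the files a catalogue believes exist on a storage element
    with the files actually present there."""
    catalog, storage = set(catalog), set(storage)
    lost = catalog - storage   # registered but physically missing
    dark = storage - catalog   # present but unknown to the catalogue
    return lost, dark

lost, dark = find_inconsistencies(
    catalog={"a.root", "b.root", "c.root"},
    storage={"b.root", "c.root", "d.root"},
)
# lost: candidates for recovery from another replica
# dark: candidates for deletion after a grace period
```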
Griffiths, Robert B.
2001-11-01
Quantum mechanics is one of the most fundamental yet difficult subjects in physics. Nonrelativistic quantum theory is presented here in a clear and systematic fashion, integrating Born's probabilistic interpretation with Schrödinger dynamics. Basic quantum principles are illustrated with simple examples requiring no mathematics beyond linear algebra and elementary probability theory. The quantum measurement process is consistently analyzed using fundamental quantum principles without referring to measurement. These same principles are used to resolve several of the paradoxes that have long perplexed physicists, including the double slit and Schrödinger's cat. The consistent histories formalism used here was first introduced by the author, and extended by M. Gell-Mann, J. Hartle and R. Omnès. Essential for researchers yet accessible to advanced undergraduate students in physics, chemistry, mathematics, and computer science, this book is supplementary to standard textbooks. It will also be of interest to physicists and philosophers working on the foundations of quantum mechanics. A comprehensive account written by one of the main figures in the field; this is the paperback edition of a successful work on the philosophy of quantum mechanics.
Consistence of Network Filtering Rules
Institute of Scientific and Technical Information of China (English)
SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian
2004-01-01
Inconsistency among firewall/VPN (Virtual Private Network) rules imposes a large maintenance cost. With the growth of multinational companies, SOHO offices, and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables, whether on a stand-alone device or across a network, will grow geometrically. Checking the consistency of rule tables manually is inadequate; a formal approach can define semantic consistency and provide a theoretical foundation for the intelligent management of rule tables. In this paper, a formalization of host rules and network rules for automatic rule validation based on set theory is proposed, and a rule-validation scheme is defined. The analysis results show the good performance of the methods and demonstrate their potential for intelligent management based on rule tables.
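As a toy illustration of set-based rule validation of the kind the paper formalizes (the rule representation and field names are invented here, not the authors' formalism): two rules are flagged as inconsistent when their match sets intersect but their actions disagree.

```python
from itertools import combinations

# Each rule matches a set of source and destination addresses and carries
# an action. A pair of rules is a conflict candidate if their match sets
# overlap while their actions differ.

def overlaps(r1, r2):
    return bool(r1["src"] & r2["src"]) and bool(r1["dst"] & r2["dst"])

def conflicts(rules):
    return [(i, j)
            for (i, r1), (j, r2) in combinations(enumerate(rules), 2)
            if overlaps(r1, r2) and r1["action"] != r2["action"]]

rules = [
    {"src": {"10.0.0.1"}, "dst": {"any"}, "action": "deny"},
    {"src": {"10.0.0.1", "10.0.0.2"}, "dst": {"any"}, "action": "allow"},
]
found = conflicts(rules)
# Rules 0 and 1 overlap on 10.0.0.1 with opposite actions.
```

A real validator would also need address-range semantics and rule ordering (first-match shadowing), but the pairwise set check above is the core of automatic consistency detection.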
Harmonization of interests as a methodological basis of logistics
Directory of Open Access Journals (Sweden)
V. V. Baginova
2015-01-01
The article is devoted to the methodology of logistics. The basis of this methodology is the harmonization of the interests of all participants in the distribution process. The methodology consists in abandoning a fragmented approach to the management of goods flows, using the category of economic trade-offs, openly exchanging information, and jointly determining the final price of the goods, based on the convergence of cost- and value-based methods of pricing and tariff setting.
Replica consistency in a Data Grid
International Nuclear Information System (INIS)
Domenici, Andrea; Donno, Flavia; Pucciani, Gianni; Stockinger, Heinz; Stockinger, Kurt
2004-01-01
A Data Grid is a wide area computing infrastructure that employs Grid technologies to provide storage capacity and processing power to applications that handle very large quantities of data. Data Grids rely on data replication to achieve better performance and reliability by storing copies of data sets on different Grid nodes. When a data set can be modified by applications, the problem of maintaining consistency among existing copies arises. The consistency problem also concerns metadata, i.e., additional information about application data sets such as indices, directories, or catalogues. This kind of metadata is used both by the applications and by the Grid middleware to manage the data. For instance, the Replica Management Service (the Grid middleware component that controls data replication) uses catalogues to find the replicas of each data set. Such catalogues can also be replicated and their consistency is crucial to the correct operation of the Grid. Therefore, metadata consistency generally poses stricter requirements than data consistency. In this paper we report on the development of a Replica Consistency Service based on the middleware mainly developed by the European Data Grid Project. The paper summarises the main issues in the replica consistency problem, and lays out a high-level architectural design for a Replica Consistency Service. Finally, results from simulations of different consistency models are presented.
Self-consistent studies of magnetic thin film Ni (001)
International Nuclear Information System (INIS)
Wang, C.S.; Freeman, A.J.
1979-01-01
Advances in experimental methods for studying surface phenomena have provided the stimulus to develop theoretical methods capable of interpreting this wealth of new information. Of particular interest have been the relative roles of bulk and surface contributions, since in several important cases agreement between experiment and bulk self-consistent (SC) calculations within the local spin density functional formalism (LSDF) is lacking. We discuss our recent extension of the LSDF approach to the study of thin films (slabs) and the role of surface effects on magnetic properties. Results are described for Ni (001) films using our new SC numerical-basis-set LCAO method. Self-consistency within the superposition of overlapping spherical atomic charge density model is obtained iteratively with the atomic configuration as the adjustable parameter. Results are presented for the electronic charge densities and local density of states. The origin and role of (magnetic) surface states is discussed by comparison with results of earlier bulk calculations.
Maintaining consistency in distributed systems
Birman, Kenneth P.
1991-01-01
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony to interoperate with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
International Nuclear Information System (INIS)
1993-09-01
This report forms part of the supporting documentation for the low- and intermediate-level waste repository site selection procedure. The aim of the report is to present the site-specific geological data, and the geosphere database derived therefrom, which were used as a basis for evaluating the long-term safety of a repository at Wellenberg. These data also form a key component of other reports appearing simultaneously with the present one, first on the intercomparison of the four potential sites, (NTB 93-02) and second, on the safety assessment of the Wellenberg site itself (NTB 93-26). The level of detail of the present report is determined by the requirements of the other two reports mentioned, which would include presenting, discussing and justifying the geosphere dataset used in the performance assessment model calculations. The introductory chapter discusses procedures and goals. The second chapter provides an overview of the geographical and geological situation at Wellenberg. Chapter 3 then discusses the planning and progress of the field programme, and the current status of investigations is presented. The fourth chapter presents the geological situation at the Wellenberg site and describes the concept and models formulated on the basis of this information. Chapter 5 derives the performance assessment and engineering datasets, based on the investigations, concepts and modelling exercises described in chapter 4. In summary, it can be said that, to date, the investigation results from Wellenberg have confirmed predictions in all relevant respects and, in some cases, have even exceeded expectations (e.g. in relation to the available volume of host rock). (author) figs., tabs., 141 refs
International Nuclear Information System (INIS)
Goncharov, V.K.; Krekoten', O.V.; Makarov, V.V.
2015-01-01
The main aim of this article is to assess experimentally the possibility of developing and manufacturing a high-power pulsed X-ray source on the basis of a high-current electron accelerator of the diode type. This task was realized using a vacuum diode with an explosive plasma cathode made of brass and an anode of aluminum foil 850 microns thick. The experiments show that, for this anode metal, the component of X-rays propagating along the direction of electron beam motion carries a larger share of the energy than the reflected component. Photographic paper placed in a holder of dense black paper was used as a sensor. It should be noted that the present investigations are purely qualitative. At the same time, the authors succeeded in determining the angle of divergence (~90°) of the radiation generated behind the aluminum target. The possibility of generating bremsstrahlung, together with the energy estimates, indicates the applicability of this installation both in pure research and for application-oriented purposes, for example, for monitoring the radiation stability of different electronic products. (authors)
Fully self-consistent GW calculations for molecules
DEFF Research Database (Denmark)
Rostgaard, Carsten; Jacobsen, Karsten Wedel; Thygesen, Kristian Sommer
2010-01-01
We calculate single-particle excitation energies for a series of 34 molecules using fully self-consistent GW, one-shot G0W0, Hartree-Fock (HF), and hybrid density-functional theory (DFT). All calculations are performed within the projector-augmented wave method using a basis set of Wannier functions augmented by numerical atomic orbitals. The GW self-energy is calculated on the real frequency axis including its full frequency dependence and off-diagonal matrix elements. The mean absolute error of the ionization potential (IP) with respect to experiment is found to be 4.4, 2.6, 0.8, 0.4, and 0...
International Nuclear Information System (INIS)
R.J. Garrett
2002-01-01
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities
Measuring process and knowledge consistency
DEFF Research Database (Denmark)
Edwards, Kasper; Jensen, Klaes Ladeby; Haug, Anders
2007-01-01
When implementing configuration systems, knowledge about products and processes is documented and replicated in the configuration system. This practice assumes that products are specified consistently, i.e. on the same rule base, and likewise for processes. However, consistency cannot be taken for granted; rather the contrary, and attempting to implement a configuration system may easily ignite a political battle. This is because stakes are high in the sense that the rules and processes chosen may only reflect one part of the practice, ignoring a majority of the employees. To avoid this situation, this paper presents a methodology for measuring product and process consistency prior to implementing a configuration system. The methodology consists of two parts: 1) measuring knowledge consistency and 2) measuring process consistency. Knowledge consistency is measured by developing a questionnaire...
BWR NSSS design basis documentation
International Nuclear Information System (INIS)
Vij, R.S.; Bates, R.E.
2004-01-01
programs that GE has participated in and describes the different options and approaches that have been used by various utilities in their design basis programs. Some of these variations deal with the scope and depth of coverage of the information, while others are related to the process (how the work is done). Both of these topics can have a significant effect on the program cost. Some insight into these effects is provided. The final section of the paper presents a set of lessons learned and a recommendation for an optimum approach to a design basis information program. The lessons learned reflect the knowledge that GE has gained by participating in design basis programs with nineteen domestic and international BWR owner/operators. The optimum approach described in this paper is GE's attempt to define a set of information and a work process for a utility/GE NSSS Design Basis Information program that will maximize the cost effectiveness of the program for the utility. (author)
Developing consistent pronunciation models for phonemic variants
CSIR Research Space (South Africa)
Davel, M
2006-09-01
Pronunciation lexicons often contain pronunciation variants. This can create two problems: it can be difficult to define these variants in an internally consistent way, and it can also be difficult to extract generalised grapheme-to-phoneme rule sets...
Image recognition and consistency of response
Haygood, Tamara M.; Ryan, John; Liu, Qing Mary A.; Bassett, Roland; Brennan, Patrick C.
2012-02-01
Purpose: To investigate the connection between conscious recognition of an image previously encountered in an experimental setting and consistency of response to the experimental question. Materials and Methods: Twenty-four radiologists viewed 40 frontal chest radiographs and gave their opinion as to the position of a central venous catheter. One to three days later they again viewed 40 frontal chest radiographs and again gave their opinion as to the position of the central venous catheter. Half of the radiographs in the second set were repeated images from the first set and half were new. The radiologists were asked of each image whether it had been included in the first set. For this study, we are evaluating only the 20 repeated images. We used the Kruskal-Wallis test and Fisher's exact test to determine the relationship between conscious recognition of a previously interpreted image and consistency in interpretation of the image. Results: There was no significant correlation between recognition of the image and consistency in response regarding the position of the central venous catheter. In fact, there was a trend in the opposite direction, with radiologists being slightly more likely to give a consistent response with respect to images they did not recognize than with respect to those they did recognize. Conclusion: Radiologists' recognition of previously encountered images in an observer-performance study does not noticeably color their interpretation on the second encounter.
Probing community nurses' professional basis
DEFF Research Database (Denmark)
Schaarup, Clara; Pape-Haugaard, Louise; Jensen, Merete Hartun
2017-01-01
Complicated and long-lasting wound care of diabetic foot ulcers is moving from specialists in wound care at hospitals towards community nurses without specialist diabetic foot ulcer wound care knowledge. The aim of the study is to elucidate community nurses' professional basis for treating diabetic foot ulcers. A situational case study design was adopted in an archetypical Danish community nursing setting. Experience is a crucial component in the community nurses' professional basis for treating diabetic foot ulcers. Peer-to-peer training is the prevailing way to learn about diabetic foot ulcers; however, this contributes to the risk of low evidence-based practice. Finally, a frequent behaviour among the community nurses is to consult colleagues before treating the diabetic foot ulcers.
CLOPW; a mixed basis set full potential electronic structure method
Bekker, Hermie Gerhard
1997-01-01
This thesis is about the development of the full-potential CLOPW package for electronic structure calculations. Chapter 1 provides the necessary background in the theory of solid state physics. It gives a short overview of the effective one-particle model as commonly used in solid state physics. It
Consistency argued students of fluid
Viyanti; Cari; Suparmi; Winarti; Slamet Budiarti, Indah; Handika, Jeffry; Widyastuti, Fatma
2017-01-01
Problem solving for physics concepts through consistency of argumentation can improve students' thinking skills and is important in science. The study aims to assess the consistency of students' argumentation about fluids. The population of this study comprises college students at PGRI Madiun, UIN Sunan Kalijaga Yogyakarta, and Lampung University. Using cluster random sampling, a sample of 145 students was obtained. The study used a descriptive survey method. Data were obtained through a reasoned multiple-choice test and interviews. The fluid problems were modified from [9] and [1]. The results show average argumentation consistency of 4.85% correct consistency, 29.93% wrong consistency, and 65.23% inconsistency. These data point to a lack of understanding of the fluid material, which, under full consistency of argumentation, would ideally support an expanded understanding of the concept. The results serve as a reference for making improvements in future studies in order to obtain a positive change in the consistency of argumentation.
Coordinating user interfaces for consistency
Nielsen, Jakob
2001-01-01
In the years since Jakob Nielsen's classic collection on interface consistency first appeared, much has changed, and much has stayed the same. On the one hand, there's been exponential growth in the opportunities for following or disregarding the principles of interface consistency: more computers, more applications, more users, and of course the vast expanse of the Web. On the other, there are the principles themselves, as persistent and as valuable as ever. In these contributed chapters, you'll find details on many methods for seeking and enforcing consistency, along with bottom-line analyses...
Theoretical basis for dosimetry
International Nuclear Information System (INIS)
Carlsson, G.A.
1985-01-01
Radiation dosimetry is fundamental to all fields of science dealing with radiation effects and is concerned with problems which are often intricate, as hinted above. A firm scientific basis is needed to face increasing demands for accurate dosimetry. This chapter is an attempt to review and to elucidate the elements of such a basis. Quantities suitable for radiation dosimetry have been defined in the unique work of the International Commission on Radiation Units and Measurements (ICRU) to coordinate radiation terminology and usage. Basic definitions and terminology used in this chapter conform with the recent "Radiation Quantities and Units, Report 33" of the ICRU
Choice, internal consistency, and rationality
Aditi Bhattacharyya; Prasanta K. Pattanaik; Yongsheng Xu
2010-01-01
The classical theory of rational choice is built on several important internal consistency conditions. In recent years, the reasonableness of those internal consistency conditions has been questioned and criticized, and several responses to accommodate such criticisms have been proposed in the literature. This paper develops a general framework to accommodate the issues raised by the criticisms of classical rational choice theory, and examines the broad impact of these criticisms from both no...
International Nuclear Information System (INIS)
Rafelski, J.
1979-01-01
After an introductory overview of the bag model, the author uses the self-consistent solution of the coupled Dirac-meson fields to represent a bound state of strongly interacting fermions. In this framework he discusses the virial approach to classical field equations. After a short description of the numerical methods used, the properties of bound states of scalar self-consistent fields and the solutions of a self-coupled Dirac field are considered. (HSI)
Economic communication model set
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research investigating economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, while based on the same general concept, has its own peculiarities of algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of Leontief's static equilibrium equation, and the real set was constructed from statistical data. During the simulation experiments, the communication process was observed dynamically and system macroparameters were estimated. This research confirmed that combining agent-based and mathematical models can produce a synergetic effect.
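The theoretical input sets mentioned above rest on Leontief's static equilibrium, in which gross output x satisfies x = Ax + d, i.e. x = (I − A)⁻¹d for a technical-coefficient matrix A and final-demand vector d. A small worked example with illustrative coefficients (not the paper's data):

```python
import numpy as np

# Leontief static equilibrium: each sector's gross output must cover
# intermediate use by all sectors (A @ x) plus final demand (d).
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])   # illustrative technical coefficients
d = np.array([10.0, 5.0])    # illustrative final demand

# Solve (I - A) x = d rather than inverting explicitly.
x = np.linalg.solve(np.eye(2) - A, d)
```

Output vectors of this kind can then seed the exchange volumes of an agent-based simulation, which is presumably how the "theoretical" data sets in the paper were constructed.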
DEFF Research Database (Denmark)
Tsapatsaris, Nikolaos; Willendrup, Peter Kjær; E. Lechner, Ruep
2015-01-01
Results based on virtual instrument models of the first high-flux, high-resolution, spallation-based backscattering spectrometer, BASIS, are presented in this paper. These were verified using the Monte Carlo instrument simulation packages McStas and VITESS. Excellent agreement of the neutron count … are pivotal to the conceptual design of the next-generation backscattering spectrometer, MIRACLES, at the European Spallation Source.
Time-consistent and market-consistent evaluations
Pelsser, A.; Stadje, M.A.
2014-01-01
We consider evaluation methods for payoffs with an inherent financial risk as encountered for instance for portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent to market prices typically involves combining actuarial techniques with methods from
Consistency Analysis of Nearest Subspace Classifier
Wang, Yi
2015-01-01
The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...
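The classifier itself is simple to state: fit a low-dimensional subspace to each class, e.g. from the top singular vectors of the class data, and assign a new point to the class whose subspace leaves the smallest projection residual. A sketch under those assumptions (toy data; not the paper's experiments):

```python
import numpy as np

def fit_subspaces(X, y, dim):
    """Per-class orthonormal bases from the top singular vectors."""
    bases = {}
    for c in np.unique(y):
        Xc = X[y == c]
        U, _, _ = np.linalg.svd(Xc.T, full_matrices=False)
        bases[c] = U[:, :dim]
    return bases

def predict(bases, x):
    """Assign x to the class whose subspace leaves the smallest residual."""
    def residual(B):
        return np.linalg.norm(x - B @ (B.T @ x))
    return min(bases, key=lambda c: residual(bases[c]))

# Two 1-D subspaces in R^2: class 0 lies near the x-axis, class 1 near
# the y-axis, so prediction is determined by which axis a point hugs.
X = np.array([[1.0, 0.0], [2.0, 0.1], [0.0, 1.0], [0.1, 2.0]])
y = np.array([0, 0, 1, 1])
bases = fit_subspaces(X, y, dim=1)
```

Consistency in the paper's sense asks whether, as the training set grows, this rule converges to the best possible classifier of this form; the sketch only shows the decision mechanism.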
Orthology and paralogy constraints: satisfiability and consistency.
Lafond, Manuel; El-Mabrouk, Nadia
2014-01-01
A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics, can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided into two parts: (1) Is C satisfiable, i.e. can we find an event-labeled gene tree G inducing C? (2) Is there such a G which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the Graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm in the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.
Market-consistent actuarial valuation
Wüthrich, Mario V
2016-01-01
This is the third edition of this well-received textbook, presenting powerful methods for measuring insurance liabilities and assets in a consistent way, with detailed mathematical frameworks that lead to market-consistent values for liabilities. Topics covered are stochastic discounting with deflators, valuation portfolio in life and non-life insurance, probability distortions, asset and liability management, financial risks, insurance technical risks, and solvency. Including updates on recent developments and regulatory changes under Solvency II, this new edition of Market-Consistent Actuarial Valuation also elaborates on different risk measures, providing a revised definition of solvency based on industry practice, and presents an adapted valuation framework which takes a dynamic view of non-life insurance reserving risk.
The Principle of Energetic Consistency
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of
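The invariant described above is easy to check numerically for an ensemble: with the state expressed in energy variables, the energy of the ensemble mean plus the total variance (up to the chosen energy convention) must be unchanged by any norm-preserving, i.e. energy-conserving, map. A sketch using a rotation as stand-in dynamics (illustrative only; not the paper's experiments):

```python
import numpy as np

def energetic_diagnostic(ensemble):
    """Energy of the ensemble mean plus half the total variance, with the
    total energy taken as E(x) = 0.5 * ||x||^2 (energy variables)."""
    mean = ensemble.mean(axis=0)
    anomalies = ensemble - mean
    total_variance = (anomalies ** 2).sum() / (len(ensemble) - 1)
    return 0.5 * mean @ mean + 0.5 * total_variance

# A rotation conserves E(x) for every member, so both terms of the
# diagnostic, and hence their sum, must be unchanged by it.
rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 2))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
before = energetic_diagnostic(ens)
after = energetic_diagnostic(ens @ R.T)
```

The same diagnostic, applied before and after a numerical filter step, is the kind of test the author proposes for detecting spurious loss of total variance.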
Consistent guiding center drift theories
International Nuclear Information System (INIS)
Wimmel, H.K.
1982-04-01
Various guiding-center drift theories are presented that are optimized with respect to consistency. They satisfy exact energy conservation theorems (in time-independent fields), Liouville's theorems, and appropriate power balance equations. A theoretical framework is given that allows direct and exact derivation of the associated drift-kinetic equations from the respective guiding-center drift-orbit theories. These drift-kinetic equations are listed. Northrop's non-optimized theory is discussed for reference, and internal consistency relations of guiding-center drift theories are presented. (orig.)
Weak consistency and strong paraconsistency
Directory of Open Access Journals (Sweden)
Gemma Robles
2009-11-01
In a standard sense, consistency and paraconsistency are understood as, respectively, the absence of any contradiction and the absence of the ECQ ("E contradictione quodlibet") rule that allows us to conclude any well-formed formula from any contradiction. The aim of this paper is to explain concepts of weak consistency alternative to the standard one, the concepts of paraconsistency related to them, and the concept of strong paraconsistency, all of which have been defined by the author together with José M. Méndez.
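The specific logics are due to Robles and Méndez; as a generic illustration of how the ECQ rule can fail, the sketch below checks entailment in the well-known three-valued Logic of Paradox (LP), chosen here only as a familiar paraconsistent example. A contradiction can take the designated value 0.5 without every formula following from it:

```python
from itertools import product

# Three truth values; 0.5 ("both true and false") and 1 ("true") are designated.
VALUES = (0.0, 0.5, 1.0)
DESIGNATED = {0.5, 1.0}

neg = lambda a: 1.0 - a
conj = lambda a, b: min(a, b)

def entails(premise, conclusion):
    # premise/conclusion are functions of a valuation (a, b) of two atoms.
    return all(conclusion(a, b) in DESIGNATED
               for a, b in product(VALUES, repeat=2)
               if premise(a, b) in DESIGNATED)

# ECQ: from a contradiction A & ~A, infer an arbitrary B.
ecq_holds = entails(lambda a, b: conj(a, neg(a)), lambda a, b: b)
print(ecq_holds)  # False: a = 0.5, b = 0.0 is a counterexample
```

The valuation a = 0.5 makes A ∧ ¬A designated while B = 0.0 is not, so ECQ fails and the logic is paraconsistent in the standard sense described above.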
Consistent force fields for saccharides
DEFF Research Database (Denmark)
Rasmussen, Kjeld
1999-01-01
Consistent force fields for carbohydrates were hitherto developed by extensive optimization of potential energy function parameters on experimental data and on ab initio results. A wide range of experimental data is used: internal structures obtained from gas phase electron diffraction and from x... ...-anomeric effects are accounted for without addition of specific terms. The work is done in the framework of the Consistent Force Field, which originated in Israel and was further developed in Denmark. The actual methods and strategies employed have been described previously. Extensive testing of the force field...
Glass consistency and glass performance
International Nuclear Information System (INIS)
Plodinec, M.J.; Ramsey, W.G.
1994-01-01
Glass produced by the Defense Waste Processing Facility (DWPF) will have to consistently be more durable than a benchmark glass (evaluated using a short-term leach test), with high confidence. The DWPF has developed a Glass Product Control Program to comply with this specification. However, it is not clear what relevance product consistency has to long-term glass performance. In this report, the authors show that DWPF glass, produced in compliance with this specification, can be expected to effectively limit the release of soluble radionuclides to natural environments. The release of insoluble radionuclides to the environment, however, will be limited by their solubility rather than by glass durability.
Policy consistency and the achievement of Nigeria's foreign policy ...
African Journals Online (AJOL)
This study is an attempt to investigate the policy consistency of Nigeria's foreign policy and to understand the basis for this consistency, and also to see whether peacekeeping/peace-enforcement is a key instrument in the achievement of Nigeria's foreign policy goals. The objective of the study was to examine whether the ...
Personalized recommendation based on unbiased consistence
Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao
2015-08-01
Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion from objects that have been collected to those that should be recommended, resulting in a biased causal similarity estimation and mediocre performance. In this letter, we argue that in many cases a user's interests are stable, and thus the bidirectional mass diffusion abilities, whether originating from objects already collected or from those to be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, which outperforms state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
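A minimal sketch of the unidirectional mass diffusion (ProbS) baseline that this letter improves upon: resource placed on a user's collected items is split evenly over users, then over items. The adjacency matrix is invented purely for illustration:

```python
import numpy as np

# Toy user-item adjacency: rows = users, columns = items (1 = collected).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

k_user = A.sum(axis=1)            # user degrees
k_item = A.sum(axis=0)            # item degrees

# Mass diffusion: each item spreads its resource evenly over its users,
# then each user spreads what it received evenly over its items.
# W[i, j] is the fraction of item j's resource that ends up on item i.
W = (A.T @ (A / k_user[:, None])) / k_item
assert np.allclose(W.sum(axis=0), 1.0)    # diffusion conserves resource

def recommend(user):
    f = W @ A[user]               # diffuse from the user's collected items
    f[A[user] > 0] = -np.inf      # never re-recommend collected items
    return int(np.argmax(f))
```

For user 0 (who collected items 0 and 1), the diffusion ranks item 2 above item 3, since item 2 is co-collected with both of that user's items. The letter's bidirectional variant would additionally diffuse mass from candidate items back toward collected ones.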
Energy Technology Data Exchange (ETDEWEB)
Larsen, G.; Soerensen, P. [Risoe National Lab., Roskilde (Denmark)
1996-09-01
Design Basis Program 2 (DBP2) is a comprehensive, fully coupled code with the capability to operate in the time domain as well as in the frequency domain. The code was developed during the period 1991-93 and succeeds Design Basis 1, a one-blade model presuming a stiff tower, transmission system and hub. The package is designed for use on a personal computer and offers a user-friendly environment based on menu-driven editing and control facilities, with graphics used extensively for data presentation. Moreover, input data as well as results are dumped to files in ASCII format. The input data are organized in a database with a structure that easily allows for arbitrary combinations of defined structural components and load cases. (au)
Time-consistent actuarial valuations
Pelsser, A.A.J.; Salahnejhad Ghalehjooghi, A.
2016-01-01
Time-consistent valuations (i.e. pricing operators) can be created by backward iteration of one-period valuations. In this paper we investigate the continuous-time limits of well-known actuarial premium principles when such backward iteration procedures are applied. This method is applied to an
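The backward-iteration construction can be sketched in a few lines. The example below applies a one-period standard-deviation premium principle node by node on a recombining binomial tree; the payoff, probabilities and loading are invented for illustration and are not from the paper:

```python
import numpy as np

# One-period valuation: mean plus a standard-deviation loading.
def one_period(values, p=0.5, loading=0.1):
    mean = p * values[1] + (1 - p) * values[0]
    var = p * (1 - p) * (values[1] - values[0]) ** 2
    return mean + loading * np.sqrt(var)

# Terminal payoffs on a recombining binomial tree with N periods.
N = 3
payoff = np.array([float(k) for k in range(N + 1)])   # hypothetical claim

# Backward iteration: value every node by the one-period principle,
# folding the tree back one period at a time.
v = payoff.copy()
for _ in range(N):
    v = np.array([one_period(v[i:i + 2]) for i in range(len(v) - 1)])

time_consistent_price = float(v[0])   # 1.65 for these inputs
```

The iterated value (1.65) differs from the one-shot standard-deviation premium on the terminal distribution (about 1.59), illustrating that backward iteration of a one-period principle defines a different, time-consistent pricing operator.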
Dynamically consistent oil import tariffs
International Nuclear Information System (INIS)
Karp, L.; Newbery, D.M.
1992-01-01
The standard theory of optimal tariffs considers tariffs on perishable goods produced abroad under static conditions, in which tariffs affect prices only in that period. Oil and other exhaustible resources do not fit this model, for current tariffs affect the amount of oil imported, which will affect the remaining stock and hence its future price. The problem of choosing a dynamically consistent oil import tariff when suppliers are competitive but importers have market power is considered. The open-loop Nash tariff is solved for the standard competitive case in which the oil price is arbitraged, and the resulting tariff is found to rise at the rate of interest. This tariff has an equilibrium that in general is dynamically inconsistent. Nevertheless, it is shown that necessary and sufficient conditions exist under which the tariff satisfies the weaker condition of time consistency. A dynamically consistent tariff is obtained by assuming that all agents condition their current decisions on the remaining stock of the resource, in contrast to open-loop strategies. For the natural case in which all agents choose their actions simultaneously in each period, the dynamically consistent tariff is characterized and found to differ markedly from the time-inconsistent open-loop tariff. It is shown that if importers do not have overwhelming market power, then the time path of the world price is insensitive to the ability to commit, as is the level of wealth achieved by the importer. 26 refs., 4 figs
American Society for Testing and Materials. Philadelphia
2007-01-01
1.1 This practice sets forth requirements to ensure consistency in neutron-induced displacement damage testing of silicon and gallium arsenide electronic piece parts. This requires controls on facility, dosimetry, tester, and communications processes that affect the accuracy and reproducibility of these tests. It provides background information on the technical basis for the requirements and additional recommendations on neutron testing. In addition to neutrons, reactors are used to provide gamma-ray pulses of intensities and durations that are not achievable elsewhere. This practice also provides background information and recommendations on gamma-ray testing of electronics using nuclear reactors. 1.2 Methods are presented for ensuring and validating consistency in neutron displacement damage testing of electronic parts such as integrated circuits, transistors, and diodes. The issues identified and the controls set forth in this practice address the characterization and suitability of the radiation environm...
Morozov, Albert D; Dragunov, Timothy N; Malysheva, Olga V
1999-01-01
This book deals with the visualization and exploration of invariant sets (fractals, strange attractors, resonance structures, patterns etc.) for various kinds of nonlinear dynamical systems. The authors have created a special Windows 95 application called WInSet, which allows one to visualize the invariant sets. A WInSet installation disk is enclosed with the book.The book consists of two parts. Part I contains a description of WInSet and a list of the built-in invariant sets which can be plotted using the program. This part is intended for a wide audience with interests ranging from dynamical
Consistently violating the non-Gaussian consistency relation
International Nuclear Information System (INIS)
Mooij, Sander; Palma, Gonzalo A.
2015-01-01
Non-attractor models of inflation are characterized by the super-horizon evolution of curvature perturbations, introducing a violation of the non-Gaussian consistency relation between the bispectrum's squeezed limit and the power spectrum's spectral index. In this work we show that the bispectrum's squeezed limit of non-attractor models continues to respect a relation dictated by the evolution of the background. We show how to derive this relation using only symmetry arguments, without ever needing to solve the equations of motion for the perturbations
PROGNOSTICATION OF PRODUCTION OF GOODS ON THE BASIS OF FUZZY
Directory of Open Access Journals (Sweden)
Olesya TOTSKA
2009-06-01
In this article the authors forecast the output of commodities by the enterprises of the food retail industry of the Volyn region of Ukraine using fuzzy set theory. The forecasting algorithm consists of the following stages: construction of time series of the output of ten basic foodstuffs over recent years; smoothing them with respect to growth; construction of a fuzzy interval for every commodity; and determination of an optimistic estimate for every index. On the basis of the obtained information, the total output of food products in subsequent years is also calculated. The optimistic estimate is determined by an original method developed by the author; its basic idea is that the interval corresponding to the "golden" mean of the time series is the most credible.
International Nuclear Information System (INIS)
Hazeltine, R.D.
1988-12-01
The boundary layer arising in the radial vicinity of a tokamak limiter is examined, with special reference to the TEXT tokamak. It is shown that sheath structure depends upon the self-consistent effects of ion guiding-center orbit modification, as well as the radial variation of E × B-induced toroidal rotation. Reasonable agreement with experiment is obtained from an idealized model which, however simplified, preserves such self-consistent effects. It is argued that the radial sheath, which occurs whenever confining magnetic field-lines lie in the plasma boundary surface, is an object of some intrinsic interest. It differs from the more familiar axial sheath because magnetized charges respond very differently to parallel and perpendicular electric fields. 11 refs., 1 fig
Deep Feature Consistent Variational Autoencoder
Hou, Xianxu; Shen, Linlin; Sun, Ke; Qiu, Guoping
2016-01-01
We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural net...
Radioactive Waste Management Basis
International Nuclear Information System (INIS)
Perkins, B.K.
2009-01-01
The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Molecular basis for mitochondrial signaling
2017-01-01
This book covers recent advances in the study of structure, function, and regulation of metabolite, protein and ion translocating channels, and transporters in mitochondria. A wide array of cutting-edge methods are covered, ranging from electrophysiology and cell biology to bioinformatics, as well as structural, systems, and computational biology. At last, the molecular identity of two important channels in the mitochondrial inner membrane, the mitochondrial calcium uniporter and the mitochondrial permeability transition pore have been established. After years of work on the physiology and structure of VDAC channels in the mitochondrial outer membrane, there have been multiple discoveries on VDAC permeation and regulation by cytosolic proteins. Recent breakthroughs in structural studies of the mitochondrial cholesterol translocator reveal a set of novel unexpected features and provide essential clues for defining therapeutic strategies. Molecular Basis for Mitochondrial Signaling covers these and many more re...
Internal dosimetry technical basis manual
Energy Technology Data Exchange (ETDEWEB)
1990-12-20
The internal dosimetry program at the Savannah River Site (SRS) consists of radiation protection programs and activities used to detect and evaluate intakes of radioactive material by radiation workers. Examples of such programs are: air monitoring; surface contamination monitoring; personal contamination surveys; radiobioassay; and dose assessment. The objectives of the internal dosimetry program are to demonstrate that the workplace is under control and that workers are not being exposed to radioactive material, and to detect and assess inadvertent intakes in the workplace. The Savannah River Site Internal Dosimetry Technical Basis Manual (TBM) is intended to provide a technical and philosophical discussion of the radiobioassay and dose assessment aspects of the internal dosimetry program. Detailed information on air, surface, and personal contamination surveillance programs is not given in this manual except for how these programs interface with routine and special bioassay programs.
Internal dosimetry technical basis manual
International Nuclear Information System (INIS)
1990-01-01
The internal dosimetry program at the Savannah River Site (SRS) consists of radiation protection programs and activities used to detect and evaluate intakes of radioactive material by radiation workers. Examples of such programs are: air monitoring; surface contamination monitoring; personal contamination surveys; radiobioassay; and dose assessment. The objectives of the internal dosimetry program are to demonstrate that the workplace is under control and that workers are not being exposed to radioactive material, and to detect and assess inadvertent intakes in the workplace. The Savannah River Site Internal Dosimetry Technical Basis Manual (TBM) is intended to provide a technical and philosophical discussion of the radiobioassay and dose assessment aspects of the internal dosimetry program. Detailed information on air, surface, and personal contamination surveillance programs is not given in this manual except for how these programs interface with routine and special bioassay programs
Harman, Nate
2016-01-01
We consider the following counting problem related to the card game SET: How many $k$-element SET-free sets are there in an $n$-dimensional SET deck? Through a series of algebraic reformulations and reinterpretations, we show the answer to this question satisfies two polynomiality conditions.
DEFF Research Database (Denmark)
Ruud, Kenneth; Helgaker, Trygve; Kobayashi, Rika
1994-01-01
Nuclear shielding calculations are presented for multiconfigurational self-consistent field wave functions using London atomic orbitals (gauge invariant atomic orbitals). Calculations of nuclear shieldings for eight molecules (H2O, H2S, CH4, N2, CO, HF, F2, and SO2) are presented and compared to corresponding individual gauges for localized orbitals (IGLO) results. The London results show better basis set convergence than IGLO, especially for heavier atoms. It is shown that the choice of active space is crucial for determination of accurate nuclear shielding constants.
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-01-01
Following on from the Final Report of the EDA (DS/21) and the summary of the ITER Final Design Report (DS/22), the technical basis gives further details of the design of ITER. It is in two parts. The first, the Plant Design Specification, summarises the main constraints on the plant design and operation from the viewpoint of engineering and physics assumptions, compliance with safety regulations, and siting requirements and assumptions. The second, the Plant Description Document, describes the physics performance and engineering characteristics of the plant design, illustrates the potential operational consequences for the locality of a generic site, gives the construction, commissioning, exploitation and decommissioning schedule, and reports the estimated lifetime costing based on data from the industry of the EDA parties.
International Nuclear Information System (INIS)
2002-01-01
Following on from the Final Report of the EDA (DS/21) and the summary of the ITER Final Design Report (DS/22), the technical basis gives further details of the design of ITER. It is in two parts. The first, the Plant Design Specification, summarises the main constraints on the plant design and operation from the viewpoint of engineering and physics assumptions, compliance with safety regulations, and siting requirements and assumptions. The second, the Plant Description Document, describes the physics performance and engineering characteristics of the plant design, illustrates the potential operational consequences for the locality of a generic site, gives the construction, commissioning, exploitation and decommissioning schedule, and reports the estimated lifetime costing based on data from the industry of the EDA parties
A Consistent Phylogenetic Backbone for the Fungi
Ebersberger, Ingo; de Matos Simoes, Ricardo; Kupczok, Anne; Gube, Matthias; Kothe, Erika; Voigt, Kerstin; von Haeseler, Arndt
2012-01-01
The kingdom of fungi provides model organisms for biotechnology, cell biology, genetics, and life sciences in general. Only when their phylogenetic relationships are stably resolved, can individual results from fungal research be integrated into a holistic picture of biology. However, and despite recent progress, many deep relationships within the fungi remain unclear. Here, we present the first phylogenomic study of an entire eukaryotic kingdom that uses a consistency criterion to strengthen phylogenetic conclusions. We reason that branches (splits) recovered with independent data and different tree reconstruction methods are likely to reflect true evolutionary relationships. Two complementary phylogenomic data sets based on 99 fungal genomes and 109 fungal expressed sequence tag (EST) sets analyzed with four different tree reconstruction methods shed light from different angles on the fungal tree of life. Eleven additional data sets address specifically the phylogenetic position of Blastocladiomycota, Ustilaginomycotina, and Dothideomycetes, respectively. The combined evidence from the resulting trees supports the deep-level stability of the fungal groups toward a comprehensive natural system of the fungi. In addition, our analysis reveals methodologically interesting aspects. Enrichment for EST encoded data—a common practice in phylogenomic analyses—introduces a strong bias toward slowly evolving and functionally correlated genes. Consequently, the generalization of phylogenomic data sets as collections of randomly selected genes cannot be taken for granted. A thorough characterization of the data to assess possible influences on the tree reconstruction should therefore become a standard in phylogenomic analyses. PMID:22114356
Consistency Checking of Web Service Contracts
DEFF Research Database (Denmark)
Cambronero, M. Emilia; Okika, Joseph C.; Ravn, Anders Peter
2008-01-01
Behavioural properties are analyzed for web service contracts formulated in Business Process Execution Language (BPEL) and Choreography Description Language (CDL). The key result reported is an automated technique to check consistency between protocol aspects of the contracts. The contracts...... are abstracted to (timed) automata and from there a simulation is set up, which is checked using automated tools for analyzing networks of finite state processes. Here we use the Concurrency Work Bench. The proposed techniques are illustrated with a case study that include otherwise difficult to analyze fault...
Consistent application of codes and standards
International Nuclear Information System (INIS)
Scott, M.A.
1989-01-01
The guidelines presented in the US Department of Energy, General Design Criteria (DOE 6430.1A), and the Design and Evaluation Guidelines for Department of Energy Facilities Subject to Natural Phenomena Hazards (UCRL-15910) provide a consistent and well defined approach to determine the natural phenomena hazards loads for US Department of Energy site facilities. The guidelines for the application of loads combinations and allowables criteria are not as well defined and are more flexible in interpretation. This flexibility in the interpretation of load combinations can lead to conflict between the designer and overseer. The establishment of an efficient set of acceptable design criteria, based on US Department of Energy guidelines, provides a consistent baseline for analysis, design, and review. Additionally, the proposed method should not limit the design and analytical innovation necessary to analyze or qualify the unique structure. This paper investigates the consistent application of load combinations, analytical methods, and load allowables and suggests a reference path consistent with the US Department of Energy guidelines
The Biological Basis of Learning and Individuality.
Kandel, Eric R.; Hawkins, Robert D.
1992-01-01
Describes the biological basis of learning and individuality. Presents an overview of recent discoveries that suggest learning engages a simple set of rules that modify the strength of connection between neurons in the brain. The changes are cited as playing an important role in making each individual unique. (MCO)
The Emotional and Moral Basis of Rationality
Boostrom, Robert
2013-01-01
This chapter explores the basis of rationality, arguing that critical thinking tends to be taught in schools as a set of skills because of the failure to recognize that choosing to think critically depends on the prior development of stable sentiments or moral habits that nourish a rational self. Primary among these stable sentiments are the…
The neurocognitive basis of feature integration
Keizer, André Willem
2010-01-01
One of the most striking features of the brain is that it is modular; it consists of often highly specialized areas. This modular organization requires efficient communication in order to integrate the information that is represented in distinct brain areas. In my thesis, I studied the neural basis
Consistent resolution of some relativistic quantum paradoxes
International Nuclear Information System (INIS)
Griffiths, Robert B.
2002-01-01
A relativistic version of the (consistent or decoherent) histories approach to quantum theory is developed on the basis of earlier work by Hartle, and used to discuss relativistic forms of the paradoxes of spherical wave packet collapse, Bohm's formulation of the Einstein-Podolsky-Rosen paradox, and Hardy's paradox. It is argued that wave function collapse is not needed for introducing probabilities into relativistic quantum mechanics, and in any case should never be thought of as a physical process. Alternative approaches to stochastic time dependence can be used to construct a physical picture of the measurement process that is less misleading than collapse models. In particular, one can employ a coarse-grained but fully quantum-mechanical description in which particles move along trajectories, with behavior under Lorentz transformations the same as in classical relativistic physics, and detectors are triggered by particles reaching them along such trajectories. States entangled between spacelike separate regions are also legitimate quantum descriptions, and can be consistently handled by the formalism presented here. The paradoxes in question arise because of using modes of reasoning which, while correct for classical physics, are inconsistent with the mathematical structure of quantum theory, and are resolved (or tamed) by using a proper quantum analysis. In particular, there is no need to invoke, nor any evidence for, mysterious long-range superluminal influences, and thus no incompatibility, at least from this source, between relativity theory and quantum mechanics
Decentralized Consistent Updates in SDN
Nguyen, Thanh Dang
2017-04-10
We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and blackholes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.
Automatic sets and Delone sets
International Nuclear Information System (INIS)
Barbe, A; Haeseler, F von
2004-01-01
Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples
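One of the examples mentioned, the two-dimensional Thue-Morse pattern, can be sketched directly from its substitution description; the sketch below generates it by a 2×2 block substitution and cross-checks against the standard digit-sum closed form:

```python
import numpy as np

# 2-D Thue-Morse pattern via the block substitution
# 0 -> [[0, 1], [1, 0]],   1 -> [[1, 0], [0, 1]]
def thue_morse_2d(levels):
    T = np.array([[0]])
    for _ in range(levels):
        T = np.block([[T, 1 - T], [1 - T, T]])
    return T

T = thue_morse_2d(3)            # 8x8 pattern, a fixed point of the substitution

# Closed form: parity of the binary digit sums of the two coordinates.
def parity(n):
    return bin(n).count("1") % 2

closed = np.array([[(parity(i) + parity(j)) % 2 for j in range(8)]
                   for i in range(8)])
assert (T == closed).all()

# The automatic set is the positions carrying a given symbol, e.g. the zeros.
D = set(zip(*np.nonzero(T == 0)))
```

The agreement between the substitution fixed point and the digit-sum formula is exactly the equivalence, stated in the abstract, between substitution systems and finite-automaton (digit-based) generation.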
The biological basis of radiotherapy
International Nuclear Information System (INIS)
Steel, G.G.; Adams, G.E.; Horwich, A.
1989-01-01
The focus of this book is the biological basis of radiotherapy. The papers presented include: Temporal stages of radiation action: free radical processes; The molecular basis of radiosensitivity; and Radiation damage to early-reacting normal tissue
Accelerated Best Basis Inventory Baselining Task
International Nuclear Information System (INIS)
SASAKI, L.M.
2001-01-01
The baselining effort was recently proposed to bring the Best-Basis Inventory (BBI) and Question No.8 of the Tank Interpretive Report (TIR) for all 177 tanks to the current standards and protocols and to prepare a TIR Question No.8 if one is not already available. This plan outlines the objectives and methodology of the accelerated BBI baselining task. BBI baselining meetings held during December 2000 resulted in a revised BBI methodology and an initial set of BBI creation rules to be used in the baselining effort. The objectives of the BBI baselining effort are to: (1) Provide inventories that are consistent with the revised BBI methodology and new BBI creation rules. (2) Split the total tank waste in each tank into six waste phases, as appropriate (Supernatant, saltcake solids, saltcake liquid, sludge solids, sludge liquid, and retained gas). In some tanks, the solids and liquid portions of the sludge and/or saltcake may be combined into a single sludge or saltcake phase. (3) Identify sampling events that are to be used for calculating the BBIs. (4) Update waste volumes for subsequent reconciliation with the Hanlon (2001) waste tank summary. (5) Implement new waste type templates. (6) Include any sample data that might have been unintentionally omitted in the previous BBI and remove any sample data that should not have been included. Sample data to be used in the BBI must be available on TWINS. (7) Ensure that an inventory value for each standard BBI analyte is provided for each waste component. Sample based inventories for supplemental BBI analytes will be included when available. (8) Provide new means and confidence interval reports if one is not already available and include uncertainties in reporting inventory values
Advanced Fuel Cycle Cost Basis
Energy Technology Data Exchange (ETDEWEB)
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2009-12-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
Advanced Fuel Cycle Cost Basis
Energy Technology Data Exchange (ETDEWEB)
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert
2007-04-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 26 cost modules—24 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, and high-level waste.
Advanced Fuel Cycle Cost Basis
Energy Technology Data Exchange (ETDEWEB)
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2008-03-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
The Tucson-Melbourne Three-Body Force in a Translationally-Invariant Harmonic Oscillator Basis
Marsden, David; Navratil, Petr; Barrett, Bruce
2000-09-01
A translationally-invariant three-body basis set has been employed in shell model calculations on ^3H and ^3He including the Tucson-Melbourne form of the real nuclear three-body force. The basis consists of harmonic oscillators in Jacobi coordinates, explicitly avoiding the centre-of-mass drift problem in the calculations. The derivation of the three-body matrix elements and the results of large-basis effective interaction shell model calculations will be presented. References: J. L. Friar, B. F. Gibson, G. L. Payne, and S. A. Coon, Few Body Systems 5, 13 (1988); P. Navratil, G. P. Kamuntavicius, and B. R. Barrett, Phys. Rev. C 61, 044001 (2000).
Consistency of color representation in smart phones.
Dain, Stephen J; Kwan, Benjamin; Wong, Leslie
2016-03-01
One of the barriers to the construction of consistent computer-based color vision tests has been the variety of monitors and computers. Consistency of color on a variety of screens has necessitated calibration of each setup individually. Color vision examination with a carefully controlled display has, as a consequence, been a laboratory rather than a clinical activity. Inevitably, smart phones have become a vehicle for color vision tests. They have the advantage that the processor and screen are associated, and there are fewer models of smart phones than permutations of computers and monitors. Colorimetric consistency of display within a model may be a given. It may extend across models from the same manufacturer but is unlikely to extend between manufacturers, especially where display technologies vary. In this study, we measured the same set of colors in a JPEG file displayed on 11 samples of each of four models of smart phone (iPhone 4s, iPhone 5, Samsung Galaxy S3, and Samsung Galaxy S4) using a Photo Research PR-730. The iPhones are white-LED-backlit LCDs and the Samsungs are OLEDs. The color gamut varies between models, and comparison with sRGB space shows 61%, 85%, 117%, and 110%, respectively. The iPhones differ markedly from the Samsungs and from one another. This indicates that model-specific color lookup tables will be needed. Within each model, the primaries were quite consistent (despite the age of phone varying within each sample). The worst case in each model was the blue primary; the 95th percentile limits in the v' coordinate were ±0.008 for the iPhone 4s and ±0.004 for the other three models. The u'v' variation in white points was ±0.004 for the iPhone 4s and ±0.002 for the others, although the spread of white points between models was u'v' ±0.007. The differences are essentially the same for primaries at low luminance. The variation of colors intermediate between the primaries (e.g., red-purple, orange) mirrors the variation in the primaries. The variation in
Toward a consistent model for glass dissolution
International Nuclear Information System (INIS)
Strachan, D.M.; McGrail, B.P.; Bourcier, W.L.
1994-01-01
Understanding the process of glass dissolution in aqueous media has advanced significantly over the last 10 years through the efforts of many scientists around the world. Mathematical models describing the glass dissolution process have also advanced from simple empirical functions to structured models based on fundamental principles of physics, chemistry, and thermodynamics. Although borosilicate glass has been selected as the waste form for disposal of high-level wastes in at least 5 countries, there is no international consensus on the fundamental methodology for modeling glass dissolution that could be used in assessing the long-term performance of waste glasses in a geologic repository setting. Each repository program is developing its own model and supporting experimental data. In this paper, we critically evaluate a selected set of these structured models and show that a consistent methodology for modeling glass dissolution processes is available. We also propose a strategy for a future coordinated effort to obtain the model input parameters that are needed for long-term performance assessments of glass in a geologic repository. (author) 4 figs., tabs., 75 refs
Self-consistent model of confinement
International Nuclear Information System (INIS)
Swift, A.R.
1988-01-01
A model of the large-spatial-distance, zero--three-momentum, limit of QCD is developed from the hypothesis that there is an infrared singularity. Single quarks and gluons do not propagate because they have infinite energy after renormalization. The Hamiltonian formulation of the path integral is used to quantize QCD with physical, nonpropagating fields. Perturbation theory in the infrared limit is simplified by the absence of self-energy insertions and by the suppression of large classes of diagrams due to vanishing propagators. Remaining terms in the perturbation series are resummed to produce a set of nonlinear, renormalizable integral equations which fix both the confining interaction and the physical propagators. Solutions demonstrate the self-consistency of the concepts of an infrared singularity and nonpropagating fields. The Wilson loop is calculated to provide a general proof of confinement. Bethe-Salpeter equations for quark-antiquark pairs and for two gluons have finite-energy solutions in the color-singlet channel. The choice of gauge is addressed in detail. Large classes of corrections to the model are discussed and shown to support self-consistency
Subgame consistent cooperation a comprehensive treatise
Yeung, David W K
2016-01-01
Strategic behavior in the human and social world has been increasingly recognized in theory and practice. It is well known that non-cooperative behavior can lead to suboptimal or even highly undesirable outcomes. Cooperation suggests the possibility of obtaining socially optimal solutions, and calls for cooperation are prevalent in real-life problems. Dynamic cooperation cannot be sustained if there is no guarantee that the optimality principle agreed upon at the beginning is maintained throughout the duration of cooperation. It is due to the lack of such guarantees that cooperative schemes fail to last until the end, or even fail to get started. The property of subgame consistency in cooperative dynamic games and the corresponding solution mechanism resolve this "classic" problem in game theory. This book is a comprehensive treatise on subgame-consistent dynamic cooperation, covering the up-to-date state-of-the-art analyses of this important topic. It sets out to provide the theory, solution tec...
Accelerating GW calculations with optimal polarizability basis
Energy Technology Data Exchange (ETDEWEB)
Umari, P.; Stenuit, G. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); Qian, X.; Marzari, N. [Department of Materials Science and Engineering, MIT, Cambridge, MA (United States); Giacomazzi, L.; Baroni, S. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); SISSA - Scuola Internazionale Superiore di Studi Avanzati, Trieste (Italy)
2011-03-15
We present a method for accelerating GW quasi-particle (QP) calculations. This is achieved through the introduction of optimal basis sets for representing polarizability matrices. First the real-space products of Wannier-like orbitals are constructed, and then optimal basis sets are obtained through singular value decomposition. Our method is validated by calculating the vertical ionization energies of the benzene molecule and the band structure of crystalline silicon. Its potentialities are illustrated by calculating the QP spectrum of a model structure of vitreous silica. Finally, we apply our method to studying the electronic structure properties of a model of quasi-stoichiometric amorphous silicon nitride and of its point defects. (Copyright 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
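The construction sketched in this abstract (orbital products followed by a singular value decomposition) can be illustrated with a small numpy toy model. The grid size, the random "orbitals", and the truncation tolerance below are invented stand-ins for the sketch, not the paper's actual setup.

```python
import numpy as np

# Toy model of an optimal product basis: form all real-space products of
# localized orbitals, then keep only the dominant left singular vectors.
rng = np.random.default_rng(0)
n_grid, n_orb = 200, 10
orbitals = rng.standard_normal((n_grid, n_orb))  # synthetic "Wannier-like" orbitals

# All pairwise products phi_i(r) * phi_j(r), one column per (i, j) pair
products = np.column_stack([orbitals[:, i] * orbitals[:, j]
                            for i in range(n_orb) for j in range(i, n_orb)])

# SVD of the product matrix; discard directions with negligible singular values
u, s, _ = np.linalg.svd(products, full_matrices=False)
keep = s > 1e-6 * s[0]
optimal_basis = u[:, keep]  # orthonormal columns spanning the retained space

print(products.shape, "->", optimal_basis.shape)
```

In the real method the truncation threshold controls the trade-off between the size of the polarizability basis and the accuracy of the QP energies.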
Levy, Azriel
2002-01-01
An advanced-level treatment of the basics of set theory, this text offers students a firm foundation, stopping just short of the areas employing model-theoretic methods. Geared toward upper-level undergraduate and graduate students, it consists of two parts: the first covers pure set theory, including the basic notions, order and well-foundedness, cardinal numbers, the ordinals, and the axiom of choice and some of its consequences; the second deals with applications and advanced topics such as point set topology, real spaces, Boolean algebras, and infinite combinatorics and large cardinals. An
View from Europe: stability, consistency or pragmatism
International Nuclear Information System (INIS)
Dunster, H.J.
1988-01-01
The last few years of this decade look like a period of reappraisal of radiation protection standards. The revised risk estimates from Japan will be available, and the United Nations Scientific Committee on the Effects of Atomic Radiation will be publishing new reports on biological topics. The International Commission on Radiological Protection (ICRP) has started a review of its basic recommendations, and the new specification for dose equivalent in radiation fields of the International Commission on Radiation Units and Measurements (ICRU) will be coming into use. All this is occurring at a time when some countries are still trying to catch up with committed dose equivalent and the recently recommended change in the value of the quality factor for neutrons. In Europe, the problems of adapting to new ICRP recommendations are considerable. The European Community, including 12 states and nine languages, takes ICRP recommendations as a basis and develops council directives that are binding on member states, which have then to arrange for their own regulatory changes. Any substantial adjustments could take 5 y or more to work through the system. Clearly, the regulatory preference is for stability. Equally clearly, trade unions and public interest groups favor a rapid response to scientific developments (provided that the change is downward). Organizations such as the ICRP have to balance their desire for internal consistency and intellectual purity against the practical problems of their clients in adjusting to change. This paper indicates some of the changes that might be necessary over the next few years and how, given a pragmatic approach, they might be accommodated in Europe without too much regulatory confusion
REFORMASI SISTEM AKUNTANSI CASH BASIS MENUJU SISTEM AKUNTANSI ACCRUAL BASIS
Directory of Open Access Journals (Sweden)
Yuri Rahayu
2016-03-01
Full Text Available Abstract – The accounting reform movement was born with the aim of restructuring toward improvement. The movement is marked by the enactment of the Law of 2003 and Law No. 1 of 2004, which became the basis for Government Regulation No. 24 of 2005 on Government Accounting Standards (SAP). In general, accounting records are kept on one of two systems: the cash basis and the accrual basis. In practice, students are still confused by the differences between the two methods, which results in a poor understanding of the recording treatment under each system. The purpose of this research is to provide a reference particularly relevant to students who are learning basic accounting, so that it can give them information and a more meaningful understanding of the cash basis and accrual basis accounting methods. The research was conducted through a normative approach, by reviewing study/library references that combine trusted sources, both books and the internet, processed on a foundation of the author's knowledge and experience. The conclusion that can be drawn is that, fundamentally, to understand the difference between the cash basis and accrual basis systems, students require an understanding of the treatment under both methods; acquiring that ability requires reading exercises and reference sources. Keywords: reform, cash basis, accrual basis.
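The difference between the two recording systems can be made concrete with a minimal sketch (the scenario and function below are invented for illustration): the same transaction is recognized in different periods depending on the basis chosen.

```python
from datetime import date

# Minimal illustration of cash-basis vs. accrual-basis recognition:
# an invoice issued in March but paid in April is recognized in April
# under the cash basis and in March under the accrual basis.
def recognition_period(invoice_date, payment_date, basis):
    """Return the (year, month) in which revenue is recognized."""
    if basis == "cash":
        d = payment_date   # cash basis: record when cash actually moves
    elif basis == "accrual":
        d = invoice_date   # accrual basis: record when the revenue is earned
    else:
        raise ValueError("basis must be 'cash' or 'accrual'")
    return (d.year, d.month)

inv, pay = date(2016, 3, 20), date(2016, 4, 5)
print(recognition_period(inv, pay, "cash"))     # → (2016, 4)
print(recognition_period(inv, pay, "accrual"))  # → (2016, 3)
```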
Directory of Open Access Journals (Sweden)
Bruno Barras
2010-01-01
Full Text Available This work is about formalizing models of various type theories of the Calculus of Constructions family. Here we focus on set theoretical models. The long-term goal is to build a formal set theoretical model of the Calculus of Inductive Constructions, so we can be sure that Coq is consistent with the language used by most mathematicians. One aspect of this work is to axiomatize several set theories: ZF, possibly with inaccessible cardinals, and HF, the theory of hereditarily finite sets. On top of these theories we have developed a piece of the usual set theoretical construction of functions, ordinals, and fixpoint theory. We then proved sound several models of the Calculus of Constructions, its extension with an infinite hierarchy of universes, and its extension with the inductive type of natural numbers, where recursion follows the type-based termination approach. The other aspect is to try to discharge (most of) these assumptions. The goal here is rather to compare the theoretical strengths of all these formalisms. As already noticed by Werner, the replacement axiom of ZF in its general form seems to require a type-theoretical axiom of choice (TTAC).
A consistent thermodynamic database for cement minerals
International Nuclear Information System (INIS)
Blanc, P.; Claret, F.; Burnol, A.; Marty, N.; Gaboreau, S.; Tournassat, C.; Gaucher, E.C.; Giffault, E.; Bourbon, X.
2010-01-01
Document available in extended abstract form only. In the context of waste confinement and, more specifically, waste from the nuclear industry, concrete is used both as a confinement and as a building material. The exposure to high temperatures makes its geochemical behaviour difficult to predict over long periods of time. The present work aims to elucidate the temperature dependency of the thermodynamic functions related to minerals from the concrete or associated with some of its degradation products. Addressing these functions precisely is a key issue for correctly investigating the cement/clay interaction from a geochemical point of view. A large set of experimental data has been collected for the chemical systems CaO-SiO2-H2O, SO3-Al2O3-CaO-CO2-Cl-H2O, and SiO2-Al2O3-CaO-H2O, including iron- and magnesium-bearing phases. Data include calorimetric measurements, when available, and results from equilibration experiments. The stability of C-S-H phases was considered as a specific issue, as those phases may appear both as amorphous and as crystalline minerals. In addition, the composition of the amorphous minerals is still under debate. The phase diagram of the crystalline phases was refined, providing the thermodynamic functions of most of the main minerals. We were then able to build a polyhedral model from the refined properties. The composition and the equilibrium constants of amorphous C-S-H at room temperature were derived from a large set of equilibration data. Finally, the thermodynamic functions were completed by using the polyhedral model, in some cases both for amorphous and for crystalline phases. A verification test, based on reaction enthalpies derived from experimental data, indicates that predicted values for amorphous C-S-H are in close agreement with experiment. For phases other than C-S-H, we proceeded for each mineral in the following way: - the equilibrium constant at 25 deg. C is selected from a single experimental
Thermodynamically consistent model calibration in chemical kinetics
Directory of Open Access Journals (Sweden)
Goutsias John
2011-05-01
Full Text Available Abstract Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new
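The idea of forcing rate constants to satisfy thermodynamic constraints can be illustrated with a toy projection. This is not the authors' TCMC method (which solves a general constrained optimization problem); it only shows that, for a closed reaction cycle, consistency is a linear condition on the logarithms of the rate constants, so the nearest consistent set is a least-squares projection. All numbers are invented.

```python
import numpy as np

# For a closed 3-reaction cycle, thermodynamic consistency (the Wegscheider
# condition) requires k1f*k2f*k3f == k1r*k2r*k3r, i.e. C @ log(k) == 0 with
# C = [1, 1, 1, -1, -1, -1]. Given estimated constants that violate this,
# project log(k) onto the constraint's null space to get the closest
# (in least-squares sense) thermodynamically feasible set.
C = np.array([[1.0, 1.0, 1.0, -1.0, -1.0, -1.0]])
k_est = np.array([2.0, 0.5, 3.0, 1.0, 1.5, 1.2])  # inconsistent estimates

x = np.log(k_est)
# remove the component of x lying in the row space of C
x_feasible = x - C.T @ np.linalg.solve(C @ C.T, C @ x)
k_feasible = np.exp(x_feasible)

fwd, rev = k_feasible[:3].prod(), k_feasible[3:].prod()
print(np.isclose(fwd, rev))  # the projected constants satisfy the cycle condition
```

For realistic models with many cycles, C simply gains one row per independent cycle and the same projection applies.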
International Nuclear Information System (INIS)
Xu, Peng; Gordon, Mark S.
2013-01-01
The charge transfer (CT) interaction, the most time-consuming term in the general effective fragment potential method, is made much more computationally efficient. This is accomplished by the projection of the quasiatomic minimal-basis-set orbitals (QUAMBOs) as the atomic basis onto the self-consistent field virtual molecular orbital (MO) space to select a subspace of the full virtual space called the valence virtual space. The diagonalization of the Fock matrix in terms of QUAMBOs recovers the canonical occupied orbitals and, more importantly, gives rise to the valence virtual orbitals (VVOs). The CT energies obtained using VVOs are generally as accurate as those obtained with the full virtual space canonical MOs because the QUAMBOs span the valence part of the virtual space, which can generally be regarded as “chemically important.” The number of QUAMBOs is the same as the number of minimal-basis MOs of a molecule. Therefore, the number of VVOs is significantly smaller than the number of canonical virtual MOs, especially for large atomic basis sets. This leads to a dramatic decrease in the computational cost
Doubly stochastic radial basis function methods
Yang, Fenglian; Yan, Liang; Ling, Leevan
2018-06-01
We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution was determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
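The LOOCV ingredient mentioned above can be sketched for a single deterministic shape parameter (a simplification: the DSRBF method draws the shape parameter stochastically rather than fixing one value). The Gaussian kernel, the node set, and Rippa's closed-form leave-one-out error used here are standard RBF ingredients, not code from the paper.

```python
import numpy as np

def loocv_error(eps, x, y):
    """Rippa's closed-form leave-one-out error for shape parameter eps."""
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)  # Gaussian RBF matrix
    Ainv = np.linalg.inv(A)
    c = Ainv @ y                                         # interpolation coefficients
    return np.linalg.norm(c / np.diag(Ainv))             # e_i = c_i / (A^-1)_ii

# Recover sin(2*pi*x) from 15 samples; pick the best of a few candidate shapes
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x)
candidates = [1.0, 2.0, 4.0, 8.0]
best = min(candidates, key=lambda e: loocv_error(e, x, y))
print("selected shape parameter:", best)
```

A stochastic variant would sample eps from a distribution weighted by these LOOCV errors instead of committing to the single minimizer.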
On the consistent histories approach to quantum mechanics
International Nuclear Information System (INIS)
Dowker, F.; Kent, A.
1996-01-01
We review the consistent histories formulations of quantum mechanics developed by Griffiths, Omnes, Gell-Mann, and Hartle, and we describe the classifications of consistent sets. We illustrate some general features of consistent sets by a few lemmas and examples. We also consider various interpretations of the formalism, and we examine the new problems which arise in reconstructing the past and predicting the future. It is shown that Omnes' characterization of true statements (statements that can be deduced unconditionally in his interpretation) is incorrect. We examine critically Gell-Mann and Hartle's interpretation of the formalism, and in particular their discussions of communication, prediction, and retrodiction, and we conclude that their explanation of the apparent persistence of quasiclassicality relies on assumptions about an as-yet-unknown theory of experience. Our overall conclusion is that the consistent histories approach illustrates the need to supplement quantum mechanics with some selection principle in order to produce a fundamental theory capable of unconditional predictions
Cosmological consistency tests of gravity theory and cosmic acceleration
Ishak-Boushaki, Mustapha B.
2017-01-01
Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among the important objectives targeted by incoming and future astronomical surveys and experiments. I present our recent results on consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use statistical measures, the rate of cosmic expansion, the growth rate of large scale structure, and the physical consistency of these probes with one another.
Parquet equations for numerical self-consistent-field theory
International Nuclear Information System (INIS)
Bickers, N.E.
1991-01-01
In recent years increases in computational power have provided new motivation for the study of self-consistent-field theories for interacting electrons. In this set of notes, the so-called parquet equations for electron systems are derived pedagogically. The principal advantages of the parquet approach are outlined, and its relationship to simpler self-consistent-field methods, including the Baym-Kadanoff technique, is discussed in detail. (author). 14 refs, 9 figs
Self-consistent meson mass spectrum
International Nuclear Information System (INIS)
Balazs, L.A.P.
1982-01-01
A dual-topological-unitarization (or dual-fragmentation) approach to the calculation of hadron masses is presented, in which the effect of planar ''sea''-quark loops is taken into account from the beginning. Using techniques based on analyticity and generalized ladder-graph dynamics, we first derive the approximate ''generic'' Regge-trajectory formula α(t) = max(S1+S2, S3+S4) − 1/2 + 2α̂′[s_a + (1/2)(t − Σ m_i²)] for any given hadronic process 1+2 → 3+4, where S_i and m_i are the spins and masses of particles i = 1, 2, 3, 4, and √s_a is the effective mass of the lowest nonvanishing contribution (a) exchanged in the crossed channel. By requiring a minimization of secondary (background, etc.) contributions to a, and demanding simultaneous consistency for entire sets of such processes, we are then able to calculate the masses of all the lowest pseudoscalar and vector qq̄ states with q = u, d, s and the Regge trajectories on which they lie. By making certain additional assumptions we are also able to do this with q = u, d, c and q = u, d, b. Our only arbitrary parameters are m_ρ, m_K*, m_ψ, and m_Υ, one of which merely serves to fix the energy scale. In contrast to many other approaches, a small m_π²/m_ρ² ratio arises quite naturally in the present scheme
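Restated in standard notation, the generic trajectory formula quoted in this abstract reads:

```latex
\alpha(t) = \max\!\left(S_1 + S_2,\; S_3 + S_4\right) - \tfrac{1}{2}
          + 2\hat{\alpha}'\!\left[s_a + \tfrac{1}{2}\Big(t - \sum_i m_i^2\Big)\right]
```

Here \(S_i\) and \(m_i\) are the spins and masses of the external particles, \(\hat{\alpha}'\) is the trajectory slope, and \(\sqrt{s_a}\) is the effective mass of the lowest exchange in the crossed channel, as defined in the abstract.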
Consistency Anchor Formalization and Correctness Proofs
Miguel, Correia; Bessani, Alysson
2014-01-01
This report contains the formal proofs for the techniques for increasing the consistency of cloud storage as presented in "Bessani et al. SCFS: A Cloud-backed File System. Proc. of the 2014 USENIX Annual Technical Conference. June 2014." The consistency anchor technique allows one to increase the consistency provided by eventually consistent cloud storage services like Amazon S3. This technique has been used in the SCFS (Shared Cloud File System) cloud-backed file system for solving rea...
[Basis for designing a medical course curriculum].
Villarreal, R; Bojalil, L F; Mercer, H
1977-01-01
This article sets forth the reasons for the structure given to the Division of Biology and Health on the Xochimilco campus of Metropolitan Autonomous University in Mexico: to adjust the university to the process of social change going forward in the country and gear the university to the problems of the present by avoiding the rigidity of its structure. The basic aspects of curriculum design are cited against a background of an historical analysis of the socioeconomic structure of education and health. The principles underlying the curriculum and the course work are then described on the basis of that analysis.
Minimization of Basis Risk in Parametric Earthquake Cat Bonds
Franco, G.
2009-12-01
A catastrophe (cat) bond is an instrument used by insurance and reinsurance companies, by governments, or by groups of nations to cede catastrophic risk to the financial markets, which are capable of supplying cover for highly destructive events surpassing the typical capacity of traditional reinsurance contracts. Parametric cat bonds, a specific type of cat bond, use trigger mechanisms or indices that depend on physical event parameters published by respected third parties in order to determine whether part or all of the bond principal is to be paid for a certain event. First-generation cat bonds, or cat-in-a-box bonds, use a trigger mechanism that consists of a set of geographic zones in which certain conditions need to be met by an earthquake's magnitude and depth in order to trigger payment of the bond principal. Second-generation cat bonds use an index formulation that typically consists of a sum of products of a set of weights by a polynomial function of the ground motion variables reported by a geographically distributed seismic network. These instruments are especially appealing to developing countries with incipient insurance industries wishing to cede catastrophic losses to the financial markets, because the payment trigger mechanism is transparent and does not involve the parties ceding or accepting the risk, significantly reducing moral hazard. In order to be successful in the market, however, parametric cat bonds have typically been required to specify relatively simple trigger conditions. The consequence of such simplifications is an increase in basis risk. This risk represents the possibility that the trigger mechanism fails to accurately capture the actual losses of a catastrophic event, namely that it does not trigger for a highly destructive event or, vice versa, that payment of the bond principal is caused by an event that produced insignificant losses. The first case disfavors the sponsor who was seeking cover for its losses while the
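A second-generation index trigger of the kind described (a weighted sum of polynomial functions of reported ground motions) can be sketched as follows. The station weights, the quadratic form, and the attachment/exhaustion levels are invented for illustration; real transactions define their own index formula and payout schedule.

```python
# Illustrative second-generation parametric trigger: a weighted sum of a
# polynomial (here quadratic) of station ground motions, mapped to a
# linear payout between attachment and exhaustion index levels.
def index_value(weights, ground_motions):
    """Weighted sum of squared station ground motions."""
    return sum(w * g ** 2 for w, g in zip(weights, ground_motions))

def principal_paid(weights, ground_motions, attachment, exhaustion, principal=1.0):
    """Fraction of principal paid, linear between attachment and exhaustion."""
    idx = index_value(weights, ground_motions)
    frac = (idx - attachment) / (exhaustion - attachment)
    return principal * min(max(frac, 0.0), 1.0)

# Three hypothetical stations reporting peak ground acceleration (in g)
w, pga = [0.5, 0.3, 0.2], [0.40, 0.25, 0.10]
print(principal_paid(w, pga, attachment=0.05, exhaustion=0.15))
```

Basis risk enters precisely here: the sketch pays on the index value, not on the sponsor's actual losses, so the two can diverge.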
Quasi-Particle Self-Consistent GW for Molecules.
Kaplan, F; Harding, M E; Seiler, C; Weigend, F; Evers, F; van Setten, M J
2016-06-14
We present the formalism and implementation of quasi-particle self-consistent GW (qsGW) and eigenvalue only quasi-particle self-consistent GW (evGW) adapted to standard quantum chemistry packages. Our implementation is benchmarked against high-level quantum chemistry computations (coupled-cluster theory) and experimental results using a representative set of molecules. Furthermore, we compare the qsGW approach for five molecules relevant for organic photovoltaics to self-consistent GW results (scGW) and analyze the effects of the self-consistency on the ground state density by comparing calculated dipole moments to their experimental values. We show that qsGW makes a significant improvement over conventional G0W0 and that partially self-consistent flavors (in particular evGW) can be excellent alternatives.
Acceptable risk as a basis for design
International Nuclear Information System (INIS)
Vrijling, J.K.; Hengel, W. van; Houben, R.J.
1998-01-01
Historically, human civilisations have striven to protect themselves against natural and man-made hazards. The degree of protection is a matter of political choice. Today this choice should be expressed in terms of risk and acceptable probability of failure to form the basis of the probabilistic design of the protection. It is additionally argued that the choice for a certain technology and the connected risk is made in a cost-benefit framework. The benefits and the costs including risk are weighed in the decision process. A set of rules for the evaluation of risk is proposed and tested in cases. The set of rules leads to technical advice in a question that has to be decided politically
Consistency Check for the Bin Packing Constraint Revisited
Dupuis, Julien; Schaus, Pierre; Deville, Yves
The bin packing problem (BP) consists in finding the minimum number of bins necessary to pack a set of items so that the total size of the items in each bin does not exceed the bin capacity C. The bin capacity is common for all the bins.
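The problem defined above admits simple heuristics that make its structure concrete; here is a first-fit decreasing sketch (a standard approximation, not the constraint-consistency check the paper studies):

```python
# First-fit decreasing heuristic for bin packing: sort items by size
# (largest first) and place each into the first bin with enough room,
# opening a new bin only when none fits. All bins share capacity C.
def first_fit_decreasing(items, capacity):
    """Return the bins (lists of item sizes) produced by first-fit decreasing."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits: open a new one
    return bins

bins = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(len(bins), bins)  # → 2 [[8, 2], [4, 4, 1, 1]]
```

A consistency check for the bin packing constraint must prove that no assignment can beat a given bin count, which is much harder than producing one feasible packing as this heuristic does.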
Consistent measurements comparing the drift features of noble gas mixtures
Becker, U; Fortunato, E M; Kirchner, J; Rosera, K; Uchida, Y
1999-01-01
We present a consistent set of measurements of electron drift velocities and Lorentz deflection angles for all noble gases with methane and ethane as quenchers in magnetic fields up to 0.8 T. Empirical descriptions are also presented. Details on the World Wide Web allow for guided design and optimization of future detectors.
Genetic basis of chronic pancreatitis
Jansen, JBMJ; Morsche, RT; van Goor, Harry; Drenth, JPH
2002-01-01
Background: Pancreatitis has a proven genetic basis in a minority of patients. Methods: Review of the literature on genetics of pancreatitis. Results: Ever since the discovery that in most patients with hereditary pancreatitis a mutation in the gene encoding for cationic trypsinogen (R122H) was
Ellipsoidal basis for isotropic oscillator
International Nuclear Information System (INIS)
Kallies, W.; Lukac, I.; Pogosyan, G.S.; Sisakyan, A.N.
1994-01-01
The solutions of the Schroedinger equation are derived for the isotropic oscillator potential in the ellipsoidal coordinate system. The explicit expression is obtained for the ellipsoidal integrals of motion through the components of the orbital moment and Demkov's tensor. The explicit form of the ellipsoidal basis is given for the lowest quantum numbers. 10 refs.; 1 tab. (author)
Molecular basis of familial hypercholesterolemia
Bruikman, Caroline S.; Hovingh, Gerard K.; Kastelein, John J. P.
2017-01-01
Purpose of review To provide an overview about the molecular basis of familial hypercholesterolemia. Recent findings Familial hypercholesterolemia is a common hereditary cause of premature coronary heart disease. It has been estimated that 1 in every 250 individuals has heterozygous familial
Basis reduction for layered lattices
E.L. Torreão Dassen (Erwin)
2011-01-01
We develop the theory of layered Euclidean spaces and layered lattices. With this new theory certain problems that are usually solved by using classical lattices with a "weighting" gain a new, more natural form. Using the layered lattice basis reduction algorithms introduced here these
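The classical two-dimensional case of lattice basis reduction, which layered variants generalize, can be sketched as Lagrange-Gauss reduction; the code below is a minimal illustration with invented names, not the thesis's algorithm.

```python
# Lagrange-Gauss reduction of a 2-D lattice basis: repeatedly subtract
# the nearest integer multiple of the shorter vector from the longer
# one until the basis is reduced.
def gauss_reduce(u, v):
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # subtract the integer multiple of u closest to v
        m = round(dot(u, v) / dot(u, u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

b1, b2 = gauss_reduce((1, 0), (7, 1))
```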
Mixtures of truncated basis functions
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2012-01-01
In this paper we propose a framework, called mixtures of truncated basis functions (MoTBFs), for representing general hybrid Bayesian networks. The proposed framework generalizes both the mixture of truncated exponentials (MTEs) framework and the mixture of polynomials (MoPs) framework. Similar t...
General variational many-body theory with complete self-consistency for trapped bosonic systems
International Nuclear Information System (INIS)
Streltsov, Alexej I.; Alon, Ofir E.; Cederbaum, Lorenz S.
2006-01-01
In this work we develop a complete variational many-body theory for a system of N trapped bosons interacting via a general two-body potential. The many-body solution of this system is expanded over orthogonal many-body basis functions (configurations). In this theory both the many-body basis functions and the respective expansion coefficients are treated as variational parameters. The optimal variational parameters are obtained self-consistently by solving a coupled system of noneigenvalue--generally integro-differential--equations to get the one-particle functions and by diagonalizing the secular matrix problem to find the expansion coefficients. We call this theory multiconfigurational Hartree theory for bosons or MCHB(M), where M specifies explicitly the number of one-particle functions used to construct the configurations. General rules for evaluating the matrix elements of one- and two-particle operators are derived and applied to construct the secular Hamiltonian matrix. We discuss properties of the derived equations. We show that in the limiting cases of one configuration the theory boils down to the well-known Gross-Pitaevskii and the recently developed multi-orbital mean fields. The invariance of the complete solution with respect to unitary transformations of the one-particle functions is utilized to find the solution with the minimal number of contributing configurations. In the second part of our work we implement and apply the developed theory. It is demonstrated that for any practical computation where the configurational space is restricted, the description of trapped bosonic systems strongly depends on the choice of the many-body basis set used, i.e., self-consistency is of great relevance. As illustrative examples we consider bosonic systems trapped in one- and two-dimensional symmetric and asymmetric double well potentials. We demonstrate that self-consistency has great impact on the predicted physical properties of the ground and excited states
A new approach to hull consistency
Directory of Open Access Journals (Sweden)
Kolev Lubomir
2016-06-01
Hull consistency is a known technique to improve the efficiency of iterative interval methods for solving nonlinear systems describing steady states in various circuits. Presently, hull consistency is checked in a scalar manner, i.e. successively for each equation of the nonlinear system with respect to a single variable. In the present poster, a new, more general approach to implementing hull consistency is suggested, which consists in treating several equations simultaneously with respect to the same number of variables.
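A single scalar hull-consistency step of the kind described, narrowing one variable against one equation, can be sketched as interval narrowing; the constraint x + y = c and all names below are invented for illustration.

```python
# Scalar hull-consistency sketch: narrow the interval of x using the
# constraint x + y = c, where y is also an interval [y_lo, y_hi].
def narrow_x(x, y, c):
    """Intersect interval x with c - y for the constraint x + y = c."""
    lo = max(x[0], c - y[1])
    hi = min(x[1], c - y[0])
    if lo > hi:
        raise ValueError("empty interval: constraint inconsistent")
    return (lo, hi)

x = narrow_x((0.0, 10.0), (3.0, 4.0), 10.0)  # x is narrowed to [6, 7]
```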
Probabilistic Open Set Recognition
Jain, Lalit Prithviraj
support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between the positive training sample and the closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
Roadmap to a mutually consistent set of offshore vertical reference frames
Slobbe, D.C.
2013-01-01
This thesis presents a combined approach for the realization of the (quasi-)geoid as a height reference surface and the vertical reference surface at sea (chart datum). This approach, specifically designed for shallow seas and coastal waters, provides the relation between the two vertical reference
Using the Milieu: Treatment-Environment Consistency.
Szekais, Barbara
1985-01-01
Describes trial use of milieu and activity-based therapy in two adult day centers to increase client involvement in physical and social environments of treatment settings. Reports results from empirical observations and recommends further investigation of this treatment modality in settings for the elderly. (Author/NRB)
Student Effort, Consistency, and Online Performance
Patron, Hilde; Lopez, Salvador
2011-01-01
This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas…
Translationally invariant self-consistent field theories
International Nuclear Information System (INIS)
Shakin, C.M.; Weiss, M.S.
1977-01-01
We present a self-consistent field theory which is translationally invariant. The equations obtained go over to the usual Hartree-Fock equations in the limit of large particle number. In addition to deriving the dynamic equations for the self-consistent amplitudes we discuss the calculation of form factors and various other observables
Sticky continuous processes have consistent price systems
DEFF Research Database (Denmark)
Bender, Christian; Pakkanen, Mikko; Sayit, Hasanjan
Under proportional transaction costs, a price process is said to have a consistent price system, if there is a semimartingale with an equivalent martingale measure that evolves within the bid-ask spread. We show that a continuous, multi-asset price process has a consistent price system, under...
Consistent-handed individuals are more authoritarian.
Lyle, Keith B; Grillo, Michael C
2014-01-01
Individuals differ in the consistency with which they use one hand over the other to perform everyday activities. Some individuals are very consistent, habitually using a single hand to perform most tasks. Others are relatively inconsistent, and hence make greater use of both hands. More- versus less-consistent individuals have been shown to differ in numerous aspects of personality and cognition. In several respects consistent-handed individuals resemble authoritarian individuals. For example, both consistent-handedness and authoritarianism have been linked to cognitive inflexibility. Therefore we hypothesised that consistent-handedness is an external marker for authoritarianism. Confirming our hypothesis, we found that consistent-handers scored higher than inconsistent-handers on a measure of submission to authority, were more likely to identify with a conservative political party (Republican), and expressed less-positive attitudes towards out-groups. We propose that authoritarianism may be influenced by the degree of interaction between the left and right brain hemispheres, which has been found to differ between consistent- and inconsistent-handed individuals.
Testing the visual consistency of web sites
van der Geest, Thea; Loorbach, N.R.
2005-01-01
Consistency in the visual appearance of Web pages is often checked by experts, such as designers or reviewers. This article reports a card sort study conducted to determine whether users rather than experts could distinguish visual (in-)consistency in Web elements and pages. The users proved to
Consistent spectroscopy for an extended gauge model
International Nuclear Information System (INIS)
Oliveira Neto, G. de.
1990-11-01
A consistent spectroscopy was obtained with a Lagrangian constructed from vector fields with an extended U(1) group symmetry. By consistent spectroscopy is meant the determination of the quantum physical properties described by the model in a manner independent of the possible parametrizations adopted in their description. (L.C.J.A.)
TECHNICAL BASIS DOCUMENT FOR AT-POWER SIGNIFICANCE DETERMINATION PROCESS (SDP) NOTEBOOKS
International Nuclear Information System (INIS)
AZARM, M.A.; SMANTA, P.K.; MARTINEZ-GURIDI, G.; HIGGINS, J.
2004-01-01
To support the assessment of inspection findings as part of the risk-informed inspection in the United States Nuclear Regulatory Commission's (USNRC's) Reactor Oversight Process (ROP), risk inspection notebooks, also called significance determination process (SDP) notebooks, have been developed for each of the operating plants in the United States. These notebooks serve as a tool for assessing the risk significance of inspection findings along with providing an engineering understanding of that significance. Plant-specific notebooks are developed to capture plant-specific features, characteristics, and analyses that influence the risk profile of the plant. At the same time, the notebooks follow a consistent set of assumptions and guidelines to assure consistent treatment of inspection findings across the plants. To achieve these objectives, notebooks are designed to provide specific information that is unique both in the manner in which the information is provided and in the way the screening risk assessment is carried out using the information provided. The unique features of the SDP notebooks, the approaches used to present the information for assessment of inspection findings, and the assumptions used for consistent modeling across different plants with due credit to plant-specific features and analyses form the technical basis of the SDP notebooks. In this document, the unique features and the technical basis for the notebooks are presented. The types of information that are included and the reasoning/basis for including that information are discussed. The rules and basis for developing the worksheets that are used by the inspectors in the assessment of inspection findings are presented. The approach to modeling plants' responses to different initiating events and specific assumptions/considerations used for each of the reactor types are also discussed.
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
International Nuclear Information System (INIS)
Kutepov, A. L.
2017-01-01
We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
Assessment of WWER fuel condition in design basis accident
International Nuclear Information System (INIS)
Bibilashvili, Yu.; Sokolov, N.; Andreeva-Andrievskaya, L.; Vlasov, Yu.; Nechaeva, O.; Salatov, A.
1994-01-01
The fuel behaviour in design basis accidents is assessed by means of the verified code RAPTA-5. The code uses a set of high temperature physico-chemical properties of the fuel components as determined for commercially produced materials, fuel rod simulators and fuel rod bundles. The WWER fuel criteria available in Russia for design basis accidents do not generally differ from the similar criteria adopted for PWR's. 12 figs., 11 refs
Modeling and Testing Legacy Data Consistency Requirements
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard
2003-01-01
An increasing number of data sources are available on the Internet, many of which offer semantically overlapping data, but based on different schemas, or models. While it is often of interest to integrate such data sources, the lack of consistency among them makes this integration difficult. This paper addresses the need for new techniques that enable the modeling and consistency checking of legacy data sources. Specifically, the paper contributes to the development of a framework that enables consistency testing of data coming from different types of data sources. The vehicle is UML and its accompanying XMI. The paper presents techniques for modeling consistency requirements using OCL and other UML modeling elements: it studies how models that describe the required consistencies among instances of legacy models can be designed in standard UML tools that support XMI. The paper also considers...
Consistency of Trend Break Point Estimator with Underspecified Break Number
Directory of Open Access Journals (Sweden)
Jingjing Yang
2017-01-01
This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.
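For context, a correctly specified single trend break can be estimated by least-squares grid search over candidate break dates; the sketch below (invented data and names, not the paper's exact model) recovers the break on clean data, whereas the paper shows this approach fails when two breaks are fitted with a one-break model.

```python
# Illustrative one-break trend estimator: split the series at candidate
# date k, fit a least-squares line to each segment, and pick the k that
# minimizes the total sum of squared residuals.
def linfit_sse(t, y):
    # least-squares line fit; return the sum of squared residuals
    n = len(t)
    tm = sum(t) / n
    ym = sum(y) / n
    stt = sum((a - tm) ** 2 for a in t)
    b = sum((a - tm) * (c - ym) for a, c in zip(t, y)) / stt
    a0 = ym - b * tm
    return sum((c - (a0 + b * a)) ** 2 for a, c in zip(t, y))

def one_break_estimate(y):
    n = len(y)
    t = list(range(n))
    return min(range(2, n - 2),
               key=lambda k: linfit_sse(t[:k], y[:k]) + linfit_sse(t[k:], y[k:]))

# noiseless series with one trend break at t = 30
y = [float(t) if t < 30 else 30.0 + 3.0 * (t - 30) for t in range(60)]
k_hat = one_break_estimate(y)
```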
Mean-field theory and self-consistent dynamo modeling
International Nuclear Information System (INIS)
Yoshizawa, Akira; Yokoi, Nobumitsu
2001-12-01
Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)
Hanford Generic Interim Safety Basis
Energy Technology Data Exchange (ETDEWEB)
Lavender, J.C.
1994-09-09
The purpose of this document is to identify WHC programs and requirements that are an integral part of the authorization basis for nuclear facilities that are generic to all WHC-managed facilities. The purpose of these programs is to implement the DOE Orders, as WHC becomes contractually obligated to implement them. The Hanford Generic ISB focuses on the institutional controls and safety requirements identified in DOE Order 5480.23, Nuclear Safety Analysis Reports.
Intracellular recovery - basis of hyperfractionation
International Nuclear Information System (INIS)
Hagen, U.; Guttenberger, R.; Kummermehr, J.
1988-01-01
The radiobiological basis of hyperfractionated radiation therapy versus conventional fractionation will be presented with respect to therapeutic gain, i.e., improved normal tissue sparing for the same level of tumour cell inactivation. Data on the recovery potential of various tissues as well as the kinetics of repair will be given. The problem of incomplete repair with short irradiation intervals will be discussed. (orig.)
Genetic basis of atrial fibrillation
Directory of Open Access Journals (Sweden)
Oscar Campuzano
2016-12-01
Atrial fibrillation is the most common sustained arrhythmia and remains one of the main challenges in current clinical practice. The disease may be induced secondary to other diseases such as hypertension, valvular heart disease, and heart failure, conferring an increased risk of stroke and sudden death. Epidemiological studies have provided evidence that genetic factors play an important role, and up to 30% of clinically diagnosed patients may have a family history of atrial fibrillation. To date, several rare variants have been identified in a wide range of genes associated with ionic channels, calcium handling proteins, fibrosis, conduction and inflammation. Important advances in the clinical, genetic and molecular basis have been made over the last decade, improving diagnosis and treatment. However, the genetics of atrial fibrillation is complex and the pathophysiology is still being unraveled. A better understanding of the genetic basis will enable accurate risk stratification and personalized clinical treatment. In this review, we focus on the current genetic basis of atrial fibrillation.
Dong, S.
2018-05-01
We present a reduction-consistent and thermodynamically consistent formulation and an associated numerical algorithm for simulating the dynamics of an isothermal mixture consisting of N (N ⩾ 2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pair-wise surface tensions). By reduction consistency we refer to the property that if only a set of M (1 ⩽ M ⩽ N - 1) fluids are present in the system then the N-phase governing equations and boundary conditions will exactly reduce to those for the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed based on a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments have been presented for flow problems involving multiple fluid components and large density ratios and large viscosity ratios, and the simulation results are compared with the physical theories or the available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.
SIS - Species and Stock Administrative Data Set
National Oceanic and Atmospheric Administration, Department of Commerce — The Species and Stock Administrative data set within the Species Information System (SIS) defines entities within the database that serve as the basis for recording...
Web Enabled DROLS Verity TopicSets
National Research Council Canada - National Science Library
Tong, Richard
1999-01-01
The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...
Self-consistent electrodynamic scattering in the symmetric Bragg case
International Nuclear Information System (INIS)
Campos, H.S.
1988-01-01
We have analyzed the symmetric Bragg case, introducing a model of self-consistent scattering for two elliptically polarized beams. The crystal is taken as a set of mathematical planes, each defined by a surface density of dipoles. We have treated the mesofield and the epifield differently from Ewald's theory, and we assumed a plane of dipoles and the associated fields as a self-consistent scattering unit. The exact analytical treatment, when applied to any two neighbouring planes, results in a general and self-consistent Bragg equation in terms of the amplitude and phase variations. The generalized solution for the set of N planes was obtained after introducing an absorption factor in the incident radiation, in two ways: (i) analytically, through a rule of field similarity, which says that incidence occurs on both faces of all the crystal planes, and also through a matrix development with the Chebyshev polynomials; (ii) numerically, calculating iteratively the reflectivity, the reflection phase, the transmissivity, the transmission phase and the energy. The results are shown through reflection and transmission curves, which are characteristic of both the kinematical and dynamical theories. Energy conservation, which follows from Ewald's self-consistency principle, is used. In the absorption case, the results show that absorption is not the only cause of the asymmetric form of the reflection curves. The model contains the basic elements for a unified, microscopic, self-consistent, vectorial and exact formulation for interpreting X-ray diffraction in perfect crystals. (author)
Consistency in the World Wide Web
DEFF Research Database (Denmark)
Thomsen, Jakob Grauenkjær
Tim Berners-Lee envisioned that computers will behave as agents of humans on the World Wide Web, where they will retrieve, extract, and interact with information from the World Wide Web. A step towards this vision is to make computers capable of extracting this information in a reliable and consistent way. In this dissertation we study steps towards this vision by showing techniques for the specification, the verification and the evaluation of the consistency of information in the World Wide Web. We show how to detect certain classes of errors in a specification of information, and we show how ... the World Wide Web, in order to help perform consistent evaluations of web extraction techniques. These contributions are steps towards having computers reliably and consistently extract information from the World Wide Web, which in turn are steps towards achieving Tim Berners-Lee's vision.
Consistent histories and operational quantum theory
International Nuclear Information System (INIS)
Rudolph, O.
1996-01-01
In this work a generalization of the consistent histories approach to quantum mechanics is presented. We first critically review the consistent histories approach to nonrelativistic quantum mechanics in a mathematically rigorous way and give some general comments about it. We investigate to what extent the consistent histories scheme is compatible with the results of the operational formulation of quantum mechanics. According to the operational approach, nonrelativistic quantum mechanics is most generally formulated in terms of effects, states, and operations. We formulate a generalized consistent histories theory using the concepts and the terminology which have proven useful in the operational formulation of quantum mechanics. The logical rule of the logical interpretation of quantum mechanics is generalized to the present context. The algebraic structure of the generalized theory is studied in detail
The cluster bootstrap consistency in generalized estimating equations
Cheng, Guang
2013-03-01
The cluster bootstrap resamples clusters or subjects instead of individual observations in order to preserve the dependence within each cluster or subject. In this paper, we provide a theoretical justification of using the cluster bootstrap for the inferences of the generalized estimating equations (GEE) for clustered/longitudinal data. Under the general exchangeable bootstrap weights, we show that the cluster bootstrap yields a consistent approximation of the distribution of the regression estimate, and a consistent approximation of the confidence sets. We also show that a computationally more efficient one-step version of the cluster bootstrap provides asymptotically equivalent inference. © 2012.
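The resampling scheme described above can be sketched in a few lines: draw whole clusters with replacement and recompute the statistic each time. The data and statistic below are invented for illustration.

```python
# Minimal cluster bootstrap sketch: resample clusters (not individual
# observations) with replacement, preserving within-cluster dependence.
import random

def cluster_bootstrap(clusters, statistic, reps=1000, seed=0):
    rng = random.Random(seed)
    stats = []
    for _ in range(reps):
        sample = [rng.choice(clusters) for _ in clusters]
        pooled = [x for cl in sample for x in cl]
        stats.append(statistic(pooled))
    return stats

clusters = [[1.0, 1.2], [0.9, 1.1], [2.0, 2.2], [1.5]]
means = cluster_bootstrap(clusters, lambda xs: sum(xs) / len(xs), reps=200)
```

The empirical spread of `means` then approximates the sampling variability of the cluster-level mean.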
STP: A mathematically and physically consistent library of steam properties
International Nuclear Information System (INIS)
Aguilar, F.; Hutter, A.C.; Tuttle, P.G.
1982-01-01
A new FORTRAN library of subroutines has been developed from the fundamental equation of Keenan et al. to evaluate a large set of water properties including derivatives such as sound speed and isothermal compressibility. The STP library uses the true saturation envelope of the Keenan et al. fundamental equation. The evaluation of the true envelope by a continuation method is explained. This envelope, along with other design features, imparts an exceptionally high degree of thermodynamic and mathematical consistency to the STP library, even at the critical point. Accuracy and smoothness, library self-consistency, and designed user convenience make the STP library a reliable and versatile water property package
Diagnostic language consistency among multicultural English-speaking nurses.
Wieck, K L
1996-01-01
Cultural differences among nurses may influence the choice of terminology applicable to the use of a nursing diagnostic statement. This study explored whether defining characteristics are consistently applied by culturally varied nurses in an English-language setting. Two diagnoses, pain and high risk for altered skin integrity, were studied within six cultures: African, Asian, Filipino, East Indian, African-American, and Anglo-American nurses. Overall, there was consistency between the cultural groups. Analysis of variance for the pain scale demonstrated differences among cultures on two characteristics of pain, restlessness and grimace. The only difference on the high risk for altered skin integrity scale was found on the descriptor, supple skin.
Self-consistent areas law in QCD
International Nuclear Information System (INIS)
Makeenko, Yu.M.; Migdal, A.A.
1980-01-01
The problem of obtaining the self-consistent areas law in quantum chromodynamics (QCD) is considered from the point of view of quark confinement. The exact equation for the loop average in multicolor QCD is reduced to a bootstrap form. Its iterations yield a new, manifestly gauge invariant perturbation theory in loop space, reproducing asymptotic freedom. For large loops, the areas law appears to be a self-consistent solution.
Consistency of the MLE under mixture models
Chen, Jiahua
2016-01-01
The large-sample properties of likelihood-based statistical inference under mixture models have received much attention from statisticians. Although the consistency of the nonparametric MLE is regarded as a standard conclusion, many researchers ignore the precise conditions required on the mixture model. An incorrect claim of consistency can lead to false conclusions even if the mixture model under investigation seems well behaved. Under a finite normal mixture model, for instance, the consis...
NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture
Johnson, Wayne
2010-01-01
The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.
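The component-sum idea described above can be illustrated with a toy sketch; the component names and numbers below are invented for the example, not NDARC data or its API.

```python
# Toy illustration: aircraft attributes obtained as the sum of
# component attributes (here weight and drag area), as described
# for NDARC's component-based aircraft model.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    weight: float      # kg
    drag_area: float   # m^2

def aircraft_totals(components):
    return (sum(c.weight for c in components),
            sum(c.drag_area for c in components))

parts = [Component("fuselage", 900.0, 0.60),
         Component("rotor", 450.0, 0.25),
         Component("propulsion", 650.0, 0.15)]
total_weight, total_drag = aircraft_totals(parts)
```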
Conceptual basis of the master directed diagram
International Nuclear Information System (INIS)
Kelly, M.; Billington, D.
1998-01-01
This document forms part of a suite of documents describing the Nirex model development programme. The programme is designed to provide a clear audit trail from the identification of significant features, events and processes (FEPs) to the models and modelling processes employed within a detailed safety assessment. A five-stage approach has been adopted, which provides a systematic framework for addressing uncertainty and for the documentation of all modelling decisions and assumptions. The five stages are as follows: Stage 1: FEP analysis - compilation and structuring of a FEP database; Stage 2: Scenario and conceptual model development; Stage 3: Mathematical model development; Stage 4: Software development; Stage 5: Confidence building. This report describes the work involved in Stage 1 of the Nirex model development programme, FEP analysis. The aim of FEP analysis is to produce a set of FEPs and FEP interactions that form the basis for the scenario and conceptual model development in Stage 2. There are two requirements for the set of FEPs and FEP interactions: first, all aspects material to the performance of the disposal system should be covered, i.e. the set should be comprehensive; and second, a clear audit trail of decisions, consensus and analysis should be maintained
International spinal cord injury pulmonary function basic data set.
Biering-Sørensen, F; Krassioukov, A; Alexander, M S; Donovan, W; Karlsson, A-K; Mueller, G; Perkash, I; Sheel, A William; Wecht, J; Schilero, G J
2012-06-01
To develop the International Spinal Cord Injury (SCI) Pulmonary Function Basic Data Set within the framework of the International SCI Data Sets in order to facilitate consistent collection and reporting of basic bronchopulmonary findings in the SCI population. International. The SCI Pulmonary Function Data Set was developed by an international working group. The initial data set document was revised on the basis of suggestions from members of the Executive Committee of the International SCI Standards and Data Sets, the International Spinal Cord Society (ISCoS) Executive and Scientific Committees, the American Spinal Injury Association (ASIA) Board, other interested organizations and societies, and individual reviewers. In addition, the data set was posted for 2 months on the ISCoS and ASIA websites for comments. The final International SCI Pulmonary Function Data Set contains questions on the pulmonary conditions diagnosed before the spinal cord lesion, if available, to be obtained only once; smoking history; and pulmonary complications and conditions after the spinal cord lesion, which may be collected at any time. These data include information on pneumonia, asthma, chronic obstructive pulmonary disease and sleep apnea. Current utilization of ventilator assistance, including mechanical ventilation, diaphragmatic pacing, phrenic nerve stimulation and Bi-level positive airway pressure, can be reported, as well as results from pulmonary function testing: forced vital capacity, forced expiratory volume in one second and peak expiratory flow. The complete instructions for data collection and the data sheet itself are freely available on the website of ISCoS (http://www.iscos.org.uk).
Self-consistent asset pricing models
Malevergne, Y.; Sornette, D.
2007-08-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the
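The OLS regression step referred to above can be sketched in a few lines; the data here are synthetic and the routine is a generic alpha/beta regression, not the paper's self-consistency diagnostics:

```python
def ols_alpha_beta(asset, proxy):
    """Estimate alpha and beta of an asset's returns against a proxy
    portfolio's returns via ordinary least squares."""
    n = len(asset)
    mx = sum(proxy) / n
    my = sum(asset) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(proxy, asset)) / n
    var = sum((x - mx) ** 2 for x in proxy) / n
    beta = cov / var
    alpha = my - beta * mx
    return alpha, beta

# Synthetic, noise-free returns: asset = 0.01 + 0.5 * proxy, so OLS should
# recover alpha = 0.01 and beta = 0.5 exactly.
proxy = [0.02, -0.01, 0.03, 0.00, -0.02]
asset = [0.01 + 0.5 * x for x in proxy]
alpha, beta = ols_alpha_beta(asset, proxy)
```

Under the self-consistency condition discussed in the abstract, the alpha recovered against a proxy portfolio is a renormalized quantity rather than the "true" alpha of the factor model.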
Strategies for ensuring global consistency/comparability of water-quality data
Klein, J.M.
1999-01-01
In the past 20 years the water quality of the United States has improved remarkably: the waters are safer for drinking, swimming, and fishing. However, despite many accomplishments, it is still difficult to answer such basic questions as: 'How clean is the water?' and 'How is it changing over time?' These same questions exist on a global scale as well. In order to focus water-data issues in the United States, a national Intergovernmental Task Force on Monitoring Water Quality (ITFM) was initiated for public and private organizations, whereby key elements involved in data collection, analysis, storage, and management could be made consistent and comparable. The ITFM recommended and its members are implementing a nationwide strategy to improve water-quality monitoring, assessment, and reporting activities. The intent of this paper is to suggest that a voluntary effort be initiated to ensure the comparability and utility of hydrological data on a global basis. Consistent, long-term data sets that are comparable are necessary in order to formulate ideas regarding regional and global trends in water quantity and quality. The author recommends that a voluntary effort similar to the ITFM effort be utilized. The strategy proposed would involve voluntary representation from countries and international organizations (e.g. World Health Organization) involved in drinking-water assessments and/or ambient water-quality monitoring. Voluntary partnerships such as this will improve our ability to reduce health risks and achieve a better return on public and private investments in monitoring, environmental protection, and natural resource management, and result in a collaborative process that will save millions of dollars. In this work it is suggested that a voluntary effort be initiated to ensure the comparability and utility of hydrological data on a global basis. The strategy proposed would involve voluntary representation from countries and international organizations involved in
A level playing field-obtaining consistent cost estimates for advanced reactor designs
International Nuclear Information System (INIS)
Hudson, C.R.; Rohm, H.H.; Humphreys, J.R.
1987-01-01
A level playing field in sports is necessary to avoid a situation in which a team has an unfair advantage over its competition. Similarly, rules and guidelines for developing cost estimates can be established which, in effect, provide a level playing field whereby cost estimates for advanced power plant concepts can be presented on a consistent and equitable basis. As an example, consider the capital costs shown in Table 1. Both sets of costs are for the exact same power plant; Estimate 1 is expressed in constant dollars while Estimate 2 is presented in nominal or as-spent dollars. As shown, the costs in Table 1 are not directly comparable. Similar problems can be introduced as a result of differing assumptions in any number of parameters, including the scope of the cost estimate, inflation/escalation and interest rates, contingency costs, and site location. Of course, the motivation for having consistent cost estimates is to permit comparison among various concepts. As the U.S. Department of Energy sponsors research and development work on several advanced reactor concepts in which expected cost is a key evaluation parameter, the emphasis in this particular endeavor has been on promoting the comparability of advanced reactor cost estimates among themselves and to existing power plant types. To continue with the analogy, the idea is to lay out the playing field and the rules of the contest such that each team participates in the match on an equal basis, with the final score being solely determined by the inherent strengths and abilities of the teams. A description of the playing field and some of the more important rules will now be provided
TreeBASIS Feature Descriptor and Its Hardware Implementation
Directory of Open Access Journals (Sweden)
Spencer Fowers
2014-01-01
Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken, and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space-efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
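The tree descent described above can be illustrated with a toy sketch. The tree, bit patterns, and field names below are invented for illustration; the real BASIS dictionary images are learned from data:

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def tree_descriptor(patch_bits, tree):
    """Descend a binary vocabulary tree; at each node take the branch whose
    basis image is closest in Hamming distance to the quantized patch.
    The sequence of branch choices (0/1) is the descriptor."""
    path = []
    node = tree
    while isinstance(node, dict):  # None marks a leaf
        d0 = hamming(patch_bits, node["basis"][0])
        d1 = hamming(patch_bits, node["basis"][1])
        branch = 0 if d0 <= d1 else 1
        path.append(branch)
        node = node["children"][branch]
    return tuple(path)

# Toy two-level tree over 4-bit "images" (hypothetical data).
tree = {
    "basis": [(0, 0, 0, 0), (1, 1, 1, 1)],
    "children": [
        {"basis": [(0, 0, 0, 1), (0, 1, 1, 0)], "children": [None, None]},
        {"basis": [(1, 1, 1, 0), (1, 0, 1, 1)], "children": [None, None]},
    ],
}
desc = tree_descriptor((1, 1, 0, 1), tree)  # -> (1, 0)
```

Because only bit comparisons and integer counts are involved, the same descent maps naturally onto hardware without floating-point units, which is the property the abstract emphasizes.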
Computing single step operators of logic programming in radial basis function neural networks
Energy Technology Data Exchange (ETDEWEB)
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)
2014-07-10
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T{sub p}:I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
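The single step (immediate consequence) operator and its fixed-point iteration can be sketched for a small propositional program; the clauses and atom names are illustrative, not from the paper:

```python
def t_p(program, interpretation):
    """T_P(I): the set of clause heads whose body holds in I.
    Each clause is (head, positive_body_atoms, negative_body_atoms);
    an interpretation is the set of atoms mapped to true."""
    return {
        head
        for head, pos, neg in program
        if all(a in interpretation for a in pos)
        and all(a not in interpretation for a in neg)
    }

# Program:  p <- q, not r.    q <- .    r <- s.
program = [("p", {"q"}, {"r"}), ("q", set(), set()), ("r", {"s"}, set())]

# Iterate T_P from the empty interpretation until a fixed point is reached.
i = set()
while t_p(program, i) != i:
    i = t_p(program, i)
# i is now {"p", "q"}: q holds unconditionally, hence p (since r never fires).
```

The networks in the paper are trained to reproduce exactly this map from interpretations to interpretations, with the recurrent dynamics converging to the operator's fixed point.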
Computing single step operators of logic programming in radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong
2014-07-01
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p:I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
Computing single step operators of logic programming in radial basis function neural networks
International Nuclear Information System (INIS)
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong
2014-01-01
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p:I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
Thermodynamic basis for cluster kinetics
DEFF Research Database (Denmark)
Hu, Lina; Bian, Xiufang; Qin, Xubo
2006-01-01
Due to the inaccessibility of the supercooled region of marginal metallic glasses (MMGs) within the experimental time window, we study the cluster kinetics above the liquidus temperature, Tl, to acquire information on the fragility of the MMG systems. Thermodynamic basis for the stability...... of locally ordered structure in the MMG liquids is discussed in terms of the two-order-parameter model. It is found that the Arrhenius activation energy of clusters, h, is proportional to the chemical mixing enthalpy of alloys, Hchem. Fragility of the MMG forming liquids can be described by the ratio...
OSR encapsulation basis -- 100-KW
International Nuclear Information System (INIS)
Meichle, R.H.
1995-01-01
The purpose of this report is to provide the basis for a change in the Operations Safety Requirement (OSR) encapsulated fuel storage requirements in the 105 KW fuel storage basin which will permit the handling and storing of encapsulated fuel in canisters which no longer have a water-free space in the top of the canister. The scope of this report is limited to providing the change from the perspective of the safety envelope (bases) of the Safety Analysis Report (SAR) and Operations Safety Requirements (OSR). It does not change the encapsulation process itself
The physical basis of chemistry
Warren, Warren S
2000-01-01
If the text you're using for general chemistry seems to lack sufficient mathematics and physics in its presentation of classical mechanics, molecular structure, and statistics, this complementary science series title may be just what you're looking for. Written for the advanced lower-division undergraduate chemistry course, The Physical Basis of Chemistry, Second Edition, offers students an opportunity to understand and enrich the understanding of physical chemistry with some quantum mechanics, the Boltzmann distribution, and spectroscopy. Posed and answered are questions concerning eve
DEFF Research Database (Denmark)
Flesch, Benjamin; Hussain, Abid; Vatrapu, Ravi
2015-01-01
-edge open source visual analytics libraries from D3.js and creation of new visualizations (actor mobility across time, conversational comets etc). Evaluation of the dashboard, consisting of technical testing, usability testing, and domain-specific testing with CSR students, yielded positive results.......This paper presents a state-of-the-art visual analytics dashboard, Social Set Visualizer (SoSeVi), of approximately 90 million Facebook actions from 11 different companies that have been mentioned in the traditional media in relation to garment factory accidents in Bangladesh. The enterprise...
Technical Basis Document (TBD) and user guides
International Nuclear Information System (INIS)
Chiaro, P.J. Jr.
1998-09-01
A Technical Basis Document (TBD) should provide the background information for establishment of an instrument's operational requirements. Due to the amount and location of DOE facilities, no one set of requirements is possible. Operational requirements will vary based on the local environments and missions at each facility. Environmental conditions that can affect an instrument's operations are ambient temperature, humidity, and radio frequency, and to a lesser extent, magnetic fields, and interfering ionizing radiations. Consideration should also be made regarding how an instrument is to be used. If an instrument will be transported around the facility, vibration and shock can cause problems if they are not addressed in the TBD. This document provides guidance for the development of a TBD. This document applies to radiation instruments used for personnel and equipment contamination monitoring, dose rate monitoring, and air monitoring
Design Load Basis for Offshore Wind turbines
DEFF Research Database (Denmark)
Natarajan, Anand; Hansen, Morten Hartvig; Wang, Shaofeng
2016-01-01
DTU Wind Energy is not designing and manufacturing wind turbines and does therefore not need a Design Load Basis (DLB) that is accepted by a certification body. However, to assess the load consequences of innovative features and devices added to existing offshore turbine concepts or new offshore...... turbine concept developed in our research, it is useful to have a full DLB that follows the current design standard and is representative of a general DLB used by the industry. It will set a standard for the offshore wind turbine design load evaluations performed at DTU Wind Energy, which is aligned...... with the challenges faced by the industry and therefore ensures that our research continues to have a strong foundation in this interaction. Furthermore, the use of a full DLB that follows the current standard can improve and increase the feedback from the research at DTU Wind Energy to the international...
Fast, kinetically self-consistent simulation of RF modulated plasma boundary sheaths
International Nuclear Information System (INIS)
Shihab, Mohammed; Ziegler, Dennis; Brinkmann, Ralf Peter
2012-01-01
A mathematical model is presented which enables the efficient, kinetically self-consistent simulation of RF modulated plasma boundary sheaths in all technically relevant discharge regimes. It is defined on a one-dimensional geometry where a Cartesian x-axis points from the electrode or wall at x_E ≡ 0 towards the plasma bulk. An arbitrary endpoint x_B is chosen ‘deep in the bulk’. The model consists of a set of kinetic equations for the ions, Boltzmann's relation for the electrons and Poisson's equation for the electrical field. Boundary conditions specify the ion flux at x_B and a periodically—not necessarily harmonically—modulated sheath voltage V(t) or sheath charge Q(t). The equations are solved in a statistical sense. However, it is not the well-known particle-in-cell (PIC) scheme that is employed, but an alternative iterative algorithm termed ensemble-in-spacetime (EST). The basis of the scheme is a discretization of the spacetime, the product of the domain [x_E, x_B] and the RF period [0, T]. Three modules are called in a sequence. A Monte Carlo module calculates the trajectories of a large set of ions from their start at x_B until they reach the electrode at x_E, utilizing the potential values on the nodes of the spatio-temporal grid. A harmonic analysis module reconstructs the Fourier modes n_im(x) of the ion density n_i(x, t) from the calculated trajectories. A field module finally solves the Boltzmann-Poisson equation with the calculated ion densities to generate an updated set of potential values for the spatio-temporal grid. The iteration is started with the potential values of a self-consistent fluid model and terminates when the updates become sufficiently small, i.e. when self-consistency is achieved. A subsequent post-processing determines important quantities, in particular the phase-resolved and phase-averaged values of the ion energy and angular distributions and the total energy flux at the electrode. A drastic reduction of the
NUUK BASIC: The BioBasis programme
DEFF Research Database (Denmark)
Bay, Christian; Nymand, Josephine; Aastrup, Peter
2013-01-01
The 2011 season was the fourth full season for the BioBasis monitoring programme. Generally, there is a high consistency in data collected during the four years indicating that the data and the procedures used are reliable and sound. A preliminary review of data related to flowering and plant...... in the beginning of the growing season presumably due to the late snow melt. Generally, all plots functioned as sinks for atmospheric CO2 in August and September, whereas fluxes were close to zero in June, July and October. Similar to earlier years NEE (Net Ecosystem Exchange) is more negative in C-plots (control...... of gross primary production were generally observed in C-plots, while especially S-plots showed lower GPP (Gross Primary Production) rates compared with other treatments. In general, there are huge variations in the number of trapped arthropod specimens within all taxonomical groups both within the season...
Closed sets of nonlocal correlations
International Nuclear Information System (INIS)
Allcock, Jonathan; Linden, Noah; Brunner, Nicolas; Popescu, Sandu; Skrzypczyk, Paul; Vertesi, Tamas
2009-01-01
We present a fundamental concept - closed sets of correlations - for studying nonlocal correlations. We argue that sets of correlations corresponding to information-theoretic principles, or more generally to consistent physical theories, must be closed under a natural set of operations. Hence, studying the closure of sets of correlations gives insight into which information-theoretic principles are genuinely different, and which are ultimately equivalent. This concept also has implications for understanding why quantum nonlocality is limited, and for finding constraints on physical theories beyond quantum mechanics.
Quadratic Hedging of Basis Risk
Directory of Open Access Journals (Sweden)
Hardy Hulley
2015-02-01
Full Text Available This paper examines a simple basis risk model based on correlated geometric Brownian motions. We apply quadratic criteria to minimize basis risk and hedge in an optimal manner. Initially, we derive the Föllmer–Schweizer decomposition for a European claim. This allows pricing and hedging under the minimal martingale measure, corresponding to the local risk-minimizing strategy. Furthermore, since the mean-variance tradeoff process is deterministic in our setup, the minimal martingale- and variance-optimal martingale measures coincide. Consequently, the mean-variance optimal strategy is easily constructed. Simple pricing and hedging formulae for put and call options are derived in terms of the Black–Scholes formula. Due to market incompleteness, these formulae depend on the drift parameters of the processes. By making a further equilibrium assumption, we derive an approximate hedging formula, which does not require knowledge of these parameters. The hedging strategies are tested using Monte Carlo experiments, and are compared with results achieved using a utility maximization approach.
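The Black–Scholes building block in which the paper's pricing and hedging formulae are expressed can be sketched as follows. This is the textbook formula with illustrative parameter values, not the paper's basis-risk adjustment:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black-Scholes price of a European call on spot s, strike k,
    rate r, volatility sigma, maturity t (years)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def bs_put(s, k, r, sigma, t):
    """European put via put-call parity."""
    return bs_call(s, k, r, sigma, t) - s + k * math.exp(-r * t)

# Illustrative at-the-money example: S = K = 100, r = 5%, sigma = 20%, T = 1y.
call = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
put = bs_put(100.0, 100.0, 0.05, 0.2, 1.0)
```

In the basis-risk setting, the hedging instrument is a correlated but distinct asset, so such formulae additionally depend on drift parameters unless an equilibrium assumption is imposed, as the abstract notes.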
Towards thermodynamical consistency of quasiparticle picture
International Nuclear Information System (INIS)
Biro, T.S.; Shanenko, A.A.; Toneev, V.D.; Research Inst. for Particle and Nuclear Physics, Hungarian Academy of Sciences, Budapest
2003-01-01
The purpose of the present article is to call attention to some realistic quasi-particle-based description of the quark/gluon matter and its consistent implementation in thermodynamics. A simple and transparent representation of the thermodynamical consistency conditions is given. This representation allows one to review critically and systemize available phenomenological approaches to the deconfinement problem with respect to their thermodynamical consistency. Particular attention is paid to the development of a method for treating the string screening in the dense matter of unbound color charges. The proposed method yields an integrable effective pair potential, which can be incorporated into the mean-field picture. The results of its application are in reasonable agreement with lattice data on the QCD thermodynamics.
Toward thermodynamic consistency of quasiparticle picture
International Nuclear Information System (INIS)
Biro, T.S.; Toneev, V.D.; Shanenko, A.A.
2003-01-01
The purpose of the present article is to call attention to some realistic quasiparticle-based description of quark/gluon matter and its consistent implementation in thermodynamics. A simple and transparent representation of the thermodynamic consistency conditions is given. This representation allows one to review critically and systemize available phenomenological approaches to the deconfinement problem with respect to their thermodynamic consistency. Particular attention is paid to the development of a method for treating the string screening in the dense matter of unbound color charges. The proposed method yields an integrable effective pair potential that can be incorporated into the mean-field picture. The results of its application are in reasonable agreement with lattice data on the QCD thermodynamics
International Nuclear Information System (INIS)
Shepard, J.R.
1991-01-01
The authors examine the RPA based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the 1-loop level. They emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e, e') quasi-elastic response. They also study the effect of imposing a 3-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. The cutoff is much less important than consistency in the description of low-lying collective levels. The cutoff model provides excellent agreement with quasi-elastic (e, e') data
Wavelets as basis functions in electronic structure calculations
International Nuclear Information System (INIS)
Chauvin, C.
2005-11-01
This thesis is devoted to the definition and the implementation of a multi-resolution method to determine the fundamental state of a system composed of nuclei and electrons. In this work, we are interested in the Density Functional Theory (DFT), which allows one to express the Hamiltonian operator with the electronic density only, through a Coulomb potential and a non-linear potential. This operator acts on orbitals, which are solutions of the so-called Kohn-Sham equations. Their resolution requires expressing the orbitals and the density on a set of functions possessing both physical and numerical properties, as explained in the second chapter. One can hardly satisfy these two properties simultaneously, which is why we are interested in orthogonal and bi-orthogonal wavelet bases, whose interpolation properties are presented in the third chapter. We present in the fourth chapter three-dimensional solvers for the Coulomb potential, using not only the preconditioning property of wavelets, but also a multigrid algorithm. Determining this potential allows us to solve the self-consistent Kohn-Sham equations, by an algorithm presented in chapter five. The originality of our method consists in the construction of the stiffness matrix, combining a Galerkin formulation and a collocation scheme. We analyse the approximation properties of this method in the case of linear Hamiltonians, such as the harmonic oscillator and hydrogen, and present convergence results of the DFT for systems with a small number of electrons. Finally we show how orbital compression considerably reduces the number of coefficients to be kept, while preserving a good accuracy of the fundamental energy. (author)
Orthology and paralogy constraints: satisfiability and consistency
Lafond, Manuel; El-Mabrouk, Nadia
2014-01-01
Background A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics, can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a ...
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
International Nuclear Information System (INIS)
Chan, C.T.; Vanderbilt, D.; Louie, S.G.; Materials and Molecular Research Division, Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720
1986-01-01
We present a general self-consistency procedure formulated in momentum space for electronic structure and total-energy calculations of crystalline solids. It is shown that both the charge density and the change in the Hamiltonian matrix elements in each iteration can be calculated in a straightforward fashion once a set of overlap matrices is computed. The present formulation has the merit of bringing the self-consistency problem for different basis sets to the same footing. The scheme is used to extend a first-principles pseudopotential linear combination of Gaussian orbitals method to full point-by-point self-consistency, without refitting of potentials. It is shown that the set of overlap matrices can be calculated very efficiently if we exploit the translational and space-group symmetries of the system under consideration. This scheme has been applied to study the structural and electronic properties of Si and W, prototypical systems of very different bonding properties. The results agree well with experiment and other calculations. The fully self-consistent results are compared with those obtained by a variational procedure [J. R. Chelikowsky and S. G. Louie, Phys. Rev. B 29, 3470 (1984)]. We find that the structural properties for bulk Si and W (both systems have no interatomic charge transfer) can be treated accurately by the variational procedure. However, full self-consistency is needed for an accurate description of the band energies
Basis reduction for layered lattices
Torreão Dassen, Erwin
2011-01-01
We develop the theory of layered Euclidean spaces and layered lattices. We present algorithms to compute both Gram-Schmidt and reduced bases in this generalized setting. A layered lattice can be seen as a lattice in which certain directions have infinite weight. It can also be
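For reference, the ordinary Euclidean Gram-Schmidt procedure that the thesis generalizes can be sketched as follows. This is the classical version over the plain inner product; the layered generalization itself is not reproduced here.

```python
def gram_schmidt(basis):
    """Classical Gram-Schmidt orthogonalization over R^n with the plain
    Euclidean inner product (the layered generalization is not shown)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            c = dot(v, u) / dot(u, u)  # projection coefficient mu_{i,j}
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho
```

Each output vector is the input vector minus its projections onto all previously orthogonalized vectors, so the outputs are pairwise orthogonal.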
Parton Distributions based on a Maximally Consistent Dataset
Rojo, Juan
2016-04-01
The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this has to do with the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective, definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be mutually in agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of their implications for LHC phenomenology, finding also good consistency with the global fit result. These results provide a non-trivial validation test of the new NNPDF3.0 fitting methodology, and indicate that possible inconsistencies in the fitted dataset do not affect substantially the global fit PDFs.
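The Bayesian reweighting idea mentioned above can be illustrated with a simplified sketch: replicas are assigned likelihood weights from their chi-squared against new data, and an effective number of surviving replicas is tracked. Note this uses a plain Gaussian-likelihood weight exp(-chi2/2) and the inverse participation ratio for the effective size; the published NNPDF weight formula contains additional factors and is not reproduced here.

```python
import math

def reweight(chi2_values):
    """Weight each replica proportionally to exp(-chi2/2).
    Simplified Gaussian-likelihood reweighting; the published NNPDF
    formula includes an extra chi^(n-1) factor not shown here."""
    w = [math.exp(-c / 2.0) for c in chi2_values]
    s = sum(w)
    return [wi / s for wi in w]

def effective_size(p):
    """Effective number of replicas (inverse participation ratio);
    one common choice among several used in the literature."""
    return 1.0 / sum(pi * pi for pi in p)
```

Equal chi-squared values leave all replicas equally weighted; a strongly disfavored replica drives the effective size toward the number of surviving ones.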
Self-consistent approximations beyond the CPA: Part II
International Nuclear Information System (INIS)
Kaplan, T.; Gray, L.J.
1982-01-01
This paper concentrates on a self-consistent approximation for random alloys developed by Kaplan, Leath, Gray, and Diehl. The construction of the augmented space formalism for a binary alloy is sketched, and the notation to be used is derived. Using the operator methods of the augmented space, the self-consistent approximation is derived for the average Green's function and for evaluating the self-energy, taking into account the scattering by clusters of excitations. The particular cluster approximation desired is derived by treating the scattering by the excitations with S_T exactly. Fourier transforms on the disorder-space cluster-site labels solve the self-consistent set of equations. Expansion to short-range order in the alloy is also discussed. A method to reduce the problem to a computationally tractable form is described
Consistency and Reconciliation Model In Regional Development Planning
Directory of Open Access Journals (Sweden)
Dina Suryawati
2016-10-01
The aim of this study was to identify the problems in, and determine a conceptual model of, regional development planning. Regional development planning is a systemic, complex and unstructured process. Therefore, this study used soft systems methodology to outline unstructured issues with a structured approach. The conceptual models constructed in this study are a model of consistency and a model of reconciliation. Regional development planning is a process that must be well integrated with central planning and inter-regional planning documents. Integration and consistency of regional planning documents are very important in order to achieve the development goals that have been set. On the other hand, the process of development planning in the region involves a technocratic system with both top-down and bottom-up participation. The two must be balanced and must not overlap or dominate each other. Keywords: regional, development, planning, consistency, reconciliation
An algebraic method for constructing stable and consistent autoregressive filters
International Nuclear Information System (INIS)
Harlim, John; Hong, Hoon; Robbins, Jacob L.
2015-01-01
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR-models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern
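The classical stability condition referred to above can be checked directly: an AR(2) model x_t = a1*x_{t-1} + a2*x_{t-2} is stable when both roots of its characteristic polynomial z^2 - a1*z - a2 lie strictly inside the unit circle. A minimal sketch of that check (the paper's algebraic construction also enforces the Adams-Bashforth order-two consistency constraints, which are not reproduced here):

```python
import cmath

def ar2_is_stable(a1, a2):
    """Classical stability check for x_t = a1*x_{t-1} + a2*x_{t-2}:
    both roots of z^2 - a1*z - a2 must lie strictly inside the unit circle."""
    disc = cmath.sqrt(a1 * a1 + 4.0 * a2)  # discriminant of the quadratic
    roots = [(a1 + disc) / 2.0, (a1 - disc) / 2.0]
    return all(abs(r) < 1.0 for r in roots)
```

Using `cmath` keeps the check valid when the discriminant is negative and the roots are complex conjugates.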
Consistency and refinement for Interval Markov Chains
DEFF Research Database (Denmark)
Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel
2012-01-01
Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...
Market-consistent valuation of pension liabilities
Pelsser, A.; Vlaar, P.
2008-01-01
The European Union is currently preparing a new set of rules for the supervision of insurance companies, and is considering implementing these rules for pension funds as well. This framework is known as Solvency II. Similar to the Basle II framework for banking supervision, Solvency II is based on
Some new classes of consistent risk measures
Goovaerts, M.J.; Kaas, R.; Dhaene, J.L.M.; Tang, Q.
2004-01-01
Many types of insurance premium principles and/or risk measures can be characterized by means of a set of axioms, which in many cases are rather arbitrarily chosen and not always in accordance with economic reality. In the present paper we generalize Yaari¿s risk measure by relaxing his axioms. In
Proteolysis and consistency of Meshanger cheese
Jong, de L.
1978-01-01
Proteolysis in Meshanger cheese, estimated by quantitative polyacrylamide gel electrophoresis, is discussed. The conversion of αs1-casein was proportional to the rennet concentration in the cheese. Changes in consistency, after a maximum, were correlated with the breakdown of
Consistent Valuation across Curves Using Pricing Kernels
Directory of Open Access Journals (Sweden)
Andrea Macrina
2018-03-01
The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets). As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.
Guided color consistency optimization for image mosaicking
Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li
2018-01-01
This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize color differences among them under a unified energy framework; however, the results tend to present a consistent but unnatural appearance when the color differences between images are large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose a histogram extreme point matching algorithm that is robust to image geometrical misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by selecting an image subset as the reference, whose color characteristics are transferred to the others via the paths of a graph analysis. Thus, the final results after global adjustment take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones demonstrate that the proposed approach can achieve as good or even better results than the state-of-the-art approaches.
Consistency in multi-viewpoint architectural design
Dijkman, R.M.; Dijkman, Remco Matthijs
2006-01-01
This thesis presents a framework that aids in preserving consistency in multi-viewpoint designs. In a multi-viewpoint design each stakeholder constructs his own design part. We call each stakeholder’s design part the view of that stakeholder. To construct his view, a stakeholder has a viewpoint.
Consistent Visual Analyses of Intrasubject Data
Kahng, SungWoo; Chung, Kyong-Mee; Gutshall, Katharine; Pitts, Steven C.; Kao, Joyce; Girolami, Kelli
2010-01-01
Visual inspection of single-case data is the primary method of interpretation of the effects of an independent variable on a dependent variable in applied behavior analysis. The purpose of the current study was to replicate and extend the results of DeProspero and Cohen (1979) by reexamining the consistency of visual analysis across raters. We…
Consistent Stochastic Modelling of Meteocean Design Parameters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Sterndorff, M. J.
2000-01-01
Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...
On the existence of consistent price systems
DEFF Research Database (Denmark)
Bayraktar, Erhan; Pakkanen, Mikko S.; Sayit, Hasanjan
2014-01-01
We formulate a sufficient condition for the existence of a consistent price system (CPS), which is weaker than the conditional full support condition (CFS). We use the new condition to show the existence of CPSs for certain processes that fail to have the CFS property. In particular this condition...
Dynamic phonon exchange requires consistent dressing
International Nuclear Information System (INIS)
Hahne, F.J.W.; Engelbrecht, C.A.; Heiss, W.D.
1976-01-01
It is shown that states with undesirable properties (such as ghosts, states with complex eigenenergies, and states with unrestricted normalization) emerge from two-body calculations using dynamic effective interactions if one is not careful to introduce single-particle self-energy insertions in a consistent manner
Consistent feeding positions of great tit parents
Lessells, C.M.; Poelman, E.H.; Mateman, A.C.; Cassey, Ph.
2006-01-01
When parent birds arrive at the nest to provision their young, their position on the nest rim may influence which chick or chicks are fed. As a result, the consistency of feeding positions of the individual parents, and the difference in position between the parents, may affect how equitably food is
Consistency of the postulates of special relativity
International Nuclear Information System (INIS)
Gron, O.; Nicola, M.
1976-01-01
In a recent article in this journal, Kingsley has tried to show that the postulates of special relativity contradict each other. It is shown that the arguments of Kingsley are invalid because of an erroneous appeal to symmetry in a nonsymmetric situation. The consistency of the postulates of special relativity and the relativistic kinematics deduced from them is restated
Consistency of Network Traffic Repositories: An Overview
Lastdrager, E.; Lastdrager, E.E.H.; Pras, Aiko
2009-01-01
Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for
Consistency analysis of network traffic repositories
Lastdrager, Elmer; Lastdrager, E.E.H.; Pras, Aiko
Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for
Mutually unbiased bases and trinary operator sets for N qutrits
International Nuclear Information System (INIS)
Lawrence, Jay
2004-01-01
A complete orthonormal basis of N-qutrit unitary operators drawn from the Pauli group consists of the identity and 9^N - 1 traceless operators. The traceless ones partition into 3^N + 1 maximally commuting subsets (MCSs) of 3^N - 1 operators each, whose joint eigenbases are mutually unbiased. We prove that Pauli factor groups of order 3^N are isomorphic to all MCSs and show how this result applies in specific cases. For two qutrits, the 80 traceless operators partition into 10 MCSs. We prove that 4 of the corresponding basis sets must be separable, while 6 must be totally entangled (and Bell-like). For three qutrits, 728 operators partition into 28 MCSs with a less rigid structure, allowing for the coexistence of separable, partially entangled, and totally entangled (GHZ-like) bases. However, a minimum of 16 GHZ-like bases must occur. Every basis state is described by an N-digit trinary number consisting of the eigenvalues of N observables constructed from the corresponding MCS
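The counting in this abstract follows from the identity (3^N + 1)(3^N - 1) = 9^N - 1, i.e. the maximally commuting subsets exactly partition the traceless operators. A short check of the stated numbers:

```python
def qutrit_partition_counts(n):
    """Counts from the N-qutrit Pauli-group structure: total traceless
    operators, number of maximally commuting subsets (MCSs), and
    operators per MCS."""
    total = 9 ** n - 1
    n_mcs = 3 ** n + 1
    per_mcs = 3 ** n - 1
    assert n_mcs * per_mcs == total  # the MCSs exactly partition the set
    return total, n_mcs, per_mcs
```

For two qutrits this reproduces the 80 operators in 10 subsets of 8; for three qutrits, 728 operators in 28 subsets of 26.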
Validation of consistency of Mendelian sampling variance.
Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H
2018-03-01
Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
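One step of the validation method above, fitting a weighted linear regression of the yearly genetic-variance estimates against birth year to detect a trend, can be sketched as follows. The weights are taken as given here; in the described scheme they would derive from the prediction-error variances of the Mendelian sampling estimates (my reading, not a reproduced formula).

```python
def weighted_slope(years, estimates, weights):
    """Weighted least-squares slope of variance estimates vs. year.
    A nonzero slope suggests heterogeneity of genetic variance over time."""
    sw = sum(weights)
    xbar = sum(w * x for w, x in zip(weights, years)) / sw
    ybar = sum(w * y for w, y in zip(weights, estimates)) / sw
    num = sum(w * (x - xbar) * (y - ybar)
              for w, x, y in zip(weights, years, estimates))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(weights, years))
    return num / den
```

Under the control scenario one would expect a slope near zero; a simulated yearly increase in variance shows up as a significantly positive slope.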
10 CFR 830.202 - Safety basis.
2010-01-01
10 CFR 830.202 (Energy, Department of Energy, Nuclear Safety Management, Safety Basis Requirements). Safety basis. (a) The contractor responsible for a hazard category 1, 2, or 3 DOE nuclear facility must establish and maintain the safety basis...
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
Energy Technology Data Exchange (ETDEWEB)
KRIPPS, L.J.
2005-02-18
This document describes the qualitative evaluation of frequency and consequences for double-shell tank (DST) and single-shell tank (SST) representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant SSCs and/or TSRs were required to prevent or mitigate flammable gas accidents. Discussion of the resulting control decisions is included. This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition, based on an evaluation of the event frequency and consequence.
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
Energy Technology Data Exchange (ETDEWEB)
KRIPPS, L.J.
2005-03-03
This document describes the qualitative evaluation of frequency and consequences for DST and SST representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant structures, systems and components (SSCs) and/or technical safety requirements (TSRs) were required to prevent or mitigate flammable gas accidents. Discussion on the resulting control decisions is included. This technical basis document was developed to support WP-13033, Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence.
The neurological basis of occupation.
Gutman, Sharon A; Schindler, Victoria P
2007-01-01
The purpose of the present paper was to survey the literature on the neurological basis of human activity and its relationship to occupation and health. Activities related to neurological function were organized into three categories: those that activate the brain's reward system; those that promote the relaxation response; and those that preserve cognitive function into old age. The results of the literature review correlating neurological evidence and activities showed that purposeful and meaningful activities can counter the effects of stress-related diseases and reduce the risk for dementia. Specifically, it was found that music, drawing, meditation, reading, arts and crafts, and home repairs, for example, can stimulate the neurological system and enhance health and well-being. Prospective research studies are needed to examine the effects of purposeful activities on reducing stress and slowing the rate of cognitive decline.
Infective basis in childhood leukaemia
International Nuclear Information System (INIS)
Kinlen, Leo
1995-01-01
Leukaemia in children has often been suspected of having an infective basis (as specifically identified in certain animal species) but, until recently, formal studies had gone no further than to show that it does not markedly cluster in time and space. Many infective illnesses, however, are uncommon responses to infections that are mainly spread by the majority of infected individuals who are not ill. These include, for example, glandular fever and certain types of meningitis. Such illnesses are not contagious and do not normally cluster. The possibilities that childhood leukaemia might belong to this class of infective illnesses and be subject to increases in incidence as a result of epidemics of an underlying infection were suggested by the well-known excesses near Sellafield and Dounreay. (author)
Rating environmental noise on the basis of noise maps
Miedema, H.M.E.; Borst, H.C.
2006-01-01
A system that rates noise on the basis of noise maps has been developed; it is based on empirical exposure-response relationships, so that effects in the community will be lower if the system gives a better rating. It is consistent with the noise metrics and effect endpoints chosen in the EU, i.e., it
Guidelines for determining design basis ground motions
International Nuclear Information System (INIS)
1993-11-01
This report develops and applies a method for estimating strong earthquake ground motion. The emphasis of this study is on ground motion estimation in Eastern North America (east of the Rocky Mountains), with particular emphasis on the Eastern United States and southeastern Canada. Specifically considered are ground motions resulting from earthquakes with magnitudes from 5 to 8, fault distances from 0 to 500 km, and frequencies from 1 to 35 Hz. The two main objectives were: (1) to develop generic relations for estimating ground motion appropriate for site screening; and (2) to develop a guideline for conducting a thorough site investigation needed to define the seismic design basis. For the first objective, an engineering model was developed to predict the expected ground motion on rock sites, with an additional set of amplification factors to account for the response of the soil column over rock at soil sites. The results incorporate best estimates of ground motion as well as the randomness and uncertainty associated with those estimates. For the second objective, guidelines were developed for gathering geotechnical information at a site and using this information in calculating site response. As a part of this development, an extensive set of geotechnical and seismic investigations was conducted at three reference sites. Together, the engineering model and guidelines provide the means to select and assess the seismic suitability of a site
Consistency test of the standard model
International Nuclear Information System (INIS)
Pawlowski, M.; Raczka, R.
1997-01-01
If the 'Higgs mass' is not the physical mass of a real particle but rather an effective ultraviolet cutoff, then a process-energy dependence of this cutoff must be admitted. Precision data from at least two energy-scale experimental points are necessary to test this hypothesis. The first set of precision data is provided by the Z-boson peak experiments. We argue that the second set can be given by 10-20 GeV e+e- colliders. We pay attention to the special role of tau polarization experiments, which can be sensitive to the 'Higgs mass' for a sample of ~10^8 produced tau pairs. We argue that such a study may be regarded as a negative self-consistency test of the Standard Model and of most of its extensions
International Nuclear Information System (INIS)
Korkola, James E; Waldman, Frederic M; Blaveri, Ekaterina; DeVries, Sandy; Moore, Dan H II; Hwang, E Shelley; Chen, Yunn-Yi; Estep, Anne LH; Chew, Karen L; Jensen, Ronald H
2007-01-01
Breast cancer is a heterogeneous disease, presenting with a wide range of histologic, clinical, and genetic features. Microarray technology has shown promise in predicting outcome in these patients. We profiled 162 breast tumors using expression microarrays to stratify tumors based on gene expression. A subset of 55 tumors with extensive follow-up was used to identify gene sets that predicted outcome. The predictive gene set was further tested in previously published data sets. We used different statistical methods to identify three gene sets associated with disease free survival. A fourth gene set, consisting of 21 genes in common to all three sets, also had the ability to predict patient outcome. To validate the predictive utility of this derived gene set, it was tested in two published data sets from other groups. This gene set resulted in significant separation of patients on the basis of survival in these data sets, correctly predicting outcome in 62–65% of patients. By comparing outcome prediction within subgroups based on ER status, grade, and nodal status, we found that our gene set was most effective in predicting outcome in ER positive and node negative tumors. This robust gene selection with extensive validation has identified a predictive gene set that may have clinical utility for outcome prediction in breast cancer patients
A consistent interpretation of quantum mechanics
International Nuclear Information System (INIS)
Omnes, Roland
1990-01-01
Some mostly recent theoretical and mathematical advances can be linked together to yield a new consistent interpretation of quantum mechanics. It relies upon a unique and universal interpretative rule of a logical character based upon Griffiths' consistent histories. Some new results in semi-classical physics allow classical physics to be derived from this rule, including its logical aspects, and accordingly to prove the existence of determinism within the quantum framework. Together with decoherence, this can be used to retrieve the existence of facts, despite the probabilistic character of the theory. Measurement theory can then be made entirely deductive. It is accordingly found that wave packet reduction is a logical property, whereas one can always choose to avoid using it. The practical consequences of this interpretation are most often in agreement with the Copenhagen formulation, but they can be proved never to give rise to any logical inconsistency or paradox. (author)
Self-consistency in Capital Markets
Benbrahim, Hamid
2013-03-01
Capital markets are considered, at least in theory, information engines whereby traders contribute to price formation with their diverse perspectives. Regardless of whether one believes in efficient market theory or not, actions by individual traders influence prices of securities, which in turn influence actions by other traders. This influence is exerted through a number of mechanisms, including portfolio balancing, margin maintenance, trend following, and sentiment. As a result, market behaviors emerge from a number of mechanisms ranging from self-consistency due to the wisdom of crowds and self-fulfilling prophecies, to more chaotic behavior resulting from dynamics similar to the three-body system, namely the interplay between equities, options, and futures. This talk will address questions and findings regarding the search for self-consistency in capital markets.
Student Effort, Consistency and Online Performance
Directory of Open Access Journals (Sweden)
Hilde Patron
2011-07-01
This paper examines how student effort, consistency, motivation, and marginal learning influence student grades in an online course. We use data from eleven Microeconomics courses taught online, for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. The GPA is used as a measure of motivation, and the difference between a post-test and a pre-test as marginal learning. As expected, the level of motivation is found statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.
Consistent thermodynamic properties of lipids systems
DEFF Research Database (Denmark)
Cunico, Larissa; Ceriani, Roberta; Sarup, Bent
Physical and thermodynamic properties of pure components and their mixtures are the basic requirement for process design, simulation, and optimization. In the case of lipids, our previous works [1-3] have indicated a lack of experimental data for pure components and also for their mixtures at different pressures, with azeotrope behavior observed. Available thermodynamic consistency tests for TPx data were applied before performing parameter regressions for the Wilson, NRTL, UNIQUAC and original UNIFAC models. For solid-liquid equilibrium (SLE) data, new consistency tests have been developed [2]. Some of the developed tests were based on the quality tests proposed for VLE data. The relevance of enlarging the experimental databank of lipids systems data in order to improve the performance of predictive thermodynamic models was confirmed in this work by analyzing the calculated values of the original UNIFAC model.
Consistency relation for cosmic magnetic fields
DEFF Research Database (Denmark)
Jain, R. K.; Sloth, M. S.
2012-01-01
If cosmic magnetic fields are indeed produced during inflation, they are likely to be correlated with the scalar metric perturbations that are responsible for the cosmic microwave background anisotropies and large scale structure. Within an archetypical model of inflationary magnetogenesis, we show that there exists a new simple consistency relation for the non-Gaussian cross correlation function of the scalar metric perturbation with two powers of the magnetic field in the squeezed limit where the momentum of the metric perturbation vanishes. We emphasize that such a consistency relation turns out to be extremely useful to test some recent calculations in the literature. Apart from primordial non-Gaussianity induced by the curvature perturbations, such a cross correlation might provide a new observational probe of inflation and can in principle reveal the primordial nature of cosmic magnetic fields.
Consistent Estimation of Partition Markov Models
Directory of Open Access Journals (Sweden)
Jesús E. García
2017-04-01
The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element of the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? To answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric on the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
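The partition idea can be illustrated with a naive sketch: estimate each state's empirical next-symbol distribution from a realization and group states whose distributions (nearly) coincide. This is only a stand-in for intuition; the paper's consistent model-selection strategy and its derived metric are not reproduced, and the total-variation tolerance `tol` is an arbitrary assumption.

```python
from collections import Counter, defaultdict

def empirical_partition(seq, tol=0.1):
    """Group states whose empirical next-symbol distributions are within
    tol in total-variation distance (naive illustration of the partition
    idea, not the paper's consistent estimator)."""
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    dists = {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
             for s, cnt in counts.items()}
    parts = []
    for s in sorted(dists):
        for part in parts:
            rep = dists[part[0]]  # compare against the part's first member
            keys = set(rep) | set(dists[s])
            tv = 0.5 * sum(abs(rep.get(k, 0.0) - dists[s].get(k, 0.0))
                           for k in keys)
            if tv <= tol:
                part.append(s)
                break
        else:
            parts.append([s])
    return [tuple(p) for p in parts]
```

In the sequence 021202120212, states 0 and 1 both always transition to 2, so they fall into one part, while state 2 forms its own part.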
Internal Branding and Employee Brand Consistent Behaviours
DEFF Research Database (Denmark)
Mazzei, Alessandra; Ravazzani, Silvia
2017-01-01
Employee behaviours conveying brand values, named brand consistent behaviours, affect the overall brand evaluation. Internal branding literature highlights a knowledge gap in terms of communication practices intended to sustain such behaviours. This study contributes to the development of a non-normative and constitutive internal branding process. In particular, the paper places emphasis on the role and kinds of communication practices as a central part of that process. The paper also discusses an empirical study based on interviews with 32 Italian and American communication managers and 2 focus groups with Italian communication managers. Findings show that, in order to enhance employee brand consistent behaviours, the most effective communication practices are those characterised as enablement-oriented. Such a communication creates the organizational conditions adequate to sustain brand consistent behaviours.
Self-consistent velocity dependent effective interactions
International Nuclear Information System (INIS)
Kubo, Takayuki; Sakamoto, Hideo; Kammuri, Tetsuo; Kishimoto, Teruo.
1993-09-01
The field coupling method is extended to a system with a velocity dependent mean potential. By means of this method, we can derive the effective interactions which are consistent with the mean potential. The self-consistent velocity dependent effective interactions are applied to the microscopic analysis of the structures of giant dipole resonances (GDR) of ¹⁴⁸,¹⁵⁴Sm, of the first excited 2⁺ states of Sn isotopes, and of the first excited 3⁻ states of Mo isotopes. It is clarified that the interactions play crucial roles in describing the splitting of the resonant structure of GDR peaks, in restoring the energy weighted sum rule values, and in reducing B(Eλ) values. (author)
Evaluating Temporal Consistency in Marine Biodiversity Hotspots
Piacenza, Susan E.; Thurman, Lindsey L.; Barner, Allison K.; Benkwitt, Cassandra E.; Boersma, Kate S.; Cerny-Chipman, Elizabeth B.; Ingeman, Kurt E.; Kindinger, Tye L.; Lindsley, Amy J.; Nelson, Jake; Reimer, Jessica N.; Rowe, Jennifer C.; Shen, Chenchen; Thompson, Kevin A.; Heppell, Selina S.
2015-01-01
With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems, using a large scale, public monitoring dataset collected over an eight-year period off the US Pacific Coast…
Cloud Standardization: Consistent Business Processes and Information
Directory of Open Access Journals (Sweden)
Razvan Daniel ZOTA
2013-01-01
Full Text Available Cloud computing represents one of the latest emerging trends in distributed computing that enables the existence of hardware infrastructure and software applications as services. The present paper offers a general approach to the cloud computing standardization as a mean of improving the speed of adoption for the cloud technologies. Moreover, this study tries to show out how organizations may achieve more consistent business processes while operating with cloud computing technologies.
Migration and the Wage-Setting Curve
DEFF Research Database (Denmark)
Brücker, Herbert; Jahn, Elke
…Germany on the basis of a wage-setting curve. The wage-setting curve relies on the assumption that wages respond to a change in the unemployment rate, albeit imperfectly. This allows one to derive the wage and employment effects of migration simultaneously in a general equilibrium framework. Using…
Consistency relations in effective field theory
Energy Technology Data Exchange (ETDEWEB)
Munshi, Dipak; Regan, Donough, E-mail: D.Munshi@sussex.ac.uk, E-mail: D.Regan@sussex.ac.uk [Astronomy Centre, School of Mathematical and Physical Sciences, University of Sussex, Brighton BN1 9QH (United Kingdom)
2017-06-01
The consistency relations in large scale structure relate the lower-order correlation functions to their higher-order counterparts. They are a direct outcome of the underlying symmetries of a dynamical system and can be tested using data from future surveys such as Euclid. Using techniques from standard perturbation theory (SPT), previous studies of consistency relations have concentrated on the continuity-momentum (Euler)-Poisson system of an ideal fluid. We investigate the consistency relations in effective field theory (EFT), which adjusts the SPT predictions to account for the departure from the ideal fluid description on small scales. We provide detailed results for the 3D density contrast δ as well as the scaled divergence of velocity θ̄. Assuming a ΛCDM background cosmology, we find that the correction to SPT results becomes important at k ≳ 0.05 h/Mpc and that the suppression of EFT relative to SPT results, which scales as the square of the wavenumber k, can reach 40% of the total at k ≈ 0.25 h/Mpc at z = 0. We have also investigated whether effective field theory corrections to models of primordial non-Gaussianity can alter the squeezed-limit behaviour, finding the results to be rather insensitive to these counterterms. In addition, we present the EFT corrections to the squeezed limit of the bispectrum in redshift space, which may be of interest for tests of theories of modified gravity.
Consistent probabilities in loop quantum cosmology
International Nuclear Information System (INIS)
Craig, David A; Singh, Parampreet
2013-01-01
A fundamental issue for any quantum cosmological theory is to specify how probabilities can be assigned to various quantum events or sequences of events such as the occurrence of singularities or bounces. In previous work, we have demonstrated how this issue can be successfully addressed within the consistent histories approach to quantum theory for Wheeler–DeWitt-quantized cosmological models. In this work, we generalize that analysis to the exactly solvable loop quantization of a spatially flat, homogeneous and isotropic cosmology sourced with a massless, minimally coupled scalar field known as sLQC. We provide an explicit, rigorous and complete decoherent-histories formulation for this model and compute the probabilities for the occurrence of a quantum bounce versus a singularity. Using the scalar field as an emergent internal time, we show for generic states that the probability for a singularity to occur in this model is zero, and that of a bounce is unity, complementing earlier studies of the expectation values of the volume and matter density in this theory. We also show from the consistent histories point of view that all states in this model, whether quantum or classical, achieve arbitrarily large volume in the limit of infinite ‘past’ or ‘future’ scalar ‘time’, in the sense that the wave function evaluated at any arbitrary fixed value of the volume vanishes in that limit. Finally, we briefly discuss certain misconceptions concerning the utility of the consistent histories approach in these models. (paper)
Crespin, Daniel J; Christianson, Jon B; McCullough, Jeffrey S; Finch, Michael D
This study addresses whether health systems have consistent diabetes care performance across their ambulatory clinics and whether increasing consistency is associated with improvements in clinic performance. Study data included 2007 to 2013 diabetes care intermediate outcome measures for 661 ambulatory clinics in Minnesota and bordering states. Health systems provided more consistent performance, as measured by the standard deviation of performance for clinics in a system, relative to propensity score-matched proxy systems created for comparison purposes. No evidence was found that improvements in consistency were associated with higher clinic performance. The combination of high performance and consistent care is likely to enhance a health system's brand reputation, allowing it to better mitigate the financial risks of consumers seeking care outside the organization. These results suggest that larger health systems are most likely to deliver the combination of consistent and high-performance care. Future research should explore the mechanisms that drive consistent care within health systems.
Performance and consistency of indicator groups in two biodiversity hotspots.
Directory of Open Access Journals (Sweden)
Joaquim Trindade-Filho
In a world limited by data availability and limited funds for conservation, scientists and practitioners must use indicator groups to define spatial conservation priorities. Several studies have evaluated the effectiveness of indicator groups, but still little is known about the consistency in performance of these groups in different regions, which would allow their a priori selection. We systematically examined the effectiveness and the consistency of nine indicator groups in representing mammal species in two top-ranked Biodiversity Hotspots (BH): the Brazilian Cerrado and the Atlantic Forest. To test for group effectiveness we first found the best sets of sites able to maximize the representation of each indicator group in the BH and then calculated the average representation of different target species by the indicator groups in the BH. We considered consistent those indicator groups whose representation of target species was not statistically different between BH. We called effective those groups that outperformed the target-species representation achieved by random sets of species. Effective indicator groups required the selection of less than 2% of the BH area for representing target species. Restricted-range species were the most effective indicators for the representation of all mammal diversity as well as target species. It was also the only group with high consistency. We show that several indicator groups could be applied as shortcuts for representing mammal species in the Cerrado and the Atlantic Forest to develop conservation plans; however, only restricted-range species consistently held as the most effective indicator group for such a task. This group is of particular importance in conservation planning as it captures high diversity of endemic and endangered species.
Performance and consistency of indicator groups in two biodiversity hotspots.
Trindade-Filho, Joaquim; Loyola, Rafael Dias
2011-01-01
In a world limited by data availability and limited funds for conservation, scientists and practitioners must use indicator groups to define spatial conservation priorities. Several studies have evaluated the effectiveness of indicator groups, but still little is known about the consistency in performance of these groups in different regions, which would allow their a priori selection. We systematically examined the effectiveness and the consistency of nine indicator groups in representing mammal species in two top-ranked Biodiversity Hotspots (BH): the Brazilian Cerrado and the Atlantic Forest. To test for group effectiveness we first found the best sets of sites able to maximize the representation of each indicator group in the BH and then calculated the average representation of different target species by the indicator groups in the BH. We considered consistent indicator groups whose representation of target species was not statistically different between BH. We called effective those groups that outperformed the target-species representation achieved by random sets of species. Effective indicator groups required the selection of less than 2% of the BH area for representing target species. Restricted-range species were the most effective indicators for the representation of all mammal diversity as well as target species. It was also the only group with high consistency. We show that several indicator groups could be applied as shortcuts for representing mammal species in the Cerrado and the Atlantic Forest to develop conservation plans; however, only restricted-range species consistently held as the most effective indicator group for such a task. This group is of particular importance in conservation planning as it captures high diversity of endemic and endangered species.
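Finding a "best set of sites" that maximizes representation is, in spirit, a complementarity-based selection problem. A minimal greedy sketch follows; the function and variable names are hypothetical, and this heuristic is a stand-in for, not a reproduction of, the study's actual site-selection algorithm.

```python
def greedy_site_selection(site_species, targets):
    """Pick sites until every target species is represented at least once.

    site_species: dict mapping site -> set of species present at that site.
    targets: set of target species to represent.
    A toy complementarity heuristic: at each step, take the site that adds
    the most not-yet-covered target species.
    """
    selected, covered = [], set()
    remaining = dict(site_species)
    while covered < targets and remaining:
        best = max(remaining, key=lambda s: len((remaining[s] & targets) - covered))
        gain = (remaining[best] & targets) - covered
        if not gain:  # no site can add anything new
            break
        selected.append(best)
        covered |= gain
        del remaining[best]
    return selected, covered
```

With three sites holding species {1, 2}, {2, 3}, and {3}, the heuristic covers all three targets with only the first two sites, illustrating how a small fraction of the area can represent the full target set.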
Merriman, Tony R; Choi, Hyon K; Dalbeth, Nicola
2014-05-01
Gout results from deposition of monosodium urate (MSU) crystals. Elevated serum urate concentrations (hyperuricemia) are not sufficient for the development of disease. Genome-wide association studies (GWAS) have identified 28 loci controlling serum urate levels. The largest genetic effects are seen in genes involved in the renal excretion of uric acid, with others being involved in glycolysis. Whereas much is understood about the genetic control of serum urate levels, little is known about the genetic control of inflammatory responses to MSU crystals. Extending knowledge in this area depends on recruitment of large, clinically ascertained gout sample sets suitable for GWAS. Copyright © 2014 Elsevier Inc. All rights reserved.
Conceptual basis of outcome measures.
Keith, R A
1995-01-01
Because of its treatment configuration and the assumption of long-term benefit, rehabilitation has had a continuing interest in the measurement of outcomes. The utility of outcome indicators rests on their conceptual foundations, the technical development of measures and validation research. Some measures, particularly of functional status, have become increasingly sophisticated with the application of psychometric and statistical analysis techniques. Less effort has been devoted to an elaboration of their theoretical basis. A first step is an examination of the assumptions underlying outcome measures, the purpose of this article. Central to an understanding is clarification of definitions of key terms such as outcomes, independence, impairment, disability and handicap. All outcome measures must be seen as part of a social context of norms and expectations. However, most norms in rehabilitation are implied rather than explicit. The assumptions behind several common outcomes are examined with suggestions for ways to increase their utility. The ability of rehabilitation to compete in the current climate, stressing cost-effectiveness, will depend heavily on the robustness of outcome measures.
Basis Document for Sludge Stabilization
Risenmay, H R
2001-01-01
DOE-RL recently issued Safety Evaluation Report (SER) amendments to the PFP Final Safety Analysis Report, HNF-SD-CP-SAR-021 Rev. 2. The Justification for Continued Operations for 2736-ZB and plutonium oxides in BTCs Safety Basis change (letter DOE-RL ABD-074) was approved by one of the SERs. Also approved by SER was the revised accident analysis for the Magnesium Hydroxide Precipitation Process (MHPP) gloveboxes HC-230C-3 and HC-230C-5, containing increased glovebox inventories and corresponding increases in seismic release consequence. Numerous implementing documents require revision and issuance to implement the SER approvals. The SER for plutonium oxides into BTCs specifically limited the SER scope to "pure or clean oxides, i.e., 85 wt% or greater Pu, in this feed change" (SER Section 3.0, Base Information, paragraph 4 [page 11]). Comprehensive USQ Evaluation PFP-2001-12 addressed the packaging of Pu alloy metals into BTCs, and the packaging of Pu alloy oxides (powders) into food pack cans, and determined that the ac…
Boverman, Gregory; Miller, Eric L.; Brooks, Dana H.; Fang, Qianqian; Carp, S. A.; Selb, J. J.; Boas, David A.
2007-02-01
In the course of our experiments imaging the compressed breast in conjunction with digital tomosynthesis, we have noted that significant changes in tissue optical properties, on the order of 5%, occur during our imaging protocol. These changes seem to be consistent with changes both in total hemoglobin concentration and in oxygen saturation, as was the case for our standalone breast compression study, which made use of reflectance measurements. Simulation experiments show the importance of taking into account the temporal dynamics in the image reconstruction, and demonstrate the possibility of imaging the spatio-temporal dynamics of oxygen saturation and total hemoglobin in the breast. In the image reconstruction, we make use of spatio-temporal basis functions, specifically a voxel basis for spatial imaging and a cubic spline basis in time, and we reconstruct the spatio-temporal images using the entire data set simultaneously, making use of both absolute and relative measurements in the cost function. We have modified the sequence of sources used in our imaging acquisition protocol to improve our temporal resolution, and preliminary results are shown for normal subjects.
Justifying quasiparticle self-consistent schemes via gradient optimization in Baym-Kadanoff theory.
Ismail-Beigi, Sohrab
2017-09-27
The question of which non-interacting Green's function 'best' describes an interacting many-body electronic system is both of fundamental interest as well as of practical importance in describing electronic properties of materials in a realistic manner. Here, we study this question within the framework of Baym-Kadanoff theory, an approach where one locates the stationary point of a total energy functional of the one-particle Green's function in order to find the total ground-state energy as well as all one-particle properties such as the density matrix, chemical potential, or the quasiparticle energy spectrum and quasiparticle wave functions. For the case of the Klein functional, our basic finding is that minimizing the length of the gradient of the total energy functional over non-interacting Green's functions yields a set of self-consistent equations for quasiparticles that is identical to those of the quasiparticle self-consistent GW (QSGW) (van Schilfgaarde et al 2006 Phys. Rev. Lett. 96 226402-4) approach, thereby providing an a priori justification for such an approach to electronic structure calculations. In fact, this result is general, applies to any self-energy operator, and is not restricted to any particular approximation, e.g., the GW approximation for the self-energy. The approach also shows that, when working in the basis of quasiparticle states, solving the diagonal part of the self-consistent Dyson equation is of primary importance while the off-diagonals are of secondary importance, a common observation in the electronic structure literature of self-energy calculations. Finally, numerical tests and analytical arguments show that when the Dyson equation produces multiple quasiparticle solutions corresponding to a single non-interacting state, minimizing the length of the gradient translates into choosing the solution with largest quasiparticle weight.
Consistency in performance evaluation reports and medical records.
Lu, Mingshan; Ma, Ching-to Albert
2002-12-01
In the health care market, managed care has become the latest innovation for the delivery of services. For efficient implementation, the managed care organization relies on accurate information, so clinicians are often asked to report on patients before referrals are approved, treatments authorized, or insurance claims processed. What are clinicians' responses to solicitations for information by managed care organizations? The existing health literature has already pointed out the importance of provider gaming, sincere reporting, nudging, and dodging the rules. We assess the consistency of clinicians' reports on clients across administrative data and clinical records. For about 1,000 alcohol abuse treatment episodes, we compare clinicians' reports across two data sets. The first, the Maine Addiction Treatment System (MATS), was an administrative data set; the state government used it for program performance monitoring and evaluation. The second was a set of medical record abstracts taken directly from the clinical records of treatment episodes. A clinician's reporting practice exhibits an inconsistency if the information reported in MATS differs from the information reported in the medical record in a statistically significant way. We look for evidence of inconsistencies in five categories: admission alcohol use frequency, discharge alcohol use frequency, termination status, admission employment status, and discharge employment status. Chi-square tests, Kappa statistics, and sensitivity and specificity tests are used for hypothesis testing. Multiple imputation methods are employed to address the problem of missing values in the record abstract data set. For admission and discharge alcohol use frequency measures, we find, respectively, strong and supporting evidence for inconsistencies. We find equally strong evidence for consistency in reports of admission and discharge employment status, and mixed evidence on report consistency on termination status. Patterns of…
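Cohen's kappa, one of the agreement statistics named above, corrects raw agreement between two paired reports for the agreement expected by chance. A minimal sketch (the helper name is illustrative, and the study's full analysis also handled missing values via multiple imputation):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two paired categorical ratings, e.g. the same
    treatment episode coded in the administrative data set and in the
    medical record abstract. Returns 1.0 for perfect agreement and 0.0
    when agreement is no better than chance."""
    assert len(r1) == len(r2) and r1, "ratings must be paired and non-empty"
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement from the marginal category frequencies
    expected = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)
```

Identical reports give kappa of 1.0; reports whose agreement matches the chance level implied by the marginals give kappa of 0.0.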
Evaluating Temporal Consistency in Marine Biodiversity Hotspots.
Piacenza, Susan E; Thurman, Lindsey L; Barner, Allison K; Benkwitt, Cassandra E; Boersma, Kate S; Cerny-Chipman, Elizabeth B; Ingeman, Kurt E; Kindinger, Tye L; Lindsley, Amy J; Nelson, Jake; Reimer, Jessica N; Rowe, Jennifer C; Shen, Chenchen; Thompson, Kevin A; Heppell, Selina S
2015-01-01
With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems. Using a large scale, public monitoring dataset collected over an eight year period off the US Pacific Coast, we developed a methodological approach for avoiding biases associated with hotspot delineation. We aggregated benthic fish species data from research trawls and calculated mean hotspot thresholds for fish species richness and Shannon's diversity indices over the eight year dataset. We used a spatial frequency distribution method to assign hotspot designations to the grid cells annually. We found no areas containing consistently high biodiversity through the entire study period based on the mean thresholds, and no grid cell was designated as a hotspot for greater than 50% of the time-series. To test if our approach was sensitive to sampling effort and the geographic extent of the survey, we followed a similar routine for the northern region of the survey area. Our finding of low consistency in benthic fish biodiversity hotspots over time was upheld, regardless of biodiversity metric used, whether thresholds were calculated per year or across all years, or the spatial extent for which we calculated thresholds and identified hotspots. Our results suggest that static measures of benthic fish biodiversity off the US West Coast are insufficient for identification of hotspots and that long-term data are required to appropriately identify patterns of high temporal variability in biodiversity for these highly mobile taxa. Given that ecological communities are responding to a changing climate and other
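The biodiversity metrics and the thresholding idea can be sketched as follows. Shannon's H' is standard; the quantile cutoff in `hotspot_cells` is a simplifying assumption for illustration, whereas the study used mean thresholds and a spatial frequency distribution method.

```python
import math

def shannon_index(counts):
    """Shannon's diversity H' from per-species abundance counts in one grid cell."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def hotspot_cells(cell_values, quantile=0.9):
    """Flag grid cells at or above the given quantile of a biodiversity
    metric -- one common threshold-based way to delineate hotspots."""
    vals = sorted(cell_values.values())
    k = max(0, math.ceil(quantile * len(vals)) - 1)
    cutoff = vals[k]
    return {cell for cell, v in cell_values.items() if v >= cutoff}
```

Recomputing the flagged set year by year and checking how often each cell stays flagged is then a direct way to quantify the temporal consistency the study examines.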
Self-consistent gravitational self-force
International Nuclear Information System (INIS)
Pound, Adam
2010-01-01
I review the problem of motion for small bodies in general relativity, with an emphasis on developing a self-consistent treatment of the gravitational self-force. An analysis of the various derivations extant in the literature leads me to formulate an asymptotic expansion in which the metric is expanded while a representative worldline is held fixed. I discuss the utility of this expansion for both exact point particles and asymptotically small bodies, contrasting it with a regular expansion in which both the metric and the worldline are expanded. Based on these preliminary analyses, I present a general method of deriving self-consistent equations of motion for arbitrarily structured (sufficiently compact) small bodies. My method utilizes two expansions: an inner expansion that keeps the size of the body fixed, and an outer expansion that lets the body shrink while holding its worldline fixed. By imposing the Lorenz gauge, I express the global solution to the Einstein equation in the outer expansion in terms of an integral over a worldtube of small radius surrounding the body. Appropriate boundary data on the tube are determined from a local-in-space expansion in a buffer region where both the inner and outer expansions are valid. This buffer-region expansion also results in an expression for the self-force in terms of irreducible pieces of the metric perturbation on the worldline. Based on the global solution, these pieces of the perturbation can be written in terms of a tail integral over the body's past history. This approach can be applied at any order to obtain a self-consistent approximation that is valid on long time scales, both near and far from the small body. I conclude by discussing possible extensions of my method and comparing it to alternative approaches.
A method for consistent precision radiation therapy
International Nuclear Information System (INIS)
Leong, J.
1985-01-01
Using a meticulous setup procedure in which repeated portal films were taken before each treatment until satisfactory portal verifications were obtained, a high degree of precision in patient positioning was achieved. A fluctuation from treatment to treatment, over 11 treatments, of less than ±0.10 cm (S.D.) for anatomical points inside the treatment field was obtained. This, however, only applies to specific anatomical points selected for this positioning procedure and does not apply to all points within the portal. We have generalized this procedure and have suggested a means by which any target volume can be consistently positioned, which may approach this degree of precision. (orig.)
Gentzen's centenary the quest for consistency
Rathjen, Michael
2015-01-01
Gerhard Gentzen has been described as logic’s lost genius, whom Gödel called a better logician than himself. This work comprises articles by leading proof theorists, attesting to Gentzen’s enduring legacy to mathematical logic and beyond. The contributions range from philosophical reflections and re-evaluations of Gentzen’s original consistency proofs to the most recent developments in proof theory. Gentzen founded modern proof theory. His sequent calculus and natural deduction system beautifully explain the deep symmetries of logic. They underlie modern developments in computer science such as automated theorem proving and type theory.
Two consistent calculations of the Weinberg angle
International Nuclear Information System (INIS)
Fairlie, D.B.
1979-01-01
The Weinberg-Salam theory is reformulated as a pure Yang-Mills theory in a six-dimensional space, the Higgs field being interpreted as gauge potentials in the additional dimensions. Viewed in this way, the condition that the Higgs field transforms as a U(1) representation of charge one is equivalent to requiring a value of 30° for the Weinberg angle. A second consistent determination comes from the idea, borrowed from monopole theory, that the electromagnetic field is in the direction of the Higgs field. (Author)
A Data Set of Human Body Movements for Physical Rehabilitation Exercises.
Vakanski, Aleksandar; Jun, Hyung-Pil; Paul, David; Baker, Russell
2018-03-01
The article presents the University of Idaho Physical Rehabilitation Movement Data (UI-PRMD), a publicly available data set of movements related to common exercises performed by patients in physical rehabilitation programs. For the data collection, 10 healthy subjects performed 10 repetitions of different physical therapy movements, with a Vicon optical tracker and a Microsoft Kinect sensor used for the motion capture. The data are in a format that includes positions and angles of full-body joints. The objective of the data set is to provide a basis for mathematical modeling of therapy movements, as well as for establishing performance metrics for evaluation of patient consistency in executing the prescribed rehabilitation exercises.
Self-consistent Langmuir waves in resonantly driven thermal plasmas
Lindberg, R. R.; Charman, A. E.; Wurtele, J. S.
2007-12-01
The longitudinal dynamics of a resonantly driven Langmuir wave are analyzed in the limit that the growth of the electrostatic wave is slow compared to the bounce frequency. Using simple physical arguments, the nonlinear distribution function is shown to be nearly invariant in the canonical particle action, provided both a spatially uniform term and higher-order spatial harmonics are included along with the fundamental in the longitudinal electric field. Requirements of self-consistency with the electrostatic potential yield the basic properties of the nonlinear distribution function, including a frequency shift that agrees closely with driven, electrostatic particle simulations over a range of temperatures. This extends earlier work on nonlinear Langmuir waves by Morales and O'Neil [G. J. Morales and T. M. O'Neil, Phys. Rev. Lett. 28, 417 (1972)] and Dewar [R. L. Dewar, Phys. Plasmas 15, 712 (1972)], and could form the basis of a reduced kinetic treatment of plasma dynamics for accelerator applications or Raman backscatter.
Self-consistent Langmuir waves in resonantly driven thermal plasmas
International Nuclear Information System (INIS)
Lindberg, R. R.; Charman, A. E.; Wurtele, J. S.
2007-01-01
The longitudinal dynamics of a resonantly driven Langmuir wave are analyzed in the limit that the growth of the electrostatic wave is slow compared to the bounce frequency. Using simple physical arguments, the nonlinear distribution function is shown to be nearly invariant in the canonical particle action, provided both a spatially uniform term and higher-order spatial harmonics are included along with the fundamental in the longitudinal electric field. Requirements of self-consistency with the electrostatic potential yield the basic properties of the nonlinear distribution function, including a frequency shift that agrees closely with driven, electrostatic particle simulations over a range of temperatures. This extends earlier work on nonlinear Langmuir waves by Morales and O'Neil [G. J. Morales and T. M. O'Neil, Phys. Rev. Lett. 28, 417 (1972)] and Dewar [R. L. Dewar, Phys. Plasmas 15, 712 (1972)], and could form the basis of a reduced kinetic treatment of plasma dynamics for accelerator applications or Raman backscatter
Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.
Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth
2017-02-01
Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
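The uncorrected statistic discussed above can be sketched in a few lines; the helper names are illustrative, and the paper's contribution is precisely the model-based correction to this raw distance, which is not reproduced here.

```python
from collections import Counter
from itertools import product

def kmer_vector(seq, k, alphabet="ACGT"):
    """Relative k-mer frequencies of a sequence, in fixed lexicographic order."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return [counts["".join(w)] / total for w in product(alphabet, repeat=k)]

def sq_euclidean(u, v):
    """Squared Euclidean distance between two k-mer frequency vectors --
    the uncorrected statistic that can be statistically inconsistent
    as an approximation to a tree metric."""
    return sum((a - b) ** 2 for a, b in zip(u, v))
```

Pairwise distances computed this way over a set of sequences would then feed a distance-based tree method such as neighbor joining, which is where the consistency question arises.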
Will the consistent organic food consumer step forward?
DEFF Research Database (Denmark)
Juhl, Hans Jørn; Fenger, Morten H. J.; Thøgersen, John
2017-01-01
…registered transactions over 20 months from 8,704 randomly selected customers with a loyalty card are analyzed using a hidden Markov model, capturing the dynamics in consumers' purchases. The model identifies latent states representing identifiable, accessible, and actionable dynamic customer segments, and it captures the movements between states or segments. A pattern emerges which is consistent with the theory of behavioral spillover and inconsistent with the theory of moral licensing, including a tendency to buy organic products in an increasing number of product categories over time. The order in which organic products are adopted is inversely related to the behavioral costs of adopting them. The employed approach provides a firm basis for personalized communication aiming to increase cross-selling of organic products, increase the sale of less popular organic products, and accelerate movements from…
Self-consistent field with pseudowavefunctions
International Nuclear Information System (INIS)
Szasz, L.
1976-01-01
A computational method is given in which the energy of an atom is computed using pseudowavefunctions only. The method centers on a model energy expression E_M which is similar to the Hartree-Fock energy expression but contains only pseudowavefunctions. A theorem is proved according to which the Hartree-Fock orbitals can be transformed by a linear transformation into a set of uniquely defined pseudowavefunctions which have the property that, when substituted into E_M, this quantity closely approximates the Hartree-Fock energy E_F. The new method is then formulated by identifying the total energy of an atom with the minimum of E_M. Application of the energy minimum principle leads to a set of equations for the pseudowavefunctions which are similar to, but simpler than, the Hartree-Fock equations. These equations contain pseudopotentials for which explicit expressions are derived. The possibility of replacing these pseudopotentials by simpler model potentials is discussed, and the criteria for the selection of the model potential are outlined.
Sludge characterization: the role of physical consistency
Energy Technology Data Exchange (ETDEWEB)
Spinosa, Ludovico; Wichmann, Knut
2003-07-01
Physical consistency is an important parameter in sewage sludge characterization, as it strongly affects almost all treatment, utilization, and disposal operations. In addition, many European Directives refer to physical consistency as a characteristic to be evaluated in order to fulfil regulatory requirements. Further, many analytical methods for sludge prescribe different procedures depending on whether a sample is liquid or not, solid or not. Three physical behaviours (liquid, paste-like, and solid) can be observed in sludges, so there is growing interest in developing analytical procedures to define the boundary between liquid and paste-like behaviour (flowability) and between solid and paste-like behaviour (solidity). Several devices can be used to evaluate flowability and solidity, but they are often costly and difficult to operate in the field. Tests have been carried out to evaluate the possibility of adopting a simple extrusion procedure for flowability measurements and a Vicat needle for solidity measurements. (author)
Consistent mutational paths predict eukaryotic thermostability
Directory of Open Access Journals (Sweden)
van Noort Vera
2013-01-01
Background: Proteomes of thermophilic prokaryotes have been instrumental in structural biology and successfully exploited in biotechnology; however, many proteins required for eukaryotic cell function are absent from bacteria or archaea. With Chaetomium thermophilum, Thielavia terrestris and Thielavia heterothallica, three genome sequences of thermophilic eukaryotes have been published. Results: Studying the genomes and proteomes of these thermophilic fungi, we found common strategies of thermal adaptation across the different kingdoms of life, including amino acid biases and a reduced genome size. A phylogenetics-guided comparison of thermophilic proteomes with those of other, mesophilic Sordariomycetes revealed consistent amino acid substitutions associated with thermophily that were also present in an independent lineage of thermophilic fungi. The most consistent pattern is the substitution of lysine by arginine, which we could find in almost all lineages but which has not been extensively used in protein stability engineering. By exploiting mutational paths towards the thermophiles, we could predict particular amino acid residues in individual proteins that contribute to thermostability, and we validated some of them experimentally. By determining the three-dimensional structure of an exemplar protein from C. thermophilum (Arx1), we could also characterise the molecular consequences of some of these mutations. Conclusions: The comparative analysis of these three genomes not only enhances our understanding of the evolution of thermophily but also provides new ways to engineer protein stability.
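The lysine-to-arginine pattern described above is, at its core, a tally of substitutions along aligned sequences. A minimal sketch, assuming gap-free, equal-length pre-aligned sequences (hypothetical input; the paper's phylogenetics-guided pipeline is far more involved):

```python
def substitution_counts(mesophile_seq, thermophile_seq):
    """Count aligned substitutions between two equal-length, gap-free
    aligned protein sequences, keyed by (from_residue, to_residue),
    e.g. ('K', 'R') for a lysine-to-arginine substitution."""
    counts = {}
    for a, b in zip(mesophile_seq, thermophile_seq):
        if a != b:
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts
```

Aggregating such counts over many orthologous pairs is what reveals a consistent directional bias such as K→R.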
Consistency of extreme flood estimation approaches
Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf
2017-04-01
Estimates of low-probability flood events are frequently used for planning infrastructure and for dimensioning flood protection measures. There are several well-established methodical procedures for estimating low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, three different approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used to assess the potential strengths and weaknesses of each method, as well as to evaluate the consistency of the methods.
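The "ordinary extreme value statistics" route can be illustrated with a method-of-moments Gumbel fit to annual maximum discharges. This is a hedged sketch only (the data, the choice of Gumbel, and the moments-based fit are illustrative assumptions, not necessarily what the study used):

```python
import math

def gumbel_return_level(annual_maxima, return_period):
    """Method-of-moments Gumbel fit and T-year return level.

    For a Gumbel distribution, mean = mu + gamma * beta and
    std = beta * pi / sqrt(6), with gamma ~ 0.5772 (Euler-Mascheroni).
    The T-year level solves F(x) = 1 - 1/T.
    """
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi          # scale parameter
    mu = mean - 0.5772156649 * beta                # location parameter
    p = 1.0 - 1.0 / return_period                  # non-exceedance probability
    return mu - beta * math.log(-math.log(p))
```

Longer return periods give larger return levels, and the extrapolation beyond the observed record is exactly where the methods compared in the study begin to diverge.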
Consistent biokinetic models for the actinide elements
International Nuclear Information System (INIS)
Leggett, R.W.
2001-01-01
The biokinetic models for Th, Np, Pu, Am and Cm currently recommended by the International Commission on Radiological Protection (ICRP) were developed within a generic framework that depicts gradual burial of skeletal activity in bone volume, depicts recycling of activity released to blood and links excretion to retention and translocation of activity. For other actinide elements such as Ac, Pa, Bk, Cf and Es, the ICRP still uses simplistic retention models that assign all skeletal activity to bone surface and depicts one-directional flow of activity from blood to long-term depositories to excreta. This mixture of updated and older models in ICRP documents has led to inconsistencies in dose estimates and interpretation of bioassay for radionuclides with reasonably similar biokinetics. This paper proposes new biokinetic models for Ac, Pa, Bk, Cf and Es that are consistent with the updated models for Th, Np, Pu, Am and Cm. The proposed models are developed within the ICRP's generic model framework for bone-surface-seeking radionuclides, and an effort has been made to develop parameter values that are consistent with results of comparative biokinetic data on the different actinide elements. (author)
Leerteoretiese basis van die andragogie
Directory of Open Access Journals (Sweden)
C. J. A. Simpson
1991-06-01
Learning theory basis of andragogy. A cursory glance at andragogy creates the impression that humanistic learning theory plays an all-encompassing role in the learner-centered approach andragogy espouses. A closer look, however, reveals that Knowles (1973), after having made an intensive study of learning theory, created an extensive framework within which human resource development can take place. The fact that Knowles attracted critique from different quarters led to a need to ascertain the role different learning theories, if any, played in the emergence of andragogy. Having looked at the assumptions underlying the andragogical approach, as well as a comparison of different learning theories and their connection with andragogy, it became clear that andragogy contains elements of various learning theories in an adapted form. These adaptations resulted in an approach to adult education in which learners are given the opportunity to be part of the learning process in such a way that they themselves contribute to the development which takes place. Summary (translated from Afrikaans): At first glance it appears as if humanistic learning theory plays a dominant role in the learner-centered approach of andragogy. On closer examination, however, it emerges that Knowles (1973), after a thorough study of various learning-theoretical principles, created a comprehensive framework within which, by means of several adapted learning-theoretical principles, human resource development can take place. Because Knowles encountered criticism from various quarters, it was decided to study the role that different learning theories play in andragogy. It appears that andragogy not only contains elements of different learning theories, but that applicable aspects of the theories examined were utilised and adapted to establish an integrated approach in which adult learners in particular are involved in learning opportunities and their own self-development.
Hyperspherical Coulomb spheroidal basis in the Coulomb three-body problem
International Nuclear Information System (INIS)
Abramov, D. I.
2013-01-01
A hyperspherical Coulomb spheroidal (HSCS) representation is proposed for the Coulomb three-body problem. This is a new expansion in the set of well-known Coulomb spheroidal functions. The orthogonality of Coulomb spheroidal functions on a constant-hyperradius surface ρ = const rather than on a constant-internuclear-distance surface R = const, as in the traditional Born-Oppenheimer approach, is a distinguishing feature of the proposed approach. Owing to this, the HSCS representation proves to be consistent with the asymptotic conditions for the scattering problem at energies below the threshold for three-body breakup: only a finite number of radial functions do not vanish in the limit of ρ→∞, with the result that the formulation of the scattering problem becomes substantially simpler. In the proposed approach, the HSCS basis functions are considerably simpler than those in the well-known adiabatic hyperspherical representation, which is also consistent with the asymptotic conditions. Specifically, the HSCS basis functions are completely factorized. Therefore, there arise no problems associated with avoided crossings of adiabatic hyperspherical terms.
Basis UST leak detection systems
International Nuclear Information System (INIS)
Silveria, V.
1992-01-01
This paper reports that gasoline and other petroleum products are leaking from underground storage tanks (USTs) at an alarming rate, seeping into soil and groundwater. Buried pipes are an even greater culprit, accounting for most suspected and detected leaks according to Environmental Protection Agency (EPA) estimates. In response to this problem, the EPA issued regulations setting standards for preventing, detecting, reporting, and cleaning up leaks, as well as for fiscal responsibility. However, federal regulations are only a minimum; some states have cracked down even harder. Plant managers and engineers have a big job ahead of them. The EPA estimates that there are more than 75,000 fuel USTs at US industrial facilities. When considering leak detection systems, the person responsible for making the decision has five primary choices: inventory reconciliation combined with regular precision tightness tests; automatic tank gauging; groundwater monitoring; interstitial monitoring of double containment systems; and vapor monitoring.
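Of the five choices, inventory reconciliation is the simplest to sketch: metered deliveries and sales are compared against physical tank readings, and a persistent shortfall suggests a leak. The bookkeeping below is a simplifying illustration (the daily reset to the physical reading and the fixed threshold are hypothetical; real programs use statistically derived thresholds tied to EPA performance standards):

```python
def reconcile_inventory(opening, deliveries, sales, closing_readings, threshold):
    """Flag a possible leak when the cumulative book-vs-physical variance
    drifts below -threshold (all volumes in the same units, e.g. gallons).
    Returns (cumulative_variance, leak_suspected)."""
    book = opening
    variance = 0.0
    for delivered, sold, measured in zip(deliveries, sales, closing_readings):
        book = book + delivered - sold      # expected (book) inventory
        variance += measured - book         # shortfall accumulates if leaking
        book = measured                     # restart from the physical reading
    return variance, variance < -threshold
```

A tank losing a few gallons per day shows a steadily growing negative variance even when any single day's discrepancy is within measurement noise.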
Self-consistent modeling of amorphous silicon devices
International Nuclear Information System (INIS)
Hack, M.
1987-01-01
The authors developed a computer model to describe the steady-state behaviour of a range of amorphous silicon devices. It is based on the complete set of transport equations and takes into account the important role played by the continuous distribution of localized states in the mobility gap of amorphous silicon. Using one set of parameters they have been able to self-consistently simulate the current-voltage characteristics of p-i-n (or n-i-p) solar cells under illumination, the dark behaviour of field-effect transistors, p-i-n diodes and n-i-n diodes in both the ohmic and space charge limited regimes. This model also describes the steady-state photoconductivity of amorphous silicon, in particular, its dependence on temperature, doping and illumination intensity
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-10-01
We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on a linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This yields efficiency gains by switching to the imaginary-time representation, in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi: http://dx.doi.org/10.17632/cpchkfty4w.1. Licensing provisions: GNU General Public License. Programming language: Fortran 90. External routines/libraries: BLAS, LAPACK, MPI (optional). Nature of problem: a direct implementation of the GW method scales as N^4 with the system size, which quickly becomes prohibitively time consuming even on modern computers. Solution method: we implemented the GW approach using a method that switches between real-space and momentum-space representations; some operations are faster in real space, whereas others are more computationally efficient in reciprocal space, making our approach scale as N^3. Restrictions: the limiting factor is usually the memory available in a computer; using 10 GB/core of memory allows us to study systems with up to 15 atoms per unit cell.
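The Matsubara grid that replaces the real frequency axis is a standard construction. A minimal sketch (in units with k_B = hbar = 1, an assumption of this illustration), fermionic Matsubara frequencies are:

```python
import math

def matsubara_fermionic(temperature, n_max):
    """Fermionic Matsubara frequencies omega_n = (2n + 1) * pi * T,
    for n = 0 .. n_max - 1 (units with k_B = hbar = 1)."""
    return [(2 * n + 1) * math.pi * temperature for n in range(n_max)]
```

Self-energies evaluated on this discrete imaginary-frequency grid are smooth, which is what makes the imaginary-time/space-time machinery efficient compared with real-axis calculations.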
Consistency of canonical formulation of Horava gravity
International Nuclear Information System (INIS)
Soo, Chopin
2011-01-01
Both the non-projectable and projectable versions of Horava gravity face serious challenges. In the non-projectable version, the constraint algebra is seemingly inconsistent. The projectable version lacks a local Hamiltonian constraint, thus allowing for an extra graviton mode which can be problematic. A new formulation (based on arXiv:1007.1563) of Horava gravity, naturally realized as a representation of the master constraint algebra (instead of the Dirac algebra) studied by loop quantum gravity researchers, is presented. This formulation yields a consistent canonical theory with first-class constraints; it captures the essence of Horava gravity in retaining only spatial diffeomorphisms as the physically relevant non-trivial gauge symmetry, while the local Hamiltonian constraint is equivalently enforced by the master constraint.
Evaluating the hydrological consistency of evaporation products
Lopez Valencia, Oliver Miguel; Houborg, Rasmus; McCabe, Matthew
2017-01-01
Advances in space-based observations have provided the capacity to develop regional- to global-scale estimates of evaporation, offering insights into this key component of the hydrological cycle. However, the evaluation of large-scale evaporation retrievals is not a straightforward task. While a number of studies have intercompared a range of these evaporation products by examining the variance amongst them, or by comparison of pixel-scale retrievals against ground-based observations, there is a need to explore more appropriate techniques to comprehensively evaluate remote-sensing-based estimates. One possible approach is to establish the level of product agreement between related hydrological components: for instance, how well do evaporation patterns and response match with precipitation or water storage changes? To assess the suitability of this "consistency"-based approach for evaluating evaporation products, we focused our investigation on four globally distributed basins in arid and semi-arid environments, comprising the Colorado River basin, Niger River basin, Aral Sea basin, and Lake Eyre basin. In an effort to assess retrieval quality, three satellite-based global evaporation products based on different methodologies and input data, including CSIRO-PML, the MODIS Global Evapotranspiration product (MOD16), and Global Land Evaporation: the Amsterdam Methodology (GLEAM), were evaluated against rainfall data from the Global Precipitation Climatology Project (GPCP) along with Gravity Recovery and Climate Experiment (GRACE) water storage anomalies. To ensure a fair comparison, we evaluated consistency using a degree correlation approach after transforming both evaporation and precipitation data into spherical harmonics. Overall we found no persistent hydrological consistency in these dryland environments. Indeed, the degree correlation showed oscillating values between periods of low and high water storage changes, with a phase difference of about 2–3 months
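The degree correlation used to compare the spherical-harmonic transforms of evaporation and precipitation reduces, at each degree l, to a normalized cross-product over orders m. A minimal sketch (coefficient storage as a dict keyed by (l, m) is a hypothetical convention of this illustration):

```python
import math

def degree_correlation(coeffs_a, coeffs_b, degree):
    """Correlation between two fields at a single spherical-harmonic
    degree l: r_l = sum_m A_lm B_lm / sqrt(sum_m A_lm^2 * sum_m B_lm^2)."""
    cross = norm_a = norm_b = 0.0
    for m in range(-degree, degree + 1):
        a = coeffs_a.get((degree, m), 0.0)
        b = coeffs_b.get((degree, m), 0.0)
        cross += a * b
        norm_a += a * a
        norm_b += b * b
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return cross / math.sqrt(norm_a * norm_b)
```

Tracking r_l through time is what exposes the oscillation between periods of low and high water-storage change reported above.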
Self-consistent modelling of ICRH
International Nuclear Information System (INIS)
Hellsten, T.; Hedin, J.; Johnson, T.; Laxaaback, M.; Tennfors, E.
2001-01-01
The performance of ICRH is often sensitive to the shape of the high energy part of the distribution functions of the resonating species. This requires self-consistent calculations of the distribution functions and the wave-field. In addition to the wave-particle interactions and Coulomb collisions the effects of the finite orbit width and the RF-induced spatial transport are found to be important. The inward drift dominates in general even for a symmetric toroidal wave spectrum in the centre of the plasma. An inward drift does not necessarily produce a more peaked heating profile. On the contrary, for low concentrations of hydrogen minority in deuterium plasmas it can even give rise to broader profiles. (author)
Nonlinear self-consistency of microtearing modes
International Nuclear Information System (INIS)
Garbet, X.; Mourgues, F.; Samain, A.
1987-01-01
The self-consistency of microtearing turbulence is studied in nonlinear regimes where the ergodicity of the flux lines determines the electron response. The current which sustains the magnetic perturbation via Ampere's law results from the combined action of the radial electric field, in the frame where the island chains are static, and of the thermal electron diamagnetism. Numerical calculations show that at usual values of β_pol in tokamaks the turbulence can create a diffusion coefficient of order ν_th ρ_i², where ρ_i is the ion Larmor radius and ν_th the electron-ion collision frequency. On the other hand, collisionless regimes involving special profiles of each mode near the resonant surface seem possible.
Consistent evolution in a pedestrian flow
Guan, Junbiao; Wang, Kaihua
2016-03-01
In this paper, pedestrian evacuation considering different human behaviors is studied using a cellular automaton (CA) model combined with snowdrift game theory. The evacuees are divided into two types, cooperators and defectors, and two different human behaviors, herding behavior and independent behavior, are investigated. A large number of numerical simulations show that the ratios of the corresponding evacuee clusters evolve to consistent states from 11 typical, distinct initial conditions, which may be largely attributed to a self-organization effect. Moreover, an appropriate proportion of initial defectors who exhibit herding behavior, coupled with an appropriate proportion of initial defectors who think rationally and independently, are two necessary factors for a short evacuation time.
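The snowdrift game underlying the model has a standard pairwise payoff structure in which defecting against a cooperator pays best, but mutual defection pays worst. A hedged sketch (the benefit and cost values are hypothetical parameters, not taken from the paper):

```python
def snowdrift_payoff(me_cooperates, other_cooperates, benefit=1.0, cost=0.6):
    """Pairwise snowdrift-game payoff, assuming benefit > cost > 0:
    mutual cooperation splits the cost, a lone cooperator bears it all,
    a defector facing a cooperator free-rides, mutual defection earns nothing."""
    if me_cooperates and other_cooperates:
        return benefit - cost / 2.0
    if me_cooperates and not other_cooperates:
        return benefit - cost
    if not me_cooperates and other_cooperates:
        return benefit
    return 0.0
```

Unlike the prisoner's dilemma, the best response here depends on the neighbor's strategy, which is what lets cooperator and defector clusters coexist on the CA lattice.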
Globfit: Consistently fitting primitives by discovering global relations
Li, Yangyan; Wu, Xiaokun; Chrysathou, Yiorgos; Sharf, Andrei; Cohen-Or, Daniel; Mitra, Niloy J.
2011-01-01
Given a noisy and incomplete point set, we introduce a method that simultaneously recovers a set of locally fitted primitives along with their global mutual relations. We operate under the assumption that the data correspond to a man-made engineering object consisting of basic primitives, possibly repeated and globally aligned under common relations. We introduce an algorithm to directly couple the local and global aspects of the problem. The local fit of the model is determined by how well the inferred model agrees with the observed data, while the global relations are iteratively learned and enforced through a constrained optimization. Starting with a set of initial RANSAC-based locally fitted primitives, relations across the primitives such as orientation, placement, and equality are progressively learned and conformed to. In each stage, a set of feasible relations is extracted from among the candidate relations and then aligned to, while best fitting the input data. The global coupling corrects the primitives obtained in the local RANSAC stage and brings them to precise global alignment. We test the robustness of our algorithm on a range of synthesized and scanned data, with varying amounts of noise, outliers, and non-uniform sampling, and validate the results against ground truth, where available. © 2011 ACM.
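The initial RANSAC stage can be illustrated in miniature with 2D line fitting (GlobFit itself fits richer 3D primitives; the iteration count and inlier tolerance here are hypothetical parameters):

```python
import random

def ransac_line(points, iterations=200, tol=0.05, seed=0):
    """Minimal 2D RANSAC: fit a line a*x + b*y + c = 0 (unit normal)
    by sampling point pairs and keeping the largest inlier set."""
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2               # normal to the sampled segment
        norm = (a * a + b * b) ** 0.5
        if norm == 0.0:
            continue                          # degenerate pair, resample
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) <= tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

In GlobFit, many such locally fitted primitives are then corrected jointly so that learned global relations (parallelism, equal radii, coplanarity) hold exactly.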
G-Consistent Subsets and Reduced Dynamical Quantum Maps
Ceballos, Russell R.
A quantum system which evolves in time while interacting with an external environment is said to be an open quantum system (OQS), and the influence of the environment on the unperturbed unitary evolution of the system generally leads to non-unitary dynamics. This kind of open-system dynamical evolution has typically been modeled by a Standard Prescription (SP) which assumes that the state of the OQS is initially uncorrelated with the environment state. It is shown here that when a minimal set of physically motivated assumptions is adopted, not only do there exist constraints on the reduced dynamics of an OQS such that this SP does not always accurately describe the possible initial correlations existing between the OQS and environment, but such initial correlations, and even entanglement, can be witnessed when a particular class of reduced state transformations, termed purity extractions, is observed. Furthermore, as part of a more fundamental investigation to better understand the minimal set of assumptions required to formulate well-defined reduced dynamical quantum maps, it is demonstrated that there exists a one-to-one correspondence between the set of initial reduced states and the set of admissible initial system-environment composite states when G-consistency is enforced. Given the discussions surrounding the requirement of complete positivity and the reliance on the SP, the results presented here may well be found valuable for determining the basic properties of reduced dynamical maps, and when restrictions on the OQS dynamics naturally emerge.
Gaussian basis functions for highly oscillatory scattering wavefunctions
Mant, B. P.; Law, M. M.
2018-04-01
We have applied a basis set of distributed Gaussian functions within the S-matrix version of the Kohn variational method to scattering problems involving deep potential energy wells. The Gaussian positions and widths are tailored to the potential using the procedure of Bačić and Light (1986 J. Chem. Phys. 85 4594) which has previously been applied to bound-state problems. The placement procedure is shown to be very efficient and gives scattering wavefunctions and observables in agreement with direct numerical solutions. We demonstrate the basis function placement method with applications to hydrogen atom–hydrogen atom scattering and antihydrogen atom–hydrogen atom scattering.
Self-consistent electronic-structure calculations for interface geometries
International Nuclear Information System (INIS)
Sowa, E.C.; Gonis, A.; MacLaren, J.M.; Zhang, X.G.
1992-01-01
This paper describes a technique for computing self-consistent electronic structures and total energies of planar defects, such as interfaces, which are embedded in an otherwise perfect crystal. As in the Layer Korringa-Kohn-Rostoker approach, the solid is treated as a set of coupled layers of atoms, using Bloch's theorem to take advantage of the two-dimensional periodicity of the individual layers. The layers are coupled using the techniques of the Real-Space Multiple-Scattering Theory, avoiding artificial slab or supercell boundary conditions. A total-energy calculation on a Cu crystal, which has been split apart at a (111) plane, is used to illustrate the method
A simple and strongly consistent estimator for stable distributions
Directory of Open Access Journals (Sweden)
Cira E. Guevara Otiniano
2016-06-01
Stable distributions are extensively used to analyze the returns of financial assets, such as exchange rates and stock prices. In this paper we propose a simple and strongly consistent estimator for the scale parameter of a symmetric stable Lévy distribution. The advantage of this estimator is that its computational cost is minimal, so it can be used to initialize computationally intensive procedures such as maximum likelihood. Using random samples of size n, we tested the efficacy of the estimator by the Monte Carlo method. We also include applications to three data sets.
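One simple, consistent route to the scale of a symmetric α-stable law goes through the empirical characteristic function, since |φ(t)| = exp(-(σt)^α). This is a hedged sketch of that general idea, not necessarily the paper's exact estimator (the evaluation point t is a hypothetical tuning choice):

```python
import cmath
import math

def stable_scale_ecf(sample, alpha, t=1.0):
    """Scale estimate for a symmetric alpha-stable law from the empirical
    characteristic function: |phi(t)| = exp(-(sigma * t)**alpha), hence
    sigma = (-ln|phi_n(t)|)**(1/alpha) / t."""
    n = len(sample)
    phi = sum(cmath.exp(1j * t * x) for x in sample) / n   # empirical CF
    return (-math.log(abs(phi))) ** (1.0 / alpha) / t
```

Because |φ_n(t)| converges almost surely to |φ(t)| at each fixed t, the plug-in estimate is strongly consistent; its low cost makes it a natural initializer for maximum likelihood.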
Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines
Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.
2011-01-01
Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available, meaning that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations exist in R and may be used for applications. PMID:21915433
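The nearest-neighbor version of a probability machine is easy to sketch: treating the 0/1 response as a regression target, the estimated probability at a query point is simply the mean label of its k nearest neighbors. A minimal, self-contained illustration (the toy data layout is hypothetical):

```python
def knn_probability(train, query, k):
    """train: list of (feature_vector, label) pairs with label in {0, 1}.
    Estimates P(Y = 1 | query) as the mean label of the k nearest points,
    i.e. nonparametric regression on a binary response."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda pair: sq_dist(pair[0], query))[:k]
    return sum(label for _, label in nearest) / k
```

With k growing suitably with the sample size, this estimator is consistent for the conditional probability, which is the paper's central point about regression-consistent learning machines.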
The consistency assessment of topological relations in cartographic generalization
Zheng, Chunyan; Guo, Qingsheng; Du, Xiaochu
2006-10-01
Research on generalization assessment has received less attention than the generalization process itself, yet maintaining topological-relation consistency is essential to generalization quality. This paper proposes a methodology for assessing the quality of a generalized map through the consistency of its topological relations. Taking roads (including railways) and residential areas as examples, and from the viewpoint of spatial cognition, some issues concerning topological consistency at different map scales are analyzed. Statistical information about inconsistent topological relations can be obtained by comparing two matrices: one for the topological relations in the generalized map, the other a theoretical matrix for the topological relations that should be maintained after generalization. Based on fuzzy set theory and the classification of map object types, a consistency evaluation model for topological relations is established. Finally, the feasibility of the method is demonstrated with an example evaluating the local topological relations between simple roads and a residential area.
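The matrix comparison described above can be sketched as follows; the relation codes and the two matrices are hypothetical examples, not data from the paper.

```python
import numpy as np

# Hypothetical topological-relation codes between map objects
# (e.g. 0 = disjoint, 1 = touches, 2 = overlaps); illustrative only.
theoretical = np.array([[0, 1, 0],
                        [1, 0, 2],
                        [0, 2, 0]])   # relations that should survive generalization
generalized = np.array([[0, 1, 1],
                        [1, 0, 2],
                        [1, 2, 0]])   # relations observed in the generalized map

# Each unordered object pair appears once in the upper triangle.
iu = np.triu_indices_from(theoretical, k=1)
mismatch = theoretical[iu] != generalized[iu]
consistency = 1.0 - mismatch.mean()
print(f"inconsistent pairs: {mismatch.sum()}, consistency: {consistency:.2f}")
```

Here one of the three object pairs changed its relation, so the raw consistency ratio is 2/3; the paper's fuzzy evaluation model would additionally weight such mismatches by object type.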
Robust Visual Tracking Via Consistent Low-Rank Sparse Learning
Zhang, Tianzhu
2014-06-19
Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly, exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive, since the temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed-form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.
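The temporal-consistency pruning step can be illustrated with a deliberately simplified sketch: candidates far from the previous state estimate are discarded before any expensive representation learning. The distance-threshold rule is an assumption for illustration, not the actual CLRST criterion.

```python
import numpy as np

def prune_particles(particles, prev_state, radius):
    """Temporal-consistency pruning: keep only candidate particles whose
    state lies within `radius` of the previous estimated target state
    (a simplified reading of the pruning idea, not the CLRST rule)."""
    d = np.linalg.norm(particles - prev_state, axis=1)
    return particles[d <= radius]

prev = np.array([100.0, 50.0])                          # previous (x, y) estimate
cands = np.array([[102.0, 51.0], [160.0, 90.0], [99.0, 48.0]])
print(prune_particles(cands, prev, radius=10.0))
```

Pruning shrinks the candidate set handed to the sparse-representation solver, which is where the computational saving comes from.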
PFP total process throughput calculation and basis of estimate
International Nuclear Information System (INIS)
SINCLAIR, J.C.
1999-01-01
The PFP Process Throughput Calculation and Basis of Estimate document provides the calculated value and basis of estimate for process throughput associated with material stabilization operations conducted in 234-52 Building. The process throughput data provided reflects the best estimates of material processing rates consistent with experience at the Plutonium Finishing Plant (PFP) and other U.S. Department of Energy (DOE) sites. The rates shown reflect demonstrated capacity during ''full'' operation. They do not reflect impacts of building down time. Therefore, these throughput rates need to have a Total Operating Efficiency (TOE) factor applied
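The TOE adjustment described above amounts to a simple scaling of the demonstrated rate; the numbers below are illustrative, not PFP figures.

```python
def effective_throughput(demonstrated_rate, toe):
    """Scale a demonstrated 'full operation' processing rate by a
    Total Operating Efficiency (TOE) factor to account for facility
    down time. Inputs are illustrative, not PFP data."""
    return demonstrated_rate * toe

# e.g. 10 items/day demonstrated capacity at a 60% operating efficiency
print(effective_throughput(10.0, 0.60))
```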
Genetic basis of endocrine pathology
Directory of Open Access Journals (Sweden)
T.V. Sorokman
2017-05-01
The purpose of this review was to analyze literature data on the molecular genetic basis and diagnosis of endocrine pathology. We searched for published and unpublished research using PubMed as the search engine with the keywords 'genes', 'endocrine diseases', 'molecular diagnostics', 'prohormones', and 'nuclear receptors and transcription factors', taking into consideration studies conducted over the last 10 years, citation review of relevant primary and review articles, conference abstracts, personal files, and contact with expert informants. Articles were selected for close relevance to the topic; of the 144 articles analyzed, the findings covered in 32 were crucial. The nosologies described include various hereditary forms of hypopituitarism, disturbances of steroid hormone biosynthesis, abnormal gender formation, monogenic forms of diabetes mellitus, endocrine tumors, etc. The pathology identified is associated with mutations of genes encoding protein prohormones, receptors, steroid biosynthesis enzymes, intracellular signaling molecules, transport proteins, ion channels, and transcription factors. Among endocrine diseases associated with defects in genes encoding protein prohormones, defects of the GH1 gene are most common; among diseases associated with defects in genes encoding enzymes, defects in the CYP21A2 gene (21-hydroxylase) predominate. Most often, mutations affect genes encoding proteins of the G-protein-coupled receptor class. Most of the mutations associated with MEN-2A are concentrated in the cysteine-rich region of the Ret receptor. More than 70 monogenic syndromes are known in which there is impaired glucose tolerance and some form of diabetes mellitus is diagnosed; diabetes mellitus caused by mutation of the mitochondrial gene (mutation tRNALeu, UUR) is also detected. Of all the monogenic forms of
DEFF Research Database (Denmark)
Yang, Laurence; Tan, Justin; O'Brien, Edward J.
2015-01-01
Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma …)
UpSet: Visualization of Intersecting Sets
Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter
2016-01-01
Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet focuses on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and providing a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates, or attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
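The exclusive intersection sizes that an UpSet matrix displays can be computed directly: for each combination of sets, count the elements that belong to exactly those sets and no others. This sketch uses hypothetical sets and is not the UpSet implementation.

```python
from itertools import combinations

def exclusive_intersections(named_sets):
    """For each combination of sets, count elements belonging to exactly
    those sets and no others -- the quantities an UpSet matrix displays."""
    names = list(named_sets)
    result = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(named_sets[n] for n in combo))
            outside = set.union(*(named_sets[n] for n in names
                                  if n not in combo), set())
            result[combo] = len(inside - outside)
    return result

sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {4, 5, 6}}
print(exclusive_intersections(sets))
```

Because every element falls in exactly one exclusive intersection, the counts partition the union of all sets, which is what makes the matrix layout additive and sortable.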
The pointer basis and the feedback stabilization of quantum systems
International Nuclear Information System (INIS)
Li, L; Chia, A; Wiseman, H M
2014-01-01
The dynamics for an open quantum system can be ‘unravelled’ in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states, evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere (Atkins et al 2005 Europhys. Lett. 69 163) that the ‘pointer basis’ as introduced by Zurek et al (1993 Phys. Rev. Lett. 70 1187), should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However if the feedback control is weak compared to the decoherence, this is not the case. (paper)
Consistent boundary conditions for open strings
International Nuclear Information System (INIS)
Lindstroem, Ulf; Rocek, Martin; Nieuwenhuizen, Peter van
2003-01-01
We study boundary conditions for the bosonic, spinning (NSR) and Green-Schwarz open string, as well as for (1+1)-dimensional supergravity. We consider boundary conditions that arise from (1) extremizing the action, (2) BRST, rigid or local supersymmetry, or κ(Siegel)-symmetry of the action, (3) closure of the set of boundary conditions under the symmetry transformations, and (4) the boundary limits of bulk Euler-Lagrange equations that are 'conjugate' to other boundary conditions. We find corrections to Neumann boundary conditions in the presence of a bulk tachyon field. We discuss a boundary superspace formalism. We also find that path integral quantization of the open string requires an infinite tower of boundary conditions that can be interpreted as a smoothness condition on the doubled interval; we interpret this to mean that for a path-integral formulation of open strings with only Neumann boundary conditions, the description in terms of orientifolds is not just natural, but is actually fundamental.
Thermodynamically consistent data-driven computational mechanics
González, David; Chinesta, Francisco; Cueto, Elías
2018-05-01
In the paradigm of data-intensive science, the automated, unsupervised discovery of governing equations for a given physical phenomenon has attracted a lot of attention in several branches of applied sciences. In this work, we propose a method able to avoid the identification of the constitutive equations of complex systems and instead work in a purely numerical manner by employing experimental data. In sharp contrast to most existing techniques, this method does not rely on the assumption of any particular form for the model (other than some fundamental restrictions placed by classical physics, such as the second law of thermodynamics), nor does it force the algorithm to find, among a predefined set of operators, those whose predictions best fit the available data. Instead, the method is able to identify both the Hamiltonian (conservative) and dissipative parts of the dynamics while satisfying fundamental laws such as energy conservation or positive production of entropy. The proposed method is tested against examples from discrete as well as continuum mechanics, whose accurate results demonstrate the validity of the approach.
Perceptual basis of evolving Western musical styles.
Rodriguez Zivic, Pablo H; Shifres, Favio; Cecchi, Guillermo A
2013-06-11
The brain processes temporal statistics to predict future events and to categorize perceptual objects. These statistics, called expectancies, are found in music perception, and they span a variety of different features and time scales. Specifically, there is evidence that music perception involves strong expectancies regarding the distribution of a melodic interval, namely, the distance between two consecutive notes within the context of another. The recent availability of a large Western music dataset, consisting of the historical record condensed as melodic interval counts, has opened new possibilities for data-driven analysis of musical perception. In this context, we present an analytical approach that, based on cognitive theories of music expectation and machine learning techniques, recovers a set of factors that accurately identifies historical trends and stylistic transitions between the Baroque, Classical, Romantic, and Post-Romantic periods. We also offer a plausible musicological and cognitive interpretation of these factors, allowing us to propose them as data-driven principles of melodic expectation.
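The melodic-interval statistics underlying such a dataset can be computed directly from a pitch sequence: count the semitone differences between consecutive notes and normalize. The toy melody and the MIDI encoding below are illustrative assumptions, not the historical record itself.

```python
from collections import Counter

def interval_distribution(pitches):
    """Histogram of melodic intervals (semitone differences between
    consecutive notes), the statistic condensed in datasets like the
    one described above. Pitches are MIDI note numbers."""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    total = len(intervals)
    return {iv: n / total for iv, n in Counter(intervals).items()}

# An ascending-then-descending toy melody (C-D-E-F-E-D-C), illustrative
melody = [60, 62, 64, 65, 64, 62, 60]
print(interval_distribution(melody))
```

Comparing such distributions across time slices of a corpus is the kind of input from which the paper's factors identifying stylistic transitions can be recovered.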
Exploring the Consistent behavior of Information Services
Directory of Open Access Journals (Sweden)
Kapidakis Sarantos
2016-01-01
Computer services are normally assumed to work well all the time. This usually holds for crucial services like electronic banking, but not necessarily for others in whose operation there is no commercial interest. In this work we examined the operation and the errors of information services and tried to find clues that help predict the consistency of their behavior and the quality of harvesting, which is difficult because of transient conditions, the large number of services, and the huge amount of harvested information. We found many unexpected situations. Even services that always appear to satisfy a request may in fact return only part of it. A significant fraction of the OAI services have ceased working, while many other services occasionally fail to respond. Some services fail in the same way each time, and we pronounce them dead, as we see no way to overcome that. Others fail always or only sometimes, but not in the same way, and we hope that their behavior is affected by temporary factors that may improve later on. We categorized the services into classes to study their behavior in more detail.
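The classification of services into behavioral classes might be sketched along these lines; the labels and decision rules are an illustrative reading of the categories described above, not the authors' exact scheme.

```python
def classify_service(outcomes):
    """Classify a harvested service from its request history.
    Each outcome is None for a success or an error label for a failure.
    'dead'    -> every request failed, always in the same way
    'flaky'   -> a mix of successes and failures, or varying errors
    'healthy' -> every request succeeded
    Labels and rules are illustrative assumptions, not the paper's."""
    errors = [o for o in outcomes if o is not None]
    if not errors:
        return "healthy"
    if len(errors) == len(outcomes) and len(set(errors)) == 1:
        return "dead"
    return "flaky"

print(classify_service(["timeout", "timeout", "timeout"]))  # dead
print(classify_service([None, "timeout", None]))            # flaky
print(classify_service([None, None]))                       # healthy
```

Separating "always fails identically" from "fails intermittently or in varying ways" is what lets a harvester retire dead endpoints while retrying the ones whose failures may be temporary.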
[Consistent Declarative Memory with Depressive Symptomatology].
Botelho de Oliveira, Silvia; Flórez, Ruth Natalia Suárez; Caballero, Diego Andrés Vásquez
2012-12-01
Some studies have suggested that potentiated remembrance of negative events in people with depressive disorders is an important factor in the etiology, course, and maintenance of depression. The objective was to evaluate emotional memory in people with and without depressive symptomatology by means of an audiovisual test. A total of 73 university students, male and female, aged 18 to 40 years, were evaluated and distributed in two groups, with depressive symptomatology (32) and without (40), using the Center for Epidemiologic Studies Depression Scale (CES-D) with a cut-off point of 20. There were no significant differences in free and voluntary recall between participants with and without depressive symptomatology, even though both groups granted a higher emotional value to the audiovisual test and associated it with sadness. People with depressive symptomatology did not exhibit the mnemonic potentiation effect generally associated with the content of the emotional version of the test; therefore, the emotional consistency hypothesis was not validated. Copyright © 2012 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Self consistent field theory of virus assembly
Li, Siyu; Orland, Henri; Zandi, Roya
2018-04-01
The ground state dominance approximation (GSDA) has been extensively used to study the assembly of viral shells. In this work we employ self-consistent field theory (SCFT) to investigate the adsorption of RNA onto positively charged spherical viral shells and examine the conditions under which GSDA does not apply and SCFT has to be used to obtain a reliable solution. We find that there are two regimes in which GSDA does work: first, when the genomic RNA is long enough compared to the capsid radius, and second, when the interaction between the genome and capsid is so strong that the genome is essentially localized next to the wall. We find that when the RNA is distributed more or less uniformly in the shell, regardless of its length, GSDA is not a good approximation. We observe that as the polymer-shell interaction becomes stronger, the energy gap between the ground state and the first excited state increases, and GSDA thus becomes a better approximation. We also present results for the genome persistence length obtained through the tangent-tangent correlation length and show that it is zero in the case of GSDA but equal to the inverse of the energy gap when using SCFT.
Consistency based correlations for tailings consolidation
Energy Technology Data Exchange (ETDEWEB)
Azam, S.; Paul, A.C. [Regina Univ., Regina, SK (Canada). Environmental Systems Engineering
2010-07-01
The extraction of oil, uranium, metals, and mineral resources from the earth generates significant amounts of tailings slurry. The tailings are contained in a disposal area with perimeter dykes constructed from the coarser fraction of the slurry. Managing these containment facilities for decades beyond mine closure poses unique challenges, a result of the slow settling rates of the fines and the standing toxic water. Many tailings dam failures around the world have been reported to cause significant contaminant releases, raising public concern over the conventional practice of tailings disposal. Therefore, to minimize the environmental footprint, the fluid tailings need to undergo efficient consolidation. This paper presented an investigation into the consolidation behaviour of tailings in conjunction with soil consistency, which captures physicochemical interactions. The paper discussed the large strain consolidation behaviour (volume compressibility and hydraulic conductivity) of six fine-grained soil slurries based on published data. It provided background information on the study, presented the research methodology, and gave the geotechnical index properties of the selected materials. The large strain consolidation results, the volume compressibility correlations, and the hydraulic conductivity correlations were provided. It was concluded that normalized void ratio best described volume compressibility, whereas liquidity index best explained hydraulic conductivity. 17 refs., 3 tabs., 4 figs.
Consistency between GRUAN sondes, LBLRTM and IASI
Directory of Open Access Journals (Sweden)
X. Calbet
2017-06-01
Radiosonde soundings from the GCOS Reference Upper-Air Network (GRUAN) data record are shown to be consistent with radiances measured by the Infrared Atmospheric Sounding Interferometer (IASI) via LBLRTM (Line-By-Line Radiative Transfer Model) in the part of the spectrum that is mostly affected by water vapour absorption in the upper troposphere (from 700 hPa up). This result is key for climate data records, since GRUAN, IASI, and LBLRTM constitute reference measurements or a reference radiative transfer model in each of their fields. This is especially the case for night-time radiosonde measurements. Although the sample size is small (16 cases), daytime GRUAN radiosonde measurements seem to have a small dry bias of 2.5 % in absolute terms of relative humidity, located mainly in the upper troposphere, with respect to LBLRTM and IASI. Full metrological closure is not yet possible and will not be until collocation uncertainties are better characterized and a full uncertainty covariance matrix is provided for GRUAN.
Self-consistent nuclear energy systems
International Nuclear Information System (INIS)
Shimizu, A.; Fujiie, Y.
1995-01-01
A concept of self-consistent nuclear energy systems (SCNES) has been proposed as an ultimate goal of the nuclear energy system in the coming centuries. SCNES should realize a stable and unlimited energy supply without endangering the human race and the global environment. It is defined as a system that realizes at least the following four objectives simultaneously: (a) energy generation: attain high efficiency in the utilization of fission energy; (b) fuel production: secure an inexhaustible energy source by breeding fissile material with a breeding ratio greater than one and completely burning transuranium elements through recycling; (c) burning of radionuclides: zero release of radionuclides from the system, i.e., complete burning of transuranium elements and elimination of radioactive fission products by neutron capture reactions through recycling; (d) system safety: achieve system safety both for the public and for experts by eliminating criticality-related safety issues using natural laws and simple logic. This paper describes the concept of SCNES and discusses the feasibility of the system. Both the ''neutron balance'' and the ''energy balance'' of the system are introduced as necessary conditions to be satisfied by SCNES. Evaluations made so far indicate that both the neutron balance and the energy balance can be realized by fast reactors but not by thermal reactors. Concerning system safety, two safety concepts, ''self-controllability'' and ''self-terminability'', are introduced to eliminate the criticality-related safety issues in fast reactors. (author)
Electromagnetic Basis of Metabolism and Heredity
Freund, Friedemann; Stolc, Viktor
2016-01-01
Living organisms control their cellular biological clocks to maintain functional oscillation of the redox cycle, also called the "metabolic cycle" or "respiratory cycle". Organization of cellular processes requires parallel processing on a synchronized time base. These clocks coordinate the timing of all biochemical processes in the cell, including energy production, DNA replication, and RNA transcription. When this universal timekeeping function is perturbed by exogenous induction of reactive oxygen species (ROS), the rate of metabolism changes, causing oxidative stress, aging, and mutations. Therefore, good temporal coordination of the redox cycle not only actively prevents chemical conflict between the reductive and oxidative partial reactions; it also maintains genome integrity and lifespan. Moreover, this universal biochemical rhythm can be disrupted by ROS induction in vivo, which in turn can be achieved by blocking the electron transport chain either endogenously or exogenously with various metabolites, e.g. hydrogen sulfide (H2S), highly diffusible drugs, and carbon monoxide (CO). Alternatively, electron transport in vivo can be attenuated via coherent or interfering transfer of energy from exogenous ultralow frequency (ULF) and extremely low frequency (ELF) electromagnetic (EM) fields, suggesting that such ambient fields on Earth are an omnipresent (and probably crucially important) factor in the time-setting basis of universal biochemical reactions in living cells. Our work has previously demonstrated undescribed evidence for quantum effects in biology through electromagnetic coupling below thermal noise at the universal electron transport chain (ETC) in vivo.