WorldWideScience

Sample records for minimal basis set

  1. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    Science.gov (United States)

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order of magnitude smaller than in our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
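
The Matching Pursuit step named above is a standard greedy sparse-approximation algorithm. As an illustrative sketch (not the authors' actual implementation, which acts on Gaussian wavepacket bases rather than generic vectors), a minimal version reads:

```python
import numpy as np

def matching_pursuit(signal, dictionary, tol=1e-6, max_atoms=None):
    """Greedy matching pursuit: approximate `signal` with as few
    dictionary columns (basis vectors) as possible.

    dictionary: (n, m) array whose columns are normalized basis vectors.
    Returns (coefficients, indices) of the selected atoms.
    """
    residual = signal.astype(float).copy()
    coeffs, indices = [], []
    limit = max_atoms if max_atoms is not None else dictionary.shape[1]
    for _ in range(limit):
        # Select the atom with the largest overlap with the residual.
        overlaps = dictionary.T @ residual
        k = int(np.argmax(np.abs(overlaps)))
        c = overlaps[k]
        if abs(c) < tol:
            break  # residual is numerically represented; stop early
        residual = residual - c * dictionary[:, k]
        coeffs.append(c)
        indices.append(k)
    return np.array(coeffs), indices
```

For an orthonormal dictionary the loop stops once the residual vanishes, so the number of retained atoms equals the true sparsity of the signal, which is the sense in which the basis is kept "as small as possible."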

  2. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    International Nuclear Information System (INIS)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin

    2016-01-01

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The near-basis-set-limit performance of density functionals is reproduced even more closely. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  3. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers, have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme, with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy obtained with the previously used environmental basis sets and with those optimized in this work have been compared using a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  4. Spectral properties of minimal-basis-set orbitals: Implications for molecular electronic continuum states

    Science.gov (United States)

    Langhoff, P. W.; Winstead, C. L.

    Early studies of the electronically excited states of molecules by John A. Pople and coworkers employing ab initio single-excitation configuration interaction (SECI) calculations helped to stimulate related applications of these methods to the partial-channel photoionization cross sections of polyatomic molecules. The Gaussian representations of molecular orbitals adopted by Pople and coworkers can describe SECI continuum states when sufficiently large basis sets are employed. Minimal-basis virtual Fock orbitals stabilized in the continuous portions of such SECI spectra are generally associated with strong photoionization resonances. The spectral attributes of these resonance orbitals are illustrated here by revisiting previously reported experimental and theoretical studies of molecular formaldehyde (H2CO) in combination with recently calculated continuum orbital amplitudes.

  5. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
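
Given energies at two cardinal numbers, the A + B/L^α model above has a closed-form solution for the basis-set-limit energy. A minimal sketch (the numerical values below are illustrative, not taken from the paper):

```python
def extrapolate_cbs(e_small, l_small, e_large, l_large, alpha=3.0):
    """Two-point extrapolation assuming E(L) = E_CBS + B / L**alpha.

    Writing the model at the two cardinal numbers and eliminating B
    gives the closed form below; alpha may be a global constant or a
    system-dependent exponent obtained from cheaper MP2 calculations.
    """
    num = e_large * l_large**alpha - e_small * l_small**alpha
    den = l_large**alpha - l_small**alpha
    return num / den

# Illustrative check: energies generated exactly from the model with
# E_CBS = -100.0 and B = 0.5 are recovered exactly by the formula.
e_dz = -100.0 + 0.5 / 2**3   # L = 2 (DZ)
e_tz = -100.0 + 0.5 / 3**3   # L = 3 (TZ)
cbs = extrapolate_cbs(e_dz, 2, e_tz, 3)
```

The MP2-based additivity scheme the abstract favors instead adds an MP2 basis-set-correction term to a small-basis CCSD energy rather than extrapolating the CCSD energies themselves.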

  7. Chemical bonding analysis for solid-state systems using intrinsic oriented quasiatomic minimal-basis-set orbitals

    International Nuclear Information System (INIS)

    Yao, Y.X.; Wang, C.Z.; Ho, K.M.

    2010-01-01

    A chemical bonding scheme is presented for the analysis of solid-state systems. The scheme is based on the intrinsic oriented quasiatomic minimal-basis-set orbitals (IO-QUAMBOs) previously developed by Ivanic and Ruedenberg for molecular systems. In the solid-state scheme, IO-QUAMBOs are generated by a unitary transformation of the quasiatomic orbitals located at each site of the system with the criterion of maximizing the sum of the fourth power of the interatomic orbital bond order. Possible bonding and antibonding characters are indicated by the single particle matrix elements, and can be further examined by the projected density of states. We demonstrate the method by applications to graphene and the (6,0) zigzag carbon nanotube. The oriented-orbital scheme automatically describes the system in terms of sp² hybridization. The effect of curvature on the electronic structure of the zigzag carbon nanotube is also manifested in the deformation of the intrinsic oriented orbitals as well as a breaking of symmetry leading to nonzero single particle density matrix elements. In an additional study, the analysis is performed on the Al₃V compound. The main covalent bonding characters are identified in a straightforward way without resorting to symmetry analysis. Our method provides a general way for chemical bonding analysis of ab initio electronic structure calculations with any type of basis set.

  8. Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.

    Science.gov (United States)

    Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads

    2018-06-27

    We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparing with plane wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp² (graphene) and curved carbon (C₆₀). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals, or to adding non-atom-centered states to the basis.

  9. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm starts with the main failure of interest, the top event, and proceeds to basic independent component failures, called primary events, to resolve the fault tree and obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the number of path sets, while an OR gate alone always increases the number of cut sets and increases the size of the path sets. Other types of logic gates must be described in terms of AND and OR logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates.
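
The top-down resolution in point 2 can be sketched as a generic MOCUS-style expansion (a hedged reconstruction from the description above, not the original code): an AND gate merges its children into the current cut set, an OR gate forks the set into one copy per child, and supersets are pruned at the end.

```python
def mocus(gates, top):
    """MOCUS-style expansion of a fault tree into minimal cut sets.

    gates: {name: ("AND" | "OR", [children])}; names not present in
    `gates` are treated as primary (basic) events.
    """
    cut_sets = [frozenset([top])]
    changed = True
    while changed:
        changed = False
        new_sets = []
        for cs in cut_sets:
            gate = next((g for g in cs if g in gates), None)
            if gate is None:           # only primary events remain
                new_sets.append(cs)
                continue
            changed = True
            kind, children = gates[gate]
            rest = cs - {gate}
            if kind == "AND":          # all children join the same cut set
                new_sets.append(rest | frozenset(children))
            else:                      # OR: one new cut set per child
                new_sets.extend(rest | {child} for child in children)
        cut_sets = new_sets
    unique = set(cut_sets)
    # Keep only minimal sets: drop any proper superset of another set.
    minimal = [cs for cs in unique if not any(o < cs for o in unique)]
    return sorted(minimal, key=lambda s: (len(s), sorted(s)))
```

For example, TOP = AND(G1, A) with G1 = OR(B, C) resolves to the two minimal cut sets {A, B} and {A, C}.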

  10. Charge transfer interaction using quasiatomic minimal-basis orbitals in the effective fragment potential method

    International Nuclear Information System (INIS)

    Xu, Peng; Gordon, Mark S.

    2013-01-01

    The charge transfer (CT) interaction, the most time-consuming term in the general effective fragment potential method, is made much more computationally efficient. This is accomplished by the projection of the quasiatomic minimal-basis-set orbitals (QUAMBOs) as the atomic basis onto the self-consistent field virtual molecular orbital (MO) space to select a subspace of the full virtual space called the valence virtual space. The diagonalization of the Fock matrix in terms of QUAMBOs recovers the canonical occupied orbitals and, more importantly, gives rise to the valence virtual orbitals (VVOs). The CT energies obtained using VVOs are generally as accurate as those obtained with the full virtual space canonical MOs because the QUAMBOs span the valence part of the virtual space, which can generally be regarded as “chemically important.” The number of QUAMBOs is the same as the number of minimal-basis MOs of a molecule. Therefore, the number of VVOs is significantly smaller than the number of canonical virtual MOs, especially for large atomic basis sets. This leads to a dramatic decrease in the computational cost

  11. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets of a large fault tree was defined, and a new procedure implementing the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed.

  12. Abelian groups with a minimal generating set | Ruzicka ...

    African Journals Online (AJOL)

    We study the existence of minimal generating sets in Abelian groups. We prove that Abelian groups with minimal generating sets are not closed under quotients, nor under subgroups, nor under infinite products. We give necessary and sufficient conditions for existence of a minimal generating set providing that the Abelian ...

  13. On the effects of basis set truncation and electron correlation in conformers of 2-hydroxy-acetamide

    Science.gov (United States)

    Szarecka, A.; Day, G.; Grout, P. J.; Wilson, S.

    Ab initio quantum chemical calculations have been used to study the differences in energy between two gas phase conformers of the 2-hydroxy-acetamide molecule that possess intramolecular hydrogen bonding. In particular, rotation around the central C-C bond has been considered as a factor determining the structure of the hydrogen bond and stabilization of the conformer. Energy calculations include full geometry optimization using both the restricted matrix Hartree-Fock model and second-order many-body perturbation theory with a number of commonly used basis sets. The basis sets employed ranged from the minimal STO-3G set to 'split-valence' sets up to 6-31G. The effects of polarization functions were also studied. The results display a strong basis set dependence.

  14. Minimization of Basis Risk in Parametric Earthquake Cat Bonds

    Science.gov (United States)

    Franco, G.

    2009-12-01

    A catastrophe (cat) bond is an instrument used by insurance and reinsurance companies, by governments or by groups of nations to cede catastrophic risk to the financial markets, which are capable of supplying cover for highly destructive events, surpassing the typical capacity of traditional reinsurance contracts. Parametric cat bonds, a specific type of cat bond, use trigger mechanisms or indices that depend on physical event parameters published by respected third parties in order to determine whether a part or the entire bond principal is to be paid for a certain event. First generation cat bonds, or cat-in-a-box bonds, use a trigger mechanism that consists of a set of geographic zones in which certain conditions need to be met by an earthquake’s magnitude and depth in order to trigger payment of the bond principal. Second generation cat bonds use an index formulation that typically consists of a sum of products of a set of weights by a polynomial function of the ground motion variables reported by a geographically distributed seismic network. These instruments are especially appealing to developing countries with incipient insurance industries wishing to cede catastrophic losses to the financial markets, because the payment trigger mechanism is transparent and does not involve the parties ceding or accepting the risk, significantly reducing moral hazard. In order to be successful in the market, however, parametric cat bonds have typically been required to specify relatively simple trigger conditions. The consequence of such simplifications is an increase of basis risk. This risk represents the possibility that the trigger mechanism fails to accurately capture the actual losses of a catastrophic event, namely that it does not trigger for a highly destructive event or, vice versa, that a payment of the bond principal is caused by an event that produced insignificant losses. The first case disfavors the sponsor who was seeking cover for its losses while the

  15. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

    This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets with an auxiliary dynamical structure. The presented algorithm enables one to find the minimal cut sets according to defined requirements: the order of the minimal cut sets, the number of minimal cut sets, or both. This algorithm is three to six times faster than the primary version of the CARA algorithm.

  16. Ab initio localized basis set study of structural parameters and elastic properties of HfO2 polymorphs

    International Nuclear Information System (INIS)

    Caravaca, M A; Casali, R A

    2005-01-01

    The SIESTA approach based on pseudopotentials and a localized basis set is used to calculate the electronic, elastic and equilibrium properties of the P2₁/c, Pbca, Pnma, Fm3m, P4₂nmc and Pa3 phases of HfO₂. Using separable Troullier-Martins norm-conserving pseudopotentials which include partial core corrections for Hf, we tested important physical properties as a function of the basis set size, grid size and cut-off ratio of the pseudo-atomic orbitals (PAOs). We found that calculations in this oxide with the LDA approach and using a minimal basis set (single zeta, SZ) improve calculated phase transition pressures with respect to the double-zeta basis set and LDA (DZ-LDA), and show similar accuracy to that determined with the PPPW and GGA approach. Still, the equilibrium volumes and structural properties calculated with SZ-LDA compare better with experiments than the GGA approach. The bandgaps and elastic and structural properties calculated with DZ-LDA are accurate, in agreement with previous state-of-the-art ab initio calculations and experimental evidence, and cannot be improved with a polarized basis set. These calculated properties show low sensitivity to the PAO localization parameter in the range between 40 and 100 meV. However, this is not true for the relative energy, which improves upon decrease of the mentioned parameter. We found non-linear behaviour of the lattice parameters with pressure in the P2₁/c phase, showing a discontinuity of the derivative of the a lattice parameter with respect to external pressure, as found in experiments. The common enthalpy values calculated with the minimal basis set give transition pressures of 3.3 and 10.8 GPa for P2₁/c → Pbca and Pbca → Pnma, respectively, in accordance with different high pressure experimental values.

  17. Dynamical basis set

    International Nuclear Information System (INIS)

    Blanco, M.; Heller, E.J.

    1985-01-01

    A new Cartesian basis set is defined that is suitable for the representation of molecular vibration-rotation bound states. The Cartesian basis functions are superpositions of semiclassical states generated through the use of classical trajectories that conform to the intrinsic dynamics of the molecule. Although semiclassical input is employed, the method becomes ab initio through the standard matrix diagonalization variational method. Special attention is given to classical-quantum correspondences for angular momentum. In particular, it is shown that the use of semiclassical information preferentially leads to angular momentum eigenstates with magnetic quantum number |M| equal to the total angular momentum J. The present method offers a reliable technique for representing highly excited vibrational-rotational states where perturbation techniques are no longer applicable.

  18. Enumeration of minimal stoichiometric precursor sets in metabolic networks.

    Science.gov (United States)

    Andrade, Ricardo; Wannagat, Martin; Klein, Cecilia C; Acuña, Vicente; Marchetti-Spaccamela, Alberto; Milreu, Paulo V; Stougie, Leen; Sagot, Marie-France

    2016-01-01

    What an organism needs at least from its environment to produce a set of metabolites, e.g. target(s) of interest and/or biomass, has been called a minimal precursor set. Early approaches to enumerate all minimal precursor sets took into account only the topology of the metabolic network (topological precursor sets). Due to cycles and the stoichiometric values of the reactions, it is often not possible to produce the target(s) from a topological precursor set, in the sense that there is no feasible flux. Although considering the stoichiometry makes the problem harder, it enables one to obtain biologically reasonable precursor sets that we call stoichiometric. Recently a method to enumerate all minimal stoichiometric precursor sets was proposed in the literature. The relationship between topological and stoichiometric precursor sets had, however, not yet been studied; here we highlight this relationship. We also present two algorithms that enumerate all minimal stoichiometric precursor sets. The first one is of theoretical interest only and is based on the above mentioned relationship. The second approach solves a series of mixed integer linear programming problems. We compared the computed minimal precursor sets to experimentally obtained growth media of several Escherichia coli strains using genome-scale metabolic networks. The results show that the second approach efficiently enumerates minimal precursor sets taking stoichiometry into account, and allows for broad in silico studies of strains or species interactions that may help to understand e.g. pathotype and niche-specific metabolic capabilities. sasita is written in Java, uses CPLEX as the LP solver, and can be downloaded together with all networks and input files used in this paper at http://www.sasita.gforge.inria.fr.
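
As a toy illustration of the topological (stoichiometry-free) notion discussed above, the sketch below enumerates inclusion-minimal precursor sets by brute-force forward closure over a hypothetical reaction network; the paper's sasita tool instead solves a series of mixed integer linear programs and accounts for stoichiometry.

```python
from itertools import combinations

def producible(sources, reactions, target):
    """Topological forward closure: a reaction fires once all of its
    input metabolites are available, adding its outputs to the pool."""
    avail = set(sources)
    changed = True
    while changed:
        changed = False
        for inputs, outputs in reactions:
            if inputs <= avail and not outputs <= avail:
                avail |= outputs
                changed = True
    return target in avail

def minimal_precursor_sets(candidates, reactions, target):
    """Enumerate inclusion-minimal subsets of `candidates` from which
    `target` is producible (brute force; small networks only)."""
    found = []
    for r in range(1, len(candidates) + 1):
        for combo in combinations(sorted(candidates), r):
            subset = set(combo)
            if any(f <= subset for f in found):
                continue  # a smaller precursor set already covers this
            if producible(subset, reactions, target):
                found.append(subset)
    return found

# Hypothetical network: T is made either from A and B together, or from C.
reactions = [(frozenset({"A", "B"}), frozenset({"T"})),
             (frozenset({"C"}), frozenset({"T"}))]
```

Here the minimal topological precursor sets for T are {C} and {A, B}; with stoichiometry and cycles taken into account, some such topological sets may admit no feasible flux, which is exactly the gap the paper studies.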

  19. Speech/Nonspeech Detection Using Minimal Walsh Basis Functions

    Directory of Open Access Journals (Sweden)

    Pwint Moe

    2007-01-01

    This paper presents a new method to detect speech/nonspeech components of a given noisy signal. Employing the combination of binary Walsh basis functions and an analysis-synthesis scheme, the original noisy speech signal is modified first. From the modified signals, the speech components are distinguished from the nonspeech components by using a simple decision scheme. The minimal number of Walsh basis functions to be applied is determined using singular value decomposition (SVD). The main advantages of the proposed method are low computational complexity, fewer parameters to be adjusted, and simple implementation. It is observed that the use of Walsh basis functions makes the proposed algorithm efficiently applicable in real-world situations where processing time is crucial. Simulation results indicate that the proposed algorithm achieves high speech and nonspeech detection rates while maintaining a low error rate for different noisy conditions.
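
The two ingredients named above, binary Walsh basis functions and an SVD-based count of how many are needed, can be sketched as follows. The 99% energy threshold and the function names are assumptions for illustration, not the paper's actual decision scheme.

```python
import numpy as np

def walsh_matrix(n):
    """Hadamard-ordered ±1 Walsh basis of size 2**n (Sylvester construction)."""
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

def minimal_walsh_count(frames, energy=0.99):
    """Estimate how many Walsh coefficients retain `energy` of the signal
    frames, via the singular values of the coefficient matrix.

    frames: (num_frames, 2**n) array of signal frames.
    """
    H = walsh_matrix(int(np.log2(frames.shape[1])))
    coeffs = frames @ H.T / frames.shape[1]   # Walsh expansion per frame
    s = np.linalg.svd(coeffs, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)
```

Because the Walsh functions take only the values ±1, the expansion needs no multiplications beyond sign flips, which is the source of the low computational complexity claimed in the abstract.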

  1. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, systems containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.

  2. Minimal generating sets of groups, rings, and fields | Halbeisen ...

    African Journals Online (AJOL)

    A subset X of a group (or a ring, or a field) is called generating if the smallest subgroup (or subring, or subfield) containing X is the group (ring, field) itself. A generating set X is called minimal generating if X does not properly contain any generating set. The existence and cardinalities of minimal generating sets of various ...

  3. Density Functional Theory and the Basis Set Truncation Problem with Correlation Consistent Basis Sets: Elephant in the Room or Mouse in the Closet?

    Science.gov (United States)

    Feller, David; Dixon, David A

    2018-03-08

    Two recent papers in this journal called into question the suitability of the correlation consistent basis sets for density functional theory (DFT) calculations, because the sets were designed for correlated methods such as configuration interaction, perturbation theory, and coupled cluster theory. These papers focused on the ability of the correlation consistent and other basis sets to reproduce total energies, atomization energies, and dipole moments obtained from "quasi-exact" multiwavelet results. Undesirably large errors were observed for the correlation consistent basis sets. One of the papers argued that basis sets specifically optimized for DFT methods were "essential" for obtaining high accuracy. In this work we re-examined the performance of the correlation consistent basis sets by resolving problems with the previous calculations and by making more appropriate basis set choices for the alkali and alkaline-earth metals and second-row elements. When this is done, the statistical errors with respect to the benchmark values and with respect to DFT optimized basis sets are greatly reduced, especially in light of the relatively large intrinsic error of the underlying DFT method. When judged with respect to high-quality Feller-Peterson-Dixon coupled cluster theory atomization energies, the PBE0 DFT method used in the previous studies exhibits a mean absolute deviation more than a factor of 50 larger than the quintuple zeta basis set truncation error.

  4. Minimal cut-set methodology for artificial intelligence applications

    International Nuclear Information System (INIS)

    Weisbin, C.R.; de Saussure, G.; Barhen, J.; Oblow, E.M.; White, J.C.

    1984-01-01

    This paper reviews minimal cut-set theory and illustrates its application with an example. The minimal cut-set approach uses disjunctive normal form in Boolean algebra and various Boolean operators to simplify very complicated tree structures composed of AND/OR gates. The simplification process is automated and performed off-line using existing computer codes to implement the Boolean reduction on the finite, but large, tree structure. With this approach, on-line expert diagnostic systems, whose response time is critical, can determine directly whether a goal is achievable by comparing the actual system state to a concisely stored set of preprocessed critical state elements.
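
The Boolean reduction described in this record can be sketched in a few lines: expand the AND/OR gates into disjunctive normal form over basic events, then apply the absorption law to discard non-minimal cut sets. The small fault tree below is an invented illustration, not an example from the paper.

```python
from itertools import product

# A fault tree node is ("basic", name), ("and", children) or ("or", children).
def cut_sets(node):
    """Expand a fault tree into disjunctive normal form:
    a list of cut sets, each a frozenset of basic-event names."""
    kind = node[0]
    if kind == "basic":
        return [frozenset([node[1]])]
    child_sets = [cut_sets(c) for c in node[1]]
    if kind == "or":                      # union of the children's cut sets
        return [cs for sets in child_sets for cs in sets]
    # "and": every combination of one cut set per child, merged
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Absorption law: drop any cut set that strictly contains another."""
    return [s for s in sets if not any(t < s for t in sets)]

# Example tree: TOP = (A AND B) OR (A AND B AND C) OR C
tree = ("or", [("and", [("basic", "A"), ("basic", "B")]),
               ("and", [("basic", "A"), ("basic", "B"), ("basic", "C")]),
               ("basic", "C")])
mcs = set(minimize(cut_sets(tree)))
# {A,B,C} is absorbed by both {A,B} and {C}
print(sorted(sorted(s) for s in mcs))  # -> [['A', 'B'], ['C']]
```

Real codes avoid the exponential blow-up of naive expansion with ordering heuristics and modularization, but the algebra is the same.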

  5. KCUT, code to generate minimal cut sets for fault trees

    International Nuclear Information System (INIS)

    Han, Sang Hoon

    2008-01-01

    1 - Description of program or function: KCUT is a software package that generates minimal cut sets for fault trees. 2 - Methods: Expand a fault tree into cut sets and delete the non-minimal cut sets. 3 - Restrictions on the complexity of the problem: Size and complexity of the fault tree.

  6. The Dirac Equation in the algebraic approximation. VII. A comparison of molecular finite difference and finite basis set calculations using distributed Gaussian basis sets

    NARCIS (Netherlands)

    Quiney, H. M.; Glushkov, V. N.; Wilson, S.; Sabin; Brandas, E.

    2001-01-01

    A comparison is made of the accuracy achieved in finite difference and finite basis set approximations to the Dirac equation for the ground state of the hydrogen molecular ion. The finite basis set calculations are carried out using a distributed basis set of Gaussian functions the exponents and

  7. Obtaining a minimal set of rewrite rules

    CSIR Research Space (South Africa)

    Davel, M

    2005-11-01

    Full Text Available In this paper the authors describe a new approach to rewrite rule extraction and analysis, using Minimal Representation Graphs. This approach provides a mechanism for obtaining the smallest possible rule set – within a context-dependent rewrite rule...

  8. A comparison of the behavior of functional/basis set combinations for hydrogen-bonding in the water dimer with emphasis on basis set superposition error.

    Science.gov (United States)

    Plumley, Joshua A; Dannenberg, J J

    2011-06-01

    We evaluate the performance of ten functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D, and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-density functional theory (non-DFT) molecular orbital (MO) calculations and to experimental results. Several of the smaller basis sets lead to qualitatively incorrect geometries when optimized on a normal potential energy surface (PES). This problem disappears when the optimization is performed on a counterpoise (CP) corrected PES. The calculated interaction energies (ΔEs) with the largest basis sets vary from -4.42 (B97D) to -5.19 (B2PLYPD) kcal/mol for the different functionals. Small basis sets generally predict stronger interactions than the large ones. We found that, because of error compensation, the smaller basis sets gave the best results (in comparison to experimental and high-level non-DFT MO calculations) when combined with a functional that predicts a weak interaction with the largest basis set. As many applications are complex systems and require economical calculations, we suggest the following functional/basis set combinations in order of increasing complexity and cost: (1) D95(d,p) with B3LYP, B97D, M06, or MPWB1k; (2) 6-311G(d,p) with B3LYP; (3) D95++(d,p) with B3LYP, B97D, or MPWB1K; (4) 6-311++G(d,p) with B3LYP or B97D; and (5) aug-cc-pVDZ with M05-2X, M06-2X, or X3LYP. Copyright © 2011 Wiley Periodicals, Inc.
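
The counterpoise (CP) correction referred to in this record compares each monomer computed in its own basis with the same monomer computed in the full dimer basis (ghost functions on the partner). A minimal arithmetic sketch of the Boys-Bernardi scheme, with hypothetical energies that are not from the paper:

```python
# Boys-Bernardi counterpoise correction, sketched with hypothetical
# energies in hartree (invented for illustration).
E_dimer_AB = -152.7500   # water dimer in the dimer basis
E_monoA_A  =  -76.3700   # monomer A in its own basis
E_monoB_B  =  -76.3700   # monomer B in its own basis
E_monoA_AB =  -76.3720   # monomer A with ghost functions of B (dimer basis)
E_monoB_AB =  -76.3715   # monomer B with ghost functions of A

hartree_to_kcal = 627.5095

dE_raw = E_dimer_AB - E_monoA_A - E_monoB_B    # uncorrected interaction energy
dE_cp  = E_dimer_AB - E_monoA_AB - E_monoB_AB  # CP-corrected interaction energy
bsse   = dE_raw - dE_cp                        # BSSE estimate (negative)

print(round(dE_raw * hartree_to_kcal, 2), "kcal/mol (uncorrected)")
print(round(dE_cp * hartree_to_kcal, 2), "kcal/mol (CP-corrected)")
```

Because the monomers are artificially stabilized by the partner's basis functions, the CP-corrected interaction energy is always weaker (less negative) than the raw one, which is the effect the abstract's "CP-corrected PES" optimizations remove.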

  9. BACFIRE, Minimal Cut Sets Common Cause Failure Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: BACFIRE, designed to aid in common cause failure analysis, searches among the basic events of a minimal cut set of the system logic model for common potential causes of failure. A potential cause of failure is called a qualitative failure characteristic. The algorithm searches the qualitative failure characteristics (supplied as program input) of the basic events contained in a set to find those characteristics common to all basic events. This search is repeated for all cut sets input to the program. Common cause failure analysis is thereby performed without including secondary failures in the system logic model. By using BACFIRE, a common cause failure analysis can be added to an existing system safety and reliability analysis. 2 - Method of solution: BACFIRE searches the qualitative failure characteristics of the basic events contained in the fault tree minimal cut set to find those characteristics common to all basic events by either of two criteria. The first criterion is met if all the basic events in a minimal cut set are associated by a condition which alone may increase the probability of multiple component malfunction. The second criterion is met if all the basic events in a minimal cut set are susceptible to the same secondary failure cause and are located in the same domain for that cause of secondary failure. 3 - Restrictions on the complexity of the problem - Maxima of: 1001 secondary failure maps, 101 basic events, 10 cut sets.
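
The search BACFIRE performs on each minimal cut set is, at its core, a set intersection over the qualitative failure characteristics of the cut set's basic events. The events and characteristics below are illustrative, not taken from the code's actual input format:

```python
# Qualitative failure characteristics per basic event (illustrative data).
characteristics = {
    "pump_A_fails":  {"location:room_101", "cause:flooding", "maker:ACME"},
    "pump_B_fails":  {"location:room_101", "cause:flooding", "maker:XYZ"},
    "valve_C_fails": {"location:room_202", "cause:flooding"},
}

def common_causes(cut_set):
    """Characteristics shared by every basic event in the cut set
    (a sketch of BACFIRE's first criterion)."""
    sets = [characteristics[event] for event in cut_set]
    return set.intersection(*sets)

for cs in [{"pump_A_fails", "pump_B_fails"},
           {"pump_A_fails", "valve_C_fails"}]:
    print(sorted(cs), "->", sorted(common_causes(cs)))
```

The second criterion (same secondary failure cause *and* same physical domain) would simply intersect two such characteristic sets per event instead of one.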

  10. IMPORTANCE, Minimal Cut Sets and System Availability from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Lambert, H. W.

    1987-01-01

    1 - Description of problem or function: IMPORTANCE computes various measures of probabilistic importance of basic events and minimal cut sets to a fault tree or reliability network diagram. The minimal cut sets, the failure rates and the fault duration times (i.e., the repair times) of all basic events contained in the minimal cut sets are supplied as input data. The failure and repair distributions are assumed to be exponential. IMPORTANCE, a quantitative evaluation code, then determines the probability of the top event and computes the importance of minimal cut sets and basic events by a numerical ranking. Two measures are computed. The first describes system behavior at one point in time; the second describes sequences of failures that cause the system to fail in time. All measures are computed assuming statistical independence of basic events. In addition, system unavailability and expected number of system failures are computed by the code. 2 - Method of solution: Seven measures of basic event importance and two measures of cut set importance can be computed. Birnbaum's measure of importance (i.e., the partial derivative) and the probability of the top event are computed using the min cut upper bound. If there are no replicated events in the minimal cut sets, then the min cut upper bound is exact. If basic events are replicated in the minimal cut sets, then, based on experience, the min cut upper bound is accurate if the probability of the top event is less than 0.1. Simpson's rule is used in computing the time-integrated measures of importance. Newton's method for approximating the roots of an equation is employed in the options where the importance measures are computed as a function of the probability of the top event, and a shell sort puts the output in descending order of importance.
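
The two quantities named in this record have simple closed forms. With statistically independent basic events, the min cut upper bound for the top event is Q_top = 1 - prod_k (1 - P(C_k)), and Birnbaum's measure for event i is the partial derivative of Q_top with respect to p_i. A numerical sketch with invented probabilities:

```python
from math import prod

# Minimal cut sets and basic-event unavailabilities (invented numbers).
cut_sets = [{"A", "B"}, {"C"}]
p = {"A": 0.1, "B": 0.2, "C": 0.05}

def top_probability(p):
    """Min cut upper bound: 1 - prod_k (1 - P(cut set k))."""
    return 1.0 - prod(1.0 - prod(p[e] for e in cs) for cs in cut_sets)

def birnbaum(event, p, h=1e-6):
    """Birnbaum importance: numerical partial derivative dQ_top/dp_i."""
    hi = dict(p); hi[event] += h
    lo = dict(p); lo[event] -= h
    return (top_probability(hi) - top_probability(lo)) / (2 * h)

q = top_probability(p)     # 1 - (1 - 0.02)(1 - 0.05) = 0.069
ranking = sorted(p, key=lambda e: birnbaum(e, p), reverse=True)
print(round(q, 4), ranking)
```

Here the single-event cut set {C} dominates the ranking even though C has the smallest unavailability, which is exactly the kind of insight the numerical ranking in IMPORTANCE is meant to surface.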

  11. The aug-cc-pVnZ-F12 basis set family: Correlation consistent basis sets for explicitly correlated benchmark calculations on anions and noncovalent complexes.

    Science.gov (United States)

    Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L

    2017-10-07

    We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.

  12. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
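
As a rough illustration of the smoothing step (not the authors' algorithm, which additionally uses bootstrap test error estimation to choose the smoothing parameter and a k-nearest-neighbour search), a thin-plate spline with an explicit smoothing parameter can be fit to scattered height data with plain numpy:

```python
import numpy as np

def tps_fit(pts, z, smooth=1e-3):
    """Fit z = f(x, y) with a thin-plate spline, kernel phi(r) = r^2 log r.
    `smooth` adds lambda*I to the kernel block: larger -> smoother surface."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    P = np.hstack([np.ones((n, 1)), pts])        # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + smooth * np.eye(n)
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    w, a = sol[:n], sol[n:]

    def f(q):                                    # evaluate at query points q
        dq = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            Kq = np.where(dq > 0, dq**2 * np.log(dq), 0.0)
        return Kq @ w + np.hstack([np.ones((len(q), 1)), q]) @ a
    return f

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(60, 2))
z = pts[:, 0] + pts[:, 1] + rng.normal(0, 0.02, size=60)  # noisy plane
f = tps_fit(pts, z, smooth=1e-2)
denoised = f(pts)   # project the noisy heights onto the smooth surface
```

Projecting the points onto the fitted surface is the denoising step; the bootstrap in the paper addresses how to pick `smooth` without overfitting the noise.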

  13. Groebner basis, resultants and the generalized Mandelbrot set

    Energy Technology Data Exchange (ETDEWEB)

    Geum, Young Hee [Centre of Research for Computational Sciences and Informatics in Biology, Bioindustry, Environment, Agriculture and Healthcare, University of Malaya, 50603 Kuala Lumpur (Malaysia)], E-mail: conpana@empal.com; Hare, Kevin G. [Department of Pure Mathematics, University of Waterloo, Waterloo, Ont., N2L 3G1 (Canada)], E-mail: kghare@math.uwaterloo.ca

    2009-10-30

    This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.

  14. Groebner basis, resultants and the generalized Mandelbrot set

    International Nuclear Information System (INIS)

    Geum, Young Hee; Hare, Kevin G.

    2009-01-01

    This paper demonstrates how the Groebner basis algorithm can be used for finding the bifurcation points in the generalized Mandelbrot set. It also shows how resultants can be used to find components of the generalized Mandelbrot set.

  15. Localized atomic basis set in the projector augmented wave method

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth; Vanin, Marco; Mortensen, Jens Jørgen

    2009-01-01

    We present an implementation of localized atomic-orbital basis sets in the projector augmented wave (PAW) formalism within the density-functional theory. The implementation in the real-space GPAW code provides a complementary basis set to the accurate but computationally more demanding grid...

  16. Acquiring minimally invasive surgical skills

    NARCIS (Netherlands)

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room.

  17. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Directory of Open Access Journals (Sweden)

    Khang Jie Liew

    Full Text Available. This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  18. Dynamical pruning of static localized basis sets in time-dependent quantum dynamics

    NARCIS (Netherlands)

    McCormack, D.A.

    2006-01-01

    We investigate the viability of dynamical pruning of localized basis sets in time-dependent quantum wave packet methods. Basis functions that have a very small population at any given time are removed from the active set. The basis functions themselves are time independent, but the set of active

  19. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's target is small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which also a relaxed rotational energy profile is presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model

  20. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems.

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-21

    A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's target is small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which also a relaxed rotational energy profile is presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model

  1. Basis set construction for molecular electronic structure theory: natural orbital and Gauss-Slater basis for smooth pseudopotentials.

    Science.gov (United States)

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2011-02-14

    A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.

  2. Minimal set of auxiliary fields and S-matrix for extended supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Fradkin, E.S.; Vasiliev, M.A. [Lebedev Physical Institute, Moscow]

    1979-05-19

    A minimal set of auxiliary fields for linearized SO(2) supergravity and a one-parameter extension of the minimal auxiliary fields in SO(1) supergravity are constructed. The expression for the S-matrix in SO(2) supergravity is given.

  3. On the performance of atomic natural orbital basis sets: A full configuration interaction study

    International Nuclear Information System (INIS)

    Illas, F.; Ricart, J.M.; Rubio, J.; Bagus, P.S.

    1990-01-01

    The performance of atomic natural orbital (ANO) basis sets has been studied by comparing self-consistent field (SCF) and full configuration interaction (CI) results obtained for the first row atoms and hydrides. The ANO results have been compared with those obtained using a segmented basis set containing the same number of contracted basis functions. The total energies obtained with the ANO basis sets are always lower than those obtained using the segmented one. However, for the hydrides, the differential electronic correlation energy obtained with the ANO basis set may be smaller than that recovered with the segmented set. We relate this poorer differential correlation energy for the ANO basis set to the fact that only one contracted d function is used in both the ANO and segmented basis sets.

  4. Molecular basis sets - a general similarity-based approach for representing chemical spaces.

    Science.gov (United States)

    Raghavendra, Akshay S; Maggiora, Gerald M

    2007-01-01

    A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal, but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties including the average squared error and average norm of molecular vectors represented in the space-the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size and several comparative distance measures such as Spearman rank correlation and Kruscal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent, nonetheless, behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed providing that the appropriate inner products can be directly
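
The symmetric orthogonalization step mentioned in this record has a standard linear-algebra form: given a similarity matrix S with unit diagonal, interpreted as the Gram matrix of inner products among the molecular basis vectors, the orthonormal basis is obtained from X = S^(-1/2), the orthogonalization that stays closest (in a least-squares sense) to the original nonorthogonal vectors. The 3×3 similarity values here are invented for illustration:

```python
import numpy as np

# Pairwise molecular similarities, interpreted as inner products
# (invented values; unit diagonal = each molecule's self-similarity).
S = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Loewdin symmetric orthogonalization: X = S^(-1/2) via eigendecomposition.
evals, evecs = np.linalg.eigh(S)
assert evals.min() > 0, "S must be positive definite to define inner products"
X = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Columns of X are the coefficients of the orthonormal "molecule-like"
# basis vectors in the original nonorthogonal basis: X.T @ S @ X = I.
print(np.allclose(X.T @ S @ X, np.eye(3)))  # -> True
```

Note the positive-definiteness check: a similarity matrix over many molecules can become near-singular as the basis grows, which connects to the basis-set-size effects the abstract analyzes.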

  5. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    Science.gov (United States)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

    New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  6. Towards a minimal generic set of domains of functioning and health.

    Science.gov (United States)

    Cieza, Alarcos; Oberhauser, Cornelia; Bickenbach, Jerome; Chatterji, Somnath; Stucki, Gerold

    2014-03-03

    The World Health Organization (WHO) has argued that functioning, and, more concretely, functioning domains constitute the operationalization that best captures our intuitive notion of health. Functioning is, therefore, a major public-health goal. A great deal of data about functioning is already available. Nonetheless, it is not possible to compare and optimally utilize this information. One potential approach to address this challenge is to propose a generic and minimal set of functioning domains that captures the experience of individuals and populations with respect to functioning and health. The objective of this investigation was to identify a minimal generic set of ICF domains suitable for describing functioning in adults at both the individual and population levels. We performed a psychometric study using data from: 1) the German National Health Interview and Examination Survey 1998, 2) the United States National Health and Nutrition Examination Survey 2007/2008, and 3) the ICF Core Set studies. Random Forests and Group Lasso regression were applied using one self-reported general-health question as a dependent variable. The domains selected were compared to those of the World Health Survey (WHS) developed by the WHO. Seven domains of the International Classification of Functioning, Disability and Health (ICF) are proposed as a minimal generic set of functioning and health: energy and drive functions, emotional functions, sensation of pain, carrying out daily routine, walking, moving around, and remunerative employment. The WHS domains of self-care, cognition, interpersonal activities, and vision were not included in our selection. The minimal generic set proposed in this study is the starting point to address one of the most important challenges in health measurement--the comparability of data across studies and countries. 
It also represents the first step in developing a common metric of health to link information from the general population to information

  7. Comparison of Property-Oriented Basis Sets for the Computation of Electronic and Nuclear Relaxation Hyperpolarizabilities.

    Science.gov (United States)

    Zaleśny, Robert; Baranowska-Łączkowska, Angelika; Medveď, Miroslav; Luis, Josep M

    2015-09-08

    In the present work, we perform an assessment of several property-oriented atomic basis sets in computing (hyper)polarizabilities with a focus on the vibrational contributions. Our analysis encompasses the Pol and LPol-ds basis sets of Sadlej and co-workers, the def2-SVPD and def2-TZVPD basis sets of Rappoport and Furche, and the ORP basis set of Baranowska-Łączkowska and Łączkowski. Additionally, we use the d-aug-cc-pVQZ and aug-cc-pVTZ basis sets of Dunning and co-workers to determine the reference estimates of the investigated electric properties for small- and medium-sized molecules, respectively. We combine these basis sets with ab initio post-Hartree-Fock quantum-chemistry approaches (including the coupled cluster method) to calculate electronic and nuclear relaxation (hyper)polarizabilities of carbon dioxide, formaldehyde, cis-diazene, and a medium-sized Schiff base. The primary finding of our study is that, among all studied property-oriented basis sets, only the def2-TZVPD and ORP basis sets yield nuclear relaxation (hyper)polarizabilities of small molecules with average absolute errors less than 5.5%. A similar accuracy for the nuclear relaxation (hyper)polarizabilites of the studied systems can also be reached using the aug-cc-pVDZ basis set (5.3%), although for more accurate calculations of vibrational contributions, i.e., average absolute errors less than 1%, the aug-cc-pVTZ basis set is recommended. It was also demonstrated that anharmonic contributions to first and second hyperpolarizabilities of a medium-sized Schiff base are particularly difficult to accurately predict at the correlated level using property-oriented basis sets. For instance, the value of the nuclear relaxation first hyperpolarizability computed at the MP2/def2-TZVPD level of theory is roughly 3 times larger than that determined using the aug-cc-pVTZ basis set. We link the failure of the def2-TZVPD basis set with the difficulties in predicting the first-order field

  8. Automatic Generation of Minimal Cut Sets

    Directory of Open Access Journals (Sweden)

    Sentot Kromodimoeljo

    2015-06-01

    Full Text Available. A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun, at each step. By contrast, the new approach works incrementally and fully automatically, thereby removing the tedious and error-prone manual process and resulting in significantly reduced computation time. This in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: their relative efficiency depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.
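
The incremental idea can be sketched independently of any particular model checker: search failure sets in order of increasing size and block supersets of cut sets already found, so every new hit is minimal by construction. The two-line hydraulic example below is invented, not the A320 case study:

```python
from itertools import combinations

def minimal_cut_sets(components, system_fails):
    """Enumerate minimal cut sets of a monotone failure predicate
    incrementally: smaller sets first, blocking supersets of known cuts."""
    found = []
    for k in range(1, len(components) + 1):
        for combo in combinations(components, k):
            s = frozenset(combo)
            if any(c <= s for c in found):  # blocked: contains a known cut set
                continue
            if system_fails(s):
                found.append(s)             # minimal by construction
    return found

# Invented redundancy: the system fails only if both hydraulic lines are
# lost; each line is lost if its pump or its reservoir fails.
def fails(failed):
    line1_lost = bool({"pump1", "res1"} & failed)
    line2_lost = bool({"pump2", "res2"} & failed)
    return line1_lost and line2_lost

mcs = minimal_cut_sets(["pump1", "res1", "pump2", "res2"], fails)
print(sorted(sorted(s) for s in mcs))
```

In the paper's setting, `system_fails` is replaced by a model-checking query (does some behaviour satisfying the LTL failure formula exist under these faults?), and blocking is done by adding constraints rather than rerunning on a hand-modified model.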

  9. Acquiring minimally invasive surgical skills

    OpenAIRE

    Hiemstra, Ellen

    2012-01-01

    Many topics in surgical skills education have been implemented without a solid scientific basis. For that reason we have tried to find this scientific basis. We have focused on training and evaluation of minimally invasive surgical skills in a training setting and in practice in the operating room. This thesis has provided greater insight into the organization of surgical skills training during residency training of surgical medical specialists.

  10. A Comparison of the Behavior of Functional/Basis Set Combinations for Hydrogen-Bonding in the Water Dimer with Emphasis on Basis Set Superposition Error

    OpenAIRE

    Plumley, Joshua A.; Dannenberg, J. J.

    2011-01-01

    We evaluate the performance of nine functionals (B3LYP, M05, M05-2X, M06, M06-2X, B2PLYP, B2PLYPD, X3LYP, B97D and MPWB1K) in combination with 16 basis sets ranging in complexity from 6-31G(d) to aug-cc-pV5Z for the calculation of the H-bonded water dimer with the goal of defining which combinations of functionals and basis sets provide a combination of economy and accuracy for H-bonded systems. We have compared the results to the best non-DFT molecular orbital calculations and to experimenta...

  11. Useful lower limits to polarization contributions to intermolecular interactions using a minimal basis of localized orthogonal orbitals: theory and analysis of the water dimer.

    Science.gov (United States)

    Azar, R Julian; Horn, Paul Richard; Sundstrom, Eric Jon; Head-Gordon, Martin

    2013-02-28

    The problem of describing the energy-lowering associated with polarization of interacting molecules is considered in the overlapping regime for self-consistent field wavefunctions. The existing approach of solving for absolutely localized molecular orbital (ALMO) coefficients that are block-diagonal in the fragments is shown based on formal grounds and practical calculations to often overestimate the strength of polarization effects. A new approach using a minimal basis of polarized orthogonal local MOs (polMOs) is developed as an alternative. The polMO basis is minimal in the sense that one polarization function is provided for each unpolarized orbital that is occupied; such an approach is exact in second-order perturbation theory. Based on formal grounds and practical calculations, the polMO approach is shown to underestimate the strength of polarization effects. In contrast to the ALMO method, however, the polMO approach yields results that are very stable to improvements in the underlying AO basis expansion. Combining the ALMO and polMO approaches allows an estimate of the range of energy-lowering due to polarization. Extensive numerical calculations on the water dimer using a large range of basis sets with Hartree-Fock theory and a variety of different density functionals illustrate the key considerations. Results are also presented for the polarization-dominated Na(+)CH4 complex. Implications for energy decomposition analysis of intermolecular interactions are discussed.

  12. Some considerations about Gaussian basis sets for electric property calculations

    Science.gov (United States)

    Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.

    Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) for the atoms from H to Ar were reported. In this work, with the objective of having a better description of polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions that were chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, electric dipole moment and static polarizability for a sample of molecules were evaluated. Comparison with experimental data and results obtained with a similar size basis set, whose diffuse functions were optimized for the ground state energy of the anion, was done.

  13. Development of new auxiliary basis functions of the Karlsruhe segmented contracted basis sets including diffuse basis functions (def2-SVPD, def2-TZVPPD, and def2-QZVPPD) for RI-MP2 and RI-CC calculations.

    Science.gov (United States)

    Hellweg, Arnim; Rappoport, Dmitrij

    2015-01-14

    We report optimized auxiliary basis sets for use with the Karlsruhe segmented contracted basis sets including moderately diffuse basis functions (Rappoport and Furche, J. Chem. Phys., 2010, 133, 134105) in resolution-of-the-identity (RI) post-self-consistent field (post-SCF) computations for the elements H-Rn (except lanthanides). The errors of the RI approximation using optimized auxiliary basis sets are analyzed on a comprehensive test set of molecules containing the most common oxidation states of each element and do not exceed those of the corresponding unaugmented basis sets. During these studies an unsatisfactory performance of the def2-SVP and def2-QZVPP auxiliary basis sets for barium was found and improved sets are provided. We establish the versatility of the def2-SVPD, def2-TZVPPD, and def2-QZVPPD basis sets for RI-MP2 and RI-CC (coupled-cluster) energy and property calculations. The influence of diffuse basis functions on correlation energy, basis set superposition error, atomic electron affinity, dipole moments, and computational timings is evaluated at different levels of theory using benchmark sets and showcase examples.

  14. Denjoy minimal sets and Birkhoff periodic orbits for non-exact monotone twist maps

    Science.gov (United States)

    Qin, Wen-Xin; Wang, Ya-Nan

    2018-06-01

    A non-exact monotone twist map φ̄_F is the composition of an exact monotone twist map φ̄ with generating function H and a vertical translation V_F, where V_F((x, y)) = (x, y − F). We show in this paper that for each ω ∈ ℝ there exists a critical value F_d(ω) ≥ 0, depending on H and ω, such that for 0 ≤ F ≤ F_d(ω) the non-exact twist map φ̄_F has an invariant Denjoy minimal set with irrational rotation number ω lying on a Lipschitz graph, or Birkhoff (p, q)-periodic orbits for rational ω = p/q. As in the Aubry-Mather theory, we also construct heteroclinic orbits connecting Birkhoff periodic orbits, and show that quasi-periodic orbits in these Denjoy minimal sets can be approximated by periodic orbits. In particular, we demonstrate that at the critical value F = F_d(ω) the Denjoy minimal set is not uniformly hyperbolic and can be approximated by smooth curves.
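
    The construction can be made concrete by taking the standard map as the exact twist map φ̄ (one common choice of generating function H) and composing it with the vertical translation V_F. The sketch below is a generic illustration of such a non-exact twist map, not the authors' construction; the parameter values are arbitrary:

```python
import math

def twist_map_F(x, y, k=0.5, F=0.0):
    """One step of a non-exact twist map: the standard map (an exact
    monotone twist map) followed by the vertical translation V_F."""
    y_new = y - (k / (2 * math.pi)) * math.sin(2 * math.pi * x)  # exact twist step
    x_new = x + y_new                                            # twist: x advances by y
    return x_new, y_new - F   # V_F shifts y by -F; for F != 0 the map is non-exact

# For F = 0 the map is area-preserving and exact; for small F > 0 orbits
# drift downward unless balanced by the nonlinearity, which is why Denjoy
# minimal sets / Birkhoff periodic orbits persist only up to a critical F.
x, y = 0.0, 0.5
for _ in range(100):
    x, y = twist_map_F(x, y, k=0.5, F=0.01)
```

    With k = 0 the dynamics is exactly solvable (rigid rotation in x, constant drift −F in y), which makes the role of the translation V_F transparent.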

  15. Correlation consistent basis sets for actinides. I. The Th and U atoms

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)

    2015-02-21

    New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilized the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThFn (n = 2-4), ThO2, and UFn (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF4, ThF3, ThF2, and ThO2 are all within their experimental uncertainties. Bond dissociation energies of ThF4 and ThF3, as well as UF6 and UF5, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF4 and ThO2. The DKH3 atomization energy of ThO2 was calculated to be smaller than the DKH2
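
    The "standard basis set extrapolation techniques" referred to above include the widely used two-point 1/n³ formula for correlation energies: assuming E(n) = E_CBS + A·n⁻³ and solving for E_CBS from two cardinal numbers. A minimal sketch, with hypothetical correlation energies rather than values from this work:

```python
def cbs_two_point(E_n, n, E_m, m):
    """Helgaker-style two-point extrapolation of correlation energies.
    Assumes E(x) = E_CBS + A * x**-3 and eliminates A between the two points."""
    return (n**3 * E_n - m**3 * E_m) / (n**3 - m**3)

# Hypothetical correlation energies (hartree) at triple- and quadruple-zeta:
E_T, E_Q = -1.0230, -1.0397
E_CBS = cbs_two_point(E_Q, 4, E_T, 3)
print(E_CBS)  # CBS estimate, slightly below the quadruple-zeta value
```

    Hartree-Fock energies are usually extrapolated separately (often with an exponential form), since their convergence with the cardinal number is faster than n⁻³.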

  16. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    International Nuclear Information System (INIS)

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.; Genovese, Luigi

    2017-01-01

    We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the use of a minimal set of in situ optimized basis functions is of utmost importance for obtaining both a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.

  17. Incomplete basis-set problem. V. Application of CIBS to many-electron systems

    International Nuclear Information System (INIS)

    McDowell, K.; Lewis, L.

    1982-01-01

    Five versions of CIBS (corrections to an incomplete basis set) theory are used to compute first and second corrections to Roothaan-Hartree-Fock energies via expansion of a given basis set. Version one is an order by order perturbation approximation which neglects virtual orbitals; version two is a full CIBS expansion which neglects virtual orbitals; version three is an order by order perturbation approximation which includes virtual orbitals; version four is a full CIBS expansion which includes orthogonalization to virtual orbitals but neglects virtual orbital coupling terms; and version five is a full CIBS expansion with inclusion of coupling to virtual orbitals. Results are presented for the atomic and molecular systems He, Be, H2, LiH, Li2, and H2O. Version five is shown to produce a corrected Hartree-Fock energy which is essentially in agreement with a comparable SCF result using the same expanded basis set. Versions one through four yield varying degrees of agreement; however, it is evident that the effect of the virtual orbitals must be included. From the results, CIBS version five is shown to be a viable quantitative procedure which can be used to expand or to study the use of basis sets in quantum chemistry.

  18. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    International Nuclear Information System (INIS)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook; Kim, Woo Youn

    2015-01-01

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set shows that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene with increasing size than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward the complete basis set limit by simply decreasing the scaling factor, regardless of the system.
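
    A Lagrange-sinc basis on a uniform grid with spacing h (the "scaling factor") has analytic kinetic-energy matrix elements, and any local potential is diagonal in this basis. The following sketch is a generic 1D harmonic-oscillator example of such a sinc discrete-variable representation (not the authors' 3D Kohn-Sham code), showing how accuracy is controlled solely by h and the grid extent:

```python
import numpy as np

# Sinc DVR (Lagrange-sinc basis on a uniform grid) for the 1D harmonic
# oscillator with hbar = m = omega = 1; exact eigenvalues are n + 1/2.
h = 0.2                       # grid spacing ("scaling factor")
idx = np.arange(-60, 61)      # grid points x_i = i * h
x = idx * h
n = len(x)

# Analytic kinetic matrix elements of -1/2 d^2/dx^2 in the sinc basis:
T = np.empty((n, n))
for a in range(n):
    for b in range(n):
        if a == b:
            T[a, b] = np.pi**2 / (6 * h**2)
        else:
            T[a, b] = (-1.0)**(a - b) / (h**2 * (a - b)**2)

H = T + np.diag(0.5 * x**2)   # local potential is diagonal in the DVR basis
E = np.linalg.eigvalsh(H)
print(E[:4])                  # close to 0.5, 1.5, 2.5, 3.5
```

    Decreasing h raises the effective momentum cutoff π/h, which is the sense in which the basis "readily improves its accuracy toward the complete basis set limit."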

  19. Polarization functions for the modified m6-31G basis sets for atoms Ga through Kr.

    Science.gov (United States)

    Mitin, Alexander V

    2013-09-05

    The 2df polarization functions for the modified m6-31G basis sets of the third-row atoms Ga through Kr (Int J Quantum Chem, 2007, 107, 3028; Int J Quantum Chem, 2009, 109, 1158) are proposed. The performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets were examined in molecular calculations carried out by the density functional theory (DFT) method with the B3LYP hybrid functional, Møller-Plesset perturbation theory of the second order (MP2), and the quadratic configuration interaction method with single and double substitutions, and were compared with those for the known 6-31G basis sets as well as with the other similar 641 and 6-311G basis sets with and without polarization functions. Obtained results have shown that the performances of the m6-31G, m6-31G(d,p), and m6-31G(2df,p) basis sets are better in comparison with the performances of the known 6-31G, 6-31G(d,p) and 6-31G(2df,p) basis sets. These improvements are mainly reached due to better approximations of different electrons belonging to the different atomic shells in the modified basis sets. Applicability of the modified basis sets in thermochemical calculations is also discussed. © 2013 Wiley Periodicals, Inc.

  20. Toward the International Classification of Functioning, Disability and Health (ICF) Rehabilitation Set: A Minimal Generic Set of Domains for Rehabilitation as a Health Strategy.

    Science.gov (United States)

    Prodinger, Birgit; Cieza, Alarcos; Oberhauser, Cornelia; Bickenbach, Jerome; Üstün, Tevfik Bedirhan; Chatterji, Somnath; Stucki, Gerold

    2016-06-01

    To develop a comprehensive set of the International Classification of Functioning, Disability and Health (ICF) categories as a minimal standard for reporting and assessing functioning and disability in clinical populations along the continuum of care. The specific aims were to specify the domains of functioning recommended for an ICF Rehabilitation Set and to identify a minimal set of environmental factors (EFs) to be used alongside the ICF Rehabilitation Set when describing disability across individuals and populations with various health conditions. Secondary analysis of existing data sets using regression methods (Random Forests and Group Lasso regression) and expert consultations. Along the continuum of care, including acute, early postacute, and long-term and community rehabilitation settings. Persons (N=9863) with various health conditions participated in primary studies. The number of respondents for whom the dependent variable data were available and used in this analysis was 9264. Not applicable. For regression analyses, self-reported general health was used as a dependent variable. The ICF categories from the functioning component and the EF component were used as independent variables for the development of the ICF Rehabilitation Set and the minimal set of EFs, respectively. Thirty ICF categories complemented by 12 EFs were identified as relevant to the identified ICF sets. The ICF Rehabilitation Set consists of 9 ICF categories from the component body functions and 21 from the component activities and participation. The minimal set of EFs contains 12 categories spanning all chapters of the EF component of the ICF. The identified sets serve as minimal generic sets of aspects of functioning in clinical populations for reporting data within and across health conditions, time, clinical settings including rehabilitation, and countries. These sets present a reference framework for harmonizing existing information on disability across
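
    The variable-selection step (ranking ICF categories by their contribution to predicting self-reported health) can be illustrated generically with permutation importance on synthetic data. This sketch uses plain linear regression as a stand-in for the study's Random Forest and Group Lasso methods; the data and the number of candidate categories are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 5 candidate "categories" predicting a health score,
# of which only predictors 0 and 2 are actually informative.
n, p = 500, 5
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=n)

def r2(X, y):
    # R^2 of an ordinary least-squares fit with intercept
    M = np.c_[np.ones(len(y)), X]
    beta, *_ = np.linalg.lstsq(M, y, rcond=None)
    resid = y - M @ beta
    return 1.0 - resid.var() / y.var()

base = r2(X, y)
importance = []
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # scramble predictor j
    importance.append(base - r2(Xp, y))    # drop in R^2 = importance of j

# Keep the top-2 predictors as the "minimal set"
minimal_set = sorted(np.argsort(importance)[-2:].tolist())
print(minimal_set)  # [0, 2]
```

    The study's actual selection was richer (Group Lasso keeps whole blocks of categories together, and expert consultation adjusted the statistical ranking), but the ranking-by-predictive-contribution logic is the same.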

  1. Energy optimized Gaussian basis sets for the atoms Tl - Rn

    International Nuclear Information System (INIS)

    Faegri, K. Jr.

    1987-01-01

    Energy optimized Gaussian basis sets have been derived for the atoms Tl-Rn. Two sets are presented - a (20,16,10,6) set and a (22,17,13,8) set. The smaller sets yield atomic energies 107 to 123 mH above the numerical Hartree-Fock values, while the larger sets give energies 11 mH above the numerical results. Energy trends from the smaller sets indicate that reduced shielding by p-electrons may place a greater demand on the flexibility of the d- and f-orbital description for the lighter elements of the series.
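
    Energy optimization of Gaussian exponents can be illustrated on the simplest possible case, the hydrogen atom, where the one-electron integrals over s-type Gaussians exp(−a·r²) are analytic. The sketch below optimizes an even-tempered series a_k = α·β^k by a crude grid scan (an illustration of the principle, not the heavy-element optimizations of this record):

```python
import numpy as np

def hydrogen_energy(exponents, Z=1.0):
    """Lowest variational energy of a one-electron atom in a basis of
    unnormalized s-type Gaussians, using the analytic S, T, V integrals."""
    a = np.asarray(exponents, dtype=float)
    A = a[:, None] + a[None, :]
    S = (np.pi / A) ** 1.5                          # overlap
    T = 3.0 * np.outer(a, a) * np.pi**1.5 / A**2.5  # kinetic energy
    V = -2.0 * np.pi * Z / A                        # nuclear attraction
    # Generalized eigenproblem H c = E S c via symmetric orthogonalization
    s_val, s_vec = np.linalg.eigh(S)
    X = s_vec @ np.diag(s_val**-0.5) @ s_vec.T
    return np.linalg.eigvalsh(X @ (T + V) @ X)[0]

# Grid-scan energy optimization of a 4-term even-tempered series a_k = alpha*beta**k
best = min(
    ((alpha, beta)
     for alpha in np.linspace(0.05, 0.3, 26)
     for beta in np.linspace(2.0, 6.0, 41)),
    key=lambda p: hydrogen_energy([p[0] * p[1]**k for k in range(4)]),
)
alpha, beta = best
E = hydrogen_energy([alpha * beta**k for k in range(4)])
print(E)  # approaches the exact -0.5 hartree from above
```

    Real basis-set optimizations replace the grid scan with gradient-based minimization over all exponents (or even-tempered parameters) at the Hartree-Fock level, but the variational principle driving the fit is the same.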

  2. Conductance calculations with a wavelet basis set

    DEFF Research Database (Denmark)

    Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel

    2003-01-01

    We present a method based on density functional theory (DFT) for calculating the conductance of a phase-coherent system. The metallic contacts and the central region where the electron scattering occurs are treated on the same footing, taking their full atomic and electronic structure into account. The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way...

  3. Correlation consistent basis sets for lanthanides: The atoms La–Lu

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States)

    2016-08-07

    Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits are observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd2, GdF, and GdF3. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd2, 151.7 (−36.6) for GdF, and 447.1 (−295.2) for GdF3.

  4. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields

    Science.gov (United States)

    Zhu, Wuming; Trickey, S. B.

    2017-12-01

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.

  6. Current-voltage curves for molecular junctions computed using all-electron basis sets

    International Nuclear Information System (INIS)

    Bauschlicher, Charles W.; Lawson, John W.

    2006-01-01

    We present current-voltage (I-V) curves computed using all-electron basis sets on the conducting molecule. The all-electron results are very similar to previous results obtained using effective core potentials (ECP). A hybrid integration scheme is used that keeps the all-electron calculations cost competitive with respect to the ECP calculations. By neglecting the coupling of states to the contacts below a fixed energy cutoff, the density matrix for the core electrons can be evaluated analytically. The full density matrix is formed by adding this core contribution to the valence part that is evaluated numerically. Expanding the definition of the core in the all-electron calculations significantly reduces the computational effort and, up to biases of about 2 V, the results are very similar to those obtained using more rigorous approaches. The convergence of the I-V curves and transmission coefficients with respect to basis set is discussed. The addition of diffuse functions is critical in approaching basis set completeness

  7. Gaussian basis sets for use in correlated molecular calculations. XI. Pseudopotential-based and all-electron relativistic basis sets for alkali metal (K-Fr) and alkaline earth (Ca-Ra) elements

    Science.gov (United States)

    Hill, J. Grant; Peterson, Kirk A.

    2017-12-01

    New correlation consistent basis sets based on pseudopotential (PP) Hamiltonians have been developed from double- to quintuple-zeta quality for the late alkali (K-Fr) and alkaline earth (Ca-Ra) metals. These are accompanied by new all-electron basis sets of double- to quadruple-zeta quality that have been contracted for use with both Douglas-Kroll-Hess (DKH) and eXact 2-Component (X2C) scalar relativistic Hamiltonians. Sets for valence correlation (ms), cc-pVnZ-PP and cc-pVnZ-(DK,DK3/X2C), in addition to outer-core correlation [valence + (m-1)sp], cc-p(w)CVnZ-PP and cc-pwCVnZ-(DK,DK3/X2C), are reported. The -PP sets have been developed for use with small-core PPs [I. S. Lim et al., J. Chem. Phys. 122, 104103 (2005) and I. S. Lim et al., J. Chem. Phys. 124, 034107 (2006)], while the all-electron sets utilized second-order DKH Hamiltonians for 4s and 5s elements and third-order DKH for 6s and 7s. The accuracy of the basis sets is assessed through benchmark calculations at the coupled-cluster level of theory for both atomic and molecular properties. Not surprisingly, it is found that outer-core correlation is vital for accurate calculation of the thermodynamic and spectroscopic properties of diatomic molecules containing these elements.

  10. Localized orbitals vs. pseudopotential-plane waves basis sets: performances and accuracy for molecular magnetic systems

    International Nuclear Information System (INIS)

    Massobrio, C.; Ruiz, E.

    2003-01-01

    Density functional theory, in combination with a) a careful choice of the exchange-correlation part of the total energy and b) localized basis sets for the electronic orbital, has become the method of choice for calculating the exchange-couplings in magnetic molecular complexes. Orbital expansion on plane waves can be seen as an alternative basis set especially suited to allow optimization of newly synthesized materials of unknown geometries. However, little is known on the predictive power of this scheme to yield quantitative values for exchange coupling constants J as small as a few hundredths of eV (50-300 cm -1 ). We have used density functional theory and a plane waves basis set to calculate the exchange couplings J of three homodinuclear Cu-based molecular complexes with experimental values ranging from +40 cm -1 to -300 cm -1 . The plane waves basis set proves as accurate as the localized basis set, thereby suggesting that this approach can be reliably employed to predict and rationalize the magnetic properties of molecular-based materials. (author)

  11. Basis set convergence on static electric dipole polarizability calculations of alkali-metal clusters

    International Nuclear Information System (INIS)

    Souza, Fabio A. L. de; Jorge, Francisco E.

    2013-01-01

    A hierarchical sequence of all-electron segmented contracted basis sets of double, triple, and quadruple zeta valence quality, plus polarization functions augmented with diffuse functions, was constructed for the atoms from H to Ar. A systematic study of the basis sets required to obtain reliable and accurate values of the static dipole polarizabilities of lithium and sodium clusters (n = 2, 4, 6 and 8) at their optimized equilibrium geometries is reported. Three methods are examined: Hartree-Fock (HF), second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT). By direct calculation, or by fitting the directly calculated values through an extrapolation scheme, estimates of the HF, MP2 and DFT complete basis set limits were obtained. Comparisons with experimental and theoretical data previously reported in the literature are made. (author)

  12. Basis set convergence on static electric dipole polarizability calculations of alkali-metal clusters

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Fabio A. L. de; Jorge, Francisco E., E-mail: jorge@cce.ufes.br [Departamento de Fisica, Universidade Federal do Espirito Santo, 29060-900 Vitoria-ES (Brazil)

    2013-07-15

    A hierarchical sequence of all-electron segmented contracted basis sets of double, triple, and quadruple zeta valence quality, plus polarization functions augmented with diffuse functions, was constructed for the atoms from H to Ar. A systematic study of the basis sets required to obtain reliable and accurate values of the static dipole polarizabilities of lithium and sodium clusters (n = 2, 4, 6 and 8) at their optimized equilibrium geometries is reported. Three methods are examined: Hartree-Fock (HF), second-order Møller-Plesset perturbation theory (MP2), and density functional theory (DFT). By direct calculation, or by fitting the directly calculated values through an extrapolation scheme, estimates of the HF, MP2 and DFT complete basis set limits were obtained. Comparisons with experimental and theoretical data previously reported in the literature are made. (author)
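Complete-basis-set estimates of the kind mentioned above are commonly obtained from a two-point fit of the correlation energy against the cardinal number X of the basis set. As an illustration only (the abstract does not specify which extrapolation scheme the authors used), here is a minimal sketch of the standard inverse-cube form E(X) = E_CBS + A·X^-3:

```python
# Two-point complete-basis-set (CBS) extrapolation using the common
# inverse-cube model E(X) = E_CBS + A * X**-3, where X is the cardinal
# number of the basis set (3 = triple zeta, 4 = quadruple zeta, ...).
def cbs_extrapolate(e_x, x, e_y, y):
    """Solve the two-parameter model for E_CBS given two energies."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Synthetic check: energies generated exactly from the model are
# recovered exactly. The numbers below are illustrative, not from
# the paper.
e_cbs_true, a = -0.3125, 0.05
e_tz = e_cbs_true + a / 3**3   # triple-zeta energy (X = 3)
e_qz = e_cbs_true + a / 4**3   # quadruple-zeta energy (X = 4)
print(cbs_extrapolate(e_tz, 3, e_qz, 4))
```

Because the synthetic energies are generated exactly from the model, the extrapolation recovers E_CBS to machine precision; with real HF/MP2/DFT energies the X^-3 form is only an approximation to the true convergence behaviour.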

  13. A Hartree–Fock study of the confined helium atom: Local and global basis set approaches

    Energy Technology Data Exchange (ETDEWEB)

    Young, Toby D., E-mail: tyoung@ippt.pan.pl [Zakład Metod Komputerowych, Instytut Podstawowych Prolemów Techniki Polskiej Akademia Nauk, ul. Pawińskiego 5b, 02-106 Warszawa (Poland); Vargas, Rubicelia [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico); Garza, Jorge, E-mail: jgo@xanum.uam.mx [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico)

    2016-02-15

    Two different basis set methods are used to calculate atomic energy within Hartree–Fock theory. The first is a local basis set approach using high-order real-space finite elements and the second is a global basis set approach using modified Slater-type orbitals. These two approaches are applied to the confined helium atom and are compared by calculating one- and two-electron contributions to the total energy. As a measure of the quality of the electron density, the cusp condition is analyzed. - Highlights: • Two different basis set methods for atomic Hartree–Fock theory. • Galerkin finite element method and modified Slater-type orbitals. • Confined atom model (helium) under small-to-extreme confinement radii. • Detailed analysis of the electron wave-function and the cusp condition.

  14. Continuum contributions to dipole oscillator-strength sum rules for hydrogen in finite basis sets

    DEFF Research Database (Denmark)

    Oddershede, Jens; Ogilvie, John F.; Sauer, Stephan P. A.

    2017-01-01

    Calculations of the continuum contributions to dipole oscillator-strength sum rules for hydrogen are performed using both exact and basis-set representations of the stick spectra of the continuum wave function. We show that the same results are obtained for the sum rules in both cases, but that the convergence towards the final results with increasing excitation energies included in the sum over states is slower in the basis-set case when we use the best basis. We argue that this conclusion most likely holds also for larger atoms or molecules.
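The size of the continuum contribution these authors evaluate can be made concrete with the textbook closed-form oscillator strengths for the hydrogen Lyman series: the discrete transitions exhaust only about 0.565 of the Thomas-Reiche-Kuhn sum S(0) = 1, leaving roughly 0.435 to the continuum. A sketch (standard formula, not the paper's basis-set calculation):

```python
import math

# Discrete (Lyman-series) dipole oscillator strengths of hydrogen,
#   f(1s -> np) = 2^8 n^5 (n-1)^(2n-4) / (3 (n+1)^(2n+4)),
# summed over bound states; the TRK sum rule S(0) = 1 then fixes the
# continuum contribution as 1 minus the discrete sum.
def f_lyman(n):
    return 2**8 * n**5 * (n - 1)**(2*n - 4) / (3 * (n + 1)**(2*n + 4))

discrete = sum(f_lyman(n) for n in range(2, 201))
print(discrete, 1.0 - discrete)  # ~0.565 discrete, ~0.435 continuum
```

The first term, f(1s→2p) ≈ 0.4162, already dominates the discrete sum; the slowly decaying n^-3 tail is why truncating the sum over states converges slowly, as the abstract notes for the basis-set representation.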

  15. MRD-CI potential surfaces using balanced basis sets. IV. The H2 molecule and the H3 surface

    International Nuclear Information System (INIS)

    Wright, J.S.; Kruus, E.

    1986-01-01

    The utility of midbond functions in molecular calculations was tested in two cases where the correct results are known: the H2 potential curve and the collinear H3 potential surface. For H2, a variety of basis sets both with and without bond functions was compared to the exact nonrelativistic potential curve of Kolos and Wolniewicz [J. Chem. Phys. 43, 2429 (1965)]. It was found that the optimally balanced basis sets at two levels of quality were the double zeta single polarization plus sp bond function basis (BF1) and the triple zeta double polarization plus two sets of sp bond functions basis (BF2). These gave bond dissociation energies De = 4.7341 and 4.7368 eV, respectively (expt. 4.7477 eV). Four basis sets were tested for basis set superposition errors, which were found to be small relative to basis set incompleteness and therefore did not affect any conclusions regarding basis set balance. Basis sets BF1 and BF2 were used to construct potential surfaces for collinear H3, along with the corresponding basis sets DZ*P and TZ*PP, which contain no bond functions. Barrier heights of 12.52, 10.37, 10.06, and 9.96 kcal/mol were obtained for basis sets DZ*P, TZ*PP, BF1, and BF2, respectively, compared to an estimated limiting value of 9.60 kcal/mol. Difference maps, force constants, and relative rms deviations show that the bond functions improve the surface shape as well as the barrier height.

  16. Optimal set of selected uranium enrichments that minimizes blending consequences

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.

    1977-01-01

    Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required for either nuclear reactor fuel standardization or potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments

  17. New basis set for the prediction of the specific rotation in flexible biological molecules

    DEFF Research Database (Denmark)

    Baranowska-Łaczkowska, Angelika; Łaczkowski, Krzysztof Z.; Henriksen, Christian

    2016-01-01

    are compared to those obtained with the (d-)aug-cc-pVXZ (X = D, T and Q) basis sets of Dunning et al. The ORP values are in good overall agreement with the aug-cc-pVTZ results making the ORP a good basis set for routine TD-DFT optical rotation calculations of conformationally flexible molecules. The results...

  18. Decision Optimization of Machine Sets Taking Into Consideration Logical Tree Minimization of Design Guidelines

    Science.gov (United States)

    Deptuła, A.; Partyka, M. A.

    2014-08-01

    The method of minimization of complex partial multi-valued logical functions determines the degree of importance of construction and exploitation parameters, which play the role of logical decision variables. Such logical functions are considered in the modelling of machine sets. For multi-valued logical functions with weighting products, a modified Quine-McCluskey algorithm for the minimization of multi-valued functions can be used. Taking weighting coefficients into account in the logical tree minimization reflects the physical model of the analysed object much more closely.
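The Quine-McCluskey step referred to above repeatedly merges implicants that differ in exactly one variable until only prime implicants remain. A compact sketch for the classical two-valued case (the paper uses a modified multi-valued variant with weighting products, which is not reproduced here):

```python
# Classical Quine-McCluskey prime-implicant generation (two-valued case).
# Implicants are bit strings with '-' marking an eliminated variable.
def combine(a, b):
    """Merge two implicants differing in exactly one position, else None."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1:
        return a[:diff[0]] + '-' + a[diff[0]+1:]
    return None

def prime_implicants(minterms, nbits):
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a in terms:
            for b in terms:
                c = combine(a, b)
                if c:
                    merged.add(c)
                    used.update((a, b))
        primes |= terms - used      # unmergeable terms are prime
        terms = merged
    return primes

# f(A,B,C) = sum of minterms 1,3,5,7 reduces to the single implicant C.
print(prime_implicants([1, 3, 5, 7], 3))  # -> {'--1'}
```

A full minimizer would follow this with a covering step (selecting a minimum subset of prime implicants); in the weighted variant described above, that selection is driven by the weighting coefficients.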

  19. A two-center-oscillator-basis as an alternative set for heavy ion processes

    International Nuclear Information System (INIS)

    Tornow, V.; Reinhard, P.G.; Drechsel, D.

    1977-01-01

    The two-center-oscillator basis, which is constructed from harmonic oscillator wave functions developed about two different centers, suffers from numerical problems at small center separations due to the overcompleteness of the set. In order to overcome these problems we admix higher oscillator wave functions before the orthogonalization or antisymmetrization, respectively. This yields a numerically stable basis set at each center separation. The results obtained for the potential energy surface are comparable with the results of more elaborate models. (orig.)

  20. Magnetic anisotropy basis sets for epitaxial (110) and (111) REFe2 nanofilms

    International Nuclear Information System (INIS)

    Bowden, G J; Martin, K N; Fox, A; Rainford, B D; Groot, P A J de

    2008-01-01

    Magnetic anisotropy basis sets for the cubic Laves phase rare earth intermetallic REFe2 compounds are discussed in some detail. Such compounds can be either free standing, or thin films grown in either (110) or (111) mode using molecular beam epitaxy. For the latter, it is useful to rotate to a new coordinate system where the z-axis coincides with the growth axis of the film. In this paper, three symmetry adapted basis sets are given, for multi-pole moments up to n = 12. These sets can be used for free-standing compounds and for (110) and (111) epitaxial films. In addition, the distortion of REFe2 films, grown on sapphire substrates, is also considered. The distortions are different for the (110) and (111) films. Strain-induced harmonic sets are given for both specific and general distortions. Finally, some predictions are made concerning the preferred direction of easy magnetization in (111) molecular beam epitaxy grown REFe2 films

  1. Distinguishing the elements of a full product basis set needs only projective measurements and classical communication

    International Nuclear Information System (INIS)

    Chen Pingxing; Li Chengzu

    2004-01-01

    Nonlocality without entanglement is an interesting field. One manifestation of quantum nonlocality without entanglement is the possible local indistinguishability of orthogonal product states. In this paper we analyze the character of the operators needed to distinguish the elements of a full product basis set in a multipartite system, and show that perfectly distinguishing these product bases requires only local projective measurements and classical communication, and that these measurements do not damage the product basis states. Employing these conclusions, one can easily discuss the local distinguishability of the elements of any full product basis set. Finally, we discuss the generalization of these results to the local distinguishability of the elements of an incomplete product basis set

  2. Function of One Regular Separable Relation Set Decided for the Minimal Covering in Multiple Valued Logic

    Directory of Open Access Journals (Sweden)

    Liu Yu Zhen

    2016-01-01

    Multiple-valued logic is an important branch of computer science and technology. It studies the theory of multiple-valued logic, multiple-valued circuits and systems, and their applications. In this theory, one primary and important problem is the completeness of function sets, which can be solved by deciding all the precomplete sets (also called maximal closed sets) of the K-valued function sets, denoted PK*; another is the decision of Sheffer functions, which can be solved completely by finding all minimal coverings of the precomplete sets. In the function structure theory of multi-valued logic, the decision of Sheffer functions plays an important role. It involves the structure and decision of full and partial multi-valued logics, and is closely related to the decision of completeness, which can be carried out by deciding the minimal coverings of full and partial multi-valued logics. Using the completeness theory of partial multi-valued logic, we prove that the function of one regular separable relation is not a minimal covering of PK* under the condition m = 2, σ = e.

  3. Quality of Gaussian basis sets: direct optimization of orbital exponents by the method of conjugate gradients

    International Nuclear Information System (INIS)

    Kari, R.E.; Mezey, P.G.; Csizmadia, I.G.

    1975-01-01

    Expressions are given for calculating the energy gradient vector in the exponent space of Gaussian basis sets, and a technique to optimize orbital exponents using the method of conjugate gradients is described. The method is tested on the (9s5p) Gaussian basis space and optimum exponents are determined for the carbon atom. The analysis of the results shows that the calculated one-electron properties converge more slowly to their optimum values than the total energy converges to its optimum value. In addition, basis sets approximating the optimum total energy very well can still be markedly improved for the prediction of one-electron properties. For smaller basis sets, this improvement does not warrant the necessary expense
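The idea of gradient-driven exponent optimization can be illustrated on the one case with a closed-form answer: a single normalized s-type Gaussian for the hydrogen atom, where the optimal exponent is a = 8/(9π) and the variational energy is -4/(3π) hartree. The sketch below uses plain gradient steps (in one dimension, conjugate gradients reduce to steepest descent); it is not the multi-exponent carbon-atom procedure of the paper:

```python
import math

# Variational energy of one normalized s-type Gaussian exp(-a r^2)
# for the hydrogen atom (atomic units): E(a) = 3a/2 - 2*sqrt(2a/pi).
def energy(a):
    return 1.5 * a - 2.0 * math.sqrt(2.0 * a / math.pi)

# Analytic derivative dE/da, used for the gradient steps.
def gradient(a):
    return 1.5 - math.sqrt(2.0 / (math.pi * a))

# Gradient descent on the exponent; the analytic optimum is a = 8/(9*pi),
# giving E = -4/(3*pi) ~ -0.4244 hartree (vs. the exact -0.5).
a = 1.0
for _ in range(2000):
    a -= 0.1 * gradient(a)
print(a, energy(a))
```

The ~0.076 hartree gap to the exact hydrogen energy is the basis-set incompleteness of a single Gaussian; real exponent optimizations minimize the same kind of energy functional over many exponents at once, where conjugate gradients genuinely pay off.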

  4. Relativistic double-zeta, triple-zeta, and quadruple-zeta basis sets for the lanthanides La–Lu

    NARCIS (Netherlands)

    Dyall, K.G.; Gomes, A.S.P.; Visscher, L.

    2010-01-01

    Relativistic basis sets of double-zeta, triple-zeta, and quadruple-zeta quality have been optimized for the lanthanide elements La-Lu. The basis sets include SCF exponents for the occupied spinors and for the 6p shell, exponents of correlating functions for the valence shells (4f, 5d and 6s) and the

  5. An approach to develop chemical intuition for atomistic electron transport calculations using basis set rotations

    Energy Technology Data Exchange (ETDEWEB)

    Borges, A.; Solomon, G. C. [Department of Chemistry and Nano-Science Center, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø (Denmark)

    2016-05-21

    Single molecule conductance measurements are often interpreted through computational modeling, but the complexity of these calculations makes it difficult to link them directly to simpler concepts and models. Previous work has attempted to make this connection using maximally localized Wannier functions and symmetry adapted basis sets, but their use can be ambiguous and non-trivial. Starting from a Hamiltonian and overlap matrix written in a hydrogen-like basis set, we demonstrate a simple approach to obtain a new basis set that is chemically more intuitive and allows interpretation in terms of simple concepts and models. By diagonalizing the Hamiltonians corresponding to each atom in the molecule, we obtain a basis set that can be partitioned into pseudo-σ and pseudo-π subsets, allows partitioning of the Landauer-Büttiker transmission, and permits the construction of simple Hückel models that reproduce the key features of the full calculation. This method provides a link between complex calculations and simple concepts and models, providing intuition or extracting parameters for more complex model systems.

  6. Accurate Conformational Energy Differences of Carbohydrates: A Complete Basis Set Extrapolation

    Czech Academy of Sciences Publication Activity Database

    Csonka, G. I.; Kaminský, Jakub

    2011-01-01

    Roč. 7, č. 4 (2011), s. 988-997 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords : MP2 * basis set extrapolation * saccharides Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.215, year: 2011

  7. Basis set effects on coupled cluster benchmarks of electronically excited states: CC3, CCSDR(3) and CC2

    DEFF Research Database (Denmark)

    Silva-Junior, Mario R.; Sauer, Stephan P. A.; Schreiber, Marko

    2010-01-01

    Vertical electronic excitation energies and one-electron properties of 28 medium-sized molecules from a previously proposed benchmark set are revisited using the augmented correlation-consistent triple-zeta aug-cc-pVTZ basis set in CC2, CCSDR(3), and CC3 calculations. The results are compared to those obtained previously with the smaller TZVP basis set. For each of the three coupled cluster methods, a correlation coefficient greater than 0.994 is found between the vertical excitation energies computed with the two basis sets. The deviations of the CC2 and CCSDR(3) results from the CC3 reference values are very similar for both basis sets, thus confirming previous conclusions on the intrinsic accuracy of CC2 and CCSDR(3). This similarity justifies the use of CC2- or CCSDR(3)-based corrections to account for basis set incompleteness in CC3 studies of vertical excitation energies. For oscillator...

  8. First-principle modelling of forsterite surface properties: Accuracy of methods and basis sets.

    Science.gov (United States)

    Demichelis, Raffaella; Bruno, Marco; Massaro, Francesco R; Prencipe, Mauro; De La Pierre, Marco; Nestola, Fabrizio

    2015-07-15

    The seven main crystal surfaces of forsterite (Mg2SiO4) were modeled using various Gaussian-type basis sets, and several formulations for the exchange-correlation functional within density functional theory (DFT). The recently developed pob-TZVP basis set provides the best results for all properties that are strongly dependent on the accuracy of the wavefunction. Convergence on the structure and on the basis set superposition error-corrected surface energy can be reached also with poorer basis sets. The effect of adopting different DFT functionals was assessed. All functionals give the same stability order for the various surfaces. Surfaces do not exhibit any major structural differences when optimized with different functionals, except for higher energy orientations where major rearrangements occur around the Mg sites at the surface or subsurface. When dispersion is not accounted for, all functionals provide similar surface energies. The inclusion of empirical dispersion raises the energy of all surfaces by a nearly systematic value proportional to the scaling factor s of the dispersion formulation. An estimate of the surface energy is provided by adopting C6 coefficients that are more suitable than the standard ones for describing O-O interactions in minerals. A 2 × 2 supercell of the most stable surface (010) was optimized. No surface reconstruction was observed. The resulting structure and surface energy show no difference with respect to those obtained when using the primitive cell. This result validates the (010) surface model adopted here, which will serve as a reference for future studies on adsorption and reactivity of water and carbon dioxide at this interface. © 2015 Wiley Periodicals, Inc.

  9. Gaussian basis sets for use in correlated molecular calculations. IV. Calculation of static electrical response properties

    International Nuclear Information System (INIS)

    Woon, D.E.; Dunning, T.H. Jr.

    1994-01-01

    An accurate description of the electrical properties of atoms and molecules is critical for quantitative predictions of the nonlinear properties of molecules and of long-range atomic and molecular interactions between both neutral and charged species. We report a systematic study of the basis sets required to obtain accurate correlated values for the static dipole (α1), quadrupole (α2), and octopole (α3) polarizabilities and the hyperpolarizability (γ) of the rare gas atoms He, Ne, and Ar. Several methods of correlation treatment were examined, including various orders of Møller-Plesset perturbation theory (MP2, MP3, MP4), coupled-cluster theory with and without perturbative treatment of triple excitations [CCSD, CCSD(T)], and singles and doubles configuration interaction (CISD). All of the basis sets considered here were constructed by adding even-tempered sets of diffuse functions to the correlation consistent basis sets of Dunning and co-workers. With multiply-augmented sets we find that the electrical properties of the rare gas atoms converge smoothly to values that are in excellent agreement with the available experimental data and/or previously computed results. As a further test of the basis sets presented here, the dipole polarizabilities of the F- and Cl- anions and of the HCl and N2 molecules are also reported
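Static polarizabilities such as α1 are often extracted by finite-field differentiation of the energy with respect to an applied electric field. A minimal sketch on an exactly solvable model (a unit charge in a harmonic well, for which α = 1/k); this illustrates the numerical technique only, not the correlated atomic calculations of the paper:

```python
# Finite-field estimate of a static dipole polarizability:
#   alpha = -d^2 E / dF^2 at F = 0, via a central difference.
# Model: a unit charge in a harmonic well with force constant k,
# for which E(F) - E(0) = -F^2/(2k) and alpha = 1/k exactly.
def model_energy(F, k=0.5):
    return -F**2 / (2.0 * k)

def polarizability(energy, F=1e-3):
    """Central second difference of the energy in the field strength F."""
    return -(energy(F) - 2.0 * energy(0.0) + energy(-F)) / F**2

print(polarizability(model_energy))  # -> 2.0 for k = 0.5
```

For a quadratic model the central difference is exact; for real correlated energies the field strength must balance truncation error (too large F) against numerical noise (too small F), which is one reason diffuse-function-rich basis sets matter for these properties.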

  10. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    Science.gov (United States)

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  11. The Raman Spectrum of the Squarate (C4O4-2) Anion: An Ab Initio Basis Set Dependence Study

    Directory of Open Access Journals (Sweden)

    Miranda Sandro G. de

    2002-01-01

    The Raman excitation profile of the squarate anion, C4O4-2, was calculated using ab initio methods at the Hartree-Fock level using Linear Response Theory (LRT) for six excitation frequencies: 632.5, 514.5, 488.0, 457.9, 363.8 and 337.1 nm. Five basis sets (6-31G*, 6-31+G*, cc-pVDZ, aug-cc-pVDZ and Sadlej's polarizability basis set) were investigated, aiming to evaluate the performance of the 6-31G* set for numerical convergence and computational cost relative to the larger basis sets. All basis sets reproduce the main spectroscopic features of the Raman spectrum of this anion over the excitation interval investigated. The 6-31G* basis set presented, on average, the same accuracy of numerical results as the larger sets but at a fraction of the computational cost, showing that it is suitable for the theoretical investigation of the squarate dianion and its complexes and derivatives.

  12. Need for reaction coordinates to ensure a complete basis set in an adiabatic representation of ion-atom collisions

    Science.gov (United States)

    Rabli, Djamal; McCarroll, Ronald

    2018-02-01

    This review surveys the different theoretical approaches used to describe inelastic and rearrangement processes in collisions involving atoms and ions. For a range of energies from a few meV up to about 1 keV, the adiabatic representation is expected to be valid, and under these conditions inelastic and rearrangement processes take place via a network of avoided crossings of the potential energy curves of the collision system. In general, such avoided crossings are finite in number. The non-adiabatic coupling, due to the breakdown of the Born-Oppenheimer separation of the electronic and nuclear variables, depends on the ratio of the electron mass to the nuclear mass terms in the total Hamiltonian. By retaining terms in the total Hamiltonian correct to first order in the electron-to-nuclear mass ratio, a system of reaction coordinates is found which allows for a correct description of both inelastic and rearrangement channels. The connection between the use of reaction coordinates in the quantum description and the electron translation factors of the impact parameter approach is established. A major result is that only when reaction coordinates are used is it possible to introduce the notion of a minimal basis set. Such a set must include all avoided crossings, including both radial coupling and long-range Coriolis coupling; only when reaction coordinates are used can such a basis set be considered complete. In particular, when the centre of nuclear mass is used as the centre of coordinates rather than the correct reaction coordinates, it is shown that erroneous results are obtained. A few results are presented to illustrate this important point: one concerning a simple two-state Landau-Zener type avoided crossing, the other concerning a network of multiple crossings in a typical electron capture process involving a highly charged ion and a neutral atom.
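The two-state Landau-Zener crossing mentioned at the end has a closed-form diabatic transition probability, which makes the velocity dependence of such avoided crossings easy to see. A sketch in atomic units (illustrative parameter values, not taken from the review):

```python
import math

# Landau-Zener probability of a diabatic passage through an avoided
# crossing (atomic units, hbar = 1):
#   P = exp(-2*pi*H12**2 / (v * |dF|))
# H12: coupling (half the adiabatic splitting at the crossing),
# v: radial velocity at the crossing, dF: difference of diabatic slopes.
def landau_zener(h12, v, dF):
    return math.exp(-2.0 * math.pi * h12**2 / (v * abs(dF)))

slow = landau_zener(h12=0.01, v=0.001, dF=0.1)
fast = landau_zener(h12=0.01, v=0.1,   dF=0.1)
print(slow, fast)  # diabatic passage is far more probable at high velocity
```

At low velocity the system follows the adiabatic curve (P → 0), which is why the adiabatic representation with a network of avoided crossings is the natural picture in the meV-keV range discussed above.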

  13. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-01-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane wave energies.

  14. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-11-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane wave energies.
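The BSSE discussed in these records is most often quantified with the Boys-Bernardi counterpoise scheme: each fragment is recomputed in the full dimer basis (with "ghost" functions on the partner site), and the interaction energy is assembled from those energies. A minimal arithmetic sketch; all energies below are illustrative placeholders, not values from the papers:

```python
# Counterpoise (CP) correction for basis set superposition error.
# E_AB(AB): dimer in the dimer basis; E_A(AB), E_B(AB): monomers
# recomputed with ghost basis functions on the partner site.
def cp_interaction(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
    return e_dimer - e_a_dimer_basis - e_b_dimer_basis

# Hypothetical energies (hartree), chosen only to show the bookkeeping.
e_dimer = -200.1050                             # AB in the AB basis
e_a_mono, e_b_mono = -100.0400, -100.0500       # monomers, own bases
e_a_ghost, e_b_ghost = -100.0420, -100.0515     # monomers, AB basis

raw = e_dimer - e_a_mono - e_b_mono             # uncorrected interaction
cp = cp_interaction(e_dimer, e_a_ghost, e_b_ghost)
print(raw, cp)  # CP removes the artificial (borrowed-basis) stabilization
```

The difference raw - cp is the BSSE itself; in an LCAO code like SIESTA this is the quantity that tuning parameters such as ΔEPAO aims to shrink, while plane-wave calculations are free of it by construction.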

  15. Comparison of construction algorithms for minimal, acyclic, deterministic, finite-state automata from sets of strings

    NARCIS (Netherlands)

    Daciuk, J; Champarnaud, JM; Maurel, D

    2003-01-01

    This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.
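Among the non-incremental baselines such a comparison can include is the straightforward two-phase construction: build a trie of the word set, then merge equivalent states bottom-up through a register of canonical signatures. A sketch (the incremental algorithms the paper evaluates avoid materializing the full trie first):

```python
# Minimal acyclic DFA from a word set: build a trie, then merge states
# with identical (finality, outgoing-transition) signatures bottom-up.
def minimal_dfa_states(words):
    nodes = [[False, {}]]               # node id -> [is_final, transitions]
    for w in words:
        cur = 0
        for ch in w:
            if ch not in nodes[cur][1]:
                nodes.append([False, {}])
                nodes[cur][1][ch] = len(nodes) - 1
            cur = nodes[cur][1][ch]
        nodes[cur][0] = True
    register = {}                       # signature -> representative state
    def canon(n):
        sig = (nodes[n][0],
               tuple(sorted((c, canon(m)) for c, m in nodes[n][1].items())))
        return register.setdefault(sig, n)
    canon(0)
    return len(register)                # number of states after merging

# "tap"/"taps" and "top"/"tops" share suffix structure, so the 8-node
# trie collapses to a 5-state minimal automaton.
print(minimal_dfa_states(["tap", "taps", "top", "tops"]))  # -> 5
```

The register of signatures is the same device Daciuk-style incremental algorithms maintain; the incremental variants simply apply it as each sorted word is added, keeping memory proportional to the minimal automaton rather than the trie.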

  16. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown artificial neural networks (ANNs) to be capable of pattern recognition and the classification of image data. This paper examines the potential application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison with conventional supervised classification. An artificial neural network can be trained to perform land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with the recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares an ANN back-propagation classification procedure with a conventional supervised maximum-likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network provides a land-cover classification superior to that derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations of artificial neural networks in satellite image and geographic information processing.

  17. Nuclear-electronic orbital reduced explicitly correlated Hartree-Fock approach: Restricted basis sets and open-shell systems

    International Nuclear Information System (INIS)

    Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon

    2015-01-01

    The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF⁻ and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN⁺, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects

  18. Nuclear-electronic orbital reduced explicitly correlated Hartree-Fock approach: Restricted basis sets and open-shell systems

    Energy Technology Data Exchange (ETDEWEB)

    Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu [Department of Chemistry, University of Illinois at Urbana-Champaign, 600 South Mathews Ave., Urbana, Illinois 61801 (United States)

    2015-06-07

    The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF⁻ and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN⁺, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects.

  19. Push it to the limit: Characterizing the convergence of common sequences of basis sets for intermolecular interactions as described by density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Witte, Jonathon [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Neaton, Jeffrey B. [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Physics, University of California, Berkeley, California 94720 (United States); Kavli Energy Nanosciences Institute at Berkeley, Berkeley, California 94720 (United States); Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)

    2016-05-21

    With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen’s pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
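The counterpoise correction discussed above is simple arithmetic once the component energies are in hand. The sketch below spells out the Boys-Bernardi formula with made-up energies (hartree); only the formula itself is taken from the text, the numbers are illustrative.

```python
# Boys-Bernardi counterpoise (CP) correction, sketched as plain arithmetic on
# precomputed energies (hartree). The numeric values are illustrative, not
# taken from the study above.

def cp_corrected_binding_energy(e_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """CP-corrected interaction energy: each monomer is evaluated in the full
    dimer basis (ghost functions on the partner), which removes the basis set
    superposition error (BSSE) by construction."""
    return e_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

def bsse_estimate(e_a_own, e_b_own, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """BSSE is the artificial stabilization each monomer gains by borrowing
    the partner's basis functions."""
    return (e_a_own - e_a_in_dimer_basis) + (e_b_own - e_b_in_dimer_basis)

# Illustrative numbers for a hypothetical dimer in a small basis:
e_dimer = -152.060
e_a_own, e_b_own = -76.025, -76.025        # monomers in their own basis
e_a_ghost, e_b_ghost = -76.027, -76.027    # monomers in the full dimer basis

e_int_cp = cp_corrected_binding_energy(e_dimer, e_a_ghost, e_b_ghost)
bsse = bsse_estimate(e_a_own, e_b_own, e_a_ghost, e_b_ghost)
```

The monomer-in-dimer-basis energies are always lower than the monomer-in-own-basis energies, so the CP-corrected binding energy is weaker (less negative) than the uncorrected one.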

  20. On the use of Locally Dense Basis Sets in the Calculation of EPR Hyperfine Couplings

    DEFF Research Database (Denmark)

    Hedegård, Erik D.; Sauer, Stephan P. A.; Milhøj, Birgitte O.

    2013-01-01

The usage of locally dense basis sets in the calculation of Electron Paramagnetic Resonance (EPR) hyperfine coupling constants is investigated at the level of Density Functional Theory (DFT) for two model systems of biologically important transition metal complexes: one for the active site in the c...

  1. On the use of locally dense basis sets in the calculation of EPR hyperfine couplings

    DEFF Research Database (Denmark)

    Milhøj, Birgitte Olai; Hedegård, Erik D.; Sauer, Stephan P. A.

    2013-01-01

The usage of locally dense basis sets in the calculation of Electron Paramagnetic Resonance (EPR) hyperfine coupling constants is investigated at the level of Density Functional Theory (DFT) for two model systems of biologically important transition metal complexes: one for the active site in the c...

  2. Correlation consistent basis sets for actinides. II. The atoms Ac and Np-Lr.

    Science.gov (United States)

    Feng, Rulin; Peterson, Kirk A

    2017-08-28

New correlation consistent basis sets optimized using the all-electron third-order Douglas-Kroll-Hess (DKH3) scalar relativistic Hamiltonian are reported for the actinide elements Ac and Np through Lr. These complete the series of sets reported previously for Th-U [K. A. Peterson, J. Chem. Phys. 142, 074105 (2015); M. Vasiliu et al., J. Phys. Chem. A 119, 11422 (2015)]. The new sets range in size from double- to quadruple-zeta and encompass both those optimized for valence (6s6p5f7s6d) and outer-core electron correlations (valence + 5s5p5d). The final sets have been contracted for both the DKH3 and eXact 2-component (X2C) Hamiltonians, yielding cc-pVnZ-DK3/cc-pVnZ-X2C sets for valence correlation and cc-pwCVnZ-DK3/cc-pwCVnZ-X2C sets for outer-core correlation (n = D, T, Q in each case). In order to test the effectiveness of the new basis sets, both atomic and molecular benchmark calculations have been carried out. In the first case, the first three atomic ionization potentials (IPs) of all the actinide elements Ac-Lr have been calculated using the Feller-Peterson-Dixon (FPD) composite approach, primarily with the multireference configuration interaction (MRCI) method. Excellent convergence towards the respective complete basis set (CBS) limits is achieved with the new sets, leading to good agreement with experiment, where these exist, after accurately accounting for spin-orbit effects using the 4-component Dirac-Hartree-Fock method. For a molecular test, the IP and atomization energy (AE) of PuO2 have been calculated also using the FPD method but using a coupled cluster approach with spin-orbit coupling accounted for using the 4-component MRCI. The present calculations yield an IP0 for PuO2 of 159.8 kcal/mol, which is in excellent agreement with the experimental electron transfer bracketing value of 162 ± 3 kcal/mol. Likewise, the calculated 0 K AE of 305.6 kcal/mol is in very good agreement with the currently accepted experimental value of 303.1 ± 5 kcal/mol.

  3. The Bethe Sum Rule and Basis Set Selection in the Calculation of Generalized Oscillator Strengths

    DEFF Research Database (Denmark)

    Cabrera-Trujillo, Remigio; Sabin, John R.; Oddershede, Jens

    1999-01-01

    Fulfillment of the Bethe sum rule may be construed as a measure of basis set quality for atomic and molecular properties involving the generalized oscillator strength distribution. It is first shown that, in the case of a complete basis, the Bethe sum rule is fulfilled exactly in the random phase...

  4. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    Science.gov (United States)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
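The review above surveys extrapolation protocols in detail; as a baseline, the widely used two-point inverse-cubic scheme for the correlation energy (E(X) = E_CBS + A/X^3, after Helgaker and co-workers) can be sketched in a few lines. This is the textbook formula, not necessarily the specific scheme the review advocates.

```python
# Two-point inverse-cubic CBS extrapolation of the correlation energy,
# assuming the model E(X) = E_CBS + A / X**3 at cardinal numbers X.
# A generic textbook scheme, not the review's own protocol.

def cbs_two_point(e_x, x, e_y, y):
    """Eliminate A between E(X) and E(Y) (X < Y) and solve for E_CBS."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Synthetic check: energies generated exactly from the model must
# extrapolate back to the assumed limit.
e_cbs_true, a = -0.350, 0.25
e_tz = e_cbs_true + a / 3**3   # "triple-zeta", X = 3
e_qz = e_cbs_true + a / 4**3   # "quadruple-zeta", X = 4
e_cbs = cbs_two_point(e_tz, 3, e_qz, 4)
```

The same two-point elimination works for any assumed power law; only the exponent (here 3) changes.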

  5. Consistent gaussian basis sets of double- and triple-zeta valence with polarization quality of the fifth period for solid-state calculations.

    Science.gov (United States)

    Laun, Joachim; Vilela Oliveira, Daniel; Bredow, Thomas

    2018-02-22

Consistent basis sets of double- and triple-zeta valence with polarization quality for the fifth period have been derived for periodic quantum-chemical solid-state calculations with the crystalline-orbital program CRYSTAL. They are an extension of the pob-TZVP basis sets, and are based on the full-relativistic effective core potentials (ECPs) of the Stuttgart/Cologne group and on the def2-SVP and def2-TZVP valence basis of the Ahlrichs group. We optimized orbital exponents and contraction coefficients to supply robust and stable self-consistent field (SCF) convergence for a wide range of different compounds. The computed crystal structures are compared to those obtained with standard basis sets available from the CRYSTAL basis set database. For the applied hybrid density functional PW1PW, the average deviations of calculated lattice constants from experimental references are smaller with pob-DZVP and pob-TZVP than with standard basis sets. © 2018 Wiley Periodicals, Inc.

  6. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    Science.gov (United States)

    Feller, David; Peterson, Kirk A

    2013-08-28

The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
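The abstract's distinction between RMS/MAD and the mean signed deviation is worth making explicit: a bias that the signed mean exposes can hide inside ordinary error norms. A minimal sketch with illustrative numbers (not data from the study):

```python
# The three error metrics quoted above, defined explicitly. Illustrative
# data only: a uniformly shifted data set makes MSD equal MAD, the
# signature of a systematic bias rather than random scatter.
import math

def deviation_metrics(calc, ref):
    d = [c - r for c, r in zip(calc, ref)]
    n = len(d)
    rmsd = math.sqrt(sum(x * x for x in d) / n)      # root mean square deviation
    mad = sum(abs(x) for x in d) / n                 # mean absolute deviation
    msd = sum(d) / n                                 # mean signed deviation
    return rmsd, mad, msd

ref = [100.0, 150.0, 200.0, 250.0]        # hypothetical reference energies
calc = [r + 0.4 for r in ref]             # systematically shifted by +0.4
rmsd, mad, msd = deviation_metrics(calc, ref)
```

For random, zero-centered errors the MSD would shrink toward zero while RMSD and MAD stay finite; here all three coincide, flagging the bias.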

  7. Estimation of isotropic nuclear magnetic shieldings in the CCSD(T) and MP2 complete basis set limit using affordable correlation calculations

    DEFF Research Database (Denmark)

    Kupka, Teobald; Stachów, Michal; Kaminsky, Jakub

    2013-01-01

A linear correlation between isotropic nuclear magnetic shielding constants for seven model molecules (CH2O, H2O, HF, F2, HCN, SiH4 and H2S) calculated with 37 methods (34 density functionals, RHF, MP2 and CCSD(T)), with the affordable pcS-2 basis set, and the corresponding complete basis set results, estimated from calculations with the family of polarization-consistent pcS-n basis sets, is reported. This dependence was also supported by inspection of profiles of deviation between CBS-estimated nuclear shieldings and those obtained with the significantly smaller basis sets pcS-2 and aug-cc-pVTZ-J for the selected set of 37 calculation methods. It was possible to formulate a practical approach of estimating the values of isotropic nuclear magnetic shielding constants at the CCSD(T)/CBS and MP2/CBS levels from affordable CCSD(T)/pcS-2, MP2/pcS-2 and DFT/CBS calculations with pcS-n basis sets. The proposed method...

  8. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)
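The core CIP idea — constraining the subgrid polynomial by both function values and spatial derivatives at the cell ends — reduces, for the cubic case, to Hermite-type coefficient formulas. A minimal single-cell sketch, not the authors' full advection scheme (which also propagates the derivative equation):

```python
# One CIP cell: a cubic profile on [x0, x1] constrained by values f and
# derivatives d at BOTH ends, so the subgrid solution is consistent with
# the spatial-derivative constraint. A schematic sketch of the profile
# construction only.

def cip_cubic(x0, x1, f0, f1, d0, d1):
    """Coefficients (a, b, c, d) of p(s) = a s^3 + b s^2 + c s + d with
    s = x - x0, matching p(0)=f0, p'(0)=d0, p(h)=f1, p'(h)=d1."""
    h = x1 - x0
    a = (d0 + d1) / h**2 - 2.0 * (f1 - f0) / h**3
    b = 3.0 * (f1 - f0) / h**2 - (2.0 * d0 + d1) / h
    return a, b, d0, f0

def cip_eval(coeffs, x0, x):
    a, b, c, d = coeffs
    s = x - x0
    return ((a * s + b) * s + c) * s + d

# Sanity check: a cubic profile is reproduced exactly from end-point data.
f = lambda x: x**3 - 2.0 * x + 1.0
df = lambda x: 3.0 * x**2 - 2.0
coeffs = cip_cubic(0.0, 1.0, f(0.0), f(1.0), df(0.0), df(1.0))
mid = cip_eval(coeffs, 0.0, 0.5)
```

Raising the polynomial order (as the 5th-order variant does) adds higher-derivative constraints in the same pattern.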

  9. Predicting Pt-195 NMR chemical shift using new relativistic all-electron basis set

    NARCIS (Netherlands)

    Paschoal, D.; Fonseca Guerra, C.; de Oliveira, M.A.L.; Ramalho, T.C.; Dos Santos, H.F.

    2016-01-01

    Predicting NMR properties is a valuable tool to assist the experimentalists in the characterization of molecular structure. For heavy metals, such as Pt-195, only a few computational protocols are available. In the present contribution, all-electron Gaussian basis sets, suitable to calculate the

  10. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    Science.gov (United States)

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
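The claim that Chebyshev polynomials allow rapid and accurate kinetic-energy (derivative) evaluation can be illustrated generically with NumPy's Chebyshev module; this is a sketch of the basis-set property, not the authors' scattering code.

```python
# Chebyshev polynomials as a basis for spectrally accurate derivative
# evaluation: expand a Gaussian "wavepacket" at Chebyshev-Lobatto nodes,
# differentiate the expansion exactly, and compare with the analytic
# second derivative. Illustrative only.
import numpy as np
from numpy.polynomial import chebyshev as C

n = 64
# Chebyshev-Lobatto nodes cluster near the boundaries of [-1, 1],
# avoiding Runge oscillations of equispaced interpolation.
x = np.cos(np.pi * np.arange(n + 1) / n)
psi = np.exp(-25.0 * x**2)

c = C.chebfit(x, psi, n)        # expansion coefficients (interpolation here)
d2c = C.chebder(c, 2)           # exact 2nd derivative of the expansion
xq = np.linspace(-0.5, 0.5, 11)
d2_spectral = C.chebval(xq, d2c)
d2_exact = (2500.0 * xq**2 - 50.0) * np.exp(-25.0 * xq**2)
max_err = float(np.max(np.abs(d2_spectral - d2_exact)))
```

For smooth functions the error decays faster than any power of n, which is why such a basis competes with plane waves without imposing artificial periodicity.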

  11. Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets

    Science.gov (United States)

    Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.

    2011-10-01

    The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit along with an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose the results of Finite Field calculations up to the level of Coupled Cluster theory including Single, Double, Triple, Quadruple and perturbative Pentuple excitations [CCSDTQ(P)] were used, in conjunction with suited extrapolations of energies obtained using augmented and doubly-augmented Dunning's correlation consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels in electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated according to Born-Oppenheimer Molecular Dynamical simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
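The Finite Field route used above extracts the polarizability as the negative second derivative of the energy with respect to an applied field. The sketch below uses a model energy function in place of actual electronic-structure calculations; mu_true and alpha_true are invented illustrative values.

```python
# Finite Field polarizability: with E(F) = E(0) - mu*F - 0.5*alpha*F**2,
# alpha follows from a central second difference of the energy. The model
# energy stands in for real electronic-structure calculations.

def polarizability_central_diff(energy, f):
    """alpha = -(E(+F) - 2 E(0) + E(-F)) / F^2."""
    return -(energy(f) - 2.0 * energy(0.0) + energy(-f)) / f**2

mu_true, alpha_true = 0.1, 11.0      # illustrative atomic-unit values
energy = lambda f: -76.0 - mu_true * f - 0.5 * alpha_true * f**2

alpha = polarizability_central_diff(energy, 1e-3)
```

In practice the field strength must balance finite-difference truncation error against numerical noise in the energies; for this exactly quadratic model any small field works.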

  12. Compensation of temperature frequency pushing in microwave resonator meters on the basis of VCO

    Directory of Open Access Journals (Sweden)

    Drobakhin O. O.

    2008-02-01

    It is shown that the influence of temperature oscillations on the error of parameter measurements with microwave resonator meters based on a voltage-controlled oscillator (VCO) can be minimized in software using a special algorithm of VCO frequency-setting correction. An algorithm of VCO frequency-setting correction for a triangular control voltage is proposed.

  13. Using the critical incident technique to define a minimal data set for requirements elicitation in public health.

    Science.gov (United States)

    Olvingson, Christina; Hallberg, Niklas; Timpka, Toomas; Greenes, Robert A

    2002-12-18

    The introduction of computer-based information systems (ISs) in public health provides enhanced possibilities for service improvements and hence also for improvement of the population's health. Not least, new communication systems can help in the socialization and integration process needed between the different professions and geographical regions. Therefore, development of ISs that truly support public health practices requires that technical, cognitive, and social issues be taken into consideration. A notable problem is to capture the 'voices' of all potential users, i.e., the viewpoints of different public health practitioners. Failing to capture these voices will result in inefficient or even useless systems. The aim of this study is to develop a minimal data set for capturing users' voices on problems experienced by public health professionals in their daily work and opinions about how these problems can be solved. The issues of concern thus captured can be used both as the basis for formulating the requirements of ISs for public health professionals and as a means of creating an understanding of the use context. Further, the data can help in directing the design to the features most important for the users.

  14. An editor for the maintenance and use of a bank of contracted Gaussian basis set functions

    International Nuclear Information System (INIS)

    Taurian, O.E.

    1984-01-01

    A bank of basis sets to be used in ab-initio calculations has been created. The bases are sets of contracted Gaussian type orbitals to be used as input to any molecular integral package. In this communication we shall describe the organization of the bank and a portable editor program which was designed for its maintenance and use. This program is operated by commands and it may be used to obtain any kind of information about the bases in the bank as well as to produce output to be directly used as input for different integral programs. The editor may also be used to format basis sets in the conventional way utilized in publications, as well as to generate a complete, or partial, manual of the contents of the bank if so desired. (orig.)

  15. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  16. Atomic Cholesky decompositions: A route to unbiased auxiliary basis sets for density fitting approximation with tunable accuracy and efficiency

    Science.gov (United States)

    Aquilante, Francesco; Gagliardi, Laura; Pedersen, Thomas Bondo; Lindh, Roland

    2009-04-01

    Cholesky decomposition of the atomic two-electron integral matrix has recently been proposed as a procedure for automated generation of auxiliary basis sets for the density fitting approximation [F. Aquilante et al., J. Chem. Phys. 127, 114107 (2007)]. In order to increase computational performance while maintaining accuracy, we propose here to reduce the number of primitive Gaussian functions of the contracted auxiliary basis functions by means of a second Cholesky decomposition. Test calculations show that this procedure is most beneficial in conjunction with highly contracted atomic orbital basis sets such as atomic natural orbitals, and that the error resulting from the second decomposition is negligible. We also demonstrate theoretically as well as computationally that the locality of the fitting coefficients can be controlled by means of the decomposition threshold even with the long-ranged Coulomb metric. Cholesky decomposition-based auxiliary basis sets are thus ideally suited for local density fitting approximations.
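The decomposition-with-threshold idea at the heart of this approach can be shown on any symmetric positive semidefinite matrix: a pivoted incomplete Cholesky factorization stops once the largest remaining diagonal drops below the threshold, which bounds the residual. A generic NumPy sketch, applied to a random low-rank matrix rather than an actual two-electron integral matrix:

```python
# Pivoted incomplete Cholesky with a drop threshold tau: at each step,
# eliminate the column with the largest remaining diagonal. For a PSD
# matrix, the residual satisfies |R_ij| <= sqrt(R_ii R_jj) <= tau.
import numpy as np

def pivoted_cholesky(mat, tau):
    """Return L (n x k) with max|M - L @ L.T| <= tau; tau controls both the
    accuracy and the number k of Cholesky vectors retained."""
    r = mat.astype(float).copy()
    d = np.diag(r).copy()
    vectors = []
    while d.max() > tau:
        p = int(np.argmax(d))
        l = r[:, p] / np.sqrt(d[p])
        vectors.append(l)
        r -= np.outer(l, l)           # deflate the residual
        d = np.diag(r).copy()
    return np.array(vectors).T

# Rank-3 PSD test matrix embedded in 8 dimensions:
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 3))
M = a @ a.T
L = pivoted_cholesky(M, 1e-10)
resid = float(np.max(np.abs(M - L @ L.T)))
rank = L.shape[1]
```

The factorization detects the numerical rank automatically: for an exactly rank-3 matrix, exactly three vectors survive the threshold.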

  17. Performance assessment of density functional methods with Gaussian and Slater basis sets using 7σ orbital momentum distributions of N2O

    Science.gov (United States)

    Wang, Feng; Pang, Wenning; Duffy, Patrick

    2012-12-01

    Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP and X3LYP) and the Hartree-Fock (HF) method has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ). Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained from a Fourier transform into momentum space from single point electronic calculations employing the above models, are compared with experimental measurement of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions for this orbital occur in the small momentum region (i.e. large r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to newer models such as X3LYP and SAOP.
The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the
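The Fourier-transform step that takes a position-space orbital into momentum space (the quantity EMS measures as a density) can be illustrated in one dimension, where a normalized Gaussian transforms into itself. A schematic quadrature sketch, unrelated to the molecular calculations above:

```python
# Momentum-space orbital by direct quadrature of the Fourier transform,
# phi(p) = (2*pi)**-0.5 * Integral psi(x) exp(-i p x) dx; the "measured"
# quantity is the momentum density |phi(p)|**2. For the 1D Gaussian
# psi(x) = pi**-0.25 * exp(-x**2/2), the exact density is
# pi**-0.5 * exp(-p**2).
import numpy as np

x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2.0)   # normalized position-space orbital

def phi(p):
    return np.sum(psi * np.exp(-1j * p * x)) * dx / np.sqrt(2.0 * np.pi)

p_grid = np.array([0.0, 0.5, 1.0, 2.0])
density = np.array([abs(phi(p)) ** 2 for p in p_grid])
density_exact = np.pi**-0.5 * np.exp(-p_grid**2)
max_err = float(np.max(np.abs(density - density_exact)))
```

Note the reciprocity the abstract exploits: a diffuse (large-r) feature in position space maps to the small-momentum region, which is exactly where the discrepancies were found.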

  18. Systematic determination of extended atomic orbital basis sets and application to molecular SCF and MCSCF calculations

    Energy Technology Data Exchange (ETDEWEB)

    Feller, D.F.

    1979-01-01

    The behavior of the two exponential parameters in an even-tempered gaussian basis set is investigated as the set optimally approaches an integral transform representation of the radial portion of atomic and molecular orbitals. This approach permits a highly accurate assessment of the Hartree-Fock limit for atoms and molecules.
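An even-tempered set is fully specified by its two exponential parameters: the Gaussian exponents form a geometric series. A minimal sketch with purely illustrative values of the parameters:

```python
# Even-tempered Gaussian exponents: zeta_k = alpha * beta**k, k = 0..n-1.
# Only (alpha, beta) are optimized as the set grows, which is what makes
# the systematic approach to the basis set limit tractable.

def even_tempered_exponents(alpha, beta, n):
    """Geometric progression of n Gaussian exponents (beta > 1)."""
    return [alpha * beta**k for k in range(n)]

zetas = even_tempered_exponents(0.05, 2.5, 6)   # illustrative alpha, beta
ratios = [b / a for a, b in zip(zetas, zetas[1:])]
```

As n grows with beta tuned toward 1, the geometric grid of exponents densely covers the integral-transform representation of the orbital, which is the limit studied in the thesis above.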

  19. Minimal Blocking Sets in PG(2, 8) and Maximal Partial Spreads in PG(3, 8)

    DEFF Research Database (Denmark)

    Barat, Janos

    2004-01-01

    We prove that PG(2, 8) does not contain minimal blocking sets of size 14. Using this result we prove that 58 is the largest size for a maximal partial spread of PG(3, 8). This supports the conjecture that q2-q+ 2 is the largest size for a maximal partial spread of PG(3, q), q>7....

  20. Molecular Properties by Quantum Monte Carlo: An Investigation on the Role of the Wave Function Ansatz and the Basis Set in the Water Molecule

    Science.gov (United States)

    Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo

    2014-01-01

    Quantum Monte Carlo methods are accurate and promising many body techniques for electronic structure calculations which, in the last years, are encountering a growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for the modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capabilities of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole momenta, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in case of large basis sets. PMID:24526929
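The variational Monte Carlo machinery the paper builds on — sample the square of a trial wave function, average the local energy — fits in a few lines for a toy problem. A 1D harmonic-oscillator sketch (hbar = m = omega = 1) with a Gaussian trial function, not the paper's Jastrow-Antisymmetrised Geminal power ansatz:

```python
# Variational Monte Carlo in miniature: Metropolis sampling of |psi|^2 and
# averaging of the local energy E_L = (H psi)/psi. For psi = exp(-a x^2)
# and H = -0.5 d2/dx2 + 0.5 x^2, one finds
#   E_L(x) = a + x**2 * (0.5 - 2*a**2),
# which is constant at the exact a = 0.5 (the zero-variance property).
import math
import random

def local_energy(x, a):
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, n_steps=20000, step=1.0, seed=7):
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        xp = x + rng.uniform(-step, step)
        # Metropolis acceptance test on |psi|^2 = exp(-2 a x^2)
        if rng.random() < min(1.0, math.exp(-2.0 * a * (xp * xp - x * x))):
            x = xp
        e_sum += local_energy(x, a)
    return e_sum / n_steps

e_trial = vmc_energy(0.5)   # exact trial function: every sample gives 0.5
```

For a != 0.5 the estimator acquires statistical variance and a variationally higher mean (analytically E(a) = a/2 + 1/(8a) >= 0.5), which is what energy minimization over the variational parameters exploits.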

  1. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Gaigong [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Department of Mathematics, University of California, Berkeley, Berkeley, CA 94720 (United States); Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Hu, Wei, E-mail: whu@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Pask, John E., E-mail: pask1@llnl.gov [Physics Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2017-04-15

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H{sub 2} and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.
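The Hellmann-Feynman statement tested above — that the derivative of the eigenvalue equals the expectation value of the derivative of the Hamiltonian when the basis does not move with the atoms — can be checked on a small parametric matrix. A conceptual sketch, unrelated to the discontinuous Galerkin code itself:

```python
# Hellmann-Feynman force check: for H(x) v = E(x) v with an x-independent
# basis (no Pulay terms), -dE/dx = -v.T @ (dH/dx) @ v. Verified against a
# central finite difference on a hypothetical 2x2 Hamiltonian.
import numpy as np

def hamiltonian(x):
    return np.array([[x**2, 0.3], [0.3, -x]])

def dh_dx(x):
    return np.array([[2.0 * x, 0.0], [0.0, -1.0]])

def ground_state(x):
    w, v = np.linalg.eigh(hamiltonian(x))
    return w[0], v[:, 0]

x0 = 0.7
e0, v0 = ground_state(x0)
force_hf = -v0 @ dh_dx(x0) @ v0              # Hellmann-Feynman force
h = 1e-5
force_fd = -(ground_state(x0 + h)[0] - ground_state(x0 - h)[0]) / (2.0 * h)
diff = abs(force_hf - force_fd)
```

With a basis that does depend on atomic positions, the finite-difference force would pick up the additional Pulay terms; the paper's observation is that for the adaptive local basis those terms shrink systematically with basis completeness.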

  2. Consistent structures and interactions by density functional theory with small atomic orbital basis sets.

    Science.gov (United States)

    Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas

    2015-08-07

    A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock exchange. The orbitals are expanded in Ahlrichs-type valence double-zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods.
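A hedged numerical aside: the atom-pairwise dispersion corrections referred to in this record take a D3-like form with a damped C6/r⁶ sum. The sketch below is illustrative only; the combining rule, damping form, and all parameter values (`s6`, `a1`, `a2`, the C6 coefficients) are invented placeholders, not the published PBEh-3c parameters.

```python
import math

# Sketch of an atom-pairwise London dispersion correction of the kind used in
# "3c" composite schemes: E_disp = -s6 * sum_{i<j} C6_ij / (r_ij^6 + damp).
# All numbers below are hypothetical placeholders, not published values.

def pairwise_dispersion(coords, c6, s6=1.0, a1=0.4, a2=4.5):
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            c6ij = math.sqrt(c6[i] * c6[j])          # simple combining rule
            damp = (a1 * math.sqrt(c6ij) + a2) ** 6  # rational (BJ-type) damping
            e -= s6 * c6ij / (r ** 6 + damp)
    return e

e_near = pairwise_dispersion([(0.0, 0.0, 0.0), (0.0, 0.0, 3.0)], [10.0, 10.0])
e_far = pairwise_dispersion([(0.0, 0.0, 0.0), (0.0, 0.0, 6.0)], [10.0, 10.0])
```

The correction is attractive (negative) and decays with separation, while the damping keeps it finite at short range.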

  3. Consistent structures and interactions by density functional theory with small atomic orbital basis sets

    International Nuclear Information System (INIS)

    Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas

    2015-01-01

    A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock exchange. The orbitals are expanded in Ahlrichs-type valence double-zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of “low-cost” electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods.

  4. TREDRA, Minimal Cut Sets Fault Tree Plot Program

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: TREDRA is a computer program for drafting report-quality fault trees. The input to TREDRA is similar to input for standard computer programs that find minimal cut sets from fault trees. Output includes fault tree plots containing all standard fault tree logic and event symbols, gate and event labels, and an output description for each event in the fault tree. TREDRA contains the following features: a variety of program options that allow flexibility in the program output; capability for automatic pagination of the output fault tree, when necessary; input groups which allow labeling of gates, events, and their output descriptions; a symbol library which includes standard fault tree symbols plus several less frequently used symbols; user control of character size and overall plot size; and extensive input error checking and diagnostic-oriented output. 2 - Method of solution: Fault trees are generated by user-supplied control parameters and a coded description of the fault tree structure consisting of the name of each gate, the gate type, the number of inputs to the gate, and the names of these inputs. 3 - Restrictions on the complexity of the problem: TREDRA can produce fault trees with a minimum of 3 and a maximum of 56 levels. The width of each level may range from 3 to 37. A total of 50 transfers is allowed during pagination.
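The gate description TREDRA reads (gate name, gate type, inputs) is the same structure that minimal-cut-set codes expand. As a sketch of how such a description yields minimal cut sets, here is a MOCUS-style top-down expansion; the example tree and dictionary encoding are invented for illustration and are not TREDRA's input format.

```python
# MOCUS-style expansion of a coded fault tree into minimal cut sets.
# gates maps a gate name to ("AND" | "OR", [input names]); any name not
# appearing as a gate is treated as a basic event.

def minimal_cut_sets(gates, top):
    def expand(name):
        if name not in gates:                    # basic event: one singleton cut set
            return [frozenset([name])]
        gtype, inputs = gates[name]
        child_sets = [expand(i) for i in inputs]
        if gtype == "OR":                        # OR: union of the children's cut sets
            return [cs for sets in child_sets for cs in sets]
        out = [frozenset()]                      # AND: cross-product of cut sets
        for sets in child_sets:
            out = [a | b for a in out for b in sets]
        return out

    cuts = set(expand(top))
    # keep only minimal cut sets (discard proper supersets)
    return {c for c in cuts if not any(o < c for o in cuts)}

tree = {"TOP": ("OR", ["G1", "E3"]),
        "G1": ("AND", ["E1", "E2"])}
cuts = minimal_cut_sets(tree, "TOP")
```

For this invented tree the top event fails if E3 fails, or if E1 and E2 both fail.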

  5. 42 CFR 415.170 - Conditions for payment on a fee schedule basis for physician services in a teaching setting.

    Science.gov (United States)

    2010-10-01

    ... physician services in a teaching setting. 415.170 Section 415.170 Public Health CENTERS FOR MEDICARE... BY PHYSICIANS IN PROVIDERS, SUPERVISING PHYSICIANS IN TEACHING SETTINGS, AND RESIDENTS IN CERTAIN SETTINGS Physician Services in Teaching Settings § 415.170 Conditions for payment on a fee schedule basis...

  6. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    Science.gov (United States)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.
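The idea of iteratively updating a potential until its ground-state density matches a target can be illustrated on a toy one-particle, one-dimensional model. This sketch is not the authors' matrix-representation implementation: the grid, the harmonic target, and the simple linear update rule (raise the potential where the density overshoots the target) are all invented for illustration.

```python
import numpy as np

# Toy density-to-potential inversion: find V whose ground-state density
# matches a target density, by the naive update V <- V + alpha*(rho - rho_t).
n, L = 101, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

def ground_density(v):
    # finite-difference Hamiltonian H = -1/2 d^2/dx^2 + V (atomic units)
    lap = (np.diag(np.full(n - 1, 1.0), 1) + np.diag(np.full(n - 1, 1.0), -1)
           - 2.0 * np.eye(n)) / dx**2
    w, vecs = np.linalg.eigh(-0.5 * lap + np.diag(v))
    psi = vecs[:, 0] / np.sqrt(dx)           # normalize so sum(rho)*dx = 1
    return psi**2

rho_target = ground_density(0.5 * x**2)      # density of a harmonic well
v = np.zeros(n)                              # start from a flat potential
err0 = np.sum(np.abs(ground_density(v) - rho_target)) * dx
for _ in range(300):
    rho = ground_density(v)
    v += 0.1 * (rho - rho_target)            # simple linear update
err = np.sum(np.abs(ground_density(v) - rho_target)) * dx
```

The reconstructed density drifts toward the target; more sophisticated schemes (such as the response-function update in the record above) converge much faster.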

  7. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    Science.gov (United States)

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
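The standard numerical device for near-linear dependences of the kind this record mentions is canonical (Löwdin-style) orthogonalization: diagonalize the overlap matrix and drop eigenvectors with eigenvalues below a threshold before forming S^(-1/2). The sketch below shows that generic device, not the LEDO-specific procedure; the example overlap matrix and threshold are invented.

```python
import numpy as np

# Canonical orthogonalization: X such that X^T S X = I, with near-null
# directions of S (eigenvalues below thresh) discarded.
def canonical_orthogonalization(S, thresh=1e-6):
    w, U = np.linalg.eigh(S)
    keep = w > thresh                    # drop near-linearly-dependent combinations
    return U[:, keep] / np.sqrt(w[keep])

# A strongly overlapping 3-function example:
S = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
X = canonical_orthogonalization(S)
I = X.T @ S @ X
```

If any eigenvalue fell below the threshold, `X` would simply have fewer columns, i.e. the working basis would shrink.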

  8. On the accuracy of density-functional theory exchange-correlation functionals for H bonds in small water clusters: Benchmarks approaching the complete basis set limit

    Science.gov (United States)

    Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias

    2007-11-01

    The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.
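"Approaching the complete basis set limit" is commonly made quantitative by two-point extrapolation of correlation energies in the cardinal number X of a correlation-consistent basis, assuming X⁻³ convergence (a Helgaker-type formula). The snippet below shows that standard formula with invented energies; the abstract does not state which extrapolation the authors used.

```python
# Two-point CBS extrapolation assuming E_X = E_CBS + A / X^3:
# E_CBS = (X^3 E_X - Y^3 E_Y) / (X^3 - Y^3), for cardinal numbers X > Y.
def cbs_two_point(e_x, x, e_y, y):
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Synthetic check: energies generated from the assumed form recover E_CBS.
e_cbs_true, a = -1.0, 0.5
e3 = e_cbs_true + a / 3**3                 # "triple-zeta" energy
e4 = e_cbs_true + a / 4**3                 # "quadruple-zeta" energy
e_cbs = cbs_two_point(e4, 4, e3, 3)
```

By construction the extrapolation is exact on data that follow the X⁻³ form; on real correlation energies it removes most of the residual basis set incompleteness error.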

  9. Formulation of improved basis sets for the study of polymer dynamics through diffusion theory methods.

    Science.gov (United States)

    Gaspari, Roberto; Rapallo, Arnaldo

    2008-06-28

    In this work a new method is proposed for the choice of basis functions in diffusion theory (DT) calculations. This method, named hybrid basis approach (HBA), combines the two previously adopted long time sorting procedure (LTSP) and maximum correlation approximation (MCA) techniques, the former emphasizing contributions from the long-time dynamics, the latter being based on the local correlations along the chain. In order to fulfill this task, the HBA procedure employs a first order basis set corresponding to a high order MCA one and generates upper order approximations according to LTSP. A test of the method is made first on a melt of cis-1,4-polyisoprene decamers where HBA and LTSP are compared in terms of efficiency. Both convergence properties and numerical stability are improved by the use of the HBA basis set whose performance is evaluated on local dynamics, by computing the correlation times of selected bond vectors along the chain, and on global ones, through the eigenvalues of the diffusion operator L. Further use of the DT with a HBA basis set has been made on a 71-mer of syndiotactic trans-1,2-polypentadiene in toluene solution, whose dynamical properties have been computed with a high order calculation and compared to the "numerical experiment" provided by the molecular dynamics (MD) simulation in explicit solvent. The necessary equilibrium averages have been obtained by a vacuum trajectory of the chain where solvent effects on conformational properties have been reproduced with a proper screening of the nonbonded interactions, corresponding to a definite value of the mean radius of gyration of the polymer in vacuum. Results show a very good agreement between DT calculations and the MD numerical experiment. This suggests a further use of DT methods with the necessary input quantities obtained by the only knowledge of some experimental values, i.e., the mean radius of gyration of the chain and the viscosity of the solution, and by a suitable vacuum

  10. Pseudo-atomic orbitals as basis sets for the O(N) DFT code CONQUEST

    Energy Technology Data Exchange (ETDEWEB)

    Torralba, A S; Brazdova, V; Gillan, M J; Bowler, D R [Materials Simulation Laboratory, UCL, Gower Street, London WC1E 6BT (United Kingdom); Todorovic, M; Miyazaki, T [National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047 (Japan); Choudhury, R [London Centre for Nanotechnology, UCL, 17-19 Gordon Street, London WC1H 0AH (United Kingdom)], E-mail: david.bowler@ucl.ac.uk

    2008-07-23

    Various aspects of the implementation of pseudo-atomic orbitals (PAOs) as basis functions for the linear scaling CONQUEST code are presented. Preliminary results for the assignment of a large set of PAOs to a smaller space of support functions are encouraging, and an important related proof on the necessary symmetry of the support functions is shown. Details of the generation and integration schemes for the PAOs are also given.

  11. Density functional theory calculations of the lowest energy quintet and triplet states of model hemes: role of functional, basis set, and zero-point energy corrections.

    Science.gov (United States)

    Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman

    2008-04-24

    We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and B97-1, OLYP, and TPSS functionals with 6-31G and 6-31G* basis sets. Only hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (ΔEel), in several cases resulting in inversion of the sign of ΔEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with small 6-31G and 6-31G* basis sets. Deviation of the computed frequency of the Fe-Im stretching mode from the experimental value with the basis set decreased in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results.

  12. Symmetry-adapted basis sets automatic generation for problems in chemistry and physics

    CERN Document Server

    Avery, John Scales; Avery, James Emil

    2012-01-01

    In theoretical physics, theoretical chemistry and engineering, one often wishes to solve partial differential equations subject to a set of boundary conditions. This gives rise to eigenvalue problems of which some solutions may be very difficult to find. For example, the problem of finding eigenfunctions and eigenvalues for the Hamiltonian of a many-particle system is usually so difficult that it requires approximate methods, the most common of which is expansion of the eigenfunctions in terms of basis functions that obey the boundary conditions of the problem. The computational effort needed

  13. Quadratic Hedging of Basis Risk

    Directory of Open Access Journals (Sweden)

    Hardy Hulley

    2015-02-01

    This paper examines a simple basis risk model based on correlated geometric Brownian motions. We apply quadratic criteria to minimize basis risk and hedge in an optimal manner. Initially, we derive the Föllmer–Schweizer decomposition for a European claim. This allows pricing and hedging under the minimal martingale measure, corresponding to the local risk-minimizing strategy. Furthermore, since the mean-variance tradeoff process is deterministic in our setup, the minimal martingale- and variance-optimal martingale measures coincide. Consequently, the mean-variance optimal strategy is easily constructed. Simple pricing and hedging formulae for put and call options are derived in terms of the Black–Scholes formula. Due to market incompleteness, these formulae depend on the drift parameters of the processes. By making a further equilibrium assumption, we derive an approximate hedging formula, which does not require knowledge of these parameters. The hedging strategies are tested using Monte Carlo experiments, and are compared with results achieved using a utility maximization approach.
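As a hedged illustration of the setting (not the paper's exact formulae): for a call on a non-traded asset U hedged with a correlated traded asset S, both geometric Brownian motions, the local risk-minimizing position in S is, in the simplest form, the Black–Scholes delta with respect to U scaled by the regression coefficient of dU on dS. All parameter values below are invented, and the hedge formula is a textbook-style sketch of this structure, not a quotation of the paper.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(u, k, r, sigma, t):
    """Black-Scholes call price on an asset with spot u."""
    d1 = (math.log(u / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return u * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def basis_risk_hedge(u, s, k, r, sigma_u, sigma_s, rho, t):
    """Sketch: BS delta w.r.t. U, scaled by rho*(sigma_u*U)/(sigma_s*S)."""
    d1 = (math.log(u / k) + (r + 0.5 * sigma_u**2) * t) / (sigma_u * math.sqrt(t))
    return rho * (sigma_u * u) / (sigma_s * s) * norm_cdf(d1)

price = bs_call(100.0, 100.0, 0.0, 0.2, 1.0)
hedge = basis_risk_hedge(100.0, 100.0, 100.0, 0.0, 0.2, 0.2, 1.0, 1.0)
```

In the degenerate case rho = 1 with identical dynamics, the scaled hedge collapses to the ordinary Black–Scholes delta, as expected.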

  14. A simplified density matrix minimization for linear scaling self-consistent field theory

    International Nuclear Information System (INIS)

    Challacombe, M.

    1999-01-01

    A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained floating-point operation rates as high as 50% of the theoretical peak, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation. copyright 1999 American Institute of Physics
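The McWeeny purification mentioned in this record is the iteration P' = 3P² − 2P³, which drives a nearly idempotent density matrix to idempotency (P² = P) quadratically. A minimal sketch on an invented 2×2 matrix with occupations close to 0 and 1:

```python
import numpy as np

# McWeeny purification step: P' = 3 P^2 - 2 P^3.
# Eigenvalues near 1 are pushed to 1, eigenvalues near 0 to 0.
def mcweeny_step(P):
    P2 = P @ P
    return 3.0 * P2 - 2.0 * P2 @ P

P = np.diag([0.95, 0.08])        # nearly idempotent starting guess (invented)
for _ in range(5):
    P = mcweeny_step(P)

idempotency_error = np.linalg.norm(P @ P - P)
```

In linear-scaling codes the same iteration is carried out in sparse matrix algebra, which is where the multiply-count savings described above matter.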

  15. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    International Nuclear Information System (INIS)

    Lazariev, A; Graveron-Demilly, D; Allouche, A-R; Aubert-Frécon, M; Fauvelle, F; Piotto, M; Elbayed, K; Namer, I-J; Van Ormondt, D

    2011-01-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS ¹H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
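The core alignment idea, maximizing a normalized cross-correlation between a simulated basis signal and the measured signal to recover a chemical shift mismatch, can be sketched on synthetic data. This is an illustration of the principle only, not the QM-QUEST algorithm; the Lorentzian-like peak and the grid-search over integer shifts are invented simplifications.

```python
import numpy as np

# Find the shift (in points) that maximizes the normalized cross-correlation
# between a measured signal and a simulated basis signal.
def best_shift(measured, simulated, max_shift):
    best, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(simulated, s)
        c = np.dot(measured, shifted) / (
            np.linalg.norm(measured) * np.linalg.norm(shifted))
        if c > best_corr:
            best, best_corr = s, c
    return best

x = np.arange(512)
peak = 1.0 / (1.0 + ((x - 200.0) / 5.0) ** 2)   # synthetic "simulated" peak
measured = np.roll(peak, 7)                      # same peak, shifted by 7 points
shift = best_shift(measured, peak, 20)
```

Once the shift is found, the basis signal is translated accordingly before the quantitation fit, so the metabolite fingerprint itself is preserved.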

  16. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    Science.gov (United States)

    Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.

  17. EVALUATION OF SETTING TIME OF MINERAL TRIOXIDE AGGREGATE AND BIODENTINE IN THE PRESENCE OF HUMAN BLOOD AND MINIMAL ESSENTIAL MEDIA - AN IN VITRO STUDY

    Directory of Open Access Journals (Sweden)

    Gopi Krishna Reddy Moosani

    2017-12-01

    BACKGROUND The aim of this study was to compare the ability of MTA and Biodentine to set in the presence of human blood and minimal essential media. MATERIALS AND METHODS Eighty 1 × 3 inch plexiglass sheets were taken. In each sheet, 10 wells were created and divided into 10 groups. Odd-numbered groups were filled with MTA and even-numbered groups were filled with Biodentine. Of these groups, 4 were control groups and the remaining 6 were experimental groups (i.e., blood, minimal essential media, and blood with minimal essential media). Each block was submerged for 4, 5, 6, 8, 24, 36, and 48 hours in an experimental liquid at 37 °C with 100% humidity. RESULTS The setting times varied for the 2 materials, with contrasting differences in the setting times between MTA and Biodentine samples. The majority of the MTA samples had not set at 24 hours, but by 36 hours all MTA samples had set. All Biodentine samples, by contrast, had set by 6 hours. There is a significant difference in setting time between MTA and Biodentine. CONCLUSION This outcome draws into question the setting time proposed by each respective manufacturer. Furthermore, despite Biodentine being marketed as a direct competitor to MTA with superior handling properties, MTA consistently set at a faster rate under the conditions of this study.

  18. On sets of vectors of a finite vector space in which every subset of basis size is a basis II

    OpenAIRE

    Ball, Simeon; De Beule, Jan

    2012-01-01

    This article contains a proof of the MDS conjecture for k ≤ 2p − 2. That is, if S is a set of vectors of F_q^k in which every subset of S of size k is a basis, where q = p^h, p is prime and q is not prime, and k ≤ 2p − 2, then |S| ≤ q + 1. It also contains a short proof of the same fact for k ≤ p, for all q.
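The property in question, a set of q + 1 vectors in which every k-subset is a basis, can be verified computationally for a small case. The classical example is the normal rational curve plus a point at infinity; the check below (k = 3, q = p = 7) is an illustration added here, not taken from the article, and relies on the Vandermonde determinant being nonzero for distinct parameters.

```python
from itertools import combinations

# For p = 7, k = 3: the vectors (1, t, t^2) for t in F_p, plus (0, 0, 1),
# form p + 1 vectors of F_p^3 in which every 3-subset is a basis
# (every 3x3 determinant is nonzero mod p).
p = 7
S = [(1, t, (t * t) % p) for t in range(p)] + [(0, 0, 1)]

def det3(a, b, c):
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

every_triple_is_basis = all(det3(*trip) % p != 0 for trip in combinations(S, 3))
```

This exhibits a set meeting the bound |S| = q + 1; the MDS conjecture says no larger such set exists (for most q and k).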

  19. Zeroth-order exchange energy as a criterion for optimized atomic basis sets in interatomic force calculations

    International Nuclear Information System (INIS)

    Varandas, A.J.C.

    1980-01-01

    A suggestion is made for using the zeroth-order exchange term, at the one-exchange level, in the perturbation development of the interaction energy as a criterion for optimizing the atomic basis sets in interatomic force calculations. The approach is illustrated for the case of two helium atoms. (orig.)

  20. The static response function in Kohn-Sham theory: An appropriate basis for its matrix representation in case of finite AO basis sets

    International Nuclear Information System (INIS)

    Kollmar, Christian; Neese, Frank

    2014-01-01

    The role of the static Kohn-Sham (KS) response function describing the response of the electron density to a change of the local KS potential is discussed in both the theory of the optimized effective potential (OEP) and the so-called inverse Kohn-Sham problem involving the task to find the local KS potential for a given electron density. In a general discussion of the integral equation to be solved in both cases, it is argued that a unique solution of this equation can be found even in case of finite atomic orbital basis sets. It is shown how a matrix representation of the response function can be obtained if the exchange-correlation potential is expanded in terms of a Schmidt-orthogonalized basis comprising orbitals products of occupied and virtual orbitals. The viability of this approach in both OEP theory and the inverse KS problem is illustrated by numerical examples

  1. Unbounded dynamics and compact invariant sets of one Hamiltonian system defined by the minimally coupled field

    Energy Technology Data Exchange (ETDEWEB)

    Starkov, Konstantin E., E-mail: kstarkov@ipn.mx

    2015-06-12

    In this paper we study some features of the global dynamics of a Hamiltonian system arising in cosmology which is formed by the minimally coupled field; this system was introduced by Maciejewski et al. in 2007. We establish that under some simple conditions imposed on the parameters of this system all trajectories are unbounded in both time directions. Further, we present other conditions on the system parameters under which we localize the domain with unbounded dynamics; this domain is defined with the help of bounds for values of the Hamiltonian level surface parameter. We describe the case when our system possesses periodic orbits, which are found explicitly. In the remaining cases we obtain localization bounds for compact invariant sets. - Highlights: • The domain with unbounded dynamics is localized. • Equations for periodic orbits are given in one level set. • Localizations for compact invariant sets are obtained.

  2. Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set

    Science.gov (United States)

    Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David

    2018-05-01

    The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time-dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO-based scheme in the context of extreme ultraviolet and soft X-ray spectroscopies involving core-electronic excitations are discussed.
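A toy illustration of velocity-gauge dynamics (an invented model, not the paper's implementation): in a tight-binding-like picture a uniform vector potential A(t) enters the hopping as a Peierls phase, and a Crank–Nicolson step keeps the propagation unitary. The two-site Hamiltonian, field envelope, and time step below are all made up for demonstration.

```python
import numpy as np

# Two-site model with a Peierls-phase hopping -t0 * exp(i A(t)), propagated
# with Crank-Nicolson: (I + i dt/2 H) psi_new = (I - i dt/2 H) psi_old.
dt, steps, t0 = 0.02, 500, 1.0
psi = np.array([1.0 + 0.0j, 0.0 + 0.0j])
I2 = np.eye(2, dtype=complex)

for n in range(steps):
    A = 0.5 * np.sin(0.3 * n * dt)                  # toy vector potential (a.u.)
    H = np.array([[0.0, -t0 * np.exp(1j * A)],
                  [-t0 * np.exp(-1j * A), 0.0]])    # Hermitian at every step
    psi = np.linalg.solve(I2 + 0.5j * dt * H, (I2 - 0.5j * dt * H) @ psi)

norm = np.vdot(psi, psi).real
```

Because the Crank–Nicolson step is a Cayley transform of a Hermitian matrix, the wave-function norm is conserved to machine precision, the same property a production velocity-gauge propagator must maintain.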

  3. Basis set effects on the energy of intramolecular O-H...halogen hydrogen bridges in ortho-halophenols and 2,4-dihalo-malonaldehyde

    International Nuclear Information System (INIS)

    Buemi, Giuseppe

    2004-01-01

    Ab initio calculations of hydrogen bridge energies (EHB) of 2-halophenols were carried out at various levels of sophistication using a variety of basis sets in order to verify their ability to reproduce the experimentally determined gas-phase ordering, and the related experimental frequencies of the O-H vibration stretching mode. The semiempirical AM1 and PM3 approaches were adopted, too. Calculations were extended to the O-H...X bridge of a particular conformation of 2,4-dihalo-malonaldehyde. The results, and their trend with respect to the electronegativity of the halogen series, are highly dependent on the basis set. The less sophisticated 3-21G, CEP121G and LANL2DZ basis sets (with and without correlation energy inclusion) predict EHB decreasing with decreasing halogen electronegativity, whilst the opposite is generally found when more extended bases are used. However, all high-level calculations confirm the nearly negligible energy differences between the examined O-H...X bridges.

  4. Ab initio calculation of reaction energies. III. Basis set dependence of relative energies on the FH2 and H2CO potential energy surfaces

    International Nuclear Information System (INIS)

    Frisch, M.J.; Binkley, J.S.; Schaefer, H.F. III

    1984-01-01

    The relative energies of the stationary points on the FH₂ and H₂CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H₂ elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F + H₂ → FH + H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol⁻¹ of the experimental value using the largest basis set considered. The qualitative features of the H₂CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.

  5. Perturbation expansion theory corrected from basis set superposition error. I. Locally projected excited orbitals and single excitations.

    Science.gov (United States)

    Nagata, Takeshi; Iwata, Suehiro

    2004-02-22

    The locally projected self-consistent field molecular orbital method for molecular interaction (LP SCF MI) is reformulated for multifragment systems. For the perturbation expansion, two types of the local excited orbitals are defined; one is fully local in the basis set on a fragment, and the other has to be partially delocalized to the basis sets on the other fragments. The perturbation expansion calculations only within single excitations (LP SE MP2) are tested for water dimer, hydrogen fluoride dimer, and colinear symmetric ArM+ Ar (M = Na and K). The calculated binding energies of LP SE MP2 are all close to the corresponding counterpoise corrected SCF binding energy. By adding the single excitations, the deficiency in LP SCF MI is thus removed. The results suggest that the exclusion of the charge-transfer effects in LP SCF MI might indeed be the cause of the underestimation for the binding energy. (c) 2004 American Institute of Physics.
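The counterpoise (CP) correction against which these binding energies are compared evaluates each monomer in the full dimer basis. The arithmetic is simple enough to state as code; the energies below (in hartree) are invented for illustration and are not data from this record.

```python
# Counterpoise-corrected interaction energy: the dimer AB and both monomers
# are all computed in the same (dimer) basis.
def cp_interaction(e_ab, e_a_in_ab_basis, e_b_in_ab_basis):
    return e_ab - e_a_in_ab_basis - e_b_in_ab_basis

# The BSSE itself is the energy each monomer gains from borrowing the
# partner's basis functions.
def bsse(e_a_in_ab_basis, e_a_alone, e_b_in_ab_basis, e_b_alone):
    return (e_a_alone - e_a_in_ab_basis) + (e_b_alone - e_b_in_ab_basis)

e_int = cp_interaction(-152.10, -76.04, -76.05)          # invented numbers
e_bsse = bsse(-76.04, -76.03, -76.05, -76.04)
```

A scheme whose uncorrected binding energies already match the CP-corrected reference, as reported above for LP SE MP2, is effectively free of BSSE.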

  6. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: Application to SSSH

    Science.gov (United States)

    Kolmann, Stephen J.; Jordan, Meredith J. T.

    2010-02-01

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol⁻¹ at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol⁻¹ dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol⁻¹ lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol⁻¹ lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol⁻¹ thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.

  7. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: application to SSSH.

    Science.gov (United States)

    Kolmann, Stephen J; Jordan, Meredith J T

    2010-02-07

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol(-1) at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol(-1) dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol(-1) lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol(-1) lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol(-1) thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.
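The harmonic reference against which the records above compare their anharmonic ZPEs is just half the sum of the normal-mode vibrational quanta. A minimal sketch in Python, with invented placeholder wavenumbers rather than the actual SSSH modes:

```python
# Harmonic zero-point energy: ZPE = (1/2) * sum_i h*c*nu_i over normal modes.
# The wavenumbers below are illustrative placeholders, NOT the real SSSH values.
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e10       # speed of light, cm/s
NA = 6.02214076e23      # Avogadro constant, 1/mol

def harmonic_zpe_kj_per_mol(wavenumbers_cm1):
    """Harmonic ZPE in kJ/mol from normal-mode wavenumbers (cm^-1)."""
    return 0.5 * H * C * sum(wavenumbers_cm1) * NA / 1000.0

# Hypothetical six normal-mode wavenumbers for a four-atom radical:
modes = [450.0, 550.0, 680.0, 850.0, 2500.0, 2600.0]
zpe = harmonic_zpe_kj_per_mol(modes)
```

Anharmonic corrections such as those computed by diffusion Monte Carlo then lower this harmonic estimate, as the abstract reports.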

  8. Specialized minimal PDFs for optimized LHC calculations

    NARCIS (Netherlands)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct

  9. Higher-Order Minimal Functional Graphs

    DEFF Research Database (Denmark)

    Jones, Neil D; Rosendahl, Mads

    1994-01-01

    We present a minimal function graph semantics for a higher-order functional language with applicative evaluation order. The semantics captures the intermediate calls performed during the evaluation of a program. This information may be used in abstract interpretation as a basis for proving...

  10. Radiobiological basis for setting neutron radiation safety standards

    International Nuclear Information System (INIS)

    Straume, T.

    1985-01-01

    Present neutron standards, adopted more than 20 yr ago from a weak radiobiological data base, have been in doubt for a number of years and are currently under challenge. Moreover, recent dosimetric re-evaluations indicate that Hiroshima neutron doses may have been much lower than previously thought, suggesting that direct data for neutron-induced cancer in humans may in fact not be available. These recent developments make it urgent to determine the extent to which neutron cancer risk in man can be estimated from data that are available. Two approaches are proposed here that are anchored in particularly robust epidemiological and experimental data and appear most likely to provide reliable estimates of neutron cancer risk in man. The first approach uses gamma-ray dose-response relationships for human carcinogenesis, available from Nagasaki (Hiroshima data are also considered), together with highly characterized neutron and gamma-ray data for human cytogenetics. When tested against relevant experimental data, this approach either adequately predicts or somewhat overestimates neutron tumorigenesis (and mutagenesis) in animals. The second approach also uses the Nagasaki gamma-ray cancer data, but together with neutron RBEs from animal tumorigenesis studies. Both approaches give similar results and provide a basis for setting neutron radiation safety standards. They appear to be an improvement over previous approaches, including those that rely on highly uncertain maximum neutron RBEs and unnecessary extrapolations of gamma-ray data to very low doses. Results suggest that, at the presently accepted neutron dose limit of 0.5 rad/yr, the cancer mortality risk to radiation workers is not very different from accidental mortality risks to workers in various nonradiation occupations

  11. The 6-31B(d) basis set and the BMC-QCISD and BMC-CCSD multicoefficient correlation methods.

    Science.gov (United States)

    Lynch, Benjamin J; Zhao, Yan; Truhlar, Donald G

    2005-03-03

    Three new multicoefficient correlation methods (MCCMs) called BMC-QCISD, BMC-CCSD, and BMC-CCSD-C are optimized against 274 data that include atomization energies, electron affinities, ionization potentials, and reaction barrier heights. A new basis set called 6-31B(d) is developed and used as part of the new methods. BMC-QCISD has mean unsigned errors in calculating atomization energies per bond and barrier heights of 0.49 and 0.80 kcal/mol, respectively. BMC-CCSD has mean unsigned errors of 0.42 and 0.71 kcal/mol for the same two quantities. BMC-CCSD-C is an equally effective variant of BMC-CCSD that employs Cartesian rather than spherical harmonic basis sets. The mean unsigned error of BMC-CCSD or BMC-CCSD-C for atomization energies, barrier heights, ionization potentials, and electron affinities is 22% lower than G3SX(MP2) at an order of magnitude less cost for gradients for molecules with 9-13 atoms, and it scales better (N⁶ vs N⁷, where N is the number of atoms) when the size of the molecule is increased.
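The multicoefficient idea behind methods like BMC-QCISD and BMC-CCSD is to write the energy as a linear combination of cheaper component energies, with coefficients optimized against reference data. A hedged sketch with synthetic component energies (not real BMC data), using an ordinary least-squares fit:

```python
import numpy as np

# Multicoefficient correlation methods approximate a high-level energy as
# E ≈ sum_i c_i * E_i over cheaper component calculations. The component
# matrix below is a synthetic illustration, not actual quantum-chemistry data.
rng = np.random.default_rng(0)
n_data, n_components = 274, 4          # 274 mimics the training-set size above
components = rng.normal(size=(n_data, n_components))   # E_i for each datum
true_c = np.array([1.0, 0.6, -0.2, 0.05])              # "hidden" coefficients
reference = components @ true_c + rng.normal(scale=0.01, size=n_data)

# Optimize the coefficients against the reference data (least squares):
coeffs, *_ = np.linalg.lstsq(components, reference, rcond=None)
mue = np.mean(np.abs(components @ coeffs - reference))  # mean unsigned error
```

The real methods optimize against curated atomization energies, barrier heights, and so on, but the coefficient fit has this same linear-combination structure.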

  12. Formation and physical characteristics of van der Waals molecules, cations, and anions: Estimates of complete basis set values

    Czech Academy of Sciences Publication Activity Database

    Zahradník, Rudolf; Šroubková, Libuše

    2005-01-01

    Roč. 104, č. 1 (2005), s. 52-63 ISSN 0020-7608 Institutional research plan: CEZ:AV0Z40400503 Keywords : intermolecular complexes * van der Waals species * ab initio calculations * complete basis set values * estimates Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.192, year: 2005

  13. OBESITY OF ADULTS LIVING IN THE URBAN SETTINGS AS BASIS FOR DIFFERENT APPLICATIONS OF RECREATIONAL SPORTS

    Directory of Open Access Journals (Sweden)

    Vesko Drašković

    2008-08-01

    Full Text Available According to the World Health Organization's data, obesity is one of the main risk factors for human health, especially in the so-called "mature age", that is, in the forties and fifties of a person's life. There are many causes of obesity; the most common are inadequate or excessive nutrition, low-quality food rich in fats and highly caloric sweeteners, insufficient physical activity (hypokinesia), and also the technical and technological development of the modern world (TV, cell phones, elevators, cars, etc.). The objective of this research is to define the obesity of adults living in urban settings through BMI (body mass index) and to create, on the basis of these findings, the basis for different applications of recreational sports programmes.
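The BMI screening underlying the study above is a one-line formula plus classification cutoffs. A minimal sketch using the standard WHO adult categories (assumed here; the paper may bin its sample differently):

```python
# BMI = weight (kg) / height (m)^2, with the standard WHO adult cutoffs.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def who_category(b):
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

# Example: a 98 kg, 1.78 m adult.
category = who_category(bmi(98.0, 1.78))
```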

  14. Structural basis for inhibition of the histone chaperone activity of SET/TAF-Iβ by cytochrome c.

    Science.gov (United States)

    González-Arzola, Katiuska; Díaz-Moreno, Irene; Cano-González, Ana; Díaz-Quintana, Antonio; Velázquez-Campoy, Adrián; Moreno-Beltrán, Blas; López-Rivas, Abelardo; De la Rosa, Miguel A

    2015-08-11

    Chromatin is pivotal for regulation of the DNA damage process insofar as it influences access to DNA and serves as a DNA repair docking site. Recent works identify histone chaperones as key regulators of damaged chromatin's transcriptional activity. However, understanding how chaperones are modulated during DNA damage response is still challenging. This study reveals that the histone chaperone SET/TAF-Iβ interacts with cytochrome c following DNA damage. Specifically, cytochrome c is shown to be translocated into cell nuclei upon induction of DNA damage, but not upon stimulation of the death receptor or stress-induced pathways. Cytochrome c was found to competitively hinder binding of SET/TAF-Iβ to core histones, thereby locking its histone-binding domains and inhibiting its nucleosome assembly activity. In addition, we have used NMR spectroscopy, calorimetry, mutagenesis, and molecular docking to provide an insight into the structural features of the formation of the complex between cytochrome c and SET/TAF-Iβ. Overall, these findings establish a framework for understanding the molecular basis of cytochrome c-mediated blocking of SET/TAF-Iβ, which subsequently may facilitate the development of new drugs to silence the oncogenic effect of SET/TAF-Iβ's histone chaperone activity.

  15. Lithium photoionization cross-section and dynamic polarizability using square integrable basis sets and correlated wave functions

    International Nuclear Information System (INIS)

    Hollauer, E.; Nascimento, M.A.C.

    1985-01-01

    The photoionization cross-section and dynamic polarizability of the lithium atom are calculated using a discrete basis set to represent both the bound and continuum states of the atom and thereby construct an approximation to the complex dynamic polarizability. From the imaginary part of the complex dynamic polarizability one extracts the photoionization cross-section, and from its real part the dynamic polarizability. The results are in good agreement with experiment and with other, more elaborate calculations (Author) [pt
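The extraction step described above can be illustrated with a toy discrete (square-integrable) spectrum: a damped sum-over-states polarizability whose imaginary part yields the cross-section via σ(ω) = (4πω/c) Im α(ω) in atomic units. The levels, oscillator strengths, and damping below are invented for illustration, not lithium data:

```python
import math

# Toy sum-over-states polarizability from a discrete (L2) spectrum:
#   alpha(w) = sum_n f_n / (w_n^2 - w^2 - i*eta*w)   (atomic units)
#   sigma(w) = (4*pi*w/c) * Im alpha(w)
C_AU = 137.035999       # speed of light in atomic units
levels = [(0.20, 0.8), (0.35, 0.15), (0.50, 0.05)]  # (w_n, f_n), hypothetical
eta = 0.01              # damping mimicking the continuum discretization

def alpha(w):
    return sum(f / complex(wn**2 - w**2, -eta * w) for wn, f in levels)

def sigma(w):
    # Photoionization cross-section from the imaginary part of alpha.
    return 4 * math.pi * w / C_AU * alpha(w).imag
```

Near a discretized level the imaginary part peaks, so the cross-section is largest there; the real part of `alpha` plays the role of the dynamic polarizability.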

  16. Basis Set Convergence of Indirect Spin-Spin Coupling Constants in the Kohn-Sham Limit for Several Small Molecules

    Czech Academy of Sciences Publication Activity Database

    Kupka, T.; Nieradka, M.; Stachów, M.; Pluta, T.; Nowak, P.; Kjaer, H.; Kongsted, J.; Kaminský, Jakub

    2012-01-01

    Roč. 116, č. 14 (2012), s. 3728-3738 ISSN 1089-5639 R&D Projects: GA ČR GPP208/10/P356 Institutional research plan: CEZ:AV0Z40550506 Keywords : consistent basis-sets * density-functional methods * ab-initio calculations * polarization propagator approximation Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.771, year: 2012

  17. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    Science.gov (United States)

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
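The core task in the patent abstract above, generating minimal cut sets of link failures directly from the connectivity diagram, can be sketched by brute force on a small bridge network. The patented search algorithm is far more efficient; this merely shows what a minimal cut set is, on an invented topology:

```python
from itertools import combinations

# Minimal cut sets for all-terminal reliability, considering link failures
# only. The five-link "bridge" network below is an illustrative example.
edges = {"ab": ("a", "b"), "ac": ("a", "c"), "bc": ("b", "c"),
         "bd": ("b", "d"), "cd": ("c", "d")}
nodes = {"a", "b", "c", "d"}

def connected(alive_edges):
    """True if every node is reachable using only the surviving links."""
    seen, stack = {"a"}, ["a"]
    while stack:
        u = stack.pop()
        for e in alive_edges:
            x, y = edges[e]
            for v in ((y,) if u == x else (x,) if u == y else ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return seen == nodes

def minimal_cut_sets():
    cuts = []
    names = sorted(edges)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            s = set(combo)
            if any(c <= s for c in cuts):
                continue                      # not minimal: contains a cut
            if not connected(set(names) - s):
                cuts.append(s)                # failing these links disconnects
    return cuts

cuts = minimal_cut_sets()
```

For this bridge network the search finds two 2-link cuts (isolating the end nodes) and four 3-link cuts, and discards every superset of an already-found cut, which is exactly the minimality condition.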

  18. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...

  19. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
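The IP step of the AILP loop, finding a minimal set of reactions whose deletion disables all previously identified EMs, is a minimum hitting-set problem over the EMs' supports. A brute-force sketch on invented toy EMs (plain enumeration stands in for the integer program; real genome-scale networks need the solver):

```python
from itertools import combinations

# Each elementary mode is represented by its support: the set of reactions
# it uses. These three EMs are invented for illustration.
ems = [{"r1", "r2"}, {"r2", "r3", "r4"}, {"r4", "r5"}]

def min_hitting_set(sets):
    """Smallest reaction set intersecting every EM support (minimum cut set)."""
    universe = sorted(set().union(*sets))
    for k in range(1, len(universe) + 1):
        for combo in combinations(universe, k):
            if all(set(combo) & s for s in sets):
                return set(combo)

cut = min_hitting_set(ems)   # deleting these reactions disables all three EMs
```

In AILP, an LP restricted by such a deletion set then either yields a new, distinct EM or proves infeasible, in which case the deletion set itself is reported as a minimal cut set.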

  20. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the Lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z⁰ partial widths, are discussed in detail. Finally, the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies, and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  1. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Po Jen; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw [Complex Liquids Laboratory, Department of Physics, National Central University, Chungli 320, Taiwan and Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taipei 115, Taiwan (China); Rapallo, Arnaldo [Istituto per lo Studio delle Macromolecole (ISMAC) Consiglio Nazionale delle Ricerche (CNR), via E. Bassini 15, C.A.P 20133 Milano (Italy)

    2014-03-14

    Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

  2. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    International Nuclear Information System (INIS)

    Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo

    2014-01-01

    Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

  3. Relaxation of the functions of the STO-3G and 6-31G* basis sets in a series of molecules isoelectronic with LiF

    International Nuclear Information System (INIS)

    Ermakov, A.I.; Belousov, V.V.

    2007-01-01

    The effect of relaxing the functions of the STO-3G and 6-31G* basis sets (BS) on their balance is considered in the series of isoelectronic molecules LiF, BeO, BN, and C2. The parameters of the basis functions in the molecules (the exponential scale factors of the basis functions, the orbital exponents of the Gaussian primitives, and their contraction coefficients) are determined from the criterion of minimum energy in unrestricted Hartree-Fock (UHF) calculations by direct optimization of the parameters with the simplex and Rosenbrock methods. Several optimization schemes, differing in the number of varied parameters, are used. A relationship between the basis-function parameters of the sets considered is established through the mean values of the Gaussian exponents. The effects of relaxation on the total energy and on the relative errors in calculated interatomic distances, normal-mode frequencies, dissociation energies, and other molecular properties are examined. The change in total energy upon relaxation of the basis functions (RBF) of STO-3G and 6-31G* amounts to 1100 and 80 kJ/mol, respectively, and must be taken into account when estimating energetic characteristics, especially for systems with highly polar chemical bonds. Relaxation of the STO-3G basis set improves the description of molecular properties in practically all cases considered, whereas relaxation of 6-31G* has little effect on its balance [ru

  4. Numerical Aspects of Atomic Physics: Helium Basis Sets and Matrix Diagonalization

    Science.gov (United States)

    Jentschura, Ulrich; Noble, Jonathan

    2014-03-01

    We present a matrix diagonalization algorithm for complex symmetric matrices, which can be used in order to determine the resonance energies of auto-ionizing states of comparatively simple quantum many-body systems such as helium. The algorithm is based on multi-precision arithmetic and proceeds via a tridiagonalization of the complex symmetric (not necessarily Hermitian) input matrix using generalized Householder transformations. Example calculations involving so-called PT-symmetric quantum systems lead to reference values which pertain to the imaginary cubic perturbation (the imaginary cubic anharmonic oscillator). We then proceed to novel basis sets for the helium atom and present results for Bethe logarithms in hydrogen and helium, obtained using the enhanced numerical techniques. Some intricacies of "canned" algorithms such as those used in LAPACK will be discussed. Our algorithm, for complex symmetric matrices such as those describing cubic resonances after complex scaling, is faster than LAPACK's built-in routines for specific classes of input matrices. It also offers flexibility in terms of the calculation of the so-called implicit shift, which is used in order to "pivot" the system toward convergence to diagonal form. We conclude with a wider overview.
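The distinction the abstract draws, complex symmetric rather than Hermitian, can be seen with NumPy's general eigensolver, which handles such matrices but, unlike the specialized Householder tridiagonalization described above, does not exploit the symmetry. The 3x3 matrix below is an invented toy, not a real complex-scaled Hamiltonian:

```python
import numpy as np

# A complex symmetric matrix (A == A.T but A != A.conj().T), of the kind
# produced by complex scaling of a resonance Hamiltonian.
A = np.array([[1.0 + 0.10j, 0.30j,        0.0],
              [0.30j,       2.0 - 0.20j,  0.5],
              [0.0,         0.5,          3.0 + 0.05j]])

assert np.allclose(A, A.T)                 # symmetric...
assert not np.allclose(A, A.conj().T)      # ...but not Hermitian

# The general (non-symmetric) eigensolver must be used; eigh would be wrong.
evals, evecs = np.linalg.eig(A)
# Eigenvalues are generally complex; after complex scaling, their imaginary
# parts encode resonance widths.
```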

  5. Optimal Piecewise Linear Basis Functions in Two Dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Brooks III, E D; Szoke, A

    2009-01-26

    We use a variational approach to optimize the center point coefficients associated with the piecewise linear basis functions introduced by Stone and Adams [1], for polygonal zones in two Cartesian dimensions. Our strategy provides optimal center point coefficients, as a function of the location of the center point, by minimizing the error induced when the basis function interpolation is used for the solution of the time independent diffusion equation within the polygonal zone. By using optimal center point coefficients, one expects to minimize the errors that occur when these basis functions are used to discretize diffusion equations, or transport equations in optically thick zones (where they approach the solution of the diffusion equation). Our optimal center point coefficients satisfy the requirements placed upon the basis functions for any location of the center point. We also find that the location of the center point can be optimized, but this requires numerical calculations. Curiously, the optimum center point location is independent of the values of the dependent variable on the corners only for quadrilaterals.

  6. Stages of the recognition and roentgenological semiotics of minimal peripheric lung cancer

    International Nuclear Information System (INIS)

    Lindenbraten, L.D.

    1987-01-01

    The system of diagnosis of peripheral cancer should be aimed at its detection at stage T1, i.e. at the detection of a tumor whose shadow on a 70x70 mm radiogram was within 0.5-1.5 cm, and on a plain chest X-ray it was within. Fluorographic and roentgenographic semiotics of minimal peripheral cancer are considered in 40 cases. It was pointed out that the diagnosis of early stages of tumor development could be made only by improving the organizational basis of mass screening by setting up consultative cancer pulmonological commissions. Physicians should be aware of minimal changes in the pulmonary tissue

  7. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.

  8. Simulations of smog-chamber experiments using the two-dimensional volatility basis set: linear oxygenated precursors.

    Science.gov (United States)

    Chacon-Madrid, Heber J; Murphy, Benjamin N; Pandis, Spyros N; Donahue, Neil M

    2012-10-16

    We use a two-dimensional volatility basis set (2D-VBS) box model to simulate secondary organic aerosol (SOA) mass yields of linear oxygenated molecules: n-tridecanal, 2- and 7-tridecanone, 2- and 7-tridecanol, and n-pentadecane. A hybrid model with explicit, a priori treatment of the first-generation products for each precursor molecule, followed by a generic 2D-VBS mechanism for later-generation chemistry, results in excellent model-measurement agreement. This strongly confirms that the 2D-VBS mechanism is a predictive tool for SOA modeling but also suggests that certain important first-generation products for major primary SOA precursors should be treated explicitly for optimal SOA predictions.

  9. Systems biology perspectives on minimal and simpler cells.

    Science.gov (United States)

    Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel

    2014-09-01

    The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  10. Systems Biology Perspectives on Minimal and Simpler Cells

    Science.gov (United States)

    Xavier, Joana C.; Patil, Kiran Raosaheb

    2014-01-01

    SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563

  11. Specialized minimal PDFs for optimized LHC calculations

    CERN Document Server

    Carrazza, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-04-15

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically on their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top quark pair, and electroweak gauge boson physics, and determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4 and 11 Hessian eigenvectors respectively are enough to fully describe the corresp...

  12. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  13. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  14. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f: R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k: D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
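As an illustration of the SUMMA template, the sketch below instantiates the penalty-function special case on a toy one-dimensional problem. The function names, the ternary-search inner solver and the doubling penalty schedule are our own illustrative choices, not taken from the paper:

```python
def summa_penalty(f, penalty, x_lo, x_hi, n_outer=40):
    """SUMMA-style sequential unconstrained minimization: at step k,
    minimize G_k(x) = f(x) + mu_k * penalty(x) (penalty-function case)."""
    def argmin(g, lo, hi, iters=200):
        # ternary search for the minimizer of a unimodal function on [lo, hi]
        for _ in range(iters):
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            if g(m1) < g(m2):
                hi = m2
            else:
                lo = m1
        return (lo + hi) / 2

    xs = []
    for k in range(1, n_outer + 1):
        mu = 2.0 ** k  # increasing penalty weight mu_k
        xs.append(argmin(lambda x: f(x) + mu * penalty(x), x_lo, x_hi))
    return xs

# Toy problem: minimize (x - 2)^2 subject to x >= 3; constrained minimizer is x = 3.
f = lambda x: (x - 2.0) ** 2
pen = lambda x: max(0.0, 3.0 - x) ** 2  # zero on the feasible set D = [3, inf)
traj = summa_penalty(f, pen, 0.0, 10.0)
print(round(traj[-1], 3))  # → 3.0
```

The iterates x^k approach the constrained minimizer from the infeasible side as the penalty weight grows, which is the behavior the SUMMA convergence result formalizes.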

  15. Learn with SAT to Minimize Büchi Automata

    Directory of Open Access Journals (Sweden)

    Stephan Barth

    2012-10-01

    We describe a minimization procedure for nondeterministic Büchi automata (NBA). For an automaton A, another automaton A_min with the minimal number of states is learned with the help of a SAT solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively increased. Thus, our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher" in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples. We use complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A. Failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on quite a few larger automata. We successfully ran the minimization on over ten thousand automata with mostly up to ten states, including the complements of all possible automata with two states and alphabet size three, and we discuss results and runtimes; individual examples had over 100 states.

  16. Electronic structure of crystalline uranium nitrides UN, U2N3 and UN2: LCAO calculations with the basis set optimization

    International Nuclear Information System (INIS)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V

    2008-01-01

    The results of LCAO DFT calculations of lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U2N3 and UN2 are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the relativistic effective small-core potential for the uranium atom by the Stuttgart-Cologne group (60 electrons in the core). The calculations include optimization of the U atom basis set. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data, while the change in the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in LCAO calculations of both the UN and U2N3 crystals; the UN2 crystal has a semiconducting nature.

  17. Minimal spanning trees, filaments and galaxy clustering

    International Nuclear Information System (INIS)

    Barrow, J.D.; Sonoda, D.H.

    1985-01-01

    A graph theoretical technique for assessing intrinsic patterns in point data sets is described. A unique construction, the minimal spanning tree, can be associated with any point data set given all the inter-point separations. This construction enables the skeletal pattern of galaxy clustering to be singled out in quantitative fashion and differs from other statistics applied to these data sets. This technique is described and applied to two- and three-dimensional distributions of galaxies and also to comparable random samples and numerical simulations. The observed CfA and Zwicky data exhibit characteristic distributions of edge-lengths in their minimal spanning trees which are distinct from those found in random samples. (author)
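The edge-length distribution of a minimal spanning tree, the statistic this record applies to galaxy samples, can be computed for a toy two-dimensional point set with a few lines of Python. This is a generic Kruskal sketch on random points, not the authors' pipeline or data:

```python
import math
import random

def mst_edge_lengths(points):
    """Kruskal's algorithm on the complete graph of a 2-D point set;
    returns the edge lengths of the minimal spanning tree."""
    n = len(points)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))  # union-find forest

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    lengths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # edge joins two components: it belongs to the MST
            parent[ri] = rj
            lengths.append(w)
            if len(lengths) == n - 1:
                break
    return lengths

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100)]
ls = mst_edge_lengths(pts)
print(len(ls), round(sum(ls) / len(ls), 3))  # 99 edges; prints the mean edge length
```

Comparing the histogram of `ls` for an observed sample against the same histogram for uniform random points is the essence of the test described in the abstract: clustered data produce an excess of short edges.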

  18. On the choice of an optimal value-set of qualitative attributes for information retrieval in databases

    International Nuclear Information System (INIS)

    Ryjov, A.; Loginov, D.

    1994-01-01

    The problem of choosing an optimal set of significances of qualitative attributes for information retrieval in databases is addressed. Given a particular database, a set of significances is called optimal if it minimizes the losses of information and the information noise for information retrieval in the database. Obviously, such a set of significances depends on the statistical parameters of the database. Software is described which, on the basis of the statistical parameters of a given database, calculates the losses of information and the information noise for arbitrary sets of significances of qualitative attributes. The software also permits comparison of various sets of significances of qualitative attributes and selection of the optimal set of significances.

  19. Calculations of wavefunctions and energies of electron system in Coulomb potential by variational method without a basis set

    International Nuclear Information System (INIS)

    Bykov, V.P.; Gerasimov, A.V.

    1992-08-01

    A new variational method without a basis set for calculating the eigenvalues and eigenfunctions of Hamiltonians is suggested. The extension of this method to Coulomb potentials is given. Calculations of the energy and charge distribution in the two-electron system are made for different values of the nuclear charge Z. It is shown that at small Z the Coulomb forces disintegrate the electron cloud into two clots. (author). 3 refs, 4 figs, 1 tab

  20. Setting clear expectations for safety basis development

    International Nuclear Information System (INIS)

    MORENO, M.R.

    2003-01-01

    DOE-RL has set clear expectations for a cost-effective approach to achieving compliance with the Nuclear Safety Management requirements (10 CFR 830, Nuclear Safety Rule) which will ensure long-term benefit to Hanford. To facilitate implementation of these expectations, tools were developed to streamline and standardize safety analysis and safety document development, resulting in a shorter and more predictable DOE approval cycle. A Hanford Safety Analysis and Risk Assessment Handbook (SARAH) was issued to standardize methodologies for the development of safety analyses. A Microsoft Excel spreadsheet (RADIDOSE) was issued for the evaluation of radiological consequences of accident scenarios often postulated for Hanford. A standard Site Documented Safety Analysis (DSA) detailing the safety management programs was issued for use as a means of compliance with a majority of 3009 Standard chapters. An in-process review was developed between DOE and the Contractor to facilitate DOE approval and provide early course correction. As a result of setting expectations and providing safety analysis tools, the four Hanford Site waste management nuclear facilities were able to integrate into one Master Waste Management Documented Safety Analysis (WM-DSA).

  1. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    determine the mean setting so as to minimise the total loss through under-limit complaints and loss of sales and goodwill as well as over-limit losses through excess materials and rework costs. Models are developed for the two types of setting of the mean so that the minimum costs of losses are achieved. Also, a model is ...

  2. Specialized minimal PDFs for optimized LHC calculations

    International Nuclear Information System (INIS)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically as regards their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top-quark pair, and electroweak gauge boson physics, and we determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4, and 11 Hessian eigenvectors, respectively, are enough to fully describe the corresponding processes. (orig.)

  3. Vibrational frequency scaling factors for correlation consistent basis sets and the methods CC2 and MP2 and their spin-scaled SCS and SOS variants

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)

    2014-11-21

    We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2), with and without spin-component scaling (SCS) or spin-opposite scaling (SOS). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to a slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order and their spin-component scaled variants.

  4. Basis set expansion for inverse problems in plasma diagnostic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jones, B.; Ruiz, C. L. [Sandia National Laboratories, PO Box 5800, Albuquerque, New Mexico 87185 (United States)

    2013-07-15

    A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20–25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.

  5. Novel approach for tomographic reconstruction of gas concentration distributions in air: Use of smooth basis functions and simulated annealing

    Science.gov (United States)

    Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.

    Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
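To make the SBFM idea concrete, here is a heavily simplified, one-dimensional Python sketch: a single Gaussian basis function is fitted to simulated ray-integral data by simulated annealing. All names, the annealing schedule and the toy geometry are our own assumptions; the actual method superposes bivariate Gaussians and works on OP-FTIR path-integral measurements:

```python
import math
import random

def ray_integral(params, lo, hi, n=50):
    """Numerical line integral of a 1-D Gaussian basis function over [lo, hi]."""
    A, mu, s = params
    h = (hi - lo) / n
    return sum(A * math.exp(-((lo + (i + 0.5) * h - mu) ** 2) / (2 * s * s)) * h
               for i in range(n))

def sbfm_fit(measured, intervals, t0=1.0, steps=3000, seed=2):
    """Smooth-basis-function-minimization sketch: simulated annealing over the
    (amplitude, centre, width) of one Gaussian so that its ray integrals
    match the measured data in the least-squares sense."""
    rng = random.Random(seed)
    cost = lambda p: sum((ray_integral(p, a, b) - m) ** 2
                         for (a, b), m in zip(intervals, measured))
    cur = [1.0, 0.0, 1.0]          # initial guess
    cur_c = cost(cur)
    best, best_c = cur[:], cur_c
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3       # cooling schedule
        cand = [p + rng.gauss(0, 0.1) for p in cur]
        cand[2] = max(cand[2], 0.05)             # keep the width positive
        c = cost(cand)
        # accept improvements always, uphill moves with Boltzmann probability
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
    return best, best_c

intervals = [(x, x + 0.5) for x in [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]]
truth = (2.0, 0.3, 0.8)
measured = [ray_integral(truth, a, b) for a, b in intervals]
params, resid = sbfm_fit(measured, intervals)
print(round(resid, 4))  # best-so-far residual; never worse than the initial guess
```

The real method applies the same accept/reject loop to many bivariate Gaussians at once, which keeps the reconstruction smooth by construction rather than by regularizing a pixel grid as ART does.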

  6. Review of Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    S. Fukano, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...

  7. Minimization of rad waste production in NPP Dukovany

    International Nuclear Information System (INIS)

    Kulovany, J.

    2001-01-01

    A whole range of measures connected with the minimization of radioactive waste has been taken in the power plant, and these will lead to the set goals. During the introduction of minimization measures, precedence is given to procedures that prevent possible endangering of the operation. Further, economically undemanding procedures that bring about minimization in an effective way are implemented. In accordance with the EMS principles, it can be expected that minimizing measures will also be implemented in areas where their greatest contribution will be to the environment.

  8. Distributed Submodular Minimization And Motion Coordination Over Discrete State Space

    KAUST Repository

    Jaleel, Hassan

    2017-09-21

    Submodular set-functions are extensively used in large-scale combinatorial optimization problems arising in complex networks and machine learning. While there has been significant interest in distributed formulations of convex optimization, distributed minimization of submodular functions has not received significant attention. Thus, our main contribution is a framework for minimizing submodular functions in a distributed manner. The proposed framework is based on the ideas of the Lovász extension of submodular functions and distributed optimization of convex functions. The framework exploits a fundamental property of submodularity: the Lovász extension of a submodular function is a convex function and can be computed efficiently. Moreover, a minimizer of a submodular function can be computed by computing the minimizer of its Lovász extension. In the proposed framework, we employ a consensus-based distributed optimization algorithm to minimize set-valued submodular functions as well as general submodular functions defined over set products. We also identify distributed motion coordination in multiagent systems as a new application domain for submodular function minimization. To demonstrate the key ideas of the proposed framework, we select a complex setup of the capture-the-flag game, which offers a variety of challenges relevant to multiagent systems. We formulate the problem as a submodular minimization problem and verify through extensive simulations that the proposed framework results in feasible policies for the agents.
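The key property quoted above, that the Lovász extension of a submodular set function is convex and agrees with the function on indicator vectors, can be checked with a short sketch (our own illustrative code, not the paper's framework):

```python
def lovasz_extension(F, x):
    """Lovasz extension of a set function F on ground set {0, ..., n-1},
    evaluated at x via the standard sorting (Choquet) formula.
    Assumes F(frozenset()) == 0."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])  # coordinates in decreasing order
    val, S, prev = 0.0, set(), 0.0
    for i in order:
        S.add(i)
        cur = F(frozenset(S))
        val += x[i] * (cur - prev)  # telescoping weights
        prev = cur
    return val

# Example: the cut function of a triangle graph, a classic submodular function.
edges = [(0, 1), (1, 2), (0, 2)]
F = lambda S: sum((a in S) != (b in S) for a, b in edges)

print(lovasz_extension(F, [1.0, 0.0, 0.0]))  # → 2.0, which equals F({0})
print(lovasz_extension(F, [1.0, 1.0, 1.0]))  # → 0.0, which equals F({0,1,2})
```

Because this extension is convex, any convex-optimization routine (including the consensus-based distributed ones the paper employs) can minimize it over [0,1]^n, and rounding a minimizer yields a minimizing set.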

  9. Improving the performance of minimizers and winnowing schemes.

    Science.gov (United States)

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worse behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
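The effect of k-mer ordering that the paper analyzes can be reproduced in miniature with a plain-Python minimizer scheme. The random-rank ordering below merely stands in for a hash-based or universal-hitting-set-based order; this is illustrative code, not the authors' implementation:

```python
import random

def minimizers(seq, k, w, order):
    """Select one k-mer per window of w consecutive k-mers: the one that is
    smallest under `order`. Returns the set of selected start positions."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = set()
    for i in range(len(kmers) - w + 1):
        window = range(i, i + w)  # indices of the k-mers in this window
        picked.add(min(window, key=lambda j: order(kmers[j])))
    return picked

random.seed(1)
seq = "".join(random.choice("ACGT") for _ in range(10000))

# Lexicographic ordering (the problematic default).
lex = minimizers(seq, k=7, w=10, order=lambda s: s)

# Randomized ordering: each distinct k-mer gets a fixed random rank.
rnd_rank = {}
rnd = minimizers(seq, k=7, w=10,
                 order=lambda s: rnd_rank.setdefault(s, random.random()))

print(len(lex), len(rnd))  # the randomized order typically selects fewer k-mers
```

The density (selected positions divided by sequence length) is the quantity the paper studies; swapping the `order` argument is exactly the intervention it recommends to software developers.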

  10. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    Science.gov (United States)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  11. Basis set expansion for inverse problems in plasma diagnostic analysis

    Science.gov (United States)

    Jones, B.; Ruiz, C. L.

    2013-07-01

    A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002)], 10.1063/1.1482156 is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.

  12. Will the changes proposed to the conceptual framework’s definitions and recognition criteria provide a better basis for the IASB standard setting?

    NARCIS (Netherlands)

    Brouwer, A.; Hoogendoorn, M.; Naarding, E.

    2015-01-01

    In this paper we evaluate the International Accounting Standards Board’s (IASB) efforts, in a discussion paper (DP) of 2013, to develop a new conceptual framework (CF) in the light of its stated ambition to establish a robust and consistent basis for future standard setting, thereby guiding standard

  13. Harm minimization among teenage drinkers

    DEFF Research Database (Denmark)

    Jørgensen, Morten Hulvej; Curtis, Tine; Christensen, Pia Haudrup

    2007-01-01

    AIM: To examine strategies of harm minimization employed by teenage drinkers. DESIGN, SETTING AND PARTICIPANTS: Two periods of ethnographic fieldwork were conducted in a rural Danish community of approximately 2000 inhabitants. The fieldwork included 50 days of participant observation among 13....... In regulating the social context of drinking they relied on their personal experiences more than on formalized knowledge about alcohol and harm, which they had learned from prevention campaigns and educational programmes. CONCLUSIONS: In this study we found that teenagers may help each other to minimize alcohol...

  14. Free time minimizers for the three-body problem

    Science.gov (United States)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañe in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 1980) which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios are those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  15. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers. (orig.)

  16. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-01-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). The Waste Minimization Policy field has undergone continuous change since its formal inception in the 1984 HSWA legislation. The first LLNL WMPP, Revision A, is dated March 1985. A series of informal revisions were made on approximately a semi-annual basis. This Revision 2 is the third formal issuance of the WMPP document. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this new policy, new guidance on the elements of a Waste Minimization Program was issued to hazardous waste generators. In response to these policies, DOE has revised and issued implementation guidance for DOE Order 5400.1, Waste Minimization Plan and Waste Reduction Reporting of DOE Hazardous, Radioactive, and Radioactive Mixed Wastes, final draft January 1990. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements. 3 figs., 4 tabs

  17. Sea ice in the Baltic Sea - revisiting BASIS ice, a historical data set covering the period 1960/1961-1978/1979

    Science.gov (United States)

    Löptien, U.; Dietze, H.

    2014-06-01

    The Baltic Sea is a seasonally ice-covered, marginal sea, situated in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).

  18. Sea ice in the Baltic Sea - revisiting BASIS ice, a historical data set covering the period 1960/1961-1978/1979

    Science.gov (United States)

    Löptien, U.; Dietze, H.

    2014-12-01

    The Baltic Sea is a seasonally ice-covered, marginal sea in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards and all ice information is encoded by five digits. This makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provide easy-to-access unique historical reference material for sea ice in the Baltic Sea. In addition we provide statistics showcasing the data quality. The website http://www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).
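    The abstracts do not document the five-digit BASIS encoding itself, but the post-processing step they describe (converting each code into standard ice quantities) can be illustrated with a small decoder. The field names and digit meanings below are purely hypothetical placeholders, not the actual BASIS code semantics:

```python
def decode_ice_code(code):
    """Illustrative decoder for a five-digit ice code.

    The real BASIS digit semantics are NOT given in the abstract; every
    field name and digit assignment below is an assumption made only to
    show the shape of such a conversion."""
    s = f"{int(code):05d}"  # zero-pad so each digit has a fixed position
    return {
        "ice_type": int(s[0]),         # assumed: ice-type class
        "concentration": int(s[1]),    # assumed: concentration class
        "thickness_class": int(s[2]),  # assumed: thickness class
        "ridging": int(s[3]),          # assumed: ridging/deformation class
        "quality_flag": int(s[4]),     # assumed: data-quality flag
    }
```

In a real conversion each decoded record would then be mapped onto gridded NetCDF variables, one per ice quantity.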

  19. Subject-specific cardiovascular system model-based identification and diagnosis of septic shock with a minimally invasive data set: animal experiments and proof of concept

    International Nuclear Information System (INIS)

    Geoffrey Chase, J; Starfinger, Christina; Hann, Christopher E; Lambermont, Bernard; Ghuysen, Alexandre; Kolh, Philippe; Dauby, Pierre C; Desaive, Thomas; Shaw, Geoffrey M

    2011-01-01

    A cardiovascular system (CVS) model and parameter identification method have previously been validated for identifying different cardiac and circulatory dysfunctions in simulation and using porcine models of pulmonary embolism, hypovolemia with PEEP titrations and induced endotoxic shock. However, these studies required both left and right heart catheters to collect the data required for subject-specific monitoring and diagnosis: a maximally invasive data set in a critical care setting, although one that does occur in practice. Hence, use of this model-based diagnostic would require significant additional invasive sensors for some subjects, which is unacceptable in some, if not all, cases. The main goal of this study is to prove the concept of using only measurements from one side of the heart (right) in a 'minimal' data set to identify an effective patient-specific model that can capture key clinical trends in endotoxic shock. This research extends existing methods to a reduced and minimal data set requiring only a single catheter, reducing the risk of infection and other complications, a very common, typical situation in critical care patients, particularly after cardiac surgery. The extended methods and the assumptions on which they are founded are developed and presented in a case study of the identification of pig-specific parameters in an animal model of induced endotoxic shock. This case study is used to define the impact of this minimal data set on the quality and accuracy of the model application for monitoring, detecting and diagnosing septic shock. Six anesthetized healthy pigs weighing 20-30 kg received a 0.5 mg kg⁻¹ endotoxin infusion over a period of 30 min from T0 to T30. For this research, only right heart measurements were obtained. Errors for the identified model are within 8% when the model is identified from data, re-simulated and then compared to the experimentally measured data, including measurements not used in the

  20. Basis set effects on the energy and hardness profiles of the ...

    Indian Academy of Sciences (India)

    Unknown

    Keywords: maximum hardness principle (MHP); spurious stationary points; hydrogen fluoride dimer.

  1. One-dimensional Gromov minimal filling problem

    International Nuclear Information System (INIS)

    Ivanov, Alexandr O; Tuzhilin, Alexey A

    2012-01-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  2. Minimal genera of open 4-manifolds

    OpenAIRE

    Gompf, Robert E.

    2013-01-01

    We study exotic smoothings of open 4-manifolds using the minimal genus function and its analog for end homology. While traditional techniques in open 4-manifold smoothing theory give no control of minimal genera, we make progress by using the adjunction inequality for Stein surfaces. Smoothings can be constructed with much more control of these genus functions than the compact setting seems to allow. As an application, we expand the range of 4-manifolds known to have exotic smoothings (up to ...

  3. Theories of minimalism in architecture: When prologue becomes palimpsest

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2014-01-01

    Full Text Available This paper examines the modus and conditions of constituting and establishing the architectural discourse on minimalism. One of the key topics in this discourse is the historical line of development and the analysis of theoretical influences, which comprise connections of recent minimalism with theorizations of various minimal forms and concepts, architectural and artistic, from the past. The paper particularly discusses those theoretical relations which link minimalism in architecture with its artistic nominal counterpart, minimal art. These are relations founded on interpretative models of self-referentiality, phenomenological experience and contextualism which, superficially observed, are common to both minimalist discourses, artistic and architectural. In this constellation, certain relations on the historical line of minimalism in architecture appear questionable, while others are overlooked. Specifically, postmodern fundamentalism is the architectural direction: 1) in which these three interpretations also existed; 2) from which architectural theorists retroactively appropriated many architects, proclaiming them minimalists; 3) which established the same relations with modern and postmodern theoretical and socio-historical contexts that minimalism would later establish. In spite of this, the theoretical field of postmodern fundamentalism is surprisingly neglected in the discourse of minimalism in architecture. Instead of being understood as a kind of prologue to minimalism in architecture, postmodern fundamentalism becomes an erased palimpsest over which a different history of minimalism is written, a history in which minimal art occupies the central place.

  4. Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo

    Science.gov (United States)

    Krogel, Jaron T.; Reboredo, Fernando A.

    2018-01-01

    Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this work, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. For production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
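    The per-orbital kinetic energies used for classification can be computed directly from the plane-wave expansion coefficients of each orbital. A minimal sketch, in which the function names and the simple threshold rule are illustrative assumptions rather than the paper's exact partitioning scheme:

```python
import numpy as np

def orbital_kinetic_energies(coeffs, gvecs):
    """Kinetic energy <phi_i| -1/2 grad^2 |phi_i> of each orbital (atomic
    units), given plane-wave coefficients coeffs[i, g] and G-vectors gvecs[g]."""
    g2 = np.sum(np.asarray(gvecs, float) ** 2, axis=1)  # |G|^2 per plane wave
    return 0.5 * np.sum(np.abs(coeffs) ** 2 * g2, axis=1)

def partition_by_kinetic_energy(coeffs, gvecs, threshold):
    """Split orbital indices into a high-KE group (semi-core-like, kept on a
    fine spline mesh) and a low-KE group (valence-like, safe to spline more
    coarsely). A single threshold is an illustrative choice only."""
    ke = orbital_kinetic_energies(coeffs, gvecs)
    return np.where(ke >= threshold)[0], np.where(ke < threshold)[0]
```

The memory saving then comes from storing the low-KE group on a coarser B-spline mesh.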

  5. An electroweak basis for neutrinoless double β decay

    Science.gov (United States)

    Graesser, Michael L.

    2017-08-01

    A discovery of neutrinoless double-β decay would be profound, providing the first direct experimental evidence of ΔL = 2 lepton number violating processes. While a natural explanation is provided by an effective Majorana neutrino mass, other new physics interpretations should be carefully evaluated. At low energies such new physics could manifest itself in the form of color and SU(2)_L × U(1)_Y invariant higher dimension operators. Here we determine a complete set of electroweak invariant dimension-9 operators, and our analysis supersedes those that only impose U(1)_em invariance. Imposing electroweak invariance implies: 1) a significantly reduced set of leading order operators compared to only imposing U(1)_em invariance; and 2) other collider signatures. Prior to imposing electroweak invariance we find a minimal basis of 24 dimension-9 operators, which is reduced to 11 electroweak invariant operators at leading order in the expansion in the Higgs vacuum expectation value. We set up a systematic analysis of the hadronic realization of the 4-quark operators using chiral perturbation theory, and apply it to determine which of these operators have long-distance pion enhancements at leading order in the chiral expansion. We also find, at dimension-11 and dimension-13, the electroweak invariant operators that after electroweak symmetry breaking produce the remaining ΔL = 2 operators that would appear at dimension-9 if only U(1)_em is imposed.

  6. New STO(II)-3Gmag family basis sets for the calculation of molecular magnetic properties

    Directory of Open Access Journals (Sweden)

    Karina Kapusta

    2015-10-01

    Full Text Available An efficient approach to constructing physically justified STO(II)-3Gmag family basis sets for the calculation of molecular magnetic properties is proposed. The construction procedure takes into account second-order perturbation theory for the magnetic-field case. The analytical form of the correction functions is obtained, using a closed representation of the Green functions, from the solution of the nonhomogeneous Schrödinger equation for the model problem of a one-electron atom in an external uniform magnetic field. Performance is evaluated in DFT-level calculations carried out with a number of functionals. Test calculations of magnetic susceptibilities and 1H nuclear magnetic shielding tensors demonstrate good agreement between the calculated values and experimental data.

  7. Stabilization of a locally minimal forest

    Science.gov (United States)

    Ivanov, A. O.; Mel'nikova, A. E.; Tuzhilin, A. A.

    2014-03-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles.

  8. Simulating Metabolite Basis Sets for in vivo MRS Quantification; Incorporating details of the PRESS Pulse Sequence by means of the GAMMA C++ library

    NARCIS (Netherlands)

    Van der Veen, J.W.; Van Ormondt, D.; De Beer, R.

    2012-01-01

    In this work we report on generating/using simulated metabolite basis sets for the quantification of in vivo MRS signals, assuming that they have been acquired by using the PRESS pulse sequence. To that end we have employed the classes and functions of the GAMMA C++ library. By using several

  9. Excited state nuclear forces from the Tamm-Dancoff approximation to time-dependent density functional theory within the plane wave basis set framework

    Science.gov (United States)

    Hutter, Jürg

    2003-03-01

    An efficient formulation of time-dependent linear response density functional theory for use within the plane wave basis set framework is presented. The method avoids the transformation of the Kohn-Sham matrix into the canonical basis and references virtual orbitals only through a projection operator. Using a Lagrangian formulation, nuclear derivatives of excited state energies within the Tamm-Dancoff approximation are derived. The algorithms were implemented into a pseudopotential/plane-wave code and applied to the calculation of adiabatic excitation energies, optimized geometries and vibrational frequencies of three low-lying states of formaldehyde. Overall good agreement with other time-dependent density functional calculations, multireference configuration interaction calculations and experimental data was found.

  10. Development of a waste minimization plan for a Department of Energy remedial action program: Ideas for minimizing waste in remediation scenarios

    International Nuclear Information System (INIS)

    Hubbard, Linda M.; Galen, Glen R.

    1992-01-01

    Waste minimization has become an important consideration in the management of hazardous waste because of regulatory as well as cost considerations. Waste minimization techniques are often process specific or industry specific and generally are not applicable to site remediation activities. This paper will examine ways in which waste can be minimized in a remediation setting such as the U.S. Department of Energy's Formerly Utilized Sites Remedial Action Program, where the bulk of the waste produced results from remediating existing contamination, not from generating new waste. (author)

  11. On the Convergence of the ccJ-pVXZ and pcJ-n Basis Sets in CCSD Calculations of Nuclear Spin-Spin Coupling Constants

    DEFF Research Database (Denmark)

    Faber, Rasmus; Sauer, Stephan P. A.

    2018-01-01

    The basis set convergence of nuclear spin-spin coupling constants (SSCC) calculated at the coupled cluster singles and doubles (CCSD) level has been investigated for ten difficult molecules. Eight of the molecules contain fluorine atoms and nine contain double or triple bonds. Results obtained...

  12. Monitoring of German Fertility: Estimation of Monthly and Yearly Total Fertility Rates on the Basis of Preliminary Monthly Data

    Directory of Open Access Journals (Sweden)

    Gabriele Doblhammer

    2011-02-01

    Full Text Available This paper introduces a set of methods for estimating fertility indicators in the absence of recent and short-term birth statistics. For Germany, we propose a set of straightforward methods that allow for the computation of monthly and yearly total fertility rates (mTFR) on the basis of preliminary monthly data, including a confidence interval. The method for estimating the most current fertility rates can be applied when no information on the age structure and the number of women exposed to childbearing is available. The methods introduced in this study are useful for calculating monthly birth indicators, with minimal requirements for data quality and statistical effort. In addition, we suggest an approach for projecting the yearly TFR based on preliminary monthly information up to June.
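    As one illustration of the kind of estimate described here, a preliminary monthly birth count can be scaled against a reference year when no age-specific exposure data are available. This is a hedged sketch with an assumed Poisson-based interval, not the paper's actual estimator:

```python
import numpy as np

def monthly_tfr_estimate(births_month, days_in_month,
                         births_ref_year, tfr_ref_year, z=1.96):
    """Rough monthly TFR without age-specific exposure data: scale a
    reference year's TFR by the ratio of the observed monthly birth count
    to the (day-adjusted) expected monthly count of the reference year.
    Returns (estimate, lower, upper). Formula and interval are illustrative
    assumptions, not the published method."""
    avg_daily_ref = births_ref_year / 365.0
    expected_month = avg_daily_ref * days_in_month
    est = tfr_ref_year * births_month / expected_month
    # crude confidence interval from Poisson variation of the birth count
    se = tfr_ref_year * np.sqrt(births_month) / expected_month
    return est, est - z * se, est + z * se
```

A month matching the reference year's day-adjusted average thus reproduces the reference TFR, with the interval shrinking as the count grows.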

  13. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

    A commonly used solution to the local kernel nonparametric regression problem is polynomial regression. In this study, we demonstrate the estimator and its properties, derived via maximum likelihood, for a non-polynomial basis such as B-splines replacing the polynomial basis. This estimator allows flexibility in the selection of a bandwidth and knots. The best estimator is selected by finding an optimal bandwidth and knot set through minimizing the well-known generalized cross-validation (GCV) function.
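    A minimal sketch of the idea: build a B-spline design matrix by the Cox-de Boor recursion, fit by least squares, and pick the number of interior knots by minimizing GCV. The clamped knot vector and the GCV form n·RSS/(n − p)² are standard textbook choices, not necessarily the authors' exact formulation:

```python
import numpy as np

def bspline_design_matrix(x, interior_knots, degree=3, lo=0.0, hi=1.0):
    """B-spline basis via the Cox-de Boor recursion on a clamped knot
    vector over [lo, hi]."""
    x = np.asarray(x, dtype=float)
    t = np.concatenate([[lo] * (degree + 1),
                        np.asarray(interior_knots, float),
                        [hi] * (degree + 1)])
    # degree-0 (piecewise-constant) basis on half-open knot intervals
    B = np.zeros((x.size, len(t) - 1))
    for i in range(len(t) - 1):
        B[:, i] = ((x >= t[i]) & (x < t[i + 1])).astype(float)
    # include the right endpoint in the last non-empty interval
    last = np.max(np.where(t[:-1] < t[1:])[0])
    B[x == hi, last] = 1.0
    # Cox-de Boor recursion up to the requested degree
    for d in range(1, degree + 1):
        Bn = np.zeros((x.size, len(t) - 1 - d))
        for i in range(len(t) - 1 - d):
            term = np.zeros(x.size)
            if t[i + d] > t[i]:
                term += (x - t[i]) / (t[i + d] - t[i]) * B[:, i]
            if t[i + d + 1] > t[i + 1]:
                term += (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * B[:, i + 1]
            Bn[:, i] = term
        B = Bn
    return B  # shape (len(x), degree + 1 + len(interior_knots))

def gcv_select(x, y, candidate_num_knots, degree=3):
    """Least-squares spline fits for several interior-knot counts; keep the
    one minimizing GCV = n * RSS / (n - p)^2, p = number of basis functions."""
    n = len(y)
    best = None
    for k in candidate_num_knots:
        interior = np.linspace(0.0, 1.0, k + 2)[1:-1]  # equispaced knots
        X = bspline_design_matrix(x, interior, degree)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        gcv = n * rss / (n - X.shape[1]) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, k, beta)
    return best  # (gcv value, chosen knot count, coefficients)
```

For an unpenalized least-squares fit the trace of the hat matrix equals the number of basis functions, which is why p appears directly in the GCV denominator.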

  14. Geometric Measure Theory and Minimal Surfaces

    CERN Document Server

    Bombieri, Enrico

    2011-01-01

    W.K. ALLARD: On the first variation of area and generalized mean curvature.- F.J. ALMGREN Jr.: Geometric measure theory and elliptic variational problems.- E. GIUSTI: Minimal surfaces with obstacles.- J. GUCKENHEIMER: Singularities in soap-bubble-like and soap-film-like surfaces.- D. KINDERLEHRER: The analyticity of the coincidence set in variational inequalities.- M. MIRANDA: Boundaries of Caccioppoli sets in the calculus of variations.- L. PICCININI: De Giorgi's measure and thin obstacles.

  15. Minimizing Banking Risk in a Lévy Process Setting

    Directory of Open Access Journals (Sweden)

    F. Gideon

    2007-01-01

    Full Text Available The primary functions of a bank are to obtain funds through deposits from external sources and to use the said funds to issue loans. Moreover, risk management practices related to the withdrawal of these bank deposits have always been of considerable interest. In this spirit, we construct Lévy process-driven models of banking reserves in order to address the problem of hedging deposit withdrawals from such institutions by means of reserves. Here reserves are related to outstanding debt and act as a proxy for the assets held by the bank. The aforementioned modeling enables us to formulate a stochastic optimal control problem related to the minimization of reserve, depository, and intrinsic risk that are associated with the reserve process, the net cash flows from depository activity, and cumulative costs of the bank's provisioning strategy, respectively. A discussion of the main risk management issues arising from the optimization problem mentioned earlier forms an integral part of our paper. This includes the presentation of a numerical example involving a simulation of the provisions made for deposit withdrawals via treasuries and reserves.
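    Reserve dynamics of this type can be sketched as a jump-diffusion, a simple Lévy process in which deposit withdrawals arrive as compound Poisson jumps. The parameterization below is an illustrative toy, not the paper's model:

```python
import numpy as np

def simulate_reserve(r0, drift, sigma, jump_rate, jump_mean, T, n, rng):
    """Euler scheme for a jump-diffusion reserve process
    dR = drift dt + sigma dW - dJ, where J is a compound Poisson process of
    exponentially distributed deposit withdrawals (a simple Lévy model;
    all parameter choices here are illustrative assumptions)."""
    dt = T / n
    r = np.empty(n + 1)
    r[0] = r0
    for k in range(n):
        jumps = rng.poisson(jump_rate * dt)  # number of withdrawals this step
        withdrawal = rng.exponential(jump_mean, jumps).sum() if jumps else 0.0
        r[k + 1] = r[k] + drift * dt + sigma * np.sqrt(dt) * rng.normal() - withdrawal
    return r
```

A provisioning strategy would then be evaluated by simulating many such paths and penalizing shortfalls of the reserve against withdrawals.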

  16. Evolution in time of an N-atom system. I. A physical basis set for the projection of the master equation

    International Nuclear Information System (INIS)

    Freedhoff, Helen

    2004-01-01

    We study an aggregate of N identical two-level atoms (TLA's) coupled by the retarded interatomic interaction, using the Lehmberg-Agarwal master equation. First, we calculate the entangled eigenstates of the system; then, we use these eigenstates as a basis set for the projection of the master equation. We demonstrate that in this basis the equations of motion for the level populations, as well as the expressions for the emission and absorption spectra, assume a simple mathematical structure and allow for a transparent physical interpretation. To illustrate the use of the general theory in emission processes, we study an isosceles triangle of atoms, and present in the long wavelength limit the (cascade) emission spectrum for a hexagon of atoms fully excited at t=0. To illustrate its use for absorption processes, we tabulate (in the same limit) the biexciton absorption frequencies, linewidths, and relative intensities for polygons consisting of N=2,...,9 TLA's
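    In the long-wavelength limit the couplings depend only on interatomic distance, so for atoms on a regular polygon the coupling matrix is symmetric circulant and its entangled eigenstates are discrete Fourier modes. A sketch in which the 1/r coupling is an illustrative stand-in for the full retarded interaction:

```python
import numpy as np

def ring_coupling_matrix(n, strength=1.0):
    """Coupling matrix for n identical atoms on a regular polygon with a
    coupling depending only on the chord distance between sites (here an
    assumed ~ strength/r form). Function of (j - i) mod n, hence circulant."""
    idx = np.arange(n)
    sep = np.minimum((idx[None, :] - idx[:, None]) % n,
                     (idx[:, None] - idx[None, :]) % n)
    d = 2.0 * np.sin(np.pi * sep / n)  # chord distances; 0 on the diagonal
    m = np.zeros((n, n))
    off = d > 0
    m[off] = strength / d[off]
    return m

def collective_modes(n):
    """Eigenvalues (collective level shifts) and entangled eigenvectors."""
    return np.linalg.eigh(ring_coupling_matrix(n))
```

Because the matrix is circulant, its spectrum equals the real part of the FFT of its first row, which makes the Fourier-mode structure of the eigenstates explicit.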

  17. Laser-Doppler-Velocimetry on the basis of frequency selective absorption: set-up and test of a Doppler Global Velocimeter; Laser-Doppler-Velocimetry auf der Basis frequenzselektiver Absorption: Aufbau und Einsatz eines Doppler Global Velocimeters

    Energy Technology Data Exchange (ETDEWEB)

    Roehle, I.

    1999-11-01

    A Doppler Global Velocimeter was set up in the frame of a PhD thesis. This velocimeter is optimized to carry out high-accuracy, three-component, time-averaged planar velocity measurements. The anemometer was successfully applied to wind tunnel and test rig flows, and the measurement accuracy was investigated. A volumetric data set of the flow field inside an industrial combustion chamber was measured; this data field contained about 400,000 vectors. DGV measurements in the intake of a jet engine model were carried out using a fibre-bundle borescope. The flow structure of the wake of a car model in a wind tunnel was investigated. The measurement accuracy of the DGV system is ±0.5 m/s when operated under ideal conditions. This study can serve as a basis for evaluating the use of DGV in aerodynamic development experiments. (orig.) [German original, translated:] As part of the dissertation, a DGV instrument optimized for high measurement accuracy was developed and built for time-averaged three-component velocity measurements; it was successfully deployed on laboratory flows, at test stands and in wind tunnels, and the potential of the measurement technique, in particular with regard to measurement accuracy, was investigated. For an industrial combustion chamber, a volumetric data set of the flow field comprising about 400,000 vectors was produced. DGV measurements were carried out with a flexible endoscope based on a fibre bundle, and the flow in an aircraft intake was thus measured. DGV measurements were performed in the wake of a car model in a wind tunnel. Under ideal conditions, the measurement accuracy of the DGV system is ±0.5 m/s. This work provides a basis for assessing the usefulness of the DGV technique for aerodynamic development work. (orig.)

  18. Current trends in treatment of obesity in Karachi and possibilities of cost minimization.

    Science.gov (United States)

    Hussain, Mirza Izhar; Naqvi, Baqir Shyum

    2015-03-01

    Our study identifies drug usage trends in overweight and obese patients without any compelling indications in Karachi, looks for deviations of current practice from evidence-based antihypertensive therapeutic guidelines, and identifies not only cost minimization opportunities but also communication strategies to improve patients' awareness and compliance so that the therapeutic goal is achieved. Two survey sets were used: randomized stratified independent surveys of hospital doctors and of family physicians (general practitioners), conducted using pretested questionnaires with a sample size of 100. Statistical analysis was conducted with the Statistical Package for the Social Sciences (SPSS), and opportunities for cost minimization were also analyzed. On the basis of doctors' feedback, preference is given to non-pharmacologic management of obesity. A mass media campaign was recommended to increase patients' awareness, and patient education, along with strengthening family support systems, was recommended for better patient compliance with doctors' advice. Local therapeutic guidelines for weight reduction were not found; feedback showed that doctors practicing in the community and in hospitals in Karachi followed global therapeutic guidelines. However, high-priced branded drugs were used instead of low-priced generic therapeutic equivalents: the doctors were found to prefer brand leaders over low-cost options, a trend that increases the cost of therapy by 0.59 to 4.17 times. There are therefore substantial opportunities for cost minimization through the use of evidence-based, clinically effective and safe medicines, and patient education is required for better awareness and improved compliance.

  19. Electromagnetic field limits set by the V-Curve.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jorgenson, Roy Eberhardt [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hudson, Howard Gerald [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-07-01

    When emitters of electromagnetic energy are operated in the vicinity of sensitive components, the electric field at the component location must be kept below a certain level in order to prevent the component from being damaged, or in the case of electro-explosive devices, initiating. The V-Curve is a convenient way to set the electric field limit because it requires minimal information about the problem configuration. In this report we will discuss the basis for the V-Curve. We also consider deviations from the original V-Curve resulting from inductive versus capacitive antennas, increases in directivity gain for long antennas, decreases in input impedance when operating in a bounded region, and mismatches dictated by transmission line losses. In addition, we consider mitigating effects resulting from limited antenna sizes.

  20. A quantum molecular similarity analysis of changes in molecular electron density caused by basis set flotation and electric field application

    Science.gov (United States)

    Simon, Sílvia; Duran, Miquel

    1997-08-01

    Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and insight into the changes of bond critical point properties, self-similarity values and density differences is provided.
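    A standard QMS measure consistent with this kind of analysis is the Carbó similarity index, C_AB = Z_AB / sqrt(Z_AA · Z_BB) with Z_AB = ∫ ρ_A ρ_B dV; whether the authors use exactly this index is an assumption. On a common real-space grid it reduces to a few lines:

```python
import numpy as np

def overlap_similarity(rho_a, rho_b, dv):
    """Overlap similarity measure Z_AB = integral of rho_A * rho_B dV,
    for two densities sampled on the same grid with volume element dv."""
    return float(np.sum(rho_a * rho_b) * dv)

def carbo_index(rho_a, rho_b, dv):
    """Carbó similarity index: 1.0 for identical densities, < 1 otherwise
    (by the Cauchy-Schwarz inequality)."""
    zab = overlap_similarity(rho_a, rho_b, dv)
    zaa = overlap_similarity(rho_a, rho_a, dv)
    zbb = overlap_similarity(rho_b, rho_b, dv)
    return zab / np.sqrt(zaa * zbb)
```

Comparing the density of a molecule before and after basis-set flotation (or field application) with such an index quantifies how strongly the density rearranges.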

  1. Stabilization of a locally minimal forest

    International Nuclear Information System (INIS)

    Ivanov, A O; Mel'nikova, A E; Tuzhilin, A A

    2014-01-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles

  2. Waste minimization at Chalk River Laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Kranz, P.; Wong, P.C.F. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2011-07-01

The overall refuse volume at CRL (including solid radioactive waste, inactive waste and recyclables) decreased by 14% from 2007 to 2010. It should be noted that the workforce at CRL increased by approximately 15% during the same period. When considering the refuse volume data on a per capita basis, the volume of overall refuse per person was reduced from 3.03 m{sup 3}/person in 2007 to 2.25 m{sup 3}/person in 2010. This represents a 26% reduction in refuse over three years. This paper describes the waste minimization initiatives and achievements at CRL in detail, as well as the initiatives planned for the future. (author)

  3. Waste minimization at Chalk River Laboratories

    International Nuclear Information System (INIS)

    Kranz, P.; Wong, P.C.F.

    2011-01-01

The overall refuse volume at CRL (including solid radioactive waste, inactive waste and recyclables) decreased by 14% from 2007 to 2010. It should be noted that the workforce at CRL increased by approximately 15% during the same period. When considering the refuse volume data on a per capita basis, the volume of overall refuse per person was reduced from 3.03 m³/person in 2007 to 2.25 m³/person in 2010. This represents a 26% reduction in refuse over three years. This paper describes the waste minimization initiatives and achievements at CRL in detail, as well as the initiatives planned for the future. (author)

  4. Minimally Invasive Parathyroidectomy

    Directory of Open Access Journals (Sweden)

    Lee F. Starker

    2011-01-01

Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection, causing decreased trauma to tissues; it can be performed safely in the ambulatory setting and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.

  5. Generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators with applications in non-compact settings and minimization problems

    Directory of Open Access Journals (Sweden)

    Chowdhury Mohammad SR

    2000-01-01

Results are obtained on existence theorems of generalized bi-quasi-variational inequalities for quasi-semi-monotone and bi-quasi-semi-monotone operators in both compact and non-compact settings. We shall use the concept of escaping sequences introduced by Border (Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press, Cambridge, 1985) to obtain results in non-compact settings. Existence theorems on non-compact generalized bi-complementarity problems for quasi-semi-monotone and bi-quasi-semi-monotone operators are also obtained. Moreover, as applications of some results of this paper on generalized bi-quasi-variational inequalities, we shall obtain existence of solutions for some kinds of minimization problems with quasi-semi-monotone and bi-quasi-semi-monotone operators.

  6. Electronic structure of crystalline uranium nitrides UN, U{sub 2}N{sub 3} and UN{sub 2}: LCAO calculations with the basis set optimization

    Energy Technology Data Exchange (ETDEWEB)

    Evarestov, R A; Panin, A I; Bandura, A V; Losev, M V [Department of Quantum Chemistry, St. Petersburg State University, University Prospect 26, Stary Peterghof, St. Petersburg, 198504 (Russian Federation)], E-mail: re1973@re1973.spb.edu

    2008-06-01

The results of LCAO DFT calculations of the lattice parameters, cohesive energy and bulk modulus of the crystalline uranium nitrides UN, U{sub 2}N{sub 3} and UN{sub 2} are presented and discussed. The LCAO computer codes Gaussian03 and Crystal06 are applied. The calculations are made with the relativistic effective small-core potential for the uranium atom by the Stuttgart-Cologne group (60 electrons in the core). The calculations include optimization of the U atom basis set. Powell, Hooke-Jeeves, conjugate gradient and Box methods are implemented in the authors' optimization package, which is external to the codes for molecular and periodic calculations. The basis set optimization in LCAO calculations improves the agreement of the lattice parameter and bulk modulus of the UN crystal with the experimental data; the change in the cohesive energy due to the optimization is small. Mixed metallic-covalent chemical bonding is found in LCAO calculations of both the UN and U{sub 2}N{sub 3} crystals; the UN{sub 2} crystal is semiconducting in nature.

  7. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  8. Controllers with Minimal Observation Power (Application to Timed Systems)

    DEFF Research Database (Denmark)

    Bulychev, Petr; Cassez, Franck; David, Alexandre

    2012-01-01

We consider the problem of controller synthesis under imperfect information in a setting where there is a set of available observable predicates equipped with a cost function. The problem that we address is the computation of a subset of predicates sufficient for control and whose cost is minimal...

  9. Minimizing hydride cracking in zirconium alloys

    International Nuclear Information System (INIS)

    Coleman, C.E.; Cheadle, B.A.; Ambler, J.F.R.; Eadie, R.L.

    1985-01-01

    Zirconium alloy components can fail by hydride cracking if they contain large flaws and are highly stressed. If cracking in such components is suspected, crack growth can be minimized by following two simple operating rules: components should be heated up from at least 30K below any operating temperature above 450K, and when the component requires cooling to room temperature from a high temperature, any tensile stress should be reduced as much and as quickly as is practical during cooling. This paper describes the physical basis for these rules

  10. Common-cause analysis using sets

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1977-12-01

    Common-cause analysis was developed at the Aerojet Nuclear Company for studying the behavior of a system that is affected by special conditions and secondary causes. Common-cause analysis is related to fault tree analysis. Common-cause candidates are minimal cut sets whose primary events are closely linked by a special condition or are susceptible to the same secondary cause. It is shown that common-cause candidates can be identified using the Set Equation Transformation System (SETS). A Boolean equation is used to establish the special conditions and secondary cause susceptibilities for each primary event in the fault tree. A transformation of variables (substituting equals for equals), executed on a minimal cut set equation, results in replacing each primary event by the right side of its special condition/secondary cause equation and leads to the identification of the common-cause candidates
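The candidate-identification step described above can be illustrated with a minimal Python sketch (the event names, susceptibility sets, and cut sets below are invented for illustration; this is not the SETS code): a minimal cut set becomes a common-cause candidate when all of its primary events share a secondary-cause susceptibility.

```python
# Hypothetical susceptibility "equations": primary event -> secondary causes.
susceptibility = {
    "pump_a": {"flood", "fire"},
    "pump_b": {"flood"},
    "valve_c": {"fire"},
    "relay_d": {"seismic"},
}

# Minimal cut sets of the fault tree (invented example).
min_cut_sets = [
    {"pump_a", "pump_b"},
    {"pump_a", "valve_c"},
    {"pump_b", "relay_d"},
]

def common_cause_candidates(cut_sets, susceptibility):
    """Return (cut set, shared causes) pairs in which every primary event
    of the cut set is susceptible to at least one common secondary cause."""
    candidates = []
    for cs in cut_sets:
        shared = set.intersection(*(susceptibility[e] for e in cs))
        if shared:
            candidates.append((cs, shared))
    return candidates

for cs, causes in common_cause_candidates(min_cut_sets, susceptibility):
    print(sorted(cs), "->", sorted(causes))
```

Here the first two cut sets qualify (shared "flood" and "fire" susceptibilities, respectively), while the third does not.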

  11. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  12. Minimal Reducts with Grasp

    Directory of Open Access Journals (Sweden)

    Iris Iddaly Mendez Gurrola

    2011-03-01

The proper detection of a patient's level of dementia is important in order to offer suitable treatment. The diagnosis is based on certain criteria, reflected in clinical examinations. From these examinations emerge the limitations and the degree to which each patient is affected. In order to reduce the total number of limitations to be evaluated, we used rough set theory; this theory has been applied in areas of artificial intelligence such as decision analysis, expert systems, knowledge discovery, and classification with multiple attributes. In our case this theory is applied to find the minimal limitation set, or reduct, that generates the same classification as considering all the limitations. To fulfill this purpose we developed a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm.
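The reduct idea in this abstract can be sketched with a hypothetical GRASP-style search in Python (the decision table, parameters, and function names are invented; the published algorithm will differ in detail). A reduct is an attribute subset that still discerns every pair of rows with different decisions.

```python
import random

# Toy decision table: (attribute values, decision).
rows = [
    ((0, 1, 0), "mild"),
    ((1, 1, 0), "mild"),
    ((1, 0, 1), "severe"),
    ((0, 0, 1), "severe"),
]

def discerns(attrs, rows):
    """True if the attribute subset separates all rows with different decisions."""
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            (vi, di), (vj, dj) = rows[i], rows[j]
            if di != dj and all(vi[a] == vj[a] for a in attrs):
                return False
    return True

def grasp_reduct(rows, n_attrs, iters=50, rcl_size=2, seed=0):
    rng = random.Random(seed)
    best = list(range(n_attrs))
    for _ in range(iters):
        # Greedy randomized construction: add attributes drawn from a small
        # restricted candidate list until the subset discerns all pairs.
        subset = []
        while not discerns(subset, rows):
            candidates = [a for a in range(n_attrs) if a not in subset]
            rng.shuffle(candidates)
            subset.append(candidates[0] if len(candidates) <= rcl_size
                          else rng.choice(candidates[:rcl_size]))
        # Local search: drop attributes that turn out to be redundant.
        for a in list(subset):
            trimmed = [x for x in subset if x != a]
            if discerns(trimmed, rows):
                subset = trimmed
        if len(subset) < len(best):
            best = subset
    return sorted(best)

print(grasp_reduct(rows, 3))
```

For this toy table either attribute 1 or attribute 2 alone is a reduct, so the search returns a single-attribute subset.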

  13. Determining an Estimate of an Equivalence Relation for Moderate and Large Sized Sets

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2017-01-01

This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP-hard and can be solved with the use of exact algorithms for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing) which enable us to determine the optimal or a near-optimal solution at acceptable computational cost. They include the development of a statistical procedure producing comparisons with low probabilities of errors and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for any size of set. (original abstract)
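The underlying discrete optimization problem can be illustrated for a tiny set, where the exact approach is feasible: enumerate all partitions (equivalence relations) and pick the one minimizing disagreements with the noisy comparisons. The data and function names below are invented for illustration.

```python
# c[i][j] = 1 means the comparison declared i and j equivalent, 0 means not.

def partitions(n):
    """Yield set partitions of {0..n-1} as restricted-growth label tuples."""
    def rec(i, max_label):
        if i == n:
            yield ()
            return
        for lbl in range(max_label + 1):
            for rest in rec(i + 1, max(max_label, lbl + 1)):
                yield (lbl,) + rest
    return rec(0, 0)

def best_partition(c, n):
    """Exhaustive search: partition minimizing disagreements with c."""
    best, best_cost = None, float("inf")
    for labels in partitions(n):
        cost = sum((labels[i] == labels[j]) != bool(c[i][j])
                   for i in range(n) for j in range(i + 1, n))
        if cost < best_cost:
            best, best_cost = labels, cost
    return best, best_cost

# Comparisons for 4 elements: blocks {0,1} and {2,3}, with one erroneous entry.
c = [[0, 1, 0, 0],
     [1, 0, 1, 0],   # c[1][2] = 1 is the error
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
labels, cost = best_partition(c, 4)
print(labels, cost)
```

The estimator recovers the true partition (labels 0,0,1,1) with exactly one disagreement, the erroneous comparison. For larger sets this enumeration is infeasible, which is what motivates the heuristics in the paper.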

  14. Economic communication model set

    Science.gov (United States)

    Zvereva, Olga M.; Berg, Dmitry B.

    2017-06-01

This paper details findings from research targeted at the investigation of economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, while based on the same general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origin were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed on the basis of statistical data. During the simulation experiments, the communication process was observed in dynamics and system macroparameters were estimated. This research confirmed that the combination of an agent-based and a mathematical model can produce a synergetic effect.

  15. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-10-01

In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. Since our target is to minimize the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When comparing with the optimal result obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to optimal for the minimization of the depth of decision trees.
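The depth-minimization setting can be illustrated with a small hypothetical Python sketch (the table, tie-breaking rule, and heuristic below are invented and are not the authors' algorithms): a node becomes a leaf once all of its rows share at least one common decision.

```python
# Multi-label decision table: (attribute values, set of admissible decisions).
rows = [
    ((0, 0), {1, 2}),
    ((0, 1), {2}),
    ((1, 0), {1, 3}),
    ((1, 1), {3}),
]

def common_decision(rows):
    """A decision shared by every row in this node, or None."""
    shared = set.intersection(*(d for _, d in rows))
    return min(shared) if shared else None

def greedy_depth(rows, attrs):
    """Depth of a greedily built tree, assuming a consistent table."""
    if common_decision(rows) is not None:
        return 0  # leaf: one decision fits all rows here
    # Greedy choice: attribute whose largest branch is smallest.
    def worst_branch(a):
        values = {v[a] for v, _ in rows}
        return max(sum(1 for v, _ in rows if v[a] == val) for val in values)
    a = min(attrs, key=worst_branch)
    rest = [x for x in attrs if x != a]
    return 1 + max(greedy_depth([r for r in rows if r[0][a] == val], rest)
                   for val in {v[a] for v, _ in rows})

print(greedy_depth(rows, [0, 1]))
```

For this table splitting on the first attribute already isolates a common decision in each branch, so the greedy tree has depth 1.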

  16. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. Since our target is to minimize the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When comparing with the optimal result obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to optimal for the minimization of the depth of decision trees.

  17. Differential calculus on the space of Steiner minimal trees in Riemannian manifolds

    International Nuclear Information System (INIS)

    Ivanov, A O; Tuzhilin, A A

    2001-01-01

    It is proved that the length of a minimal spanning tree, the length of a Steiner minimal tree, and the Steiner ratio regarded as functions of finite subsets of a connected complete Riemannian manifold have directional derivatives in all directions. The derivatives of these functions are calculated and some properties of their critical points are found. In particular, a geometric criterion for a finite set to be critical for the Steiner ratio is found. This criterion imposes essential restrictions on the geometry of the sets for which the Steiner ratio attains its minimum, that is, the sets on which the Steiner ratio of the boundary set is equal to the Steiner ratio of the ambient space
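For reference, the Steiner ratio discussed above has the standard definition below; the notation (smt for the length of a Steiner minimal tree, mst for the length of a minimal spanning tree) is assumed, not taken from the paper.

```latex
% sr(P): Steiner ratio of a finite boundary set P; sr(X): Steiner ratio
% of the ambient space X, the infimum over all finite subsets.
\[
  \mathrm{sr}(P) = \frac{\mathrm{smt}(P)}{\mathrm{mst}(P)},
  \qquad
  \mathrm{sr}(X) = \inf_{\substack{P \subset X \\ 2 \le |P| < \infty}} \mathrm{sr}(P).
\]
```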

  18. PREP KITT, System Reliability by Fault Tree Analysis. PREP, Min Path Set and Min Cut Set for Fault Tree Analysis, Monte-Carlo Method. KITT, Component and System Reliability Information from Kinetic Fault Tree Theory

    International Nuclear Information System (INIS)

    Vesely, W.E.; Narum, R.E.

    1997-01-01

    1 - Description of problem or function: The PREP/KITT computer program package obtains system reliability information from a system fault tree. The PREP program finds the minimal cut sets and/or the minimal path sets of the system fault tree. (A minimal cut set is a smallest set of components such that if all the components are simultaneously failed the system is failed. A minimal path set is a smallest set of components such that if all of the components are simultaneously functioning the system is functioning.) The KITT programs determine reliability information for the components of each minimal cut or path set, for each minimal cut or path set, and for the system. Exact, time-dependent reliability information is determined for each component and for each minimal cut set or path set. For the system, reliability results are obtained by upper bound approximations or by a bracketing procedure in which various upper and lower bounds may be obtained as close to one another as desired. The KITT programs can handle independent components which are non-repairable or which have a constant repair time. Any assortment of non-repairable components and components having constant repair times can be considered. Any inhibit conditions having constant probabilities of occurrence can be handled. The failure intensity of each component is assumed to be constant with respect to time. The KITT2 program can also handle components which during different time intervals, called phases, may have different reliability properties. 2 - Method of solution: The PREP program obtains minimal cut sets by either direct deterministic testing or by an efficient Monte Carlo algorithm. The minimal path sets are obtained using the Monte Carlo algorithm. The reliability information is obtained by the KITT programs from numerical solution of the simple integral balance equations of kinetic tree theory. 
3 - Restrictions on the complexity of the problem: The PREP program will obtain the minimal cut and
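The minimal cut set notion defined in this description can be illustrated with a small top-down expansion sketch in Python (MOCUS-style; the tree and function names are invented and this is not the PREP code): OR-gates split a cut set into alternatives, AND-gates enlarge it, and non-minimal sets are discarded at the end.

```python
# Toy fault tree: gate -> (type, inputs); names not in the dict are
# basic events. The system fails if (A AND B) OR C.
tree = {
    "TOP": ("OR", ["G1", "C"]),
    "G1": ("AND", ["A", "B"]),
}

def cut_sets(node, tree):
    """Top-down expansion of all cut sets below a node."""
    if node not in tree:               # basic event
        return [{node}]
    kind, inputs = tree[node]
    if kind == "OR":                   # union of the inputs' cut sets
        return [cs for child in inputs for cs in cut_sets(child, tree)]
    sets = [set()]                     # AND: cross-product of inputs' cut sets
    for child in inputs:
        sets = [s | cs for s in sets for cs in cut_sets(child, tree)]
    return sets

def minimize(sets):
    """Drop any cut set that strictly contains another."""
    return [s for s in sets if not any(o < s for o in sets)]

print(minimize(cut_sets("TOP", tree)))
```

For this tree the minimal cut sets are {A, B} and {C}: failing both A and B, or failing C alone, fails the system.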

  19. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    Science.gov (United States)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
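The iterated greedy skeleton described above (destruction followed by greedy reinsertion) can be sketched as follows. The instance data, parameters, and the single-machine total-flow-time objective below are simplified stand-ins for the paper's job-shop setting, not the authors' algorithm.

```python
import random

proc = [3, 2, 4, 1]                      # processing time of each job
setup = [[0, 2, 1, 3],                   # setup[i][j]: set-up when j follows i
         [2, 0, 2, 1],
         [1, 2, 0, 2],
         [3, 1, 2, 0]]

def total_flow_time(seq):
    """Sum of completion times with sequence-dependent set-ups."""
    t, total = 0, 0
    for k, j in enumerate(seq):
        if k:
            t += setup[seq[k - 1]][j]
        t += proc[j]
        total += t
    return total

def iterated_greedy(n, d=2, iters=100, seed=0):
    rng = random.Random(seed)
    seq = list(range(n))
    best = seq[:]
    for _ in range(iters):
        removed = rng.sample(seq, d)          # destruction: drop d jobs
        partial = [j for j in seq if j not in removed]
        for j in removed:                      # construction: best reinsertion
            candidates = [partial[:p] + [j] + partial[p:]
                          for p in range(len(partial) + 1)]
            partial = min(candidates, key=total_flow_time)
        seq = partial
        if total_flow_time(seq) < total_flow_time(best):
            best = seq[:]
        else:
            seq = best[:]                      # accept only improvements
    return best, total_flow_time(best)

seq, cost = iterated_greedy(4)
print(seq, cost)
```

The returned sequence is never worse than the initial one, since only improving moves are accepted.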

  20. Genomic determinants of sporulation in Bacilli and Clostridia: towards the minimal set of sporulation-specific genes.

    Science.gov (United States)

    Galperin, Michael Y; Mekhedov, Sergei L; Puigbo, Pere; Smirnov, Sergey; Wolf, Yuri I; Rigden, Daniel J

    2012-11-01

    Three classes of low-G+C Gram-positive bacteria (Firmicutes), Bacilli, Clostridia and Negativicutes, include numerous members that are capable of producing heat-resistant endospores. Spore-forming firmicutes include many environmentally important organisms, such as insect pathogens and cellulose-degrading industrial strains, as well as human pathogens responsible for such diseases as anthrax, botulism, gas gangrene and tetanus. In the best-studied model organism Bacillus subtilis, sporulation involves over 500 genes, many of which are conserved among other bacilli and clostridia. This work aimed to define the genomic requirements for sporulation through an analysis of the presence of sporulation genes in various firmicutes, including those with smaller genomes than B. subtilis. Cultivable spore-formers were found to have genomes larger than 2300 kb and encompass over 2150 protein-coding genes of which 60 are orthologues of genes that are apparently essential for sporulation in B. subtilis. Clostridial spore-formers lack, among others, spoIIB, sda, spoVID and safA genes and have non-orthologous displacements of spoIIQ and spoIVFA, suggesting substantial differences between bacilli and clostridia in the engulfment and spore coat formation steps. Many B. subtilis sporulation genes, particularly those encoding small acid-soluble spore proteins and spore coat proteins, were found only in the family Bacillaceae, or even in a subset of Bacillus spp. Phylogenetic profiles of sporulation genes, compiled in this work, confirm the presence of a common sporulation gene core, but also illuminate the diversity of the sporulation processes within various lineages. These profiles should help further experimental studies of uncharacterized widespread sporulation genes, which would ultimately allow delineation of the minimal set(s) of sporulation-specific genes in Bacilli and Clostridia. Published 2012. This article is a U.S. 
Government work and is in the public domain in the USA.

  1. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  2. Minimal Self-Models and the Free Energy Principle

    Directory of Open Access Journals (Sweden)

    Jakub eLimanowski

    2013-09-01

The term "minimal phenomenal selfhood" describes the basic, pre-reflective experience of being a self (Blanke & Metzinger, 2009). Theoretical accounts of the minimal self have long recognized the importance and the ambivalence of the body as both part of the physical world and the enabling condition for being in this world (Gallagher, 2005; Grafton, 2009). A recent account of minimal phenomenal selfhood (MPS; Metzinger, 2004a) centers on the consideration that minimal selfhood emerges as the result of basic self-modeling mechanisms, thereby being founded on pre-reflective bodily processes. The free energy principle (FEP; Friston, 2010) is a novel unified theory of cortical function built upon the imperative that self-organizing systems entail hierarchical generative models of the causes of their sensory input, which are optimized by minimizing free energy as an approximation of the log-likelihood of the model. The implementation of the FEP via predictive coding mechanisms, and in particular the active inference principle, emphasizes the role of embodiment for predictive self-modeling, which has been appreciated in recent publications. In this review, we provide an overview of these conceptions and thereby illustrate the potential power of the FEP in explaining the mechanisms underlying minimal selfhood and its key constituents: multisensory integration, interoception, agency, perspective, and the experience of mineness. We conclude that the conceptualization of MPS can be well mapped onto a hierarchical generative model furnished by the free energy principle and may constitute the basis for higher-level, cognitive forms of self-referral, as well as for the understanding of other minds.

  3. Towards the assembly of a minimal oscillator

    NARCIS (Netherlands)

    Nourian, Z.

    2015-01-01

    Life must have started with lower degree of complexity and connectivity. This statement readily triggers the question how simple is the simplest representation of life? In different words and considering a constructive approach, what are the requirements for creating a minimal cell? This thesis sets

  4. Gap-minimal systems of notations and the constructible hierarchy

    Science.gov (United States)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  5. FTA, Fault Tree Analysis for Minimal Cut Sets, Graphics for CALCOMP

    International Nuclear Information System (INIS)

    Van Slyke, W.J.; Griffing, D.E.; Diven, J.

    1978-01-01

    1 - Description of problem or function: The FTA (Fault Tree Analysis) system was designed to predict probabilities of the modes of failure for complex systems and to graphically present the structure of systems. There are three programs in the system. Program ALLCUTS performs the calculations. Program KILMER constructs a CalComp plot file of the system fault tree. Program BRANCH builds a cross-reference list of the system fault tree. 2 - Method of solution: ALLCUTS employs a top-down set expansion algorithm to find fault tree cut-sets and then optionally calculates their probability using a currently accepted cut-set quantification method. The methodology is adapted from that in WASH-1400 (draft), August 1974. 3 - Restrictions on the complexity of the problem: Maxima of: 175 basic events, 425 rate events. ALLCUTS may be expanded to solve larger problems depending on available core memory
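The cut-set quantification step mentioned in the description can be illustrated with a short sketch: assuming independent basic events, each minimal cut set contributes the product of its event probabilities, and the first-order (rare-event) approximation sums these contributions. The probabilities and event names below are invented for illustration.

```python
# Basic-event probabilities (invented).
p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}
min_cut_sets = [{"A", "B"}, {"C"}]

def cut_set_prob(cs):
    """Probability of a cut set under independence: product over its events."""
    prob = 1.0
    for e in cs:
        prob *= p[e]
    return prob

# Rare-event (first-order) approximation of the system failure probability.
p_system = sum(cut_set_prob(cs) for cs in min_cut_sets)
print(p_system)
```

Here the single-event cut set {C} dominates; the approximation is an upper bound that is accurate when all cut-set probabilities are small.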

  6. The power of simplification: Operator interface with the AP1000R during design-basis and beyond design-basis events

    International Nuclear Information System (INIS)

    Williams, M. G.; Mouser, M. R.; Simon, J. B.

    2012-01-01

The AP1000R plant is an 1100-MWe pressurized water reactor with passive safety features and extensive plant simplifications that enhance construction, operation, maintenance, safety and cost. The passive safety features are designed to function without safety-grade support systems such as component cooling water, service water, compressed air or HVAC. The AP1000 passive safety features achieve and maintain safe shutdown in case of a design-basis accident for 72 hours without need for operator action, meeting the expectations provided in the European Utility Requirements and the Utility Requirement Document for passive plants. Limited operator actions may be required to maintain safe conditions in the spent fuel pool (SFP) via passive means. This safety approach therefore minimizes the reliance on operator action for accident mitigation, and this paper examines the operator interaction with the Human-System Interface (HSI) as the severity of an accident increases from an anticipated transient to a design basis accident and finally, to a beyond-design-basis event. The AP1000 Control Room design provides an extremely effective environment for addressing the first 72 hours of design-basis events and transients, providing ease of information dissemination and minimal reliance upon operator actions. Symptom-based procedures including Emergency Operating Procedures (EOPs), Abnormal Operating Procedures (AOPs) and Alarm Response Procedures (ARPs) are used to mitigate design basis transients and accidents. Use of the Computerized Procedure System (CPS) aids the operators during mitigation of the event. The CPS provides cues and direction to the operators as the event progresses. 
If the event becomes progressively worse or lasts longer than 72 hours, and depending upon the nature of failures that may have occurred, minimal operator actions may be required outside of the control room in areas that have been designed to be accessible using components that have been designed

  7. Evaluation of one-dimensional and two-dimensional volatility basis sets in simulating the aging of secondary organic aerosol with smog-chamber experiments.

    Science.gov (United States)

    Zhao, Bin; Wang, Shuxiao; Donahue, Neil M; Chuang, Wayne; Hildebrandt Ruiz, Lea; Ng, Nga L; Wang, Yangjun; Hao, Jiming

    2015-02-17

    We evaluate the one-dimensional volatility basis set (1D-VBS) and two-dimensional volatility basis set (2D-VBS) in simulating the aging of SOA derived from toluene and α-pinene against smog-chamber experiments. If we simulate the first-generation products with empirical chamber fits and the subsequent aging chemistry with a 1D-VBS or a 2D-VBS, the models mostly overestimate the SOA concentrations in the toluene oxidation experiments. This is because the empirical chamber fits include both first-generation oxidation and aging; simulating aging in addition to this results in double counting of the initial aging effects. If the first-generation oxidation is treated explicitly, the base-case 2D-VBS underestimates the SOA concentrations and O:C increase of the toluene oxidation experiments; it generally underestimates the SOA concentrations and overestimates the O:C increase of the α-pinene experiments. With the first-generation oxidation treated explicitly, we could modify the 2D-VBS configuration individually for toluene and α-pinene to achieve good model-measurement agreement. However, we are unable to simulate the oxidation of both toluene and α-pinene with the same 2D-VBS configuration. We suggest that future models should implement parallel layers for anthropogenic (aromatic) and biogenic precursors, and that more modeling studies and laboratory research be done to optimize the "best-guess" parameters for each layer.

  8. FTAP, Minimal Cut Sets of Arbitrary Fault Trees. FRTPLT, Fault Tree Structure and Logical Gates Plot for Program FTAP. FRTGEN, Fault Trees by Sub-tree Generator from Parent Tree for Program FTAP

    International Nuclear Information System (INIS)

    Willie, Randall R.; Rabien, U.

    1997-01-01

    1 - Description of problem or function: FTAP is a general-purpose program for deriving minimal reliability cut and path set families from the fault tree for a complex system. The program has a number of useful features that make it well-suited to nearly all fault tree applications. An input fault tree may specify the system state as any logical function of subsystem or component state variables or complements of these variables; thus, for instance, 'exclusive-or' type relations may be formed. When fault tree logical relations involve complements of state variables, the analyst may instruct FTAP to produce a family of prime implicants, a generalization of the minimal cut set concept. The program offers the flexibility of several distinct methods of generating cut set families. FTAP can also identify certain subsystems as system modules and provide a collection of minimal cut set families that essentially expresses the system state as a function of these module state variables. Another feature allows a useful subfamily to be obtained when the family of minimal cut sets or prime implicants is too large to be found in its entirety; this subfamily may consist of only those sets not containing more than some fixed number of elements or only those sets 'interesting' to the analyst in some special sense. Finally, the analyst can modify the input fault tree in various ways by declaring state variables identically true or false. 2 - Method of solution: Fault tree methods are based on the observation that the system state, either working or failed, can usually be expressed as a Boolean relation between states of several large, readily identifiable subsystems. The state of each subsystem in turn depends on states of simpler subsystems and components which compose it, so that the state of the system itself is determined by a hierarchy of logical relationships between states of subsystems. A fault tree is a graphical representation of these relationships. 
3 - Restrictions on the

  9. Chemical basis for minimal cognition

    DEFF Research Database (Denmark)

    Hanczyc, Martin; Ikegami, Takashi

tension between the drop of oil and its environment. We embed a chemical reaction in the oil phase that reacts with water when an oily precursor comes in contact with the water phase at the liquid-liquid interface. This reaction not only powers the droplet to move in the aqueous phase but also allows...... for sustained movement. The direction of the movement is governed by a self-generated pH gradient that surrounds the droplet. In addition this self-generated gradient can be overridden by an externally imposed pH gradient, and therefore the direction of droplet motion may be controlled. Also we noticed...... that convection flow is generated inside the oil droplet to cause the movement, which was also confirmed by simulating the fluid dynamics integrated with chemical reactions (Matsuno et al., 2007, ACAL 07, Springer, p. 179). We can observe that the droplet senses the gradient in the environment (either

  10. Prederivatives of gamma paraconvex set-valued maps and Pareto optimality conditions for set optimization problems.

    Science.gov (United States)

    Huang, Hui; Ning, Jixian

    2017-01-01

    Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ-paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solutions of set optimization problems.

  11. Rule extraction from minimal neural networks for credit card screening.

    Science.gov (United States)

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural-network-based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
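The connection-removal idea can be sketched with simple magnitude-based pruning of input-to-hidden weights; the weights, threshold, and criterion here are illustrative assumptions, not the exact procedure of the paper:

```python
import numpy as np

# Magnitude-based pruning of input-to-hidden weights for a one-hidden-unit
# network; weights and threshold are illustrative, and the paper's actual
# pruning criterion may differ.
rng = np.random.default_rng(0)
w = rng.normal(size=8)           # weights from 8 inputs to the hidden unit

def prune(weights, threshold):
    """Zero out connections whose magnitude falls below the threshold."""
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

pruned, kept = prune(w, 0.5)     # surviving connections keep their weight
```

Fewer surviving connections mean fewer antecedents in the classification rules later extracted from the network.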

  12. Many-Body Energy Decomposition with Basis Set Superposition Error Corrections.

    Science.gov (United States)

    Mayer, István; Bakó, Imre

    2017-05-09

    The problem of performing many-body decompositions of energy is considered in the case when BSSE corrections are also performed. It is discussed that the two different schemes that have been proposed go back to the two different interpretations of the original Boys-Bernardi counterpoise correction scheme. It is argued that from the physical point of view the "hierarchical" scheme of Valiron and Mayer should be preferred and not the scheme recently discussed by Ouyang and Bettens, because it permits the energy of the individual monomers and all the two-body, three-body, etc. energy components to be free of unphysical dependence on the arrangement (basis functions) of other subsystems in the cluster.

  13. Electrochemistry as a basis for radiochemical generator systems

    International Nuclear Information System (INIS)

    Bentley, G.E.; Steinkruger, F.J.; Wanek, P.M.

    1984-01-01

    Ion exchange and solvent extraction techniques have been used extensively as the basis for radiochemical generators exploiting the differences in absorption behavior between the parent nuclide and its useful daughter nuclide. Many parent/daughter pairs of nuclides have sufficiently different polarographic half-wave potentials that their electrochemical behavior may be exploited for rapid separation of the daughter from the parent, with minimal contamination of the product by the parent isotope.

  14. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is

  15. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    Science.gov (United States)

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, which is O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
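As a point of reference, the classical partition-refinement approach that such methods improve upon can be sketched as follows; the DFA encoding is an illustrative assumption, and this is the textbook Moore-style baseline, not the paper's backward-depth algorithm:

```python
# Classical Moore-style partition refinement for DFA minimization: states
# are 0..n-1, and delta[state][symbol] -> state.  This is the textbook
# baseline, not the backward-depth algorithm of the paper.
def minimize_dfa(n, alphabet, delta, accepting):
    parts = [frozenset(accepting), frozenset(range(n)) - frozenset(accepting)]
    parts = [p for p in parts if p]          # drop an empty block
    while True:
        index = {s: i for i, p in enumerate(parts) for s in p}
        groups = {}
        for s in range(n):
            # states stay together only if they sit in the same block and
            # their successors fall into the same blocks for every symbol
            key = (index[s], tuple(index[delta[s][a]] for a in alphabet))
            groups.setdefault(key, set()).add(s)
        refined = [frozenset(g) for g in groups.values()]
        if len(refined) == len(parts):       # fixed point: partition is stable
            return refined
        parts = refined

# Two pairs of behaviourally equivalent states: {0, 2} and {1, 3}.
delta = {0: {"a": 1}, 1: {"a": 0}, 2: {"a": 3}, 3: {"a": 2}}
blocks = minimize_dfa(4, ["a"], delta, {0, 2})
```

Each returned block becomes one state of the minimal DFA.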

  16. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation and the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
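The two benchmarks can be made concrete with the standard distribution-based formulas, SEM = SD·√(1 − reliability) and MDC95 = 1.96·√2·SEM; the numeric inputs below are hypothetical:

```python
import math

# Standard distribution-based benchmarks (hypothetical numbers): the
# standard error of measurement, SEM = SD * sqrt(1 - reliability), and the
# smallest change detectable beyond measurement error at 95% confidence,
# MDC95 = 1.96 * sqrt(2) * SEM.
sd, reliability = 10.0, 0.91

sem = sd * math.sqrt(1 - reliability)      # 3.0 scale points
mdc95 = 1.96 * math.sqrt(2) * sem          # ≈ 8.32 scale points
```

On this instrument, an anchor-derived MIC smaller than about 8.3 points could not be distinguished from measurement error for an individual patient.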

  17. Environmental Restoration Program Waste Minimization and Pollution Prevention Awareness Program Plan

    Energy Technology Data Exchange (ETDEWEB)

    Grumski, J. T.; Swindle, D. W.; Bates, L. D.; DeLozier, M. F.P.; Frye, C. E.; Mitchell, M. E.

    1991-09-30

    In response to DOE Order 5400.1, this plan outlines the requirements for a Waste Minimization and Pollution Prevention Awareness Program for the Environmental Restoration (ER) Program at Martin Marietta Energy Systems, Inc. Statements of the national, Department of Energy, Energy Systems, and Energy Systems ER Program policies on waste minimization are included and reflect the attitudes of these organizations and their commitment to the waste minimization effort. Organizational responsibilities for the waste minimization effort are clearly defined and discussed, and the program objectives and goals are set forth. Waste assessment is addressed as a key element in developing the waste generation baseline. There are discussions on the scope of ER-specific waste minimization techniques and approaches to employee awareness and training. There is also a discussion of the process for continual evaluation of the Waste Minimization Program. Appendixes present an implementation schedule for the Waste Minimization and Pollution Prevention Program, the program budget, an organization chart, and the ER waste minimization policy.

  18. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    Science.gov (United States)

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  19. Environmental Restoration Program Waste Minimization and Pollution Prevention Awareness Program Plan

    International Nuclear Information System (INIS)

    1991-01-01

    In response to DOE Order 5400.1, this plan outlines the requirements for a Waste Minimization and Pollution Prevention Awareness Program for the Environmental Restoration (ER) Program at Martin Marietta Energy Systems, Inc. Statements of the national, Department of Energy, Energy Systems, and Energy Systems ER Program policies on waste minimization are included and reflect the attitudes of these organizations and their commitment to the waste minimization effort. Organizational responsibilities for the waste minimization effort are clearly defined and discussed, and the program objectives and goals are set forth. Waste assessment is addressed as a key element in developing the waste generation baseline. There are discussions on the scope of ER-specific waste minimization techniques and approaches to employee awareness and training. There is also a discussion of the process for continual evaluation of the Waste Minimization Program. Appendixes present an implementation schedule for the Waste Minimization and Pollution Prevention Program, the program budget, an organization chart, and the ER waste minimization policy.

  20. The power of simplification: Operator interface with the AP1000® during design-basis and beyond-design-basis events

    Energy Technology Data Exchange (ETDEWEB)

    Williams, M. G.; Mouser, M. R.; Simon, J. B. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States)

    2012-07-01

    The AP1000® plant is an 1100-MWe pressurized water reactor with passive safety features and extensive plant simplifications that enhance construction, operation, maintenance, safety and cost. The passive safety features are designed to function without safety-grade support systems such as component cooling water, service water, compressed air or HVAC. The AP1000 passive safety features achieve and maintain safe shutdown in case of a design-basis accident for 72 hours without need for operator action, meeting the expectations provided in the European Utility Requirements and the Utility Requirement Document for passive plants. Limited operator actions may be required to maintain safe conditions in the spent fuel pool (SFP) via passive means. This safety approach therefore minimizes the reliance on operator action for accident mitigation, and this paper examines the operator interaction with the Human-System Interface (HSI) as the severity of an accident increases from an anticipated transient to a design basis accident and finally, to a beyond-design-basis event. The AP1000 Control Room design provides an extremely effective environment for addressing the first 72 hours of design-basis events and transients, providing ease of information dissemination and minimal reliance upon operator actions. Symptom-based procedures including Emergency Operating Procedures (EOPs), Abnormal Operating Procedures (AOPs) and Alarm Response Procedures (ARPs) are used to mitigate design basis transients and accidents. Use of the Computerized Procedure System (CPS) aids the operators during mitigation of the event. The CPS provides cues and direction to the operators as the event progresses. If the event becomes progressively worse or lasts longer than 72 hours, and depending upon the nature of failures that may have occurred, minimal operator actions may be required outside of the control room in areas that have been designed to be accessible using components that have been

  1. A game on the universe of sets

    International Nuclear Information System (INIS)

    Saveliev, D I

    2008-01-01

    Working in set theory without the axiom of regularity, we consider a two-person game on the universe of sets. In this game, the players choose in turn an element of a given set, an element of this element and so on. A player wins if he leaves his opponent no possibility of making a move, that is, if he has chosen the empty set. Winning sets (those admitting a winning strategy for one of the players) form a natural hierarchy with levels indexed by ordinals (in the finite case, the ordinal indicates the shortest length of a winning strategy). We show that the class of hereditarily winning sets is an inner model containing all well-founded sets and that each of the four possible relations between the universe, the class of hereditarily winning sets, and the class of well-founded sets is consistent. As far as the class of winning sets is concerned, either it is equal to the whole universe, or many of the axioms of set theory cannot hold on this class. Somewhat surprisingly, this does not apply to the axiom of regularity: we show that the failure of this axiom is consistent with its relativization to winning sets. We then establish more subtle properties of winning non-well-founded sets. We describe all classes of ordinals for which the following is consistent: winning sets without minimal elements (in the sense of membership) occur exactly at the levels indexed by the ordinals of this class. In particular, we show that if an even level of the hierarchy of winning sets contains a set without minimal elements, then all higher levels contain such sets. We show that the failure of the axiom of regularity implies that all odd levels contain sets without minimal elements, but it is consistent with the absence of such sets at all even levels as well as with their appearance at an arbitrary even non-limit or countable-cofinal level. To obtain consistency results, we propose a new method for obtaining models with non-well-founded sets. 
Finally, we study how long this game can
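For well-founded hereditarily finite sets, the game described above can be solved directly by memoized recursion; the encoding below is a small illustrative sketch, not from the paper:

```python
from functools import lru_cache

# Memoized solver for the membership game on well-founded hereditarily
# finite sets: players alternately pick an element of the current set, and
# a player unable to move (facing the empty set) has lost.
@lru_cache(maxsize=None)
def first_player_wins(s):
    # a move to element e wins iff the opponent, who must then move from e,
    # is left in a losing position
    return any(not first_player_wins(e) for e in s)

empty = frozenset()
one = frozenset({empty})       # {∅}: pick ∅ and the opponent cannot move
two = frozenset({one})         # {{∅}}: the forced move hands the win away
```

On well-founded sets the recursion always terminates; the non-well-founded levels studied in the paper are exactly where this naive solver no longer applies.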

  2. Motivations for seeking minimally invasive cosmetic procedures in an academic outpatient setting.

    Science.gov (United States)

    Sobanko, Joseph F; Taglienti, Anthony J; Wilson, Anthony J; Sarwer, David B; Margolis, David J; Dai, Julia; Percec, Ivona

    2015-11-01

    The demand for minimally invasive cosmetic procedures has continued to rise, yet few studies have examined this patient population. This study sought to define the demographics, social characteristics, and motivations of patients seeking minimally invasive facial cosmetic procedures. A prospective, single-institution cohort study of 72 patients was conducted from 2011 through 2014 at an urban academic medical center. Patients were aged 25 through 70 years; presented for botulinum toxin or soft tissue filler injections; and completed demographic, informational, and psychometric questionnaires before treatment. Descriptive statistics were conducted using Stata statistical software. The average patient was 47.8 years old, was married, had children, was employed, possessed a college or advanced degree, and reported an above-average income. Most patients felt that the first signs of aging occurred around their eyes (74.6%), and a similar percentage expressed this area was the site most desired for rejuvenation. Almost one-third of patients experienced a "major life event" within the preceding year, nearly half had sought prior counseling from a mental health specialist, and 23.6% were being actively prescribed psychiatric medication at the time of treatment. Patients undergoing injectable aesthetic treatments in an urban outpatient academic center were mostly employed, highly educated, affluent women who believed that their procedure would positively impact their appearance. A significant minority experienced a major life event within the past year, which an astute clinician should address during the initial patient consultation. This study helps to better understand the psychosocial factors characterizing this patient population. Level of Evidence: 4 (Therapeutic). © 2015 The American Society for Aesthetic Plastic Surgery, Inc.

  3. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  4. Radial basis function neural networks with sequential learning MRAN and its applications

    CERN Document Server

    Sundararajan, N; Wei Lu Ying

    1999-01-01

    This book presents in detail the newly developed sequential learning algorithm for radial basis function neural networks, which realizes a minimal network. This algorithm, created by the authors, is referred to as Minimal Resource Allocation Networks (MRAN). The book describes the application of MRAN in different areas, including pattern recognition, time series prediction, system identification, control, communication and signal processing. Benchmark problems from these areas have been studied, and MRAN is compared with other algorithms. In order to make the book self-contained, a review of t

  5. Electronic structure of thin films by the self-consistent numerical-basis-set linear combination of atomic orbitals method: Ni(001)

    International Nuclear Information System (INIS)

    Wang, C.S.; Freeman, A.J.

    1979-01-01

    We present the self-consistent numerical-basis-set linear combination of atomic orbitals (LCAO) discrete variational method for treating the electronic structure of thin films. As in the case of bulk solids, this method provides for thin films accurate solutions of the one-particle local density equations with a non-muffin-tin potential. Hamiltonian and overlap matrix elements are evaluated accurately by means of a three-dimensional numerical Diophantine integration scheme. Application of this method is made to the self-consistent solution of one-, three-, and five-layer Ni(001) unsupported films. The LCAO Bloch basis set consists of valence orbitals (3d, 4s, and 4p states for transition metals) orthogonalized to the frozen-core wave functions. The self-consistent potential is obtained iteratively within the superposition of overlapping spherical atomic charge density model with the atomic configurations treated as adjustable parameters. Thus the crystal Coulomb potential is constructed as a superposition of overlapping spherically symmetric atomic potentials and, correspondingly, the local density Kohn-Sham (α = 2/3) potential is determined from a superposition of atomic charge densities. At each iteration in the self-consistency procedure, the crystal charge density is evaluated using a sampling of 15 independent k points in (1/8)th of the irreducible two-dimensional Brillouin zone. The total density of states (DOS) and projected local DOS (by layer plane) are calculated using an analytic linear energy triangle method (presented as an Appendix) generalized from the tetrahedron scheme for bulk systems. Distinct differences are obtained between the surface and central plane local DOS. The central plane DOS is found to converge rapidly to the DOS of bulk paramagnetic Ni obtained by Wang and Callaway. Only a very small surplus charge (0.03 electron/atom) is found on the surface planes, in agreement with jellium model calculations

  6. Irreducible descriptive sets of attributes for information systems

    KAUST Repository

    Moshkov, Mikhail; Skowron, Andrzej; Suraj, Zbigniew

    2010-01-01

    An irreducible descriptive set for the considered information system S is a minimal (relative to inclusion) set B of attributes which defines exactly the set Ext(S) by means of true and realizable rules constructed over attributes from the considered set B.

  7. Inflation in non-minimal matter-curvature coupling theories

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, C.; Bertolami, O. [Departamento de Física e Astronomia and Centro de Física do Porto, Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre s/n, 4169-007 Porto (Portugal); Rosa, J.G., E-mail: claudio.gomes@fc.up.pt, E-mail: joao.rosa@ua.pt, E-mail: orfeu.bertolami@fc.up.pt [Departamento de Física da Universidade de Aveiro and CIDMA, Campus de Santiago, 3810-183 Aveiro (Portugal)

    2017-06-01

    We study inflationary scenarios driven by a scalar field in the presence of a non-minimal coupling between matter and curvature. We show that the Friedmann equation can be significantly modified when the energy density during inflation exceeds a critical value determined by the non-minimal coupling, which in turn may considerably modify the spectrum of primordial perturbations and the inflationary dynamics. In particular, we show that these models are characterised by a consistency relation between the tensor-to-scalar ratio and the tensor spectral index that can differ significantly from the predictions of general relativity. We also give examples of observational predictions for some of the most commonly considered potentials and use the results of the Planck collaboration to set limits on the scale of the non-minimal coupling.

  8. Minimally invasive surgical treatment of malignant pleural effusions.

    Science.gov (United States)

    Ciuche, Adrian; Nistor, Claudiu; Pantile, Daniel; Prof Horvat, Teodor

    2011-10-01

    Usually the pleural cavity contains a small amount of liquid (approximately 10 ml). Pleural effusions appear when the liquid production rate overtakes the absorption rate, leaving a greater amount of liquid inside the pleural cavity. Between January 1998 and December 2008 we conducted a study in order to establish the appropriate surgical treatment for malignant pleural effusions (MPEs). Effective control of a recurrent malignant pleural effusion can greatly improve the quality of life of the cancer patient. The present review collects and examines the clinical results of minimally invasive techniques designed to treat this problem. Patients with MPEs were studied according to several criteria. In our study we observed the superiority of intraoperative talc poudrage, probably due to a more uniform distribution of talc particles over the pleural surface. Minimal pleurotomy with thoracic drainage and instillation of a talc suspension is also a safe and effective technique and should be employed when there are contraindications for the thoracoscopic minimally invasive procedure. On the basis of comparisons involving effectiveness, morbidity, and convenience, we recommend thoracoscopic insufflation of talc as a fine powder with pleural drainage as the procedure of choice.

  9. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad

    2014-09-13

    The paper is devoted to the analysis of greedy algorithms for the minimization of average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, some greedy algorithms produce results that are close to the optimal result for the minimization of average depth of decision trees.

  10. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    The paper is devoted to the analysis of greedy algorithms for the minimization of average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, some greedy algorithms produce results that are close to the optimal result for the minimization of average depth of decision trees.
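The greedy idea can be sketched on a toy decision table; the split heuristic below is one simple illustrative choice, not necessarily any of the specific greedy algorithms analyzed in the papers:

```python
# Toy decision table with many-valued decisions: each row pairs attribute
# values with the set of decisions acceptable for it.  The split heuristic
# is one illustrative greedy choice, not the papers' algorithms.
rows = [((0, 0), {1}), ((0, 1), {2}), ((1, 0), {1, 2})]

def split(rows, i):
    groups = {}
    for vals, dec in rows:
        groups.setdefault(vals[i], []).append((vals, dec))
    return groups

def build(rows, attrs):
    common = set.intersection(*(dec for _, dec in rows))
    if common:                          # a decision shared by every row
        return ("leaf", min(common))
    # greedy: prefer the attribute leaving the fewest rows in unresolved branches
    def badness(i):
        return sum(0 if set.intersection(*(d for _, d in g)) else len(g)
                   for g in split(rows, i).values())
    i = min(attrs, key=badness)
    rest = [a for a in attrs if a != i]
    return ("node", i, {v: build(g, rest) for v, g in split(rows, i).items()})

def avg_depth(tree, rows):
    def depth(t, vals):
        return 0 if t[0] == "leaf" else 1 + depth(t[2][vals[t[1]]], vals)
    return sum(depth(tree, vals) for vals, _ in rows) / len(rows)

tree = build(rows, [0, 1])      # splits on the second attribute
```

Splitting on the second attribute resolves every row at depth 1, whereas splitting on the first would force a deeper tree; a dynamic programming search over all split orders would confirm this choice is optimal here.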

  11. MEnDiGa: A Minimal Engine for Digital Games

    OpenAIRE

    Boaventura, Filipe M. B.; Sarinho, Victor T.

    2017-01-01

    Game engines generate high dependence of developed games on provided implementation resources. Feature modeling is a technique that captures the commonalities and variabilities that result from domain analysis, providing a basis for automated configuration of concrete products. This paper presents the Minimal Engine for Digital Games (MEnDiGa), a simplified collection of game assets based on game features capable of building small and casual games regardless of their implementation resources. It presen...

  12. Minimizing convex functions by continuous descent methods

    Directory of Open Access Journals (Sweden)

    Sergiu Aizicovici

    2010-01-01

    We study continuous descent methods for minimizing convex functions, defined on general Banach spaces, which are associated with an appropriate complete metric space of vector fields. We show that there exists an everywhere dense open set in this space of vector fields such that each of its elements generates strongly convergent trajectories.

  13. Accelerating GW calculations with optimal polarizability basis

    Energy Technology Data Exchange (ETDEWEB)

    Umari, P.; Stenuit, G. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); Qian, X.; Marzari, N. [Department of Materials Science and Engineering, MIT, Cambridge, MA (United States); Giacomazzi, L.; Baroni, S. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); SISSA - Scuola Internazionale Superiore di Studi Avanzati, Trieste (Italy)

    2011-03-15

    We present a method for accelerating GW quasi-particle (QP) calculations. This is achieved through the introduction of optimal basis sets for representing polarizability matrices. First the real-space products of Wannier-like orbitals are constructed and then optimal basis sets are obtained through singular value decomposition. Our method is validated by calculating the vertical ionization energies of the benzene molecule and the band structure of crystalline silicon. Its potentialities are illustrated by calculating the QP spectrum of a model structure of vitreous silica. Finally, we apply our method for studying the electronic structure properties of a model of quasi-stoichiometric amorphous silicon nitride and of its point defects. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
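The construction of a reduced basis by singular value decomposition can be sketched with a toy matrix of "product" vectors; the matrix sizes, random data, and truncation threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Toy reduced-basis construction by SVD truncation, in the spirit of the
# optimal polarizability basis; sizes, data, and threshold are illustrative.
rng = np.random.default_rng(1)
# columns play the role of sampled real-space products of localized orbitals,
# with rapidly decaying importance
P = rng.normal(size=(100, 40)) @ np.diag(np.geomspace(1.0, 1e-6, 40))

U, s, _ = np.linalg.svd(P, full_matrices=False)
keep = s / s[0] > 1e-3          # retain only the dominant directions
basis = U[:, keep]              # orthonormal reduced basis
```

The polarizability is then represented in this much smaller orthonormal basis, which is what makes the subsequent GW steps cheaper.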

  14. BWR NSSS design basis documentation

    International Nuclear Information System (INIS)

    Vij, R.S.; Bates, R.E.

    2004-01-01

    programs that GE has participated in and describes the different options and approaches that have been used by various utilities in their design basis programs. Some of these variations deal with the scope and depth of coverage of the information, while others are related to the process (how the work is done). Both of these topics can have a significant effect on the program cost. Some insight into these effects is provided. The final section of the paper presents a set of lessons learned and a recommendation for an optimum approach to a design basis information program. The lessons learned reflect the knowledge that GE has gained by participating in design basis programs with nineteen domestic and international BWR owner/operators. The optimum approach described in this paper is GE's attempt to define a set of information and a work process for a utility/GE NSSS Design Basis Information program that will maximize the cost effectiveness of the program for the utility. (author)

  15. On Time with Minimal Expected Cost!

    DEFF Research Database (Denmark)

    David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand

    2014-01-01

    (Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time-bounds as well as provide (near-)minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, which exhibit several orders of magnitude improvements w...

  16. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. We...

  17. Implementation of Waste Minimization at a complex R&D site

    International Nuclear Information System (INIS)

    Lang, R.E.; Thuot, J.R.; Devgun, J.S.

    1995-01-01

    Under the 1994 Waste Minimization/Pollution Prevention Crosscut Plan, the Department of Energy (DOE) has set a goal of 50% reduction in waste at its facilities by the end of 1999. Each DOE site is required to set site-specific goals to reduce generation of all types of waste including hazardous, radioactive, and mixed. To meet these goals, Argonne National Laboratory (ANL), Argonne, IL, has developed and implemented a comprehensive Pollution Prevention/Waste Minimization (PP/WMin) Program. The facilities and activities at the site vary from research into basic sciences and the nuclear fuel cycle to high energy physics and decontamination and decommissioning projects. As a multidisciplinary R&D facility and a multiactivity site, ANL generates waste streams that are varied, in physical form as well as in chemical constituents. This in turn presents a significant challenge to put a cohesive site-wide PP/WMin Program into action. In this paper, we will describe ANL's key activities and waste streams, the regulatory drivers for waste minimization, and the DOE goals in this area, and we will discuss ANL's strategy for waste minimization and its implementation across the site.

  18. Reduction of very large reaction mechanisms using methods based on simulation error minimization

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, Tibor; Turanyi, Tamas [Institute of Chemistry, Eoetvoes University (ELTE), P.O. Box 32, H-1518 Budapest (Hungary)

    2009-02-15

    A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms and the mechanism causing the smallest error (i.e. deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps other strongly connected sets of species are added, the size of the mechanism is gradually increased and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, which is called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. The reduced mechanism having the least CPU time requirement among the ones having almost the smallest error is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim is to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species and its simulation is 116 times faster than using the full mechanism. The SEM-CM was found to be more effective than the classic Connectivity Method, and also than the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)
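
    The greedy growth loop at the heart of SEM-CM can be sketched in a few lines. Everything below is an invented stand-in: the candidate "strongly connected" species sets, the CORE set, and the error model are toys, whereas a real implementation would run a full combustion simulation for every trial mechanism.

```python
def sem_cm(important, candidate_sets, simulation_error, tol):
    """Grow a reduced mechanism from the important species until the
    simulation error (vs. the full mechanism) drops below tol."""
    mechanism = set(important)
    remaining = [frozenset(s) for s in candidate_sets]
    while simulation_error(mechanism) > tol and remaining:
        # Try each candidate strongly connected set; keep the one whose
        # addition yields the smallest simulation error.
        best = min(remaining, key=lambda s: simulation_error(mechanism | s))
        mechanism |= best
        remaining.remove(best)
    return mechanism

# Hypothetical error model: error shrinks as more of the "true" core
# species are recovered; CORE and the candidate sets are invented.
CORE = {"CH4", "O2", "CO", "CO2", "H2O", "OH"}

def toy_error(mech):
    return 1.0 - len(mech & CORE) / len(CORE)

reduced = sem_cm({"CH4", "O2"}, [{"OH", "H2O"}, {"N2"}, {"CO", "CO2"}],
                 toy_error, 0.05)
```

    In this toy run the inert candidate set ({"N2"}) is never added, mirroring how SEM-CM leaves redundant species out of the reduced mechanism.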

  19. Temporal structure of consciousness and minimal self in schizophrenia

    Directory of Open Access Journals (Sweden)

    Brice eMartin

    2014-10-01

    The concept of the minimal self refers to the consciousness of oneself as an immediate subject of experience. According to recent studies, disturbances of the minimal self may be a core feature of schizophrenia. They are emphasized in classical psychiatry literature and in phenomenological work. Impaired minimal self experience may be defined as a distortion of one’s first-person experiential perspective as, for example, an ‘altered presence’ during which the sense of the experienced self (‘mineness’) is subtly affected, or an ‘altered sense of demarcation’, i.e. a difficulty discriminating the self from the non-self. Little is known, however, about the cognitive basis of these disturbances. In fact, recent work indicates that disorders of the self are not correlated with cognitive impairments commonly found in schizophrenia, such as working-memory and attention disorders. In addition, a major difficulty with exploring the minimal self experimentally lies in its definition as being non-self-reflexive, and distinct from the verbalized, explicit awareness of an ‘I’. In this paper we shall discuss the possibility that disturbances of the minimal self observed in patients with schizophrenia are related to alterations in time processing. We shall review the literature on schizophrenia and time processing that lends support to this possibility. In particular we shall discuss the involvement of temporal integration windows on different time scales (implicit time processing) as well as duration perception disturbances (explicit time processing) in disorders of the minimal self. We argue that a better understanding of the relationship between time and the minimal self, as well as of issues of embodiment, requires research that looks more specifically at implicit time processing. Some methodological issues will be discussed.

  20. What is the best density functional to describe water clusters: evaluation of widely used density functionals with various basis sets for (H2O)n (n = 1-10)

    Czech Academy of Sciences Publication Activity Database

    Li, F.; Wang, L.; Zhao, J.; Xie, J. R. H.; Riley, Kevin Eugene; Chen, Z.

    2011-01-01

    Roč. 130, 2/3 (2011), s. 341-352 ISSN 1432-881X Institutional research plan: CEZ:AV0Z40550506 Keywords: water cluster * density functional theory * MP2, CCSD(T) * basis set * relative energies Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.162, year: 2011

  1. On the uniqueness of minimizers for a class of variational problems with Polyconvex integrand

    KAUST Repository

    Awi, Romeo

    2017-02-05

    We prove existence and uniqueness of minimizers for a family of energy functionals that arises in Elasticity and involves polyconvex integrands over a certain subset of displacement maps. This work extends previous results by Awi and Gangbo to a larger class of integrands. First, we study these variational problems over displacements for which the determinant is positive. Second, we consider a limit case in which the functionals are degenerate. In that case, the set of admissible displacements reduces to that of incompressible displacements which are measure preserving maps. Finally, we establish that the minimizer over the set of incompressible maps may be obtained as a limit of minimizers corresponding to a sequence of minimization problems over general displacements provided we have enough regularity on the dual problems. We point out that these results defy the direct methods of the calculus of variations.

  2. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of one job is characterized by a variable function dependent on the position of the job in the sequence. Each agent wants to fulfil the objective of minimizing the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.
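
    The classical single-agent building block for this kind of objective is Lawler's rule for minimizing the maximum job cost on one machine: build the sequence backwards, always placing last the job that is cheapest to finish at the current makespan. The sketch below shows that rule only, with invented jobs and lateness costs; the paper's two-agent, position-dependent Pareto procedure is considerably richer.

```python
def lawler_min_max_cost(jobs):
    """jobs: list of (processing_time, cost_fn) pairs, cost_fn(C) being the
    cost of completing that job at time C. Returns a sequence minimizing
    the maximum job cost (Lawler's backward rule)."""
    remaining = list(range(len(jobs)))
    T = sum(p for p, _ in jobs)          # total makespan
    reversed_order = []
    while remaining:
        # The job cheapest to finish at time T goes last.
        j = min(remaining, key=lambda i: jobs[i][1](T))
        reversed_order.append(j)
        T -= jobs[j][0]
        remaining.remove(j)
    return reversed_order[::-1]

# Three invented jobs with lateness costs C - d for due dates 1, 4, 6.
jobs = [(2, lambda C: C - 1), (3, lambda C: C - 4), (1, lambda C: C - 6)]
seq = lawler_min_max_cost(jobs)
```

    For these jobs the rule recovers the earliest-due-date order with maximum lateness 1.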

  3. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable learning of more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions thus improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.
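
    The core set-level transformation can be sketched directly: each sample's per-gene expression vector is collapsed into one feature per gene set, here simply the mean expression of the set's member genes. Gene names, sets, and values below are illustrative, not taken from the paper; sets are assumed to contain at least one measured gene.

```python
def set_level_features(expression, gene_index, gene_sets):
    """expression: list of per-sample vectors; gene_index: gene -> column;
    gene_sets: list of gene collections defining the new, lower-dimensional
    features (mean expression over each set's member genes)."""
    features = []
    for sample in expression:
        row = []
        for gs in gene_sets:
            cols = [gene_index[g] for g in gs if g in gene_index]
            row.append(sum(sample[c] for c in cols) / len(cols))
        features.append(row)
    return features

# Hypothetical genes: one regulon-like set plus a singleton.
genes = {"lexA": 0, "recA": 1, "sulA": 2, "araC": 3}
sets_ = [["lexA", "recA", "sulA"], ["araC"]]
X = [[1.0, 2.0, 3.0, 10.0], [0.0, 0.0, 3.0, 4.0]]
Xs = set_level_features(X, genes, sets_)
```

    The two 4-dimensional samples become 2-dimensional set-level vectors, which a standard classifier would then consume in place of the raw gene-level data.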

  4. Typed Sets as a Basis for Object-Oriented Database Schemas

    NARCIS (Netherlands)

    Balsters, H.; de By, R.A.; Zicari, R.

    The object-oriented data model TM is a language that is based on the formal theory of FM, a typed language with object-oriented features such as attributes and methods in the presence of subtyping. The general (typed) set constructs of FM allow one to deal with (database) constraints in TM. The

  5. Design of cognitive engine for cognitive radio based on the rough sets and radial basis function neural network

    Science.gov (United States)

    Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli

    2013-03-01

    Cognitive radio (CR) is an intelligent wireless communication system which can dynamically adjust its parameters to improve system performance depending on environmental change and quality of service. The core technology for CR is the design of the cognitive engine, which introduces reasoning and learning methods from the field of artificial intelligence to achieve perception, adaptation and learning capability. Considering the dynamic wireless environment and demands, this paper proposes a design of a cognitive engine based on rough sets (RS) and a radial basis function neural network (RBF_NN). The method uses experienced knowledge and environment information processed by the RS module to train the RBF_NN, and then the learning model is used to reconfigure communication parameters to allocate resources rationally and improve system performance. After training the learning model, the performance is evaluated according to two benchmark functions. The simulation results demonstrate the effectiveness of the model, and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.
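
    A minimal sketch of the RBF_NN component: Gaussian basis functions at fixed centres, with weights obtained by solving the interpolation system so the network reproduces known (environment, decision) pairs. The centres, width, and target mapping are invented stand-ins; the paper's engine trains on RS-processed knowledge rather than on this toy data.

```python
import math

def rbf_features(x, centres, width):
    """Gaussian radial basis responses of a scalar input x."""
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centres]

def solve(A, b):
    """Naive Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

centres, width = [0.0, 0.5, 1.0], 0.4
targets = [c ** 2 for c in centres]          # invented mapping to reproduce
Phi = [rbf_features(c, centres, width) for c in centres]
weights = solve(Phi, targets)                # interpolation-style training

def predict(x):
    return sum(w * f for w, f in zip(weights, rbf_features(x, centres, width)))
```

    By construction the trained network reproduces the targets exactly at the centres and interpolates smoothly between them.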

  6. Algorithms for detecting and analysing autocatalytic sets.

    Science.gov (United States)

    Hordijk, Wim; Smith, Joshua I; Steel, Mike

    2015-01-01

    Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, and the application of a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. (i) The modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which, in some cases, also leads to a significant improvement in overall running time, (ii) our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average, (iii) "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. (i) Improvements in the overall running time when searching for autocatalytic sets can potentially be obtained by using a modified version of the algorithm, (ii) the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of...
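
    The "basic algorithm" referred to above alternates two prunes until a fixed point: discard reactions whose reactants are unreachable from the food set, and discard reactions with no reachable catalyst. A compact sketch, on an invented toy network:

```python
def closure(food, reactions):
    """Molecules producible from the food set using the given reactions
    (catalysis is ignored in this closure step)."""
    mols = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, products, _ in reactions:
            if set(reactants) <= mols and not set(products) <= mols:
                mols |= set(products)
                changed = True
    return mols

def max_raf(food, reactions):
    """Iteratively discard reactions with unreachable reactants or no
    reachable catalyst; the fixed point is the maximal such subset
    (possibly empty)."""
    rs = list(reactions)
    while True:
        mols = closure(food, rs)
        kept = [r for r in rs
                if set(r[0]) <= mols and any(c in mols for c in r[2])]
        if len(kept) == len(rs):
            return kept
        rs = kept

# Invented toy network: reactions are (reactants, products, catalysts).
food = {"a", "b"}
reactions = [
    (("a", "b"), ("ab",), ("ab",)),    # catalysed by its own product
    (("ab", "a"), ("aab",), ("zz",)),  # its catalyst is never producible
]
raf = max_raf(food, reactions)
```

    Here the second reaction is pruned because its catalyst can never be produced, leaving the self-catalysing first reaction as the surviving autocatalytic set.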

  7. Minimal Pairs of Polytopes and Their Number of Vertices

    African Journals Online (AJOL)

    Preferred Customer

    Using this operation we give a new algorithm to reduce and find a minimal pair of polytopes from the given ... Key words/phrases: Pairs of compact convex sets, Blaschke addition, Minkowski sum, minimality ... product K(X)×K(X) by K2(X).

  8. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system have to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters
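
    The inexact alternating strategy can be illustrated with Richardson-Lucy-style multiplicative updates, each of which is a Kullback-Leibler descent step in one block (object or blur) with the other held fixed. The 1-D circular-convolution setting, the signals, and the initialization below are invented for illustration; they are not the astronomical setup of the paper.

```python
import numpy as np

def conv(a, b):
    """Circular convolution via FFT (adequate for this toy)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def rl_update(estimate, fixed, data):
    """One Richardson-Lucy multiplicative step: descends the
    Kullback-Leibler divergence in one block, the other held fixed."""
    Ff = np.fft.fft(fixed)
    model = np.real(np.fft.ifft(np.fft.fft(estimate) * Ff))
    ratio = data / np.maximum(model, 1e-12)
    corr = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(Ff)))
    return estimate * corr / fixed.sum()

def kl(model, data):
    """Kullback-Leibler divergence for Poisson data."""
    return float(np.sum(model - data + data * np.log(data / model)))

x_true = np.array([1.0, 2.0, 5.0, 2.0, 1.0, 1.0, 1.0, 1.0])
h_true = np.array([0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2])
y = conv(x_true, h_true)                 # noise-free "observation"

x = y.copy()                             # crude initial guesses
h = np.full(8, 1.0 / 8)
kl_start = kl(conv(x, h), y)
for _ in range(30):                      # inexact alternating scheme
    h = rl_update(h, x, y)
    x = rl_update(x, h, y)
kl_end = kl(conv(x, h), y)
```

    Each half-step is non-increasing in the KL divergence, so the alternating loop drives the fit monotonically downward; nonnegativity of both blocks is preserved automatically by the multiplicative form.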

  9. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-09-29

    We used a decision tree as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from the set attached to a row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. When we compared with the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of the number of nodes (at most 18.92% difference), number of nonterminal nodes (at most 20.76% difference), and number of terminal nodes (at most 18.71% difference).

  10. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    We used a decision tree as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from the set attached to a row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. When we compared with the optimal result from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of the number of nodes (at most 18.92% difference), number of nonterminal nodes (at most 20.76% difference), and number of terminal nodes (at most 18.71% difference).
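
    The multi-label setting above can be made concrete with a toy greedy construction: a leaf becomes possible as soon as some decision is common to all remaining rows, otherwise we branch on an attribute. The branching heuristic here (most distinct values) is just one of many possible greedy strategies, not necessarily one of those benchmarked in the papers.

```python
def build_tree(rows, attrs):
    """rows: list of (attribute_values, decision_set) pairs. A leaf is
    created as soon as one decision is common to all remaining rows;
    otherwise we greedily branch on the attribute with most distinct
    values (a hypothetical heuristic for illustration)."""
    common = set.intersection(*(d for _, d in rows))
    if common:
        return ("leaf", min(common))          # any common decision works
    a = max(attrs, key=lambda i: len({v[i] for v, _ in rows}))
    branches = {}
    for val in sorted({v[a] for v, _ in rows}):
        sub = [(v, d) for v, d in rows if v[a] == val]
        branches[val] = build_tree(sub, [i for i in attrs if i != a])
    return ("node", a, branches)

def size(tree):
    """Total number of nodes, the quantity being minimized above."""
    return 1 if tree[0] == "leaf" else 1 + sum(size(t) for t in tree[2].values())

# Invented 2-attribute table with decision sets attached to each row.
rows = [((0, 0), {1}), ((0, 1), {1, 2}), ((1, 0), {2}), ((1, 1), {2, 3})]
tree = build_tree(rows, [0, 1])
```

    For this table one split on the first attribute already exposes a common decision in each branch, giving a 3-node tree.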

  11. CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization

    DEFF Research Database (Denmark)

    Borges, Pedro Manuel F. C.

    2000-01-01

    This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...

  12. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data at high and low tube voltages are converted into a set of basis functions which can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From far fewer views than are conventionally used, material-specific images are reconstructed by use of the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by use of contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can be realized via the proposed method by greatly reducing the number of projections
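
    The total-variation minimization idea can be reduced to a 1-D toy: minimize ||Ax - b||^2 + lam * TV(x) by subgradient descent, where TV penalizes jumps and thus favours the piecewise-constant images that make few-view reconstruction feasible. The operator A (an identity stand-in for the projection matrix), the data, and all parameters below are invented.

```python
import numpy as np

def tv_subgrad(x):
    """Subgradient of TV(x) = sum_i |x[i+1] - x[i]|."""
    d = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

def reconstruct(A, b, lam=0.1, step=0.01, iters=3000):
    """Minimize ||Ax - b||^2 + lam * TV(x) by plain subgradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (2 * A.T @ (A @ x - b) + lam * tv_subgrad(x))
    return x

A = np.eye(8)                  # stand-in for the real projection operator
b = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
x = reconstruct(A, b)
```

    With an identity operator this is just TV denoising: the recovered signal tracks the piecewise-constant data closely, with the plateau pulled slightly toward zero by the regularizer.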

  13. Comprehensive simulation-enhanced training curriculum for an advanced minimally invasive procedure: a randomized controlled trial.

    Science.gov (United States)

    Zevin, Boris; Dedy, Nicolas J; Bonrath, Esther M; Grantcharov, Teodor P

    2017-05-01

    There is no comprehensive simulation-enhanced training curriculum to address cognitive, psychomotor, and nontechnical skills for an advanced minimally invasive procedure. The aims were: (1) to develop and provide evidence of validity for a comprehensive simulation-enhanced training (SET) curriculum for an advanced minimally invasive procedure; (2) to demonstrate transfer of acquired psychomotor skills from a simulation laboratory to a live porcine model; and (3) to compare training outcomes of the SET curriculum group and a chief resident group. Setting: University. This prospective single-blinded, randomized, controlled trial allocated 20 intermediate-level surgery residents to receive either conventional training (control) or SET curriculum training (intervention). The SET curriculum consisted of cognitive, psychomotor, and nontechnical training modules. Psychomotor skills in a live anesthetized porcine model in the OR was the primary outcome. Knowledge of advanced minimally invasive and bariatric surgery and nontechnical skills in a simulated OR crisis scenario were the secondary outcomes. Residents in the SET curriculum group went on to perform a laparoscopic jejunojejunostomy in the OR. Cognitive, psychomotor, and nontechnical skills of the SET curriculum group were also compared to a group of 12 chief surgery residents. The SET curriculum group demonstrated superior psychomotor skills in the live porcine model (56 [47-62] versus 44 [38-53], P < .05), and demonstrated transfer of psychomotor skills, performing equivalently in the live porcine model and in the OR in a human patient (56 [47-62] versus 63 [61-68]; P = .21). The SET curriculum group demonstrated inferior knowledge (13 [11-15] versus 16 [14-16]; P < .05), equivalent psychomotor skill (63 [61-68] versus 68 [62-74]; P = .50), and superior nontechnical skills (41 [38-45] versus 34 [27-35], P < .01) compared with the chief resident group. Completion of the SET curriculum resulted in superior training outcomes compared with conventional surgery training. Implementation of the SET curriculum can standardize training

  14. Basic Minimal Dominating Functions of Quadratic Residue Cayley ...

    African Journals Online (AJOL)

    Domination arises in the study of numerous facility location problems, where the number of facilities is fixed and one attempts to minimize the number of facilities necessary so that everyone is serviced. This problem reduces to finding a minimum dominating set in the graph corresponding to this network. In this paper we study ...

  15. Identifying finite-time coherent sets from limited quantities of Lagrangian data

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Matthew O. [Program in Applied and Computational Mathematics, Princeton University, New Jersey 08544 (United States); Rypina, Irina I. [Department of Physical Oceanography, Woods Hole Oceanographic Institute, Massachusetts 02543 (United States); Rowley, Clarence W. [Department of Mechanical and Aerospace Engineering, Princeton University, New Jersey 08544 (United States)

    2015-08-15

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  16. Identifying finite-time coherent sets from limited quantities of Lagrangian data

    International Nuclear Information System (INIS)

    Williams, Matthew O.; Rypina, Irina I.; Rowley, Clarence W.

    2015-01-01

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea

  17. Identifying finite-time coherent sets from limited quantities of Lagrangian data.

    Science.gov (United States)

    Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W

    2015-08-01

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.
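
    A crude, data-driven illustration of the coherent-pair idea in the three records above: bin trajectory endpoints into boxes, form a box-to-box transition matrix, and split the boxes by the sign of the second singular vector. This follows the "conceptually related" Perron-Frobenius transfer-operator approach the abstracts mention, not the authors' basis-function optimization; the trajectory data are synthetic.

```python
import numpy as np

def coherent_pair(traj, n_boxes):
    """traj: (x0, x1) pairs in [0, 1). Bins a box-to-box transition matrix
    and splits the boxes by the sign of the second left singular vector,
    in the spirit of transfer-operator coherent-set methods."""
    P = np.zeros((n_boxes, n_boxes))
    for x0, x1 in traj:
        P[int(x0 * n_boxes), int(x1 * n_boxes)] += 1.0
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1e-12)
    U, _, _ = np.linalg.svd(P)
    return (U[:, 1] > 0).astype(int)     # sign pattern defines the pair

# Synthetic flow: particles mix within their half of [0, 1) and only
# rarely "leak" across the middle.
rng = np.random.default_rng(0)
traj = []
for _ in range(2000):
    x0 = rng.random()
    same_half = rng.random() < 0.95
    left = (x0 < 0.5) == same_half
    x1 = rng.random() / 2 + (0.0 if left else 0.5)
    traj.append((x0, x1))

labels = coherent_pair(traj, 4)          # expect boxes {0,1} vs. {2,3}
```

    The two halves of the domain are recovered as the minimally-leaking pair; with real drifter data the same role is played by the smooth basis-function expansion described in the abstracts.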

  18. Loss Minimization Sliding Mode Control of IPM Synchronous Motor Drives

    Directory of Open Access Journals (Sweden)

    Mehran Zamanifar

    2010-01-01

    In this paper, a nonlinear loss minimization control strategy for an interior permanent magnet synchronous motor (IPMSM) based on a newly developed sliding mode approach is presented. This control method enforces speed control of the IPMSM drive and simultaneously ensures minimization of the losses despite the uncertainties existing in the system, such as parameter variations, which have undesirable effects on controller performance except at near-nominal conditions. Simulation results are presented to show the effectiveness of the proposed controller.
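
    The switching principle behind sliding mode control can be shown on a deliberately crude first-order speed loop: drive the sliding variable (the speed error) to zero with a bang-bang law u = -k * sign(s). All plant numbers below are invented; a real IPMSM drive involves the full d-q current dynamics and the loss model discussed in the paper.

```python
def simulate(w_ref=100.0, steps=5000, dt=1e-3, k=50.0, load=5.0):
    """Toy first-order mechanical model under a sliding-mode speed
    controller. Returns the final speed; plant parameters are invented."""
    w = 0.0
    for _ in range(steps):
        s = w - w_ref                      # sliding surface: speed error
        u = -k if s > 0 else k             # bang-bang switching law
        w += dt * (u - load - 0.1 * w)     # inertia-normalized dynamics
    return w

final_speed = simulate()
```

    Once the trajectory reaches the surface s = 0 it chatters in a narrow band around the reference speed, which is the characteristic (and well-known) trade-off of pure switching control.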

  19. Expressing clinical data sets with openEHR archetypes: a solid basis for ubiquitous computing.

    Science.gov (United States)

    Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra

    2007-12-01

    The purpose of this paper is to analyse the feasibility and usefulness of expressing clinical data sets (CDSs) as openEHR archetypes. For this, we present an approach to transform CDSs into archetypes, and outline typical problems with CDSs and analyse whether some of these problems can be overcome by the use of archetypes. Literature review and analysis of a selection of existing Australian, German, other European and international CDSs; transfer of a CDS for Paediatric Oncology into openEHR archetypes; implementation of CDSs in application systems. To explore the feasibility of expressing CDSs as archetypes, an approach to transform existing CDSs into archetypes is presented in this paper. In the case of the Paediatric Oncology CDS (which consists of 260 data items), this led to the definition of 48 openEHR archetypes. To analyse the usefulness of expressing CDSs as archetypes, we identified nine problems with CDSs that currently remain unsolved without a common model underpinning the CDS. Typical problems include incompatible basic data types and overlapping and incompatible definitions of clinical content. A solution to most of these problems based on openEHR archetypes is motivated. With regard to integrity constraints, further research is required. While openEHR cannot overcome all barriers to Ubiquitous Computing, it can provide the common basis for ubiquitous presence of meaningful and computer-processable knowledge and information, which we believe is a basic requirement for Ubiquitous Computing. Expressing CDSs as openEHR archetypes is feasible and advantageous, as it fosters semantic interoperability, supports ubiquitous computing, and helps to develop archetypes that are arguably of better quality than the original CDSs.

  20. Mixed low-level waste minimization at Los Alamos

    International Nuclear Information System (INIS)

    Starke, T.P.

    1998-01-01

    During the first six months of University of California 98 Fiscal Year (July--December) Los Alamos National Laboratory has achieved a 57% reduction in mixed low-level waste generation. This has been accomplished through a systems approach that identified and minimized the largest MLLW streams. These included surface-contaminated lead, lead-lined gloveboxes, printed circuit boards, and activated fluorescent lamps. Specific waste minimization projects have been initiated to address these streams. In addition, several chemical processing equipment upgrades are being implemented. Use of contaminated lead is planned for several high energy proton beam stop applications and stainless steel encapsulated lead is being evaluated for other radiological control area applications. INEEL is assisting Los Alamos with a complete systems analysis of analytical chemistry derived mixed wastes at the CMR building and with a minimum life-cycle cost standard glovebox design. Funding for waste minimization upgrades has come from several sources: generator programs, waste management, the generator set-aside program, and Defense Programs funding to INEEL

  1. Absolutely minimal extensions of functions on metric spaces

    International Nuclear Information System (INIS)

    Milman, V A

    1999-01-01

    Extensions of a real-valued function from the boundary ∂X₀ of an open subset X₀ of a metric space (X,d) to X₀ are discussed. For the broad class of initial data coming under discussion (linearly bounded functions), locally Lipschitz extensions to X₀ that preserve localized moduli of continuity are constructed. In the set of these extensions an absolutely minimal extension is selected, which was considered before by Aronsson for Lipschitz initial functions in the case X₀ ⊂ ℝⁿ. An absolutely minimal extension can be regarded as an ∞-harmonic function, that is, a limit of p-harmonic functions as p → +∞. The proof of the existence of absolutely minimal extensions in a metric space with intrinsic metric is carried out by the Perron method. To this end, ∞-subharmonic, ∞-superharmonic, and ∞-harmonic functions on a metric space are defined and their properties are established
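
    As background for the p → +∞ limit mentioned in the abstract, the standard Euclidean PDE characterization (stated here from the general literature, not from this record) reads:

```latex
% p-harmonic functions solve the p-Laplace equation
\Delta_p u \;=\; \operatorname{div}\!\left(|\nabla u|^{p-2}\,\nabla u\right) \;=\; 0 .
% Formally letting p \to \infty singles out the infinity-Laplace equation
\Delta_\infty u \;=\; \sum_{i,j} u_{x_i}\, u_{x_j}\, u_{x_i x_j} \;=\; 0 ,
% whose viscosity solutions are exactly the absolutely minimizing
% Lipschitz extensions in the Euclidean case (Aronsson, Jensen).
```

    The metric-space setting of the paper replaces this PDE by comparison properties of ∞-sub- and ∞-superharmonic functions, which is what makes the Perron method applicable.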

  2. Mixed low-level waste minimization at Los Alamos

    Energy Technology Data Exchange (ETDEWEB)

    Starke, T.P.

    1998-12-01

    During the first six months of University of California 98 Fiscal Year (July--December) Los Alamos National Laboratory has achieved a 57% reduction in mixed low-level waste generation. This has been accomplished through a systems approach that identified and minimized the largest MLLW streams. These included surface-contaminated lead, lead-lined gloveboxes, printed circuit boards, and activated fluorescent lamps. Specific waste minimization projects have been initiated to address these streams. In addition, several chemical processing equipment upgrades are being implemented. Use of contaminated lead is planned for several high energy proton beam stop applications and stainless steel encapsulated lead is being evaluated for other radiological control area applications. INEEL is assisting Los Alamos with a complete systems analysis of analytical chemistry derived mixed wastes at the CMR building and with a minimum life-cycle cost standard glovebox design. Funding for waste minimization upgrades has come from several sources: generator programs, waste management, the generator set-aside program, and Defense Programs funding to INEEL.

  3. Probing community nurses' professional basis

    DEFF Research Database (Denmark)

    Schaarup, Clara; Pape-Haugaard, Louise; Jensen, Merete Hartun

    2017-01-01

    Complicated and long-lasting wound care of diabetic foot ulcers is moving from specialists in wound care at hospitals towards community nurses without specialist diabetic foot ulcer wound care knowledge. The aim of the study is to elucidate community nurses' professional basis for treating...... diabetic foot ulcers. A situational case study design was adopted in an archetypical Danish community nursing setting. Experience is a crucial component in the community nurses' professional basis for treating diabetic foot ulcers. Peer-to-peer training is the prevailing way to learn about diabetic foot...... ulcers; however, this contributes to the risk of low evidence-based practice. Finally, a frequent behaviour among the community nurses is to consult colleagues before treating the diabetic foot ulcers....

  4. The basis property of eigenfunctions in the problem of a nonhomogeneous damped string

    Directory of Open Access Journals (Sweden)

    Łukasz Rzepnicki

    2017-01-01

    Full Text Available The equation which describes the small vibrations of a nonhomogeneous damped string can be rewritten as an abstract Cauchy problem for the densely defined closed operator \(iA\). We prove that the set of root vectors of the operator \(A\) forms a basis of subspaces in a certain Hilbert space \(H\). Furthermore, we give the rate of convergence for the decomposition with respect to this basis. In the second main result we show that, with additional assumptions, the set of root vectors of the operator \(A\) is a Riesz basis for \(H\).

  5. Electric dipole moment constraints on minimal electroweak baryogenesis

    CERN Document Server

    Huber, S J; Ritz, A; Huber, Stephan J.; Pospelov, Maxim; Ritz, Adam

    2007-01-01

    We study the simplest generic extension of the Standard Model which allows for conventional electroweak baryogenesis, through the addition of dimension six operators in the Higgs sector. At least one such operator is required to be CP-odd, and we study the constraints on such a minimal setup, and related scenarios with minimal flavor violation, from the null results of searches for electric dipole moments (EDMs), utilizing the full set of two-loop contributions to the EDMs. The results indicate that the current bounds are stringent, particularly that of the recently updated neutron EDM, but fall short of ruling out these scenarios. The next generation of EDM experiments should be sufficiently sensitive to provide a conclusive test.

  6. Rigorous force field optimization principles based on statistical distance minimization

    Energy Technology Data Exchange (ETDEWEB)

    Vlcek, Lukas, E-mail: vlcekl1@ornl.gov [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States); Joint Institute for Computational Sciences, University of Tennessee, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6173 (United States); Chialvo, Ariel A. [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States)

    2015-10-14

    We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. We exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of the approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.

  7. International Spinal Cord Injury Urinary Tract Infection Basic Data Set

    DEFF Research Database (Denmark)

    Goetz, L L; Cardenas, D D; Kennelly, M

    2013-01-01

    To develop an International Spinal Cord Injury (SCI) Urinary Tract Infection (UTI) Basic Data Set presenting a standardized format for the collection and reporting of a minimal amount of information on UTIs in daily practice or research.

  8. Determination of the minimal fusion peptide of bovine leukemia virus gp30

    International Nuclear Information System (INIS)

    Lorin, Aurelien; Lins, Laurence; Stroobant, Vincent; Brasseur, Robert; Charloteaux, Benoit

    2007-01-01

    In this study, we determined the minimal N-terminal fusion peptide of the gp30 of the bovine leukemia virus on the basis of the tilted peptide theory. We first used molecular modelling to predict that the gp30 minimal fusion peptide corresponds to the first 15 residues. Liposome lipid-mixing and leakage assays confirmed that the 15-residue peptide induces fusion in vitro and that it is the shortest peptide inducing optimal fusion, since longer peptides destabilize liposomes to the same extent whereas shorter ones do not. The 15-residue peptide can thus be considered the minimal fusion peptide. The effect of mutations reported in the literature was also investigated. Interestingly, mutations related to glycoproteins unable to induce syncytia in cell-cell fusion assays correspond to peptides predicted as non-tilted. The relationship between obliquity and fusogenicity was also confirmed in vitro for one tilted and one non-tilted mutant peptide.

  9. Authorization basis requirements comparison report

    Energy Technology Data Exchange (ETDEWEB)

    Brantley, W.M.

    1997-08-18

    The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.

  10. Authorization basis requirements comparison report

    International Nuclear Information System (INIS)

    Brantley, W.M.

    1997-01-01

    The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.

  11. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken, and the path the feature region image takes is saved as its descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
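    The tree-descent step described above can be sketched as follows. The tree layout and toy bit patterns are hypothetical, not the authors' implementation; the point is only that the descriptor is the sequence of Hamming-distance branch choices.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two bit-packed binary images."""
    return bin(a ^ b).count("1")

def tree_basis_descriptor(patch_bits, tree):
    """Descend the vocabulary tree; the branch choices form the descriptor.

    tree: dict with 'left_basis'/'right_basis' (bit-packed basis images)
    and 'left'/'right' subtrees (None at a leaf).
    """
    path = []
    node = tree
    while node is not None:
        d_left = hamming(patch_bits, node["left_basis"])
        d_right = hamming(patch_bits, node["right_basis"])
        if d_left <= d_right:
            path.append(0)
            node = node["left"]
        else:
            path.append(1)
            node = node["right"]
    return path

# Tiny one-level demo tree with two leaf children.
demo_tree = {"left_basis": 0b1100, "right_basis": 0b0011,
             "left": None, "right": None}
descriptor = tree_basis_descriptor(0b1110, demo_tree)  # patch closer to left basis
```

    Because only XOR and popcount are needed, the same descent maps directly onto hardware without floating-point units, which is the property the abstract emphasizes.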

  12. Assessment of WWER fuel condition in design basis accident

    International Nuclear Information System (INIS)

    Bibilashvili, Yu.; Sokolov, N.; Andreeva-Andrievskaya, L.; Vlasov, Yu.; Nechaeva, O.; Salatov, A.

    1994-01-01

    The fuel behaviour in design basis accidents is assessed by means of the verified code RAPTA-5. The code uses a set of high temperature physico-chemical properties of the fuel components as determined for commercially produced materials, fuel rod simulators and fuel rod bundles. The WWER fuel criteria available in Russia for design basis accidents do not generally differ from the similar criteria adopted for PWR's. 12 figs., 11 refs

  13. Multiple-scattering theory with a truncated basis set

    International Nuclear Information System (INIS)

    Zhang, X.; Butler, W.H.

    1992-01-01

    Multiple-scattering theory (MST) is an extremely efficient technique for calculating the electronic structure of an assembly of atoms. The wave function in MST is expanded in terms of spherical waves centered on each atom and indexed by their orbital and azimuthal quantum numbers, l and m. The secular equation which determines the characteristic energies can be truncated at a value of the orbital angular momentum l max , for which the higher angular momentum phase shifts, δ l (l>l max ), are sufficiently small. Generally, the wave-function coefficients which are calculated from the secular equation are also truncated at l max . Here we point out that this truncation of the wave function is not necessary and is in fact inconsistent with the truncation of the secular equation. A consistent procedure is described in which the states with higher orbital angular momenta are retained but with their phase shifts set to zero. We show that this treatment gives smooth, continuous, and correctly normalized wave functions and that the total charge density calculated from the corresponding Green function agrees with the Lloyd formula result. We also show that this augmented wave function can be written as a linear combination of Andersen's muffin-tin orbitals in the case of muffin-tin potentials, and can be used to generalize the muffin-tin orbital idea to full-cell potentals

  14. Doubly stochastic radial basis function methods

    Science.gov (United States)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of using a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n²) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant-shape-parameter formulation (in terms of accuracy at comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
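    The constant-shape-parameter baseline that DSRBF improves upon can be sketched with a brute-force LOOCV search over a small grid. This is illustrative code assuming Gaussian RBFs; the stochastic treatment of the shape parameter is not reproduced here.

```python
import numpy as np

def rbf_interpolate(x_train, y_train, x_eval, eps):
    """Gaussian RBF interpolation with constant shape parameter eps."""
    A = np.exp(-(eps * (x_train[:, None] - x_train[None, :])) ** 2)
    coef = np.linalg.solve(A, y_train)
    B = np.exp(-(eps * (x_eval[:, None] - x_train[None, :])) ** 2)
    return B @ coef

def loocv_error(x, y, eps):
    """Mean squared leave-one-out prediction error for shape parameter eps."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        pred = rbf_interpolate(x[mask], y[mask], x[i:i + 1], eps)
        errs.append((pred[0] - y[i]) ** 2)
    return float(np.mean(errs))

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x)
# Pick the grid value with the smallest LOOCV error.
best_eps = min(np.linspace(0.5, 5.0, 10), key=lambda e: loocv_error(x, y, e))
```

    The O(n²) setup cost quoted in the abstract refers to the proposed method's overhead; the brute-force search above is deliberately naive and only fixes ideas.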

  15. Non-Asymptotic Confidence Sets for Circular Means

    Directory of Open Access Journals (Sweden)

    Thomas Hotz

    2016-10-01

    Full Text Available The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
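    The circular mean defined above, the minimizer of the average squared Euclidean (chordal) distance to the data, coincides with the direction of the resultant vector when that vector is nonzero. A short sketch checks this against a brute-force grid search:

```python
import math

def circular_mean(angles):
    """Direction of the resultant vector: minimizes mean squared
    Euclidean distance from a point on the unit circle to the data."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

def mean_sq_dist(theta, angles):
    """Average squared Euclidean (chordal) distance to the data points."""
    return sum((math.cos(theta) - math.cos(a)) ** 2 +
               (math.sin(theta) - math.sin(a)) ** 2
               for a in angles) / len(angles)

angles = [0.1, 0.2, 0.9]
mu = circular_mean(angles)
# Brute-force check: no grid angle does better than the closed form.
grid = [2 * math.pi * k / 1000 - math.pi for k in range(1000)]
best_on_grid = min(mean_sq_dist(t, angles) for t in grid)
```

    The confidence sets in the record are built around this estimator using Hoeffding-type concentration bounds, which the sketch does not attempt to reproduce.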

  16. The Biological Basis of Learning and Individuality.

    Science.gov (United States)

    Kandel, Eric R.; Hawkins, Robert D.

    1992-01-01

    Describes the biological basis of learning and individuality. Presents an overview of recent discoveries that suggest learning engages a simple set of rules that modify the strength of connection between neurons in the brain. The changes are cited as playing an important role in making each individual unique. (MCO)

  17. Assessing and minimizing contamination in time of flight based validation data

    Science.gov (United States)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
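    A toy version of the interval-selection idea, with hypothetical Gaussian travel-time distributions standing in for the paper's statistical models: choose a time-of-flight window that keeps a target fraction of neutrons and estimate the residual gamma contamination inside it.

```python
from statistics import NormalDist

# Hypothetical travel-time models (ns); the paper fits these from data.
gammas = NormalDist(mu=3.0, sigma=0.5)     # fast photons
neutrons = NormalDist(mu=30.0, sigma=5.0)  # slower neutrons

def neutron_window(coverage=0.95):
    """Symmetric window around the neutron peak holding `coverage` of
    neutrons; returns the window and the gamma fraction falling inside."""
    half = neutrons.inv_cdf(0.5 + coverage / 2) - neutrons.mean
    lo, hi = neutrons.mean - half, neutrons.mean + half
    contamination = gammas.cdf(hi) - gammas.cdf(lo)
    return (lo, hi), contamination

(lo, hi), contam = neutron_window()
```

    Widening the window raises neutron coverage but admits more nuisance particles; the optimal intervals in the paper balance exactly this trade-off.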

  18. Assessment of WWER fuel condition in design basis accident

    Energy Technology Data Exchange (ETDEWEB)

    Bibilashvili, Yu; Sokolov, N; Andreeva-Andrievskaya, L; Vlasov, Yu; Nechaeva, O; Salatov, A [Vsesoyuznyj Nauchno-Issledovatel'skij Inst. Neorganicheskikh Materialov, Moscow (Russian Federation)]

    1994-12-31

    The fuel behaviour in design basis accidents is assessed by means of the verified code RAPTA-5. The code uses a set of high temperature physico-chemical properties of the fuel components as determined for commercially produced materials, fuel rod simulators and fuel rod bundles. The WWER fuel criteria available in Russia for design basis accidents do not generally differ from the similar criteria adopted for PWR's. 12 figs., 11 refs.

  19. Calculation of wave-functions with frozen orbitals in mixed quantum mechanics/molecular mechanics methods. II. Application of the local basis equation.

    Science.gov (United States)

    Ferenczy, György G

    2013-04-05

    The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable to derive local basis nonorthogonal orbitals that minimize the energy of the system and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate to be used in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent with those of a related approach [G. G. Ferenczy previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small size quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, the deformation of the wave-function near the boundary is observed without bond charges and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.

  20. New Technique for Improving Performance of LDPC Codes in the Presence of Trapping Sets

    Directory of Open Access Journals (Sweden)

    Mohamed Adnan Landolsi

    2008-06-01

    Full Text Available Trapping sets are considered the primary factor for degrading the performance of low-density parity-check (LDPC codes in the error-floor region. The effect of trapping sets on the performance of an LDPC code becomes worse as the code size decreases. One approach to tackle this problem is to minimize trapping sets during LDPC code design. However, while trapping sets can be reduced, their complete elimination is infeasible due to the presence of cycles in the underlying LDPC code bipartite graph. In this work, we introduce a new technique based on trapping sets neutralization to minimize the negative effect of trapping sets under belief propagation (BP decoding. Simulation results for random, progressive edge growth (PEG and MacKay LDPC codes demonstrate the effectiveness of the proposed technique. The hardware cost of the proposed technique is also shown to be minimal.

  1. Symmetry Adapted Basis Sets

    DEFF Research Database (Denmark)

    Avery, John Scales; Rettrup, Sten; Avery, James Emil

    automatically with computer techniques. The method has a wide range of applicability, and can be used to solve difficult eigenvalue problems in a number of fields. The book is of special interest to quantum theorists, computer scientists, computational chemists and applied mathematicians....

  2. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
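    The domination property used above follows directly from maximality: a node outside a maximal independent set must have a neighbor inside it, or it could have been added. A greedy sketch (not the authors' construction scheme) illustrates this on a toy graph:

```python
def maximal_independent_set(adj):
    """Greedy MIS over a fixed node order; adj maps node -> set of neighbors."""
    mis = set()
    for v in adj:
        if not (adj[v] & mis):   # no neighbor of v has been chosen yet
            mis.add(v)
    return mis

# Star graph: hub 0 with leaves 1-3; the hub alone dominates everything.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
mis = maximal_independent_set(adj)
# Maximality implies domination: every node is in the MIS or adjacent to it.
dominated = all(v in mis or adj[v] & mis for v in adj)
```

    On scale-free networks the node ordering matters for the resulting set size, which is part of what the study above evaluates; the fixed-order greedy here only demonstrates the MIS-implies-dominating argument.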

  3. Migration and the Wage-Settings Curve

    DEFF Research Database (Denmark)

    Brücker, Herbert; Jahn, Elke

    Germany on basis of a wage-setting curve. The wage-setting curve relies on the assumption that wages respond to a change in the unemployment rate, albeit imperfectly. This allows one to derive the wage and employment effects of migration simultaneously in a general equilibrium framework. Using

  4. Abstract sets and finite ordinals an introduction to the study of set theory

    CERN Document Server

    Keene, G B

    2007-01-01

    This text unites the logical and philosophical aspects of set theory in a manner intelligible both to mathematicians without training in formal logic and to logicians without a mathematical background. It combines an elementary level of treatment with the highest possible degree of logical rigor and precision.Starting with an explanation of all the basic logical terms and related operations, the text progresses through a stage-by-stage elaboration that proves the fundamental theorems of finite sets. It focuses on the Bernays theory of finite classes and finite sets, exploring the system's basi

  5. Set optimization and applications the state of the art : from set relations to set-valued risk measures

    CERN Document Server

    Heyde, Frank; Löhne, Andreas; Rudloff, Birgit; Schrage, Carola

    2015-01-01

    This volume presents five surveys with extensive bibliographies and six original contributions on set optimization and its applications in mathematical finance and game theory. The topics range from more conventional approaches that look for minimal/maximal elements with respect to vector orders or set relations, to the new complete-lattice approach that comprises a coherent solution concept for set optimization problems, along with existence results, duality theorems, optimality conditions, variational inequalities and theoretical foundations for algorithms. Modern approaches to scalarization methods can be found as well as a fundamental contribution to conditional analysis. The theory is tailor-made for financial applications, in particular risk evaluation and [super-]hedging for market models with transaction costs, but it also provides a refreshing new perspective on vector optimization. There is no comparable volume on the market, making the book an invaluable resource for researchers working in vector o...

  6. Activity recognition from minimal distinguishing subsequence mining

    Science.gov (United States)

    Iqbal, Mohammad; Pao, Hsing-Kuo

    2017-08-01

    Human activity recognition is one of the most important research topics in the era of Internet of Things. To separate different activities given sensory data, we utilize a Minimal Distinguishing Subsequence (MDS) mining approach to efficiently find distinguishing patterns among different activities. We first transform the sensory data into a series of sensor triggering events and operate the MDS mining procedure afterwards. The gap constraints are also considered in the MDS mining. Given the multi-class nature of most activity recognition tasks, we modify the MDS mining approach from a binary case to a multi-class one to fit the need for multiple activity recognition. We also study how to select the best parameter set including the minimal and the maximal support thresholds in finding the MDSs for effective activity recognition. Overall, the prediction accuracy is 86.59% on the van Kasteren dataset which consists of four different activities for recognition.
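    A toy sketch of the MDS idea over event strings follows. The cited method additionally handles gap constraints and a multi-class extension; the support thresholds, event alphabet, and minimality check here are illustrative only.

```python
from itertools import product

def is_subseq(pat, seq):
    """True if pat occurs in seq as a (possibly gapped) subsequence."""
    it = iter(seq)
    return all(ch in it for ch in pat)

def support(pat, seqs):
    return sum(is_subseq(pat, s) for s in seqs) / len(seqs)

def mds(pos, neg, min_sup=1.0, max_sup=0.0, max_len=3):
    """Patterns frequent in pos, rare in neg, with no smaller such
    pattern contained in them (the 'minimal distinguishing' condition)."""
    alphabet = sorted({c for s in pos + neg for c in s})
    found = []
    for n in range(1, max_len + 1):
        for pat in product(alphabet, repeat=n):
            p = "".join(pat)
            if any(is_subseq(f, p) for f in found):
                continue  # a smaller distinguishing pattern is contained
            if support(p, pos) >= min_sup and support(p, neg) <= max_sup:
                found.append(p)
    return found

pos = ["abc", "axbyc"]   # event sequences from the target activity
neg = ["cba", "bca"]     # event sequences from another activity
patterns = mds(pos, neg)
```

    Exhaustive enumeration is exponential in pattern length; practical MDS miners prune the search with the gap and support constraints instead.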

  7. Evaluation of a School-Based Teen Obesity Prevention Minimal Intervention

    Science.gov (United States)

    Abood, Doris A.; Black, David R.; Coster, Daniel C.

    2008-01-01

    Objective: A school-based nutrition education minimal intervention (MI) was evaluated. Design: The design was experimental, with random assignment at the school level. Setting: Seven schools were randomly assigned as experimental, and 7 as delayed-treatment. Participants: The experimental group included 551 teens, and the delayed treatment group…

  8. Development of a highly maneuverable unmanned underwater vehicle on the basis of quad-copter dynamics

    Science.gov (United States)

    Amin, Osman Md; Karim, Md. Arshadul; Saad, Abdullah His

    2017-12-01

    At present, research on unmanned underwater vehicles (UUVs) has become a significant and familiar topic for researchers from various engineering fields. UUVs are of two main types: AUVs (Autonomous Underwater Vehicles) and ROVs (Remotely Operated Vehicles). There exists a significant number of published research papers on UUVs, but few researchers emphasize ease of maneuvering and control. Maneuvering is important for an underwater vehicle in avoiding obstacles, installing underwater piping systems, searching undersea resources, underwater mine disposal operations, oceanographic surveys, etc. A team from the Dept. of Naval Architecture & Marine Engineering of MIST has undertaken a project to design a highly maneuverable unmanned underwater vehicle on the basis of quad-copter dynamics. The main objective of the research is to develop a control system for a UUV that can maneuver the vehicle in six DOF (degrees of freedom) with great ease. For this purpose we focus not only on controllability but also on designing an efficient hull with minimal drag force and an optimized propeller using CFD techniques. Motors were selected on the basis of the simulated thrust generated by propellers in the ANSYS Fluent software module. Settings of the control parameters for carrying out different types of maneuvering, such as hovering, spiral motion, one-point rotation about the centroid, gliding, rolling, drifting, and zigzag motion, are briefly explained at the end.

  9. From maximal to minimal supersymmetry in string loop amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Marcus; Buchberger, Igor [Department of Physics, Karlstad University,651 88 Karlstad (Sweden); Schlotterer, Oliver [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut,14476 Potsdam (Germany)

    2017-04-28

    We calculate one-loop string amplitudes of open and closed strings with N=1,2,4 supersymmetry in four and six dimensions, by compactification on Calabi-Yau and K3 orbifolds. In particular, we develop a method to combine contributions from all spin structures for arbitrary number of legs at minimal supersymmetry. Each amplitude is cast into a compact form by reorganizing the kinematic building blocks and casting the worldsheet integrals in a basis. Infrared regularization plays an important role to exhibit the expected factorization limits. We comment on implications for the one-loop string effective action.

  10. Symmetry-Adapted Ro-vibrational Basis Functions for Variational Nuclear Motion Calculations: TROVE Approach.

    Science.gov (United States)

    Yurchenko, Sergei N; Yachmenev, Andrey; Ovsyannikov, Roman I

    2017-09-12

    We present a general, numerically motivated approach to the construction of symmetry-adapted basis functions for solving ro-vibrational Schrödinger equations. The approach is based on the property of the Hamiltonian operator to commute with the complete set of symmetry operators and, hence, to reflect the symmetry of the system. The symmetry-adapted ro-vibrational basis set is constructed numerically by solving a set of reduced vibrational eigenvalue problems. In order to assign the irreducible representations associated with these eigenfunctions, their symmetry properties are probed on a grid of molecular geometries with the corresponding symmetry operations. The transformation matrices are reconstructed by solving overdetermined systems of linear equations related to the transformation properties of the corresponding wave functions on the grid. Our method is implemented in the variational approach TROVE and has been successfully applied to many problems covering the most important molecular symmetry groups. Several examples are used to illustrate the procedure, which can be easily applied to different types of coordinates, basis sets, and molecular systems.

  11. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yunlong; Wang, Aiping; Guo, Lei; Wang, Hong

    2017-07-09

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
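    The Parzen-window ingredient can be sketched as a kernel estimate of the Rényi quadratic entropy of the errors; minimizing this quantity over controller parameters is the core idea. This is illustrative only and the controller recursion itself is not reproduced.

```python
import math

def information_potential(errors, sigma=1.0):
    """V(e) = (1/N^2) sum_ij G(e_i - e_j; 2*sigma^2): the Parzen-window
    estimate of the integral of the squared error density."""
    n = len(errors)
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma * math.sqrt(2.0))
    total = 0.0
    for ei in errors:
        for ej in errors:
            total += norm * math.exp(-((ei - ej) ** 2) / (4.0 * sigma ** 2))
    return total / n ** 2

def renyi_quadratic_entropy(errors, sigma=1.0):
    """H2(e) = -log V(e); smaller when the errors are concentrated."""
    return -math.log(information_potential(errors, sigma))

tight = [0.0, 0.1, -0.1]    # concentrated tracking errors
spread = [0.0, 2.0, -2.0]   # dispersed tracking errors
h_tight = renyi_quadratic_entropy(tight)
h_spread = renyi_quadratic_entropy(spread)
```

    Unlike the mean-square-error criterion, this functional penalizes the full shape of the error distribution, which is why it is preferred for non-Gaussian noise.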

  12. The Effects of Minimal Group Membership on Young Preschoolers’ Social Preferences, Estimates of Similarity, and Behavioral Attribution

    Directory of Open Access Journals (Sweden)

    Nadja Richter

    2016-07-01

    Full Text Available We investigate young children's sensitivity to minimal group membership. Previous research has suggested that children do not show sensitivity to minimal cues to group membership until the age of five to six, contributing to claims that this is an important transition in the development of intergroup cognition and behavior. In this study, we investigated whether even younger children are sensitive to minimal cues to group membership. Random assignment to one of two color groups created a temporary, visually salient minimal group membership in the 3- and 4-year-old study participants. Using explicit measures, we tested whether children preferred minimal group members when making social judgments. We find that, in the absence of any knowledge regarding the two groups, children expressed greater liking for ingroup than outgroup targets. Moreover, children estimated that ingroup members would share their preferences. Our findings demonstrate that from early in development, humans assess unknown others on the basis of minimal cues to social similarity and that the perception of group boundaries potentially underlies social assortment in strangers.

  13. Smartphone-assisted minimally invasive neurosurgery.

    Science.gov (United States)

    Mandel, Mauricio; Petito, Carlo Emanuel; Tutihashi, Rafael; Paiva, Wellingson; Abramovicz Mandel, Suzana; Gomes Pinto, Fernando Campos; Ferreira de Andrade, Almir; Teixeira, Manoel Jacobsen; Figueiredo, Eberval Gadelha

    2018-03-13

    OBJECTIVE Advances in video and fiber optics since the 1990s have led to the development of several commercially available high-definition neuroendoscopes. This technological improvement, however, has been surpassed by the smartphone revolution. With the increasing integration of smartphone technology into medical care, the introduction of these high-quality computerized communication devices with built-in digital cameras offers new possibilities in neuroendoscopy. The aim of this study was to investigate the usefulness of smartphone-endoscope integration in performing different types of minimally invasive neurosurgery. METHODS The authors present a new surgical tool that integrates a smartphone with an endoscope by use of a specially designed adapter, thus eliminating the need for the video system customarily used for endoscopy. The authors used this novel combined system to perform minimally invasive surgery on patients with various neuropathological disorders, including cavernomas, cerebral aneurysms, hydrocephalus, subdural hematomas, contusional hematomas, and spontaneous intracerebral hematomas. RESULTS The new endoscopic system featuring smartphone-endoscope integration was used by the authors in the minimally invasive surgical treatment of 42 patients. All procedures were successfully performed, and no complications related to the use of the new method were observed. The quality of the images obtained with the smartphone was high enough to provide adequate information to the neurosurgeons, as smartphone cameras can record images in high definition or 4K resolution. Moreover, because the smartphone screen moves along with the endoscope, surgical mobility was enhanced with the use of this method, facilitating more intuitive use. In fact, this increased mobility was identified as the greatest benefit of the use of the smartphone-endoscope system compared with the use of the neuroendoscope with the standard video set. CONCLUSIONS Minimally invasive approaches

  14. Experience with the EPA manual for waste minimization opportunity assessments

    International Nuclear Information System (INIS)

    Bridges, J.S.

    1990-01-01

    The EPA Waste Minimization Opportunity Assessment Manual (EPA/625/788/003) was published to assist those responsible for managing waste minimization activities at the waste-generating facility and at corporate levels. The Manual sets forth a procedure that incorporates technical and managerial principles and motivates people to develop and implement pollution prevention concepts and ideas. Environmental management has increasingly become a cooperative endeavor in which, whether in government, industry, or other forms of enterprise, the effectiveness with which people work together toward the attainment of a clean environment is largely determined by the ability of those who hold managerial positions. This paper offers a description of the EPA Waste Minimization Opportunity Assessment Manual procedure, which supports the waste minimization assessment as a systematic, planned procedure with the objective of identifying ways to reduce or eliminate waste generation. The Manual is a management tool that blends science and management principles. The practice of managing waste minimization/pollution prevention makes use of the underlying organized science and engineering knowledge and applies it in the light of realities to gain a desired, practical result. The early stages of EPA's Pollution Prevention Research Program centered on the development of the Manual and its use at a number of facilities within the private and public sectors. This paper identifies a number of case studies and waste minimization opportunity assessment reports that demonstrate the value of using the Manual's approach. Several industry-specific waste minimization assessment manuals have resulted from the Manual's generic approach to waste minimization. Some modifications to the Manual's generic approach were made when the waste stream was other than industrial hazardous waste

  15. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework I: Total energy calculation

    International Nuclear Information System (INIS)

    Lin Lin; Lu Jianfeng; Ying Lexing; Weinan, E

    2012-01-01

    Kohn–Sham density functional theory is one of the most widely used electronic structure theories. In the pseudopotential framework, uniform discretization of the Kohn–Sham Hamiltonian generally results in a large number of basis functions per atom in order to resolve the rapid oscillations of the Kohn–Sham orbitals around the nuclei. Previous attempts to reduce the number of basis functions per atom include the use of atomic orbitals and similar objects, but atomic orbitals generally require fine tuning in order to reach high accuracy. We present a novel discretization scheme that adaptively and systematically builds the rapid oscillations of the Kohn–Sham orbitals around the nuclei, as well as environmental effects, into the basis functions. The resulting basis functions are localized in real space and are discontinuous in the global domain. The continuous Kohn–Sham orbitals and the electron density are evaluated from the discontinuous basis functions using the discontinuous Galerkin (DG) framework. Our method is implemented in parallel and the current implementation is able to handle systems with at least thousands of atoms. Numerical examples indicate that our method can reach very high accuracy (less than 1 meV) with a very small number (4–40) of basis functions per atom.
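The abstract above gives no implementation details, but the Rayleigh-Ritz idea behind element-local, discontinuous basis functions can be illustrated on a toy 1-D Hamiltonian. Everything below (grid size, potential, two-element split, the number k of functions per element) is a made-up sketch for intuition, not the authors' DG scheme:

```python
import numpy as np

# Toy 1-D "Hamiltonian": finite-difference Laplacian plus a smooth potential.
n = 200
h = 1.0 / (n + 1)
diag = 2.0 / h**2 + np.sin(np.linspace(0.0, np.pi, n)) ** 2
H = (np.diag(diag)
     - np.diag(np.ones(n - 1), 1) / h**2
     - np.diag(np.ones(n - 1), -1) / h**2)

# Two "elements"; each contributes its lowest k local eigenvectors.
# The resulting basis is localized and discontinuous across the cut.
k = 8
basis = np.zeros((n, 2 * k))
for e, sl in enumerate((slice(0, n // 2), slice(n // 2, n))):
    _, vecs = np.linalg.eigh(H[sl, sl])
    basis[sl, e * k:(e + 1) * k] = vecs[:, :k]

# Blocks have disjoint support and orthonormal columns, so the combined
# basis is orthonormal and Rayleigh-Ritz reduces to diagonalizing the
# projected Hamiltonian.
Hr = basis.T @ H @ basis
e0_reduced = np.linalg.eigh(Hr)[0][0]
e0_exact = np.linalg.eigh(H)[0][0]
```

By the Rayleigh-Ritz principle, the lowest eigenvalue of the 16-dimensional projected problem is guaranteed to bound the exact ground-state energy from above, and enlarging the per-element basis can only tighten the bound.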

  16. Influence of the Training Methods in the Diagnosis of Multiple Sclerosis Using Radial Basis Functions Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ángel Gutiérrez

    2015-04-01

    Full Text Available The data available in the average clinical study of a disease is very often small. This is one of the main obstacles in the application of neural networks to the classification of biological signals used for diagnosing diseases. A rule of thumb states that the number of parameters (weights) that can be used for training a neural network should be around 15% of the available data, to avoid overlearning. This condition puts a limit on the dimension of the input space. Different authors have used different approaches to solve this problem, such as eliminating redundancy in the data, preprocessing the data to find centers for the radial basis functions, or extracting a small number of features to be used as inputs. Clearly, the classification improves the more features we can feed into the network. The approach taken in this paper is to increment the number of training elements with randomly expanded training sets. This way the number of original signals does not constrain the dimension of the input set of the radial basis network. We then train the network both with a method that minimizes the error function by gradient descent and with a method based on the particle swarm optimization technique. A comparison between the two methods showed that, for the same number of iterations, particle swarm optimization was faster but tended to learn to recognize only the sick people, whereas the gradient descent method was in general better at identifying those people.
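As a rough, hypothetical sketch of the training strategy described above (jittered copies of a few original signals feeding a radial basis layer whose output weights are trained by gradient descent), with all sizes, the jitter scale, and the learning rate invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-class problem: 10 original "signals" in 2-D.
X = rng.normal(size=(10, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Randomly expand the training set by jittering each original signal,
# so the network sees many more examples than there are originals.
reps = 20
Xa = np.repeat(X, reps, axis=0) + 0.05 * rng.normal(size=(10 * reps, 2))
ya = np.repeat(y, reps)

# Radial basis layer, using the original signals as centers.
def rbf(points, centers, gamma=1.0):
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Phi = rbf(Xa, X)
w = np.zeros(Phi.shape[1])

# Plain gradient descent on the squared-error function.
lr = 0.01
losses = []
for _ in range(500):
    err = Phi @ w - ya
    losses.append(0.5 * np.mean(err ** 2))
    w -= lr * Phi.T @ err / len(ya)
```

The expanded set lets the 10-weight output layer be fit on 200 examples, loosening the 15%-of-the-data rule of thumb mentioned in the abstract.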

  17. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Kunpeng; Chai, Yi [College of Automation, Chongqing University, Chongqing 400044 (China); Su, Chunxiao [Research Center of Laser Fusion, CAEP, P. O. Box 919-983, Mianyang 621900 (China)

    2013-08-15

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ{sub 1}-minimization. By modeling the observed signals as superposition of scale time-shifted copies of theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitude, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.

  18. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit

    International Nuclear Information System (INIS)

    Wang, Kunpeng; Chai, Yi; Su, Chunxiao

    2013-01-01

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ 1 -minimization. By modeling the observed signals as superposition of scale time-shifted copies of theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitude, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors
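The alternation the abstract describes rests on convex sparse estimation with reweighted ℓ1 penalties. That core step can be sketched with a generic iteratively reweighted proximal-gradient loop on synthetic data; the dictionary, the reweighting rule 1/(|x| + ε) (standard in the reweighted-ℓ1 literature), and all constants are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse ground truth observed through a random dictionary plus noise.
m, n, k = 40, 80, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 3.0
b = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
t = 1.0 / L

def soft(v, tau):                      # proximal operator of the weighted l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
w = np.ones(n)

# Outer loop: reweight; inner loop: proximal-gradient (ISTA) steps.
for _ in range(5):
    for _ in range(200):
        x = soft(x - t * A.T @ (A @ x - b), t * lam * w)
    w = 1.0 / (np.abs(x) + 1e-2)       # small entries get penalized harder

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Reweighting makes the penalty behave more like an ℓ0 count: coefficients that survive an outer pass are penalized less on the next one, which typically sharpens support recovery compared with a single ℓ1 solve.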

  19. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  20. Diffusion Forecasting Model with Basis Functions from QR-Decomposition

    Science.gov (United States)

    Harlim, John; Yang, Haizhao

    2017-12-01

    Diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to an Itô diffusion without knowing the underlying equation. The key idea of the method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing them is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and can be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm are shown on both deterministically chaotic and stochastic dynamical systems; in the former case the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case forecasting accuracy is improved relative to using only a small number of eigenvectors. Supporting results are provided for three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained by applying Nonlinear Laplacian Spectral Analysis to measured Outgoing Longwave Radiation.
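A minimal sketch of the QR idea: orthonormalize a few leading eigenvectors together with selected raw columns of a (stand-in) diffusion matrix in one unpivoted QR factorization. The matrix construction, column counts, and decay rate are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a diffusion matrix: symmetric, with a decaying spectrum.
N = 300
Q0 = np.linalg.qr(rng.normal(size=(N, N)))[0]
evals = 0.9 ** np.arange(N)
D = (Q0 * evals) @ Q0.T

# A few leading eigenvectors (the computationally cheap part) ...
vals, vecs = np.linalg.eigh(D)          # ascending order
lead = vecs[:, ::-1][:, :5]             # 5 leading eigenvectors

# ... augmented with selected raw columns of D, then orthonormalized
# in a single unpivoted Householder QR factorization.
cols = D[:, rng.choice(N, 20, replace=False)]
B = np.hstack([lead, cols])
Qbasis, _ = np.linalg.qr(B)             # orthonormal basis spanning both sets
```

Because the eigenvectors are placed first, the first columns of Q reproduce them exactly, while the remaining columns add directions from the selected raw columns without ever computing the full eigendecomposition beyond the leading part.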

  1. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  2. Development of a computerized procedure system on the basis of formalized action and check

    International Nuclear Information System (INIS)

    Jung, Y. S.; Shin, Y. C.; Jung, K. H.; Sung, C. H.

    1999-01-01

    A computerized procedure system was developed on the basis of formalized actions and checks. The actions and checks are rendered into a two-dimensional flowchart with foldable, dynamic, and interactive functions. Each action and check is evaluated by the computer automatically, and can be overridden by the operator if necessary. The detail of each action and check is described in a logic tree with several boolean instructions and an 'n out of m' mathematical operator. The computer supports operators in executing the boolean instructions by providing the relevant process parameters and control devices. In spite of all this computer support, the system's inner logic is simple and transparent to operators, keeping them in the control loop. Each boolean instruction can take 3 states, and the 'n out of m' operator is defined over those 3 states. This definition not only minimizes the number of context switches during procedure execution but also enables automatic evaluation of the actions and checks. The process parameters are presented as both text labels and graphical symbols to improve readability. Moreover, a new mechanism to access any part of a procedure is devised while keeping to the main stream of procedure execution
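The abstract does not spell out the three-valued semantics; a plausible Kleene-style reading of the 'n out of m' operator over TRUE/FALSE/UNKNOWN instruction results might look like this (the class and function names are hypothetical, not from the paper):

```python
from enum import Enum

class Tri(Enum):
    """Three possible states of a boolean instruction."""
    TRUE = 1
    FALSE = 0
    UNKNOWN = 2   # not yet evaluated / indeterminate

def n_out_of_m(n, states):
    """'n out of m' over three-valued instruction results.

    TRUE    once at least n instructions are TRUE,
    FALSE   once so many are FALSE that n TRUEs can no longer be reached,
    UNKNOWN otherwise (the check cannot be decided yet).
    """
    m = len(states)
    trues = sum(s is Tri.TRUE for s in states)
    falses = sum(s is Tri.FALSE for s in states)
    if trues >= n:
        return Tri.TRUE
    if falses > m - n:
        return Tri.FALSE
    return Tri.UNKNOWN
```

Under this reading a check can be decided early, before every sub-instruction has been evaluated, which is one way the third state could reduce context switching during procedure execution.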

  3. Essay Prompts and Topics: Minimizing the Effect of Mean Differences.

    Science.gov (United States)

    Brown, James Dean; And Others

    1991-01-01

    Investigates whether prompts and topic types affect writing performance of college freshmen taking the Manoa Writing Placement Examination (MWPE). Finds that the MWPE is reliable but that responses to prompts and prompt sets differ. Shows that differences arising in performance on prompts or topics can be minimized by examining mean scores and…

  4. The Emotional and Moral Basis of Rationality

    Science.gov (United States)

    Boostrom, Robert

    2013-01-01

    This chapter explores the basis of rationality, arguing that critical thinking tends to be taught in schools as a set of skills because of the failure to recognize that choosing to think critically depends on the prior development of stable sentiments or moral habits that nourish a rational self. Primary among these stable sentiments are the…

  5. Environmental conditions using thermal-hydraulics computer code GOTHIC for beyond design basis external events

    International Nuclear Information System (INIS)

    Pleskunas, R.J.

    2015-01-01

    In response to the Fukushima Dai-ichi beyond design basis accident in March 2011, the Nuclear Regulatory Commission (NRC) issued Order EA-12-049, 'Issuance of Order to Modify Licenses with Regard to Requirements for Mitigation Strategies Beyond-Design-Basis-External-Events'. To outline the process to be used by individual licensees to define and implement site-specific diverse and flexible mitigation strategies (FLEX) that reduce the risks associated with beyond design basis conditions, Nuclear Energy Institute document NEI 12-06, 'Diverse and Flexible Coping Strategies (FLEX) Implementation Guide', was issued. A beyond design basis external event (BDBEE) is postulated to cause an Extended Loss of AC Power (ELAP), which will result in a loss of ventilation which has the potential to impact room habitability and equipment operability. During the ELAP, portable FLEX equipment will be used to achieve and maintain safe shutdown, and only a minimal set of instruments and controls will be available. Given these circumstances, analysis is required to determine the environmental conditions in several vital areas of the Nuclear Power Plant. The BDBEE mitigating strategies require certain room environments to be maintained such that they can support the occupancy of personnel and the functionality of equipment located therein, which is required to support the strategies associated with compliance to NRC Order EA-12-049. Three thermal-hydraulic analyses of vital areas during an extended loss of AC power using the GOTHIC computer code will be presented: 1) Safety-related pump and instrument room transient analysis; 2) Control Room transient analysis; and 3) Auxiliary/Control Building transient analysis. GOTHIC (Generation of Thermal-Hydraulic Information for Containment) is a general purpose thermal-hydraulics software package for the analysis of nuclear power plant containments, confinement buildings, and system components. It is a volume/path/heat sink

  6. Minimal Poems Written in 1979 Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove to be efficient means of artistic creativity in this new literary trend known as Minimalism.

  7. Minimally extended SILH

    International Nuclear Information System (INIS)

    Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

    2017-03-01

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  8. Minimally extended SILH

    Energy Technology Data Exchange (ETDEWEB)

    Chala, Mikael [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Valencia Univ. (Spain). Dept. de Fisica Teorica y IFIC; Durieux, Gauthier; Matsedonskyi, Oleksii [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Grojean, Christophe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Humboldt-Univ. Berlin (Germany). Inst. fuer Physik; Lima, Leonardo de [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Univ. Estadual Paulista, Sao Paulo (Brazil). Inst. de Fisica Teorica

    2017-03-15

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  9. Correlates of minimal dating.

    Science.gov (United States)

    Leck, Kira

    2006-10-01

    Researchers have associated minimal dating with numerous factors. The present author tested shyness, introversion, physical attractiveness, performance evaluation, anxiety, social skill, social self-esteem, and loneliness to determine the nature of their relationships with 2 measures of self-reported minimal dating in a sample of 175 college students. For women, shyness, introversion, physical attractiveness, self-rated anxiety, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. For men, physical attractiveness, observer-rated social skill, social self-esteem, and loneliness correlated with 1 or both measures of minimal dating. The patterns of relationships were not identical for the 2 indicators of minimal dating, indicating the possibility that minimal dating is not a single construct as researchers previously believed. The present author discussed implications and suggestions for future researchers.

  10. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
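emMAW itself is suffix-array-based and designed for external memory; for intuition, the definition of a minimal absent word can be checked directly with a brute-force enumeration that is only feasible for tiny inputs:

```python
from itertools import product

def minimal_absent_words(s, alphabet=None, max_len=None):
    """Naive in-memory computation of minimal absent words.

    A word w is a minimal absent word of s if w does not occur in s
    but both of its maximal proper factors w[1:] and w[:-1] do.
    (emMAW computes the same set via suffix arrays; this brute force
    is only for illustration on tiny inputs.)
    """
    alphabet = sorted(set(s)) if alphabet is None else sorted(alphabet)
    factors = {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    max_len = max_len or len(s) + 1
    maws = []
    for length in range(2, max_len + 1):
        for tup in product(alphabet, repeat=length):
            w = "".join(tup)
            if w not in factors and w[1:] in factors and w[:-1] in factors:
                maws.append(w)
    return maws
```

For example, `minimal_absent_words("abaab")` yields "bb", "aaa", "bab", and "aaba": each is absent from the sequence while both of its maximal proper factors occur in it.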

  11. Spent Nuclear Fuel (SNF) Project Design Basis Capacity Study

    International Nuclear Information System (INIS)

    CLEVELAND, K.J.

    2000-01-01

    This study of the design basis capacity of process systems was prepared by Fluor Federal Services for the Spent Nuclear Fuel Project. The evaluation uses a summary level model of major process sub-systems to determine the impact of sub-system interactions on the overall time to complete fuel removal operations. The process system model configuration and time cycle estimates developed in the original version of this report have been updated as operating scenario assumptions evolve. The initial document released in Fiscal Year (FY) 1996 varied the number of parallel systems and transport systems over a wide range, estimating a conservative design basis for completing fuel processing in a two year time period. Configurations modeling planned operations were updated in FY 1998 and FY 1999. The FY 1998 Base Case continued to indicate that fuel removal activities at the basins could be completed in slightly over 2 years. Evaluations completed in FY 1999 were based on schedule modifications that delayed the start of KE Basin fuel removal, with respect to the start of KW Basin fuel removal activities, by 12 months. This delay resulted in extending the time to complete all fuel removal activities by 12 months. However, the results indicated that the number of Cold Vacuum Drying (CVD) stations could be reduced from four to three without impacting the projected time to complete fuel removal activities. This update of the design basis capacity evaluation, performed for FY 2000, evaluates a fuel removal scenario that delays the start of KE Basin activities such that staffing peaks are minimized. The number of CVD stations included in all cases for the FY 2000 evaluation is reduced from three to two, since the scenario schedule results in minimal time periods of simultaneous fuel removal from both basins. The FY 2000 evaluation also considers removal of Shippingport fuel from T Plant storage and transfer to the Canister Storage Building for storage

  12. The instrument of asset securitization on the basis of investment funds

    Directory of Open Access Journals (Sweden)

    O.S. Novak

    2015-03-01

    Full Text Available This article explores instruments of asset securitization on the basis of investment funds. In line with the proposed national model of asset securitization based on investment funds, financial instruments that support its implementation are developed. Depending on the payment-management options, it is proposed to service asset-securitization operations with investment certificates of the securitization fund, either with or without the possibility of replenishment. Instruments without the possibility of replenishment provide a simple and low-cost operation that redirects funds from the originator to the investors in the form of investment income on the securitization fund's certificates. Instruments with replenishment allow not only redirecting payments from asset-securitization operations but also managing them in order to minimize risks.

  13. Transformation rules

    OpenAIRE

    De Nicola, Rocco; Fantechi, Alessandro; Gnesi, Stefania; Inverardi, Paola; Nesi, Monica

    1991-01-01

    In the following, a complete set of laws for observational congruence on finite basic LOTOS is listed. This set is not minimal but includes all the laws presented in [2]; for a minimal complete set we refer to [29]. The laws for each operator are grouped together.

  14. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical… between unparticle physics and Minimal Walking Technicolor. We consider also other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass.

  15. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Science.gov (United States)

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
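The paper's IG algorithm targets benchmark DBAP instances; here the generic destruct/reconstruct skeleton of an iterated greedy heuristic is sketched on a toy single-berth problem (all parameters invented), where minimizing total service time has a known optimum (shortest-processing-time order) to sanity-check against:

```python
import random

random.seed(3)

# Toy stand-in for berth scheduling: order jobs on a single berth to
# minimize the total service (completion) time.
p = [random.randint(1, 20) for _ in range(12)]   # handling times

def total_service_time(seq):
    t = total = 0
    for j in seq:
        t += p[j]
        total += t
    return total

def best_insertion(seq, job):
    """Greedy construction step: insert job at its cheapest position."""
    best = None
    for i in range(len(seq) + 1):
        cand = seq[:i] + [job] + seq[i:]
        if best is None or total_service_time(cand) < total_service_time(best):
            best = cand
    return best

# Iterated greedy: destruct a few jobs, greedily reinsert, keep improvements.
cur = list(range(12))
init_cost = total_service_time(cur)
for _ in range(100):
    trial = cur[:]
    removed = random.sample(trial, 3)
    for j in removed:
        trial.remove(j)
    for j in removed:
        trial = best_insertion(trial, j)
    if total_service_time(trial) <= total_service_time(cur):
        cur = trial

# Shortest-processing-time order is optimal for this toy objective.
spt_cost = total_service_time(sorted(range(12), key=lambda j: p[j]))
```

The destruction size, iteration count, and acceptance rule are the usual tuning knobs of IG; the final cost can never exceed the initial greedy cost and can never beat the SPT lower bound.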

  16. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Directory of Open Access Journals (Sweden)

    Shih-Wei Lin

    2014-01-01

    Full Text Available Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.

  17. Stability of the Minimizers of Least Squares with a Non-Convex Regularization. Part I: Local Behavior

    International Nuclear Information System (INIS)

    Durand, S.; Nikolova, M.

    2006-01-01

    Many estimation problems amount to minimizing a piecewise C^m objective function, with m ≥ 2, composed of a quadratic data-fidelity term and a general regularization term. It is widely accepted that the minimizers obtained using non-convex and possibly non-smooth regularization terms are frequently good estimates. However, few facts are known on the ways to control properties of these minimizers. This work is dedicated to the stability of the minimizers of such objective functions with respect to variations of the data. It consists of two parts: first we consider all local minimizers, whereas in a second part we derive results on global minimizers. In this part we focus on data points such that every local minimizer is isolated and results from a C^{m-1} local minimizer function, defined on some neighborhood. We demonstrate that all data points for which this fails form a set whose closure is negligible

  18. Implementation and automated validation of the minimal Z' model in FeynRules

    International Nuclear Information System (INIS)

    Basso, L.; Christensen, N.D.; Duhr, C.; Fuks, B.; Speckner, C.

    2012-01-01

    We describe the implementation of a well-known class of U(1) gauge models, the 'minimal' Z' models, in FeynRules. We also describe a new automated validation tool for FeynRules models which is controlled by a web interface and allows the user to run a complete set of 2 → 2 processes on different matrix element generators, in different gauges, and to compare them all. Where independent implementations exist, comparison with them is also possible. This tool has been used to validate our implementation of the 'minimal' Z' models. (authors)

  19. Geometry of convex polygons and locally minimal binary trees spanning these polygons

    International Nuclear Information System (INIS)

    Ivanov, A O; Tuzhilin, A A

    1999-01-01

    In previous works the authors have obtained an effective classification of planar locally minimal binary trees with convex boundaries. The main aim of the present paper is to find more subtle restrictions on the possible structure of such trees in terms of the geometry of the given boundary set. Special attention is given to the case of quasiregular boundaries (that is, boundaries that are sufficiently close to regular ones in a certain sense). In particular, a series of quasiregular boundaries that cannot be spanned by a locally minimal binary tree is constructed

  20. Of Minima and Maxima: The Social Significance of Minimal Competency Testing and the Search for Educational Excellence.

    Science.gov (United States)

    Ericson, David P.

    1984-01-01

    Explores the many meanings of the minimal competency testing movement and the more recent mobilization for educational excellence in the schools. Argues that increasing the value of the diploma by setting performance standards on minimal competency tests and by elevating academic graduation standards may strongly conflict with policies encouraging…

  1. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    International Nuclear Information System (INIS)

    Qi Pei-Han; Zheng Shi-Lian; Yang Xiao-Niu; Zhao Zhi-Jin

    2016-01-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. (paper)
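    The habitat/migration/mutation loop at the heart of BBO can be illustrated with a minimal sketch. The version below is a generic textbook BBO, not the paper's algorithm; the HSI evaluation is stood in for by a simple sphere cost function, and all parameter values (population size, mutation rate, and so on) are assumptions chosen for illustration.

```python
import random

def bbo_minimize(cost, dim, bounds, pop_size=20, generations=200,
                 mutation_prob=0.05, seed=1):
    """Generic biogeography-based optimization (BBO) sketch.

    Habitats are candidate solutions; a lower cost means a higher
    habitat suitability index (HSI).  Features migrate from good
    habitats to poor ones, and random mutation keeps diversity.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                       # rank habitats, best first
        new_pop = [pop[0][:]]                    # elitism: keep the best habitat
        for rank in range(1, pop_size):
            lam = rank / (pop_size - 1)          # worse habitats immigrate more
            habitat = pop[rank][:]
            for d in range(dim):
                if rng.random() < lam:
                    # emigration biased toward good habitats (min of two draws)
                    j = min(rng.randrange(pop_size), rng.randrange(pop_size))
                    habitat[d] = pop[j][d]
                if rng.random() < mutation_prob:
                    habitat[d] = rng.uniform(lo, hi)
            new_pop.append(habitat)
        pop = new_pop
    return min(pop, key=cost)

# toy stand-in for the power-consumption objective: a 5-D sphere function
best = bbo_minimize(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))
```

In the paper's setting, the cost function would instead combine power consumption with penalties for violated QoS constraints.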

  2. Constructal entransy dissipation minimization for 'volume-point' heat conduction

    International Nuclear Information System (INIS)

    Chen Lingen; Wei Shuhuan; Sun Fengrui

    2008-01-01

    The 'volume to point' heat conduction problem, which can be described as how to determine the optimal distribution of high conductivity material through a given volume such that the heat generated at every point is transferred most effectively to its boundary, has become the focus of attention in the current constructal theory literature. In general, the minimization of the maximum temperature difference in the volume is taken as the optimization objective. A new physical quantity, entransy, has recently been identified as a basis for optimizing heat transfer processes in terms of the analogy between heat and electrical conduction. Heat transfer analyses show that the entransy of an object describes its heat transfer ability, just as the electrical energy in a capacitor describes its charge transfer ability. Entransy dissipation occurs during heat transfer processes as a measure of the heat transfer irreversibility, with the dissipation-related thermal resistance. By taking the equivalent thermal resistance (which corresponds to the mean temperature difference), which reflects the average heat conduction effect and is defined based on entransy dissipation, as the optimization objective, the 'volume to point' constructal problem is re-analysed and re-optimized in this paper. The constructal shape of the control volume with the best average heat conduction effect is deduced. For the elemental area and the first order construct assembly, when the thermal current density in the high conductive link is linear with the length, the optimized shapes of the assembly based on the minimization of entransy dissipation are the same as those based on minimization of the maximum temperature difference, and the mean temperature difference is 2/3 of the maximum temperature difference. For the second and higher order construct assemblies, the thermal current densities in the high conductive link are not linear with the length, and the optimized shapes of the assembly based on the

  3. Setting and validating the pass/fail score for the NBDHE.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of scientific basis for dental hygiene practice, provision of clinical dental hygiene services and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%), and that the lowest error of measurement (an index of precision) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.
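    The arithmetic behind an OSS-style cut score can be sketched as follows. This is an illustrative simplification, not the NBDHE's actual formula, and every number below is hypothetical: the raw cut is the mastery proportion applied to the criterion items, lowered by an error band of z standard errors of measurement.

```python
import math

def oss_cut_score(n_criterion_items, mastery, sem, z):
    # raw expected score of a minimally competent candidate ...
    raw = n_criterion_items * mastery
    # ... lowered by a confidence band of z standard errors of measurement
    return math.ceil(raw - z * sem)

# hypothetical exam: 200 criterion items, 65% mastery, SEM of 4 items,
# and a 95% confidence band (z = 1.96)
cut = oss_cut_score(200, 0.65, 4.0, 1.96)  # ceil(130 - 7.84) = 123
```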

  4. Minimalism in architecture: Abstract conceptualization of architecture

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2015-01-01

    Full Text Available Minimalism in architecture contains the idea of the minimum as a leading creative tendency, to be considered and interpreted through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation, 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style, through which he explained two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of expressive observation is significant due to its verbal congruence with contemporary minimalist expression. His intuition was further reinforced by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, played a crucial role in the development of modern art and architecture of the twentieth century. Abstraction, which is one of the basic methods of learning in psychology (separating relevant from irrelevant features, Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present, in the form as well as the function of the basic elements: walls and windows. The case study is an example of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.

  5. A projection-free method for representing plane-wave DFT results in an atom-centered basis

    International Nuclear Information System (INIS)

    Dunnington, Benjamin D.; Schmidt, J. R.

    2015-01-01

    Plane wave density functional theory (DFT) is a powerful tool for gaining accurate, atomic level insight into bulk and surface structures. Yet, the delocalized nature of the plane wave basis set hinders the application of many powerful post-computation analysis approaches, many of which rely on localized atom-centered basis sets. Traditionally, this gap has been bridged via projection-based techniques from a plane wave to atom-centered basis. We instead propose an alternative projection-free approach utilizing direct calculation of matrix elements of the converged plane wave DFT Hamiltonian in an atom-centered basis. This projection-free approach yields a number of compelling advantages, including strict orthonormality of the resulting bands without artificial band mixing and access to the Hamiltonian matrix elements, while faithfully preserving the underlying DFT band structure. The resulting atomic orbital representation of the Kohn-Sham wavefunction and Hamiltonian provides a gateway to a wide variety of analysis approaches. We demonstrate the utility of the approach for a diverse set of chemical systems and example analysis approaches

  6. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  7. Local orbitals by minimizing powers of the orbital variance

    DEFF Research Database (Denmark)

    Jansik, Branislav; Høst, Stinne; Kristensen, Kasper

    2011-01-01

    's correlation consistent basis sets, it is seen that for larger penalties, the virtual orbitals become more local than the occupied ones. We also show that the local virtual HF orbitals are significantly more local than the redundant projected atomic orbitals, which often have been used to span the virtual...

  8. A strategy to find minimal energy nanocluster structures.

    Science.gov (United States)

    Rogan, José; Varas, Alejandro; Valdivia, Juan Alejandro; Kiwi, Miguel

    2013-11-05

    An unbiased strategy to search for the global and local minimal energy structures of free-standing nanoclusters is presented. Our objectives are twofold: to find a diverse set of low-lying local minima, as well as the global minimum. To do so, we use massively the fast inertial relaxation engine (FIRE) algorithm as an efficient local minimizer. This procedure turns out to be quite efficient in reaching the global minimum, and also most of the local minima. We test the method with the Lennard-Jones (LJ) potential, for which an abundant literature exists, and obtain novel results, which include a new local minimum for LJ13, 10 new local minima for LJ14, and thousands of new local minima for 15 ≤ N ≤ 65. Insights into how to choose the initial configurations, and into the effectiveness of the method in reaching low-energy structures (including the global minimum), are developed as a function of the number of atoms in the cluster. Also, a novel characterization of the potential energy surface, analyzing properties of the local minima basins, is provided. The procedure constitutes a promising tool to generate a diverse set of cluster conformations, both two- and three-dimensional, that can be used as input for refinement by means of ab initio methods. Copyright © 2013 Wiley Periodicals, Inc.
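    The local minimization this strategy relies on can be sketched for the smallest non-trivial case, the LJ trimer, whose global minimum energy is exactly -3 in reduced units (all three pairs sit at the pair-potential minimum of -1). The sketch below uses plain steepest descent with backtracking as a dependency-free stand-in for FIRE; it is not the authors' implementation, and the starting geometry is an arbitrary perturbed triangle.

```python
import itertools

def lj_energy(coords):
    """Total Lennard-Jones energy (reduced units, epsilon = sigma = 1)."""
    e = 0.0
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(coords, 2):
        r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
        inv6 = 1.0 / r2 ** 3
        e += 4.0 * (inv6 * inv6 - inv6)
    return e

def _unflatten(flat):
    return [tuple(flat[3 * j:3 * j + 3]) for j in range(len(flat) // 3)]

def _grad(flat, h=1e-6):
    """Central-difference gradient of the cluster energy."""
    g = []
    for i in range(len(flat)):
        up, dn = flat[:], flat[:]
        up[i] += h
        dn[i] -= h
        g.append((lj_energy(_unflatten(up)) - lj_energy(_unflatten(dn))) / (2 * h))
    return g

def descend(coords, steps=400):
    """Steepest descent with backtracking line search (a simple
    stand-in for the FIRE minimizer used in the paper)."""
    flat = [c for p in coords for c in p]
    for _ in range(steps):
        g = _grad(flat)
        e0 = lj_energy(_unflatten(flat))
        step = 0.1
        while step > 1e-12:
            trial = [x - step * gi for x, gi in zip(flat, g)]
            if lj_energy(_unflatten(trial)) < e0:
                flat = trial
                break
            step *= 0.5
    return _unflatten(flat)

# a perturbed triangle as the starting configuration
start = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (0.6, 1.0, 0.05)]
minimum = descend(start)
```

Running many such descents from different starting configurations, and keeping distinct converged structures, is the essence of the strategy described in the record.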

  9. Gaussian basis functions for highly oscillatory scattering wavefunctions

    Science.gov (United States)

    Mant, B. P.; Law, M. M.

    2018-04-01

    We have applied a basis set of distributed Gaussian functions within the S-matrix version of the Kohn variational method to scattering problems involving deep potential energy wells. The Gaussian positions and widths are tailored to the potential using the procedure of Bačić and Light (1986 J. Chem. Phys. 85 4594) which has previously been applied to bound-state problems. The placement procedure is shown to be very efficient and gives scattering wavefunctions and observables in agreement with direct numerical solutions. We demonstrate the basis function placement method with applications to hydrogen atom–hydrogen atom scattering and antihydrogen atom–hydrogen atom scattering.

  10. Risk Minimization for Insurance Products via F-Doubly Stochastic Markov Chains

    Directory of Open Access Journals (Sweden)

    Francesca Biagini

    2016-07-01

    Full Text Available We study risk-minimization for a large class of insurance contracts. Given that the individual progress in time of visiting an insurance policy’s states follows an F -doubly stochastic Markov chain, we describe different state-dependent types of insurance benefits. These cover single payments at maturity, annuity-type payments and payments at the time of a transition. Based on the intensity of the F -doubly stochastic Markov chain, we provide the Galtchouk-Kunita-Watanabe decomposition for a general insurance contract and specify risk-minimizing strategies in a Brownian financial market setting. The results are further illustrated explicitly within an affine structure for the intensity.

  11. Heuristics for minimizing the maximum within-clusters distance

    Directory of Open Access Journals (Sweden)

    José Augusto Fioruci

    2012-12-01

    Full Text Available The clustering problem consists in finding patterns in a data set in order to divide it into clusters with high within-cluster similarity. This paper presents the study of a problem, here called the MMD problem, which aims at finding a clustering with a predefined number of clusters that minimizes the largest within-cluster distance (diameter) among all clusters. There are two main objectives in this paper: to propose heuristics for the MMD problem and to evaluate the suitability of the best proposed heuristic's results according to the real classification of some data sets. Regarding the first objective, the results obtained in the experiments indicate a good performance of the best proposed heuristic, which outperformed the Complete Linkage algorithm (the most widely used method in the literature for this problem). Nevertheless, regarding the suitability of the results according to the real classification of the data sets, the proposed heuristic achieved better quality results than the C-Means algorithm, but worse than Complete Linkage.
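    A simple heuristic for this objective (though not one of the paper's proposed heuristics) is Gonzalez's farthest-first traversal, a classic 2-approximation for minimizing the largest within-cluster distance. The sketch and toy data below are illustrative.

```python
import math

def farthest_first(points, k):
    """Gonzalez's farthest-first traversal for the MMD objective:
    pick centers greedily, then assign each point to its nearest center."""
    centers = [points[0]]
    while len(centers) < k:
        # next center: the point farthest from its nearest existing center
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    clusters = [[] for _ in range(k)]
    for p in points:
        nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
        clusters[nearest].append(p)
    return clusters

def max_diameter(clusters):
    """Largest within-cluster distance over all clusters."""
    return max((math.dist(a, b) for cl in clusters for a in cl for b in cl),
               default=0.0)

# two well-separated groups of three points each
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
clusters = farthest_first(pts, 2)
```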

  12. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
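    The two sampling schemes being compared are easy to state concretely: fixed sampling keeps the k-mer at every s-th position, while minimizer sampling keeps the lexicographically smallest k-mer in each window of w consecutive k-mers. The sketch below is a generic illustration; the sequence, k, s, and w values are arbitrary choices, not those used in the study.

```python
def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def fixed_sample(seq, k, s):
    """Fixed sampling: keep the k-mer at every s-th position."""
    return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, s)}

def minimizer_sample(seq, k, w):
    """Minimizer sampling: in each window of w consecutive k-mers,
    keep the lexicographically smallest one (ties -> leftmost)."""
    picked = {}
    km = kmers(seq, k)
    for start in range(len(km) - w + 1):
        best = min(range(start, start + w), key=lambda i: (km[i], i))
        picked[best] = km[best]
    return picked

seq = "ACGTTGCATGTCGCATGATGCATGAGAGCT"
fixed = fixed_sample(seq, k=5, s=4)
mins = minimizer_sample(seq, k=5, w=4)
```

Note the property the paper exploits: a minimizer index lets query k-mers be sampled the same way, whereas a fixed-sampled index forces every query k-mer to be looked up.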

  13. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  14. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  15. Minimizing employee exposure to toxic chemical releases

    International Nuclear Information System (INIS)

    Plummer, R.W.; Stobbe, T.J.; Mogensen, J.E.; Jeram, L.K.

    1987-01-01

    This book describes procedures for minimizing employee exposure to toxic chemical releases and suggests personal protective equipment (PPE) to be used in the event of such a release. How individuals, employees, supervisors, or companies perceive the risks of chemical exposure (risk meaning both the probability and the effect of exposure) determines to a great extent what precautions are taken to avoid risk. In Part I, the authors develop an approach which divides the project into three phases: the kinds of procedures currently being used; the types of toxic chemical release accidents and injuries that occur; and, finally, integration of this information into a set of recommended procedures which should decrease the likelihood of a toxic chemical release and, if one does occur, will minimize the exposure and its severity to employees. Part II covers the use of personal protective equipment. It addresses the questions: what personal protective equipment ensembles are used in industry in situations where the release of a toxic or dangerous chemical may occur or has occurred; and what personal protective equipment ensembles should be used in these situations.

  16. OxMaR: open source free software for online minimization and randomization for clinical trials.

    Science.gov (United States)

    O'Callaghan, Christopher A

    2014-01-01

    Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.
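    The minimization principle itself (Pocock-Simon-style allocation, which is the general idea behind tools such as OxMaR, though not OxMaR's actual code or interface) fits in a few lines: for each candidate arm, compute the total marginal imbalance across the stratification factors that assigning the new participant there would create, and pick the arm with the smallest total. The factor names, cohort, and tie handling below are illustrative assumptions.

```python
import random

def minimize_allocate(participant, arms, history, factors, rng):
    """Assign the new participant to the arm that minimizes total
    marginal imbalance across the stratification factors."""
    def imbalance_if(arm):
        total = 0
        for f in factors:
            level = participant[f]
            counts = {a: sum(1 for p, pa in history
                             if pa == a and p[f] == level) for a in arms}
            counts[arm] += 1                 # as if assigned to `arm`
            total += max(counts.values()) - min(counts.values())
        return total
    scores = {a: imbalance_if(a) for a in arms}
    best = min(scores.values())
    return rng.choice([a for a, s in scores.items() if s == best])

rng = random.Random(42)
arms = ("control", "treatment")
factors = ("sex", "age_group")
history = []                                 # (participant, arm) pairs
cohort = [{"sex": s, "age_group": g}
          for s in ("F", "M") for g in ("<40", ">=40") for _ in range(10)]
for person in cohort:
    arm = minimize_allocate(person, arms, history, factors, rng)
    history.append((person, arm))
```

As the abstract notes, in a multi-site trial this allocation logic must run centrally, since each decision depends on the full allocation history.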

  17. OxMaR: open source free software for online minimization and randomization for clinical trials.

    Directory of Open Access Journals (Sweden)

    Christopher A O'Callaghan

    Full Text Available Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real time information on allocation to the study lead or administrator and generates real time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies and this software should allow more widespread use of minimization which should lead to studies with better matched control and experimental arms. OxMaR should be particularly valuable in low resource settings.

  18. Educational texts as empirical basis in qualitative research in Physical Education

    DEFF Research Database (Denmark)

    Svendsen, Annemari Munk

    This presentation will focus attention on educational texts as an empirical basis for qualitative research in Physical Education (PE). Educational texts may be defined as all kinds of texts used in a pedagogical setting, including textbooks, popular articles, webpages and political reports (Selander...). This makes them fundamental sites for illuminating what counts as knowledge in an educational setting (Selander & Skjeldbred, 2004). This presentation will introduce a qualitative research study conducted using discourse analysis of educational texts in Physical Education Teacher Education (PETE) in Denmark (Svendsen & Svendsen, 2014). It will present the theoretical and methodological considerations that are tied to the analysis of educational texts and discuss the qualities and challenges related to educational texts as an empirical basis for qualitative research in PE. References: Apple, M. W. & Christian

  19. Financial strategies for minimizing corporate income taxes under Brazil's new global tax system

    OpenAIRE

    Limberg, Stephen T.; Robison, John R.; Schadewald, Michael S.

    1997-01-01

    In 1996, Brazil adopted a worldwide income tax system for corporations. This system represents a fundamental change in how the Brazilian government treats multinational transactions and the tax-minimizing strategies relevant to businesses. In this article, we describe the conceptual basis for worldwide tax systems and the problem of double taxation that they create. Responses to double taxation by both governments and the private sector are considered. Namely, the imperfect mechanisms de...

  20. Departure fuel loads in time-minimizing migrating birds can be explained by the energy costs of being heavy

    NARCIS (Netherlands)

    Klaassen, M.R.J.; Lindstrom, A.

    1996-01-01

    Lindstrom & Alerstam (1992, Am. Nat. 140, 477-491) presented a model that predicts optimal departure fuel loads as a function of the rate of fuel deposition in time-minimizing migrants. The basis of the model is that the coverable distance per unit of fuel deposited diminishes with increasing fuel

  1. Minimization of number of setups for mounting machines

    Energy Technology Data Exchange (ETDEWEB)

    Kolman, Pavel; Nchor, Dennis; Hampel, David [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, 603 00 Brno (Czech Republic); Žák, Jaroslav [Institute of Technology and Business, Okružní 517/10, 370 01 České Budejovice (Czech Republic)

    2015-03-10

    The article deals with the problem of minimizing the number of setups for mounting SMT machines. An SMT machine is a device used to assemble components on printed circuit boards (PCBs) during the manufacturing of electronics. Each type of PCB has a different set of components, which are obligatory. Components are placed in the SMT tray. The problem consists in the fact that the total number of components used across all products is greater than the size of the tray. Therefore, every change of manufactured product requires a complete change of components in the tray (i.e., a setup change). Currently, the number of setups corresponds to the number of printed circuit board types, and any change of manufactured product triggers a setup change and stops production for one shift. Many components occur in multiple products, so the question arose of how to assign the products to groups so as to minimize the number of setups. This would result in a large increase in production efficiency.
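    The grouping idea can be sketched as a greedy first-fit over component sets: products whose combined components still fit in the tray share one setup. The products, component names, and tray size below are made up for illustration; the article's actual optimization method may differ.

```python
def group_products(products, tray_size):
    """Greedy first-fit sketch: place each product (a set of component
    types) into the first group whose combined component set still fits
    the SMT tray; each group then needs only one setup."""
    groups = []                    # list of [union_of_components, [product_ids]]
    # largest products first tends to pack better
    for pid, comps in sorted(products.items(), key=lambda kv: -len(kv[1])):
        for entry in groups:
            union = entry[0] | comps
            if len(union) <= tray_size:
                entry[0] = union
                entry[1].append(pid)
                break
        else:
            groups.append([set(comps), [pid]])
    return groups

# hypothetical product mix: four PCB types, tray capacity of 6 components
products = {
    "pcb_a": {"r10k", "c100n", "led", "mcu1"},
    "pcb_b": {"r10k", "c100n", "diode"},
    "pcb_c": {"mcu2", "r1k", "c10u", "xtal"},
    "pcb_d": {"r1k", "c10u", "relay"},
}
groups = group_products(products, tray_size=6)
```

Here four per-product setups collapse into two group setups, the kind of reduction the article is after.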

  2. Minimizing off-Target Mutagenesis Risks Caused by Programmable Nucleases.

    Science.gov (United States)

    Ishida, Kentaro; Gee, Peter; Hotta, Akitsu

    2015-10-16

    Programmable nucleases, such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly interspaced short palindromic repeats-associated protein 9 (CRISPR-Cas9) system, hold tremendous potential for applications in the clinical setting to treat genetic diseases or prevent infectious diseases. However, because the accuracy of DNA recognition by these nucleases is not always perfect, off-target mutagenesis may result in undesirable adverse events in treated patients, such as cellular toxicity or tumorigenesis. Therefore, the design of nucleases and the analysis of their activity must be carefully evaluated to minimize off-target mutagenesis. Furthermore, rigorous genomic testing will be important to ensure the integrity of nuclease-modified cells. In this review, we provide an overview of available nuclease design platforms, nuclease engineering approaches to minimize off-target activity, and methods to evaluate both on- and off-target cleavage of CRISPR-Cas9.

  3. Minimal average consumption downlink base station power control strategy

    OpenAIRE

    Holtkamp H.; Auer G.; Haas H.

    2011-01-01

    We consider single-cell multi-user OFDMA downlink resource allocation on a flat-fading channel such that average supply power is minimized while fulfilling a set of target rates. The available degrees of freedom are transmission power and duration. This paper extends our previous work on power-optimal resource allocation in the mobile downlink by detailing the investigation of the optimal power control strategy and extracting fundamental characteristics of power-optimal operation in the cellular downlink. W...

  4. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal requirements. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations

  5. REFORMASI SISTEM AKUNTANSI CASH BASIS MENUJU SISTEM AKUNTANSI ACCRUAL BASIS

    Directory of Open Access Journals (Sweden)

    Yuri Rahayu

    2016-03-01

    Full Text Available Abstract – The accounting reform movement was born with the aim of structuring the direction of improvement. The movement is marked by the enactment of the Act of 2003 and Act No. 1 of 2004, which became the basis for Government Regulation No. 24 of 2005 on Government Accounting Standards (SAP). In general, accounting records are based on two systems: the cash basis and the accrual basis. In practice, students are still confused by the differences between the two methods, which results in a poor understanding of their treatment in bookkeeping. The purpose of this research is to provide relevant references for students learning basic accounting, so that it offers information and a more meaningful understanding of the cash-basis and accrual-basis accounting methods. The research was conducted through a normative approach, tracing library references that combine trustworthy sources, from both books and the internet, processed with the author's knowledge and experience. The conclusion is that, to understand the difference between the cash-basis and accrual-basis systems, students need an understanding of the treatment under both methods, which in turn requires reading practice and reference sources. Keywords: reform, cash basis, accrual basis

  6. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated from experimental design, we use trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.
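The covariance-size criterion can be sketched with a simplified stand-in (plain ridge regularization replaces the paper's Laplacian regularizer, and the trace operator implements the "size" of the covariance matrix; the greedy search is illustrative):

```python
import numpy as np

def greedy_trace_selection(X, k, lam=1e-2):
    # Greedily pick k features so that the trace of the regularized
    # parameter covariance (X_S^T X_S + lam*I)^{-1} is minimized
    # (an A-optimal-design-style criterion).
    n, d = X.shape
    selected, remaining = [], list(range(d))
    for _ in range(k):
        best_j, best_trace = None, np.inf
        for j in remaining:
            S = selected + [j]
            Xs = X[:, S]
            cov = np.linalg.inv(Xs.T @ Xs + lam * np.eye(len(S)))
            if np.trace(cov) < best_trace:
                best_trace, best_j = np.trace(cov), j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

With one feature the trace reduces to 1/(||x_j||^2 + lam), so high-energy features are preferred first, mirroring the intuition that informative features shrink the estimator's variance.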

  7. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
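A toy version of the idea can be sketched as follows (Gaussian bumps stand in for the spline basis, and a gradient-of-response heuristic stands in for the paper's adaptive sampling scheme; all parameters are illustrative):

```python
import numpy as np

def adaptive_basis_fit(x, y, n_basis=20, lam=1e-3, width=0.15, seed=0):
    # Sampling weights: points where the response changes quickly are more
    # likely to receive a basis function; a uniform floor keeps coverage.
    dy = np.abs(np.gradient(y, x))
    prob = dy + dy.mean()
    prob /= prob.sum()
    rng = np.random.default_rng(seed)
    centres = rng.choice(x, size=n_basis, replace=False, p=prob)
    # Design matrix of Gaussian bumps; ridge-regularized least squares.
    Phi = np.exp(-((x[:, None] - centres[None, :]) / width) ** 2)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_basis), Phi.T @ y)
    return Phi @ w, centres
```

The point, as in the paper, is that the fit uses far fewer basis functions than data points (here 20 versus n), giving the more scalable computation.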

  8. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  9. Vitali systems in R^n with irregular sets

    DEFF Research Database (Denmark)

    Mejlbro, Leif; Topsøe, Flemming

    1996-01-01

    Vitali type theorems are results stating that out of a given family of sets one can select pairwise disjoint sets which fill out a "large" region. Usually one works with "regular" sets such as balls. We shall establish results with sets of a more complicated geometrical structure, e.g., Cantor-like sets are allowed. The results are related to a generalisation of the classical notion of a differentiation basis. They concern real n-space R^n and Lebesgue measure.

  10. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as that in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation, in which the inflaton is non-minimally coupled to the gravity. We find that the non-minimal coupling inflation could be eternal in some parameter spaces.

  11. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation

    International Nuclear Information System (INIS)

    Schranz, C; Möller, K; Becher, T; Schädler, D; Weiler, N

    2014-01-01

    Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.

  12. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.

    Science.gov (United States)

    Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K

    2014-03-01

    Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
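The first-order relation between pI, tI and tE can be sketched with a toy calculation (units assumed to be cmH2O, L and s; the scan over inspiration times and the three-time-constant expiration rule are illustrative assumptions, not the authors' algorithm):

```python
import math

def required_pressure(v_t, t_i, c_rs, r_aw, peep):
    # Inspiratory pressure needed to deliver tidal volume v_t in time t_i
    # for a first-order (resistance-compliance) model: V(t) = C*dP*(1-exp(-t/tau)).
    tau = r_aw * c_rs
    return peep + v_t / (c_rs * (1.0 - math.exp(-t_i / tau)))

def optimize_settings(v_t, rr, c_rs, r_aw, peep, min_te_taus=3.0):
    # Scan inspiration times within the breath cycle and return the
    # (p_i, t_i, t_e) with minimal p_i while leaving at least
    # min_te_taus time constants for expiration.
    tau = r_aw * c_rs
    cycle = 60.0 / rr
    best = None
    for k in range(1, 1000):
        t_i = k * cycle / 1000.0
        t_e = cycle - t_i
        if t_e < min_te_taus * tau:
            break
        p_i = required_pressure(v_t, t_i, c_rs, r_aw, peep)
        if best is None or p_i < best[0]:
            best = (p_i, t_i, t_e)
    return best
```

Because the delivered volume saturates as exp(-tI/tau), longer inspiration times always lower the required pressure; the expiration constraint is what bounds tI and creates the optimization problem.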

  13. Acceptable risk as a basis for design

    International Nuclear Information System (INIS)

    Vrijling, J.K.; Hengel, W. van; Houben, R.J.

    1998-01-01

    Historically, human civilisations have striven to protect themselves against natural and man-made hazards. The degree of protection is a matter of political choice. Today this choice should be expressed in terms of risk and acceptable probability of failure to form the basis of the probabilistic design of the protection. It is additionally argued that the choice for a certain technology and the connected risk is made in a cost-benefit framework. The benefits and the costs including risk are weighed in the decision process. A set of rules for the evaluation of risk is proposed and tested in cases. The set of rules leads to technical advice on a question that has to be decided politically.
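The cost-benefit weighing of protection investment against expected damage can be illustrated with a classic economic-optimization toy model (the logarithmic investment law and all numbers below are assumptions for illustration only):

```python
import math

def total_cost(p_f, damage, rate=0.04, invest_scale=1.0e6):
    # Total expected cost of a protection system: investment grows as the
    # accepted failure probability shrinks (log law), while the expected
    # capitalized damage is p_f * damage / discount rate.
    investment = invest_scale * (-math.log10(p_f))
    expected_damage = p_f * damage / rate
    return investment + expected_damage

def optimal_failure_probability(damage, rate=0.04, invest_scale=1.0e6):
    # Grid-search the acceptable failure probability with minimal total cost.
    candidates = [10 ** (-k / 10.0) for k in range(10, 101)]  # 1e-1 .. 1e-10
    return min(candidates, key=lambda p: total_cost(p, damage, rate, invest_scale))
```

The model reproduces the qualitative rule the abstract argues for: the larger the damage at stake, the smaller the acceptable probability of failure that the cost-benefit balance prescribes.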

  14. Youth Sports Clubs' Potential as Health-Promoting Setting: Profiles, Motives and Barriers

    Science.gov (United States)

    Meganck, Jeroen; Scheerder, Jeroen; Thibaut, Erik; Seghers, Jan

    2015-01-01

    Setting and Objective: For decades, the World Health Organisation has promoted settings-based health promotion, but its application to leisure settings is minimal. Focusing on organised sports as an important leisure activity, the present study had three goals: exploring the health promotion profile of youth sports clubs, identifying objective…

  15. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference of the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets by using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
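The margin definition can be sketched directly (a simplified stand-in: minimum pairwise distance replaces the paper's combined sample/affine-hull model, and no EM/APG parameter learning is performed):

```python
import numpy as np

def set_distance(a, b):
    # Minimum pairwise Euclidean distance between two image sets
    # (rows are sample vectors).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min()

def classify_set(test_set, train_sets, labels):
    # Assign the test image set to the class of its nearest training set.
    dists = [set_distance(test_set, s) for s in train_sets]
    return labels[int(np.argmin(dists))]

def margin(test_set, train_sets, labels, own_label):
    # Margin of a set: distance to the nearest set of a different class
    # minus distance to the nearest set of the same class
    # (positive means the set is well separated).
    same = min(set_distance(test_set, s)
               for s, l in zip(train_sets, labels) if l == own_label)
    other = min(set_distance(test_set, s)
                for s, l in zip(train_sets, labels) if l != own_label)
    return other - same
```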

  16. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference of the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets by using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  17. Web Enabled DROLS Verity TopicSets

    National Research Council Canada - National Science Library

    Tong, Richard

    1999-01-01

    The focus of this effort has been the design and development of automatically generated TopicSets and HTML pages that provide the basis of the required search and browsing capability for DTIC's Web Enabled DROLS System...

  18. Advanced Test Reactor Safety Basis Upgrade Lessons Learned Relative to Design Basis Verification and Safety Basis Management

    International Nuclear Information System (INIS)

    G. L. Sharp; R. T. McCracken

    2004-01-01

    The Advanced Test Reactor (ATR) is a pressurized light-water reactor with a design thermal power of 250 MW. The principal function of the ATR is to provide a high neutron flux for testing reactor fuels and other materials. The reactor also provides other irradiation services such as radioisotope production. The ATR and its support facilities are located at the Test Reactor Area of the Idaho National Engineering and Environmental Laboratory (INEEL). An audit conducted by the Department of Energy's Office of Independent Oversight and Performance Assurance (DOE OA) raised concerns that design conditions at the ATR were not adequately analyzed in the safety analysis and that legacy design basis management practices had the potential to further impact safe operation of the facility. The concerns identified by the audit team, and issues raised during additional reviews performed by ATR safety analysts, were evaluated through the unreviewed safety question process resulting in shutdown of the ATR for more than three months while these concerns were resolved. Past management of the ATR safety basis, relative to facility design basis management and change control, led to concerns that discrepancies in the safety basis may have developed. Although not required by DOE orders or regulations, not performing design basis verification in conjunction with development of the 10 CFR 830 Subpart B upgraded safety basis allowed these potential weaknesses to be carried forward. Configuration management and a clear definition of the existing facility design basis have a direct relation to developing and maintaining a high quality safety basis which properly identifies and mitigates all hazards and postulated accident conditions. These relations and the impact of past safety basis management practices have been reviewed in order to identify lessons learned from the safety basis upgrade process and appropriate actions to resolve possible concerns with respect to the current ATR safety

  19. Minimal models for axion and neutrino

    Directory of Open Access Journals (Sweden)

    Y.H. Ahn

    2016-01-01

    Full Text Available The PQ mechanism resolving the strong CP problem and the seesaw mechanism explaining the smallness of neutrino masses may be related in a way that the PQ symmetry breaking scale and the seesaw scale arise from a common origin. Depending on how the PQ symmetry and the seesaw mechanism are realized, one has different predictions on the color and electromagnetic anomalies which could be tested in future axion dark matter search experiments. Motivated by this, we construct various PQ seesaw models which are minimally extended from the (non-)supersymmetric Standard Model and thus set up different benchmark points on the axion–photon–photon coupling in comparison with the standard KSVZ and DFSZ models.

  20. Triple Hierarchical Variational Inequalities with Constraints of Mixed Equilibria, Variational Inequalities, Convex Minimization, and Hierarchical Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Lu-Chuan Ceng

    2014-01-01

    Full Text Available We introduce and analyze a hybrid iterative algorithm by virtue of Korpelevich's extragradient method, viscosity approximation method, hybrid steepest-descent method, and averaged mapping approach to the gradient-projection algorithm. It is proven that under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs, the solution set of finitely many variational inequality problems (VIPs, the solution set of general system of variational inequalities (GSVI, and the set of minimizers of convex minimization problem (CMP, which is just a unique solution of a triple hierarchical variational inequality (THVI in a real Hilbert space. In addition, we also consider the application of the proposed algorithm to solve a hierarchical fixed point problem with constraints of finitely many GMEPs, finitely many VIPs, GSVI, and CMP. The results obtained in this paper improve and extend the corresponding results announced by many others.

  1. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface.The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence.As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  2. Hexavalent Chromium Minimization Strategy

    Science.gov (United States)

    2011-05-01

    Logistics Initiative - DoD Hexavalent Chromium Minimization, Non-Chrome Primer. Management Office of the Secretary of Defense, Hexavalent Chromium Minimization Strategy. [Remainder of record: illegible scan of the Report Documentation Page, Form Approved OMB No. 0704-0188; recoverable fields: Title: Hexavalent Chromium Minimization Strategy, 2011.]

  3. International urinary tract imaging basic spinal cord injury data set

    DEFF Research Database (Denmark)

    Biering-Sørensen, F; Craggs, M; Kennelly, M

    2008-01-01

    OBJECTIVE: To create an International Urinary Tract Imaging Basic Spinal Cord Injury (SCI) Data Set within the framework of the International SCI Data Sets. SETTING: An international working group. METHODS: The draft of the Data Set was developed by a working group comprising members appointed...... of comparable minimal data. RESULTS: The variables included in the International Urinary Tract Imaging Basic SCI Data Set are the results obtained using the following investigations: intravenous pyelography or computer tomography urogram or ultrasound, X-ray, renography, clearance, cystogram, voiding cystogram...

  4. A Bayesian spatial model for neuroimaging data based on biologically informed basis functions.

    Science.gov (United States)

    Huertas, Ismael; Oldehinkel, Marianne; van Oort, Erik S B; Garcia-Solis, David; Mir, Pablo; Beckmann, Christian F; Marquand, Andre F

    2017-11-01

    The dominant approach to neuroimaging data analysis employs the voxel as the unit of computation. While convenient, voxels lack biological meaning and their size is arbitrarily determined by the resolution of the image. Here, we propose a multivariate spatial model in which neuroimaging data are characterised as a linearly weighted combination of multiscale basis functions which map onto underlying brain nuclei or networks. In this model, the elementary building blocks are derived to reflect the functional anatomy of the brain during the resting state. This model is estimated using a Bayesian framework which accurately quantifies uncertainty and automatically finds the most accurate and parsimonious combination of basis functions describing the data. We demonstrate the utility of this framework by predicting quantitative SPECT images of striatal dopamine function and we compare a variety of basis sets including generic isotropic functions, anatomical representations of the striatum derived from structural MRI, and two different soft functional parcellations of the striatum derived from resting-state fMRI (rfMRI). We found that a combination of ∼50 multiscale functional basis functions accurately represented the striatal dopamine activity, and that functional basis functions derived from an advanced parcellation technique known as Instantaneous Connectivity Parcellation (ICP) provided the most parsimonious models of dopamine function. Importantly, functional basis functions derived from resting fMRI were more accurate than both structural and generic basis sets in representing dopamine function in the striatum for a fixed model order. We demonstrate the translational validity of our framework by constructing classification models for discriminating parkinsonian disorders and their subtypes. Here, we show that the ICP approach is the only basis set that performs well across all comparisons and performs better overall than the classical voxel-based approach.

  5. Geometric Energy Derivatives at the Complete Basis Set Limit: Application to the Equilibrium Structure and Molecular Force Field of Formaldehyde.

    Science.gov (United States)

    Morgan, W James; Matthews, Devin A; Ringholm, Magnus; Agarwal, Jay; Gong, Justin Z; Ruud, Kenneth; Allen, Wesley D; Stanton, John F; Schaefer, Henry F

    2018-03-13

    Geometric energy derivatives which rely on core-corrected focal-point energies extrapolated to the complete basis set (CBS) limit of coupled cluster theory with iterative and noniterative quadruple excitations, CCSDTQ and CCSDT(Q), are used as elements of molecular gradients and, in the case of CCSDT(Q), expansion coefficients of an anharmonic force field. These gradients are used to determine the CCSDTQ/CBS and CCSDT(Q)/CBS equilibrium structure of the S0 ground state of H2CO where excellent agreement is observed with previous work and experimentally derived results. A fourth-order expansion about this CCSDT(Q)/CBS reference geometry using the same level of theory produces an exceptional level of agreement to spectroscopically observed vibrational band origins with a MAE of 0.57 cm-1. Second-order vibrational perturbation theory (VPT2) and variational discrete variable representation (DVR) results are contrasted and discussed. Vibration-rotation, anharmonicity, and centrifugal distortion constants from the VPT2 analysis are reported and compared to previous work. Additionally, an initial application of a sum-over-states fourth-order vibrational perturbation theory (VPT4) formalism is employed herein, utilizing quintic and sextic derivatives obtained with a recursive algorithmic approach for response theory.
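Basis-set extrapolation of the kind used here can be illustrated with the common two-point inverse-cubic formula for correlation energies (a generic textbook scheme, not necessarily the exact focal-point recipe of this paper):

```python
def cbs_extrapolate(e_x, x, e_y, y):
    # Two-point extrapolation assuming E(X) = E_CBS + A * X**-3,
    # where X, Y are cardinal numbers of correlation-consistent basis sets
    # (e.g. X = 3 for cc-pVTZ, X = 4 for cc-pVQZ).
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)
```

Because the assumed form has exactly two unknowns, two energies at consecutive cardinal numbers eliminate the A·X^-3 term exactly and return the CBS-limit estimate.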

  6. Investigation of confined hydrogen atom in spherical cavity, using B-splines basis set

    Directory of Open Access Journals (Sweden)

    M Barezi

    2011-03-01

    Full Text Available Studying confined quantum systems (CQS) is very important in nanotechnology. One of the basic CQS is a hydrogen atom confined in a spherical cavity. In this article, eigenenergies and eigenfunctions of the hydrogen atom in a spherical cavity are calculated using the linear variational method. B-splines are used as basis functions, which can easily construct trial wave functions with appropriate boundary conditions. The main characteristics of B-splines are their high localization and flexibility. Moreover, these functions are numerically stable and can handle a large volume of calculation with good accuracy. The energy levels as a function of cavity radius are analyzed. To check the validity and efficiency of the proposed method, an extensive convergence test of the eigenenergies for different cavity sizes has been carried out.
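The B-spline basis itself is easy to evaluate with the Cox-de Boor recursion (a generic sketch; the confinement boundary condition at the cavity radius would additionally be imposed through the choice of knot vector, which is not shown here):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    # Cox-de Boor recursion: value of the i-th B-spline of order k
    # (degree k-1) on knot vector t, evaluated at the points x.
    if k == 1:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = 0.0
    if t[i + k - 1] > t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right
```

With a clamped knot vector the basis functions form a partition of unity on the interior of the domain, which is a convenient correctness check and reflects the localization property mentioned in the abstract.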

  7. Methodological basis for the optimization of a marine sea-urchin embryo test (SET) for the ecological assessment of coastal water quality.

    Science.gov (United States)

    Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo

    2010-05-01

    The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred as the test endpoint over observer-dependent, discontinuous responses. Control size increase after 48 h of incubation at 20 degrees C must meet an acceptability criterion of 218 microm. In order to avoid false positives, minimum values of 32 per thousand salinity, pH 7 and 2 mg/L oxygen, and a maximum of 40 microg/L NH(3) (NOEC) are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 degrees C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
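The degree-day correction for in situ testing amounts to normalizing growth by the accumulated temperature above the 12 degrees C developmental threshold (function names below are illustrative):

```python
def degree_days(temps_c, threshold=12.0):
    # Accumulated degree-days above the developmental threshold,
    # given one mean temperature reading per day.
    return sum(max(t - threshold, 0.0) for t in temps_c)

def corrected_growth_rate(size_increase_um, temps_c, threshold=12.0):
    # Size increase normalized per degree-day, allowing comparison of
    # in situ incubations run at different field temperatures.
    dd = degree_days(temps_c, threshold)
    return size_increase_um / dd if dd > 0 else float('nan')
```

For instance, a standard 48 h laboratory incubation at 20 degrees C accumulates 2 x (20 - 12) = 16 degree-days, so the 218 microm acceptability criterion corresponds to about 13.6 microm per degree-day.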

  8. A criterion for flatness in minimal area metrics that define string diagrams

    International Nuclear Information System (INIS)

    Ranganathan, K.; Massachusetts Inst. of Tech., Cambridge, MA

    1992-01-01

    It has been proposed that the string diagrams of closed string field theory be defined by a minimal area problem that requires all nontrivial homotopy curves to have length greater than or equal to 2π. Consistency requires that the minimal area metric be flat in a neighbourhood of the punctures. The theorem proven in this paper yields a criterion which, if satisfied, ensures this requirement. The theorem states roughly that the metric is flat in an open set U if there is a unique closed curve of length 2π through every point in U and all of these closed curves are in the same free homotopy class. (orig.)

  9. 'Triune' Protection and its Implications for the Minimal State

    Directory of Open Access Journals (Sweden)

    Jinglei Hu

    2009-04-01

    Full Text Available Characterizing the libertarian ideal, Robert Nozick's minimal state has been a classic model wherein individual rights are taken for granted, state power is derivative from them, and the state is legitimate only if it protects and reinforces individual rights. However, the "protection" is not as limited as it appears to be, and the feedback from the state to individuals is not always positive. A microscopic analysis of the "protection" proffered by the minimal state reveals three constituents (retribution, preemption, and prevention or preventive restraints), with "preventive restraints" being the most controversial and extensive, and conflicting with rights as "side-constraints". By rejecting "utilitarianism of rights", Nozick sets out to optimally secure rights, yet in so doing he could hardly reconcile the clash between "constraints" and "restraints", between the inviolable rights supposed to be protected and the protective measures supposed to limit rights.

  10. Multi-Objective Evaluation of Target Sets for Logistics Networks

    National Research Council Canada - National Science Library

    Emslie, Paul

    2000-01-01

    .... In the presence of many objectives--such as reducing maximum flow, lengthening routes, avoiding collateral damage, all at minimal risk to our pilots--the problem of determining the best target set is complex...

  11. Risk of incisional hernia after minimally invasive and open radical prostatectomy.

    Science.gov (United States)

    Carlsson, Sigrid V; Ehdaie, Behfar; Atoria, Coral L; Elkin, Elena B; Eastham, James A

    2013-11-01

    The number of radical prostatectomies has increased. Many urologists have shifted from the open surgical approach to minimally invasive techniques. It is not clear whether the risk of post-prostatectomy incisional hernia varies by surgical approach. In the linked Surveillance, Epidemiology and End Results (SEER)-Medicare data set we identified men 66 years old or older who were treated with minimally invasive or open radical prostatectomy for prostate cancer diagnosed from 2003 to 2007. The main study outcome was incisional hernia repair, as identified in Medicare claims after prostatectomy. We also examined the frequency of umbilical, inguinal and other hernia repairs. We identified 3,199 and 6,795 patients who underwent minimally invasive and open radical prostatectomy, respectively. The frequency of incisional hernia repair was 5.3% at a median 3.1-year followup in the minimally invasive group and 1.9% at a 4.4-year median followup in the open group, corresponding to an incidence rate of 16.1 and 4.5/1,000 person-years, respectively. Compared to the open technique, the minimally invasive procedure was associated with more than a threefold increased risk of incisional hernia repair when controlling for patient and disease characteristics (adjusted HR 3.39, 95% CI 2.63-4.38, p<0.0001). Minimally invasive radical prostatectomy was associated with an attenuated but increased risk of any hernia repair compared with open radical prostatectomy (adjusted HR 1.48, 95% CI 1.29-1.70, p<0.0001). Minimally invasive radical prostatectomy was associated with a significantly increased risk of incisional hernia compared with open radical prostatectomy. This is a potentially remediable complication of prostate cancer surgery that warrants increased vigilance with respect to surgical technique. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
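The incidence rates quoted above (events per 1,000 person-years) come from straightforward person-time arithmetic, which can be sketched as follows (the helper name and example numbers are illustrative, not the study's data):

```python
def incidence_rate_per_1000(n_events, followup_years):
    # Crude incidence rate per 1,000 person-years, given each patient's
    # follow-up time in years.
    return 1000.0 * n_events / sum(followup_years)
```

For example, 4 hernia repairs over a total of 250 person-years of follow-up give a rate of 16 per 1,000 person-years, the order of magnitude reported here for the minimally invasive group.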

  12. Efficient G0W0 using localized basis sets: a benchmark for molecules

    Science.gov (United States)

    Koval, Petr; Per Ljungberg, Mathias; Sanchez-Portal, Daniel

    Electronic structure calculations within Hedin's GW approximation are becoming increasingly accessible to the community. In particular, as it has been shown earlier and we confirm by calculations using our MBPT_LCAO package, the computational cost of the so-called G0W0 can be made comparable to the cost of a regular Hartree-Fock calculation. In this work, we study the performance of our new implementation of G0W0 to reproduce the ionization potentials of all 117 closed-shell molecules belonging to the G2/97 test set, using a pseudo-potential starting point provided by the popular density-functional package SIESTA. Moreover, the ionization potentials and electron affinities of a set of 24 acceptor molecules are compared to experiment and to reference all-electron calculations. PK: Guipuzcoa Fellow; PK,ML,DSP: Deutsche Forschungsgemeinschaft (SFB1083); PK,DSP: MINECO MAT2013-46593-C6-2-P.

  13. ALWR utility requirements - A technical basis for updated emergency planning

    International Nuclear Information System (INIS)

    Leaver, David E.W.; DeVine, John C. Jr.; Santucci, Joseph

    2004-01-01

    U.S. utilities, with substantial support from international utilities, are developing a comprehensive set of design requirements in the form of a Utility Requirements Document (URD) as part of an industry wide effort to establish a technical foundation for the next generation of light water reactors. A key aspect of the URD is a set of severe accident-related design requirements which have been developed to provide a technical basis for updated emergency planning for the ALWR. The technical basis includes design criteria for containment performance and offsite dose during severe accident conditions. An ALWR emergency planning concept is being developed which reflects this severe accident capability. The main conclusion from this work is that the likelihood and consequences of a severe accident for an ALWR are fundamentally different from that assumed in the technical basis for existing emergency planning requirements, at least in the U.S. The current technical understanding of severe accident risk is greatly improved compared to that available when the existing U.S. emergency planning requirements were established nearly 15 years ago, and the emerging ALWR designs have superior core damage prevention and severe accident mitigation capability. Thus, it is reasonable and prudent to reflect this design capability in the emergency planning requirements for the ALWR. (author)

  14. Influence of basis-set size on the X²Σ⁺₁/₂, A²Π₁/₂, A²Π₃/₂, and B²Σ⁺₁/₂ potential-energy curves, A²Π₃/₂ vibrational energies, and D1 and D2 line shapes of Rb+He

    Science.gov (United States)

    Blank, L. Aaron; Sharma, Amit R.; Weeks, David E.

    2018-03-01

    The X²Σ⁺₁/₂, A²Π₁/₂, A²Π₃/₂, and B²Σ⁺₁/₂ potential-energy curves for Rb+He are computed at the spin-orbit multireference configuration interaction level of theory using a hierarchy of Gaussian basis sets at the double-zeta (DZ), triple-zeta (TZ), and quadruple-zeta (QZ) levels of valence quality. Counterpoise and Davidson-Silver corrections are employed to remove basis-set superposition error and ameliorate size-consistency error. An extrapolation is performed to obtain a final set of potential-energy curves in the complete basis-set (CBS) limit. This yields four sets of systematically improved X²Σ⁺₁/₂, A²Π₁/₂, A²Π₃/₂, and B²Σ⁺₁/₂ potential-energy curves that are used to compute the A²Π₃/₂ bound vibrational energies, the position of the D2 blue satellite peak, and the D1 and D2 pressure broadening and shifting coefficients, at the DZ, TZ, QZ, and CBS levels. Results are compared with previous calculations and experimental observation.
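    The DZ → TZ → QZ → CBS sequence described above is commonly handled with a two-point extrapolation. The abstract does not state which formula the authors used; the sketch below assumes the common inverse-cubic form E(X) = E_CBS + A·X⁻³, where X is the basis-set cardinal number:

```python
# Two-point complete-basis-set (CBS) extrapolation sketch.
# Assumes the widely used inverse-cubic form E(X) = E_CBS + A * X**(-3);
# the paper may employ a different extrapolation formula.

def cbs_extrapolate(e_x, x, e_y, y):
    """Extrapolate energies at cardinal numbers x < y to the CBS limit."""
    # Solving E(x) = E_CBS + A/x^3 and E(y) = E_CBS + A/y^3 for E_CBS:
    return (e_y * y**3 - e_x * x**3) / (y**3 - x**3)

# Illustrative (made-up) TZ and QZ energies in hartree:
e_tz, e_qz = -0.250, -0.258
e_cbs = cbs_extrapolate(e_tz, 3, e_qz, 4)
print(round(e_cbs, 5))  # -0.26384
```

    The extrapolated value always lies beyond the larger-basis result, consistent with the systematic improvement from DZ to CBS reported above.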

  15. Evaluation of the Volatility Basis-Set Approach for Modeling Primary and Secondary Organic Aerosol in the Mexico City Metropolitan Area

    Science.gov (United States)

    Tsimpidi, A. P.; Karydis, V. A.; Pandis, S. N.; Zavala, M.; Lei, W.; Molina, L. T.

    2007-12-01

    Anthropogenic air pollution is an increasingly serious problem for public health, agriculture, and global climate. Organic material (OM) contributes ~ 20-50% to the total fine aerosol mass at continental mid-latitudes. Although OM accounts for a large fraction of PM2.5 concentration worldwide, the contributions of primary and secondary organic aerosol have been difficult to quantify. In this study, new primary and secondary organic aerosol modules were added to PMCAMx, a three-dimensional chemical transport model (Gaydos et al., 2007), for use with the SAPRC99 chemistry mechanism (Carter, 2000; ENVIRON, 2006) based on recent smog chamber studies (Robinson et al., 2007). The new modeling framework is based on the volatility basis-set approach (Lane et al., 2007): both primary and secondary organic components are assumed to be semivolatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. The emission inventory, which uses the MCMA 2004 official inventory (CAM, 2006) as its starting point, is modified so that the primary organic aerosol (POA) emissions are distributed by volatility based on dilution experiments (Robinson et al., 2007). Sensitivity tests in which POA is considered nonvolatile, and POA and SOA chemically reactive, are also described. In all cases PMCAMx is applied to the Mexico City Metropolitan Area during March 2006. The modeling domain covers a 180x180x6 km region in the MCMA with 3x3 km grid resolution. The model predictions are compared with Aerodyne Aerosol Mass Spectrometer (AMS) observations from the MILAGRO Campaign. References: Robinson, A. L.; Donahue, N. M.; Shrivastava, M. K.; Weitkamp, E. A.; Sage, A. M.; Grieshop, A. P.; Lane, T. E.; Pandis, S. N.; Pierce, J. R., 2007. Rethinking organic aerosols: semivolatile emissions and photochemical aging. Science 315, 1259-1262. Gaydos, T. M.; Pinder, R. W.; Koo, B.; Fahey, K. M.; Pandis, S. N., 2007. Development and application of a three-dimensional aerosol
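    The partitioning step at the heart of the volatility basis-set approach can be sketched compactly. In the standard Donahue-type formulation, the particle-phase fraction of volatility bin i is ξᵢ = (1 + C*ᵢ/C_OA)⁻¹, and the total organic aerosol concentration C_OA is solved self-consistently. The bin structure and mass totals below are illustrative, not the PMCAMx inputs:

```python
import numpy as np

# Equilibrium gas-particle partitioning in a volatility basis set (VBS).
# Standard relation: particle-phase fraction of bin i is
#     xi_i = 1 / (1 + Cstar_i / C_OA),
# solved self-consistently for the organic aerosol mass C_OA by fixed-point
# iteration. Bin values and totals are illustrative, not from the paper.

def vbs_partition(c_total, c_star, n_iter=200):
    """c_total: total (gas + particle) mass per bin; c_star: saturation conc. (ug/m3)."""
    c_oa = max(float(c_total.sum()) * 0.5, 1e-12)  # initial guess
    for _ in range(n_iter):
        xi = 1.0 / (1.0 + c_star / c_oa)           # partitioning fractions
        c_oa = max(float(np.sum(c_total * xi)), 1e-12)
    return xi, c_oa

c_star = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # log-spaced bins
c_total = np.full(6, 2.0)                                  # 2 ug/m3 in each bin
xi, c_oa = vbs_partition(c_total, c_star)
print(round(c_oa, 3))
```

    Low-volatility bins (small C*) end up almost entirely in the particle phase, while high-volatility bins stay mostly in the gas phase, which is the mechanism that makes POA semivolatile in this framework.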

  16. Handbook of Gaussian basis sets

    International Nuclear Information System (INIS)

    Poirier, R.; Kari, R.; Csizmadia, I.G.

    1985-01-01

    A large body of information useful to chemists involved in molecular Gaussian computations is presented. Every effort has been made by the authors to collect all available data for cartesian Gaussian basis sets as found in the literature up to July of 1984. The data in this text include a large collection of polarization function exponents, although in this case the collection is not complete. Exponents for Slater type orbitals (STO) were included for completeness. This text offers a collection of Gaussian exponents primarily without criticism. (Auth.)

  17. Automatic Curve Fitting Based on Radial Basis Functions and a Hierarchical Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    G. Trejo-Caballero

    2015-01-01

    Full Text Available Curve fitting is a very challenging problem that arises in a wide variety of scientific and engineering applications. Given a set of data points, possibly noisy, the goal is to build a compact representation of the curve that corresponds to the best estimate of the unknown underlying relationship between two variables. Despite the large number of methods available to tackle this problem, it remains challenging and elusive. In this paper, a new method to tackle this problem using strictly a linear combination of radial basis functions (RBFs) is proposed. To be more specific, we divide the parameter search space into linear and nonlinear parameter subspaces. We use a hierarchical genetic algorithm (HGA) to minimize a model selection criterion, which allows us to automatically and simultaneously determine the nonlinear parameters and then, by the least-squares method through singular value decomposition, to compute the linear parameters. The method is fully automatic and does not require subjective parameters, for example, a smoothing factor or centre locations, to perform the solution. In order to validate the efficacy of our approach, we perform an experimental study with several tests on benchmark smooth functions. A comparative analysis with two successful methods based on RBF networks has been included.

  18. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  19. On what basis are medical cost-effectiveness thresholds set? Clashing opinions and an absence of data: a systematic review.

    Science.gov (United States)

    Cameron, David; Ubels, Jasper; Norström, Fredrik

    2018-01-01

    The amount a government should be willing to invest in adopting new medical treatments has long been under debate. With many countries using formal cost-effectiveness (C/E) thresholds when examining potential new treatments and ever-growing medical costs, accurately setting the level of a C/E threshold can be essential for an efficient healthcare system. The aim of this systematic review is to describe the prominent approaches to setting a C/E threshold, compile available national-level C/E threshold data and willingness-to-pay (WTP) data, and to discern whether associations exist between these values, gross domestic product (GDP) and health-adjusted life expectancy (HALE). This review further examines current obstacles faced with the presently available data. A systematic review was performed to collect articles which have studied national C/E thresholds and willingness-to-pay (WTP) per quality-adjusted life year (QALY) in the general population. Associations between GDP, HALE, WTP, and C/E thresholds were analyzed with correlations. Seventeen countries were identified from nine unique sources to have formal C/E thresholds within our inclusion criteria. Thirteen countries from nine sources were identified to have WTP per QALY data within our inclusion criteria. Two possible associations were identified: C/E thresholds with HALE (quadratic correlation of 0.63), and C/E thresholds with GDP per capita (polynomial correlation of 0.84). However, these results are based on few observations and therefore firm conclusions cannot be made. Most national C/E thresholds identified in our review fall within the WHO's recommended range of one-to-three times GDP per capita. However, the quality and quantity of data available regarding national average WTP per QALY, opportunity costs, and C/E thresholds is poor in comparison to the importance of adequate investment in healthcare. 
There exists an obvious risk that countries might either over- or underinvest in healthcare if they
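    The "quadratic correlation of 0.63" and "polynomial correlation of 0.84" reported above are naturally read as coefficients of determination (R²) from low-order polynomial fits. The following sketch shows that computation under that reading; the data values are made up and are not the review's country-level figures:

```python
import numpy as np

# Sketch of a "polynomial correlation": fit a low-order polynomial and
# report R^2 (coefficient of determination). That this is exactly the
# statistic used in the review is an assumption; the data below are
# illustrative, not the actual C/E-threshold and GDP values.

def poly_r2(x, y, degree):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return 1.0 - ss_res / ss_tot

gdp = np.array([20.0, 30.0, 40.0, 55.0, 70.0])        # GDP per capita ($1000s)
threshold = np.array([18.0, 33.0, 45.0, 60.0, 90.0])  # C/E threshold ($1000s/QALY)
print(round(poly_r2(gdp, threshold, 2), 2))
```

    With so few national data points available, as the review stresses, such R² values are fragile, which is one reason firm conclusions cannot be drawn.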

  20. General description of transverse mode Bessel beams and construction of basis Bessel fields

    Science.gov (United States)

    Wang, Jia Jie; Wriedt, Thomas; Lock, James A.; Jiao, Yong Chang

    2017-07-01

    Based on an analysis of polarized Bessel beams using the Hertz vector potentials and the angular spectrum representation (ASR), a general description of transverse mode Bessel beams is proposed. As opposed to the cases of linearly and circularly polarized Bessel beams, the magnetic and electric fields of a Bessel beam in a transverse mode are orthogonal to each other. Both sets of fields together form a complete set of basis Bessel fields, in terms of which an arbitrary Bessel beam can be regarded as a linear combination. The completeness of the basis Bessel fields is analyzed from the perspectives of waveguide theory and vector wave functions. Decompositions of linearly polarized, circularly polarized, and circularly symmetric n-order Bessel beams in terms of basis Bessel fields are given. The results presented in this paper provide a fresh perspective on the description of Bessel beams, which are useful in casting insights into the experimental generation of Bessel beams and the interpretation of light scattering-related problems in practice.
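    The scalar core of an n-order Bessel beam, around which the vector transverse-mode construction above is built, is easy to evaluate numerically: u(ρ, φ, z) = Jₙ(k_ρ ρ) exp(inφ) exp(ik_z z) with k_ρ² + k_z² = k². A sketch (the paper's full Hertz-potential vector fields are not reproduced, and the parameter values are illustrative):

```python
import numpy as np
from scipy.special import jv

# Scalar building block of an n-order Bessel beam:
#     u(rho, phi, z) = J_n(k_rho * rho) * exp(i*n*phi) * exp(i*k_z*z),
# with k_rho = k*sin(alpha), k_z = k*cos(alpha) for half-cone angle alpha.
# The vector transverse-mode fields in the paper are built from Hertz
# potentials; this scalar profile is only the common starting point.

def bessel_beam(rho, phi, z, n, k, half_cone_angle):
    k_rho = k * np.sin(half_cone_angle)
    k_z = k * np.cos(half_cone_angle)
    return jv(n, k_rho * rho) * np.exp(1j * (n * phi + k_z * z))

k = 2 * np.pi / 0.5e-6                      # wavenumber for 500 nm light
u0 = bessel_beam(0.0, 0.0, 0.0, 0, k, np.deg2rad(5.0))
print(u0.real)  # J_0(0) = 1 on axis
```

    Because z enters only through a pure phase, |u| is independent of z, which is the propagation-invariant ("non-diffracting") property that makes Bessel beams interesting in the scattering problems mentioned above.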

  1. Minimal Marking: A Success Story

    Science.gov (United States)

    McNeilly, Anne

    2014-01-01

    The minimal-marking project conducted in Ryerson's School of Journalism throughout 2012 and early 2013 resulted in significantly higher grammar scores in two first-year classes of minimally marked university students when compared to two traditionally marked classes. The "minimal-marking" concept (Haswell, 1983), which requires…

  2. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal

  3. Minimally invasive approaches for the treatment of inflammatory bowel disease

    Institute of Scientific and Technical Information of China (English)

    Marco Zoccali; Alessandro Fichera

    2012-01-01

    Despite significant improvements in the medical management of inflammatory bowel disease, many of these patients still require surgery at some point in the course of their disease. Their young age and poor general condition, worsened by aggressive medical treatments, make minimally invasive approaches particularly enticing for this patient population. However, the typical inflammatory changes that characterize these diseases have hindered the wide diffusion of laparoscopy in this setting, which is currently pursued mostly in high-volume referral centers, despite accumulating evidence in the literature supporting the benefits of minimally invasive surgery. The largest body of evidence currently available, for terminal ileal Crohn's disease, shows improved short-term outcomes after laparoscopic surgery, with prolonged operative times. For Crohn's colitis, high-quality evidence supporting laparoscopic surgery is lacking. Encouraging preliminary results have been obtained with the adoption of laparoscopic restorative total proctocolectomy for the treatment of ulcerative colitis. A consensus about patient selection and the need for staging has not yet been reached. Despite the lack of conclusive evidence, a wave of enthusiasm is pushing towards less invasive strategies to further minimize surgical trauma, with single-incision laparoscopic surgery being the most realistic future development.

  4. Pectoral Fascial (PECS) I and II Blocks as Rescue Analgesia in a Patient Undergoing Minimally Invasive Cardiac Surgery.

    Science.gov (United States)

    Yalamuri, Suraj; Klinger, Rebecca Y; Bullock, W Michael; Glower, Donald D; Bottiger, Brandi A; Gadsden, Jeffrey C

    Patients undergoing minimally invasive cardiac surgery have the potential for significant pain from the thoracotomy site. We report the successful use of pectoral nerve block types I and II (Pecs I and II) as rescue analgesia in a patient undergoing minimally invasive mitral valve repair. In this case, a 78-year-old man, with no history of chronic pain, underwent mitral valve repair via right anterior thoracotomy for severe mitral regurgitation. After extubation, he complained of 10/10 pain at the incision site that was minimally responsive to intravenous opioids. He required supplemental oxygen because of poor pulmonary mechanics, with shallow breathing and splinting due to pain, and subsequent intensive care unit readmission. Ultrasound-guided Pecs I and II blocks were performed on the right side with 30 mL of 0.2% ropivacaine with 1:400,000 epinephrine. The blocks resulted in near-complete chest wall analgesia and improved pulmonary mechanics for approximately 24 hours. After the single-injection blocks regressed, a second set of blocks was performed with 266 mg of liposomal bupivacaine mixed with bupivacaine. This second set of blocks provided extended analgesia for an additional 48 hours. The patient was weaned rapidly from supplemental oxygen after the blocks because of improved analgesia. Pectoral nerve blocks have been described in the setting of breast surgery to provide chest wall analgesia. We report the first successful use of Pecs blocks to provide effective chest wall analgesia for a patient undergoing minimally invasive cardiac surgery with thoracotomy. We believe that these blocks may provide an important nonopioid option for the management of pain during recovery from minimally invasive cardiac surgery.

  5. Lumbar Spinal Stenosis Minimally Invasive Treatment with Bilateral Transpedicular Facet Augmentation System

    Energy Technology Data Exchange (ETDEWEB)

    Masala, Salvatore, E-mail: salva.masala@tiscali.it [Interventional Radiology and Radiotherapy, University of Rome 'Tor Vergata', Department of Diagnostic and Molecular Imaging (Italy)]; Tarantino, Umberto [University of Rome 'Tor Vergata', Department of Orthopaedics and Traumatology (Italy)]; Nano, Giovanni, E-mail: gionano@gmail.com [Interventional Radiology and Radiotherapy, University of Rome 'Tor Vergata', Department of Diagnostic and Molecular Imaging (Italy)]; Iundusi, Riccardo [University of Rome 'Tor Vergata', Department of Orthopaedics and Traumatology (Italy)]; Fiori, Roberto, E-mail: fiori.r@libero.it; Da Ros, Valerio, E-mail: valeriodaros@hotmail.com; Simonetti, Giovanni [Interventional Radiology and Radiotherapy, University of Rome 'Tor Vergata', Department of Diagnostic and Molecular Imaging (Italy)]

    2013-06-15

    Purpose. The purpose of this study was to evaluate the effectiveness of a new pedicle screw-based posterior dynamic stabilization device, the PDS Percudyn System™ Anchor and Stabilizer (Interventional Spine Inc., Irvine, CA), as an alternative minimally invasive treatment for patients with lumbar spine stenosis. Methods. Twenty-four consecutive patients (8 women, 16 men; mean age 61.8 yr) with lumbar spinal stenosis underwent implantation of the minimally invasive pedicle screw-based device for posterior dynamic stabilization. Inclusion criteria were lumbar stenosis without signs of instability, resistant to conservative treatment, and eligible for traditional surgical posterior decompression. Results. Twenty patients (83 %) progressively improved during the 1-year follow-up. Four (17 %) patients did not show any improvement and opted for surgical posterior decompression. For both responder and nonresponder patients, no device-related complications were reported. Conclusions. The minimally invasive PDS Percudyn System™ has effectively improved the clinical setting of 83 % of the highly selected patients treated, delaying the need for traditional surgical therapy.

  6. Lumbar Spinal Stenosis Minimally Invasive Treatment with Bilateral Transpedicular Facet Augmentation System

    International Nuclear Information System (INIS)

    Masala, Salvatore; Tarantino, Umberto; Nano, Giovanni; Iundusi, Riccardo; Fiori, Roberto; Da Ros, Valerio; Simonetti, Giovanni

    2013-01-01

    Purpose. The purpose of this study was to evaluate the effectiveness of a new pedicle screw-based posterior dynamic stabilization device PDS Percudyn System™ Anchor and Stabilizer (Interventional Spine Inc., Irvine, CA) as alternative minimally invasive treatment for patients with lumbar spine stenosis. Methods. Twenty-four consecutive patients (8 women, 16 men; mean age 61.8 yr) with lumbar spinal stenosis underwent implantation of the minimally invasive pedicle screw-based device for posterior dynamic stabilization. Inclusion criteria were lumbar stenosis without signs of instability, resistant to conservative treatment, and eligible to traditional surgical posterior decompression. Results. Twenty patients (83 %) progressively improved during the 1-year follow-up. Four (17 %) patients did not show any improvement and opted for surgical posterior decompression. For both responder and nonresponder patients, no device-related complications were reported. Conclusions. Minimally invasive PDS Percudyn System™ has effectively improved the clinical setting of 83 % of highly selected patients treated, delaying the need for traditional surgical therapy.

  7. SIS - Species and Stock Administrative Data Set

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Species and Stock Administrative data set within the Species Information System (SIS) defines entities within the database that serve as the basis for recording...

  8. A minimally-resolved immersed boundary model for reaction-diffusion problems

    OpenAIRE

    Pal Singh Bhalla, A; Griffith, BE; Patankar, NA; Donev, A

    2013-01-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blo...

  9. Waste minimization assessment procedure

    International Nuclear Information System (INIS)

    Kellythorne, L.L.

    1993-01-01

    Perry Nuclear Power Plant began developing a waste minimization plan early in 1991. In March of 1991 the plan was documented, following a format similar to that described in the EPA Waste Minimization Opportunity Assessment Manual. Initial implementation involved obtaining management's commitment to support a waste minimization effort. The primary assessment goal was to identify all hazardous waste streams and to evaluate those streams for minimization opportunities. As implementation of the plan proceeded, non-hazardous waste streams routinely generated in large volumes were also evaluated for minimization opportunities. The next step included collection of process and facility data which would be useful in helping the facility accomplish its assessment goals. This paper describes the resources that were used, and which were most valuable, in identifying both the hazardous and non-hazardous waste streams that existed on site. For each material identified as a waste stream, additional information regarding the material's use, manufacturer, EPA hazardous waste number and DOT hazard class was also gathered. Each waste stream was then evaluated for potential source reduction, recycling, re-use, re-sale, or burning for heat recovery, with disposal as the last viable alternative.

  10. Westinghouse Hanford Company waste minimization actions

    International Nuclear Information System (INIS)

    Greenhalgh, W.O.

    1988-09-01

    Companies that generate hazardous waste materials are now required by national regulations to establish a waste minimization program. Accordingly, in FY88 the Westinghouse Hanford Company formed a waste minimization team organization. The purpose of the team is to assist the company in its efforts to minimize the generation of waste, train personnel on waste minimization techniques, document successful waste minimization effects, track dollar savings realized, and to publicize and administer an employee incentive program. A number of significant actions have been successful, resulting in the savings of materials and dollars. The team itself has been successful in establishing some worthwhile minimization projects. This document briefly describes the waste minimization actions that have been successful to date. 2 refs., 26 figs., 3 tabs

  11. Claus sulphur recovery potential approaches 99% while minimizing cost

    Energy Technology Data Exchange (ETDEWEB)

    Berlie, E M

    1974-01-21

    In a summary of a paper presented to the fourth joint engineering conference of the American Institute of Chemical Engineers and the Canadian Society for Chemical Engineering, the Claus process is discussed in a modern setting. Some problems faced in the operation of sulfur recovery plants include (1) strict pollution control regulations; (2) design and operation of existing plants; (3) knowledge of process fundamentals; (4) performance testing; (5) specification of feed gas; (6) catalyst life; (7) instrumentation and process control; and (8) quality of feed gas. Some of the factors which must be considered in order to achieve the ultimate capability of the Claus process are listed. There is strong evidence to support the contention that plant operators are reluctant to accept new fundamental knowledge of the Claus sulfur recovery process and are not taking advantage of its inherent potential to achieve the emission standards required, to minimize cost of tail gas cleanup systems and to minimize operating costs.

  12. Irreducible descriptive sets of attributes for information systems

    KAUST Repository

    Moshkov, Mikhail

    2010-01-01

    The maximal consistent extension Ext(S) of a given information system S consists of all objects corresponding to attribute values from S which are consistent with all true and realizable rules extracted from the original information system S. An irreducible descriptive set for the considered information system S is a minimal (relative to the inclusion) set B of attributes which defines exactly the set Ext(S) by means of true and realizable rules constructed over attributes from the considered set B. We show that there exists only one irreducible descriptive set of attributes. We present a polynomial algorithm for this set construction. We also study relationships between the cardinality of irreducible descriptive set of attributes and the number of attributes in S. The obtained results will be useful for the design of concurrent data models from experimental data. © 2010 Springer-Verlag.

  13. Minimal but non-minimal inflation and electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Marzola, Luca [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu, Ravila 14c, 50411 Tartu (Estonia)]; Racioppi, Antonio [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia)]

    2016-10-07

    We consider the most minimal scale-invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet that plays the rôle of the inflaton and is compatible with current experimental bounds owing to its non-minimal coupling to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10⁻³, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97, which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  14. Measurement of temperature induced in bone during drilling in minimally invasive foot surgery.

    Science.gov (United States)

    Omar, Noor Azzizah; McKinley, John C

    2018-02-19

    There has been growing interest in minimally invasive foot surgery due to the benefits it delivers in post-operative outcomes in comparison to conventional open methods of surgery. One of the major factors determining the protocol in minimally invasive surgery is the need to prevent iatrogenic thermal osteonecrosis. The aim of the study is to examine various drilling parameters in a minimally invasive surgery setting that would reduce the risk of iatrogenic thermal osteonecrosis. Sixteen fresh-frozen tarsal bones and two metatarsal bones were retrieved from three individuals and drilled using various settings. The parameters considered were drilling speed, drill diameter, and inter-individual cortical variability. Temperature measurements of heat generated at the drilling site were collected using two methods: a thermocouple probe and infrared thermography. The data obtained were quantitatively analysed. There was a significant difference in the temperatures generated at different drilling speeds, but not between different drill diameters. The thermocouple proved a significantly more sensitive tool for measuring temperature than infrared thermography. Drilling at an optimal speed significantly reduced the risk of iatrogenic thermal osteonecrosis by maintaining temperature below the threshold level. Although different drilling diameters did not produce significant differences in temperature generation, there is a need for further study on the mechanical impact of using different drill diameters. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. The Topological Basis Realization for Six Qubits and the Corresponding Heisenberg Spin-1/2 Chain Model

    Science.gov (United States)

    Yang, Qi; Cao, Yue; Chen, Shiyin; Teng, Yue; Meng, Yanli; Wang, Gangcheng; Sun, Chunfang; Xue, Kang

    2018-03-01

    In this paper, we construct a new set of orthonormal topological basis states for six qubits with the topological single loop d = 2. By acting on the subspace, we get a new five-dimensional (5D) reduced matrix. In addition, it is shown that the Heisenberg XXX spin-1/2 chain of six qubits can be constructed from the Temperley-Lieb algebra (TLA) generator; both the energy ground state and the spin singlet states of the system can be described by the set of topological basis states.

  16. The Topological Basis Realization for Six Qubits and the Corresponding Heisenberg Spin-1/2 Chain Model

    Science.gov (United States)

    Yang, Qi; Cao, Yue; Chen, Shiyin; Teng, Yue; Meng, Yanli; Wang, Gangcheng; Sun, Chunfang; Xue, Kang

    2018-06-01

    In this paper, we construct a new set of orthonormal topological basis states for six qubits with the topological single loop d = 2. By acting on the subspace, we get a new five-dimensional (5D) reduced matrix. In addition, it is shown that the Heisenberg XXX spin-1/2 chain of six qubits can be constructed from the Temperley-Lieb algebra (TLA) generator; both the energy ground state and the spin singlet states of the system can be described by the set of topological basis states.

  17. Surface interpolation with radial basis functions for medical imaging

    International Nuclear Information System (INIS)

    Carr, J.C.; Beatson, R.K.; Fright, W.R.

    1997-01-01

    Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill
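    The interpolation step described above maps directly onto modern library routines, which incorporate exactly the fast-evaluation advances the paper mentions. A sketch using SciPy's RBFInterpolator on synthetic scattered "depth-map" samples; the thin-plate-spline kernel and all data here are my choices, not the paper's:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Core idea of the paper in modern form: fit a radial basis function
# interpolant to scattered depth-map samples and evaluate it across a
# hole ("defect") region. A smooth synthetic surface stands in for the
# CT-derived skull depth map.

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))    # scattered (x, y) sample sites
depth = np.exp(-(pts ** 2).sum(axis=1))        # smooth synthetic depth values

rbf = RBFInterpolator(pts, depth, kernel="thin_plate_spline")

hole = np.array([[0.0, 0.0], [0.1, -0.2]])     # query points inside the "defect"
est = rbf(hole)
true = np.exp(-(hole ** 2).sum(axis=1))
print(float(np.max(np.abs(est - true))))       # interpolation error at the hole
```

    As in the paper, the interpolation centers need not lie on a regular grid, and the fitted surface can be evaluated at any desired resolution over the defect.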

  18. Structural Genomics of Minimal Organisms: Pipeline and Results

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung-Hou; Shin, Dong-Hae; Kim, Rosalind; Adams, Paul; Chandonia, John-Marc

    2007-09-14

    The initial objective of the Berkeley Structural Genomics Center was to obtain near-complete three-dimensional (3D) structural information on all soluble proteins of two minimal organisms, the closely related pathogens Mycoplasma genitalium and M. pneumoniae. The former has fewer than 500 genes and the latter fewer than 700. A semiautomated structural genomics pipeline was set up, spanning target selection, cloning, expression, purification, and ultimately structure determination. At the time of this writing, structural information for more than 93% of all soluble proteins of M. genitalium is available. This chapter summarizes the approaches taken by the authors' center.

  19. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to the graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  20. Qudit-Basis Universal Quantum Computation Using χ^(2) Interactions

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-04-01

    We prove that universal quantum computation can be realized, using only linear optics and χ^(2) (three-wave mixing) interactions, in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ^(2) Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ^(2) interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by the χ^(2) interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.

  1. Qudit-Basis Universal Quantum Computation Using χ^{(2)} Interactions.

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L; Shapiro, Jeffrey H

    2018-04-20

    We prove that universal quantum computation can be realized, using only linear optics and χ^{(2)} (three-wave mixing) interactions, in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ^{(2)} Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ^{(2)} interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by the χ^{(2)} interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.

  2. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  3. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    In this paper, we investigate the properties of immersed minimal surfaces inside the symmetric space associated to a subloci of the Hitchin component: the $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

  4. Minimal Webs in Riemannian Manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    2008-01-01

    For a given combinatorial graph $G$, a geometrization $(G, g)$ of the graph is obtained by considering each edge of the graph as a $1$-dimensional manifold with an associated metric $g$. In this paper we are concerned with minimal isometric immersions of geometrized graphs $(G, g)$ into Riemannian manifolds $(N^{n}, h)$. Such immersions we call minimal webs. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence...

  5. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  6. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  7. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
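    The conditioning issue motivating the PAO scheme is easy to reproduce. The sketch below is an assumption-laden illustration, not the PAO construction itself: it builds the analytic overlap matrix of normalized one-dimensional Gaussians exp(-a x²) sharing a common center and shows the condition number growing as an even-tempered extended basis becomes more nearly linearly dependent.

```python
import numpy as np

def overlap(a):
    """Overlap matrix of normalized 1-D Gaussians exp(-a_i x^2) on one center:
    S_ij = sqrt(2 * sqrt(a_i * a_j) / (a_i + a_j))."""
    a = np.asarray(a, dtype=float)
    return np.sqrt(2.0 * np.sqrt(np.outer(a, a)) / np.add.outer(a, a))

# Even-tempered exponents a_i = beta**i: as beta -> 1 the basis functions
# become nearly linearly dependent and cond(S) explodes.
conds = {beta: np.linalg.cond(overlap(beta ** np.arange(8.0)))
         for beta in (4.0, 2.0, 1.2)}
for beta, c in conds.items():
    print(f"beta = {beta}: cond(S) = {c:.3e}")
```

A minimal basis contracted from such an extended set, as in the PAO approach, sidesteps this growth because its overlap matrix stays small and well separated from singularity.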

  8. Power Loss Minimization for Transformers Connected in Parallel with Taps Based on Power Chargeability Balance

    Directory of Open Access Journals (Sweden)

    Álvaro Jaramillo-Duque

    2018-02-01

    In this paper, a model and solution approach for minimizing internal power losses in Transformers Connected in Parallel (TCP) with tap-changers is proposed. The model is based on power-chargeability balance and seeks to keep the load voltage within an admissible range. To achieve this, tap positions are adjusted so that all TCP are set to similar or equal power chargeability. The main contribution of this paper is the inclusion of several construction features (rated voltage, rated power, voltage ratio, short-circuit impedance, and tap steps) in the minimization of power losses in TCP, which are not included in previous works. A Genetic Algorithm (GA) is used to solve the proposed model, a system of nonlinear equations with discrete decision variables. The GA scans different sets of tap positions with the aim of balancing the power supplied by each transformer to the load. For this purpose, a fitness function minimizes two mismatches: the first between the power chargeability of each transformer and a desired chargeability, and the second between the nominal load voltage and the load voltage obtained by changing the tap positions. The proposed method is generalized for any given number of TCP and was implemented for three TCP, demonstrating that power losses are minimized and the load voltage remains within an admissible range.
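    The two-term fitness function described above can be illustrated on a toy per-unit circuit model. All numeric values, the single-impedance load model, and the exhaustive search standing in for the GA are assumptions made for this sketch, not data from the paper.

```python
import itertools
import numpy as np

# Toy steady-state model: three transformers in parallel feed one load bus.
# Each unit sees source voltage V_SRC through its tap ratio and delivers
# power through its short-circuit impedance (all quantities per-unit).
V_SRC, V_NOM = 1.05, 1.0
Z = np.array([0.05, 0.06, 0.08])        # short-circuit impedances
S_RATED = np.array([1.0, 1.0, 0.8])     # rated powers
Z_LOAD = 0.9                            # load impedance
TAPS = (0.95, 0.975, 1.0, 1.025, 1.05)  # selectable tap ratios

def fitness(ratios):
    e = V_SRC / np.asarray(ratios)                        # no-load secondary voltages
    v = (e / Z).sum() / (1.0 / Z_LOAD + (1.0 / Z).sum())  # load-bus voltage (nodal equation)
    s = v * (e - v) / Z                                   # power delivered by each unit
    chargeability = s / S_RATED
    # Penalize chargeability imbalance plus load-voltage deviation.
    return float(np.var(chargeability) + (v - V_NOM) ** 2)

# Exhaustive search over the 5^3 discrete tap settings (a GA would scan
# this space stochastically for larger transformer groups).
best = min(itertools.product(TAPS, repeat=3), key=fitness)
print(best, fitness(best))
```

For three units the discrete space is tiny; the GA in the paper matters when the number of transformers and tap steps makes enumeration impractical.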

  9. 42 CFR 403.764 - Basis and purpose of religious nonmedical health care institutions providing home service.

    Science.gov (United States)

    2010-10-01

    ... care institutions providing home service. 403.764 Section 403.764 Public Health CENTERS FOR MEDICARE... Basis and purpose of religious nonmedical health care institutions providing home service. (a) Basis... and 1878 of the Act regarding Medicare payment for items and services provided in the home setting...

  10. Very Large-Scale Neighborhoods with Performance Guarantees for Minimizing Makespan on Parallel Machines

    NARCIS (Netherlands)

    Brueggemann, T.; Hurink, Johann L.; Vredeveld, T.; Woeginger, Gerhard

    2006-01-01

    We study the problem of minimizing the makespan on m parallel machines. We introduce a very large-scale neighborhood of exponential size (in the number of machines) that is based on a matching in a complete graph. The idea is to partition the jobs assigned to the same machine into two sets. This

  11. Development of Mixed Autonomous Power System on the Basis of Renewable Energy Sources

    Directory of Open Access Journals (Sweden)

    D. P. Laoshvili

    2010-01-01

    A principal circuit diagram has been developed for an autonomous power system based on renewable energy sources: solar and accumulator batteries. Owing to the use of a DC pulse converter, a DC chopper (interrupter), an IGBT-module inverter, and a single-phase matching power transformer, effective stepping of the DC voltage and its inversion are achieved with minimal energy losses. The efficiency factor of the proposed converter installation exceeds 90% and its power factor is close to unity.

  12. Automatic reduction of large X-ray fluorescence data-sets applied to XAS and mapping experiments

    International Nuclear Information System (INIS)

    Martin Montoya, Ligia Andrea

    2017-02-01

    In this thesis two automatic methods for the reduction of large fluorescence data sets are presented. The first method is proposed in the framework of BioXAS experiments. The challenge of these experiments is to deal with samples in ultra-dilute concentrations, where the signal-to-background ratio is low. The experiment is performed in fluorescence-mode X-ray absorption spectroscopy with a 100-pixel high-purity Ge detector. The first step consists of reducing 100 fluorescence spectra into one; in this step, outliers are identified by means of the shot noise. Furthermore, a fitting routine whose model includes Gaussian functions for the fluorescence lines and exponentially modified Gaussian (EMG) functions for the scattering lines (with long tails at lower energies) is proposed to extract the line of interest from the fluorescence spectrum. Additionally, the fitting model has an EMG function for each scattering line (elastic and inelastic) at incident energies where they start to be discerned. At these energies, the data reduction is done per detector column to include the angular dependence of scattering. In the second part of this thesis, an automatic method for separating texts on palimpsests is presented. Scanning X-ray fluorescence is performed on the parchment, and a spectrum is collected per scanned point. Within this method, each spectrum is treated as a vector forming a basis, which is to be transformed so that the basis vectors are the spectra of each ink. Principal Component Analysis is employed as an initial guess of the sought basis. This basis is further transformed by means of an optimization routine that maximizes the contrast and minimizes the non-negative entries in the spectra. The method is tested on original and self-made palimpsests.
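    The EMG line shape used for the scattering lines can be written down directly from its standard closed form; mirroring it about the peak gives the long tail at lower energies mentioned above. Parameter values here are illustrative, not fitted detector data.

```python
import math

def emg_right(x, mu, sigma, lam):
    """Standard exponentially modified Gaussian: Gaussian of width sigma
    convolved with an exponential of rate lam, tailing towards x > mu."""
    arg = (mu + lam * sigma ** 2 - x) / (sigma * math.sqrt(2.0))
    return (0.5 * lam
            * math.exp(0.5 * lam * (2.0 * mu + lam * sigma ** 2 - 2.0 * x))
            * math.erfc(arg))

def scatter_line(x, mu, sigma, lam):
    """Mirrored EMG: long tail towards energies BELOW the peak at mu,
    as for the elastic/inelastic scattering lines."""
    return emg_right(2.0 * mu - x, mu, sigma, lam)

# The low-energy side decays far more slowly than the high-energy side:
left = scatter_line(-4.0, 0.0, 1.0, 0.5)   # 4 sigma below the peak
right = scatter_line(4.0, 0.0, 1.0, 0.5)   # 4 sigma above the peak
print(left, right)
```

In the actual fit, one such mirrored EMG per scattering line is summed with Gaussians for the fluorescence lines and the parameters are optimized against the measured spectrum.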

  13. Computing single step operators of logic programming in radial basis function neural networks

    Science.gov (United States)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (Tp:I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
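    The single-step operator itself is simple to state. The sketch below iterates T_P for a small made-up propositional program to its least fixed point, i.e. the mapping the recurrent RBF network is trained to reproduce (the network training itself is not shown).

```python
# Clauses are (head, body) pairs meaning "head <- conjunction of body atoms".
program = [
    ("a", []),           # fact: a.
    ("b", ["a"]),        # b <- a.
    ("c", ["a", "b"]),   # c <- a, b.
    ("d", ["e"]),        # d <- e.  (e is never derivable)
]

def t_p(interpretation):
    """One application of the single-step operator T_P: an atom becomes true
    iff some clause derives it from the current interpretation."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

# Iterate from the empty interpretation to the least fixed point,
# the "steady state" the recurrent network converges to.
i = set()
while t_p(i) != i:
    i = t_p(i)
print(sorted(i))  # -> ['a', 'b', 'c']
```

Encoding each atom's truth value as a network input/output turns learning this map into a standard RBF regression problem, which is where the training-set generation described above comes in.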

  14. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T{sub p}:I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  15. Computing single step operators of logic programming in radial basis function neural networks

    International Nuclear Information System (INIS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-01-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of logic programming is a mapping from ground atoms to false or true. The single step operator of any logic programming is defined as a function (T p :I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in the radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  16. System requirements and design description for the document basis database interface (DocBasis)

    International Nuclear Information System (INIS)

    Lehman, W.J.

    1997-01-01

    This document describes system requirements and the design description for the Document Basis Database Interface (DocBasis). The DocBasis application is used to manage procedures used within the tank farms. The application maintains information in a small database to track the document basis for a procedure, as well as the current version/modification level and the basis for the procedure. The basis for each procedure is substantiated by Administrative, Technical, Procedural, and Regulatory requirements. The DocBasis user interface was developed by Science Applications International Corporation (SAIC).

  17. Operating envelope to minimize probability of fractures in Zircaloy-2 pressure tubes

    International Nuclear Information System (INIS)

    Azer, N.; Wong, H.

    1994-01-01

    The failure mode of primary concern with CANDU pressure tubes is fast fracture of a through-wall axial crack resulting from delayed hydride crack growth. The application of operating envelopes to minimize the probability of fracture in Zircaloy-2 pressure tubes is demonstrated, based on Zr-2.5%Nb pressure tube experience. The technical basis for the development of the operating envelopes is also summarized. The operating envelope represents an area on the pressure-versus-temperature diagram within which the reactor may be operated without undue concern for pressure tube fracture. The envelopes presented address both normal operating conditions and the condition where a pressure tube leak has been detected. The examples in this paper illustrate the methodology and are not intended to be directly applicable to the operation of any specific reactor. The application of operating envelopes to minimize the probability of fracture in 80 mm diameter Zircaloy-2 pressure tubes has been discussed, considering both normal operating and leaking pressure tube conditions. 3 refs., 4 figs

  18. Solar Power Tower Design Basis Document, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    ZAVOICO,ALEXIS B.

    2001-07-01

    This report contains the design basis for a generic molten-salt solar power tower. A solar power tower uses a field of tracking mirrors (heliostats) that redirect sunlight onto a centrally located receiver mounted on top of a tower, which absorbs the concentrated sunlight. Molten nitrate salt, pumped from a tank at ground level, absorbs the sunlight, heating up to 565 °C. The heated salt flows back to ground level into another tank where it is stored, then is pumped through a steam generator to produce steam and make electricity. This report establishes a set of criteria upon which the next generation of solar power towers will be designed. The report contains detailed criteria for each of the major systems: Collector System, Receiver System, Thermal Storage System, Steam Generator System, Master Control System, and Electric Heat Tracing System. The Electric Power Generation System and Balance of Plant discussions are limited to interface requirements. This design basis builds on the extensive experience gained from the Solar Two project and includes potential design innovations that will improve reliability and lower technical risk. This design basis document is a living document, and several areas require trade studies and design analysis to fully complete the design basis. Project- and site-specific conditions and requirements will also resolve open To Be Determined issues.

  19. XZP + 1d and XZP + 1d-DKH basis sets for second-row elements: application to CCSD(T) zero-point vibrational energy and atomization energy calculations.

    Science.gov (United States)

    Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A

    2012-09-01

    Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5%. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates for the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal/mol) was found for SiH4.

  20. Sectors of solutions and minimal energies in classical Liouville theories for strings

    International Nuclear Information System (INIS)

    Johansson, L.; Kihlberg, A.; Marnelius, R.

    1984-01-01

    All classical solutions of the Liouville theory for strings having finite stable minimum energies are calculated explicitly together with their minimal energies. Our treatment automatically includes the set of natural solitonlike singularities described by Jorjadze, Pogrebkov, and Polivanov. Since the number of such singularities is preserved in time, a sector of solutions is not only characterized by its boundary conditions but also by its number of singularities. Thus, e.g., the Liouville theory with periodic boundary conditions has three different sectors of solutions with stable minimal energies containing zero, one, and two singularities. (Solutions with more singularities have no stable minimum energy.) It is argued that singular solutions do not make the string singular and therefore may be included in the string quantization.

  1. Low-level waste minimization at the Y-12 Plant

    Energy Technology Data Exchange (ETDEWEB)

    Koger, J. [Oak Ridge National Lab., TN (United States)

    1993-03-01

    The Y-12 Development Waste Minimization Program is used as a basis for defining new technologies and processes that produce minimal low-level wastes (hazardous, mixed, radioactive, and industrial) for the Y-12 Plant in the future and for Complex-21, and that aid in decontamination and decommissioning (D and D) efforts throughout the complex. In the past, the strategy at the Y-12 Plant was to treat the residues from the production processes using chemical treatment, incineration, compaction, and other technologies, which often generated copious quantities of additional wastes and, with the exception of highly valuable materials such as enriched uranium, incorporated very little recycle in the process. Recycle, in this context, is defined as material that is put back into the process before it enters a waste stream. Additionally, several new technology drivers have recently emerged with the changing climate in the Nuclear Weapons Complex, such as Complex-21, D and D technologies, and an increasing number of disassemblies. The hierarchies of concern in the waste minimization effort are source reduction, recycle capability, treatment simplicity, and final disposal difficulty with regard to Complex-21, disassembly efforts, D and D, and, to a lesser extent, weapons production. Source reduction can be achieved through substitution of nonhazardous materials for hazardous substances and through process changes that result in less generated waste.

  2. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and the viewer, is addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video-editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real-time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  3. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail; Pottmann, Helmut; Grohs, Philipp

    2011-01-01

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ

  4. Model's sparse representation based on reduced mixed GMsFE basis methods

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach for solving the flow problem on a coarse grid while obtaining a velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in
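    The proper orthogonal decomposition step mentioned above amounts to a truncated SVD of a snapshot matrix. The sketch below, on synthetic low-rank data (all sizes, the noise level, and the energy tolerance are assumptions for illustration), extracts a reduced basis whose dimension matches the data's intrinsic rank.

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: each column is one model output for one random parameter
# sample. Here the snapshots are synthetic, with intrinsic rank 3 plus a
# tiny perturbation standing in for discretization noise.
modes = rng.standard_normal((200, 3))
coeffs = rng.standard_normal((3, 40))
snapshots = modes @ coeffs + 1e-8 * rng.standard_normal((200, 40))

# POD: left singular vectors of the snapshot matrix, truncated where the
# cumulative singular-value "energy" reaches the tolerance.
u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
basis = u[:, :r]          # reduced, parameter-independent basis
print(r)
```

Projecting any new snapshot onto `basis` then gives the low-dimensional representation used in the online stage; the greedy/cross-validation alternative selects snapshots rather than singular vectors but serves the same purpose.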

  5. The Tucson-Melbourne Three-Body Force in a Translationally-Invariant Harmonic Oscillator Basis

    Science.gov (United States)

    Marsden, David; Navratil, Petr; Barrett, Bruce

    2000-09-01

    A translationally-invariant three-body basis set has been employed in shell model calculations on ^3H and ^3He including the Tucson-Melbourne form of the real nuclear three-body force. The basis consists of harmonic oscillators in Jacobi coordinates, explicitly avoiding the centre-of-mass drift problem in the calculations. The derivation of the three-body matrix elements and the results of large-basis effective interaction shell model calculations will be presented. References: J. L. Friar, B. F. Gibson, G. L. Payne and S. A. Coon, Few Body Systems 5, 13 (1988); P. Navratil, G. P. Kamuntavicius and B. R. Barrett, Phys. Rev. C 61, 044001 (2000).

  6. Y-12 Plant waste minimization strategy

    International Nuclear Information System (INIS)

    Kane, M.A.

    1987-01-01

    The 1984 Amendments to the Resource Conservation and Recovery Act (RCRA) mandate that waste minimization be a major element of hazardous waste management. In response to this mandate and the increasing costs for waste treatment, storage, and disposal, the Oak Ridge Y-12 Plant developed a waste minimization program to encompass all types of wastes. Thus, waste minimization has become an integral part of the overall waste management program. Unlike traditional approaches, waste minimization focuses on controlling waste at the beginning of production instead of the end. This approach includes: (1) substituting nonhazardous process materials for hazardous ones, (2) recycling or reusing waste effluents, (3) segregating nonhazardous waste from hazardous and radioactive waste, and (4) modifying processes to generate less waste or less toxic waste. An effective waste minimization program must provide the appropriate incentives for generators to reduce their waste and provide the necessary support mechanisms to identify opportunities for waste minimization. This presentation focuses on the Y-12 Plant's strategy to implement a comprehensive waste minimization program. This approach consists of four major program elements: (1) promotional campaign, (2) process evaluation for waste minimization opportunities, (3) waste generation tracking system, and (4) information exchange network. The presentation also examines some of the accomplishments of the program and issues which need to be resolved

  7. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  8. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four-dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity.

  9. 26 CFR 1.1014-4 - Uniformity of basis; adjustment to basis.

    Science.gov (United States)

    2010-04-01

    ...) INCOME TAX (CONTINUED) INCOME TAXES Basis Rules of General Application § 1.1014-4 Uniformity of basis... to property acquired by bequest, devise, or inheritance relate back to the death of the decedent... prescribing a general uniform basis rule for property acquired from a decedent is, on the one hand, to tax the...

  10. NP-hardness of the cluster minimization problem revisited

    Science.gov (United States)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
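
    For concreteness, the kind of "physical" pairwise potential the argument concerns — one depending only on the geometric distance between particles — can be illustrated with the standard Lennard-Jones cluster energy (the choice of potential is ours for illustration, not the paper's):

```python
import numpy as np
from itertools import combinations

def lj_energy(coords):
    """Total pairwise Lennard-Jones energy of a cluster: each pair
    contributes 4*(r**-12 - r**-6), a function of geometric distance only."""
    e = 0.0
    for i, j in combinations(range(len(coords)), 2):
        r = np.linalg.norm(coords[i] - coords[j])
        e += 4.0 * (r**-12 - r**-6)
    return e

# A dimer at the pair-equilibrium separation 2**(1/6) has energy -1; the
# hard part, and the subject of the complexity result, is finding the
# configuration that minimizes this energy for many particles.
dimer = np.array([[0.0, 0.0, 0.0], [2.0**(1.0 / 6.0), 0.0, 0.0]])
print(lj_energy(dimer))   # close to -1.0
```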

  11. NP-hardness of the cluster minimization problem revisited

    International Nuclear Information System (INIS)

    Adib, Artur B

    2005-01-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested

  12. NP-hardness of the cluster minimization problem revisited

    Energy Technology Data Exchange (ETDEWEB)

    Adib, Artur B [Physics Department, Brown University, Providence, RI 02912 (United States)

    2005-10-07

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  13. Minimal performances of high Tc wires for cost effective SMES compared with low Tc's

    International Nuclear Information System (INIS)

    Levillain, C.; Therond, P.G.

    1996-01-01

    On the basis of a 22 MJ/10 MVA unit without stray field, the authors determine the minimal performances required of high-Tc superconducting (HTS) wires for an HTS Superconducting Magnetic Energy Storage (SMES) system to be competitive with low-Tc superconducting (LTS) ones. The cost equation mainly considers the wire volume, the fabrication process and losses. They then recommend HTS critical current densities and operating magnetic fields close to the present state of the art for short samples. A 30% gain for HTS SMES compared with LTS ones could be expected

  14. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Full Text Available Minimally invasive procedures aim to resolve disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision based on the literature, exemplified with a case. The case in question describes reconstruction of intestinal transit through such an incision. A young, HIV-positive male patient was in the late postoperative period after ileotiflectomy, terminal ileostomy and closure of the ascending colon for an acute perforated abdomen due to ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. Access to the cavity was gained through the orifice left by release of the stoma, with a side-to-side ileocolonic anastomosis using a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction, and hemostasis, and demand surgeon dexterity – but without the need for investments in technology; moreover, the learning curve is reported as being shorter than that for videolaparoscopy. Laparotomy with minimal incision should be considered a valid and viable option in the treatment of surgical conditions.

  15. Roothaan's approach to solve the Hartree-Fock equations for atoms confined by soft walls: Basis set with correct asymptotic behavior.

    Science.gov (United States)

    Rodriguez-Bautista, Mariano; Díaz-García, Cecilia; Navarrete-López, Alejandra M; Vargas, Rubicelia; Garza, Jorge

    2015-07-21

    In this report, we use a new basis set for Hartree-Fock calculations on many-electron atoms confined by soft walls. One- and two-electron integrals were programmed in a code based on parallel programming techniques. The results obtained with this proposal for hydrogen and helium atoms were contrasted with previous work on one- and two-electron confined atoms, where we reproduced or improved the results reported earlier. Usually, an atom enclosed by hard walls has been used as a model to study confinement effects on orbital energies; the main conclusion reached with this model is that orbital energies always rise when the confinement radius is reduced. However, such an observation is not necessarily valid for atoms confined by penetrable walls. The main reason is that for atoms with large polarizability, like beryllium or potassium, the external orbitals delocalize when the confinement is imposed and, consequently, the internal orbitals behave as if they were in an ionized atom. Naturally, the shell structure of these atoms is modified drastically when they are confined. Delocalization had been proposed as an argument for atoms confined by hard walls, but it was never verified. In this work, the confinement imposed by soft walls allows us to analyze the delocalization concept in many-electron atoms.

  16. FPGA Dynamic Power Minimization through Placement and Routing Constraints

    Directory of Open Access Journals (Sweden)

    Deepak Agarwal

    2006-08-01

    Full Text Available Field-programmable gate arrays (FPGAs are pervasive in embedded systems requiring low-power utilization. A novel power optimization methodology for reducing the dynamic power consumed by the routing of FPGA circuits by modifying the constraints applied to existing commercial tool sets is presented. The power optimization techniques influence commercial FPGA Place and Route (PAR tools by translating power goals into standard throughput and placement-based constraints. The Low-Power Intelligent Tool Environment (LITE is presented, which was developed to support the experimentation of power models and power optimization algorithms. The generated constraints seek to implement one of four power optimization approaches: slack minimization, clock tree paring, N-terminal net colocation, and area minimization. In an experimental study, we optimize dynamic power of circuits mapped into 0.12 μm Xilinx Virtex-II FPGAs. Results show that several optimization algorithms can be combined on a single design, and power is reduced by up to 19.4%, with an average power savings of 10.2%.

  17. Predicting the minimal translation apparatus: lessons from the reductive evolution of mollicutes.

    Directory of Open Access Journals (Sweden)

    Henri Grosjean

    2014-05-01

    Full Text Available Mollicutes is a class of parasitic bacteria that have evolved from a common Firmicutes ancestor mostly by massive genome reduction. With genomes under 1 Mbp in size, most Mollicutes species retain the capacity to replicate and grow autonomously. The major goal of this work was to identify the minimal set of proteins that can sustain ribosome biogenesis and translation of the genetic code in these bacteria. Using the experimentally validated genes from the model bacteria Escherichia coli and Bacillus subtilis as input, genes encoding proteins of the core translation machinery were predicted in 39 distinct Mollicutes species, 33 of which are culturable. The set of 260 input genes encodes proteins involved in ribosome biogenesis, tRNA maturation and aminoacylation, as well as protein cofactors required for mRNA translation and RNA decay. A core set of 104 of these proteins is found in all species analyzed. Genes encoding proteins involved in post-translational modification of ribosomal proteins and translation cofactors, post-transcriptional modification of tRNA and rRNA, ribosome assembly, and RNA degradation are the most frequently lost. As expected, genes coding for aminoacyl-tRNA synthetases, ribosomal proteins, and initiation, elongation and termination factors are the most persistent (i.e., conserved in a majority of genomes). Enzymes introducing nucleotide modifications in the anticodon loop of tRNA, in helix 44 of 16S rRNA and in helices 69 and 80 of 23S rRNA, all essential for decoding and facilitating peptidyl transfer, are maintained in all species. Reconstruction of genome evolution in Mollicutes revealed that, besides many gene losses, occasional gains by horizontal gene transfer also occurred. 
This analysis not only showed that slightly different solutions for preserving a functional, albeit minimal, protein synthetizing machinery have emerged in these successive rounds of reductive evolution but also has broad implications in guiding the
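
    The core-set computation described above is, at heart, an intersection of presence/absence profiles across genomes. A toy sketch (species and gene names are placeholders, not the paper's data):

```python
# Presence/absence of translation genes across genomes (toy placeholder
# data; the paper's analysis covers 260 genes in 39 Mollicutes species).
genomes = {
    "M_pneumoniae": {"rpsA", "rplB", "gltX", "trmD"},
    "M_genitalium": {"rpsA", "rplB", "gltX"},
    "S_citri":      {"rpsA", "rplB", "trmD"},
}

# Core set: genes retained in every genome (cf. the 104-protein core).
core = set.intersection(*genomes.values())

# Persistence: fraction of genomes retaining each gene.
all_genes = set().union(*genomes.values())
persistence = {g: sum(g in kept for kept in genomes.values()) / len(genomes)
               for g in all_genes}

print(sorted(core))   # ['rplB', 'rpsA']
```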

  18. Integrating and scheduling an open set of static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Mezini, Mira; Kloppenburg, Sven

    2006-01-01

    …to keep the set of analyses open. We propose an approach to integrating and scheduling an open set of static analyses which decouples the individual analyses and coordinates their execution such that the overall time and space consumption is minimized. The approach has been implemented for the Eclipse IDE and has been used to integrate a wide range of analyses, such as finding bug patterns, detecting violations of design guidelines, and type-system extensions for Java.

  19. Self-duality in Maxwell-Chern-Simons theories with non minimal coupling with matter field

    CERN Document Server

    Chandelier, F; Masson, T; Wallet, J C

    2000-01-01

    We consider a general class of non-local MCS models whose usual minimal coupling to a conserved current is supplemented with a (non-minimal) magnetic Pauli-type coupling. We find that the considered models exhibit a self-duality whenever the magnetic coupling constant reaches a special value: the partition function is invariant under a set of transformations among the parameter space (the duality transformations) while the original action and its dual counterpart have the same form. The duality transformations have a structure similar to the one underlying the self-duality of the (2+1)-dimensional Z_n Abelian Higgs model with Chern-Simons and bare mass term.

  20. Contribution to computer aided design of digital circuits - Minimization of alphanumeric expressions - Program CHOPIN

    International Nuclear Information System (INIS)

    Blanca, Ernest

    1974-10-01

    Alphanumeric Boolean expressions, written as sums of products and/or products of sums with many brackets, may be minimized in two steps: a syntactic recognition analysis using an operator-precedence grammar, followed by a syntactic reduction analysis. These two execution phases and the corresponding programs of the machine algorithm are described. Examples of the minimization of alphanumeric Boolean expressions written with brackets, usage notes for the program CHOPIN, and theoretical considerations related to languages, grammars, operator-precedence grammars, sequential systems, Boolean sets, Boolean representations and treatment of Boolean expressions, and Boolean matrices and their use in grammar theory are discussed. (author) [fr]
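
    The reduction step — minimizing a bracketed sum-of-products — can be reproduced today with off-the-shelf tools; a sketch using SymPy's simplify_logic rather than CHOPIN itself:

```python
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

a, b, c = symbols('a b c')

# A bracketed sum-of-products with redundant terms, of the kind CHOPIN
# accepted in alphanumeric form: (a AND b) OR (a AND NOT b) OR (a AND c).
expr = (a & b) | (a & ~b) | (a & c)

# Minimize to disjunctive normal form: the whole expression collapses to a.
minimized = simplify_logic(expr, form='dnf')
print(minimized)   # a
```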

  1. Topological cell decomposition and dimension theory in P-minimal fields

    OpenAIRE

    Cubides-Kovacsics, Pablo; Darnière, Luck; Leenknegt, Eva

    2015-01-01

    This paper addresses some questions about dimension theory for P-minimal structures. We show that, for any definable set A, the dimension of the frontier of A is strictly smaller than the dimension of A itself, and that A has a decomposition into definable, pure-dimensional components. This is then used to show that the intersection of finitely many definable dense subsets of A is still dense in A. As an application, we obtain that any m-ary definable function is continuous on a dense, relati...

  2. Safety Basis Report

    International Nuclear Information System (INIS)

    R.J. Garrett

    2002-01-01

    As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities

  3. Safety Basis Report

    Energy Technology Data Exchange (ETDEWEB)

    R.J. Garrett

    2002-01-14

    As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities.

  4. Interoperative efficiency in minimally invasive surgery suites.

    Science.gov (United States)

    van Det, M J; Meijerink, W J H J; Hoff, C; Pierie, J P E N

    2009-10-01

    Performing minimally invasive surgery (MIS) in a conventional operating room (OR) requires additional specialized equipment otherwise stored outside the OR. Before the procedure, the OR team must collect, prepare, and connect the equipment, then take it away afterward. These extra tasks pose a threat to OR efficiency and may lengthen turnover times. The dedicated MIS suite has permanently installed laparoscopic equipment that is operational on demand. This study presents two experiments that quantify the superior efficiency of the MIS suite in the interoperative period. Preoperative setup and postoperative breakdown times in the conventional OR and the MIS suite were analyzed in an experimental setting and in daily practice. In the experimental setting, randomly chosen OR teams simulated the setup and breakdown for a standard laparoscopic cholecystectomy (LC) and a complex laparoscopic sigmoid resection (LS). In the clinical setting, the interoperative period for 66 LCs randomly assigned to the conventional OR or the MIS suite was analyzed. In the experimental setting, the setup and breakdown times were significantly shorter in the MIS suite. The difference between the two types of OR increased for the complex procedure: 2:41 min for the LC (p < 0.001) and 10:47 min for the LS (p < 0.001). In the clinical setting, the setup and breakdown times as a whole were not reduced in the MIS suite. Laparoscopic setup and breakdown times were significantly shorter in the MIS suite (mean difference, 5:39 min; p < 0.001). Efficiency during the interoperative period is significantly improved in the MIS suite. The OR nurses' tasks are relieved, which may reduce mental and physical workload and improve job satisfaction and patient safety. Due to simultaneous tasks of other disciplines, an overall turnover time reduction could not be achieved.

  5. Fuzzy GML Modeling Based on Vague Soft Sets

    Directory of Open Access Journals (Sweden)

    Bo Wei

    2017-01-01

    Full Text Available The Open Geospatial Consortium (OGC) Geography Markup Language (GML) explicitly represents geographical spatial knowledge in text mode. All kinds of fuzzy problems are inevitably encountered in spatial knowledge expression, especially for expressions in text mode, where this fuzziness is broader; describing and representing fuzziness in GML therefore seems necessary. Three kinds of fuzziness can be found in GML: element fuzziness, chain fuzziness, and attribute fuzziness. Element fuzziness and chain fuzziness both reflect fuzziness between GML elements, so the representation of chain fuzziness can be replaced by the representation of element fuzziness in GML. On the basis of vague soft set theory, two kinds of modeling, vague soft set GML Document Type Definition (DTD) modeling and vague soft set GML schema modeling, are proposed for fuzzy modeling in the GML DTD and GML schema, respectively. Five elements or pairs, associated with vague soft sets, are introduced. The DTDs and schemas of the five elements are then correspondingly designed and presented according to their different chains and different fuzzy data types. While the introduction of the five elements or pairs is the basis of vague soft set GML modeling, the corresponding DTD and schema modifications are key to its implementation. The establishment of vague soft set GML enables GML to represent fuzziness and addresses GML's lack of fuzzy information expression.
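
    The underlying data structure — a vague soft set mapping each parameter to interval-valued memberships — can be sketched as follows (parameter and element names are invented for illustration):

```python
# A vague set gives each element an interval membership [t, 1 - f] with
# t + f <= 1 (t = truth, f = false membership); a vague soft set maps each
# parameter to such a vague set.
def interval(t, f):
    assert 0.0 <= t and 0.0 <= f and t + f <= 1.0
    return (t, 1.0 - f)

# Hypothetical geospatial example: parameters over two land parcels.
vague_soft_set = {
    "near_river": {"parcel_a": (0.7, 0.1), "parcel_b": (0.2, 0.6)},
    "steep":      {"parcel_a": (0.1, 0.8), "parcel_b": (0.5, 0.3)},
}

for param, vset in vague_soft_set.items():
    for elem, (t, f) in vset.items():
        lo, hi = interval(t, f)
        print(f"{param}/{elem}: membership in [{lo:.1f}, {hi:.1f}]")
```

    The GML DTD/schema modeling in the paper serializes exactly this kind of (parameter, element, interval) structure as markup.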

  6. Enhanced Recovery Pathways for Improving Outcomes After Minimally Invasive Gynecologic Oncology Surgery.

    Science.gov (United States)

    Chapman, Jocelyn S; Roddy, Erika; Ueda, Stefanie; Brooks, Rebecca; Chen, Lee-Lynn; Chen, Lee-May

    2016-07-01

    To estimate whether an enhanced recovery after surgery pathway facilitates early recovery and discharge in gynecologic oncology patients undergoing minimally invasive surgery. This was a retrospective case-control study. Consecutive gynecologic oncology patients undergoing laparoscopic or robotic surgery between July 1 and November 5, 2014, were treated on an enhanced recovery pathway. Enhanced recovery pathway components included patient education, multimodal analgesia, opioid minimization, and nausea prophylaxis, as well as early catheter removal, ambulation, and feeding. Cases were matched in a one-to-two ratio with historical control patients on the basis of surgery type and age. Primary endpoints were length of hospital stay, rates of discharge by noon, 30-day hospital readmission rates, and hospital costs. There were 165 patients included in the final cohort, 55 of whom were enhanced recovery pathway patients. Enhanced recovery patients were more likely to be discharged on postoperative day 1 than patients in the control group (91% compared with 60%, P<.001), and length of stay was shorter than for control patients (P=.03). Postoperative pain scores decreased (2.6 compared with 3.12, P=.03) despite a 30% reduction in opioid use. Average total hospital costs were decreased by 12% in the enhanced recovery group ($13,771 compared with $15,649, P=.01). Readmission rates, mortality, and reoperation rates did not differ between the two groups. An enhanced recovery pathway in patients undergoing gynecologic oncology minimally invasive surgery is associated with significant improvements in recovery time, decreased pain despite reduced opioid use, and overall lower hospital costs.

  7. The pointer basis and the feedback stabilization of quantum systems

    International Nuclear Information System (INIS)

    Li, L; Chia, A; Wiseman, H M

    2014-01-01

    The dynamics for an open quantum system can be ‘unravelled’ in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states, evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere (Atkins et al 2005 Europhys. Lett. 69 163) that the ‘pointer basis’ as introduced by Zurek et al (1993 Phys. Rev. Lett. 70 1187), should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However if the feedback control is weak compared to the decoherence, this is not the case. (paper)

  8. The basis of clinical tribalism, hierarchy and stereotyping: a laboratory-controlled teamwork experiment

    OpenAIRE

    Braithwaite, Jeffrey; Clay-Williams, Robyn; Vecellio, Elia; Marks, Danielle; Hooper, Tamara; Westbrook, Mary; Westbrook, Johanna; Blakely, Brette; Ludlow, Kristiana

    2016-01-01

    Objectives To examine the basis of multidisciplinary teamwork. In real-world healthcare settings, clinicians often cluster in profession-based tribal silos, form hierarchies and exhibit stereotypical behaviours. It is not clear whether these social structures are more a product of inherent characteristics of the individuals or groups comprising the professions, or attributable to a greater extent to workplace factors. Setting Controlled laboratory environment with well-appointed, quiet rooms ...

  9. Minimal Flavour Violation and Beyond

    CERN Document Server

    Isidori, Gino

    2012-01-01

    We review the formulation of the Minimal Flavour Violation (MFV) hypothesis in the quark sector, as well as some "variations on a theme" based on smaller flavour symmetry groups and/or less minimal breaking terms. We also review how these hypotheses can be tested in B decays and by means of other flavour-physics observables. The phenomenological consequences of MFV are discussed both in general terms, employing a general effective theory approach, and in the specific context of the Minimal Supersymmetric extension of the SM.

  10. Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis

    Science.gov (United States)

    Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg

    2017-03-01

    We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses quartic scaling, O(N^4), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals, which can be elegantly achieved in the Laplace-transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second-order screened exchange as well as particle-hole ladder diagrams with a similarly low complexity. Hence, the presented method can be considered a step towards systematically improved correlation energies.
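
    The scaling reduction rests on the Laplace identity 1/x = ∫₀^∞ e^(-xt) dt, which replaces the positive orbital-energy denominator by a short sum of separable exponentials, letting the virtual-orbital sums factorize. A minimal numerical check of that quadrature idea (the grid here is illustrative, not the paper's):

```python
import numpy as np

# MP2 energy denominators x = e_a + e_b - e_i - e_j are positive, and
# 1/x = \int_0^inf exp(-x t) dt.  A modest quadrature already makes the
# identity numerically exact.
x = 0.7   # a representative denominator (illustrative value, in hartree)

# Gauss-Legendre quadrature on t in [0, 50], mapped from [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(60)
t = 25.0 * (nodes + 1.0)
w = 25.0 * weights

approx = np.sum(w * np.exp(-x * t))
assert abs(approx - 1.0 / x) < 1e-6   # quadrature reproduces 1/x
```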

  11. Mixed waste and waste minimization: The effect of regulations and waste minimization on the laboratory

    International Nuclear Information System (INIS)

    Dagan, E.B.; Selby, K.B.

    1993-08-01

    The Hanford Site is located in the State of Washington and is subject to state and federal environmental regulations that hamper waste minimization efforts. This paper addresses the negative effect of these regulations on waste minimization and mixed waste issues related to the Hanford Site. Also, issues are addressed concerning the regulations becoming more lenient. In addition to field operations, the Hanford Site is home to the Pacific Northwest Laboratory which has many ongoing waste minimization activities of particular interest to laboratories

  12. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

    Full Text Available A novel method has been devised whereby one set of higher-order polynomial-based basis functions can be applied over several wire segments, thus permitting the number of unknowns to be decoupled from the number of segments, and so from the geometrical...
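The decoupling idea, one smooth higher-order expansion spanning many segments instead of one unknown per segment, can be illustrated with a hypothetical current profile. This is only an illustration of why fewer unknowns can suffice, not the authors' Method-of-Moments formulation:

```python
import numpy as np

# Hypothetical smooth current profile sampled along a wire of 5 segments
x = np.linspace(0.0, 1.0, 200)
current = np.sin(np.pi * x)          # stand-in for the true current

# (a) one constant unknown per segment -> 5 unknowns
pieces = np.array_split(current, 5)
pw = np.concatenate([np.full_like(p, p.mean()) for p in pieces])
err_pw = np.sqrt(np.mean((current - pw) ** 2))

# (b) a single degree-4 Legendre expansion over all segments -> also 5 unknowns
fit = np.polynomial.legendre.Legendre.fit(x, current, deg=4)
err_leg = np.sqrt(np.mean((current - fit(x)) ** 2))

print(err_pw, err_leg)
```

With the same number of unknowns, the single higher-order expansion represents the smooth profile far more accurately, which is the motivation for spanning multiple segments with one basis set.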

  13. Beam-hardening correction in CT based on basis image and TV model

    International Nuclear Information System (INIS)

    Li Qingliang; Yan Bin; Li Lei; Sun Hongsheng; Zhang Feng

    2012-01-01

    In X-ray computed tomography, beam hardening leads to artifacts and reduces image quality. The paper analyzes how beam hardening distorts the original projections and accordingly puts forward a new beam-hardening correction method based on basis images and a total-variation (TV) model. Firstly, a preliminary correction model with adjustable parameters is set up according to the physical characteristics of beam hardening. Secondly, the original projections are processed by the correction model using different parameters. Thirdly, the corrected projections are reconstructed to obtain a series of basis images. Finally, the final reconstruction image is taken as a linear combination of the basis images; with the total variation of the final image as the cost function, the combination coefficients are determined by an iterative method. To verify the effectiveness of the proposed method, experiments are carried out on a real phantom and an industrial part. The results show that the algorithm significantly suppresses cupping and streak artifacts in the CT image. (authors)
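The final step, choosing combination coefficients that minimize the total variation of the combined image, can be sketched on synthetic basis images. The phantom, the cupping profile, and the grid search below are all illustrative assumptions, not the authors' data or iterative scheme:

```python
import numpy as np

def total_variation(img):
    # anisotropic TV: sum of absolute finite differences along both axes
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

# two synthetic basis images: the true piecewise-constant object plus/minus
# a smooth cupping artifact of opposite sign
phantom = np.zeros((32, 32))
phantom[8:24, 8:24] = 1.0
cup = np.fromfunction(lambda i, j: ((i - 16)**2 + (j - 16)**2) / 512.0, (32, 32))
basis = [phantom + 0.5 * cup, phantom - 0.5 * cup]

# grid search over convex weights; the TV-minimal mix cancels the artifact
weights = np.linspace(0.0, 1.0, 101)
tvs = [total_variation(w * basis[0] + (1 - w) * basis[1]) for w in weights]
best_w = weights[int(np.argmin(tvs))]
print(best_w)
```

Because the cupping term is smooth, it contributes TV everywhere in the flat regions, so the TV-minimal combination is the one in which the artifact cancels (here w = 0.5).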

  14. AVC: Selecting discriminative features on basis of AUC by maximizing variable complementarity.

    Science.gov (United States)

    Sun, Lei; Wang, Jun; Wei, Jinmao

    2017-03-14

    The Receiver Operator Characteristic (ROC) curve is well known for evaluating classification performance in the biomedical field. Owing to its superiority in dealing with imbalanced and cost-sensitive data, the ROC curve has been exploited as a popular metric to evaluate and identify disease-related genes (features). The existing ROC-based feature selection approaches are simple and effective in evaluating individual features. However, these approaches may fail to find the real target feature subset because they lack effective means to reduce redundancy between features, which is essential in machine learning. In this paper, we propose to assess feature complementarity by measuring the distances between misclassified instances and their nearest misses on the dimensions of pairwise features. If a misclassified instance and its nearest miss on one feature dimension are far apart on another feature dimension, the two features are regarded as complementary to each other. On this basis, we propose a novel filter feature selection approach built on ROC analysis. The new approach employs an efficient heuristic search strategy to select optimal features with the highest complementarity. Experimental results on a broad range of microarray data sets validate that classifiers built on the feature subset selected by our approach achieve a minimal balanced error rate with a small number of significant features. Compared with other ROC-based feature selection approaches, our new approach selects fewer features and effectively improves classification performance.
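The per-feature AUC scores that such filters start from can be computed directly with the Mann-Whitney statistic. This is a generic illustration with toy data; the paper's complementarity term, built on nearest-miss distances, sits on top of scores like these:

```python
import numpy as np

def auc(scores, labels):
    """AUC of a single feature via the Mann-Whitney U statistic:
    the probability that a random positive sample outscores a random negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# rank features of a toy expression matrix by individual AUC
X = np.array([[5.1, 0.2], [4.9, 0.1], [1.0, 0.3], [1.2, 0.2]])  # samples x genes
y = np.array([1, 1, 0, 0])
ranking = sorted(range(X.shape[1]), key=lambda j: auc(X[:, j], y), reverse=True)
print(ranking)
```

Ranking by individual AUC alone is exactly the baseline the paper criticizes: it ignores redundancy and complementarity between features, which is what the proposed nearest-miss criterion adds.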

  15. Determination of many-electron basis functions for a quantum Hall ground state using Schur polynomials

    Science.gov (United States)

    Mandal, Sudhansu S.; Mukherjee, Sutirtha; Ray, Koushik

    2018-03-01

    A method for determining the ground state of a planar interacting many-electron system in a magnetic field perpendicular to the plane is described. The ground-state wave function is expressed as a linear combination of a set of basis functions. Given only the flux and the number of electrons describing an incompressible state, we use the combinatorics of partitioning the flux among the electrons to derive the basis wave functions as linear combinations of Schur polynomials. The procedure ensures that the basis wave functions form representations of the angular momentum algebra. We exemplify the method by deriving the basis functions for the 5/2 quantum Hall state with a few particles. We find that one of the basis functions is precisely the Moore-Read Pfaffian wave function.
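For context, the Schur polynomial for a partition $\lambda = (\lambda_1,\dots,\lambda_N)$ of the flux among $N$ electrons is the standard bialternant ratio of determinants (the textbook definition, not a formula specific to this paper):

```latex
s_\lambda(x_1,\dots,x_N)
  = \frac{\det\!\big(x_i^{\,\lambda_j + N - j}\big)_{1\le i,j\le N}}
         {\det\!\big(x_i^{\,N - j}\big)_{1\le i,j\le N}}
```

The denominator is the Vandermonde determinant $\prod_{i<j}(x_i - x_j)$, and $s_\lambda$ is symmetric and homogeneous of degree $|\lambda| = \sum_j \lambda_j$, so fixing the total flux and angular momentum restricts which partitions can enter a given basis function.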

  16. 34 CFR 75.232 - The cost analysis; basis for grant amount.

    Science.gov (United States)

    2010-07-01

    ... Secretary sets the amount of a new grant, the Secretary does a cost analysis of the project. The Secretary... objectives of the project with reasonable efficiency and economy under the budget in the application... [34 CFR (Education), Vol. 1, revised as of 2010-07-01: § 75.232, The cost analysis; basis for grant amount.]

  17. Taxonomic minimalism.

    Science.gov (United States)

    Beattie, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job, but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey, but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  18. Minimizing waste in environmental restoration

    International Nuclear Information System (INIS)

    Thuot, J.R.; Moos, L.

    1996-01-01

    Environmental restoration, decontamination and decommissioning, and facility dismantlement projects are not typically known for their waste minimization and pollution prevention efforts. Typical projects are driven by schedules and milestones, with little attention given to cost or waste minimization. Conventional wisdom in these projects is that the waste already exists and cannot be reduced or minimized; however, there are significant areas where waste and cost can be reduced by careful planning and execution. Waste reduction can occur in three ways: beneficial reuse or recycling, segregation of waste types, and reduced generation of secondary waste.

  19. Quantization of the minimal and non-minimal vector field in curved space

    OpenAIRE

    Toms, David J.

    2015-01-01

    The local momentum space method is used to study the quantized massive vector field (the Proca field) with the possible addition of non-minimal terms. Heat kernel coefficients are calculated and used to evaluate the divergent part of the one-loop effective action. It is shown that the naive expression for the effective action that one would write down based on the minimal coupling case needs modification. We adopt a Faddeev-Jackiw method of quantization and consider the case of an ultrastatic...

  20. Patient set-up verification by infrared optical localization and body surface sensing in breast radiation therapy

    International Nuclear Information System (INIS)

    Spadea, Maria Francesca; Baroni, Guido; Riboldi, Marco; Orecchia, Roberto; Pedotti, Antonio; Tagaste, Barbara; Garibaldi, Cristina

    2006-01-01

    Background and purpose: The aim of the study was to investigate the clinical application of a technique for patient set-up verification in breast cancer radiotherapy, based on the 3D localization of a hybrid configuration of surface control points. Materials and methods: An infrared optical tracker provided the 3D positions of two passive markers and 10 laser spots placed around and within the irradiation field on nine patients. A fast iterative constrained minimization procedure was applied to detect and compensate patient set-up errors, through registration of the control points with reference data coming from the treatment plan (marker reference positions, CT-based surface model). Results: The application of the corrective spatial transformation estimated by the registration procedure led to significant improvement of patient set-up. The median 3D error affecting three additional verification markers within the irradiation field decreased from 5.7 to 3.5 mm. The error variability (25th-75th percentile) decreased from 3.2 to 2.1 mm. Registration of the laser spots on the reference surface model was documented to contribute substantially to set-up error compensation. Conclusions: Patient set-up verification through a hybrid set of control points and a constrained surface minimization algorithm was confirmed to be feasible in clinical practice and to provide valuable information for improving the quality of patient set-up, with minimal requirement for operator-dependent procedures. The technique conveniently combines the advantages of passive-marker-based methods and surface registration techniques, featuring immediate and robust estimation of set-up accuracy from a redundant dataset.
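The core registration step, rigidly aligning measured control points to their reference positions, can be illustrated with a plain least-squares (Kabsch/Procrustes) fit. The authors' constrained iterative scheme adds surface constraints and outlier handling not reproduced in this sketch, and the marker geometry below is synthetic:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~ q
    (Kabsch algorithm); P and Q are matched (n, 3) point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# simulate a set-up error: markers displaced by a known rotation + translation
rng = np.random.default_rng(3)
ref = rng.normal(size=(12, 3))                 # reference control points (from CT)
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
measured = ref @ R_true.T + np.array([4.0, -2.0, 1.0])

R_est, t_est = rigid_align(ref, measured)
residual = np.abs(ref @ R_est.T + t_est - measured).max()
print(residual)
```

Inverting the estimated transform gives the couch correction; the redundancy of 12 control points is what makes the estimate robust, echoing the paper's use of a redundant dataset.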