WorldWideScience

Sample records for large-scale shell-model calculations

  1. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

Results of large-scale shell-model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation-theory treatment in an SU(3) basis including 2ℏω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.)
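The comparison above rests on the idea that low-order perturbation theory in a well-chosen starting basis can approach the result of a full diagonalization. The following minimal sketch illustrates that idea on a small model Hamiltonian; the matrix, its dimension and the perturbation strength are invented for illustration and have nothing to do with the actual SU(3) shell-model Hamiltonians of the record.

```python
# Toy comparison of second-order perturbation theory with exact diagonalization
# for a small model Hamiltonian H = H0 + V (illustrative sketch only).
import numpy as np

dim = 20
rng = np.random.default_rng(0)

# Diagonal unperturbed part with well-separated levels, plus a weak symmetric perturbation.
h0 = np.diag(np.linspace(0.0, 10.0, dim))
v = rng.normal(scale=0.1, size=(dim, dim))
v = 0.5 * (v + v.T)            # keep the Hamiltonian Hermitian
h = h0 + v

# Exact ground-state energy from full diagonalization.
e_exact = np.linalg.eigvalsh(h)[0]

# Rayleigh-Schroedinger perturbation theory for the lowest unperturbed state.
e0 = h0[0, 0]
e_pt1 = e0 + v[0, 0]
e_pt2 = e_pt1 + sum(v[0, k] ** 2 / (e0 - h0[k, k]) for k in range(1, dim))

print(f"exact: {e_exact:.4f}  PT2: {e_pt2:.4f}  error: {abs(e_exact - e_pt2):.4f}")
```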

  2. Large-scale shell model calculations for the N=126 isotones Po-Pu

    International Nuclear Information System (INIS)

    Caurier, E.; Rejmund, M.; Grawe, H.

    2003-04-01

Large-scale shell-model calculations were performed in the full Z=82-126 proton model space π(0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, 2p1/2) employing the code NATHAN. The modified Kuo-Herling interaction was used; no truncation was applied up to protactinium (Z=91), and a seniority truncation was used beyond. The results are compared to experimental data including binding energies, level schemes and electromagnetic transition rates. An overall excellent agreement is obtained for states that can be described in this model space. Limitations of the approach with respect to excitations across the Z=82 and N=126 shells and deficiencies of the interaction are discussed. (orig.)

  3. Large scale shell model calculations: the physics in and the physics out

    International Nuclear Information System (INIS)

    Zuker, A.P.

    1997-01-01

After giving a few examples of recent results of the (SM)² collaboration, the monopole-modified realistic interactions to be used in shell model calculations are described and analyzed. Rotational motion is discussed in some detail, and some introductory remarks on level densities are made. (orig.)

  4. Structure of exotic nuclei by large-scale shell model calculations

    International Nuclear Information System (INIS)

    Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio

    2006-01-01

An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20, including the disappearance of the magic number, is reproduced systematically, as exemplified by the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in exotic nuclei from a general viewpoint, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications of the interaction, and predict the ground state of 42Si to contain a large deformed component.

  5. Large scale GW calculations

    International Nuclear Information System (INIS)

    Govoni, Marco; Argonne National Lab., Argonne, IL; Galli, Giulia; Argonne National Lab., Argonne, IL

    2015-01-01

We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and require neither the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single-particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  6. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  7. Approximate symmetries in atomic nuclei from a large-scale shell-model perspective

    Science.gov (United States)

    Launey, K. D.; Draayer, J. P.; Dytrych, T.; Sun, G.-H.; Dong, S.-H.

    2015-05-01

In this paper, we review recent developments that aim to achieve further understanding of the structure of atomic nuclei, by capitalizing on exact symmetries as well as approximate symmetries found to dominate low-lying nuclear states. The findings confirm the essential role played by the Sp(3, ℝ) symplectic symmetry to inform the interaction and the relevant model spaces in nuclear modeling. The significance of the Sp(3, ℝ) symmetry for a description of a quantum system of strongly interacting particles naturally emerges from the physical relevance of its generators, which directly relate to particle momentum and position coordinates, and represent important observables, such as the many-particle kinetic energy, the monopole operator, the quadrupole moment and the angular momentum. We show that it is imperative that shell-model spaces be expanded well beyond the current limits to accommodate particle excitations that appear critical to enhanced collectivity in heavier systems and to highly-deformed spatial structures, exemplified by the second 0+ state in 12C (the challenging Hoyle state) and 8Be. While such states are presently inaccessible by large-scale no-core shell models, symmetry-based considerations are found to be essential.

  8. Shell model calculations for exotic nuclei

    International Nuclear Information System (INIS)

    Brown, B.A.; Wildenthal, B.H.

    1991-01-01

A review of the shell-model approach to understanding the properties of light exotic nuclei is given. Binding energies in the p and p-sd model spaces and in the sd and sd-pf model spaces; cross-shell excitations around 32Mg, including weak-coupling aspects and mechanisms for lowering the nℏω excitations; beta-decay properties of neutron-rich nuclei in the sd, p-sd and sd-pf model spaces and of proton-rich nuclei in the sd model space; and Coulomb break-up cross sections are discussed. (G.P.) 76 refs.; 12 figs

  9. Large scale calculations for hadron spectroscopy

    International Nuclear Information System (INIS)

    Rebbi, C.

    1985-01-01

The talk reviews some recent Monte Carlo calculations for Quantum Chromodynamics, performed on Euclidean lattices of rather large extent. The purpose of the calculations is to provide accurate determinations of quantities, such as interquark potentials or mass eigenvalues, which are relevant for hadronic spectroscopy. Results obtained in quenched QCD on 16³ × 32 lattices are illustrated, and a discussion of computational resources and techniques required for the calculations is presented. 18 refs., 3 figs., 2 tabs

  10. Concurrent algorithms for nuclear shell model calculations

    International Nuclear Information System (INIS)

    Mackenzie, L.M.; Macleod, A.M.; Berry, D.J.; Whitehead, R.R.

    1988-01-01

    The calculation of nuclear properties has proved very successful for light nuclei, but is limited by the power of the present generation of computers. Starting with an analysis of current techniques, this paper discusses how these can be modified to map parallelism inherent in the mathematics onto appropriate parallel machines. A prototype dedicated multiprocessor for nuclear structure calculations, designed and constructed by the authors, is described and evaluated. The approach adopted is discussed in the context of a number of generically similar algorithms. (orig.)
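In Lanczos-based shell-model codes of this kind, the dominant, parallelizable work is the repeated application of the Hamiltonian matrix to a vector. The sketch below shows a plain serial Lanczos iteration with that matrix-vector product marked as the step one would distribute; a small random symmetric matrix stands in for the sparse many-body Hamiltonian, and nothing here reproduces the authors' multiprocessor design.

```python
# Minimal Lanczos sketch: estimate the lowest eigenvalue of a symmetric matrix.
# The Hamiltonian-on-vector product inside the loop is the natural unit of work
# to map onto parallel hardware in shell-model diagonalization.
import numpy as np

def lanczos_lowest(h, n_iter=50, seed=1):
    """Approximate the lowest eigenvalue of the symmetric matrix h."""
    rng = np.random.default_rng(seed)
    dim = h.shape[0]
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(dim)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(min(n_iter, dim)):
        w = h @ v - beta * v_prev      # matrix-vector product: the step to parallelize
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:
            break
        v_prev, v = v, w / beta
    t = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(t)[0]

h = np.random.default_rng(2).normal(size=(200, 200))
h = 0.5 * (h + h.T)
print("Lanczos:", lanczos_lowest(h), " exact:", np.linalg.eigvalsh(h)[0])
```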

  11. Shell model calculations at superdeformed shapes

    International Nuclear Information System (INIS)

    Nazarewicz, W.; Dobaczewski, J.; Van Isacker, P.

    1991-01-01

Spectroscopy of superdeformed nuclear states opens up an exciting possibility to probe new properties of the nuclear mean field. In particular, the unusually deformed atomic nucleus can serve as a microscopic laboratory of quantum-mechanical symmetries of a three-dimensional harmonic oscillator. The classifications and coupling schemes characteristic of weakly deformed systems are expected to be modified in the superdeformed world. The "superdeformed" symmetries lead to new quantum numbers and new effective interactions that can be employed in microscopic calculations. New classification schemes can be directly related to certain geometrical properties of the nuclear shape. 63 refs., 7 figs

  12. A shell-model calculation in terms of correlated subsystems

    International Nuclear Information System (INIS)

    Boisson, J.P.; Silvestre-Brac, B.

    1979-01-01

    A method for solving the shell-model equations in terms of a basis which includes correlated subsystems is presented. It is shown that the method allows drastic truncations of the basis to be made. The corresponding calculations are easy to perform and can be carried out rapidly

  13. Realistic shell-model calculations for Sn isotopes

    International Nuclear Information System (INIS)

    Covello, A.; Andreozzi, F.; Coraggio, L.; Gargano, A.; Porrino, A.

    1997-01-01

    We report on a shell-model study of the Sn isotopes in which a realistic effective interaction derived from the Paris free nucleon-nucleon potential is employed. The calculations are performed within the framework of the seniority scheme by making use of the chain-calculation method. This provides practically exact solutions while cutting down the amount of computational work required by a standard seniority-truncated calculation. The behavior of the energy of several low-lying states in the isotopes with A ranging from 122 to 130 is presented and compared with the experimental one. (orig.)

  14. Challenges in large scale quantum mechanical calculations

    Energy Technology Data Exchange (ETDEWEB)

Ratcliff, Laura E. [Argonne Leadership Computing Facility, Argonne National Laboratory, Lemont, IL USA; Mohr, Stephan [Department of Computer Applications in Science and Engineering, Barcelona Supercomputing Center (BSC-CNS), Barcelona Spain; Huhs, Georg [Department of Computer Applications in Science and Engineering, Barcelona Supercomputing Center (BSC-CNS), Barcelona Spain; Deutsch, Thierry [University Grenoble Alpes, INAC-MEM, Grenoble France; CEA, INAC-MEM, Grenoble France; Masella, Michel [Laboratoire de Biologie Structurale et Radiologie, Service de Bioénergétique, Biologie Structurale et Mécanisme, Institut de Biologie et de Technologie de Saclay, CEA Saclay, Gif-sur-Yvette Cedex France; Genovese, Luigi [University Grenoble Alpes, INAC-MEM, Grenoble France; CEA, INAC-MEM, Grenoble France

    2016-11-07

    During the past decades, quantum mechanical methods have undergone an amazing transition from pioneering investigations of experts into a wide range of practical applications, made by a vast community of researchers. First principles calculations of systems containing up to a few hundred atoms have become a standard in many branches of science. The sizes of the systems which can be simulated have increased even further during recent years, and quantum-mechanical calculations of systems up to many thousands of atoms are nowadays possible. This opens up new appealing possibilities, in particular for interdisciplinary work, bridging together communities of different needs and sensibilities. In this review we will present the current status of this topic, and will also give an outlook on the vast multitude of applications, challenges and opportunities stimulated by electronic structure calculations, making this field an important working tool and bringing together researchers of many different domains.

  15. Fast, large-scale hologram calculation in wavelet domain

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
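For orientation, the conventional baseline that wavelet-domain methods accelerate is the direct superposition of spherical-wave (zone-plate) kernels from each object point onto the hologram plane. The sketch below implements only that naive baseline on a tiny grid; the wavelength, object points and grid size are invented placeholders, and the actual WASABI algorithm additionally compresses each kernel in the wavelet domain before superposing it.

```python
# Naive point-cloud hologram superposition (the slow baseline, not WASABI itself).
import numpy as np

wavelength = 532e-9          # assumed laser wavelength [m]
pitch = 1e-6                 # pixel pitch [m], as quoted in the abstract
n = 512                      # small grid for illustration (the paper uses 65,536^2)

ys, xs = np.meshgrid(np.arange(n) * pitch, np.arange(n) * pitch, indexing="ij")
points = [(100e-6, 150e-6, 5e-3), (300e-6, 250e-6, 7e-3)]  # hypothetical (x, y, z) object points

field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
    field += np.exp(2j * np.pi * r / wavelength) / r   # spherical-wave contribution

hologram = np.real(field)    # amplitude pattern for an image-type hologram
print(hologram.shape, hologram.max())
```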

  16. Shell model calculations for stoichiometric Na β-alumina

    International Nuclear Information System (INIS)

    Wang, J.C.

    1985-01-01

Walker and Catlow recently reported the results of their shell model calculations for the structure and transport of Na β-alumina (Naβ). The main computer programs used by Walker and Catlow for their calculations are PLUTO and HADES III. The latter, a recent version of HADES II written for cubic crystals, is believed to be applicable to defects in crystals of both cubic and hexagonal symmetry. PLUTO is usually used in calculating properties of perfect crystals before defects are introduced into the structure. Walker and Catlow claim that, in some respects, their models are superior to those of Wang et al. Yet, their results are quite different from those observed experimentally. In this work these differences are investigated by using a computer program designed to calculate lattice energies for stoichiometric Naβ using the same shell model parameters adopted by Walker and Catlow. The core and shell positions of all ions, as well as the lattice parameters, were fully relaxed. The calculated energy difference between aBR and BR sites (0.33 eV) is about twice as large as that reported by Walker and Catlow. The present results also show that the relaxed oxygen ion positions next to the conduction plane in this case are displaced from their reported observed sites. When the core-shell spring constant of the oxygen ion was adjusted to minimize these displacements, the above-mentioned energy difference increased to about 0.56 eV. These results cast doubt on the fluid conduction-plane structure suggested by Walker and Catlow and on the defect structure and activation energy obtained from their calculations.

  17. Recent Developments in No-Core Shell-Model Calculations

    International Nuclear Information System (INIS)

    Navratil, P.; Quaglioni, S.; Stetcu, I.; Barrett, B.R.

    2009-01-01

We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary. If that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we, in particular, highlight results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or use of effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.

  18. Recent Developments in No-Core Shell-Model Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R

    2009-03-20

We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary. If that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review, we, in particular, highlight results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or use of effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is given in the concluding part of the review.

  19. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  20. Large-scale nuclear structure calculations for spin-dependent WIMP scattering with chiral effective field theory currents

    OpenAIRE

    Klos, P.; Menéndez, J.; Gazit, D.; Schwenk, A.

    2013-01-01

    We perform state-of-the-art large-scale shell-model calculations of the structure factors for elastic spin-dependent WIMP scattering off 129,131Xe, 127I, 73Ge, 19F, 23Na, 27Al, and 29Si. This comprehensive survey covers the non-zero-spin nuclei relevant to direct dark matter detection. We include a pedagogical presentation of the formalism necessary to describe elastic and inelastic WIMP-nucleus scattering. The valence spaces and nuclear interactions employed have been previously used in nucl...

  1. Application of DYNA3D in large scale crashworthiness calculations

    International Nuclear Information System (INIS)

    Benson, D.J.; Hallquist, J.O.; Igarashi, M.; Shimomaki, K.; Mizuno, M.

    1986-01-01

    This paper presents an example of an automobile crashworthiness calculation. Based on our experiences with the example calculation, we make recommendations to those interested in performing crashworthiness calculations. The example presented in this paper was supplied by Suzuki Motor Co., Ltd., and provided a significant shakedown for the new large deformation shell capability of the DYNA3D code. 15 refs., 3 figs

  2. Large-scale atomic calculations using variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Joensson, Per

    1995-01-01

Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p 2P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest 2P states in sodium and silver. 77 refs, 2 figs, 14 tabs.

  3. Large-scale atomic calculations using variational methods

    International Nuclear Information System (INIS)

    Joensson, Per.

    1995-01-01

Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p 2P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest 2P states in sodium and silver. 77 refs, 2 figs, 14 tabs
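The time-dependent frozen-core calculations described in the two records above amount to propagating a single active electron on a grid under combined core and laser potentials. A minimal one-dimensional split-operator sketch of that kind of propagation follows; the soft-core potential, field parameters and grid are invented placeholders, not the thesis' actual model.

```python
# Split-operator propagation of one "active electron" in 1D under an oscillating
# laser field (length gauge, atomic units). Schematic frozen-core-style model.
import numpy as np

n, dx = 1024, 0.1
x = (np.arange(n) - n // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
dt, steps = 0.05, 400

v_core = -1.0 / np.sqrt(x ** 2 + 2.0)   # soft-core potential mimicking the ionic core
e0, omega = 0.05, 0.057                 # assumed field amplitude and frequency (a.u.)

psi = np.exp(-x ** 2)                   # crude initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

expk = np.exp(-0.5j * dt * k ** 2)      # full kinetic step for T = k^2 / 2
for step in range(steps):
    t = step * dt
    v = v_core + e0 * np.sin(omega * t) * x          # core + laser interaction
    psi *= np.exp(-0.5j * dt * v)                    # half potential step
    psi = np.fft.ifft(expk * np.fft.fft(psi))        # kinetic step in momentum space
    psi *= np.exp(-0.5j * dt * v)                    # half potential step

print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)
```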

  4. Auxiliary basis expansions for large-scale electronic structure calculations.

    Science.gov (United States)

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
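To make the fitting step above concrete: a density-fitting expansion determines its coefficients by solving the metric linear system J c = b, where J collects two-center metric integrals and b the three-center integrals against the charge distribution being fitted. The sketch below shows only that linear-algebra structure with random placeholder matrices; it evaluates no real integrals, and the attenuated-metric behaviour is described only in the comments.

```python
# Schematic density-fitting solve: coefficients c from the metric equations J c = b.
import numpy as np

rng = np.random.default_rng(3)
n_aux = 50

# Stand-in symmetric positive-definite metric matrix J_PQ = (P|g|Q)
# (e.g. Coulomb or attenuated Coulomb metric); random placeholder here.
a = rng.normal(size=(n_aux, n_aux))
metric = a @ a.T + n_aux * np.eye(n_aux)

# Stand-in three-center vector b_P = (P|g|rho) for the fitted charge distribution.
b = rng.normal(size=n_aux)

# Fit coefficients; with a short-range (attenuated) metric the coefficients decay
# quickly with distance from the fitted distribution, enabling linear-scaling fits.
coeffs = np.linalg.solve(metric, b)
print(coeffs[:5])
```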

  5. Shell-model calculations with a basis that contains correlated pairs

    International Nuclear Information System (INIS)

    Boisson, J.P.; Silvestre-Brac, B.A.; Liotta, R.J.

    1979-01-01

A method to solve the shell-model equations within a basis that contains correlated pairs of particles is presented. The method is illustrated for the three-identical-particle system. Applications in nuclei around 208Pb are given and comparisons with both experimental data and other calculations are carried out. (Auth.)

  6. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

Attempts to solve the shell model by new methods are briefly reviewed. The shell-model calculation by quantum Monte Carlo diagonalization, which was proposed by the authors, is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. As for the treatment of angular momentum, in the authors' method a deformed Slater determinant is used as the basis, so a projection operator is applied to obtain states of good angular momentum. The dynamically determined space is treated mainly stochastically, and the many-body energies in the basis formed as a result are evaluated and the basis states selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell-model space into a dynamically determined space and the product of spin and isospin spaces. The calculation procedure is shown with the example of 50Mn nuclei. The level structure of 48Cr, for which the exact energies are known, can be calculated with the absolute energies accurate to within 200 keV. 56Ni is the self-conjugate nucleus with Z=N=28. The results of shell-model calculations of the 56Ni nuclear structure using the interactions of nuclear models are reported. (K.I.)

  7. The contribution of Skyrme Hartree-Fock calculations to the understanding of the shell model

    International Nuclear Information System (INIS)

    Zamick, L.

    1984-01-01

The authors present a detailed comparison of Skyrme Hartree-Fock (H-F) and the shell model. The H-F calculations are sensitive to the parameters that are chosen. The H-F results justify the use of effective charges in restricted model-space calculations by showing that the core contribution can be large. Further, the H-F results roughly justify the use of a constant E2 effective charge, but seem to yield nucleus-dependent E4 effective charges. H-F can yield results for E6 and higher multipoles, which would be zero in s-d model-space calculations. On the other hand, in H-F the authors can easily consider only the lowest rotational band, whereas in the shell model one can calculate the energies and properties of many more states. In the comparison some apparent problems remain, in particular E4 transitions in the upper half of the s-d shell.

  8. Use of shell model calculations in R-matrix studies of neutron-induced reactions

    International Nuclear Information System (INIS)

    Knox, H.D.

    1986-01-01

R-matrix analyses of neutron-induced reactions for many of the lightest p-shell nuclei are difficult due to a lack of distinct resonance structure in the reaction cross sections. Initial values for the required R-matrix parameters, E_λ and γ_λc, for states in the compound system can be obtained from shell model calculations. In the present work, the results of recent shell model calculations for the lithium isotopes have been used in R-matrix analyses of 6Li+n and 7Li+n reactions at low neutron energies E_n. The effects of the calculated structures of 7Li and 8Li on the 6Li+n and 7Li+n reaction mechanisms and cross sections are discussed. (author)

  9. Half-life calculation of one-proton emitters with a shell model potential

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, M. M.; Duarte, S. B. [Centro Brasileiro de Pesquisas Fisicas-CBPF/MCT Rua Dr. Xavier Sigaud, 150, 22290-180, Rio de Janeiro-RJ (Brazil); Teruya, N. [Departamento de Fisica, Universidade Federal da Paraiba - UFPB Campus de Joao Pessoa, 58051-970, Joao Pessoa - PB (Brazil)

    2013-03-25

The accumulated amount of data for half-lives of proton emitters remains a challenge to the ability of nuclear models to reproduce them consistently. These nuclei are far from the beta-stability line, in a region where the validity of current nuclear models is not guaranteed. A nuclear shell-model potential is introduced for the calculation of the nuclear barrier of less-deformed proton emitters. The predictions using the proposed model are in good agreement with the data, with the advantage of using only a single parameter in the model.
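The quantity driving such half-life estimates is the penetrability of the Coulomb-plus-nuclear barrier seen by the emitted proton. The sketch below evaluates a simple WKB (Gamow) barrier integral for an illustrative Woods-Saxon-plus-Coulomb potential; the potential parameters, Q-value and daughter charge are invented and do not correspond to the single-parameter model of the paper.

```python
# WKB-style barrier penetrability, the generic ingredient of proton-emission
# half-life estimates. All parameters are illustrative placeholders.
import numpy as np

hbar_c = 197.327      # MeV fm
mass = 938.272        # proton mass, MeV/c^2
q_value = 1.5         # assumed decay Q-value [MeV]
z_daughter = 68       # assumed daughter charge
e2 = 1.43996          # e^2 in MeV fm

def potential(r):
    """Woods-Saxon nuclear well plus Coulomb potential [MeV]; r in fm."""
    v0, r0, a = -50.0, 6.0, 0.65
    return v0 / (1.0 + np.exp((r - r0) / a)) + z_daughter * e2 / r

r = np.linspace(1.0, 120.0, 200000)
v = potential(r)
barrier = (v > q_value) & (r > 5.0)            # classically forbidden outer region

# Gamow factor G = (2/hbar) * integral of sqrt(2 m (V - E)) dr over the barrier.
kappa = np.sqrt(2.0 * mass * (v[barrier] - q_value)) / hbar_c
gamow = 2.0 * np.trapz(kappa, r[barrier])
print(f"barrier penetrability ~ {np.exp(-gamow):.3e}")
```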

  10. Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses

    International Nuclear Information System (INIS)

    Takahashi, K.; Mathews, G.J.; Bloom, S.D.

    1985-01-01

Examples of large-basis shell-model calculations of Gamow-Teller β-decay properties of specific interest in the astrophysical s- and r-processes are presented. Numerical results are given for: (1) the GT matrix elements for the excited-state decays of the unstable s-process nucleus 99Tc; and (2) the GT strength function for the neutron-rich nucleus 130Cd, which lies on the r-process path. The results are discussed in conjunction with the astrophysics problems. 23 refs., 3 figs

  11. Large-scale calculations of the beta-decay rates and r-process nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

Borzov, I N; Goriely, S [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Pearson, J M [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); [Lab. de Physique Nucleaire, Univ. de Montreal, Montreal (Canada)

    1998-06-01

An approximation to a self-consistent model of the ground state and β-decay properties of neutron-rich nuclei is outlined. The structure of the β-strength functions in stable and short-lived nuclei is discussed. The results of large-scale calculations of the β-decay rates for spherical and slightly deformed nuclides of relevance to the r-process are analysed and compared with the results of existing global calculations and recent experimental data. (orig.)

  12. Calculations of the energy spectra of Zn, Ga and Ge isotopes by the shell model

    International Nuclear Information System (INIS)

    Sakakura, M.; Shikata, Y.; Arima, A.; Sebe, T.

    1979-01-01

The effective Hamiltonian which was determined empirically by Koops and Glaudemans is tested in shell model calculations for the 65-68Zn, 67-69Ga, and 68-70Ge nuclei in the full (1p3/2, 0f5/2, 1p1/2)^n space. The resulting energy spectra are compared with the experimental spectra and results of previous calculations. The overall agreement with experiment is as satisfactory for these nuclei as for the Ni and Cu isotopes, by which the Hamiltonian was determined. It is noticed that the spectra of 67Zn and 67,69Ga calculated in this work are similar to those provided by the Alaga model. (orig.)

  13. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  14. Shell model calculations for levels and transition rates in 204Pb and 206Pb

    International Nuclear Information System (INIS)

    Wang, D.; McEllistrem, M.T.

    1990-01-01

Level energies and decay rates of both negative- and positive-parity levels of 204,206Pb have been calculated through mixed-configuration shell-model calculations using the modified surface delta interaction (MSDI), the Schiffer-True central interaction, and another two-body interaction. These calculations were all carried out with a full six-orbit neutron-hole space. The low-lying levels predicted with the MSDI are in excellent agreement with experiment, accounting for the energies, spins, and parities of essentially all levels below 3 MeV excitation energy except the known particle-hole collective excitations in both nuclei. Almost all calculated E2 and M1 transition rates are consistent with measured branching ratios for γ-ray decay of excited levels. The comparison of the observed and calculated levels demonstrates the important role played by the neutron-hole i13/2 configuration in the levels of 204Pb and 206Pb, and clarifies an apparent discrepancy over the character and energy spacings of 0+ levels in 204Pb.

  15. In-medium no-core shell model for ab initio nuclear structure calculations

    International Nuclear Information System (INIS)

    Gebrerufael, Eskendr

    2017-01-01

In this work, we merge two successful ab initio nuclear-structure methods, the no-core shell model (NCSM) and the multi-reference in-medium similarity renormalization group (IM-SRG), to define a novel many-body approach for the comprehensive description of ground and excited states of closed- and open-shell medium-mass nuclei. Building on the key advantages of the two methods - the decoupling of excitations at the many-body level in the IM-SRG, and the exact diagonalization in the NCSM applicable up to medium-light nuclei - their combination enables fully converged no-core calculations for an unprecedented range of nuclei and observables at moderate computational cost. The efficiency and rapid model-space convergence of the new approach make it ideally suited for ab initio studies of ground and low-lying excited states of nuclei up to the medium-mass regime. Interactions constructed within the framework of chiral effective field theory provide an excellent opportunity to describe properties of nuclei from first principles, i.e., rooted in quantum chromodynamics, and they overcome the lack of predictive power of phenomenological potentials. The hard core of these interactions causes strong short-range correlations, which we soften by using the similarity-renormalization-group transformation that accelerates the model-space convergence of many-body calculations. Three-nucleon effects, which are mandatory for the correct description of bulk properties of nuclei, are included in our calculations by using the normal-ordered two-body approximation, which has been shown to be sufficient to capture the main effects of the three-nucleon interaction. Using these interactions, we analyze energies of ground and excited states in the carbon and oxygen isotopic chains, where conventional NCSM calculations are still feasible and provide an important benchmark. Furthermore, we study the Hoyle state in 12C - a three-alpha cluster state that cannot be converged in standard NCSM

  16. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. For this, three different approaches are now in use, two of which are: (i) the conventional shell-model diagonalization approach, taking into account new advances in computer technology; and (ii) the shell-model Monte Carlo method. A brief overview of these two methods is given. Large-space shell-model studies raise fundamental questions regarding the information content of the shell-model spectrum of complex nuclei. This led to the third approach: the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos; they are described in some detail and substantiated by large-scale shell-model calculations. (author)

  17. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10^29. We find good agreement with experimental results for both state densities and ⟨J^2⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments.

  18. Cluster form factor calculation in the ab initio no-core shell model

    International Nuclear Information System (INIS)

    Navratil, Petr

    2004-01-01

We derive expressions for cluster overlap integrals, or channel cluster form factors, for ab initio no-core shell model (NCSM) wave functions. These are used to obtain the spectroscopic factors and can serve as a starting point for the description of low-energy nuclear reactions. We consider the composite system and the target nucleus to be described in the Slater determinant (SD) harmonic oscillator (HO) basis, while the projectile eigenstate is expanded in the Jacobi-coordinate HO basis. This is the most practical case. The spurious center-of-mass components present in the SD bases are removed exactly. The calculated cluster overlap integrals are translationally invariant. As an illustration, we present results of cluster form factor calculations for <5He|4He+n>, <5He|3H+d>, <6Li|4He+d>, <6Be|3He+3He>, <7Li|4He+3H>, <7Li|6Li+n>, <8Be|6Li+d>, <8Be|7Li+p>, <9Li|8Li+n>, and <13C|12C+n>, with all the nuclei described by multi-ℏΩ NCSM wave functions.

  19. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    Science.gov (United States)

    Fekete, Tamás

    2018-05-01

Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently a new scientific engineering paradigm, structural integrity, has been developing that is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm highly contributed to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is due to the fact that already existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim to review and evaluate current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well

  20. Implementation of highly parallel and large scale GW calculations within the OpenAtom software

    Science.gov (United States)

    Ismail-Beigi, Sohrab

The need to describe electronic excitations with better accuracy than provided by band structures produced by Density Functional Theory (DFT) has been a long-term enterprise for the computational condensed matter and materials theory communities. In some cases, appropriate theoretical frameworks have existed for some time but have been difficult to apply widely due to computational cost. For example, the GW approximation incorporates a great deal of important non-local and dynamical electronic interaction effects but has been too computationally expensive for routine use in large materials simulations. OpenAtom is an open source massively parallel ab initio density functional software package based on plane waves and pseudopotentials (http://charm.cs.uiuc.edu/OpenAtom/) that takes advantage of the Charm++ parallel framework. At present, it is developed via a three-way collaboration, funded by an NSF SI2-SSI grant (ACI-1339804), between Yale (Ismail-Beigi), IBM T. J. Watson (Glenn Martyna) and the University of Illinois at Urbana Champaign (Laxmikant Kale). We will describe the project and our current approach towards implementing large scale GW calculations with OpenAtom. Potential applications of large scale parallel GW software for problems involving electronic excitations in semiconductor and/or metal oxide systems will also be pointed out.

  1. Spectroscopy of 96-98Ru and neighboring nuclei: shell model calculations and lifetime measurements

    International Nuclear Information System (INIS)

    Kharraja, B.; Garg, U.; Ghugre, S.S.

    1997-01-01

High-spin states in 94,95Mo, 94-96Tc, 96-98Ru and 97,98Rh were populated via the 65Cu(36S, xpyn) reactions at 142 MeV. Level schemes of these nuclei have been extended up to a spin of J ∼ 20ℏ and an excitation energy of E_x ∼ 12-14 MeV. Information on the high-spin structure of 96Tc and 98Rh has been obtained for the first time. Spherical shell model calculations have been performed and compared with the experimental excitation energies. The level structures of the N=51, 52 isotones exhibit single-particle nature even at the highest spins and excitation energies. A fragmentation of intensity into several branches after breaking of the N = 50 core has been observed. There are indications of the onset of collectivity around neutron number N = 53 in this mass region. A sequence of E2 transitions, reminiscent of a vibrational degree of freedom, was observed in 98Ru at spins just above the observed N = 50 core breaking. RDM lifetime measurements have been performed to ascertain the intrinsic structures of these level sequences. (author)

  2. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    Energy Technology Data Exchange (ETDEWEB)

    Lenormand, R.; Thiele, M.R. [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, the pseudo relative permeability is derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternative method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to a time-dependent pseudo relative permeability. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.

  3. Calculation and characteristics analysis of blade pitch loads for large scale wind turbines

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

Based on the electric pitch system of large-scale horizontal-axis wind turbines, the blade pitch loads coming mainly from centrifugal force, aerodynamic force and gravity are analyzed, and the calculation models for them are established in this paper. For illustration, a 1.2 MW wind turbine is introduced as a practical sample, and its blade pitch loads from centrifugal force, aerodynamic force and gravity are calculated and analyzed separately and synthetically. The results show that, over the course of the rotor rotating 360°, the fluctuation of the blade pitch loads is similar to a cosine curve when the rotor rotational speed, in-flow wind speed and pitch angle are constant. Furthermore, the amplitude of the blade pitch load differs considerably at different pitch angles. The methods of calculation for blade pitch loads are generally applicable, and are helpful for further research on individual pitch control systems.
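As a toy illustration of the cosine-like fluctuation noted above, the snippet below composes a pitch load from constant centrifugal and aerodynamic terms plus a once-per-revolution gravity term. The moment magnitudes are invented placeholders and do not come from the 1.2 MW turbine studied in the paper.

```python
# Illustrative pitch-load variation over one rotor revolution at fixed rotor speed,
# wind speed and pitch angle. All moment values are assumed, not measured.
import numpy as np

azimuth = np.linspace(0.0, 2.0 * np.pi, 361)   # rotor position over one revolution [rad]

m_centrifugal = 12.0e3   # assumed steady pitching moment from centrifugal force [N m]
m_aero = 8.0e3           # assumed steady aerodynamic pitching moment [N m]
m_gravity_amp = 5.0e3    # assumed amplitude of the gravity-induced moment [N m]

# Total pitch load: steady terms plus a once-per-revolution (cosine) gravity term.
m_pitch = m_centrifugal + m_aero + m_gravity_amp * np.cos(azimuth)

print(f"max {m_pitch.max():.0f} N m, min {m_pitch.min():.0f} N m")
```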

  4. Shell model and spectroscopic factors

    International Nuclear Information System (INIS)

    Poves, P.

    2007-01-01

    In these lectures, I introduce the notion of spectroscopic factor in the shell model context. A brief review is given of the present status of the large scale applications of the Interacting Shell Model. The spectroscopic factors and the spectroscopic strength are discussed for nuclei in the vicinity of magic closures and for deformed nuclei. (author)

  5. Large scale steam flow test: Pressure drop data and calculated pressure loss coefficients

    International Nuclear Information System (INIS)

    Meadows, J.B.; Spears, J.R.; Feder, A.R.; Moore, B.P.; Young, C.E.

    1993-12-01

This report presents the results of large scale steam flow testing, 3 million to 7 million lbs/hr., conducted at approximate steam qualities of 25, 45, 70 and 100 percent (dry, saturated). It is concluded from the test data that reasonable estimates of piping component pressure loss coefficients for single phase flow in complex piping geometries can be calculated using available engineering literature. This includes the effects of nearby upstream and downstream components, compressibility, and internal obstructions, such as splitters and ladder rungs, on individual piping components. Despite expected uncertainties in the data resulting from the complexity of the piping geometry and two-phase flow, the test data support the conclusion that the predicted dry steam K-factors are accurate and provide useful insight into the effect of entrained liquid on the flow resistance. The K-factors calculated from the wet steam test data were compared to two-phase K-factors based on the Martinelli-Nelson pressure drop correlations. This comparison supports the concept of a two-phase multiplier for estimating the resistance of piping with liquid entrained into the flow. The test data in general appear to be reasonably consistent with the shape of a curve based on the Martinelli-Nelson correlation over the tested range of steam quality
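To make the K-factor and two-phase-multiplier idea concrete, the sketch below evaluates the usual single-phase loss relation dP = K G^2 / (2 rho) and scales it by a generic multiplier for wet steam. The mass flux, density, K value and multiplier are illustrative placeholders, not values from the test program; a real calculation would take the multiplier from a correlation such as Martinelli-Nelson evaluated at the measured steam quality.

```python
# Pressure drop across a piping component from a loss coefficient K and a generic
# two-phase multiplier phi^2 (applied to the liquid-only pressure drop).
def pressure_drop(k_factor, mass_flux, rho_liquid, phi_squared=1.0):
    """dP [Pa] = phi^2 * K * G^2 / (2 * rho_l), G in kg/(m^2 s), rho_l in kg/m^3."""
    return phi_squared * k_factor * mass_flux ** 2 / (2.0 * rho_liquid)

g = 2000.0      # assumed mass flux [kg/(m^2 s)]
rho_l = 740.0   # assumed saturated-liquid density [kg/m^3]
k = 0.8         # assumed dry-steam loss coefficient for the component

# phi_squared > 1 represents the extra resistance from entrained liquid.
print("dry steam dP [Pa]:", pressure_drop(k, g, rho_l))
print("wet steam dP [Pa]:", pressure_drop(k, g, rho_l, phi_squared=2.5))
```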

  6. New-generation Monte Carlo shell model for the K computer era

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Yoshida, Tooru; Otsuka, Takaharu; Tsunoda, Yusuke; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio

    2012-01-01

We present a newly enhanced version of the Monte Carlo shell-model (MCSM) method by incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new-generation framework of the MCSM provides us with a powerful tool to perform very advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert 40Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken. (author)
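One ingredient named above, energy-variance extrapolation, can be illustrated independently of the MCSM machinery: energies computed in a sequence of truncated bases are plotted against the corresponding energy variances and extrapolated to zero variance, where the wave function would be exact. The sketch below does this with a simple linear fit on fabricated data; real applications often use low-order polynomial fits, and none of the numbers correspond to an actual nucleus.

```python
# Energy-variance extrapolation sketch: fit E against <H^2> - <H>^2 and read off
# the intercept at zero variance. The data points below are invented.
import numpy as np

variance = np.array([4.0, 2.5, 1.4, 0.7, 0.3])            # MeV^2, from truncated bases
energy = np.array([-65.2, -65.9, -66.4, -66.7, -66.85])   # MeV, corresponding energies

slope, intercept = np.polyfit(variance, energy, 1)
print(f"extrapolated energy at zero variance: {intercept:.2f} MeV")
```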

  7. Chaotic behaviour of the nuclear shell-model hamiltonian

    International Nuclear Information System (INIS)

    Dias, H.; Hussein, M.S.; Oliveira, N.A. de; Wildenthal, B.H.

    1987-11-01

Large-scale nuclear shell-model calculations for several nuclear systems are discussed, in particular the statistical behaviour of the energy eigenvalues and eigenstates. The chaotic behaviour of the nuclear shell-model Hamiltonian (NSMH) is then shown to be quite useful in calculating the spreading width of the highly collective multipole giant resonances. (author)

  8. DGDFT: A massively parallel method for large scale density functional theory calculations.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  9. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.gov [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  10. Large-scale subduction of continental crust implied by India-Asia mass-balance calculation

    Science.gov (United States)

    Ingalls, Miquela; Rowley, David B.; Currie, Brian; Colman, Albert S.

    2016-11-01

    Continental crust is buoyant compared with its oceanic counterpart and resists subduction into the mantle. When two continents collide, the mass balance for the continental crust is therefore assumed to be maintained. Here we use estimates of pre-collisional crustal thickness and convergence history derived from plate kinematic models to calculate the crustal mass balance in the India-Asia collisional system. Using the current best estimates for the timing of the diachronous onset of collision between India and Eurasia, we find that about 50% of the pre-collisional continental crustal mass cannot be accounted for in the crustal reservoir preserved at Earth's surface today--represented by the mass preserved in the thickened crust that makes up the Himalaya, Tibet and much of adjacent Asia, as well as southeast Asian tectonic escape and exported eroded sediments. This implies large-scale subduction of continental crust during the collision, with a mass equivalent to about 15% of the total oceanic crustal subduction flux since 56 million years ago. We suggest that similar contamination of the mantle by direct input of radiogenic continental crustal materials during past continent-continent collisions is reflected in some ocean crust and ocean island basalt geochemistry. The subduction of continental crust may therefore contribute significantly to the evolution of mantle geochemistry.

  11. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  12. DGDFT: A massively parallel method for large scale density functional theory calculations

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  13. Large scale electronic structure calculations in the study of the condensed phase

    NARCIS (Netherlands)

    van Dam, H.J.J.; Guest, M.F.; Sherwood, P.; Thomas, J.M.H.; van Lenthe, J.H.; van Lingen, J.N.J.; Bailey, C.L.; Bush, I.J.

    2006-01-01

    We consider the role that large-scale electronic structure computations can now play in the modelling of the condensed phase. To structure our analysis, we consider four distinct ways in which today's scientific targets can be re-scoped to take advantage of advances in computing resources: 1. time to

  14. Observation of high-spin states in the N=84 nucleus 152Er and comparison with shell-model calculations

    International Nuclear Information System (INIS)

    Kuhnert, A.; Alber, D.; Grawe, H.; Kluge, H.; Maier, K.H.; Reviol, W.; Sun, X.; Beck, E.M.; Byrne, A.P.; Huebel, H.; Bacelar, J.C.; Deleplanque, M.A.; Diamond, R.M.; Stephens, F.S.

    1992-01-01

    High-spin states in 152 Er have been populated through the 116 Sn( 40 Ar,4n) 152 Er reaction. Prompt and delayed γ-γ-γ-t and γ-e-t coincidences have been measured. Levels and transitions are assigned up to an excitation energy of 15 MeV and spin and parities up to 28 + at 9.7 MeV. A new isomer [t 1/2 =11(1) ns] has been observed at 13.4 MeV. The results are discussed in comparison with neighboring nuclei and with shell-model calculations

  15. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Dytrych, T. [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic); Louisiana State Univ., Baton Rouge, LA (United States); Maris, Pieter [Iowa State Univ., Ames, IA (United States); Launey, K. D. [Louisiana State Univ., Baton Rouge, LA (United States); Draayer, J. P. [Louisiana State Univ., Baton Rouge, LA (United States); Vary, James [Iowa State Univ., Ames, IA (United States); Langr, D. [Czech Technical Univ., Prague (Czech Republic); Aerospace Research and Test Establishment, Prague (Czech Republic); Saule, E. [Univ. of North Carolina, Charlotte, NC (United States); Caprio, M. A. [Univ. of Notre Dame, IN (United States); Catalyurek, U. [The Ohio State Univ., Columbus, OH (United States). Dept. of Electrical and Computer Engineering; Sosonkina, M. [Old Dominion Univ., Norfolk, VA (United States)

    2016-06-09

    We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations, and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.

  16. Particle physics and polyhedra proximity calculation for hazard simulations in large-scale industrial plants

    Science.gov (United States)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside the open-source 3D computer graphics package Blender, with the aim of analyzing hazard scenarios in large-scale industrial plants in virtual reality. The advantages of Blender are that it renders the very complex structure of large industrial plants at high resolution and embeds a physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results for a real oil and gas refinery are presented.

  17. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large-scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields results for sd-shell nuclei of similar quality to the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46 Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130 Ce and 128 Ba using the same effective nucleon-nucleon interaction. (Auth.)

  18. An algebraic sub-structuring method for large-scale eigenvalue calculation

    International Nuclear Information System (INIS)

    Yang, C.; Gao, W.; Bai, Z.; Li, X.; Lee, L.; Husbands, P.; Ng, E.

    2004-01-01

    We examine sub-structuring methods for solving large-scale generalized eigenvalue problems from a purely algebraic point of view. We use the term 'algebraic sub-structuring' to refer to the process of applying matrix reordering and partitioning algorithms to divide a large sparse matrix into smaller submatrices, from which a subset of spectral components are extracted and combined to provide approximate solutions to the original problem. We are interested in the question of which spectral components one should extract from each sub-structure in order to produce an approximate solution to the original problem with a desired level of accuracy. An error estimate for the approximation to the smallest eigenpair is developed. The estimate leads to a simple heuristic for choosing spectral components (modes) from each sub-structure. The effectiveness of such a heuristic is demonstrated with numerical examples. We show that algebraic sub-structuring can be used effectively to solve a generalized eigenvalue problem arising from the simulation of an accelerator structure. One interesting characteristic of this application is that the stiffness matrix produced by a hierarchical vector finite element scheme contains a null space of large dimension. We present an efficient scheme to deflate this null space in the algebraic sub-structuring process.

  19. 3D large-scale calculations using the method of characteristics

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    An overview of the computational requirements and the numerical developments made in order to be able to solve 3D large-scale problems using the characteristics method will be presented. To accelerate the MCI solver, efficient acceleration techniques were implemented and parallelization was performed. However, for the very large problems, the size of the tracking file used to store the tracks can still become prohibitive and exceed the capacity of the machine. The new 3D characteristics solver MCG will now be introduced. This methodology is dedicated to solve very large 3D problems (a part or a whole core) without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we define a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (author)

  20. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    Science.gov (United States)

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.

  1. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    Energy Technology Data Exchange (ETDEWEB)

    Hoshi, T [Department of Applied Mathematics and Physics, Tottori University, Tottori 680-8550 (Japan); Fujiwara, T [Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST-JST) (Japan)

    2009-02-11

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  2. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  3. Large-scale calculation of ferromagnetic spin systems on the pyrochlore lattice

    Energy Technology Data Exchange (ETDEWEB)

    Soldatov, Konstantin, E-mail: soldatov_ks@students.dvfu.ru [School of Natural Sciences, Far Eastern Federal University, Vladivostok (Russian Federation); Nefedev, Konstantin, E-mail: nefedev.kv@dvfu.ru [School of Natural Sciences, Far Eastern Federal University, Vladivostok (Russian Federation); Institute of Applied Mathematics, Far Eastern Branch, Russian Academy of Science, Vladivostok (Russian Federation); Komura, Yukihiro [CIJ-solutions, Chuo-ku, Tokyo 103-0023 (Japan); Okabe, Yutaka, E-mail: okabe@phys.se.tmu.ac.jp [Department of Physics, Tokyo Metropolitan University, Hachioji, Tokyo 192-0397 (Japan)

    2017-02-19

    We perform the high-performance computation of the ferromagnetic Ising model on the pyrochlore lattice. We determine the critical temperature accurately based on the finite-size scaling of the Binder ratio. Comparing with the data on the simple cubic lattice, we argue the universal finite-size scaling. We also calculate the classical XY model and the classical Heisenberg model on the pyrochlore lattice. - Highlights: • Calculations of the ferromagnetic models on the pyrochlore lattice were performed. • Precise critical temperatures were determined using Binder ratio finite-size scaling. • The universal finite-size scaling was argued.
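
    The Binder-ratio analysis mentioned above can be sketched in a few lines: the fourth-order cumulant U4 = 1 - <m^4>/(3<m^2>^2) is computed for several lattice sizes, and the crossing point of the U4(T) curves estimates the critical temperature. The toy below uses a simple cubic lattice and a plain Metropolis sampler with very short runs, purely as an illustration of the procedure; it is not the pyrochlore-lattice GPU calculation of the paper.

      # Toy Binder-ratio finite-size scaling for the ferromagnetic Ising model,
      # on a simple cubic lattice for brevity (the paper treats the pyrochlore lattice).
      import numpy as np

      def binder_u4(L, T, n_equil=300, n_meas=700, seed=0):
          """Binder cumulant U4 = 1 - <m^4>/(3<m^2>^2) from a short Metropolis run."""
          rng = np.random.default_rng(seed)
          spins = rng.choice([-1, 1], size=(L, L, L))
          beta = 1.0 / T
          m2 = m4 = 0.0
          for sweep in range(n_equil + n_meas):
              for _ in range(L ** 3):                       # one Metropolis sweep
                  x, y, z = rng.integers(0, L, size=3)
                  nn = (spins[(x + 1) % L, y, z] + spins[(x - 1) % L, y, z] +
                        spins[x, (y + 1) % L, z] + spins[x, (y - 1) % L, z] +
                        spins[x, y, (z + 1) % L] + spins[x, y, (z - 1) % L])
                  dE = 2.0 * spins[x, y, z] * nn            # ferromagnetic, J = 1
                  if dE <= 0 or rng.random() < np.exp(-beta * dE):
                      spins[x, y, z] *= -1
              if sweep >= n_equil:                          # measure after equilibration
                  m = spins.mean()
                  m2 += m * m
                  m4 += m ** 4
          m2, m4 = m2 / n_meas, m4 / n_meas
          return 1.0 - m4 / (3.0 * m2 * m2)

      # Curves for different L cross near Tc (about 4.51 J/kB on the simple cubic lattice);
      # the statistics here are deliberately tiny, so expect noisy numbers.
      for L in (4, 6):
          print(L, [round(binder_u4(L, T), 3) for T in (4.2, 4.5, 4.8)])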

  4. Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile

    Science.gov (United States)

    Halverson, Thomas; Poirier, Bill

    2015-03-01

    'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed, using two different methods: (1) phase-space-truncated momentum-symmetrized Gaussian basis and (2) correlated truncated harmonic oscillator basis. In both cases, a simple classical phase space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented into a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm-1.
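
    The basis-selection idea described above, keeping only basis functions whose estimated energy lies below a cutoff, can be illustrated with a minimal sketch. The frequencies and cutoff below are toy values, not the CH3CN force field, and the criterion is a plain harmonic energy cutoff rather than the momentum-symmetrized phase-space truncation of the paper.

      # Energy-cutoff selection of harmonic-oscillator product basis functions:
      # keep |n1 ... nD> only if the summed harmonic excitation energy is below E_cut.
      # Toy frequencies in cm^-1 (hypothetical, not the CH3CN force field).
      from itertools import product

      omega = [300.0, 900.0, 1400.0, 3000.0]      # toy mode frequencies
      E_cut = 8000.0                               # cutoff above the zero-point energy
      n_max = 12                                   # hard ceiling per mode

      def harmonic_excitation_energy(ns):
          return sum(n * w for n, w in zip(ns, omega))

      selected = [ns for ns in product(range(n_max + 1), repeat=len(omega))
                  if harmonic_excitation_energy(ns) <= E_cut]

      full_size = (n_max + 1) ** len(omega)
      print(f"kept {len(selected)} of {full_size} product functions")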

  5. Large-scale ab initio configuration interaction calculations for light nuclei

    International Nuclear Information System (INIS)

    Maris, Pieter; Potter, Hugh; Vary, James P; Aktulga, H Metin; Ng, Esmond G; Yang Chao; Caprio, Mark A; Çatalyürek, Ümit V; Saule, Erik; Oryspayev, Dossay; Sosonkina, Masha; Zhou Zheng

    2012-01-01

    In ab-initio Configuration Interaction calculations, the nuclear wavefunction is expanded in Slater determinants of single-nucleon wavefunctions and the many-body Schrödinger equation becomes a large sparse matrix problem. The challenge is to reach numerical convergence to within quantified numerical uncertainties for physical observables using finite truncations of the infinite-dimensional basis space. We discuss strategies for constructing and solving the resulting large sparse matrix eigenvalue problems on current multicore computer architectures. Several of these strategies have been implemented in the code MFDn, a hybrid MPI/OpenMP Fortran code for ab-initio nuclear structure calculations that can scale to 100,000 cores and more. Finally, we will conclude with some recent results for 12 C including emerging collective phenomena such as rotational band structures using SRG evolved chiral N3LO interactions.
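
    The large sparse eigenvalue problem described above is typically attacked with Lanczos-type iterations. The sketch below is a minimal serial Lanczos loop on a random sparse symmetric matrix; production codes such as MFDn add reorthogonalization, block variants, and distributed-memory parallelism, none of which are shown here.

      # Minimal Lanczos sketch for the lowest eigenvalue of a sparse symmetric matrix
      # (no reorthogonalization or parallelism, unlike production CI codes).
      import numpy as np
      import scipy.sparse as sp

      def lanczos_lowest(A, m=80, seed=0):
          """Estimate the lowest eigenvalue of a symmetric sparse matrix with m Lanczos steps."""
          n = A.shape[0]
          rng = np.random.default_rng(seed)
          q = rng.normal(size=n); q /= np.linalg.norm(q)
          q_prev = np.zeros(n)
          alphas, betas = [], []
          beta = 0.0
          for _ in range(m):
              w = A @ q - beta * q_prev
              alpha = q @ w
              w -= alpha * q
              beta = np.linalg.norm(w)
              alphas.append(alpha); betas.append(beta)
              if beta < 1e-12:
                  break
              q_prev, q = q, w / beta
          # Tridiagonal projection of A in the Krylov basis; its lowest eigenvalue is the estimate.
          T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
          return np.linalg.eigvalsh(T)[0]

      # Toy sparse symmetric "Hamiltonian" with roughly 10 nonzeros per row.
      n = 5000
      H = sp.random(n, n, density=10.0 / n, format="csr", random_state=1)
      H = 0.5 * (H + H.T) - 2.0 * sp.identity(n)
      print("Lanczos estimate of the lowest eigenvalue:", lanczos_lowest(H))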

  6. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    Science.gov (United States)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.
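
    The analytic response derivatives referred to above follow from differentiating the static equilibrium equations K(x) u = f, which gives du/dx = -K^{-1} (dK/dx) u. The sketch below compares this analytic derivative with a central finite difference for a hypothetical two-degree-of-freedom spring system; it is a generic illustration, not the finite-element system of the paper.

      # Analytic sensitivity of static displacements versus finite differences,
      # for a toy 2-DOF spring system.  From K(x) u = f:  du/dx = -K^{-1} (dK/dx) u.
      import numpy as np

      f = np.array([1.0, 2.0])                  # applied loads

      def stiffness(k1, k2):
          # Two springs in series, fixed at one end; k1, k2 are the design variables.
          return np.array([[k1 + k2, -k2],
                           [-k2,      k2]])

      def displacements(k1, k2):
          return np.linalg.solve(stiffness(k1, k2), f)

      k1, k2 = 100.0, 50.0
      u = displacements(k1, k2)

      # Analytic derivative with respect to k2.
      dK_dk2 = np.array([[1.0, -1.0],
                         [-1.0, 1.0]])
      du_dk2_analytic = np.linalg.solve(stiffness(k1, k2), -dK_dk2 @ u)

      # Central finite-difference check.
      h = 1e-6
      du_dk2_fd = (displacements(k1, k2 + h) - displacements(k1, k2 - h)) / (2 * h)

      print(du_dk2_analytic, du_dk2_fd)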

  7. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    Science.gov (United States)

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
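
    The core idea of a Chebyshev expansion of a matrix function can be sketched with small dense matrices (CheSS itself works on sparse matrices and never forms dense intermediates). The example below expands the finite-temperature Fermi operator with the standard three-term recurrence and checks it against exact diagonalization; the matrix, inverse temperature, and expansion order are toy choices.

      # Chebyshev expansion of a matrix function (here a Fermi-Dirac operator),
      # shown with small dense matrices purely for clarity.
      import numpy as np

      def fermi(e, mu=0.0, beta=1.0):
          return 1.0 / (1.0 + np.exp(beta * (e - mu)))

      def chebyshev_matrix_function(H, func, emin, emax, order=200):
          """Approximate func(H) by a Chebyshev expansion on [emin, emax]."""
          n = H.shape[0]
          a, b = 0.5 * (emax - emin), 0.5 * (emax + emin)   # scale the spectrum to [-1, 1]
          Hs = (H - b * np.eye(n)) / a
          # Chebyshev coefficients of func via Chebyshev-Gauss quadrature.
          theta = np.pi * (np.arange(order) + 0.5) / order
          fvals = func(a * np.cos(theta) + b)
          c = np.array([2.0 / order * np.sum(fvals * np.cos(k * theta)) for k in range(order)])
          c[0] *= 0.5
          # Three-term recurrence: T0 = I, T1 = Hs, T_{k+1} = 2 Hs T_k - T_{k-1}.
          T_prev, T_curr = np.eye(n), Hs.copy()
          F = c[0] * T_prev + c[1] * T_curr
          for k in range(2, order):
              T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
              F += c[k] * T_curr
          return F

      rng = np.random.default_rng(0)
      H = rng.normal(size=(50, 50)); H = 0.5 * (H + H.T)
      w, V = np.linalg.eigh(H)
      F = chebyshev_matrix_function(H, fermi, w[0] - 0.1, w[-1] + 0.1)
      F_exact = V @ np.diag(fermi(w)) @ V.T                  # reference by diagonalization
      print("max deviation from exact:", np.abs(F - F_exact).max())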

  8. Uncertainty analysis of multiple canister repository model by large-scale calculation

    International Nuclear Information System (INIS)

    Tsujimoto, K.; Okuda, H.; Ahn, J.

    2007-01-01

    A prototype uncertainty analysis has been made by using the multiple-canister radionuclide transport code, VR, for performance assessment of the high-level radioactive waste repository. Fractures in the host rock determine the main conduits of groundwater, and thus significantly affect the magnitude of radionuclide release rates from the repository. In this study, the probability distribution function (PDF) for the number of connected canisters in the same fracture cluster that bears water flow has been determined in a Monte-Carlo fashion by running the FFDF code with assumed PDFs for fracture geometry. The uncertainty in the release rate of 237 Np from a hypothetical repository containing 100 canisters has been quantitatively evaluated by using the VR code with PDFs for the number of connected canisters and the near-field rock porosity. The calculation results show that the mass transport is greatly affected by (1) the magnitude of the radionuclide source, determined by the number of canisters connected by the fracture cluster, and (2) the canister concentration effect in the same fracture network. The results also show two conflicting tendencies: the more fractures in the repository model space, the greater the average value, but the smaller the uncertainty, of the peak fractional release rate. To perform this vast amount of computation, we have utilized the Earth Simulator and SR8000. The multi-level hybrid programming method is applied in the optimization to exploit the high performance of the Earth Simulator. Latin Hypercube Sampling has been utilized to reduce the number of samplings in the Monte-Carlo calculation. (authors)
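
    Latin Hypercube Sampling, used above to reduce the number of Monte-Carlo samplings, can be written in a few lines: each input is stratified into n equal-probability bins and the bins are paired across inputs by random permutation. The parameter ranges in the sketch below are hypothetical placeholders, not values from the repository model.

      # Minimal Latin Hypercube Sampling sketch: each of the d inputs is split into n
      # equal-probability bins, and bins are paired across inputs by random permutation,
      # so n samples cover every bin of every input exactly once.
      import numpy as np

      def latin_hypercube(n_samples, n_dims, seed=0):
          rng = np.random.default_rng(seed)
          u = np.empty((n_samples, n_dims))
          for d in range(n_dims):
              perm = rng.permutation(n_samples)                 # bin assignment per sample
              u[:, d] = (perm + rng.random(n_samples)) / n_samples
          return u                                               # uniform in [0, 1)^d

      # Toy use: two uncertain inputs with hypothetical ranges (not the repository model):
      # number of connected canisters in [1, 100] and near-field porosity in [0.01, 0.30].
      u = latin_hypercube(n_samples=20, n_dims=2)
      canisters = 1 + np.floor(u[:, 0] * 100).astype(int)
      porosity = 0.01 + u[:, 1] * (0.30 - 0.01)
      for c, p in zip(canisters, porosity):
          print(c, round(p, 3))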

  9. Optimizing porphyrins for dye sensitized solar cells using large-scale ab initio calculations

    DEFF Research Database (Denmark)

    Ørnsø, Kristian Baruël; Pedersen, Christian S.; García Lastra, Juan Maria

    2014-01-01

    ... different side and anchoring groups. Based on the calculated frontier orbital energies and optical gaps we quantify the energy level alignment with the TiO2 conduction band and different redox mediators. An analysis of the energy level-structure relationship reveals a significant structural diversity among the dyes with the highest level alignment quality, demonstrating the large degree of flexibility in porphyrin dye design. As a specific example of dye optimization, we show that the level alignment of the high efficiency record dye YD2-o-C8 [Yella et al., Science, 2011, 334, 629-634] can be significantly ...

  10. Quantum Monte Carlo diagonalization method as a variational calculation

    International Nuclear Information System (INIS)

    Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio.

    1997-01-01

    A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and a diagonalization method. This method overcomes the limitation of conventional shell model diagonalization and greatly widens the feasibility of shell model calculations with realistic interactions for spectroscopic studies of nuclear structure. (author)

  11. Aerodynamic loads calculation and analysis for large scale wind turbine based on combining BEM modified theory with dynamic stall model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, J.C. [College of Mechanical and Electrical Engineering, Central South University, Changsha (China); School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Hu, Y.P.; Liu, D.S. [School of Electromechanical Engineering, Hunan University of Science and Technology, Xiangtan (China); Long, X. [Hara XEMC Windpower Co., Ltd., Xiangtan (China)

    2011-03-15

    The aerodynamic loads for MW-scale horizontal-axis wind turbines are calculated and analyzed in the coordinate systems established to describe the wind turbine. In this paper, the blade element momentum (BEM) theory is employed and some corrections, such as the Prandtl and Buhl models, are carried out. Based on the B-L semi-empirical dynamic stall (DS) model, a new modified DS model for the NACA63-4xx airfoil is adopted. Then, by combining the modified BEM theory with the DS model, a calculation method for the aerodynamic loads of large scale wind turbines is proposed, in which influence factors such as wind shear, the tower, and tower and blade vibration are considered. The research results show that the presented dynamic stall model is good enough for engineering purposes; that the aerodynamic loads are influenced to different degrees by many factors such as tower shadow, wind shear, dynamic stall, and tower and blade vibration; and that the single blade endures periodically changing loads, but the variations of the rotor shaft power caused by the total aerodynamic torque in the edgewise direction are very small. The presented approach to aerodynamic load calculation and analysis is of general applicability, and helpful for thorough research on load reduction for large scale wind turbines. (author)
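
    The steady blade element momentum iteration underlying the method above can be sketched as a fixed-point loop for the axial and tangential induction factors. The sketch below uses a single toy blade element, a thin-airfoil lift polar, and only the Prandtl tip-loss factor; it omits the Buhl high-induction correction, the dynamic stall model, and the wind-shear/tower/vibration effects treated in the paper, and all numbers are made up for illustration.

      # Classic steady BEM iteration for one blade element (toy numbers only).
      import numpy as np

      U, omega = 10.0, 1.6          # wind speed [m/s], rotor speed [rad/s]
      R, r, B = 45.0, 30.0, 3       # rotor radius, element radius [m], blade count
      chord, twist = 2.0, np.radians(4.0)

      def polar(alpha):
          """Toy airfoil polar: thin-airfoil lift slope, constant drag."""
          return 2.0 * np.pi * alpha, 0.01

      a, a_prime = 0.0, 0.0
      sigma = B * chord / (2.0 * np.pi * r)                  # local solidity
      for _ in range(500):
          phi = np.arctan2((1.0 - a) * U, (1.0 + a_prime) * omega * r)
          alpha = phi - twist
          cl, cd = polar(alpha)
          cn = cl * np.cos(phi) + cd * np.sin(phi)           # normal force coefficient
          ct = cl * np.sin(phi) - cd * np.cos(phi)           # tangential force coefficient
          # Prandtl tip-loss factor.
          F = (2.0 / np.pi) * np.arccos(np.exp(-B * (R - r) / (2.0 * r * np.sin(phi))))
          a_new = 1.0 / (4.0 * F * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
          ap_new = 1.0 / (4.0 * F * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
          if abs(a_new - a) < 1e-8 and abs(ap_new - a_prime) < 1e-8:
              break
          a += 0.5 * (a_new - a)                             # under-relaxation for robustness
          a_prime += 0.5 * (ap_new - a_prime)

      print("axial induction a =", round(a, 4), " tangential induction a' =", round(a_prime, 5))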

  12. GRASP92: a package for large-scale relativistic atomic structure calculations

    Science.gov (United States)

    Parpia, F. A.; Froese Fischer, C.; Grant, I. P.

    2006-12-01

    ... of CSFs sharing the same quantum numbers is determined using the configuration-interaction (CI) procedure that results upon varying the expansion coefficients to determine the extremum of a variational functional. Radial functions may be determined by numerically solving the multiconfiguration Dirac-Fock (MCDF) equations that result upon varying the orbital radial functions or some subset thereof so as to obtain an extremum of the variational functional. Radial wavefunctions may also be determined using a screened hydrogenic or Thomas-Fermi model, although these schemes generally provide initial estimates for MCDF self-consistent-field (SCF) calculations. Transition properties for pairs of ASFs are computed from matrix elements of multipole operators of the electromagnetic field. All matrix elements of CSFs are evaluated using the Racah algebra. Reasons for the new version: During recent studies using the general relativistic atomic structure package (GRASP92), several errors were found, some of which might have been present already in the earlier GRASP92 version (program ABJN_v1_0, Comput. Phys. Comm. 55 (1989) 425). These errors were reported and discussed by Froese Fischer, Gaigalas, and Ralchenko in a separate publication [C. Froese Fischer, G. Gaigalas, Y. Ralchenko, Comput. Phys. Comm. 175 (2006) 738-744].

  13. Shape coexistence, Lanczos techniques, and large-basis shell-model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Haxton, W C [Washington Univ., Seattle, WA (United States). Dept. of Physics

    1992-08-01

    I discuss numerical many-body techniques based on the Lanczos algorithm and their applications to nuclear structure problems. Examples include shape coexistence, inclusive response functions, and weak interaction rates in 16 O; weak-coupling descriptions of the 0 + bands in isotopes of Ge and Se; and the evaluation of the nuclear Green's functions that arise in two-neutrino ββ decay and in nuclear anapole and electric dipole moment calculations. (author). 11 refs., 2 tabs., 4 figs.

  14. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    Science.gov (United States)

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  15. Comparisons between shell-model calculations, seniority truncation, and quasiparticle approximations: Application to the odd Ni isotopes and odd N = 82 isotones

    International Nuclear Information System (INIS)

    Losano, L.; Dias, H.; Krmpotic, F.; Wildenthal, B.H.

    1988-01-01

    A detailed study of the results of correcting the BCS approximation for the effects of particle-number projection and blocking has been carried out. A low-seniority shell-model approximation was used as the frame of reference for investigating the mixing of one- and three-quasiparticle states in odd-mass Ni isotopes and in odd-mass N = 82 isotones. We discuss the results obtained for the energy spectra and electromagnetic decay properties. Effects of seniority-five configurations on the low-lying states have also been studied through comparison of the low-seniority shell-model results with those which arose from the corresponding full shell-model calculations.

  16. Conventional shell model: some issues

    International Nuclear Information System (INIS)

    Vallieres, M.; Pan, X.W.; Feng, D.H.; Novoselsky, A.

    1997-01-01

    We discuss some important issues in shell-model calculations related to the effective interactions used in different regions of the periodic table; in particular the quality of different interactions is discussed, as well as the mass dependence of the interactions. Mention is made of the recently developed Drexel University shell-model (DUSM). (orig.)

  17. Free energies of binding from large-scale first-principles quantum mechanical calculations: application to ligand hydration energies.

    Science.gov (United States)

    Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton

    2013-08-15

    Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
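
    The one-step MM-to-QM free-energy perturbation described above is the Zwanzig exponential average evaluated over MM-sampled snapshots. The sketch below applies that formula to synthetic per-snapshot energies (random stand-ins, not ONETEP or force-field outputs), with the usual shift for numerical stability.

      # One-step free-energy perturbation (Zwanzig relation) from an MM to a QM
      # description, evaluated over configurations sampled with the MM potential.
      # The energies below are synthetic stand-ins for per-snapshot MM and QM energies.
      import numpy as np

      kB_T = 0.593          # kcal/mol at roughly 298 K
      rng = np.random.default_rng(0)

      n_frames = 500
      E_mm = rng.normal(loc=-120.0, scale=1.0, size=n_frames)          # toy MM energies
      E_qm = E_mm + rng.normal(loc=-2.0, scale=0.3, size=n_frames)     # toy QM energies

      dE = E_qm - E_mm
      # Numerically stable exponential averaging: subtract the minimum of dE first.
      shift = dE.min()
      dA = shift - kB_T * np.log(np.mean(np.exp(-(dE - shift) / kB_T)))
      print("MM -> QM free-energy correction:", round(dA, 3), "kcal/mol")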

  18. Large-scale calculations of solid oxide fuel cell cermet anode by tight-binding quantum chemistry method

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Kubo, Momoji; Miyamoto, Akira

    2005-01-01

    Improvement of the anode characteristics of solid oxide fuel cells is important for better cell performance and especially for the direct use of hydrocarbons. A mixture of ceramic and metal is generally used as the anode, and different combinations of ceramics and metals lead to different electrode characteristics. We performed large-scale calculations to investigate the characteristics of Ni/CeO 2 and Cu/CeO 2 anodes at the electronic level using our tight-binding quantum chemical molecular dynamics program. Charge distribution analysis clarified the electron transfer from metal to oxide in both anodes. Calculations of the density of states clarified the different contributions of Ni and Cu orbitals to the energy levels around the Fermi level in each cermet. Based on the obtained results, we offer considerations that explain the different characteristics of the two cermet anodes. The effectiveness of our approach for the investigation of complex cermet systems was demonstrated.

  19. Recent shell-model results for exotic nuclei

    Directory of Open Access Journals (Sweden)

    Utsuno Yusuke

    2014-03-01

    We report on our recent advances in the shell model and its applications to exotic nuclei, focusing on the shell evolution and large-scale calculations with the Monte Carlo shell model (MCSM). First, we test the validity of the monopole-based universal interaction (VMU) as a shell-model interaction by performing large-scale shell-model calculations in two different mass regions using effective interactions which partly comprise VMU. Those calculations are successful and provide a deeper insight into the shell evolution beyond the single-particle model, in particular showing that the evolution of the spin-orbit splitting due to the tensor force plays a decisive role in the structure of the neutron-rich N ∼ 28 region and antimony isotopes. Next, we give a brief overview of recent developments in MCSM, and show that it is applicable to exotic nuclei that involve many valence orbits. As an example of its applications to exotic nuclei, shape coexistence in 32Mg is examined.

  20. Calculated Electronic Behavior and Spectrum of Mg+@C60 Using a Simple Jellium-shell Model

    Directory of Open Access Journals (Sweden)

    H. A. Schuessler

    2004-11-01

    We present a method for calculating the energy levels and wave functions of any atom or ion with a single valence electron encapsulated in a Fullerene cage using a jellium-shell model. The valence electron-core interaction is represented by a one-body pseudo-potential obtained through density functional theory with strikingly accurate parameters for Mg+ and which reduces to a purely Coulombic interaction in the case of H. We find that most energy states are affected little by encapsulation. However, when either the electron in the non-encapsulated species has a high probability of being near the jellium cage, or when the cage induces a maximum electron probability density within it, the energy levels shift considerably. Mg+ shows behavior similar to that of H, but since its wave functions are broader, the changes in its energy levels from encapsulation are slightly more pronounced. Agreement with other computational work as well as experiment is excellent, and the method presented here is generalizable to any encapsulated species for which a one-body electronic pseudo-potential for the free atom (or ion) is available. Results are also presented for off-center hydrogen, where a ground state energy minimum of -14.01 eV is found at a nuclear displacement of around 0.1 Å.
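
    A crude impression of how a shell-like cage shifts single-electron levels can be obtained by adding an attractive spherical-shell well to a Coulomb potential and solving the radial Schroedinger equation on a grid. The sketch below does this with a simple finite-difference discretization; the shell radius, thickness, and depth are hypothetical toy values, not the density-functional pseudo-potential of the paper, and the off-center case is not treated.

      # Finite-difference radial Schroedinger solver (atomic units) for an electron in
      # a Coulomb potential plus an attractive spherical-shell well, a crude stand-in
      # for a jellium-shell cage (shell parameters are hypothetical toy values).
      import numpy as np
      from scipy.linalg import eigh_tridiagonal

      N, r_max = 3000, 50.0
      r = np.linspace(r_max / N, r_max, N)
      h = r[1] - r[0]

      def potential(r, shell_on=True):
          V = -1.0 / r                                    # hydrogen-like Coulomb core
          if shell_on:
              V = V - 0.3 * ((r > 6.3) & (r < 8.2))       # toy attractive shell well
          return V

      def lowest_levels(l=0, shell_on=True, n_levels=3):
          # Radial equation -u''/2 + [V + l(l+1)/(2 r^2)] u = E u, finite-differenced.
          V_eff = potential(r, shell_on) + l * (l + 1) / (2.0 * r ** 2)
          diag = 1.0 / h ** 2 + V_eff
          off = np.full(N - 1, -0.5 / h ** 2)
          evals = eigh_tridiagonal(diag, off, eigvals_only=True)
          return evals[:n_levels]

      print("free H   (lowest s levels):", lowest_levels(shell_on=False))
      print("caged H  (lowest s levels):", lowest_levels(shell_on=True))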

  1. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn-Sham calculations at high temperature

    Science.gov (United States)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.

    2018-03-01

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.

  2. Adsorption and diffusion of Ru adatoms on Ru(0001)-supported graphene: Large-scale first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Han, Yong; Evans, James W. [Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA and Ames Laboratory—U.S. Department of Energy, Iowa State University, Ames, Iowa 50011 (United States)

    2015-10-28

    Large-scale first-principles density functional theory calculations are performed to investigate the adsorption and diffusion of Ru adatoms on monolayer graphene (G) supported on Ru(0001). The G sheet exhibits a periodic moiré-cell superstructure due to lattice mismatch. Within a moiré cell, there are three distinct regions: fcc, hcp, and mound, in which the C6-ring center is above a fcc site, a hcp site, and a surface Ru atom of Ru(0001), respectively. The adsorption energy of a Ru adatom is evaluated at specific sites in these distinct regions. We find the strongest binding at an adsorption site above a C atom in the fcc region, next strongest in the hcp region, then the fcc-hcp boundary (ridge) between these regions, and the weakest binding in the mound region. Behavior is similar to that observed from small-unit-cell calculations of Habenicht et al. [Top. Catal. 57, 69 (2014)], which differ from previous large-scale calculations. We determine the minimum-energy path for local diffusion near the center of the fcc region and obtain a local diffusion barrier of ∼0.48 eV. We also estimate a significantly lower local diffusion barrier in the ridge region. These barriers and information on the adsorption energy variation facilitate development of a realistic model for the global potential energy surface for Ru adatoms. This in turn enables simulation studies elucidating diffusion-mediated directed-assembly of Ru nanoclusters during deposition of Ru on G/Ru(0001).

  3. Two-Level Chebyshev Filter Based Complementary Subspace Method: Pushing the Envelope of Large-Scale Electronic Structure Calculations.

    Science.gov (United States)

    Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E

    2018-06-12

    We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
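
    A much-simplified relative of the filtering strategy described above is plain Chebyshev-filtered subspace iteration: a polynomial filter damps the unwanted part of the spectrum, the filtered block is orthonormalized, and Rayleigh-Ritz extracts approximate eigenpairs. The sketch below shows that single-level procedure on a small dense matrix; it is not the two-level complementary-subspace scheme of the paper.

      # Basic Chebyshev-filtered subspace iteration: filter, orthonormalize, Rayleigh-Ritz.
      import numpy as np

      def chebyshev_filter(H, X, degree, a, b):
          """Amplify eigenvector components with eigenvalues below a; damp those in [a, b]."""
          e, c = 0.5 * (b - a), 0.5 * (b + a)
          Y = (H @ X - c * X) / e
          X_prev = X
          for _ in range(2, degree + 1):
              X_prev, Y = Y, 2.0 * (H @ Y - c * Y) / e - X_prev
          return Y

      rng = np.random.default_rng(0)
      n, n_occ, n_guard = 400, 20, 5
      H = rng.normal(size=(n, n)); H = 0.5 * (H + H.T)
      evals = np.linalg.eigvalsh(H)                     # reference, for bounds and checking
      k = n_occ + n_guard                               # a few guard vectors beyond n_occ
      a = 0.5 * (evals[k - 1] + evals[k])               # filter bounds: damp [a, lambda_max]
      b = evals[-1]

      X = np.linalg.qr(rng.normal(size=(n, k)))[0]
      for _ in range(8):
          X = chebyshev_filter(H, X, degree=12, a=a, b=b)
          X, _ = np.linalg.qr(X)                        # orthonormalize the filtered block
          w, V = np.linalg.eigh(X.T @ H @ X)            # Rayleigh-Ritz in the subspace
          X = X @ V

      print("max error in the", n_occ, "lowest eigenvalues:",
            np.abs(w[:n_occ] - evals[:n_occ]).max())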

  4. Shell model calculation of the nuclear moments of 9Li in a 2hω space

    International Nuclear Information System (INIS)

    Chang, Y.; Meder, M.R.

    1984-01-01

    A recent measurement of the magnitude of the quadrupole moment of the ground state of 9 Li, Q( 9 Li), finds that |Q( 9 Li)/Q( 7 Li)| = 0.88 ± 0.18. A variety of shell-model calculations, using p-shell wave functions, predict Q( 9 Li) ≈ 1.3 Q( 7 Li) and yield quadrupole moments whose magnitudes are approximately half the experimental values. Agreement between theory and experiment is improved when effective charges are used, although the results are still not completely satisfactory. A calculation of the wave functions of the low-lying states of 7 Li and 9 Li using a modified version of the Sussex matrix elements in a model space, including all 0hω and 2hω excitations, has been performed. The resulting value for Q( 9 Li) was -3.46 fm 2 . γ-ray transitions in 52,53 Cr and 54,55 Mn have been observed using 7 Li( 51 V,xn yp zα γ) fusion-evaporation reactions and γ-particle coincidence techniques. The experiment involved the same reaction at the same center-of-mass energy as the earlier work of Poletti et al., but with target and projectile interchanged. In the present work, eight additional transitions have been identified as occurring in 52 Cr. This provides corroboration of results obtained more recently via 50 Ti(α,2nγ) 52 Cr reaction studies. A simple, efficient approach to the spectroscopy of weakly populated nuclear states which provides for unambiguous isotopic assignments is thus demonstrated.

  5. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    Czech Academy of Sciences Publication Activity Database

    Dytrych, Tomáš; Maris, P.; Launey, K. D.; Draayer, J. P.; Vary, J. P.; Langr, D.; Saule, E.; Caprio, M. A.; Catalyurek, U.; Sosonkina, M.

    2016-01-01

    Vol. 207, OCT (2016), pp. 202-210 ISSN 0010-4655 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: nuclear structure * ab initio methods * shell model * models based on group theory Subject RIV: BE - Theoretical Physics Impact factor: 3.936, year: 2016

  6. Large scale free energy calculations for blind predictions of protein-ligand binding: the D3R Grand Challenge 2015.

    Science.gov (United States)

    Deng, Nanjie; Flynn, William F; Xia, Junchao; Vijayan, R S K; Zhang, Baofeng; He, Peng; Mentes, Ahmet; Gallicchio, Emilio; Levy, Ronald M

    2016-09-01

    We describe binding free energy calculations in the D3R Grand Challenge 2015 for blind prediction of the binding affinities of 180 ligands to Hsp90. The present D3R challenge was built around experimental datasets involving Heat shock protein (Hsp) 90, an ATP-dependent molecular chaperone which is an important anticancer drug target. The Hsp90 ATP binding site is known to be a challenging target for accurate calculations of ligand binding affinities because of the ligand-dependent conformational changes in the binding site, the presence of ordered waters and the broad chemical diversity of ligands that can bind at this site. Our primary focus here is to distinguish binders from nonbinders. Large scale absolute binding free energy calculations that cover over 3000 protein-ligand complexes were performed using the BEDAM method starting from docked structures generated by Glide docking. Although the ligand dataset in this study resembles an intermediate to late stage lead optimization project while the BEDAM method is mainly developed for early stage virtual screening of hit molecules, the BEDAM binding free energy scoring has resulted in a moderate enrichment of ligand screening against this challenging drug target. Results show that using a statistical-mechanics-based free energy method like BEDAM, starting from docked poses, offers better enrichment than classical docking scoring functions and rescoring methods like Prime MM-GBSA for the Hsp90 data set in this blind challenge. Importantly, among the three methods tested here, only the mean value of the BEDAM binding free energy scores is able to separate the large group of binders from the small group of nonbinders with a gap of 2.4 kcal/mol. None of the three methods that we have tested provided accurate ranking of the affinities of the 147 active compounds. We discuss the possible sources of errors in the binding free energy calculations. The study suggests that BEDAM can be used strategically to discriminate ...

  7. History and future perspectives of the Monte Carlo shell model -from Alphleet to K computer-

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Otsuka, Takaharu; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio; Abe, Takashi

    2013-01-01

    We report a history of the developments of the Monte Carlo shell model (MCSM). The MCSM was proposed in order to perform large-scale shell-model calculations which the direct diagonalization method cannot reach. Since 1999, PC clusters have been introduced for parallel computation of the MCSM. Since 2011, we have participated in the High Performance Computing Infrastructure Strategic Program and developed a new MCSM code for current massively parallel computers such as the K computer. We discuss future perspectives concerning a new framework and parallel computation of the MCSM, incorporating the conjugate gradient method and energy-variance extrapolation.

  8. Structure of 14C via elastic and inelastic neutron scattering from 13C: Measurement, R-matrix analysis, and shell model calculations

    International Nuclear Information System (INIS)

    Resler, D.A.

    1987-03-01

    The specific purpose of this work is to provide a better understanding of the 14 C level structure; the general purpose is to provide the details for using shell model calculations in R-matrix analyses. Using the TOF facilities of the Ohio University Accelerator Laboratory, the elastic and first 3 inelastic differential scattering cross sections for 13 C + n were measured at 69 energies for 4.5 ≤ E n ≤ 11 MeV. A multiple scattering code was developed which provided a simulation of the experimental scattering process allowing accurate corrections to the small inelastic data. The integrated 13 C(n,α) 10 Be cross section is estimated. The sequential 2n-decay of 14 C states populated by 13 C + n was observed. A shell model code was developed. Normal and nonnormal parity calculations were made for the lithium isotopes using a new two-body interaction. The results for 5 Li predict the 2s 1/2 and 1d 5/2 single-particle states to be located below the 3/2 + state. Similar calculations were made for 13 C, 13 N, and 14 C. Results for 13 C and 13 N show for E x ... For 7 Li and 14 C, 2ħω calculations were done. Shell model calculations generated the R-matrix parameters for the elastic and first 3 inelastic channels of 13 C + n. After adjusting some energies, the predicted structure generally agrees with experiment for E n ... The 13 C + n data were refit to replace R 0 background terms by more realistic broad states and to get better agreement with model calculations. R-matrix fitting of the full data set produced new 14 C level information. For E n > 4 MeV (E x > 12 MeV), 5 states are given definite J π assignments; 3, tentative assignments. 122 refs., 91 figs., 30 tabs.

  9. Shell model calculations for the A ∼ 100 region: application to the even-Z N = 52 isotones 92Zr -100Cd

    International Nuclear Information System (INIS)

    Halse, P.

    1993-01-01

    Shell model calculations for the Z = 38 - 50 N > 50 region, with a space of protons in the 2p 1/2 , 1g 9/2 orbits and neutrons in the 2d 5/2 , 3s 1/2 , 2d 3/2 , 1g 7/2 orbits, are initiated by the selection of a schematic Hamiltonian and effective electromagnetic operators. An application to the Z-even N = 52 isotones gives a good description for energies of both low-spin and yrast high-spin levels, and for E2 and M1 transition strengths and moments where these have been measured, over the entire range of Z. The calculated E2 matrix elements for the lower-spin yrast states in 98 Pd and 100 Cd suggest a finite and stable prolate deformation with β ∼ 0.1, in marked contrast to previous collective interpretations. 28 refs., 4 tabs., 4 figs

  10. Lattice design and beam optics calculations for the new large-scale electron-positron collider FCC-ee

    CERN Document Server

    Haerer, Bastian; Prof. Dr. Schmidt, Ruediger; Dr. Holzer, Bernhard

    Following the recommendations of the European Strategy Group for High Energy Physics, CERN launched the Future Circular Collider Study (FCC) to investigate the feasibility of large-scale circular colliders for future high energy physics research. This thesis presents the considerations taken into account during the design process of the magnetic lattice in the arc sections of the electron-positron version FCC-ee. The machine is foreseen to operate at four different centre-of-mass energies in the range of 90 to 350 GeV. Different beam parameters need to be achieved for every energy, which requires a flexible lattice design in the arc sections. Therefore methods to tune the horizontal beam emittance without re-positioning machine components are implemented. In combination with damping and excitation wigglers a precise adjustment of the emittance can be achieved. A very first estimation of the vertical emittance arising from lattice imperfections is performed. Special emphasis is put on the optimisation of the ...

  11. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, H [Institute for Molecular Science, Okazaki, Aichi (Japan)

    1982-06-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience.

  12. Large-scale theoretical calculations in molecular science - design of a large computer system for molecular science and necessary conditions for future computers

    International Nuclear Information System (INIS)

    Kashiwagi, H.

    1982-01-01

    A large computer system was designed and established for molecular science under the leadership of molecular scientists. Features of the computer system are an automated operation system and an open self-service system. Large-scale theoretical calculations have been performed to solve many problems in molecular science, using the computer system. Necessary conditions for future computers are discussed on the basis of this experience. (orig.)

  13. Quark shell model using projection operators

    International Nuclear Information System (INIS)

    Ullah, N.

    1988-01-01

    Using the projection operators in the quark shell model, the wave functions for the proton are calculated, and expressions for calculating the wave function of the neutron as well as the magnetic moments of the proton and neutron are derived. (M.G.B.)

  14. Shell model studies in the N = 54 isotones 99Rh, 100Pd

    International Nuclear Information System (INIS)

    Ghugre, S.S.; Sarkar, S.; Chintalapudi, S.N.

    1996-01-01

    The ability of the shell model to reproduce the observed level sequences in 99 Rh and 100 Pd is investigated within the spherical shell model framework. Shell model calculations have been performed using the code OXBASH.

  15. Large-scale FMO-MP3 calculations on the surface proteins of influenza virus, hemagglutinin (HA) and neuraminidase (NA)

    Science.gov (United States)

    Mochizuki, Yuji; Yamashita, Katsumi; Fukuzawa, Kaori; Takematsu, Kazutomo; Watanabe, Hirofumi; Taguchi, Naoki; Okiyama, Yoshio; Tsuboi, Misako; Nakano, Tatsuya; Tanaka, Shigenori

    2010-06-01

    Two proteins on the influenza virus surface are well known. One is hemagglutinin (HA), which is associated with infection of cells. The fragment molecular orbital (FMO) calculations were performed on a complex consisting of the HA trimer and two Fab fragments at the third-order Møller-Plesset perturbation (MP3) level. The numbers of residues and 6-31G basis functions were 2351 and 201276, respectively, and thus a massively parallel-vector computer was utilized to accelerate the processing. This FMO-MP3 job was completed in 5.8 h with 1024 processors. The other protein is neuraminidase (NA), which is involved in the escape from infected cells. The FMO-MP3 calculation was also applied to analyze the interactions between oseltamivir and the surrounding residues in the pharmacophore.

  16. Tensor-decomposed vibrational coupled-cluster theory: Enabling large-scale, highly accurate vibrational-structure calculations

    Science.gov (United States)

    Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove

    2018-01-01

    A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension but still formally includes the full parameter space of the VCC[n] model in question resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allows for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which indeed are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
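
    As a schematic illustration of why the canonical polyadic format pays off, the sketch below (plain NumPy, with random factor matrices standing in for amplitude tensors; the dimensions, rank and variable names are illustrative assumptions, not values from the paper) contracts two rank-R CP tensors without ever forming them in full dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-R canonical polyadic (CP) representation of a 3-mode tensor:
#   T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
# Sizes and rank below are illustrative only.
I, J, K, R = 30, 30, 30, 5
A, B, C = (rng.normal(size=(n, R)) for n in (I, J, K))
A2, B2, C2 = (rng.normal(size=(n, R)) for n in (I, J, K))

# Full tensors, built here only to verify the CP result; CP algorithms never form them
T = np.einsum('ir,jr,kr->ijk', A, B, C)
S = np.einsum('ir,jr,kr->ijk', A2, B2, C2)

# Inner product <T, S> evaluated directly in CP format:
# cost O(R^2 * (I + J + K)) instead of O(I * J * K)
inner_cp = np.sum((A.T @ A2) * (B.T @ B2) * (C.T @ C2))
inner_full = np.tensordot(T, S, axes=3)
print(np.allclose(inner_cp, inner_full))   # True
```

    The same kind of factorized contraction, combined with rank reduction of intermediates, is what keeps the scaling and memory of CP-based amplitude equations low.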

  17. Quantum Mechanical Calculation of Noncovalent Interactions: A Large-Scale Evaluation of PMx, DFT, and SAPT Approaches.

    Science.gov (United States)

    Li, Amanda; Muddana, Hari S; Gilson, Michael K

    2014-04-08

    Quantum mechanical (QM) calculations of noncovalent interactions are uniquely useful as tools to test and improve molecular mechanics force fields and to model the forces involved in biomolecular binding and folding. Because the more computationally tractable QM methods necessarily include approximations, which risk degrading accuracy, it is essential to evaluate such methods by comparison with high-level reference calculations. Here, we use the extensive Benchmark Energy and Geometry Database (BEGDB) of CCSD(T)/CBS reference results to evaluate the accuracy and speed of widely used QM methods for over 1200 chemically varied gas-phase dimers. In particular, we study the semiempirical PM6 and PM7 methods; density functional theory (DFT) approaches B3LYP, B97-D, M062X, and ωB97X-D; and symmetry-adapted perturbation theory (SAPT) approach. For the PM6 and DFT methods, we also examine the effects of post hoc corrections for hydrogen bonding (PM6-DH+, PM6-DH2), halogen atoms (PM6-DH2X), and dispersion (DFT-D3 with zero and Becke-Johnson damping). Several orders of the SAPT expansion are also compared, ranging from SAPT0 up to SAPT2+3, where computationally feasible. We find that all DFT methods with dispersion corrections, as well as SAPT at orders above SAPT2, consistently provide dimer interaction energies within 1.0 kcal/mol RMSE across all systems. We also show that a linear scaling of the perturbative energy terms provided by the fast SAPT0 method yields similar high accuracy, at particularly low computational cost. The energies of all the dimer systems from the various QM approaches are included in the Supporting Information, as are the full SAPT2+(3) energy decomposition for a subset of over 1000 systems. The latter can be used to guide the parametrization of molecular mechanics force fields on a term-by-term basis.
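
    The "linear scaling of the perturbative energy terms" mentioned above amounts to a least-squares fit of one coefficient per SAPT0 component against reference interaction energies. A minimal sketch of that idea is shown below; the numbers are synthetic placeholders generated on the fly (not BEGDB data), so only the procedure, not the values, is meaningful:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: rows = dimers, columns = SAPT0 energy components
# (electrostatics, exchange, induction, dispersion), all in kcal/mol.
n_dimers = 1200
components = rng.normal(size=(n_dimers, 4)) * np.array([3.0, 2.5, 1.0, 2.0])
true_scale = np.array([1.0, 1.0, 1.05, 1.15])            # illustrative only
e_ref = components @ true_scale + rng.normal(scale=0.3, size=n_dimers)

# Unscaled SAPT0 estimate and its RMSE against the reference energies
e_sapt0 = components.sum(axis=1)
rmse_raw = np.sqrt(np.mean((e_sapt0 - e_ref) ** 2))

# Least-squares fit of one scale factor per perturbative term
coeffs, *_ = np.linalg.lstsq(components, e_ref, rcond=None)
rmse_fit = np.sqrt(np.mean((components @ coeffs - e_ref) ** 2))
print(rmse_raw, rmse_fit, coeffs)
```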

  18. Automation methodologies and large-scale validation for GW: Towards high-throughput GW calculations

    Science.gov (United States)

    van Setten, M. J.; Giantomassi, M.; Gonze, X.; Rignanese, G.-M.; Hautier, G.

    2017-10-01

    The search for new materials based on computational screening relies on methods that accurately predict, in an automatic manner, total energy, atomic-scale geometries, and other fundamental characteristics of materials. Many technologically important material properties directly stem from the electronic structure of a material, but the usual workhorse for total energies, namely density-functional theory, is plagued by fundamental shortcomings and errors from approximate exchange-correlation functionals in its prediction of the electronic structure. At variance, the GW method is currently the state-of-the-art ab initio approach for accurate electronic structure. It is mostly used to perturbatively correct density-functional theory results, but is, however, computationally demanding and also requires expert knowledge to give accurate results. Accordingly, it is not presently used in high-throughput screening: fully automatized algorithms for setting up the calculations and determining convergence are lacking. In this paper, we develop such a method and, as a first application, use it to validate the accuracy of G0W0 using the PBE starting point and the Godby-Needs plasmon-pole model (G0W0GN@PBE) on a set of about 80 solids. The results of the automatic convergence study provide valuable insights. Indeed, we find correlations between computational parameters that can be used to further improve the automatization of GW calculations. Moreover, we find that G0W0GN@PBE shows a correlation between the PBE and the G0W0GN@PBE gaps that is much stronger than that between GW and experimental gaps. However, the G0W0GN@PBE gaps still describe the experimental gaps more accurately than a linear model based on the PBE gaps. With this paper, we hence show that GW can be made automatic and is more accurate than using an empirical correction of the PBE gap, but that, for accurate predictive results for a broad class of materials, an improved starting point or some ...

  19. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)
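
    The core trick referred to above is the Hubbard-Stratonovich decomposition: the exponential of a squared one-body operator is rewritten as a Gaussian average of exponentials of linear (one-body) operators, which can then be sampled stochastically. The toy sketch below verifies this identity numerically for a single imaginary-time slice and a random 4x4 operator; it is only a cartoon of the idea, since the full SMMC method additionally needs grand-canonical traces of the one-body propagators and control of the sign problem:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy Hermitian one-body operator A and coupling lam > 0 (schematic, not a real force)
n = 4
X = rng.normal(size=(n, n))
A = (X + X.T) / 2
lam, dbeta = 1.0, 0.05                     # coupling strength and imaginary-time step

w, V = np.linalg.eigh(A)
def expA(c):                               # exp(c*A) via the shared eigenbasis of A
    return (V * np.exp(c * w)) @ V.T

# Exact short-time propagator of the "two-body" piece exp(dbeta*lam*A^2/2)
exact = (V * np.exp(0.5 * dbeta * lam * w**2)) @ V.T

# Hubbard-Stratonovich: exp(dbeta*lam*A^2/2) = <exp(dbeta*lam*sigma*A)>_sigma,
# where sigma is Gaussian with variance 1/(dbeta*lam).  The quadratic exponent
# becomes an average over linear (one-body) exponents, sampled by Monte Carlo.
nsamp = 50_000
sigma = rng.normal(scale=1.0 / np.sqrt(dbeta * lam), size=nsamp)
mc = sum(expA(dbeta * lam * s) for s in sigma) / nsamp

print(np.max(np.abs(mc - exact)))          # small; shrinks like 1/sqrt(nsamp)
```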

  20. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  1. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4 % of the world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so installation of large scale hydrogen production plants will be needed. In this context, development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared. Then, a state of the art of the electrolysis modules currently available was made. A review of the large scale electrolysis plants that have been installed in the world was also carried out. The main projects related to large scale electrolysis were also listed. The economics of large scale electrolysers are discussed. The influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  2. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  3. On the contribution of external cost calculations to energy system governance: The case of a potential large-scale nuclear accident

    International Nuclear Information System (INIS)

    Laes, Erik; Meskens, Gaston; Sluijs, Jeroen P. van der

    2011-01-01

    The contribution of nuclear power to a sustainable energy future is a contested issue. This paper presents a critical review of an attempt to objectify this debate through the calculation of the external costs of a potential large-scale nuclear accident in the ExternE project. A careful dissection of the ExternE approach resulted in a list of 30 calculation steps and assumptions, from which the 6 most contentious ones were selected through a stakeholder internet survey. The policy robustness and relevance of these key assumptions were then assessed in a workshop using the concept of a 'pedigree of knowledge'. Overall, the workshop outcomes revealed the stakeholder and expert panel's scepticism about the assumptions made: generally these were considered not very plausible, subjected to disagreement, and to a large extent inspired by contextual factors. Such criticism indicates a limited validity and useability of the calculated nuclear accident externality as a trustworthy sustainability indicator. Furthermore, it is our contention that the ExternE project could benefit greatly - in terms of gaining public trust - from employing highly visible procedures of extended peer review such as the pedigree assessment applied to our specific case of the external costs of a potential large-scale nuclear accident. - Highlights: → Six most contentious assumptions were selected through a stakeholder internet survey. → Policy robustness of these assumptions was assessed in a pedigree assessment workshop. → Assumptions were considered implausible, controversial, and inspired by contextual factors. → This indicates a limited validity and useability as a trustworthy sustainability indicator.

  4. Shell-model-based deformation analysis of light cadmium isotopes

    Science.gov (United States)

    Schmidt, T.; Heyde, K. L. G.; Blazhev, A.; Jolie, J.

    2017-07-01

    Large-scale shell-model calculations for the even-even cadmium isotopes 98Cd-108Cd have been performed with the antoine code in the π(2p1/2, 1g9/2) ν(2d5/2, 3s1/2, 2d3/2, 1g7/2, 1h11/2) model space without further truncation. Known experimental energy levels and B(E2) values could be well reproduced. Taking these calculations as a starting point, we analyze the deformation parameters predicted for the Cd isotopes as a function of neutron number N and spin J using the method of model-independent invariants introduced by Kumar [Phys. Rev. Lett. 28, 249 (1972), 10.1103/PhysRevLett.28.249] and Cline [Annu. Rev. Nucl. Part. Sci. 36, 683 (1986), 10.1146/annurev.ns.36.120186.003343].
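
    For context, the invariants in question are rotationally invariant combinations of E2 matrix elements. Schematically (normalisation factors and sign conventions differ between papers, so only the structure is indicated here),

    \langle Q^{2}\rangle=\langle i\,|\,\hat Q\cdot\hat Q\,|\,i\rangle
    =\sum_{t}\frac{|\langle i\,\|\,\hat Q\,\|\,t\rangle|^{2}}{2J_{i}+1},\qquad
    \beta_{\mathrm{eff}}\propto\sqrt{\langle Q^{2}\rangle},\qquad
    \cos 3\delta_{\mathrm{eff}}\propto
    \frac{\langle i\,|\,[\hat Q\times\hat Q\times\hat Q]^{(0)}\,|\,i\rangle}{\langle Q^{2}\rangle^{3/2}},

    so the quadratic invariant measures the overall (β-like) deformation and the cubic invariant its triaxiality.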

  5. Statistics and the shell model

    International Nuclear Information System (INIS)

    Weidenmueller, H.A.

    1985-01-01

    Starting with N. Bohr's paper on compound-nucleus reactions, we confront regular dynamical features and chaotic motion in nuclei. The shell-model and, more generally, mean-field theories describe average nuclear properties which are thus identified as regular features. The fluctuations about the average show chaotic behaviour of the same type as found in classical chaotic systems upon quantisation. These features are therefore generic and quite independent of the specific dynamics of the nucleus. A novel method to calculate fluctuations is discussed, and the results of this method are described. (orig.)

  6. Decaying and kicked turbulence in a shell model

    DEFF Research Database (Denmark)

    Hooghoudt, Jan Otto; Lohse, Detlef; Toschi, Federico

    2001-01-01

    Decaying and periodically kicked turbulence are analyzed within the Gledzer–Ohkitani–Yamada shell model, to allow for sufficiently large scaling regimes. Energy is transferred towards the small scales in intermittent bursts. Nevertheless, mean field arguments are sufficient to account for the ...
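
    For readers unfamiliar with the model, the Gledzer-Ohkitani-Yamada (GOY) equations couple complex shell velocities u_n on a geometric ladder of wavenumbers k_n. The sketch below integrates the commonly used 3-D parameter set with a plain fixed-step RK4 loop; the forcing shell, integration time and diagnostic are illustrative choices, and a run this short only hints at the statistically steady cascade studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gledzer-Ohkitani-Yamada (GOY) shell model, standard 3-D parameter choice:
# (d/dt + nu k_n^2) u_n = i ( k_n u_{n+1} u_{n+2} - (1/2) k_{n-1} u_{n-1} u_{n+1}
#                             - (1/2) k_{n-2} u_{n-1} u_{n-2} )^*  + f delta_{n,nf}
N, lam, k0 = 22, 2.0, 0.0625
k = k0 * lam ** np.arange(N)
nu, f, nf = 1e-7, 5e-3 * (1 + 1j), 3           # forcing applied on one low shell

def rhs(u):
    up = np.zeros(N + 4, dtype=complex)        # zero-padded copy: up[m+2] = u_m
    up[2:N + 2] = u
    nl = (k * up[3:N + 3] * up[4:N + 4]
          - 0.5 * (k / lam) * up[1:N + 1] * up[3:N + 3]
          - 0.5 * (k / lam**2) * up[1:N + 1] * up[0:N])
    du = 1j * np.conj(nl) - nu * k**2 * u
    du[nf] += f
    return du

u = 1e-3 * (rng.normal(size=N) + 1j * rng.normal(size=N))
dt, nsteps = 1e-4, 200_000                     # a short illustrative run
spec, nacc = np.zeros(N), 0
for step in range(nsteps):                     # fixed-step RK4
    k1 = rhs(u); k2 = rhs(u + 0.5*dt*k1); k3 = rhs(u + 0.5*dt*k2); k4 = rhs(u + dt*k3)
    u = u + (dt / 6) * (k1 + 2*k2 + 2*k3 + k4)
    if step >= nsteps // 2:                    # crude average over the second half
        spec += np.abs(u)**2; nacc += 1
spec /= nacc

# In a developed inertial range |u_n|^2 follows roughly k_n^(-2/3) (Kolmogorov),
# with intermittency corrections that need much longer averaging to resolve.
print(np.polyfit(np.log(k[4:15]), np.log(spec[4:15] + 1e-300), 1)[0])
```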

  7. Shell model description of Ge isotopes

    International Nuclear Information System (INIS)

    Hirsch, J G; Srivastava, P C

    2012-01-01

    A shell model study of the low energy region of the spectra in Ge isotopes for 38 ≤ N ≤ 50 is presented, analyzing the excitation energies, quadrupole moments, B(E2) values and occupation numbers. The theoretical results have been compared with the available experimental data. The shell model calculations have been performed employing three different effective interactions and valence spaces. We have used two effective shell model interactions, JUN45 and jj44b, for the valence space f 5/2 pg 9/2 without truncation. To include the proton subshell f 7/2 in valence space we have employed the fpg effective interaction due to Sorlin et al., with 48 Ca as a core and a truncation in the number of excited particles.

  8. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test with a view to ensuring the safety of light water reactors was started in fiscal 1976 based on the special account act for power source development promotion measures by the entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April, 1980. Thereupon, the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of the mock-ups of pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  9. Stochastic estimation of nuclear level density in the nuclear shell model: An application to parity-dependent level density in 58Ni

    Directory of Open Access Journals (Sweden)

    Noritaka Shimizu

    2016-02-01

    We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ=2+ and 2− states in 58Ni in a unified manner.
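
    The paper's estimator is built on shifted Krylov-subspace solves of the resolvent; a closely related and simpler-to-show idea is the stochastic (Hutchinson-type) kernel-polynomial estimate of the level density, sketched below on a small dense random matrix standing in for a shell-model Hamiltonian. This is a generic substitute shown for illustration, not the algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Dense random stand-in for a shell-model Hamiltonian (real symmetric)
dim = 600
A = rng.normal(size=(dim, dim))
H = (A + A.T) / np.sqrt(2 * dim)              # spectrum roughly within [-2, 2]

# Rescale the spectrum into (-1, 1) so Chebyshev polynomials T_m can be used
a, b = 2.2, 0.0
Ht = (H - b * np.eye(dim)) / a

M, nsamp = 200, 20                            # moments and random probe vectors
mu = np.zeros(M)
for _ in range(nsamp):
    v = rng.choice([-1.0, 1.0], size=dim)     # Hutchinson probe: E[v^T X v] = Tr X
    t0, t1 = v, Ht @ v
    mu[0] += v @ t0
    mu[1] += v @ t1
    for m in range(2, M):                     # T_{m+1} = 2*Ht*T_m - T_{m-1}
        t0, t1 = t1, 2 * (Ht @ t1) - t0
        mu[m] += v @ t1
mu /= nsamp

# Jackson damping suppresses Gibbs oscillations in the reconstructed density
m = np.arange(M)
g = ((M - m + 1) * np.cos(np.pi * m / (M + 1))
     + np.sin(np.pi * m / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

xg = np.linspace(-0.99, 0.99, 400)            # grid of rescaled energies
Tm = np.cos(np.outer(np.arccos(xg), m))       # T_m(x) = cos(m arccos x)
rho = (g[0] * mu[0] + 2 * Tm[:, 1:] @ (g[1:] * mu[1:])) / (np.pi * np.sqrt(1 - xg**2))
E, rho = a * xg + b, rho / a                  # back to the original energy scale

print(np.trapz(rho, E))                       # integrates to ~dim (total level count)
```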

  10. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out ... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors ...). Simulation programs are proposed as a control-support tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given.

  11. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    Science.gov (United States)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a
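
    The mechanics behind the preconditioned L-BFGS inverse-Hessian approximation are easy to show in isolation. The sketch below applies the standard two-loop recursion, seeded with a diagonal initial inverse Hessian (the preconditioner), to a small synthetic quadratic cost; the problem size, memory length and test matrix are illustrative assumptions, and the last line only gauges how well the stored pairs approximate one column of the true inverse Hessian (the analysis-error covariance proxy discussed above):

```python
import numpy as np

rng = np.random.default_rng(3)

def lbfgs_apply(v, s_list, y_list, d0):
    """Two-loop recursion: multiply v by the L-BFGS inverse-Hessian approximation,
    seeded with the diagonal preconditioner d0 as the initial inverse Hessian."""
    q = v.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(s_list[::-1], y_list[::-1], rhos[::-1]):
        a = rho * (s @ q); alphas.append(a); q = q - a * y
    r = d0 * q
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), alphas[::-1]):
        r = r + (a - rho * (y @ r)) * s
    return r

# Quadratic stand-in for an incremental 4D-Var cost: J(x) = 0.5 x^T A x - b^T x
n, mem = 50, 10
Q = rng.normal(size=(n, n)); A = Q @ Q.T / n + np.eye(n)
b = rng.normal(size=n)
d0 = 1.0 / np.diag(A)                        # diagonal preconditioner

x = np.zeros(n); g = A @ x - b
s_list, y_list = [], []
for _ in range(40):
    if np.linalg.norm(g) < 1e-10:
        break
    p = -lbfgs_apply(g, s_list, y_list, d0)
    alpha = -(g @ p) / (p @ A @ p)           # exact line search for a quadratic
    x_new = x + alpha * p; g_new = A @ x_new - b
    s_list.append(x_new - x); y_list.append(g_new - g)
    if len(s_list) > mem:                    # keep only the latest `mem` pairs
        s_list.pop(0); y_list.pop(0)
    x, g = x_new, g_new

# The stored pairs also give a low-rank estimate of A^{-1} (analysis-error proxy)
e0 = np.zeros(n); e0[0] = 1.0
err = lbfgs_apply(e0, s_list, y_list, d0) - np.linalg.solve(A, e0)
print(np.linalg.norm(g), np.linalg.norm(err))
```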

  12. The energy saving possibilities of CDS (Commodity Services System) for large-scale consumers. Calculating and combining for an ideal mix

    International Nuclear Information System (INIS)

    Van Gelder, J.W.

    1999-01-01

    Gasunie's new price scheme, the Commodity Services System (CDS), will soon apply to all large-scale consumers. Although the scheme is complicated, it does offer various opportunities to reduce the gas bill. Until now, the natural gas price has been determined solely by volume, whereas the costs largely depend on the services rendered to the customer, such as transmission and capacity. The logical solution is thus to charge the customer separately for the gas and the services. In the future, the gas bill will therefore comprise three elements: commodity (gas), capacity and transmission. As far as capacity is concerned, the user can choose between base capacity, additional capacity, incidental capacity and hourly capacity, depending on the natural gas consumption pattern.

  13. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  14. Shell Model Far From Stability: Island of Inversion Mergers

    Science.gov (United States)

    Nowacki, F.; Poves, A.

    2018-02-01

    In this study we propose a common mechanism for the disappearance of shell closures far from stability. With the use of Large Scale Shell Model calculations (SM-CI), we predict that the region of deformation which comprises the heaviest Chromium and Iron isotopes at and beyond N=40 will merge with a new one at N=50, in an astonishing parallel to the N=20 and N=28 case in the Neon and Magnesium isotopes. We propose a valence space including the full pf-shell for the protons and the full sdg shell for the neutrons, which represents a come-back of the harmonic oscillator shells in the very neutron rich regime. Our calculations preserve the doubly magic nature of the ground state of 78Ni, which, however, exhibits a well deformed prolate band at low excitation energy, providing a striking example of shape coexistence far from stability. This new Island of Inversion (IoI) adds to the four well documented ones at N=8, 20, 28 and 40.

  15. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    Science.gov (United States)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
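
    The essence of a Chebyshev (Fermi-operator) expansion is to build the density matrix as a polynomial in the Hamiltonian, so that only matrix products, which can exploit sparsity, are needed and diagonalization is avoided. The sketch below does this for a toy one-dimensional tight-binding chain at finite electronic temperature; the chain, rescaling bounds, chemical potential and expansion order are illustrative assumptions, and dense NumPy matrices are used only to keep the example short (a linear-scaling code would use sparse matrices and truncation):

```python
import numpy as np

# 1-D tight-binding chain as a stand-in for a carbon tight-binding Hamiltonian
N, t = 200, -1.0
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = t

# Target: the density matrix P = f(H), with f the Fermi function at (mu, kT)
mu, kT = 0.0, 0.05
a, b = 2.5, 0.0                         # rescale the spectrum of H into (-1, 1)
Ht = (H - b * np.eye(N)) / a

# Chebyshev coefficients of f(a*x + b) from values at Chebyshev-Gauss nodes
M = 300
j = np.arange(M)
xj = np.cos(np.pi * (j + 0.5) / M)
fj = 1.0 / (1.0 + np.exp((a * xj + b - mu) / kT))
c = (2.0 / M) * np.cos(np.pi * np.outer(np.arange(M), j + 0.5) / M) @ fj
c[0] *= 0.5

# Sum the matrix series P = sum_m c_m T_m(Ht) with the three-term recurrence
T0, T1 = np.eye(N), Ht
P = c[0] * T0 + c[1] * T1
for m in range(2, M):
    T0, T1 = T1, 2 * Ht @ T1 - T0
    P += c[m] * T1

# Reference: exact Fermi operator via diagonalization
e, V = np.linalg.eigh(H)
P_exact = (V * (1.0 / (1.0 + np.exp((e - mu) / kT)))) @ V.T
print(np.max(np.abs(P - P_exact)))      # small for sufficiently large M
```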

  16. Toward enabling large-scale open-shell equation-of-motion coupled cluster calculations: triplet states of β-carotene

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Hanshi; Bhaskaran-Nair, Kiran; Apra, Edoardo; Govind, Niranjan; Kowalski, Karol

    2014-10-02

    In this paper we discuss the application of a novel parallel implementation of the coupled cluster (CC) and equation-of-motion coupled cluster (EOMCC) methods in calculations of excitation energies of triplet states in beta-carotene. Calculated excitation energies are compared with experimental data, where available. We also provide a detailed description of the new parallel algorithms for iterative CC and EOMCC models involving singles and doubles excitations.

  17. Reactivity of the binuclear non-heme iron active site of delta(9) desaturase studied by large-scale multireference ab initio calculations

    Czech Academy of Sciences Publication Activity Database

    Chalupský, Jakub; Rokob, Tibor András; Kurashige, Y.; Yanai, T.; Solomon, E. I.; Rulíšek, Lubomír; Srnec, Martin

    2014-01-01

    Roč. 136, č. 45 (2014), s. 15977-15991 ISSN 0002-7863 R&D Projects: GA ČR(CZ) GA14-31419S Institutional support: RVO:61388963 ; RVO:61388955 Keywords : DMRG-CASPT2 * ab initio calculations * reaction mechanisms Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 12.113, year: 2014

  18. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement (''the Task'') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  19. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
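
    A standard ingredient of such procedures is direct differentiation of the static equilibrium equations: for K(p) u = f, the displacement sensitivity is du/dp = -K^{-1} (dK/dp) u, so one extra back-substitution per parameter replaces a full reanalysis. The sketch below checks this on a hypothetical three-spring chain (the geometry, spring constants and parameter choice are made up for illustration):

```python
import numpy as np

# Direct (analytic) sensitivity of a linear static response K(p) u = f with
# respect to a design parameter p:  du/dp = -K^{-1} (dK/dp) u.
# Illustrative 3-DOF spring chain; the middle spring stiffness is the parameter.
def stiffness(p):
    k1, k2, k3 = 4.0, p, 2.0
    return np.array([[k1 + k2, -k2, 0.0],
                     [-k2, k2 + k3, -k3],
                     [0.0, -k3, k3]])

f = np.array([0.0, 0.0, 1.0])
p0, dp = 3.0, 1e-6

K = stiffness(p0)
u = np.linalg.solve(K, f)
dK_dp = (stiffness(p0 + dp) - stiffness(p0 - dp)) / (2 * dp)   # exact here (linear in p)
du_analytic = -np.linalg.solve(K, dK_dp @ u)

# Finite-difference check based on two full reanalyses
du_fd = (np.linalg.solve(stiffness(p0 + dp), f)
         - np.linalg.solve(stiffness(p0 - dp), f)) / (2 * dp)
print(np.max(np.abs(du_analytic - du_fd)))                     # ~machine precision
```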

  20. A large-scale R-matrix calculation for electron-impact excitation of the Ne2 +, O-like ion

    OpenAIRE

    McLaughlin, B M; Lee, Teck-Ghee; Ludlow, J A; Landi, E; Loch, S D; Pindzola, M S; Ballance, C P

    2011-01-01

    The five Jπ levels within an np2 or np4 ground state complex provide an excellent testing ground for the comparison of theoretical line ratios with astrophysically observed values, in addition to providing valuable electron temperature and density diagnostics. The low temperature nature of the line ratios ensures that the theoretically derived values are sensitive to the underlying atomic structure and electron-impact excitation rates. Previous R-matrix calculations for the O-like ...

  1. Local unitary transformation method for large-scale two-component relativistic calculations. II. Extension to two-electron Coulomb interaction.

    Science.gov (United States)

    Seino, Junji; Nakai, Hiromi

    2012-10-14

    The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to a two-particle IODKH Hamiltonian as well as one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X(2) and hydrogen halide molecules, (HX)(n) (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear-scaling with respect to the system size and a small prefactor.

  2. Temporal structures in shell models

    DEFF Research Database (Denmark)

    Okkels, F.

    2001-01-01

    The intermittent dynamics of the turbulent Gledzer, Ohkitani, and Yamada shell-model is completely characterized by a single type of burstlike structure, which moves through the shells like a front. This temporal structure is described by the dynamics of the instantaneous configuration of the shell...

  3. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas. Surveys varied subject areas and reports on individual results of research in the field. Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  4. Local unitary transformation method for large-scale two-component relativistic calculations: case for a one-electron Dirac Hamiltonian.

    Science.gov (United States)

    Seino, Junji; Nakai, Hiromi

    2012-06-28

    An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed in diatomic molecules such as HX and X(2) (X = F, Cl, Br, I, and At) and hydrogen halide clusters, (HX)(n) (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational costs of the LUT method are drastically lower than those of conventional methods since in the former there is linear-scaling with respect to the system size and a small prefactor.

  5. Large scale collective modeling the final 'freeze out' stages of energetic heavy ion reactions and calculation of single particle measurables from these models

    International Nuclear Information System (INIS)

    Nyiri, Agnes

    2005-01-01

    The goal of this PhD project was to develop the already existing, but far not complete Multi Module Model, specially focusing on the last module which describes the final stages of a heavy ion collision, as this module was still missing. The major original achievements summarized in this thesis correspond to the freeze out problem and calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which then can be compared to the experimental results. Therefore, it is crucial to find a realistic freeze out description, which is proved to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Based on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applicability of the Boltzmann Transport Equation (BTE) to describe dynamical freeze out. We have introduced the so-called Modified Boltzmann Transport Equation, which has a form very similar to that of the BTE, but takes into account those characteristics of the FO process which the BTE can not handle, e.g. the rapid change of the phase-space distribution function in the direction normal to the finite FO layer. We have shown that the main features of earlier ad hoc kinetic FO models can be obtained from BTE and MBTE. We have discussed the qualitative differences between the two approaches and presented some quantitative comparison as well. Since the introduced modification of the BTE makes it very difficult to solve the FO problem from the first principles, it is important to work out simplified phenomenological models, which can explain the basic features of the FO process. We have built and discussed such a model. Flow analysis: The other main subject of this thesis has been the collective flow in heavy ion collisions. Collective flow from ultra

  6. Large scale collective modeling the final 'freeze out' stages of energetic heavy ion reactions and calculation of single particle measurables from these models

    Energy Technology Data Exchange (ETDEWEB)

    Nyiri, Agnes

    2005-07-01

    The goal of this PhD project was to develop the already existing, but far not complete Multi Module Model, specially focusing on the last module which describes the final stages of a heavy ion collision, as this module was still missing. The major original achievements summarized in this thesis correspond to the freeze out problem and calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which then can be compared to the experimental results. Therefore, it is crucial to find a realistic freeze out description, which is proved to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Based on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applicability of the Boltzmann Transport Equation (BTE) to describe dynamical freeze out. We have introduced the so-called Modified Boltzmann Transport Equation, which has a form very similar to that of the BTE, but takes into account those characteristics of the FO process which the BTE can not handle, e.g. the rapid change of the phase-space distribution function in the direction normal to the finite FO layer. We have shown that the main features of earlier ad hoc kinetic FO models can be obtained from BTE and MBTE. We have discussed the qualitative differences between the two approaches and presented some quantitative comparison as well. Since the introduced modification of the BTE makes it very difficult to solve the FO problem from the first principles, it is important to work out simplified phenomenological models, which can explain the basic features of the FO process. We have built and discussed such a model. Flow analysis: The other main subject of this thesis has been the collective flow in heavy ion collisions. Collective flow from ultra

  8. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish a large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and out sourcing. The article is part of a planned series

  9. Shell Models of Superfluid Turbulence

    International Nuclear Information System (INIS)

    Wacks, Daniel H; Barenghi, Carlo F

    2011-01-01

    Superfluid helium consists of two inter-penetrating fluids, a viscous normal fluid and an inviscid superfluid, coupled by a mutual friction. We develop a two-fluid shell model to study superfluid turbulence and investigate the energy spectra and the balance of fluxes between the two fluids in a steady state. At sufficiently low temperatures a 'bottle-neck' develops at high wavenumbers suggesting the need for a further dissipative effect, such as the Kelvin wave cascade.

  10. Automating large-scale LEMUF calculations

    International Nuclear Information System (INIS)

    Picard, R.R.

    1992-01-01

    To better understand material unaccounted for (MUFs) and, in some cases, to comply with formal regulatory requirements, many facilities are paying increasing attention to software for MUF evaluation. Activities related to improving understanding of MUFs are generic (including the identification, by name, of individual measured values and individual special nuclear material (SNM) items in a data base, and the handling of a wide variety of accounting problems) as well as facility-specific (including interfacing a facility's data base to a computational engine and subsequent uses of that engine). Los Alamos efforts to develop a practical engine are reviewed and some of the lessons learned during that development are described in this paper

  11. Experimental investigation shell model excitations of 89Zr up to high spin and its comparison with 88,90Zr

    International Nuclear Information System (INIS)

    Saha, S.; Palit, R.; Sethi, J.

    2012-01-01

    The excited states of nuclei near the N=50 closed shell provide a suitable laboratory for testing the interactions of shell model states and the possible presence of high spin isomers, and they help in understanding the shape transition as the higher orbitals are occupied. In particular, the structure of the N = 49 isotones (Z = 32 to 46) with one hole in the N=50 shell gap has been investigated using different reactions. Interestingly, the high spin states in these isotones have contributions from particle excitations across the respective proton and neutron shell gaps and provide a suitable testing ground for the predictions of shell model interactions describing these excitations across the shell gap. In the literature, extensive studies of the high spin states of the heavier N = 49 isotones, from 91 Mo up to 95 Pd, are available, whereas limited information existed on the high spin states of the lighter isotones. Therefore, the motivation of the present work is to extend the high spin structure of 89 Zr and to characterize the structure of these levels through comparison with large scale shell model calculations based on two new residual interactions in the f 5/2 pg 9/2 model space.

  12. Dynamical symmetries of the shell model

    International Nuclear Information System (INIS)

    Van Isacker, P.

    2000-01-01

    The applications of spectrum generating algebras and of dynamical symmetries in the nuclear shell model are many and varied. They stretch back to Wigner's early work on the supermultiplet model and encompass important landmarks in our understanding of the structure of the atomic nucleus such as Racah's SU(2) pairing model and Elliot's SU(3) rotational model. One of the aims of this contribution has been to show the historical importance of the idea of dynamical symmetry in nuclear physics. Another has been to indicate that, in spite of being old, this idea continues to inspire developments that are at the forefront of today's research in nuclear physics. It has been argued in this contribution that the main driving features of nuclear structure can be represented algebraically but at the same time the limitations of the symmetry approach must be recognised. It should be clear that such approach can only account for gross properties and that any detailed description requires more involved numerical calculations of which we have seen many fine examples during this symposium. In this way symmetry techniques can be used as an appropriate starting point for detailed calculations. A noteworthy example of this approach is the pseudo-SU(3) model which starting from its initial symmetry Ansatz has grown into an adequate and powerful description of the nucleus in terms of a truncated shell model. (author)

  13. Statistical properties of the nuclear shell-model Hamiltonian

    International Nuclear Information System (INIS)

    Dias, H.; Hussein, M.S.; Oliveira, N.A. de

    1986-01-01

    The statistical properties of a realistic nuclear shell-model Hamiltonian are investigated in sd-shell nuclei. The probability distribution of the basis-vector amplitudes is calculated and compared with the Porter-Thomas distribution. The relevance of the results to the calculation of the giant resonance mixing parameter is pointed out. (Author) [pt
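
    For reference, the Porter-Thomas distribution invoked here is the chi-squared distribution with one degree of freedom for the normalised squared amplitudes: writing y = c^2/\langle c^2\rangle for a basis-vector amplitude c,

    P(y)\,dy=\frac{1}{\sqrt{2\pi y}}\;e^{-y/2}\,dy ,

    which is what one expects if the amplitudes behave like independent Gaussian random variables.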

  14. On the shell model connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.; Levai, G.; Kato, K.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand the relation of the shell model and cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known, but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: Can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energy of the states of the cluster bands, defined by a specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model, in which not only the cluster model space is obtained from the full shell model space by an SU(3) symmetry-dictated truncation, but SU(3) dynamically symmetric interactions are also applied. Actually, Hamiltonians of this kind proved to be successful in describing the gross features of cluster states in a wide energy range. The novel feature of the present work is that we apply exclusively shell model interactions. The energies obtained from such a Hamiltonian for several bands of the ( 12 C, 14 C, 16 O, 20 Ne, 40 Ca) + α systems turn out to be in good agreement with the experimental

  15. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120h -1 Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120h -1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  16. No-Core Shell Model and Reactions

    International Nuclear Information System (INIS)

    Navratil, P; Ormand, W E; Caurier, E; Bertulani, C

    2005-01-01

    There has been significant progress in ab initio approaches to the structure of light nuclei. Starting from realistic two- and three-nucleon interactions, the ab initio no-core shell model (NCSM) can predict low-lying levels in p-shell nuclei. It is a challenging task to extend ab initio methods to describe nuclear reactions. In this contribution, we present a brief overview of the NCSM with examples of recent applications as well as the first steps taken toward nuclear reaction applications. In particular, we discuss cross section calculations of p+ 6 Li and 6 He+p scattering as well as a calculation of the astrophysically important 7 Be(p, γ) 8 B S-factor.

  17. Comparing several boson mappings with the shell model

    International Nuclear Information System (INIS)

    Menezes, D.P.; Yoshinaga, Naotaka; Bonatsos, D.

    1990-01-01

    Boson mappings are an essential step in establishing a connection between the successful phenomenological interacting boson model and the shell model. The boson mapping developed by Bonatsos, Klein and Li is applied to a single j-shell and the resulting energy levels and E2 transitions are shown for a pairing plus quadrupole-quadrupole Hamiltonian. The results are compared to the exact shell model calculation, as well as to those obtained through use of the Otsuka-Arima-Iachello mapping and the Zirnbauer-Brink mapping. In all cases good results are obtained for the spherical and near-vibrational cases.

  18. The experimental and shell model approach to 100Sn

    International Nuclear Information System (INIS)

    Grawe, H.; Maier, K.H.; Fitzgerald, J.B.; Heese, J.; Spohr, K.; Schubart, R.; Gorska, M.; Rejmund, M.

    1995-01-01

    The present status of the experimental approach to 100 Sn and its shell model structure is given. New developments in experimental techniques, such as low background isomer spectroscopy and charged particle detection in 4π, are surveyed. Based on recent experimental data, shell model calculations are used to predict the structure of the single- and two-nucleon neighbours of 100 Sn. The results are compared to the systematics of Coulomb energies and spin-orbit splitting and discussed with respect to future experiments. (author). 51 refs, 11 figs, 1 tab

  19. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics]

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  20. Shell model for warm rotating nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, M.; Yoshida, K. [Kyoto Univ. (Japan)]; Dossing, T. [Univ. of Copenhagen (Denmark)]; and others

    1996-12-31

    Utilizing a shell model which combines the cranked Nilsson mean-field and the residual surface and volume delta two-body forces, the authors discuss the onset of rotational damping in normal- and super-deformed nuclei. Calculations for a typical normal-deformed nucleus {sup 168}Yb indicate that rotational damping sets in at around 0.8 MeV above the yrast line, and about 30 rotational bands of various lengths exist at a given rotational frequency, in overall agreement with experimental findings. It is predicted that the onset of rotational damping changes significantly in different superdeformed nuclei due to the variety of the shell gaps and single-particle orbits associated with the superdeformed mean-field.

  1. Large scale phononic metamaterials for seismic isolation

    International Nuclear Information System (INIS)

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-01-01

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined, made from construction materials such as concrete and steel. The well-known finite-difference time-domain method is used in our calculations to obtain the band structures of the proposed metamaterials.
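
    As a pointer to how such band gaps are located in the simplest (one-dimensional, normal-incidence) limit, the sketch below evaluates the classic Rytov/transfer-matrix dispersion relation for an alternating two-material stack. The material parameters and layer thicknesses are placeholder values, not those of the structures studied in the paper, and the gap positions simply scale inversely with the layer dimensions:

```python
import numpy as np

# 1-D two-component phononic crystal (e.g. alternating concrete/steel layers).
# Longitudinal-wave dispersion from the Rytov/transfer-matrix relation:
# cos(q*d) = cos(k1*d1)cos(k2*d2) - 0.5*(Z1/Z2 + Z2/Z1)*sin(k1*d1)*sin(k2*d2)
rho1, c1, d1 = 2400.0, 3500.0, 2.0     # concrete-like: density, speed, thickness (m)
rho2, c2, d2 = 7800.0, 5900.0, 1.0     # steel-like
Z1, Z2, d = rho1 * c1, rho2 * c2, d1 + d2

freqs = np.linspace(0.1, 2000.0, 20000)          # Hz
omega = 2 * np.pi * freqs
k1, k2 = omega / c1, omega / c2
cos_qd = (np.cos(k1 * d1) * np.cos(k2 * d2)
          - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(k1 * d1) * np.sin(k2 * d2))

# |cos(q d)| > 1 means no real Bloch wavevector: a band gap at that frequency
in_gap = np.abs(cos_qd) > 1.0
edges = np.flatnonzero(np.diff(in_gap.astype(int)))
print(freqs[edges][:6])                          # first few band-gap edge frequencies
```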

  2. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  3. Continuum shell-model with complicated configurations

    International Nuclear Information System (INIS)

    Barz, H.W.; Hoehn, J.

    1977-05-01

    The traditional shell model has been combined with the coupled channels method in order to describe resonance reactions. For that purpose the configuration space is divided into two subspaces (Feshbach projection method). Complicated shell-model configurations can be included into the subspace of discrete states which contains the single particle resonance states too. In the subspace of scattering states the equation of motion is solved by using the coupled channels method. Thereby the orthogonality between scattering states and discrete states is ensured. Resonance states are defined with outgoing waves in all channels. By means of simple model calculations the special role of the continuum is investigated. In this connection the energy dependence of the resonance parameters, the isospin mixture via the continuum, threshold effect, as well as the influence of the number of channels taken into account on the widths, positions and dipole strengths of the resonance are discussed. The model is mainly applied to the description of giant resonances excited by the scattering of nucleons and photo-nucleus processes (source term method) found in reactions on light nuclei. The giant resonance observed in the 15 N(p,n) reaction is explained by the inclusion of 2p-2h states. The same is true for the giant resonance in 13 C(J = 1/2, 3/2) as well as for the giant resonance built on the first 3 - state in 16 O. By means of a correlation analysis for the reduced widths amplitudes an access to the doorway conception is found. (author)

  4. On the shell-model-connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago [1] as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand the relation of the shell model and cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known [2] but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: Can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energy of the states of the cluster bands, defined by a specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model [3,4] in order to find an answer to this question, which seems to be affirmative. In particular, the energies obtained from such a Hamiltonian for several bands of the ( 12 C, 14 C, 16 O, 20 Ne, 40 Ca) + α systems turn out to be in good agreement with the experimental values. The present results show that the simple and transparent SU(3) connection between the spherical shell model and the cluster model is valid not only for the harmonic oscillator interactions, but for much more general (SU(3) dynamically symmetric) Hamiltonians as well, which result in realistic energy spectra. Via

  5. Projected shell model description of N = 114 superdeformed isotone nuclei

    International Nuclear Information System (INIS)

    Guo, R S; Chen, L M; Chou, C H

    2006-01-01

    A systematic description of the yrast superdeformed (SD) bands in N = 114, Z = 80-84 isotone nuclei using the projected shell model is presented. The calculated γ-ray energies, moments of inertia and M1 transitions are compared with the data for which spins have been assigned. Excellent agreement with the available data for all isotones is obtained. The calculated electromagnetic properties provide a microscopic understanding of those measured nuclei. Some predictions in superdeformed nuclei are also discussed.

  6. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  7. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT & T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  8. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  9. Many-body forces in nuclear shell-model

    International Nuclear Information System (INIS)

    Rath, P.K.

    1985-01-01

    In the microscopic derivation of the effective Hamiltonian for the nuclear shell model, many-body forces between the valence nucleons occur. These many-body forces can be divided into ''real'' many-body forces, which can be related to mesonic and internal degrees of freedom of the nucleons, and ''effective'' many-body forces, which arise from the restriction of the nucleonic Hilbert space to the finite-dimensional shell-model space. In the present thesis the influence of such three-body forces on the spectra of sd-shell nuclei is studied. For this purpose the two common techniques for shell-model calculations (Oak Ridge-Rochester and Glasgow representations) are extended in such a way that a general three-body term in the Hamiltonian can be taken into account. The studies show that the repulsive contributions of the considered three-nucleon forces become more important with increasing number of valence nucleons. This qualitatively explains the particle-number dependence of empirical two-nucleon forces. A special kind of effective many-body force occurs in the folded-diagram expansion of the energy-dependent effective Hamiltonian for the shell model. It is shown that the contributions of the folded diagrams with three nucleons are just as important as those with two nucleons. One might therefore suspect that the folded-diagram expansion contains many-particle terms of arbitrary particle number. The present studies show, however, that four-nucleon effects are negligible, so that the folded-diagram expansion can be restricted to two- and three-particle terms. In shell-model calculations which extend over several major shells the influence of the spurious centre-of-mass motion must be taken into account. A procedure is discussed by which these spurious degrees of freedom can be exactly separated. (orig.) [de

  10. Charge radii and electromagnetic moments of Li and Be isotopes from the ab initio no-core shell model

    International Nuclear Information System (INIS)

    Forssen, C.; Caurier, E.; Navratil, P.

    2009-01-01

    Recently, charge radii and ground-state electromagnetic moments of Li and Be isotopes were measured precisely. We have performed large-scale ab initio no-core shell model calculations for these isotopes using high-precision nucleon-nucleon potentials. The isotopic trends of our computed charge radii and quadrupole and magnetic-dipole moments are in good agreement with experimental results with the exception of the 11 Li charge radius. The magnetic moments are in particular well described, whereas the absolute magnitudes of the quadrupole moments are about 10% too small. The small magnitude of the 6 Li quadrupole moment is reproduced, and with the CD-Bonn NN potential, also its correct sign

  11. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  12. Symmetry-dictated truncation: Solutions of the spherical shell model for heavy nuclei

    International Nuclear Information System (INIS)

    Guidry, M.W.

    1992-01-01

    Principles of dynamical symmetry are used to simplify the spherical shell model. The resulting symmetry-dictated truncation leads to dynamical symmetry solutions that are often in quantitative agreement with a variety of observables. Numerical calculations, including terms that break the dynamical symmetries, are shown that correspond to shell model calculations for heavy deformed nuclei. The effective residual interaction is simple, well-behaved, and can be determined from basic observables. With this approach, we intend to apply the shell model in systematic fashion to all nuclei. The implications for nuclear structure far from stability and for nuclear masses and other quantities of interest in astrophysics are discussed

  13. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World famous architects such as Herzog De Meuron, Zaha Hadid, Richard Meyer and David Chippenfield challenge the exposure of concrete in their architecture. At home, this trend can be seen in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and Zaha Hadid’s Ordrupgård’s black curved smooth concrete surfaces. Furthermore, one can point to initiatives such as “Synlig beton” (visible concrete) that can be seen on the website www.synligbeton.dk and spæncom’s aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project “Lasting large scale glazed concrete formwork,” which I am working on at DTU, Department of Architectural Engineering, will be able to complement these. It is a project where I...

  14. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of 1000s of processors that will be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community

  15. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to be in a narrow range of two to three orders of magnitude. Test design did not allow fracture zones to be tested individually. This could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could either be because there is no hydraulic connection, or there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  16. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  17. The alpha-particle and shell models of the nucleus

    International Nuclear Information System (INIS)

    Perring, J.K.; Skyrme, T.H.R.

    1994-01-01

    It is shown that it is possible to write down α-particle wave functions for the ground states of 8 Be, 12 C and 16 O, which become, when antisymmetrized, identical with shell-model wave functions. The α-particle functions are used to obtain potentials which can then be used to derive wave functions and energies of excited states. Most of the low-lying states of 16 O are obtained in this way, qualitative agreement with experiment being found. The shell structure of the 0 + level at 6·06 MeV is analyzed, and is found to consist largely of single-particle excitations. The lifetime for pair-production is calculated, and found to be comparable with the experimental value. The validity of the method is discussed, and comparison made with shell-model calculations. (author). 5 refs, 1 tab

  18. Shell model Monte Carlo investigation of rare earth nuclei

    International Nuclear Information System (INIS)

    White, J. A.; Koonin, S. E.; Dean, D. J.

    2000-01-01

    We utilize the shell model Monte Carlo method to study the structure of rare earth nuclei. This work demonstrates the first systematic full oscillator shell with intruder calculations in such heavy nuclei. Exact solutions of a pairing plus quadrupole Hamiltonian are compared with the static path approximation in several dysprosium isotopes from A=152 to 162, including the odd mass A=153. Some comparisons are also made with Hartree-Fock-Bogoliubov results from Baranger and Kumar. Basic properties of these nuclei at various temperatures and spin are explored. These include energy, deformation, moments of inertia, pairing channel strengths, band crossing, and evolution of shell model occupation numbers. Exact level densities are also calculated and, in the case of 162 Dy, compared with experimental data. (c) 2000 The American Physical Society

  19. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
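
    To make the perturbative bias expansion concrete, its lowest orders can be sketched schematically (this follows the general form described in the abstract, not any specific equation of the review):

        \delta_{g}(\mathbf{x},\tau) = b_{1}\,\delta(\mathbf{x},\tau) + \tfrac{1}{2} b_{2}\,\delta^{2}(\mathbf{x},\tau) + b_{K^{2}}\,K_{ij}K^{ij}(\mathbf{x},\tau) + \dots + \epsilon(\mathbf{x},\tau),

    where delta is the matter overdensity, K_ij the tidal field, the b_n are the bias parameters that absorb the complicated physics of galaxy formation, and epsilon a stochastic contribution uncorrelated with the large-scale fields.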

  20. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as following. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  1. Cluster shell model: I. Structure of 9Be, 9B

    Science.gov (United States)

    Della Rocca, V.; Iachello, F.

    2018-05-01

    We calculate energy spectra, electromagnetic transition rates, longitudinal and transverse electron scattering form factors and log ft values for beta decay in 9Be, 9B, within the framework of a cluster shell model. By comparing with experimental data, we find strong evidence for the structure of these nuclei to be two α-particles in a dumbbell configuration with Z2 symmetry, plus an additional nucleon.

  2. Ground state energy fluctuations in the nuclear shell model

    International Nuclear Information System (INIS)

    Velazquez, Victor; Hirsch, Jorge G.; Frank, Alejandro; Barea, Jose; Zuker, Andres P.

    2005-01-01

    Statistical fluctuations of the nuclear ground state energies are estimated using shell model calculations in which particles in the valence shells interact through well-defined forces, and are coupled to an upper shell governed by random 2-body interactions. Induced ground-state energy fluctuations are found to be one order of magnitude smaller than those previously associated with chaotic components, in close agreement with independent perturbative estimates based on the spreading widths of excited states

  3. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    Science.gov (United States)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 1010 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for 6Li in model spaces up to Nmax=22 and to reveal the 4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
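
    The IR extrapolations mentioned here typically take a simple exponential form. A hedged sketch in commonly used notation (not necessarily the exact parametrization of this paper), for an effective infrared length L of the oscillator basis:

        E(L) \simeq E_{\infty} + a_{0}\, e^{-2 k_{\infty} L},
        \qquad
        \frac{\hbar^{2} k_{\infty}^{2}}{2\mu} \approx E_{\mathrm{thr}},

    where E_infinity is the extrapolated energy, and the second relation expresses the statement in the abstract that the momentum scale k_infinity governing the exponential convergence is set by the threshold energy E_thr (with reduced mass mu) of the first open decay channel.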

  4. Transition sum rules in the shell model

    Science.gov (United States)

    Lu, Yi; Johnson, Calvin W.

    2018-03-01

    Sum rules are an important characterization of electromagnetic and weak transitions in atomic nuclei. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E 1 ) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E 2 ) centroids in the s d shell.
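
    The two sum rules and the centroid discussed above can be written down explicitly. In standard notation (assumed here, since the abstract does not spell it out), for a Hermitian transition operator O acting on an initial state |i>:

        S_{0} = \sum_{f} |\langle f|\hat{O}|i\rangle|^{2} = \langle i|\hat{O}^{\dagger}\hat{O}|i\rangle \quad \text{(NEWSR)},
        \qquad
        S_{1} = \sum_{f} (E_{f}-E_{i})\,|\langle f|\hat{O}|i\rangle|^{2} = \tfrac{1}{2}\,\langle i|[\hat{O},[\hat{H},\hat{O}]]|i\rangle \quad \text{(EWSR)},

    and the centroid of the strength function is S_1/S_0. The double-commutator form of S_1 is what allows the centroid to be evaluated as an expectation value in the initial state alone, with no explicit sum over daughter states.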

  5. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement the photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building user’s electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart/Germany was used to calculate the potential of photovoltaic energy and to evaluate the local own consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter’s total electricity consumption and half of this generated electricity is directly used within the buildings.
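
    A minimal sketch of the kind of hourly comparison described above, written in Python. The file names, array contents and the self-consumption definition are assumptions for illustration and are not taken from the study:

        import numpy as np

        # Hypothetical hourly series for one year (8760 values each):
        #   pv_kw[i]   - simulated PV production in hour i (kW)
        #   load_kw[i] - measured building consumption in hour i (kW)
        pv_kw = np.loadtxt("pv_hourly_kw.txt")      # assumed input file
        load_kw = np.loadtxt("load_hourly_kw.txt")  # assumed input file

        # Energy used directly in the building each hour is limited by the
        # smaller of production and consumption in that hour.
        direct_use_kw = np.minimum(pv_kw, load_kw)

        coverage = pv_kw.sum() / load_kw.sum()                 # PV output vs. total demand
        self_consumption = direct_use_kw.sum() / pv_kw.sum()   # share of PV used on site

        print(f"PV covers {coverage:.1%} of annual consumption")
        print(f"{self_consumption:.1%} of the PV energy is used directly in the building")

    With the figures quoted in the abstract (35% coverage, half of the generated electricity used directly), the two printed ratios would come out near 35% and 50%.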

  6. Novel extrapolation method in the Monte Carlo shell model

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56 Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g 9/2 -shell calculation of 64 Ge.
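
    The energy-variance extrapolation works roughly as follows (a hedged sketch; the order of the fit and the coefficients are illustrative, not taken from the paper). For an approximate wave function |Psi>,

        \Delta E^{2} = \langle\Psi|\hat{H}^{2}|\Psi\rangle - \langle\Psi|\hat{H}|\Psi\rangle^{2},
        \qquad
        E(\Delta E^{2}) \approx E_{\mathrm{exact}} + c_{1}\,\Delta E^{2} + c_{2}\,(\Delta E^{2})^{2},

    so computing the energy and its variance for a sequence of increasingly refined Monte Carlo shell-model wave functions and extrapolating to Delta E^2 -> 0 yields an estimate of the exact eigenvalue, since the variance vanishes identically for an exact eigenstate.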

  7. Structure of 12B from measurement and R-matrix analysis of sigma(theta) for 11B(n,n)11B and 11B(n,n')11Bsup(*)(2.12 MeV), and shell-model calculations

    International Nuclear Information System (INIS)

    Koehler, P.E.; Knox, H.D.; Resler, D.A.; Lane, R.O.

    1983-01-01

    Differential cross sections for neutrons elastically scattered from 11 B and inelastically scattered to the first excited state 11 B*(2.12 MeV) have been measured at 13 incident energies from 4.8 MeV upward, corresponding to excitation energies in 12 B of 7.8 to 10.3 MeV. The cross sections were measured at nine laboratory angles per energy, from 20° to 160°, and show considerable resonance structure. Differential inelastic cross sections were also measured for the 4.45 and 5.02 MeV levels of 11 B at 2 to 9 angles at several incident energies. These new elastic and inelastic 2.12 MeV level data have been analyzed, together with previously published cross sections at lower energies, in an R-matrix analysis of 12 B. The shell model was used to calculate states in 12 B as well as spectroscopic amplitudes for reactions leading to these states. The results of this model calculation are compared to those of the R-matrix analysis. Much of the structure observed in the experimental work is predicted by the model for E x < or approx. 7 MeV. For levels of higher excitation the agreement is not as good. The experimental data are also compared to continuum shell-model calculations. (orig.)

  8. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  9. Intrinsic Density Matrices of the Nuclear Shell Model

    International Nuclear Information System (INIS)

    Deveikis, A.; Kamuntavichius, G.

    1996-01-01

    A new method for calculation of shell model intrinsic density matrices, defined as two-particle density matrices integrated over the centre-of-mass position vector of the last two particles and complemented with isospin variables, has been developed. The intrinsic density matrices obtained are completely antisymmetric, translation-invariant, and do not employ a group-theoretical classification of antisymmetric states. They are used for exact realistic density matrix expansion within the framework of the reduced Hamiltonian method. Procedures for calculating the intrinsic density matrices, based on exact arithmetic and involving no numerical diagonalization or orthogonalization, have been developed and implemented in the computer code. (author). 11 refs., 2 tabs

  10. Moments Method for Shell-Model Level Density

    International Nuclear Information System (INIS)

    Zelevinsky, V; Horoi, M; Sen'kov, R A

    2016-01-01

    The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density practically exactly coincides with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to the results different from those obtained by the mean-field combinatorics. (paper)
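
    In its simplest form, the moments method builds the level density in a fixed spin-parity subspace from the low moments of the Hamiltonian. A hedged illustration showing only the leading Gaussian term (the actual implementation includes finite-range corrections and sums over configurations):

        E_{c} = \frac{\mathrm{Tr}_{J\pi}\hat{H}}{D_{J\pi}},
        \qquad
        \sigma^{2} = \frac{\mathrm{Tr}_{J\pi}\hat{H}^{2}}{D_{J\pi}} - E_{c}^{2},
        \qquad
        \rho_{J\pi}(E) \approx \frac{D_{J\pi}}{\sqrt{2\pi}\,\sigma}\,\exp\!\left[-\frac{(E-E_{c})^{2}}{2\sigma^{2}}\right],

    where D_{J pi} is the dimension of the shell-model space with the given spin and parity. The traces can be evaluated without diagonalization, which is what makes the method practical far beyond the reach of exact diagonalization.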

  11. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  12. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  13. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  14. Deriving the nuclear shell model from first principles

    Science.gov (United States)

    Barrett, Bruce R.; Dikmen, Erdal; Vary, James P.; Maris, Pieter; Shirokov, Andrey M.; Lisetskiy, Alexander F.

    2014-09-01

    The results of an 18-nucleon No Core Shell Model calculation, performed in a large basis space using a bare, soft NN interaction, can be projected into the 0 ℏω space, i.e., the sd -shell. Because the 16 nucleons in the 16O core are frozen in the 0 ℏω space, all the correlations of the 18-nucleon system are captured by the two valence, sd -shell nucleons. By the projection, we obtain microscopically the sd -shell 2-body effective interactions, the core energy and the sd -shell s.p. energies. Thus, the input for standard shell-model calculations can be determined microscopically by this approach. If the same procedure is then applied to 19-nucleon systems, the sd -shell 3-body effective interactions can also be obtained, indicating the importance of these 3-body effective interactions relative to the 2-body effective interactions. Applications to A = 19 and heavier nuclei with different intrinsic NN interactions will be presented and discussed. Supported by the US NSF under Grant No. 0854912, the US DOE under
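
    The end product of the projection described above is an ordinary valence-space shell-model Hamiltonian. A hedged sketch of its generic second-quantized form (standard notation, assumed rather than quoted from the paper):

        \hat{H}_{\mathrm{eff}} = E_{\mathrm{core}}
          + \sum_{a} \varepsilon_{a}\,\hat{n}_{a}
          + \frac{1}{4}\sum_{abcd} \bar{V}_{abcd}\, a^{\dagger}_{a} a^{\dagger}_{b} a_{d} a_{c}
          \;(+\;\text{three-body terms for } A \geq 19),

    with the core energy E_core, the single-particle energies epsilon_a and the antisymmetrized two-body matrix elements V-bar_abcd all fixed microscopically by projecting the large-space No Core Shell Model solutions onto the sd shell.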

  15. Shell model test of the Porter-Thomas distribution

    International Nuclear Information System (INIS)

    Grimes, S.M.; Bloom, S.D.

    1981-01-01

    Eigenvectors have been calculated for the A=18, 19, 20, 21, and 26 nuclei in an sd shell basis. The decomposition of these states into their shell model components shows, in agreement with other recent work, that this distribution is not a single Gaussian. We find that the largest amplitudes are distributed approximately in a Gaussian fashion. Thus, many experimental measurements should be consistent with the Porter-Thomas predictions. We argue that the non-Gaussian form of the complete distribution can be simply related to the structure of the Hamiltonian
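
    For reference, the Porter-Thomas distribution against which the shell-model amplitudes are tested is the chi-squared distribution with one degree of freedom. In the usual unit-mean normalization (assumed here):

        P(y) = \frac{1}{\sqrt{2\pi y}}\, e^{-y/2},
        \qquad
        y = \frac{x}{\langle x\rangle},

    where x is a squared amplitude (or reduced width) and <x> its local average. A single Gaussian distribution of the amplitudes themselves leads directly to this form, which is why the departure of the full amplitude distribution from a single Gaussian is the key point of the comparison.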

  16. Nuclear deformation in the configuration-interaction shell model

    Science.gov (United States)

    Alhassid, Y.; Bertsch, G. F.; Gilbreth, C. N.; Mustonen, M. T.

    2018-02-01

    We review a method that we recently introduced to calculate the finite-temperature distribution of the axial quadrupole operator in the laboratory frame using the auxiliary-field Monte Carlo technique in the framework of the configuration-interaction shell model. We also discuss recent work to determine the probability distribution of the quadrupole shape tensor as a function of intrinsic deformation β,γ by expanding its logarithm in quadrupole invariants. We demonstrate our method for an isotope chain of samarium nuclei whose ground states describe a crossover from spherical to deformed shapes.

  17. Shell-model Monte Carlo studies of nuclei

    International Nuclear Information System (INIS)

    Dean, D.J.

    1997-01-01

    The pair content and structure of nuclei near N = Z are described in the framework of shell-model Monte Carlo (SMMC) calculations. Results include the enhancement of J=0, T=1 proton-neutron pairing in N=Z nuclei, and the marked difference in thermal properties between even-even and odd-odd N=Z nuclei. Additionally, a study of the rotational properties of the T=1 (ground state) and T=0 band mixing seen in 74 Rb is presented

  18. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  19. Pair shell model description of collective motions

    International Nuclear Information System (INIS)

    Chen Hsitseng; Feng Dahsuan

    1996-01-01

    The shell model in the pair basis has been reviewed with a case study of four particles in a spherical single-j shell. By analyzing the wave functions according to their pair components, the novel concept of the optimum pairs was developed which led to the proposal of a generalized pair mean-field method to solve the many-body problem. The salient feature of the method is its ability to handle within the framework of the spherical shell model a rotational system where the usual strong configuration mixing complexity is so simplified that it is now possible to obtain analytically the band head energies and the moments of inertia. We have also examined the effects of pair truncation on rotation and found the slow convergence of adding higher spin pairs. Finally, we found that when the SDI and Q·Q interactions are of equal strengths, the optimum pair approximation is still valid. (orig.)

  20. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity type analysis into existing code and, equally important, the work was focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
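
    The direct and adjoint sensitivity formulas referred to above can be summarized for a steady-state residual equation. A hedged sketch in generic notation (not the report's own):

        R(u,p) = 0,
        \qquad
        \frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T}\frac{\partial R}{\partial p},
        \qquad
        \left(\frac{\partial R}{\partial u}\right)^{T}\lambda = \left(\frac{\partial J}{\partial u}\right)^{T},

    where u is the state, p the parameters and J the objective. The adjoint variable lambda is computed once per objective, so the cost is essentially independent of the number of parameters; the direct method instead solves one linearized system per parameter, which is preferable when there are few parameters and many objectives.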

  1. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend of large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas and some of these techniques are now applied to analyses of processing plasmas. (author)

  2. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  3. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  4. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era

  5. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  6. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done so far by SINTEF Energy Research shows that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  7. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and the rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods with regard to profitability, nor can it hope for full recognition by the basic-research-oriented scientific community. Policy consulting neither holds the political system's competence to make decisions, nor can it judge successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three respects, the thesis that this is a new form of institutionalization of science: 1) external control, 2) the organizational form, 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  8. Angular momentum dependence of the distribution of shell model eigenenergies

    International Nuclear Information System (INIS)

    Yen, M.K.

    1974-01-01

    In the conventional shell model calculation the many-particle energy matrices are constructed and diagonalized for definite angular momentum and parity. However, the resulting set of eigenvalues possesses near-normal behavior, and hence a simple statistical description is possible. Usually one needs only about four parameters to capture the average level densities if the size of the set is not too small. The parameters are essentially moments of the distribution. But the difficulty lies in the yet unsolved problem of calculating moments in the fixed angular momentum subspace. We have derived a formula to approximate the angular momentum projection dependence of any operator averaged in a shell model basis. This approximate formula, which is a truncated series in Hermite polynomials, has proved very good numerically and has been justified analytically for large systems. Applying this formula to seven physical cases we have found that the fixed angular momentum projection energy centroid, width and higher central moments can be obtained accurately, provided that for even-even nuclei the even and odd angular momentum projections are treated separately. Using this information one can construct the energy distribution for fixed angular momentum projection assuming normal behavior. Then the fixed angular momentum level densities are deduced and spectra are extracted. Results are in reasonably good agreement with the exact values although not as good as those obtained using exact fixed angular momentum moments. (Diss. Abstr. Int., B)
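
    The step from fixed-M (angular momentum projection) quantities to fixed-J level densities, which the work above relies on, is the standard difference relation combined with a near-Gaussian fixed-M density. A hedged outline (normalization conventions assumed):

        \rho_{J}(E) = \rho_{M=J}(E) - \rho_{M=J+1}(E),
        \qquad
        \rho_{M}(E) \approx \frac{D_{M}}{\sqrt{2\pi}\,\sigma_{M}}\exp\!\left[-\frac{(E-E_{M})^{2}}{2\sigma_{M}^{2}}\right]\times\bigl[1 + \text{Hermite-polynomial corrections}\bigr],

    where D_M, E_M and sigma_M are the dimension, centroid and width of the fixed-M subspace; the truncated Hermite series mentioned in the abstract supplies the M dependence of these moments.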

  9. Coexistence of spherical states with deformed and superdeformed bands in doubly magic 40Ca; A shell model challenge

    International Nuclear Information System (INIS)

    Caurier, E.; Nowacki, F.; Menendez, J.; Poves, A.

    2007-02-01

    Large scale shell model calculations, with dimensions reaching 10 9 , are carried out to describe the recently observed deformed (ND) and superdeformed (SD) bands based on the first and second excited 0 + states of 40 Ca at 3.35 MeV and 5.21 MeV respectively. A valence space comprising two major oscillator shells, sd and pf, can accommodate most of the relevant degrees of freedom of this problem. The ND band is dominated by configurations with four particles promoted to the pf-shell (4p-4h in short). The SD band by 8p-8h configurations. The ground state of 40 Ca is strongly correlated, but the closed shell still amounts to 65%. The energies of the bands are very well reproduced by the calculations. The out-band transitions connecting the SD band with other states are very small and depend on the details of the mixing among the different np-nh configurations; in spite of that, the calculation describes them reasonably. For the in-band transition probabilities along the SD band, we predict a fairly constant transition quadrupole moment Q 0 (t) ∼ 170 e fm 2 up to J=10, that decreases toward the higher spins. We submit also that the J=8 states of the deformed and superdeformed bands are maximally mixed. (authors)

  10. Coexistence of spherical states with deformed and superdeformed bands in doubly magic 40Ca: A shell-model challenge

    International Nuclear Information System (INIS)

    Caurier, E.; Nowacki, F.; Menendez, J.; Poves, A.

    2007-01-01

    Large-scale shell-model calculations, with dimensions reaching 10 9 , are carried out to describe the recently observed deformed (ND) and superdeformed (SD) bands based on the first and second excited 0 + states of 40 Ca at 3.35 and 5.21 MeV, respectively. A valence space comprising two major oscillator shells, sd and pf, can accommodate most of the relevant degrees of freedom of this problem. The ND band is dominated by configurations with four particles promoted to the pf shell (4p-4h in short). The SD band by 8p-8h configurations. The ground state of 40 Ca is strongly correlated, but the closed shell still amounts to 65%. The energies of the bands are very well reproduced by the calculations. The out-band transitions connecting the SD band with other states are very small and depend on the details of the mixing among the different np-nh configurations; in spite of that, the calculation describes them reasonably. For the in-band transition probabilities along the SD band, we predict a fairly constant transition quadrupole moment Q 0 (t)∼170 e fm 2 up to J=10 that decreases toward the higher spins. We submit also that the J=8 states of the deformed and superdeformed bands are maximally mixed
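
    For orientation, the in-band transition quadrupole moment quoted above is related to the calculated B(E2) values through the standard rotational-model expression (recalled here in the usual convention for a K=0 band; it is not quoted from the paper):

        B(E2;\, J \rightarrow J-2) = \frac{5}{16\pi}\, e^{2} Q_{0}^{2}\, \bigl|\langle J\,0\,2\,0\,|\,J\!-\!2\;0\rangle\bigr|^{2},

    so a roughly constant Q_0(t) of about 170 e fm 2 along the SD band corresponds to in-band B(E2) values characteristic of a well-developed, nearly rigid intrinsic deformation, while the decrease toward the highest spins signals a loss of collectivity there.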

  11. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal fields induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  12. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its......The Subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems...... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological...

  13. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  14. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  15. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  16. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32 (DTIC): The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987; approved for public release. Only fragments of the report text are recoverable, including a remark on reducing the spread in the LSGT 50% gap value and on the worst charges, such as those with the highest or lowest densities and the largest re-pressed...

  17. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be

  18. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöhe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  19. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    ...which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  20. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  1. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  2. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  3. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  4. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  5. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  6. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  7. Importance-truncated shell model for multi-shell valence spaces

    Energy Technology Data Exchange (ETDEWEB)

    Stumpf, Christina; Vobig, Klaus; Roth, Robert [Institut fuer Kernphysik, TU Darmstadt (Germany)

    2016-07-01

    The valence-space shell model is one of the work horses in nuclear structure theory. In traditional applications, shell-model calculations are carried out using effective interactions constructed in a phenomenological framework for rather small valence spaces, typically spanned by one major shell. We improve on this traditional approach addressing two main aspects. First, we use new effective interactions derived in an ab initio approach and, thus, establish a connection to the underlying nuclear interaction providing access to single- and multi-shell valence spaces. Second, we extend the shell model to larger valence spaces by applying an importance-truncation scheme based on a perturbative importance measure. In this way, we reduce the model space to the relevant basis states for the description of a few target eigenstates and solve the eigenvalue problem in this physics-driven truncated model space. In particular multi-shell valence spaces are not tractable otherwise. We combine the importance-truncated shell model with refined extrapolation schemes to approximately recover the exact result. We present first results obtained in the importance-truncated shell model with the newly derived ab initio effective interactions for multi-shell valence spaces, e.g., the sdpf shell.
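
    The perturbative importance measure described above can be illustrated with a small toy model. The sketch below (plain Python/numpy; the random Hamiltonian, reference space and threshold are invented stand-ins, not the ab initio effective interactions of the abstract) keeps only basis states whose first-order coupling to a reference state exceeds a threshold and compares the truncated ground-state energy with full diagonalization.

```python
# Minimal sketch of an importance-truncation step.  The interaction, basis and
# importance threshold are toy stand-ins chosen for illustration only.
import numpy as np

rng = np.random.default_rng(1)
dim = 400                                    # full model-space dimension
H = rng.normal(scale=0.05, size=(dim, dim))
H = 0.5 * (H + H.T)                          # symmetric residual coupling
H[np.diag_indices(dim)] = np.sort(rng.uniform(0.0, 10.0, dim))  # unperturbed energies

ref = np.arange(10)                          # reference space: 10 lowest basis states
e_ref, c_ref = np.linalg.eigh(H[np.ix_(ref, ref)])
psi_ref = np.zeros(dim)
psi_ref[ref] = c_ref[:, 0]                   # reference ground state

# first-order perturbative importance measure for states outside the reference space
outside = np.setdiff1d(np.arange(dim), ref)
kappa = np.zeros(dim)
for nu in outside:
    kappa[nu] = abs(H[nu] @ psi_ref) / (H[nu, nu] - e_ref[0])

keep = np.union1d(ref, outside[np.abs(kappa[outside]) > 1e-3])   # importance threshold
E_trunc = np.linalg.eigvalsh(H[np.ix_(keep, keep)])[0]
E_full = np.linalg.eigvalsh(H)[0]
print(f"kept {keep.size}/{dim} states, E_trunc = {E_trunc:.4f}, E_full = {E_full:.4f}")
```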

  8. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered on a popular level. Described are the cell structure of galaxy distribution in the Universe and the principles of mathematical galaxy distribution modelling. The images of cell structures, obtained after reprocessing with the computer, are given. Discussed are three hypotheses - vortical, entropic, adiabatic - suggesting various processes of galaxy and galaxy cluster origin. A considerable advantage of the adiabatic hypothesis is recognized. The relict radiation, as a method of directly studying the processes taking place in the Universe, is considered. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the perturbation properties at the pre-galaxy stage. The discussion of problems pertaining to studying the hot gas contained in galaxy clusters, the interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology

  9. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  10. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  11. Mean field theory of nuclei and shell model. Present status and future outlook

    International Nuclear Information System (INIS)

    Nakada, Hitoshi

    2003-01-01

    ...functions and halos, magic numbers of unstable nuclei, mean field shell model and cluster structure, and problems concerning effective interactions. An overview of numerical calculation is presented at the end with two subtitles: Hartree-Fock calculation, and shell model calculation. (S. Funahashi)

  12. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  13. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION, by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ...a typical iteration can be partitioned so that ... where B is an m x m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  14. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmologic principles and possible large scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities, is a full conformal symmetry in the 3-space defined by the cosmological velocity field, but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  15. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylemethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10μ) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  16. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  17. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  18. LAVA: Large scale Automated Vulnerability Addition

    Science.gov (United States)

    2016-05-23

    LAVA: Large-scale Automated Vulnerability Addition. Brendan Dolan-Gavitt, Patrick Hulin, Tim Leek, Fredrich Ulrich, Ryan Whelan. ...released, and thus rapidly become stale. We can expect tools to have been trained to detect bugs that have been released. Given the commercial price tag... (low TCN) and dead (low liveness) program data is a powerful one for vulnerability injection. The DUAs it identifies are internal program quantities

  19. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  20. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs
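
    The third approach mentioned above, spectral averaging, rests on the observation that in large shell-model spaces the eigenvalue density tends toward a Gaussian fixed by the lowest moments of the Hamiltonian, which are traces and require no diagonalization. The sketch below is a toy illustration only: random single-particle energies plus a weak random residual interaction stand in for a realistic shell-model force.

```python
# Toy illustration of spectral averaging: a Gaussian level density defined by
# the first two moments (traces) of H, compared with the exact spectrum.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_orbits, n_particles = 12, 6
eps = rng.uniform(0.0, 5.0, n_orbits)                 # single-particle energies
states = list(combinations(range(n_orbits), n_particles))
d = len(states)                                       # C(12,6) = 924 many-body states

H = np.diag([eps[list(s)].sum() for s in states])
V = rng.normal(scale=0.05, size=(d, d))
H += 0.5 * (V + V.T)                                  # weak residual interaction

centroid = np.trace(H) / d
width = np.sqrt(np.trace(H @ H) / d - centroid ** 2)

def smoothed_density(E):
    """Gaussian level density fixed by the two lowest moments of H."""
    return d / (np.sqrt(2 * np.pi) * width) * np.exp(-0.5 * ((E - centroid) / width) ** 2)

hist, edges = np.histogram(np.linalg.eigvalsh(H), bins=30)
centers, bw = 0.5 * (edges[:-1] + edges[1:]), edges[1] - edges[0]
for c, h in zip(centers[::6], hist[::6]):
    print(f"E = {c:5.2f}   exact: {h:4d}   moment estimate: {smoothed_density(c) * bw:6.1f}")
```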

  1. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  2. Neutrino nucleosynthesis in supernovae: Shell model predictions

    International Nuclear Information System (INIS)

    Haxton, W.C.

    1989-01-01

    Almost all of the 3 × 10^53 ergs liberated in a core collapse supernova is radiated as neutrinos by the cooling neutron star. I will argue that these neutrinos interact with nuclei in the ejected shells of the supernovae to produce new elements. It appears that this nucleosynthesis mechanism is responsible for the galactic abundances of 7Li, 11B, 19F, 138La, and 180Ta, and contributes significantly to the abundances of about 15 other light nuclei. I discuss shell model predictions for the charged and neutral current allowed and first-forbidden responses of the parent nuclei, as well as the spallation processes that produce the new elements. 18 refs., 1 fig., 1 tab

  3. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
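
    The linearization mentioned above rests on the Gaussian (Hubbard-Stratonovich) identity exp(½ Δβ λ O²) = sqrt(Δβ λ / 2π) ∫ dσ exp(-½ Δβ λ σ²) exp(Δβ λ σ O) for λ > 0, which turns the square of a one-body operator into a one-body operator coupled to a fluctuating auxiliary field. The snippet below checks the identity numerically for a small random symmetric matrix standing in for O; the coupling strength and time step are arbitrary illustrative values, not those of a realistic shell-model interaction.

```python
# Numerical check of the Hubbard-Stratonovich identity used to linearize the
# two-body part of H in the imaginary-time propagator:
#   exp(0.5*c*O^2) = sqrt(c/2pi) * Integral dsigma exp(-0.5*c*sigma^2) exp(c*sigma*O),
# with c = dbeta*lambda > 0.  O is a toy symmetric matrix.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
O = rng.normal(size=(4, 4))
O = 0.5 * (O + O.T)
lam, dbeta = 0.7, 0.05
c = dbeta * lam

exact = expm(0.5 * c * O @ O)

# integrate over the auxiliary field sigma with a simple trapezoid rule
sigma = np.linspace(-40.0, 40.0, 4001)
weights = np.sqrt(c / (2.0 * np.pi)) * np.exp(-0.5 * c * sigma ** 2)
integral = np.zeros_like(exact)
for s, w in zip(sigma, weights):
    integral += w * expm(c * s * O)
integral *= sigma[1] - sigma[0]

print("max deviation between exact and auxiliary-field form:",
      np.abs(exact - integral).max())
```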

  4. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  5. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless function of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors which determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of fault rate in LSI and VLSI circuits.

  6. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    Full Text Available One of the best ways for agriculture to become independent from shortages of precipitation is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinklers with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler reached 95 ha. In 1989 there were 98 sprinklers, and the area equipped with them was more than 10 130 ha. The study was conducted in 1986-1998 on 7 large sprinklers with areas ranging from 230 to 520 hectares. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, large-scale sprinklers have undergone significant or total devastation. Land on the State Farms of the State Agricultural Property Agency was leased or sold, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure, demand structure and an increase in operating costs. There has also been a threefold increase in electricity prices. Operation of large-scale irrigation encountered all kinds of barriers in practice and limitations of system solutions, supply difficulties, and high levels of equipment failure, which does not encourage rational use of the available sprinklers. A site inspection of the area showed the current status of the remaining irrigation infrastructure. The adopted scheme for the restructuring of Polish agriculture was not the best solution, causing massive destruction of assets previously invested in the sprinkler system.

  7. Proceedings of a symposium on the occasion of the 40th anniversary of the nuclear shell model

    International Nuclear Information System (INIS)

    Lee, T.S.H.; Wiringa, R.B.

    1990-03-01

    This report contains papers on the following topics: excitation of 1p-1h stretched states with the (p,n) reaction as a test of shell-model calculations; on the Z=64 shell closure and some high spin states of 149Gd and 159Ho; saturating interactions in 4He with density dependence; are short-range correlations visible in very large-basis shell-model calculations?; recent and future applications of the shell model in the continuum; shell model truncation schemes for rotational nuclei; the particle-hole interaction and high-spin states near A=16; magnetic moment of the doubly closed shell plus one nucleon nucleus 41Sc(I^pi = 7/2^-); the new magic nucleus 96Zr; comparing several boson mappings with the shell model; high spin band structures in 165Lu; optical potential with two-nucleon correlations; generalized valley approximation applied to a schematic model of the monopole excitation; pair approximation in the nuclear shell model; and many-particle, many-hole deformed states

  8. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  9. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)

  10. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos

  11. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'Scaled' Quantum Mechanical like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N. Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4))

  12. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  13. Concepts for Large Scale Hydrogen Production

    OpenAIRE

    Jakobsen, Daniel; Åtland, Vegar

    2016-01-01

    The objective of this thesis is to perform a techno-economic analysis of large-scale, carbon-lean hydrogen production in Norway, in order to evaluate various production methods and estimate a breakeven price level. Norway possesses vast energy resources and the export of oil and gas is vital to the country's economy. The results of this thesis indicate that hydrogen represents a viable, carbon-lean opportunity to utilize these resources, which can prove key in the future of Norwegian energy e...

  14. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some... -curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
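
    As a hedged illustration of how a regularized solution can be selected in this setting, the sketch below solves a small Tikhonov-regularized test problem over a range of regularization parameters and picks the corner of the residual-norm versus solution-norm curve in log-log coordinates. The test problem (a smoothing kernel) and the crude corner-picking rule are illustrative choices only, not the algorithm developed in the project.

```python
# Minimal Tikhonov/L-curve sketch for a discrete ill-posed problem.  The test
# problem and the corner-picking rule are invented for illustration only.
import numpy as np

n = 100
t = np.linspace(0, 1, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2) / n   # smoothing (ill-conditioned) kernel
x_true = np.sin(2 * np.pi * t) + 0.5 * t
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.normal(size=n)               # noisy data

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b
lams = np.logspace(-8, 0, 60)
res_norm, sol_norm = [], []
for lam in lams:
    f = s ** 2 / (s ** 2 + lam ** 2)                     # Tikhonov filter factors
    x = Vt.T @ (f * beta / s)
    res_norm.append(np.linalg.norm(A @ x - b))
    sol_norm.append(np.linalg.norm(x))

# crude L-curve corner: point of maximum curvature in log-log coordinates
lr, ls = np.log(res_norm), np.log(sol_norm)
curv = np.abs(np.gradient(lr) * np.gradient(np.gradient(ls))
              - np.gradient(ls) * np.gradient(np.gradient(lr))) \
       / (np.gradient(lr) ** 2 + np.gradient(ls) ** 2) ** 1.5
best = lams[np.argmax(curv[2:-2]) + 2]                   # ignore the endpoints
print(f"L-curve corner at lambda ~ {best:.2e}")
```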

  15. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). These observational evidences have led us to a great deal of consensus on the so-called LambdaCDM cosmological model and tight constraints on the cosmological parameters constituting the model. On the other hand, the advancement in cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We measured a huge dipolar modulation in the CMB, which mainly originated from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a number of well-identified objects. In this thesis, we explore measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE matched with 2MASS sources. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for detection of the kinematic dipole in future surveys.

  16. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities are challenged with little experience on how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, those undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment. These include e.g. institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and keep the brightest heads for your project.

  17. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  18. Large-scale Intelligent Transporation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  19. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed by using GPU based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneity, including long-range elastic, magnetostatic, and electrostatic interactions. Through using a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated in the GPU package. The Allen-Cahn equation, Cahn-Hilliard equation, and a phase-field model with long-range interaction were solved based on the algorithm running on GPU, respectively, to test the performance of the package. From the comparison of the calculation results between the solver executed on a single CPU and the one on GPU, it was found that the speed on GPU is elevated enormously, up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
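
    For orientation, the semi-implicit Fourier-spectral update used by this class of solvers can be written down in a few lines on the CPU. The sketch below integrates the Allen-Cahn equation with plain numpy FFTs; the grid size, mobility and gradient coefficient are arbitrary illustrative values, and no GPU acceleration (the subject of the abstract) is attempted.

```python
# CPU sketch of the semi-implicit Fourier-spectral update for the Allen-Cahn
# equation  d(phi)/dt = -M (phi^3 - phi - kappa * Laplacian(phi)).
# The bulk term is treated explicitly and the Laplacian implicitly in Fourier
# space; all parameter values are illustrative placeholders.
import numpy as np

N, dx, dt, M, kappa = 128, 1.0, 0.1, 1.0, 1.0
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
k2 = k[:, None] ** 2 + k[None, :] ** 2            # |k|^2 on the 2-D grid

rng = np.random.default_rng(0)
phi = 0.05 * rng.normal(size=(N, N))              # small random initial fluctuation

for step in range(2000):
    dfdphi = phi ** 3 - phi                       # bulk driving force (explicit)
    phi_hat = (np.fft.fft2(phi) - dt * M * np.fft.fft2(dfdphi)) \
              / (1.0 + dt * M * kappa * k2)       # Laplacian treated implicitly
    phi = np.real(np.fft.ifft2(phi_hat))

print("order parameter range after coarsening:", phi.min(), phi.max())
```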

  20. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also brings some questions about the reliability and precision of such calculations. These problems become more pronounced in cases of elastic-plastic conditions of loading and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradient changes through material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with planning of such experiments with respect to their limitations and the requirements for a good transfer of the results obtained to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is also shown, mostly in connection with the application of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing in SKODA is used as an example to support this analysis. 1 fig

  1. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
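
    The Boolean viewshed mentioned above reduces to a line-of-sight test on the raster surface model: a cell is visible when no intermediate cell subtends a larger vertical angle than the target itself. The sketch below implements that test on a small synthetic terrain; the terrain, observer height and nearest-neighbour sampling of the sight line are invented for illustration and do not reproduce the authors' GIS workflow.

```python
# Minimal Boolean viewshed on a raster surface model: a target cell is visible
# when no cell along the sight line subtends a larger vertical angle than the
# target.  The synthetic terrain and parameters are illustrative stand-ins for
# the UAV-derived large scale surface model.
import numpy as np

def viewshed(dem, obs_rc, obs_height=1.7, cell=1.0):
    rows, cols = dem.shape
    r0, c0 = obs_rc
    z0 = dem[r0, c0] + obs_height
    visible = np.zeros_like(dem, dtype=bool)
    visible[r0, c0] = True
    for r in range(rows):
        for c in range(cols):
            if (r, c) == (r0, c0):
                continue
            dist = np.hypot(r - r0, c - c0) * cell
            n = int(np.ceil(dist / cell)) + 1
            rr = np.linspace(r0, r, n)[1:-1]
            cc = np.linspace(c0, c, n)[1:-1]
            if rr.size:
                # elevation angles of intermediate terrain (nearest-neighbour sampling)
                dz = dem[np.round(rr).astype(int), np.round(cc).astype(int)] - z0
                dd = np.hypot(rr - r0, cc - c0) * cell
                horizon = np.max(dz / dd)
            else:
                horizon = -np.inf                 # adjacent cells are always visible
            target_angle = (dem[r, c] - z0) / dist
            visible[r, c] = target_angle >= horizon
    return visible

# toy terrain: a ridge between the observer and part of the scene
x, y = np.meshgrid(np.arange(60), np.arange(60))
dem = 5.0 * np.exp(-((x - 35) ** 2) / 20.0)        # ridge running along column 35
vis = viewshed(dem, obs_rc=(30, 10))
print(f"{vis.mean():.0%} of cells visible from the observer")
```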

  2. Reexamination of shell model tests of the Porter-Thomas distribution

    International Nuclear Information System (INIS)

    Grimes, S.M.

    1983-01-01

    Recent shell model calculations have yielded width amplitude distributions which have apparently not agreed with the Porter-Thomas distribution. This result conflicts with the present experimental evidence. A reanalysis of these calculations suggests that, although correct, they do not imply that the Porter-Thomas distribution will fail to describe the width distributions observed experimentally. The conditions for validity of the Porter-Thomas distribution are discussed
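
    For reference, the Porter-Thomas distribution follows from assuming Gaussian-distributed reduced width amplitudes, so that the normalized widths y = Gamma/<Gamma> obey a chi-squared distribution with one degree of freedom, P(y) = exp(-y/2)/sqrt(2*pi*y). The quick numerical check below uses plain Gaussian random numbers as stand-ins for shell-model or experimental width amplitudes; it only illustrates the distribution itself, not the shell-model analysis of the abstract.

```python
# Numerical illustration of the Porter-Thomas distribution: squares of
# Gaussian "reduced width amplitudes", normalized to the mean width, follow a
# chi-squared distribution with one degree of freedom.
import numpy as np

rng = np.random.default_rng(0)
amps = rng.normal(size=200_000)          # stand-in reduced width amplitudes
widths = amps ** 2
y = widths / widths.mean()               # normalize to the average width

hist, edges = np.histogram(y, bins=np.linspace(0.05, 5.0, 50), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
porter_thomas = np.exp(-centers / 2.0) / np.sqrt(2.0 * np.pi * centers)
print("max relative deviation from Porter-Thomas:",
      np.max(np.abs(hist - porter_thomas) / porter_thomas))
```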

  3. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large scale epidemiological study has been launched by the Fukushima medical university. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level and with the support of public funds, medical care and follow-up, as well as systematic controls, are foreseen, notably to check the thyroid of 360,000 young people under 18 years old and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low measures, and because they know that some parts of the area are at least as contaminated as was the case around Chernobyl, some people are reluctant to go back home

  4. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  5. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has a great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  6. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  7. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive carriage, and an automatic data acquisition system. The mechanical design of the device covers the optical axis design, the drive, the fixture and the wheels. The control system comprises hardware and software; the hardware is based on a single-chip microcontroller, and the software handles the photoelectric autocollimator readout and the automatic data acquisition process. The device can acquire perpendicularity measurement data automatically. The reliability of the device is verified by experimental comparison. The results meet the requirements of the right-angle test procedure.

  8. Large Scale Landform Mapping Using Lidar DEM

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-08-01

    Full Text Available In this study, LIDAR DEM data was used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics tool (Majority, considering the minimum area condition in cartographic generalization in order to obtain landform maps at 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually with hillshaded DEM and an orthophoto. As a result, these maps provide satisfactory visuals of the landforms. In order to show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. Consequently, landform maps at large scales could be obtained with the proposed methodology, including generalization using LIDAR DEM.
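
    The Focal Statistics (Majority) generalization step described above amounts to a moving-window mode filter on the categorical landform raster. The sketch below reproduces that idea with scipy on an invented raster; the class codes and the 5x5 window are illustrative assumptions, not the parameters or data of the study.

```python
# Sketch of the majority (mode) focal filter used to generalize a categorical
# landform raster.  The raster and the 5x5 window are invented for illustration.
import numpy as np
from scipy.ndimage import generic_filter

def majority(values):
    """Most frequent class code inside the moving window."""
    counts = np.bincount(values.astype(int))
    return np.argmax(counts)

rng = np.random.default_rng(0)
landforms = rng.integers(0, 4, size=(200, 200))        # 4 landform classes
# add some salt-and-pepper detail that generalization should remove
landforms[rng.random((200, 200)) < 0.05] = 3

generalized = generic_filter(landforms, majority, size=5, mode="nearest")
changed = np.mean(generalized != landforms)
print(f"{changed:.1%} of cells reassigned by the majority filter")
```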

  9. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach is the IBA Emscher Park in the Ruhr area in Germany. Over a 10-year period (1988-1998), more than 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines... for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects...

  10. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  11. Testing Einstein's Gravity on Large Scales

    Science.gov (United States)

    Prescod-Weinstein, Chandra

    2011-01-01

    A little over a decade has passed since two teams studying high redshift Type Ia supernovae announced the discovery that the expansion of the universe was accelerating. After all this time, we're still not sure how cosmic acceleration fits into the theory that tells us about the large-scale universe: General Relativity (GR). As part of our search for answers, we have been forced to question GR itself. But how will we test our ideas? We are fortunate enough to be entering the era of precision cosmology, where the standard model of gravity can be subjected to more rigorous testing. Various techniques will be employed over the next decade or two in the effort to better understand cosmic acceleration and the theory behind it. In this talk, I will describe cosmic acceleration, current proposals to explain it, and weak gravitational lensing, an observational effect that allows us to do the necessary precision cosmology.

  12. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  13. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
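
    Feature 3 above, the reduced-gradient form of the null-space basis, can be made concrete in a few lines: partitioning the constraint Jacobian as A = [B N] with B square and nonsingular gives the basis Z = [[-B^-1 N], [I]], whose columns span the null space of A, and the reduced gradient is then Z^T times the objective gradient. The toy check below uses random data and is only a sketch of the idea, not of the modified MINOS implementation described in the abstract.

```python
# Illustration of the reduced-gradient (variable-reduction) null-space basis:
# with A = [B  N] and B nonsingular, Z = [[-B^-1 N], [I]] satisfies A Z = 0.
# Dimensions and data are arbitrary toy values.
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 9                                   # m constraints, n variables
A = rng.normal(size=(m, n))
B, N = A[:, :m], A[:, m:]                     # basic / nonbasic partition

Z = np.vstack([-np.linalg.solve(B, N),        # -B^-1 N  (basic part)
               np.eye(n - m)])                #  I       (nonbasic part)

print("||A Z|| =", np.linalg.norm(A @ Z))     # ~1e-15: Z spans the null space of A

# A reduced-Hessian quasi-Newton method works with the (n-m)-dimensional
# reduced gradient Z^T grad(f) instead of the full gradient.
g = rng.normal(size=n)                        # stand-in for grad(f) at the current point
print("reduced gradient:", Z.T @ g)
```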

  14. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace; the most important issues for applications are the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.
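
    The saving exploited by shifted Krylov solvers of this kind comes from the shift invariance of the Krylov subspace: K_m(A, b) = K_m(A + sigma*I, b) for every shift sigma, so a single basis built for a seed system can serve all shifted systems. The sketch below demonstrates this property by building one Arnoldi basis and solving several shifted systems by projection; it is a plain illustration of the invariance, not the shifted COCG algorithm or its seed-switching strategy.

```python
# One Krylov basis, many shifts: build an Arnoldi basis for the seed matrix A
# and reuse it to solve (A + sigma*I)x = b for several shifts by projection.
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 60                                 # problem size, Krylov dimension
A = rng.normal(size=(n, n)) / np.sqrt(n)
A = 0.5 * (A + A.T) + 3.0 * np.eye(n)          # symmetric, spectrum well away from the shifts
b = rng.normal(size=n)

# Arnoldi (effectively Lanczos here) on the seed matrix A only
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
V[:, 0] = b / np.linalg.norm(b)
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

beta = np.linalg.norm(b)
for sigma in [0.0, 0.5, 2.0]:
    # projected shifted system: (H_m + sigma*I) y = beta*e1, then x ~ V_m y
    Hm = H[:m, :m] + sigma * np.eye(m)
    y = np.linalg.solve(Hm, beta * np.eye(m)[:, 0])
    x = V[:, :m] @ y
    res = np.linalg.norm((A + sigma * np.eye(n)) @ x - b)
    print(f"shift {sigma:4.1f}: relative residual {res / beta:.2e}")
```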

  15. Monte Carlo evaluation of path integral for the nuclear shell model

    International Nuclear Information System (INIS)

    Lang, G.H.

    1993-01-01

    The authors present a path-integral formulation of the nuclear shell model using auxiliary fields; the path-integral is evaluated by Monte Carlo methods. The method scales favorably with valence-nucleon number and shell-model basis: full-basis calculations are demonstrated up to the rare-earth region, which cannot be treated by other methods. Observables are calculated for the ground state and in a thermal ensemble. Dynamical correlations are obtained, from which strength functions are extracted through the Maximum Entropy method. Examples in the s-d shell, where exact diagonalization can be carried out, compared well with exact results. The "sign problem" generic to quantum Monte Carlo calculations is found to be absent for the attractive pairing-plus-multipole interactions. The formulation is general for interacting fermion systems and is well suited for parallel computation. The authors have implemented it on the Intel Touchstone Delta System, achieving better than 99% parallelization

  16. Study of nickel nuclei by (p,d) and (p,t) reactions. Shell model interpretation

    International Nuclear Information System (INIS)

    Kong-A-Siou, D.-H.

    1975-01-01

    The experimental techniques employed at the Nuclear Science Institute (Grenoble) and at Michigan State University are described. The development of the transition amplitude calculation of the one- or two-nucleon transfer reactions is described first, after which the principle of shell model calculations is outlined. The choices of configuration space and two-body interactions are discussed. The DWBA method of analysis is studied in more detail. The effects of different approximations and the influence of the parameters are examined. Special attention is paid to the j-dependence of the form of the angular distributions, an effect not explained in the standard DWBA framework. The results are analysed and a large section is devoted to a comparative study of the experimental results obtained and those from other nuclear reactions. The spectroscopic data obtained are compared with the results of shell model calculations [fr

  17. Ab Initio Study of 40Ca with an Importance Truncated No-Core Shell Model

    Energy Technology Data Exchange (ETDEWEB)

    Roth, R; Navratil, P

    2007-05-22

    We propose an importance truncation scheme for the no-core shell model, which enables converged calculations for nuclei well beyond the p-shell. It is based on an a priori measure for the importance of individual basis states constructed by means of many-body perturbation theory. Only the physically relevant states of the no-core model space are considered, which leads to a dramatic reduction of the basis dimension. We analyze the validity and efficiency of this truncation scheme using different realistic nucleon-nucleon interactions and compare to conventional no-core shell model calculations for {sup 4}He and {sup 16}O. Then, we present the first converged calculations for the ground state of {sup 40}Ca within no-core model spaces including up to 16{h_bar}{Omega}-excitations using realistic low-momentum interactions. The scheme is universal and can be easily applied to other quantum many-body problems.
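
    The selection step can be mimicked on a toy matrix: rank candidate basis states by a first-order perturbative importance measure κ_ν = |⟨Φ_ν|H|Φ_ref⟩| / (E_ν − E_ref) with respect to a reference state, and keep only states above a threshold κ_min. The sketch below uses a generic dense random matrix rather than a nuclear Hamiltonian, and the thresholds are arbitrary; it only shows that the lowest eigenvalue of the much smaller importance-truncated space moves toward the full result as κ_min is lowered.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 2000
H = rng.standard_normal((dim, dim)) * 0.02    # weak off-diagonal couplings
H = (H + H.T) / 2.0
H += np.diag(np.linspace(0.0, 100.0, dim))    # rising unperturbed spectrum

ref = 0                                        # reference state = lowest diagonal entry
E_ref = H[ref, ref]
denom = np.abs(np.diag(H) - E_ref)
denom[ref] = np.inf                            # exclude the reference itself
kappa = np.abs(H[:, ref]) / denom              # first-order importance measure

for kappa_min in (1e-2, 3e-3, 1e-3):
    keep = np.flatnonzero((kappa >= kappa_min) | (np.arange(dim) == ref))
    E_trunc = np.linalg.eigvalsh(H[np.ix_(keep, keep)])[0]
    print(kappa_min, len(keep), E_trunc)       # smaller space, lowest eigenvalue improves

print("full:", np.linalg.eigvalsh(H)[0])       # exact diagonalization for comparison
```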

  18. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    A number of studies have been conducted on implementing oxy-fuel combustion with flue gas recycle in conventional utility boilers as part of carbon capture and storage efforts. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher degree of incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  19. Solving large scale structure in ten easy steps with COLA

    Energy Technology Data Exchange (ETDEWEB)

    Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10{sup 9}M{sub sun}/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10{sup 11}M{sub sun}/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
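
    To give a flavour of the LPT ingredient that a COLA-type code evolves particles relative to, the snippet below computes a Zel'dovich (first-order LPT) displacement field ψ satisfying ∇·ψ = −δ from a stand-in density field by FFT. It is only that single ingredient under invented parameters (white-noise field, arbitrary box), not the COLA time integrator or an N-body solver.

```python
import numpy as np

n, box = 64, 100.0
rng = np.random.default_rng(2)
delta = rng.standard_normal((n, n, n))           # stand-in linear density contrast field
delta_k = np.fft.rfftn(delta)

kfull = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
khalf = 2.0 * np.pi * np.fft.rfftfreq(n, d=box / n)
kx, ky, kz = np.meshgrid(kfull, kfull, khalf, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                                # avoid dividing by zero for the mean mode

# Zel'dovich displacement: psi_k = i k delta_k / k^2, so that div(psi) = -delta
psi = [np.fft.irfftn(1j * ki * delta_k / k2, s=(n, n, n)) for ki in (kx, ky, kz)]
print([float(p.std()) for p in psi])             # particles would move as x = q + D(t) * psi(q)
```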

  20. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v²/2 − M cos x − P cos k(x−t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles conjectured by J. Greene and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
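
    Chirikov's standard mapping mentioned in the abstract is easy to experiment with directly. The sketch below iterates the map p' = p + K sin x, x' = x + p' together with its tangent map and prints a rough largest Lyapunov exponent; the initial conditions and iteration count are arbitrary illustrative choices. For K well below the large-scale stochasticity threshold (K_c ≈ 0.97) a typical orbit gives an exponent near zero, while well above it the exponent is clearly positive.

```python
import numpy as np

def lyapunov_standard_map(K, x0, p0, n_iter=100_000):
    """Rough largest Lyapunov exponent of the standard map via the tangent map."""
    x, p = x0, p0
    v = np.array([1.0, 0.0])                     # tangent vector
    total = 0.0
    for _ in range(n_iter):
        jac = np.array([[1.0 + K * np.cos(x), 1.0],
                        [K * np.cos(x),       1.0]])   # d(x', p') / d(x, p)
        p = (p + K * np.sin(x)) % (2.0 * np.pi)
        x = (x + p) % (2.0 * np.pi)
        v = jac @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
    return total / n_iter

print(lyapunov_standard_map(K=0.1, x0=1.0, p0=2.0))   # below threshold: exponent near zero
print(lyapunov_standard_map(K=5.0, x0=1.0, p0=2.0))   # above threshold: exponent clearly positive
```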

  1. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications in de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting enzyme MMP-9, and graphene is shown to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells. © 2014 Wiley Periodicals, Inc.

  2. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  3. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    Cristina Rulli, Maria; D’Odorico, Paolo

    2014-01-01

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300–550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security for the local populations. (letter)

  4. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
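
    At its core, latitude-weighted EOF extraction is a singular value decomposition of the weighted anomaly matrix, and the distributed Spark implementation in the record parallelizes exactly this linear algebra. Below is a single-node sketch on a small synthetic (time, lat, lon) field; the sizes, the planted signal, and the sqrt(cos(latitude)) area weighting are illustrative assumptions, not the CFSR analysis itself.

```python
import numpy as np

# Synthetic (time, lat, lon) field with one planted oscillating spatial pattern
nt, nlat, nlon = 120, 30, 60
rng = np.random.default_rng(3)
lat = np.linspace(-87.0, 87.0, nlat)
lon = np.linspace(0.0, 357.0, nlon)
pattern = np.outer(np.cos(np.deg2rad(lat)), np.sin(3.0 * np.deg2rad(lon)))
pc_true = np.sin(2.0 * np.pi * np.arange(nt) / 12.0)
field = pc_true[:, None, None] * pattern[None, :, :] \
        + 0.3 * rng.standard_normal((nt, nlat, nlon))

# Latitude weighting by sqrt(cos(lat)), then SVD of the anomaly matrix
weights = np.sqrt(np.cos(np.deg2rad(lat)))[:, None] * np.ones((nlat, nlon))
anom = field - field.mean(axis=0)
X = (anom * weights[None, :, :]).reshape(nt, -1)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
eof1 = Vt[0].reshape(nlat, nlon)            # leading (weighted) spatial pattern
pc1 = U[:, 0] * s[0]                        # leading principal component time series
print(explained[:3])                        # the first EOF captures the planted signal
print(np.corrcoef(pc1, pc_true)[0, 1])      # |correlation| close to 1 (overall sign is arbitrary)
```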

  5. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S′, with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S′ in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter.

  6. Spectroscopy of 215Ra: the shell model and enhanced E3 transitions

    International Nuclear Information System (INIS)

    Stuchbery, A.E.; Dracoulis, G.D.; Kibedi, T.; Fabricius, B.; Lane, G.J.; Poletti, A.R.; Baxter, A.M.

    1998-01-01

    Excited states in the N=127 nucleus 215 Ra have been studied using γ-ray and electron spectroscopy following reactions of 13 C on 206 Pb targets. Levels were identified up to spins of ≈ 61/2 ℏ and excitation energies of ≈ 6 MeV. Enhanced octupole transitions are a feature of the level scheme. Lifetimes and magnetic moments were measured for several isomeric levels. The level scheme, transition rates and magnetic moments are compared with empirical shell model calculations and multiparticle octupole-coupled shell model calculations. In general, the experimental data are well described, but in comparison with its success in describing enhanced E3 transitions between related states in the radon isotopes, some limitations of the multiparticle octupole-coupling approach are revealed in 215 Ra. (orig.)

  7. Collectivity in heavy nuclei in the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Özen, C.; Alhassid, Y.; Nakada, H.

    2014-01-01

    The microscopic description of collectivity in heavy nuclei in the framework of the configuration-interaction shell model has been a major challenge. The size of the model space required for the description of heavy nuclei prohibits the use of conventional diagonalization methods. We have overcome this difficulty by using the shell model Monte Carlo (SMMC) method, which can treat model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We identify a thermal observable that can distinguish between vibrational and rotational collectivity and use it to describe the crossover from vibrational to rotational collectivity in families of even-even rare-earth isotopes. We calculate the state densities in these nuclei and find them to be in close agreement with experimental data. We also calculate the collective enhancement factors of the corresponding level densities and find that their decay with excitation energy is correlated with the pairing and shape phase transitions. (author)

  8. On the absence of an α-nucleus structure in a two-centre shell model

    International Nuclear Information System (INIS)

    Gupta, R.K.; Sharma, M.K.; Antonenko, N.V.; Scheid, W.

    1999-01-01

    The two-centre shell model, used within the Strutinsky macro-microscopic method, is a valid prescription for calculating adiabatic or diabatic potential energy surfaces. It is shown, however, that this model does not contain the appropriate α-nucleus structure effects, very much required for collisions between light nuclei. A possible way to incorporate such effects is suggested. (author). Letter-to-the-editor

  9. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  10. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  11. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  12. Quantum gravity and the large scale anomaly

    Energy Technology Data Exchange (ETDEWEB)

    Kamenshchik, Alexander Y.; Tronconi, Alessandro; Venturi, Giovanni, E-mail: Alexander.Kamenshchik@bo.infn.it, E-mail: Alessandro.Tronconi@bo.infn.it, E-mail: Giovanni.Venturi@bo.infn.it [Dipartimento di Fisica e Astronomia and INFN, Via Irnerio 46,40126 Bologna (Italy)

    2015-04-01

    The spectrum of primordial perturbations obtained by calculating the quantum gravitational corrections to the dynamics of scalar perturbations is compared with Planck 2013 and BICEP2/Keck Array public data. The quantum gravitational effects are calculated in the context of a Wheeler-DeWitt approach and have quite distinctive features. We constrain the free parameters of the theory by comparison with observations.

  13. Corrections to the neutrinoless double-β-decay operator in the shell model

    Science.gov (United States)

    Engel, Jonathan; Hagen, Gaute

    2009-06-01

    We use diagrammatic perturbation theory to construct an effective shell-model operator for the neutrinoless double-β decay of Se82. The starting point is the same Bonn-C nucleon-nucleon interaction that is used to generate the Hamiltonian for recent shell-model calculations of double-β decay. After first summing high-energy ladder diagrams that account for short-range correlations and then adding diagrams of low order in the G matrix to account for longer-range correlations, we fold the two-body matrix elements of the resulting effective operator with transition densities from the recent shell-model calculation to obtain the overall nuclear matrix element that governs the decay. Although the high-energy ladder diagrams suppress this matrix element at very short distances as expected, they enhance it at distances between one and two fermis, so that their overall effect is small. The corrections due to longer-range physics are large, but cancel one another so that the fully corrected matrix element is comparable to that produced by the bare operator. This cancellation between large and physically distinct low-order terms indicates the importance of a reliable nonperturbative calculation.

  14. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  15. Nucleon-pair approximation to the nuclear shell model

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Y.M., E-mail: ymzhao@sjtu.edu.cn [Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China); Arima, A. [Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China); Musashi Gakuen, 1-26-1 Toyotamakami Nerima-ku, Tokyo 176-8533 (Japan)

    2014-12-01

    Atomic nuclei are complex systems of nucleons–protons and neutrons. Nucleons interact with each other via an attractive and short-range force. This feature of the interaction leads to a pattern of dominantly monopole and quadrupole correlations between like particles (i.e., proton–proton and neutron–neutron correlations) in low-lying states of atomic nuclei. As a consequence, among dozens or even hundreds of possible types of nucleon pairs, very few nucleon pairs such as proton and neutron pairs with spin zero, two (in some cases spin four), and occasionally isoscalar spin-aligned proton–neutron pairs, play important roles in low-energy nuclear structure. The nucleon-pair approximation therefore provides us with an efficient truncation scheme of the full shell model configurations, which are otherwise too large to handle for medium and heavy nuclei in the foreseeable future. Furthermore, the nucleon-pair approximation leads to simple pictures in physics, as the dimension of the nucleon-pair subspace is always small. The present paper aims at a thorough review of its history, formulation, validity, and applications, as well as its link to previous approaches, with the focus on the new developments in the last two decades. The applicability of the nucleon-pair approximation and numerical calculations of low-lying states for realistic atomic nuclei are demonstrated with examples. Applications of pair approximations to other problems are also discussed.

  16. Fermion dynamical symmetry and the nuclear shell model

    International Nuclear Information System (INIS)

    Ginocchio, J.N.

    1985-01-01

    The interacting boson model (IBM) has been very successful in giving a unified and simple description of the spectroscopic properties of a wide range of nuclei, from vibrational through rotational nuclei. The three basic assumptions of the model are that: (1) the valence nucleons move about a doubly closed core, (2) the collective low-lying states are composed primarily of coherent pairs of neutrons and pairs of protons coupled to angular momentum zero and two, and (3) these coherent pairs are approximated as bosons. In this review we shall show how it is possible to have fermion Hamiltonians which have a class of collective eigenstates composed entirely of monopole and quadrupole pairs of fermions. Hence these models satisfy assumptions (1) and (2) above but no boson approximation need be made. Thus the Pauli principle is kept intact. Furthermore the fermion shell model states excluded in the IBM can be classified by the number of fermion pairs which are not coherent monopole or quadrupole pairs. Hence the mixing of these states into the low-lying spectrum can be calculated in a systematic and tractable manner. Thus we can introduce features which are outside the IBM. 11 refs

  17. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the magnetic field is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. This model is the first to compute non-ideal effects from

  18. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  19. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000 MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguarding of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000t/a) can reprocess fuel from reactors producing 50,000-65,000 MW(e). Only two or three fuel cycle centres of the upper limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also of mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. The need for effective fissile

  20. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States); Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c²_s ≈ 10⁻⁶ c² and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)⁴. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc⁻¹.

  1. Glass badge dosimetry system for large scale personal monitoring

    International Nuclear Information System (INIS)

    Norimichi Juto

    2002-01-01

    The Glass Badge, based on a silver-activated phosphate glass dosemeter, was specially developed for large scale personal monitoring, together with a dosimetry system comprising an automatic reader and a dose calculation algorithm to make such monitoring practical. In large scale personal monitoring, both precision in dosimetry and reliability in handling a large volume of personal data are very important. The silver-activated phosphate glass dosemeter has excellent basic characteristics for dosimetry, such as homogeneous and stable sensitivity and negligible fading. The Glass Badge was designed to measure photons in the 10 keV - 10 MeV range, beta particles in the 300 keV - 3 MeV range, and, with an included SSNTD, neutrons in the 0.025 eV - 15 MeV range. The dosimetry system developed around the Glass Badge offers not only these basic characteristics but also many features that maintain good precision in dosimetry and data handling. In this presentation, features of Glass Badge dosimetry systems and examples of practical personal monitoring systems will be presented. (Author)

  2. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. Modelling the experiment will permit a better understanding of the responses, confirmation of hypotheses about mechanisms and processes, and lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to increases in permeability. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  3. Large-scale fuel cycle centers

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The United States Nuclear Regulatory Commission (NRC) has considered the nuclear energy center concept for fuel cycle plants in the Nuclear Energy Center Site Survey - 1975 (NECSS-75) -- an important study mandated by the U.S. Congress in the Energy Reorganization Act of 1974 which created the NRC. For the study, NRC defined fuel cycle centers to consist of fuel reprocessing and mixed oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle center sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000 - 300,000 MWe. The types of fuel cycle facilities located at the fuel cycle center permit the assessment of the role of fuel cycle centers in enhancing safeguarding of strategic special nuclear materials -- plutonium and mixed oxides. Siting of fuel cycle centers presents a considerably smaller problem than the siting of reactors. A single reprocessing plant of the scale projected for use in the United States (1500-2000 MT/yr) can reprocess the fuel from reactors producing 50,000-65,000 MWe. Only two or three fuel cycle centers of the upper limit size considered in the NECSS-75 would be required in the United States by the year 2000. The NECSS-75 fuel cycle center evaluations showed that large scale fuel cycle centers present no real technical difficulties in siting from a radiological effluent and safety standpoint. Some construction economies may be attainable with fuel cycle centers; such centers offer opportunities for improved waste management systems. Combined centers consisting of reactors and fuel reprocessing and mixed oxide fuel fabrication plants were also studied in the NECSS. Such centers can eliminate shipment not only of plutonium but also of mixed oxide fuel. Increased fuel cycle costs result from implementation of combined centers unless the fuel reprocessing plants are commercial-sized. Development of plutonium-burning reactors could reduce any

  4. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays. This invention is inspired by the heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  5. Large scale computing in theoretical physics: Example QCD

    International Nuclear Information System (INIS)

    Schilling, K.

    1986-01-01

    The limitations of the classical mathematical analysis of Newton and Leibniz appear to be more and more overcome by the power of modern computers. Large scale computing techniques - which closely resemble the methods used in simulations within statistical mechanics - allow one to treat nonlinear systems with many degrees of freedom, such as field theories in nonperturbative situations, where analytical methods fail. The computation of the hadron spectrum within the framework of lattice QCD sets a demanding goal for the application of supercomputers in basic science. It requires both big computer capacities and clever algorithms to fight all the numerical evils that one encounters in the Euclidean world. The talk will attempt to describe both the computer aspects and the present state of the art of spectrum calculations within lattice QCD. (orig.)

  6. Isolating relativistic effects in large-scale structure

    Science.gov (United States)

    Bonvin, Camille

    2014-12-01

    We present a fully relativistic calculation of the observed galaxy number counts in the linear regime. We show that besides the density fluctuations and redshift-space distortions, various relativistic effects contribute to observations at large scales. These effects all have the same physical origin: they result from the fact that our coordinate system, namely the galaxy redshift and the incoming photons’ direction, is distorted by inhomogeneities in our Universe. We then discuss the impact of the relativistic effects on the angular power spectrum and on the two-point correlation function in configuration space. We show that the latter is very well adapted to isolate the relativistic effects since it naturally makes use of the symmetries of the different contributions. In particular, we discuss how the Doppler effect and the gravitational redshift distortions can be isolated by looking for a dipole in the cross-correlation function between a bright and a faint population of galaxies.
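
    The statement that the Doppler and gravitational-redshift terms can be isolated through a dipole in the cross-correlation between a bright and a faint population can be illustrated with a Legendre projection: given ξ(r, μ) as a function of the cosine μ of the angle to the line of sight, the dipole is ξ₁(r) = (3/2) ∫₋₁¹ ξ(r, μ) μ dμ. The toy below uses an invented functional form purely to show that this projection picks out the odd (dipole) part; it is not the paper's relativistic calculation.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Toy cross-correlation with a planted dipole (the functional form is made up)
xi = lambda r, mu: np.exp(-r / 10.0) * (1.0 + 0.3 * mu + 0.1 * (3.0 * mu**2 - 1.0) / 2.0)

mu_nodes, mu_weights = leggauss(16)           # Gauss-Legendre quadrature on [-1, 1]
r = 5.0
dipole = 1.5 * np.sum(mu_weights * xi(r, mu_nodes) * mu_nodes)
print(dipole, 0.3 * np.exp(-r / 10.0))        # the projection recovers the planted dipole amplitude
```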

  7. Neutrinoless double-β decay matrix elements in large shell-model spaces with the generator-coordinate method

    Science.gov (United States)

    Jiao, C. F.; Engel, J.; Holt, J. D.

    2017-11-01

    We use the generator-coordinate method (GCM) with realistic shell-model interactions to closely approximate full shell-model calculations of the matrix elements for the neutrinoless double-β decay of 48Ca, 76Ge, and 82Se. We work in one major shell for the first isotope, in the f5/2 p g9/2 space for the second and third, and finally in two major shells for all three. Our coordinates include not only the usual axial deformation parameter β, but also the triaxiality angle γ and neutron-proton pairing amplitudes. In the smaller model spaces our matrix elements agree well with those of full shell-model diagonalization, suggesting that our Hamiltonian-based GCM captures most of the important valence-space correlations. In two major shells, where exact diagonalization is not currently possible, our matrix elements are only slightly different from those in a single shell.

  8. Thermal power generation projects "Large Scale Solar Heating" (EU Thermie projects)

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large-Scale Solar Heating" programme for a Europe-wide development of this technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for funding. In November 1997 the EU Commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. By mid-1997 a smaller project had already been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and which mainly serves technology transfer. (orig.)

  9. Radiative capture reaction {sup 7}Be(p,{gamma}){sup 8}B in the continuum shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bennaceur, K; Ploszajczak, M [Grand Accelerateur National d'Ions Lourds (GANIL), Caen (France); Nowacki, F [Grand Accelerateur National d'Ions Lourds (GANIL), Caen (France); [Lab. de Physique Theorique Strasbourg, Strasbourg (France); Okolowicz, J [Grand Accelerateur National d'Ions Lourds (GANIL), Caen (France); [Inst. of Nuclear Physics, Krakow (Poland)

    1998-06-01

    We present here the first application of realistic shell model (SM) including coupling between many-particle (quasi-)bound states and the continuum of one-particle scattering states to the calculation of the total capture cross section and the astrophysical factor in the reaction {sup 7}Be(p,{gamma}){sup 8}B. (orig.)

  10. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi–site data by using within–year recaptures to provide a covariate of between–year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands where a re–introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap–happiness” effects. As the data are based on resightings such trap–happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  11. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  12. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage. The administrator carries a heavy workload, and much time is spent on the management and maintenance of the system. The nodes of a large-scale cluster easily fall into disorder: with thousands of nodes placed in large machine rooms, administrators can easily confuse one machine with another. How can a large-scale cluster system be managed accurately and effectively? This article introduces ELFms for large-scale cluster systems and proposes how to realize their automatic management. (authors)

  13. Large scale propagation intermittency in the atmosphere

    Science.gov (United States)

    Mehrabi, Ali

    2000-11-01

    Long-term (several minutes to hours) amplitude variations observed in outdoor sound propagation experiments at Disneyland, California, in February 1998 are explained in terms of a time varying index of refraction. The experimentally propagated acoustic signals were received and recorded at several locations ranging from 300 meters to 2,800 meters. Meteorological data was taken as a function of altitude simultaneously with the received signal levels. There were many barriers along the path of acoustic propagation that affected the received signal levels, especially at short ranges. In a downward refraction situation, there could be a random change of amplitude in the predicted signals. A computer model based on the Fast Field Program (FFP) was used to compute the signal loss at the different receiving locations and to verify that the variations in the received signal levels can be predicted numerically. The calculations agree with experimental data with the same trend variations in average amplitude.

  14. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  15. Van der Waals coefficients beyond the classical shell model

    Energy Technology Data Exchange (ETDEWEB)

    Tao, Jianmin, E-mail: jianmint@sas.upenn.edu [Department of Chemistry, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6323 (United States); Fang, Yuan; Hao, Pan [Department of Physics and Engineering Physics, Tulane University, New Orleans, Louisiana 70118 (United States); Scuseria, G. E. [Department of Chemistry and Department of Physics and Astronomy, Rice University, Houston, Texas 77251-1892, USA and Department of Chemistry, Faculty of Science, King Abdulaziz University, Jeddah 21589 (Saudi Arabia); Ruzsinszky, Adrienn; Perdew, John P. [Department of Physics, Temple University, Philadelphia, Pennsylvania 19122 (United States)

    2015-01-14

    Van der Waals (vdW) coefficients can be accurately generated and understood by modelling the dynamic multipole polarizability of each interacting object. Accurate static polarizabilities are the key to accurate dynamic polarizabilities and vdW coefficients. In this work, we present and study in detail a hollow-sphere model for the dynamic multipole polarizability proposed recently by two of the present authors (JT and JPP) to simulate the vdW coefficients for inhomogeneous systems that allow for a cavity. The inputs to this model are the accurate static multipole polarizabilities and the electron density. A simplification of the full hollow-sphere model, the single-frequency approximation (SFA), circumvents the need for a detailed electron density and for a double numerical integration over space. We find that the hollow-sphere model in SFA is not only accurate for nanoclusters and cage molecules (e.g., fullerenes) but also yields vdW coefficients among atoms, fullerenes, and small clusters in good agreement with expensive time-dependent density functional calculations. However, the classical shell model (CSM), which inputs the static dipole polarizabilities and estimates the static higher-order multipole polarizabilities therefrom, is accurate for the higher-order vdW coefficients only when the interacting objects are large. For the lowest-order vdW coefficient C{sub 6}, SFA and CSM are exactly the same. The higher-order (C{sub 8} and C{sub 10}) terms of the vdW expansion can be almost as important as the C{sub 6} term in molecular crystals. Application to a variety of clusters shows that there is strong non-additivity of the long-range vdW interactions between nanoclusters.
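
    The route from dynamic dipole polarizabilities to the lowest-order coefficient C₆ is the Casimir-Polder integral C₆ = (3/π) ∫₀^∞ α_A(iω) α_B(iω) dω. The snippet below evaluates it for a generic one-effective-frequency (London-type) model of α(iω) and checks the result against the closed-form London expression. The input numbers are placeholders, and this is not the hollow-sphere/SFA model of the record; it only shows the common integral that any such polarizability model feeds into.

```python
import numpy as np
from scipy.integrate import quad

def alpha_iw(w, alpha0, w_eff):
    """One-effective-frequency model of the dynamic dipole polarizability at imaginary frequency."""
    return alpha0 / (1.0 + (w / w_eff) ** 2)

def c6(alpha0_a, w_a, alpha0_b, w_b):
    """Casimir-Polder integral C6 = (3/pi) * int_0^inf alpha_A(iw) * alpha_B(iw) dw."""
    integrand = lambda w: alpha_iw(w, alpha0_a, w_a) * alpha_iw(w, alpha0_b, w_b)
    value, _ = quad(integrand, 0.0, np.inf)
    return 3.0 / np.pi * value

# Placeholder inputs in atomic units (not fitted to any real atom or cluster)
a0_a, we_a, a0_b, we_b = 1.4, 1.1, 11.0, 0.6
print(c6(a0_a, we_a, a0_b, we_b))                                # numerical integral
print(1.5 * a0_a * a0_b * we_a * we_b / (we_a + we_b))           # London closed form, same model
```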

  16. Exotic muon-to-positron conversion in nuclei: partial transition sum evaluation by using shell model

    International Nuclear Information System (INIS)

    Divari, P.C.; Vergados, J.D.; Kosmas, T.S.; Skouras, L.D.

    2001-01-01

    A comprehensive study of the exotic (μ-,e+) conversion in 27 Al, 27 Al(μ-,e+) 27 Na, is presented. The relevant operators are deduced assuming one-pion and two-pion modes in the framework of intermediate neutrino mixing models, paying special attention to the light neutrino case. The total rate is calculated by summing over partial transition strengths for all kinematically accessible final states, derived with s-d shell model calculations employing the well-known Wildenthal realistic interaction.

  17. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  18. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  19. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop`s University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  20. Large-Scale Structure Behind The Milky Way with ALFAZOA

    Science.gov (United States)

    Sanchez Barrantes, Monica; Henning, Patricia A.; Momjian, Emmanuel; McIntyre, Travis; Minchin, Robert F.

    2018-06-01

    The region of the sky behind the Milky Way (the Zone of Avoidance; ZOA) is not well studied due to high obscuration from gas and dust in our galaxy as well as stellar confusion, which results in a low detection rate of galaxies in this region. Because of this, little is known about the distribution of galaxies in the ZOA, and other all-sky redshift surveys have incomplete maps (e.g. the 2MASS Redshift Survey in the NIR has a gap of 5-8 deg around the Galactic plane). There is still controversy about the dipole anisotropy calculated from the comparison between the CMB and galaxy redshift surveys, in part due to the incomplete sky mapping and redshift depth of these surveys. Fortunately, there is no ZOA at radio wavelengths, because such wavelengths pass unimpeded through dust and are not affected by stellar confusion. Therefore, we can detect and map the distribution of obscured galaxies through their 21 cm neutral hydrogen emission line and trace the large-scale structure across the Galactic plane. The Arecibo L-Band Feed Array Zone of Avoidance (ALFAZOA) survey is a blind HI survey for galaxies behind the Milky Way that covers more than 1000 square degrees of the sky, conducted in two phases: shallow (completed) and deep (ongoing). We show the results of the finished shallow phase of the survey, which mapped a region between galactic longitude l = 30-75 deg and latitude |b| < 10 deg, and detected 418 galaxies to about 12,000 km/s, including galaxy properties and mapped large-scale structure. We do the same for new results from the deep phase, which is ongoing and covers 30 < l < 75 deg with |b| < 2 deg for the inner galaxy, and 175 < l < 207 deg with -2 < b < 1 deg for the outer galaxy.

  1. Large scale structure from viscous dark matter

    CERN Document Server

    Blas, Diego; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-01-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale $k_m$ for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale $k_m$, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with $N$-body simulations up to scales $k=0.2 \\, h/$Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to varia...

  2. Software for large scale tracking studies

    International Nuclear Information System (INIS)

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references

  3. Superconductivity for Large Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    R. Fair; W. Stautner; M. Douglass; R. Rajput-Ghoshal; M. Moscinski; P. Riley; D. Wagner; J. Kim; S. Hou; F. Lopez; K. Haran; J. Bray; T. Laskaris; J. Rochford; R. Duckworth

    2012-10-12

    A conceptual design has been completed for a 10MW superconducting direct drive wind turbine generator employing low temperature superconductors for the field winding. Key technology building blocks from the GE Wind and GE Healthcare businesses have been transferred across to the design of this concept machine. Wherever possible, conventional technology and production techniques have been used in order to support the case for commercialization of such a machine. Appendices A and B provide further details of the layout of the machine and the complete specification table for the concept design. Phase 1 of the program has allowed us to understand the trade-offs between the various sub-systems of such a generator and its integration with a wind turbine. A Failure Modes and Effects Analysis (FMEA) and a Technology Readiness Level (TRL) analysis have been completed resulting in the identification of high risk components within the design. The design has been analyzed from a commercial and economic point of view and Cost of Energy (COE) calculations have been carried out with the potential to reduce COE by up to 18% when compared with a permanent magnet direct drive 5MW baseline machine, resulting in a potential COE of 0.075 $/kWh. Finally, a top-level commercialization plan has been proposed to enable this technology to be transitioned to full volume production. The main body of this report will present the design processes employed and the main findings and conclusions.

  4. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed by using the practical measurement wind speed data. The characteristics of a large-scale wind farm are also discussed.
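
    A minimal sketch of the integral-output idea is given below: the farm output is obtained by applying a turbine power curve to the incoming wind speed, row by row, with a crude multiplicative wake deficit standing in for the multi-unit effects. The power-curve parameters, wake factor and layout are hypothetical and are not those of the paper.

        import numpy as np

        def turbine_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0):
            # Simplified power curve in MW: cubic ramp between cut-in and rated wind speed
            if v < v_cut_in or v >= v_cut_out:
                return 0.0
            if v >= v_rated:
                return p_rated
            return p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)

        def farm_mean_output(free_stream_speeds, rows, wake_deficit=0.08):
            # Each downstream row sees a wind speed reduced by a crude multiplicative wake factor
            total = 0.0
            for v in free_stream_speeds:
                for i, n_units in enumerate(rows):
                    v_row = v * (1.0 - wake_deficit) ** i
                    total += n_units * turbine_power(v_row)
            return total / len(free_stream_speeds)   # mean farm power in MW

        # Hypothetical example: 3 rows of 10 turbines, one day of hourly wind speeds
        speeds = np.random.weibull(2.0, 24) * 8.0
        print(farm_mean_output(speeds, rows=[10, 10, 10]))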

  5. Final Report Fermionic Symmetries and Self consistent Shell Model

    International Nuclear Information System (INIS)

    Zamick, Larry

    2008-01-01

    In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with 'anomalous' magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries--completely unknown before--and were able to explain them to a large extent. The importance of a self-consistent shell model was emphasized.

  6. Deformed shell model studies of spectroscopic properties of Zn and ...

    Indian Academy of Sciences (India)

    2014-04-05

    Apr 5, 2014 ... April 2014 physics pp. 757–767. Deformed shell model studies of ... experiments without isotopic enrichment thereby reducing the cost considerably. By taking a large mass of the sample because of its low cost, one can ...

  7. Bursts and shocks in a continuum shell model

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Bohr, Tomas; Jensen, M.H.

    1998-01-01

    We study a burst event, i.e., the evolution of an initial condition having support only in a finite interval of k-space, in the continuum shell model due to Parisi. We show that the continuum equation without forcing or dissipation can be explicitly written in characteristic form and that the right...

  8. Projected shell model study of neutron- deficient 122Ce

    Indian Academy of Sciences (India)

    Projected shell model; band diagram; yrast energies; electromagnetic quan- ... signed to 122Ce by detecting γ-rays in coincidence with evaporated charged particles .... 0.75 from the free nucleon values to account for the core-polarization and ...

  9. Mayer–Jensen Shell Model and Magic Numbers

    Indian Academy of Sciences (India)

    Mayer-Jensen Shell Model and Magic Numbers - An Independent Nucleon Model with Spin-Orbit Coupling. R Velusamy. General Article. Resonance – Journal of Science Education, Volume 12, Issue 12, December 2007, pp. 12-24 ...

  10. A different interpretation of the nuclear shell model

    International Nuclear Information System (INIS)

    Fabre de la Ripelle, M.

    1984-12-01

    In the first order approximation the nucleons move in a collective well extracted from the two-body N-N interaction. The nuclear shell model is explained by the structure of the first order solution of the Schroedinger equation. In the next step the two-body correlations generated by the N-N potential are introduced into the wave function

  11. Quantum chaos in the two-center shell model

    Energy Technology Data Exchange (ETDEWEB)

    Milek, B; Noerenberg, W; Rozmej, P [Gesellschaft fuer Schwerionenforschung m.b.H., Darmstadt (Germany, F.R.)

    1989-11-01

    Within an axially symmetric two-center shell model, single-particle levels with Ω = 1/2 are analyzed with respect to their level-spacing distributions and avoided level crossings as functions of the shape parameters. Only for shapes sufficiently far from any additional symmetry, ideal Wigner distributions are found as signature for quantum chaos. (orig.).

  12. Quantum chaos in the two-center shell model

    Energy Technology Data Exchange (ETDEWEB)

    Milek, B; Noerenberg, W; Rozmej, P

    1989-03-01

    Within an axially symmetric two-center shell model, single-particle levels with Ω = 1/2 are analyzed with respect to their level-spacing distributions and avoided level crossings as functions of the shape parameters. Only for shapes sufficiently far from any additional symmetry, ideal Wigner distributions are found as signature for quantum chaos.

  13. Solving the nuclear shell model with an algebraic method

    International Nuclear Information System (INIS)

    Feng, D.H.; Pan, X.W.; Guidry, M.

    1997-01-01

    We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)

  14. Intruder level and deformation in SD-pair shell model

    International Nuclear Information System (INIS)

    Luo Yan'an; Ning Pingzhi; Pan Feng

    2004-01-01

    The influence of intruder level on nuclear deformation is studied within the framework of the nucleon-pair shell model truncated to an SD-pair subspace. The results suggest that the intruder level has a tendency to reduce the deformation and plays an important role in determining the onset of rotational behavior. (authors)

  15. Shell model description of band structure in 48Cr

    International Nuclear Information System (INIS)

    Vargas, Carlos E.; Velazquez, Victor M.

    2007-01-01

    The band structure for normal and abnormal parity bands in 48Cr is described using the m-scheme shell model. In addition to the full fp shell, two particles in the 1d3/2 orbital are allowed in order to describe intruder states. The interaction includes fp-, sd- and mixed matrix elements

  16. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In future power systems with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  17. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package 'ATLAS' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines, i.e., basic arithmetic routines, routines for solving linear simultaneous equations and for solving general eigenvalue problems, and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  18. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries -Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  19. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords. Sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  20. Analysis of environmental impact assessment for large-scale X-ray medical equipments

    International Nuclear Information System (INIS)

    Fu Jin; Pei Chengkai

    2011-01-01

    Based on an Environmental Impact Assessment (EIA) project, this paper sets out the essential analysis elements of EIA for the sales project of large-scale X-ray medical equipment, and provides the procedure for environmental impact analysis and dose estimation under normal and accident conditions. The key points of EIA for the sales project of large-scale X-ray medical equipment include determining the pollution factors and management limit values according to the project's actual situation, and using various assessment and prediction methods, such as analogy, actual measurement and calculation, to analyze, monitor, calculate and predict the pollution under normal and accident conditions. (authors)

  1. The Rights and Responsibility of Test Takers When Large-Scale Testing Is Used for Classroom Assessment

    Science.gov (United States)

    van Barneveld, Christina; Brinson, Karieann

    2017-01-01

    The purpose of this research was to identify conflicts in the rights and responsibility of Grade 9 test takers when some parts of a large-scale test are marked by teachers and used in the calculation of students' class marks. Data from teachers' questionnaires and students' questionnaires from a 2009-10 administration of a large-scale test of…

  2. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  3. GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S

    2008-05-28

    Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns mainly driven by temperature gradients inside vapor space in a large-scaled Saltstone vault facility at Savannah River site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against the literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimations of the flow patterns within the vapor space. One is the reference nominal case. The other is for the negative temperature gradient between the roof inner and top grout surface temperatures intended for the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that the ambient air comes into the vapor space of the vault through the lower-end ventilation hole, and it gets heated up by the Benard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations will be discussed here.
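
    As a rough, generic check of whether buoyancy-driven (Benard-type) circulation is expected in such a vapor space, one can estimate the Rayleigh number from the temperature difference and gap height. The sketch below uses generic air properties and hypothetical dimensions, not the actual vault conditions reported above.

        def rayleigh_number(delta_t, length, g=9.81, beta=3.4e-3, nu=1.6e-5, alpha=2.2e-5):
            # Ra = g * beta * dT * L^3 / (nu * alpha); defaults approximate air near room temperature
            return g * beta * delta_t * length**3 / (nu * alpha)

        # Hypothetical 5 K roof-to-grout temperature difference over a 2 m vapor space
        ra = rayleigh_number(delta_t=5.0, length=2.0)
        print(f"Ra = {ra:.2e}")   # values well above ~1e9 indicate turbulent natural convection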

  4. Fragmentation of single-particle strength and the validity of the shell model

    International Nuclear Information System (INIS)

    Brand, M.G.E.; Rijsdijk, G.A.; Muller, F.A.; Allaart, K.; Dickhoff, W.H.

    1991-01-01

    The problem of missing spectroscopic strength in proton knock-out reactions is addressed by calculating this strength with a realistic interaction up to about a hundred MeV missing energy. An interaction suitably modified for short-range correlations (G-matrix) is employed in the calculation of the self-energy including all orbitals up to and including three major shells above the Fermi level for protons. The spectroscopic strength is obtained by solving the Dyson equation for the Green function with a self-energy up to second order in the interaction. Results for 48 Ca and 90 Zr are compared with recent (e,e'p) data. The calculated strength overestimates the data by about 10-15% of the independent particle shell-model (IPSM) sum rule. This is in accordance with what is expected from depletions calculated in infinite nuclear matter. Inclusion of higher order terms into the self-energy, especially the correlated motion of particles and holes, is found to be necessary to reproduce the observed fragmentation of strength in the low-energy region. The widths of the strength distributions compare well with empirical formulas which have been deduced from optical potentials. The validity of the conventional shell-model picture is connected with the relevance of Landau's quasiparticle picture for strongly interacting Fermi systems. (orig.)
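
    For reference, the central quantities of such a calculation can be written compactly as follows (a sketch in generic notation; the labels and normalization are ours, not necessarily those of the paper): the Dyson equation for the single-particle Green function with self-energy Σ*, the spectroscopic factor for removal of a nucleon from orbital α, and the IPSM sum rule against which the observed strength is compared,

        G(\alpha,\beta;\omega) = G^{(0)}(\alpha,\beta;\omega)
          + \sum_{\gamma\delta} G^{(0)}(\alpha,\gamma;\omega)\,\Sigma^{\star}(\gamma,\delta;\omega)\,G(\delta,\beta;\omega),
        \qquad
        S^{(n)}_{\alpha} = \bigl|\langle \Psi^{A-1}_{n} \,|\, a_{\alpha} \,|\, \Psi^{A}_{0} \rangle\bigr|^{2},
        \qquad
        \sum_{n} S^{(n)}_{\alpha} \le 1 .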

  5. Application of the Kishimoto-Tamura boson expansion theory to a single-j shell model

    International Nuclear Information System (INIS)

    Li, C.T.; Pedrocchi, V.G.; Tamura, T.

    1985-01-01

    The boson expansion theory of Kishimoto and Tamura is applied to a single-j shell model. It is shown that this theory is quite accurate, giving results that agree very closely with those of the exact fermion calculations. The fast convergence of the boson expansion is also demonstrated. A critical discussion is then made of an earlier paper by Arima, in which he stated that the Kishimoto-Tamura theory gives rise to very poor numerical results. The source of the trouble encountered by Arima is unmasked

  6. Recent developments of the projected shell model based on many-body techniques

    Directory of Open Access Journals (Sweden)

    Sun Yang

    2015-01-01

    Full Text Available Recent developments of the projected shell model (PSM) are summarized. Firstly, by using the Pfaffian algorithm, the multi-quasiparticle configuration space is expanded to include 6-quasiparticle states. The yrast band of 166Hf at very high spins is studied as an example, where the observed third back-bending in the moment of inertia is well reproduced and explained. Secondly, an angular-momentum projected generator coordinate method is developed based on the PSM. The evolution of the low-lying states, including the second 0+ state, of the soft Gd, Dy, and Er isotopes to the well-deformed ones is calculated and compared with experimental data.

  7. Onion-shell model for cosmic ray electrons and radio synchrotron emission in supernova remnants

    International Nuclear Information System (INIS)

    Beck, R.; Drury, L.O.; Voelk, H.J.; Bogdan, T.J.

    1985-01-01

    The spectrum of cosmic ray electrons, accelerated in the shock front of a supernova remnant (SNR), is calculated in the test-particle approximation using an onion-shell model. Particle diffusion within the evolving remnant is explicitly taken into account. The particle spectrum becomes steeper with increasing radius as well as SNR age. Simple models of the magnetic field distribution allow a prediction of the intensity and spectrum of radio synchrotron emission and their radial variation. The agreement with existing observations is satisfactory in several SNR's but fails in other cases. Radiative cooling may be an important effect, especially in SNR's exploding in a dense interstellar medium

  8. Onion-shell model for cosmic ray electrons and radio synchrotron emission in supernova remnants

    Science.gov (United States)

    Beck, R.; Drury, L. O.; Voelk, H. J.; Bogdan, T. J.

    1985-01-01

    The spectrum of cosmic ray electrons, accelerated in the shock front of a supernova remnant (SNR), is calculated in the test-particle approximation using an onion-shell model. Particle diffusion within the evolving remnant is explicitly taken into account. The particle spectrum becomes steeper with increasing radius as well as SNR age. Simple models of the magnetic field distribution allow a prediction of the intensity and spectrum of radio synchrotron emission and their radial variation. The agreement with existing observations is satisfactory in several SNR's but fails in other cases. Radiative cooling may be an important effect, especially in SNR's exploding in a dense interstellar medium.

  9. Fixed J spectral distributions in large shell model spaces. Pt. 3

    International Nuclear Information System (INIS)

    Jacquemin, C.; Auger, G.; Quesne, C.

    1982-01-01

    A method is developed to exactly calculate the fixed J quasiparticle centroid energies and partial widths. Some results obtained in the even-mass lead isotopes with various interactions are analysed. Fixed J quasiparticle distributions are used to predict an upper limit for the deviations between the quasiparticle approximation and the shell model results for the low-energy levels. The influence of the states with a high quasiparticle number in the low-energy region is seen to strongly depend upon the interaction. The importance of the dimensionalities and the internal widths in explaining the admixtures is stressed. (orig.)

  10. Shell model estimate of electric dipole moments in medium and heavy nuclei

    Directory of Open Access Journals (Sweden)

    Teruya Eri

    2015-01-01

    Full Text Available Existence of the electric dipole moment (EDM) is deeply related to time-reversal invariance. The EDM of a diamagnetic atom is mainly induced by the nuclear Schiff moment. After carrying out shell model calculations to obtain wavefunctions for Xe isotopes, we evaluate their nuclear Schiff moments to estimate the corresponding atomic EDMs. We estimate the contribution from each single-particle orbital to the Schiff moment. It is found that the contribution to the Schiff moment differs greatly from orbital to orbital.

  11. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have presented a detailed view of the large-scale communication architecture of the IOT. In fact, the lack of uniform technology between IPv6 and access points has left large-scale communication architectures without broad guiding principles. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.

  12. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  13. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping scale effects...

  14. Thermal activation of dislocations in large scale obstacle bypass

    Science.gov (United States)

    Sobie, Cameron; Capolungo, Laurent; McDowell, David L.; Martinez, Enrique

    2017-08-01

    Dislocation dynamics simulations have been used extensively to predict hardening caused by dislocation-obstacle interactions, including irradiation defect hardening in the athermal case. Incorporating the role of thermal energy on these interactions is possible with a framework provided by harmonic transition state theory (HTST) enabling direct access to thermally activated reaction rates using the Arrhenius equation, including rates of dislocation-obstacle bypass processes. Moving beyond unit dislocation-defect reactions to a representative environment containing a large number of defects requires coarse-graining the activation energy barriers of a population of obstacles into an effective energy barrier that accurately represents the large scale collective process. The work presented here investigates the relationship between unit dislocation-defect bypass processes and the distribution of activation energy barriers calculated for ensemble bypass processes. A significant difference between these cases is observed, which is attributed to the inherent cooperative nature of dislocation bypass processes. In addition to the dislocation-defect interaction, the morphology of the dislocation segments pinned to the defects play an important role on the activation energies for bypass. A phenomenological model for activation energy stress dependence is shown to describe well the effect of a distribution of activation energies, and a probabilistic activation energy model incorporating the stress distribution in a material is presented.
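
    A minimal sketch of the rate framework mentioned above: unit-process rates follow the Arrhenius/HTST form k = nu0 * exp(-dE/kB T), and an effective barrier for an ensemble can be defined from the mean rate rather than the mean barrier, which is one simple way to see why the two differ. The attempt frequency, temperature and barrier distribution below are hypothetical, and this is not the authors' coarse-graining scheme.

        import numpy as np

        KB_EV = 8.617e-5  # Boltzmann constant, eV/K

        def htst_rate(delta_e, temperature, nu0=1.0e13):
            # Harmonic TST / Arrhenius rate: k = nu0 * exp(-dE / (kB * T))
            return nu0 * np.exp(-delta_e / (KB_EV * temperature))

        def effective_barrier(barriers, temperature, nu0=1.0e13):
            # Average the rates (fast, low-barrier events dominate), then invert to an effective barrier
            k_eff = np.mean([htst_rate(e, temperature, nu0) for e in barriers])
            return -KB_EV * temperature * np.log(k_eff / nu0)

        # Hypothetical distribution of unit bypass barriers (eV)
        barriers = np.random.normal(loc=0.9, scale=0.15, size=1000)
        print(effective_barrier(barriers, temperature=600.0))  # noticeably below the 0.9 eV mean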

  15. Simulation of fatigue crack growth under large scale yielding conditions

    Science.gov (United States)

    Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann

    2010-07-01

    A simple mechanism based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium-steel. The material is described by a rate independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack-tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis performed in ABAQUS is continued for so many cycles until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. Both simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
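
    The linear da/dN-dCTOD correlation lends itself to a very small numerical sketch: estimate dCTOD each cycle and integrate the crack length. The dCTOD estimate and all parameter values below are rough, hypothetical placeholders (a small-scale-yielding-type formula), not the finite element or analytical model of the paper.

        import math

        def delta_ctod(delta_sigma, a, sigma_y=600.0, e_mod=200e3):
            # Rough cyclic crack-tip opening estimate (mm): dCTOD ~ dSigma^2 * pi * a / (2 * sigma_y * E)
            return delta_sigma**2 * math.pi * a / (2.0 * sigma_y * e_mod)

        def grow_crack(a0, delta_sigma, n_cycles, beta=0.5):
            # Integrate da/dN = beta * dCTOD cycle by cycle
            a = a0
            for _ in range(n_cycles):
                a += beta * delta_ctod(delta_sigma, a)
            return a

        # Hypothetical microcrack: a0 = 0.05 mm, stress range 400 MPa, 2000 cycles
        print(grow_crack(a0=0.05, delta_sigma=400.0, n_cycles=2000))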

  16. Discriminant WSRC for Large-Scale Plant Species Recognition

    Directory of Open Access Journals (Sweden)

    Shanwen Zhang

    2017-01-01

    Full Text Available In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, comprising two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity of solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
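
    The two-stage idea can be sketched in a few lines of Python: pick the subdictionary whose typical sample is closest to the test sample, solve a weighted l1-regularized least-squares problem on that subdictionary, and assign the class with minimum reconstruction error. This is a simplified, generic sketch (cosine similarity, a standard Lasso solver, distance-based weights), not the authors' exact formulation.

        import numpy as np
        from sklearn.linear_model import Lasso

        def pick_subdictionary(y, subdicts):
            # Stage 1: choose the subdictionary whose column mean is most similar to the test sample y
            def cos(a, b):
                return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            scores = [cos(y, D.mean(axis=1)) for D, _ in subdicts]
            return subdicts[int(np.argmax(scores))]

        def dwsrc_classify(y, subdicts, alpha=0.01):
            # Stage 2: weighted sparse coding on the chosen subdictionary, then minimum reconstruction error
            D, labels = pick_subdictionary(y, subdicts)
            w = np.array([np.linalg.norm(y - D[:, j]) for j in range(D.shape[1])])  # locality weights
            Dw = D / (w + 1e-12)                       # fold per-column weights into the dictionary
            z = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(Dw, y).coef_
            x = z / (w + 1e-12)                        # recover coefficients of the original columns
            labels = np.array(labels)
            errors = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c]) for c in set(labels)}
            return min(errors, key=errors.get)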

  17. Large scale anisotropy studies with the Auger Observatory

    International Nuclear Information System (INIS)

    Santos, E.M.; Letessier-Selvon, A.

    2006-01-01

    With the increasing Auger surface array data sample of the highest energy cosmic rays, large scale anisotropy studies at this part of the spectrum become a promising path towards the understanding of the origin of ultra-high energy cosmic particles. We describe the methods underlying the search for distortions in the cosmic rays arrival directions over large angular scales, that is, bigger than those commonly employed in the search for correlations with point-like sources. The widely used tools, known as coverage maps, are described and some of the issues involved in their calculations are presented through Monte Carlo based studies. Coverage computation requires a deep knowledge of the local detection efficiency, including the influence of weather parameters like temperature and pressure. Particular attention is devoted to a new proposed method to extract the coverage, based upon the assumption of time factorization of an extensive air shower detector acceptance. We use Auger monitoring data to test the goodness of such a hypothesis. We finally show the necessity of using more than one coverage to extract any possible anisotropic pattern on the sky, by pointing to some of the biases present in commonly used methods based, for example, on the scrambling of the UTC arrival times for each event. (author)

  18. Connections between the dynamical symmetries in the microscopic shell model

    Energy Technology Data Exchange (ETDEWEB)

    Georgieva, A. I., E-mail: anageorg@issp.bas.bg [Institute of Solid State Physics, Bulgarian Academy of Sciences, Sofia 1784 (Bulgaria); Drumev, K. P. [Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Sofia 1784 (Bulgaria)

    2016-03-25

    The dynamical symmetries of the microscopic shell model appear as the limiting cases of a symmetry adapted Pairing-Plus-Quadrupole Model /PQM/, with a Hamiltonian containing isoscalar and isovector pairing and quadrupole interactions. We establish a correspondence between each of the three types of pairing bases and Elliott’s SU(3) basis, that describes collective rotation of nuclear systems with quadrupole deformation. It is derived from their complementarity to the same LS coupling chain of the shell model number conserving algebra. The probability distribution of the SU(3) basis states within the pairing eigenstates is also obtained through a numerical diagonalization of the PQM Hamiltonian in each limit. We introduce control parameters, which define the phase diagram of the model and determine the role of each term of the Hamiltonian in the correct reproduction of the experimental data for the considered nuclei.

  19. Ab Initio Symmetry-Adapted No-Core Shell Model

    International Nuclear Information System (INIS)

    Draayer, J P; Dytrych, T; Launey, K D

    2011-01-01

    A multi-shell extension of the Elliott SU(3) model, the SU(3) symmetry-adapted version of the no-core shell model (SA-NCSM), is described. The significance of this SA-NCSM emerges from the physical relevance of its SU(3)-coupled basis, which – while it naturally manages center-of-mass spuriosity – provides a microscopic description of nuclei in terms of mixed shape configurations. Since typically configurations of maximum spatial deformation dominate, only a small part of the model space suffices to reproduce the low-energy nuclear dynamics and hence, offers an effective symmetry-guided framework for winnowing of model space. This is based on our recent findings of low-spin and high-deformation dominance in realistic NCSM results and, in turn, holds promise to significantly enhance the reach of ab initio shell models.

  20. Projected Shell Model Description of Positive Parity Band of 130Pr Nucleus

    Science.gov (United States)

    Singh, Suram; Kumar, Amit; Singh, Dhanvir; Sharma, Chetan; Bharti, Arun; Bhat, G. H.; Sheikh, J. A.

    2018-02-01

    Theoretical investigation of positive parity yrast band of odd-odd 130Pr nucleus is performed by applying the projected shell model. The present study is undertaken to investigate and verify the very recently observed side band in 130Pr theoretically in terms of quasi-particle (qp) configuration. From the analysis of band diagram, the yrast as well as side band are found to arise from two-qp configuration πh 11/2 ⊗ νh 11/2. The present calculations are viewed to have qualitatively reproduced the known experimental data for yrast states, transition energies, and B( M1) / B( E2) ratios of this nucleus. The recently observed positive parity side band is also reproduced by the present calculations. The energy states of the side band are predicted up to spin 25+, which is far above the known experimental spin of 18+ and this could serve as a motivational factor for future experiments. In addition, the reduced transition probability B( E2) for interband transitions has also been calculated for the first time in projected shell model, which would serve as an encouragement for other research groups in the future.

  1. Shell-model predictions for Lambda Lambda hypernuclei

    International Nuclear Information System (INIS)

    Gal, A.; Millener, D.

    2011-01-01

    It is shown how the recent shell-model determination of ΛN spin-dependent interaction terms in Λ hypernuclei allows for a reliable deduction of ΛΛ separation energies in ΛΛ hypernuclei across the nuclear p shell. Comparison is made with the available data, highlighting ΛΛ 11Be and ΛΛ 12Be, which have been suggested as possible candidates for the KEK-E373 HIDA event.

  2. Super-hypernuclei in the quark-shell model, 2

    International Nuclear Information System (INIS)

    Terazawa, Hidezumi.

    1989-07-01

    By following the previous paper, where the quark-shell model of nuclei in quantum chromodynamics is briefly reviewed, a short review of the MIT bag model of nuclei is presented for comparison and a simple estimate of the Hλ ('hexalambda') mass is also made for illustration. Furthermore, an even shorter review of the 'nucleon cluster model' of nuclei is presented for further comparison. (J.P.N.)

  3. Large-scale land transformations in Indonesia: The role of ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... enable timely responses to the impacts of large-scale land transformations in Central Kalimantan ... In partnership with UNESCO's Organization for Women in Science for the ... New funding opportunity for gender equality and climate change.

  4. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Resolute large scale mining company contribution to health services of Lusu ... in terms of socio economic, health, education, employment, safe drinking water, ... The data were analyzed using the Statistical Package for the Social Sciences (SPSS).

  5. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Douglas Thain is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  6. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    fuel/energy, climate, and finance has occurred and one of the most ... this wave of large-scale land acquisitions. In fact, esti- ... Environmental Rights Action/Friends of the Earth, Nigeria ... map the differentiated impacts (gender, ethnicity, ...

  7. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  8. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Security can help increase accountability for large-scale land acquisitions in ... to build decent economic livelihoods and participate meaningfully in decisions ... its 2017 call for proposals to establish Cyber Policy Centres in the Global South.

  9. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of the large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  10. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  11. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  12. Determination of the Hamiltonian matrix for IBM4 and comparison of its eigenvalues with the shell model

    International Nuclear Information System (INIS)

    Slyman, S.; Hadad, S.; Souman, H.

    2004-01-01

    The Hamiltonian is determined using the OAI procedure and the mapping of (IBM4) states into the shell model, which is based on the seniority classification scheme. A boson sub-matrix of the shell model Hamiltonian for the (sd)4 configuration is constructed, and is proved to produce the same eigenvalues as the shell model Hamiltonian for the corresponding fermion states. (authors)

  13. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    OpenAIRE

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2010-01-01

    In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the clas...

  14. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
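
    As a generic illustration of the hierarchical-prior idea (not the authors' specific model), the conjugate inverse-Wishart update already gives a closed-form shrinkage of the sample scatter matrix toward a prior target, which tames the variance when the number of variables exceeds the number of samples. The prior target and degrees of freedom below are assumptions.

        import numpy as np

        def bayes_covariance(X, prior_scale=None, prior_df=None):
            # Posterior-mean covariance under an inverse-Wishart prior IW(Psi, nu0):
            #   E[Sigma | X] = (Psi + S) / (nu0 + n - p - 1), with S the centered scatter matrix
            n, p = X.shape
            if prior_scale is None:
                prior_scale = np.eye(p)   # diagonal shrinkage target (assumption)
            if prior_df is None:
                prior_df = p + 2          # weakly informative default (assumption)
            Xc = X - X.mean(axis=0)
            S = Xc.T @ Xc
            return (prior_scale + S) / (prior_df + n - p - 1)

        # Hypothetical "large p, small n" OMICS-like setting: 500 variables, 40 samples
        X = np.random.randn(40, 500)
        print(bayes_covariance(X).shape)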

  15. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  16. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    ERDC/CHL CHETN-I-88, April 2016; approved for public release, distribution unlimited. This note describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to the measurement systems, including pump flow meters, sediment trap weigh tanks, and beach profiling lidar. A detailed discussion of the original LSTF features and capabilities can be... The purpose of these upgrades was to increase

  17. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, large-scale networks have grown in number, complexity and size. The best example of a large-scale network is the Internet, and more recently data centers in cloud environments. Managing such networks involves several tasks, such as traffic monitoring, security and performance optimization, which is a big task for the network administrator. This research studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and the newly Gossip bas...

  18. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size, e.g. in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  19. Angle-correlated cross sections in the framework of the continuum shell model

    International Nuclear Information System (INIS)

    Moerschel, K.P.

    1984-01-01

    In the present thesis, a concept for the treatment of angle-correlated cross sections was developed in the framework of the continuum shell model, by which coincidence experiments on electron scattering from nuclei are described. For this, the existing Darmstadt continuum-shell-model code had to be extended to the calculation of the correlation coefficients, in which the nuclear dynamics enter and which completely determine the angle-correlated cross sections. Including the kinematics, a method for the integration over the scattered electron was presented and used for the comparison with corresponding experiments. As an application, correlation coefficients for the proton channel in 12 C with 1 - and 2 + excitations were studied. By means of these coefficients, cross sections for the reaction 12 C (e,p) 11 B could finally be calculated and compared with the experiment, whereby the developed methods proved suitable for correctly predicting both the shape and the magnitude of the experimental cross sections. (orig.) [de

  20. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Abstract Background The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally-identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris, birds (Gallus gallus, fishes (Danio rerio, invertebrates (Drosophila melanogaster and Caenorhabditis elegans, plants (Arabidopsis thaliana and yeasts (Saccharomyces cerevisiae. We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
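
    The scaling analysis referred to above rests on standard Detrended Fluctuation Analysis; the minimal sketch below estimates a single global scaling exponent from a numeric series (e.g., a 0/1 coding of a DNA sequence). The windowed, local-exponent variant used in the paper is not reproduced here, and the scales chosen are arbitrary.

        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            # DFA: slope of log F(s) versus log s for an integrated, locally detrended profile
            y = np.cumsum(x - np.mean(x))
            fluct = []
            for s in scales:
                f2 = []
                for i in range(len(y) // s):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
                    f2.append(np.mean((seg - trend) ** 2))
                fluct.append(np.sqrt(np.mean(f2)))
            slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
            return slope

        # Uncorrelated noise should give an exponent near 0.5
        print(dfa_exponent(np.random.randn(20000)))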

  1. RELAP5 choked flow model and application to a large scale flow test

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1980-01-01

    The RELAP5 code was used to simulate a large scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium

  2. Coulomb excitation $^{74}$Zn-$^{80}$Zn (N=50): probing the validity of shell-model descriptions around $^{78}$Ni

    CERN Multimedia

    A study of the evolution of the nuclear structure along the zinc isotopic chain close to the doubly magic nucleus $^{78}$Ni is proposed to probe recent shell-model calculations in this area of the nuclear chart. Excitation energies and connecting B(E2) values will be measured through a multiple Coulomb excitation experiment with laser-ionized purified beams of $^{74-80}$Zn from HIE-ISOLDE. The current proposal requests 30 shifts.

  3. Shell model description of 16O(p,γ)17F and 16O(p,p)16O reactions

    International Nuclear Information System (INIS)

    Bennaceur, K.; Michel, N.; Okolowicz, J.; Ploszajczak, M.; Bennaceur, K.; Nowacki, F.; Okolowicz, J.

    2000-01-01

    We present shell model calculations of both the structure of 17 F and the reactions 16 O(p,γ) 17 F, 16 O(p,p) 16 O. We use the ZBM interaction, which provides a fair description of the properties of 16 O and neighbouring nuclei and, in particular, takes into account the complicated correlations in coexisting low-lying states of 16 O. (authors)

  4. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and the exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network, including multi-domain PKI infrastructures.

  5. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land...... commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  6. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter recording to a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.

  7. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of power system balance and thus deviations in the power system...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  8. The role of large scale motions on passive scalar transport

    Science.gov (United States)

    Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano

    2014-11-01

    We study direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large-scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large-scale features of the turbulent channel flow is obtained using two-point correlations of velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of large-scale features of turbulence and the temperature field.
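
    As a minimal, self-contained sketch of the decomposition technique named above (snapshot POD via the singular value decomposition), the following uses synthetic data in place of the DNS velocity or temperature snapshots; the array sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of snapshot POD via the SVD. The snapshot matrix here is
# synthetic; in the study its columns would be velocity or temperature
# fluctuation fields sampled from the DNS.
rng = np.random.default_rng(2)
n_points, n_snapshots = 4096, 200                 # illustrative sizes
snapshots = rng.standard_normal((n_points, n_snapshots))

# Columns of U are the spatial POD modes; s**2 ranks their energy content.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by the 5 leading modes:", energy[:5].sum())
```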

  9. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  10. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  11. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such a reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the exceedance probability with which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  12. Holographic shell model: Stack data structure inside black holes?

    Science.gov (United States)

    Davidson, Aharon

    2014-03-01

    Rather than tiling the black hole horizon by Planck area patches, we suggest that bits of information inhabit, universally and holographically, the entire black core interior, a bit per a light sheet unit interval of order Planck area difference. The number of distinguishable (tagged by a binary code) configurations, counted within the context of a discrete holographic shell model, is given by the Catalan series. The area entropy formula is recovered, including Cardy's universal logarithmic correction, and the equipartition of mass per degree of freedom is proven. The black hole information storage resembles, in the count procedure, the so-called stack data structure.

  13. Morphing the Shell Model into an Effective Theory

    International Nuclear Information System (INIS)

    Haxton, W. C.; Song, C.-L.

    2000-01-01

    We describe a strategy for attacking the canonical nuclear structure problem--bound-state properties of a system of point nucleons interacting via a two-body potential--which involves an expansion in the number of particles scattering at high momenta, but is otherwise exact. The required self-consistent solutions of the Bloch-Horowitz equation for effective interactions and operators are obtained by an efficient Green's function method based on the Lanczos algorithm. We carry out this program for the simplest nuclei, d and 3 He , in order to explore the consequences of reformulating the shell model as a controlled effective theory. (c) 2000 The American Physical Society

  14. Report of the LASCAR forum: Large scale reprocessing plant safeguards

    International Nuclear Information System (INIS)

    1992-01-01

    This report has been prepared to provide information on the studies which were carried out from 1988 to 1992 under the auspices of the multinational forum known as Large Scale Reprocessing Plant Safeguards (LASCAR) on safeguards for four large scale reprocessing plants operated or planned to be operated in the 1990s. The report summarizes all of the essential results of these studies. The participants in LASCAR were from France, Germany, Japan, the United Kingdom, the United States of America, the Commission of the European Communities - Euratom, and the International Atomic Energy Agency

  15. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  16. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  17. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available The paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analysed; load simulation of the flange fatigue working condition is performed with the Bladed software; the flange fatigue load spectrum is acquired with the rain-flow counting method; finally, fatigue analysis of the top flange is carried out with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The analysis provides a new approach to flange fatigue analysis of large-scale wind turbine generators and possesses practical engineering value.
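
    The damage-accumulation step mentioned above can be illustrated with a minimal sketch of Palmgren-Miner summation over a rain-flow counted load spectrum; the S-N curve parameters and the spectrum below are assumed, illustrative values, not data from the paper.

```python
# Minimal sketch of Palmgren-Miner linear damage accumulation applied to a
# rain-flow counted load spectrum. The S-N curve N = C / S**m and the example
# spectrum are illustrative assumptions, not values from the paper.

def miner_damage(spectrum, C=2.0e12, m=3.0):
    """spectrum: iterable of (stress_range_MPa, cycle_count) pairs."""
    damage = 0.0
    for stress_range, n_cycles in spectrum:
        n_allowed = C / stress_range**m   # cycles to failure at this stress range
        damage += n_cycles / n_allowed    # Miner's linear damage fraction
    return damage

# Example rain-flow spectrum: (stress range in MPa, number of cycles)
spectrum = [(40.0, 2.0e6), (80.0, 2.0e5), (160.0, 1.0e4)]
print(f"cumulative damage D = {miner_damage(spectrum):.3f}")  # D >= 1 means predicted failure
```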

  18. Generation of large-scale vortices in compressible helical turbulence

    International Nuclear Information System (INIS)

    Chkhetiani, O.G.; Gvaramadze, V.V.

    1989-01-01

    We consider the generation of large-scale vortices in a compressible self-gravitating turbulent medium. A closed equation describing the evolution of the large-scale vortices in helical turbulence with finite correlation time is obtained. This equation has a form similar to the hydromagnetic dynamo equation, which allows us to call the vortex generation effect the vortex dynamo. It is possible that essentially the same mechanism is responsible both for the amplification and maintenance of density waves and magnetic fields in gaseous disks of spiral galaxies. (author). 29 refs

  19. Computational challenges of large-scale, long-time, first-principles molecular dynamics

    International Nuclear Information System (INIS)

    Kent, P R C

    2008-01-01

    Plane wave density functional calculations have traditionally been able to use the largest available supercomputing resources. We analyze the scalability of modern projector-augmented wave implementations to identify the challenges in performing molecular dynamics calculations of large systems containing many thousands of electrons. Benchmark calculations on the Cray XT4 demonstrate that global linear-algebra operations are the primary reason for limited parallel scalability. Plane-wave related operations can be made sufficiently scalable. Improving parallel linear-algebra performance is an essential step to reaching longer timescales in future large-scale molecular dynamics calculations

  20. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of various simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced the use of GPGPU (General Purpose computing on Graphics Processing Units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as the preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of the fastest Japanese supercomputers, operated by the Tokyo Institute of Technology. First we performed a strong scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times by using 4 and 16 GPUs. Next, we examined a weak scaling test where the model sizes (number of grids) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to the simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
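
    The strong-scaling figures quoted above translate directly into parallel efficiencies; the short sketch below assumes the quoted speed-ups are measured relative to a single-GPU baseline, which the abstract does not state explicitly.

```python
# Minimal sketch: parallel efficiency from the strong-scaling speed-ups quoted
# above (3.2x on 4 GPUs, 7.3x on 16 GPUs). A single-GPU baseline is assumed.

def efficiency(speedup, n_gpus):
    """Parallel efficiency = measured speed-up / ideal speed-up."""
    return speedup / n_gpus

for n, s in [(4, 3.2), (16, 7.3)]:
    print(f"{n:>2} GPUs: speed-up {s:.1f}x, efficiency {efficiency(s, n):.0%}")
```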

  1. Large-scale Lurgi plant would be uneconomic: study group

    Energy Technology Data Exchange (ETDEWEB)

    1964-03-21

    The Gas Council and the National Coal Board agreed that the building of a large-scale Lurgi plant on the basis of the study is not at present acceptable on economic grounds. The committee considered that new processes based on naphtha offered more economic sources of base and peak load production. Tables listing data provided in the contractors' design studies and a summary of the contractors' process designs are included.

  2. Origin of large-scale cell structure in the universe

    International Nuclear Information System (INIS)

    Zel'dovich, Y.B.

    1982-01-01

    A qualitative explanation is offered for the characteristic global structure of the universe, wherein ''black'' regions devoid of galaxies are surrounded on all sides by closed, comparatively thin, ''bright'' layers populated by galaxies. The interpretation rests on some very general arguments regarding the growth of large-scale perturbations in a cold gas

  3. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Roč. 44, č. 3 (2015), s. 247-253 ISSN 1392-124X Institutional support: RVO:67985556 Keywords : Combinatorial linear matrix inequalities * large-scale system * decentralized control Subject RIV: BC - Control Systems Theory Impact factor: 0.633, year: 2015

  4. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physicochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolution. The increasing demand

  5. Worldwide large-scale fluctuations of sardine and anchovy ...

    African Journals Online (AJOL)

    Worldwide large-scale fluctuations of sardine and anchovy populations. ... African Journal of Marine Science. Journal Home · ABOUT THIS JOURNAL · Advanced ... Fullscreen Fullscreen Off. http://dx.doi.org/10.2989/AJMS.2008.30.1.13.463.

  6. Worldwide large-scale fluctuations of sardine and anchovy ...

    African Journals Online (AJOL)

    Worldwide large-scale fluctuations of sardine and anchovy populations. ... African Journal of Marine Science. Journal Home · ABOUT THIS JOURNAL · Advanced ... http://dx.doi.org/10.2989/AJMS.2008.30.1.13.463 · AJOL African Journals ...

  7. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  8. Success Factors of Large Scale ERP Implementation in Thailand

    OpenAIRE

    Rotchanakitumnuai; Siriluck

    2010-01-01

    The objectives of the study are to examine the determinants of large-scale ERP implementation success. The results indicate that large-scale ERP implementation success consists of eight factors: project management competence, knowledge sharing, ERP system quality, understanding, user involvement, business process re-engineering, top management support, and organization readiness.

  9. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...

  10. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horvá th, Zoltá n

    2010-01-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one

  11. Breakdown of large-scale circulation in turbulent rotating convection

    NARCIS (Netherlands)

    Kunnen, R.P.J.; Clercx, H.J.H.; Geurts, Bernardus J.

    2008-01-01

    Turbulent rotating convection in a cylinder is investigated both numerically and experimentally at Rayleigh number Ra = $10^9$ and Prandtl number $\\sigma$ = 6.4. In this Letter we discuss two topics: the breakdown under rotation of the domain-filling large-scale circulation (LSC) typical for

  12. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of the tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
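
    The key computational idea behind such design-matrix-free, array-based algorithms can be sketched as follows: the tensor product design matrix is never formed, because the marginal matrices can act on the coefficient array directly. This is a generic illustration with assumed sizes, not the algorithm of the paper.

```python
import numpy as np

# Minimal sketch of a matrix-free product with a tensor-product design matrix
# X = kron(X2, X1), as used in array models. The full Kronecker product is
# never stored; the marginal matrices act on the coefficient array directly.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((50, 8))    # marginal design matrix, dimension 1
X2 = rng.standard_normal((40, 6))    # marginal design matrix, dimension 2
B = rng.standard_normal((8, 6))      # coefficient array

eta_array = X1 @ B @ X2.T            # array form of the linear predictor

# Check against the explicit (memory-hungry) Kronecker formulation.
eta_vec = np.kron(X2, X1) @ B.reshape(-1, order="F")
assert np.allclose(eta_array.reshape(-1, order="F"), eta_vec)
```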

  13. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  14. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-01-01

    structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination

  15. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)

    tribpo

    Figure 2 (caption): Zonal and meridional components of the time-dependent residual velocity at a few selected depths, as marked above each panel, are plotted as contours of constant velocity in the longitude-latitude plane. The left panels show the zonal component, ...

  16. Facile Large-Scale Synthesis of 5- and 6-Carboxyfluoresceins

    DEFF Research Database (Denmark)

    Hammershøj, Peter; Ek, Pramod Kumar; Harris, Pernille

    2015-01-01

    A series of fluorescein dyes have been prepared from a common precursor through a very simple synthetic procedure, giving access to important precursors for fluorescent probes. The method has proven to give efficient access to regioisomerically pure 5- and 6-carboxyfluoresceins on a large scale, in good...

  17. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  18. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  19. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  20. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  1. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  2. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high-resolution ...... the present state of this technology, it appears well suited to large-scale maritime archaeological mapping....

  3. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the

  4. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  5. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...

  6. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Fractals and the Large-Scale Structure in the Universe - Is the Cosmological Principle Valid? A K Mittal and T R Seshadri. General Article, Resonance – Journal of Science Education, Volume 7, Issue 4, April 2002, pp 39-47.

  7. LARGE-SCALE COMMERCIAL INVESTMENTS IN LAND: SEEKING ...

    African Journals Online (AJOL)

    extent of large-scale investment in land or to assess its impact on the people in recipient countries. .... favorable lease terms, apparently based on a belief that this is necessary to .... Harm to the rights of local occupiers of land can result from a dearth. 24. ..... applies to a self-identified group based on the group's traditions.

  8. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    Science.gov (United States)

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  9. Reconsidering Replication: New Perspectives on Large-Scale School Improvement

    Science.gov (United States)

    Peurach, Donald J.; Glazer, Joshua L.

    2012-01-01

    The purpose of this analysis is to reconsider organizational replication as a strategy for large-scale school improvement: a strategy that features a "hub" organization collaborating with "outlet" schools to enact school-wide designs for improvement. To do so, we synthesize a leading line of research on commercial replication to construct a…

  10. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  11. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory, inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large scale and smaller scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  12. Technologies and challenges in large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin Røssel

    2013-01-01

    become the main technique for discovery and characterization of phosphoproteins in a nonhypothesis driven fashion. In this review, we describe methods for state-of-the-art MS-based analysis of protein phosphorylation as well as the strategies employed in large-scale phosphoproteomic experiments...... with focus on the various challenges and limitations this field currently faces....

  13. Solving Large Scale Crew Scheduling Problems in Practice

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin); L. Albino; T.A.B. Dollevoet (Twan); D. Huisman (Dennis); J. Roussado; R.L. Saldanha

    2010-01-01

    This paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of guards. Some labor rules restrict the duties of a certain crew base

  14. The large scale microwave background anisotropy in decaying particle cosmology

    International Nuclear Information System (INIS)

    Panek, M.

    1987-06-01

    We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay z_d < 3-5. 12 refs., 2 figs

  15. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation
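
    The coordination mechanism described above can be illustrated with a minimal price-update (subgradient) loop; the quadratic local costs, unit limits and demand below are assumed for illustration and do not reproduce the paper's formulation.

```python
import numpy as np

# Minimal sketch of dual decomposition for power balancing (illustrative, not
# the paper's formulation). Each unit solves a small local problem given a
# price, and a coordinator updates the price until production meets demand.
a = np.array([1.0, 2.0, 4.0])     # local quadratic cost curvatures (assumed)
p_min, p_max = 0.0, 10.0          # unit power limits (assumed)
demand = 12.0                     # power to be balanced (assumed)
lam, step = 0.0, 0.5              # price and subgradient step size

for _ in range(200):
    p = np.clip(lam / a, p_min, p_max)   # each unit's local best response
    imbalance = demand - p.sum()         # coordinator measures the mismatch
    lam += step * imbalance              # price update (dual ascent)

print("price:", round(lam, 3), "dispatch:", np.round(p, 3))
```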

  16. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss

    2017-01-01

    and more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We...

  17. Assessment of climate change impacts on rainfall using large scale ...

    Indian Academy of Sciences (India)

    Many of the applied techniques in water resources management can be directly or indirectly influenced by ... is based on large scale climate signals data around the world. In order ... predictand relationships are often very complex. .... constraints to solve the optimization problem. ..... social, and environmental sustainability.

  18. Factors Influencing Uptake of a Large Scale Curriculum Innovation.

    Science.gov (United States)

    Adey, Philip S.

    Educational research has all too often failed to be implemented on a large-scale basis. This paper describes the multiplier effect of a professional development program for teachers and for trainers in the United Kingdom, and how that program was developed, monitored, and evaluated. Cognitive Acceleration through Science Education (CASE) is a…

  19. ability in Large Scale Land Acquisitions in Kenya

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    Kenya's national planning strategy, Vision 2030. Agriculture, natural resource exploitation, and infrastruc- ... sitions due to high levels of poverty and unclear or insecure land tenure rights in Kenya. Inadequate social ... lease to a private company over the expansive Yala Swamp to undertake large-scale irrigation farming.

  20. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshops objectives were...

  1. Large-scale silviculture experiments of western Oregon and Washington.

    Science.gov (United States)

    Nathan J. Poage; Paul D. Anderson

    2007-01-01

    We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...

  2. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biennial anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  3. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or an O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF_4).

  4. Large scale laboratory diffusion experiments in clay rocks

    International Nuclear Information System (INIS)

    Garcia-Gutierrez, M.; Missana, T.; Mingarro, M.; Martin, P.L.; Cormenzana, J.L.

    2005-01-01

    Full text of publication follows: Clay formations are potential host rocks for high-level radioactive waste repositories. In clay materials, radionuclide diffusion is the main transport mechanism. Thus, the understanding of the diffusion processes and the determination of diffusion parameters under conditions as similar as possible to the real ones are critical for the performance assessment of a deep geological repository. Diffusion coefficients are mainly measured in the laboratory using small samples, after a preparation to fit into the diffusion cell. In addition, a few field tests are usually performed to confirm laboratory results and analyse scale effects. In field or 'in situ' tests the experimental set-up usually includes the injection of a tracer diluted in reconstituted formation water into a packed-off section of a borehole. Both experimental systems may produce artefacts in the determination of diffusion coefficients. In the laboratory, the preparation of the sample can generate structural changes, mainly if the consolidated clay has a layered fabric, and in field tests the introduction of water could modify the properties of the saturated clay in the first few centimeters, just where radionuclide diffusion is expected to take place. In this work, a large scale laboratory diffusion experiment is proposed, using a large cylindrical sample of consolidated clay that can overcome the above mentioned problems. The tracers used were mixed with clay obtained by drilling a central hole, re-compacted into the hole at approximately the same density as the consolidated block and finally sealed. Neither additional treatment of the sample nor external monitoring is needed. After the experimental time needed for diffusion to take place (estimated by scoping calculations) the block was sampled to obtain a 3D distribution of the tracer concentration and the results were modelled. An additional advantage of the proposed configuration is that it could be used in 'in situ
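
    A scoping calculation of the kind mentioned above can be as simple as estimating the characteristic penetration distance of the tracer over the planned experiment duration; the apparent diffusion coefficient and time below are assumed, illustrative values.

```python
import numpy as np

# Minimal sketch of a diffusion scoping calculation: characteristic tracer
# penetration distance x ~ sqrt(4 * De * t). The apparent diffusion
# coefficient and the duration are illustrative assumptions, not measured data.
De = 1.0e-11                   # apparent diffusion coefficient, m^2/s (assumed)
t = 2.0 * 365 * 24 * 3600      # two years of diffusion time, s (assumed)

x = np.sqrt(4.0 * De * t)
print(f"characteristic penetration distance ~ {100 * x:.1f} cm")
```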

  5. A review of parallel computing for large-scale remote sensing image mosaicking

    OpenAIRE

    Chen, Lajiao; Ma, Yan; Liu, Peng; Wei, Jingbo; Jie, Wei; He, Jijun

    2015-01-01

    Interest in image mosaicking has been spurred by a wide variety of research and management needs. However, for large-scale applications, remote sensing image mosaicking usually requires significant computational capabilities. Several studies have attempted to apply parallel computing to improve image mosaicking algorithms and to speed up calculation process. The state of the art of this field has not yet been summarized, which is, however, essential for a better understanding and for further ...

  6. Large scale experiments with a 5 MW sodium/air heat exchanger for decay heat removal

    International Nuclear Information System (INIS)

    Stehle, H.; Damm, G.; Jansing, W.

    1994-01-01

    Sodium experiments in the large-scale test facility ILONA were performed to demonstrate proper operation of a passive decay heat removal system for LMFBRs based on pure natural convection flow. Temperature and flow distributions on the sodium and the air side of a 5 MW sodium/air heat exchanger in a natural draught stack were measured during steady-state and transient operation and found to be in good agreement with calculations using the two-dimensional computer code ATTICA/DIANA. (orig.)

  7. Analysis of the applicability of fracture mechanics on the basis of large scale specimen testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Polachova, H.; Sulc, J.; Anikovskij, V.; Dragunov, Y.; Rivkin, E.; Filatov, V.

    1988-01-01

    This work deals with the verification of fracture mechanics calculations for WWER reactor pressure vessels by large-scale model testing performed on the large testing machine ZZ 8000 (maximum load of 80 MN) at the Skoda Concern. The results of testing a large set of large-scale test specimens with surface crack-type defects are presented. The nominal thickness of the specimens was 150 mm, with defect depths between 15 and 100 mm and testing temperatures between -30 and +80 degC (i.e., in the temperature interval of T ko ±50 degC). Specimens with a scale of 1:8 and 1:12 were also tested, as well as standard (CT and TPB) specimens. Comparison of the test results and calculations suggests some conservatism of the calculations (especially for small defects) based on Linear Elastic Fracture Mechanics, according to the Nuclear Reactor Pressure Vessel Codes which use the fracture mechanics values from J IC testing. On the basis of the large-scale tests the ''Defect Analysis Diagram'' was constructed and recommended for the brittle fracture assessment of reactor pressure vessels. (author). 7 figs., 2 tabs., 3 refs

  8. Large-scale field testing on flexible shallow landslide barriers

    Science.gov (United States)

    Bugnion, Louis; Volkwein, Axel; Wendeler, Corinna; Roth, Andrea

    2010-05-01

    Open shallow landslides occur regularly in a wide range of natural terrains. Generally, they are difficult to predict and result in damage to property and disruption of transportation systems. In order to improve the knowledge about the physical process itself and to develop new protection measures, large-scale field experiments were conducted in Veltheim, Switzerland. Material was released down a 30° inclined test slope into a flexible barrier. The flow as well as the impact into the barrier was monitored using various measurement techniques. Laser devices recording flow heights, a special force plate measuring normal and shear basal forces, as well as load cells for impact pressures were installed along the test slope. In addition, load cells were built into the support and retaining cables of the barrier to provide data for detailed back-calculation of the load distribution during impact. For the last test series an additional guiding wall in flow direction on both sides of the barrier was installed to achieve higher impact pressures in the middle of the barrier. With these guiding walls the flow is not able to spread out before hitting the barrier. A specially constructed release mechanism simulating the sudden failure of the slope was designed such that about 50 m3 of mixed earth and gravel saturated with water can be released in an instant. Analysis of cable forces combined with impact pressures and velocity measurements during a test series now allows us to develop a load model for the barrier design. First numerical simulations with the software tool FARO, originally developed for rockfall barriers and afterwards calibrated for debris flow impacts, have already led to structural improvements in barrier design. Decisive for the barrier design is the first dynamic impact pressure, which depends on the flow velocity, and afterwards the hydrostatic pressure of the complete retained material behind the barrier. Therefore volume estimation of open shallow landslides by assessing
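
    The two load contributions named at the end of the abstract can be sketched with a generic first estimate: a dynamic impact pressure scaling with rho*v^2 and the hydrostatic pressure of the retained material. The coefficient and all numbers below are assumptions for illustration, not the calibrated FARO load model.

```python
# Minimal sketch of the two load contributions named above (illustrative
# numbers and a generic empirical form, not the calibrated FARO load model).
rho = 1900.0      # bulk density of saturated debris, kg/m^3 (assumed)
v = 6.0           # front velocity at impact, m/s (assumed)
h_fill = 2.5      # retained fill height behind the barrier, m (assumed)
g = 9.81          # gravitational acceleration, m/s^2
c_d = 1.0         # empirical dynamic pressure coefficient (assumed)

p_dynamic = c_d * rho * v**2     # first dynamic impact pressure, Pa
p_static = rho * g * h_fill      # hydrostatic pressure at the barrier base, Pa
print(f"dynamic ~ {p_dynamic/1e3:.1f} kPa, static ~ {p_static/1e3:.1f} kPa")
```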

  9. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Nusser, Adi [Physics Department and the Asher Space Science Institute-Technion, Haifa 32000 (Israel); Branchini, Enzo [Department of Physics, Universita Roma Tre, Via della Vasca Navale 84, 00146 Rome (Italy); Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu [Departments of Astronomy and Physics, University of California, Berkeley, CA 94720 (United States)

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
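
    The basic conversion underlying 2D transverse peculiar velocities is the standard relation v_t = 4.74 mu d (with mu in arcsec/yr and d in pc); the sketch below evaluates it in units convenient for galaxies, with illustrative numbers that are not results from the paper.

```python
# Minimal sketch of the conversion from proper motion to transverse velocity:
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc], evaluated here in units
# convenient for galaxies (micro-arcsec/yr and Mpc). Numbers are illustrative.

K = 4.74  # km/s per (arcsec/yr * pc)

def transverse_velocity(mu_uas_per_yr, distance_Mpc):
    """Transverse velocity in km/s from proper motion and distance."""
    # 1 uas/yr * 1 Mpc = 1e-6 arcsec/yr * 1e6 pc, so the unit factors cancel.
    return K * mu_uas_per_yr * distance_Mpc

# A galaxy at 20 Mpc with a transverse proper motion of 3.2 uas/yr:
print(f"{transverse_velocity(3.2, 20.0):.0f} km/s")   # ~303 km/s
```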

  10. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    Full Text Available This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  11. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    Science.gov (United States)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize 23000 SN Ia events over the redshift interval 0.4 ... Destiny will be used in its third year as a high resolution, wide-field imager to conduct a weak lensing survey covering >1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  12. Volume measurement study for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Watanabe, Yuichi; Tsujino, Takeshi

    1999-01-01

    The Large Scale Tank Calibration (LASTAC) facility, including an experimental tank which has the same volume and structure as the input accountancy tank of the Rokkasho Reprocessing Plant (RRP), was constructed at the Nuclear Material Control Center of Japan. Demonstration experiments have been carried out to evaluate the precision of solution volume measurement and to establish a procedure for highly accurate pressure measurement for a large scale tank with a dip-tube bubbler probe system to be applied to the input accountancy tank of RRP. The solution volume in a tank is determined by substituting the measured solution level into the calibration function obtained in advance, which expresses the relation between the solution level and the volume in the tank. Therefore, precise solution volume measurement needs a precise calibration function that is determined carefully. The LASTAC calibration experiments using pure water showed good reproducibility. (J.P.N.)
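
    The calibration-function step described above can be sketched by fitting level-volume calibration points and then converting a measured level into a volume; the numbers below are illustrative and are not LASTAC data.

```python
import numpy as np

# Minimal sketch of the calibration-function idea (illustrative numbers, not
# LASTAC data): fit level-volume calibration points, then convert a measured
# solution level (derived from the dip-tube bubbler pressure) into a volume.
levels_mm = np.array([100.0, 300.0, 500.0, 700.0, 900.0])      # calibration levels
volumes_L = np.array([410.0, 1235.0, 2065.0, 2890.0, 3720.0])  # metered volumes

# A low-order polynomial is often adequate for a near-cylindrical tank.
calib = np.poly1d(np.polyfit(levels_mm, volumes_L, deg=2))

measured_level = 642.0   # mm, from the pressure measurement (assumed)
print(f"estimated volume: {calib(measured_level):.1f} L")
```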

  13. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
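
    The low-rank kernel approximation that prototype-based methods rely on can be sketched with a generic Nystrom-type construction using randomly chosen prototypes; this illustrates the general technique with assumed sizes and is not the PVM algorithm itself.

```python
import numpy as np

# Minimal sketch of a prototype-based (Nystrom-type) low-rank kernel
# approximation, the kind of construction prototype methods build on.
# Generic illustration with synthetic data, not the PVM algorithm itself.

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5))            # data points
proto = X[rng.choice(len(X), 50, False)]      # 50 prototype vectors

K_np = rbf(X, proto)                          # n x m cross-kernel
K_pp = rbf(proto, proto)                      # m x m prototype kernel

# Low-rank surrogate K ~ K_np K_pp^{-1} K_np^T, never forming the full n x n K.
W = np.linalg.solve(K_pp + 1e-8 * np.eye(len(proto)), K_np.T)
row0_approx = K_np[0] @ W                     # approximate first row of K
row0_exact = rbf(X[:1], X)[0]
print("max abs error on one row:", np.abs(row0_approx - row0_exact).max())
```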

  14. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
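
    The modulating influence described above is commonly quantified by correlating the large-scale signal with the filtered envelope of the small-scale signal; the sketch below applies this diagnostic to a synthetic signal with an assumed cutoff frequency, purely as an illustration of the procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Minimal sketch of an amplitude-modulation diagnostic: correlate the
# large-scale part of a synthetic velocity signal with the low-passed envelope
# of its small-scale part. Cutoff and signal are illustrative assumptions.
fs = 1000.0
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(3)
large = np.sin(2 * np.pi * 0.5 * t)                      # slow, large-scale part
small = (1 + 0.5 * large) * rng.standard_normal(t.size)  # modulated small scales
u = large + small

b, a = butter(4, 2.0 / (fs / 2))                 # low-pass at 2 Hz (assumed cutoff)
u_L = filtfilt(b, a, u)                          # large-scale component
u_S = u - u_L                                    # small-scale component
envelope = filtfilt(b, a, np.abs(hilbert(u_S)))  # low-passed small-scale envelope

R = np.corrcoef(u_L, envelope)[0, 1]             # amplitude-modulation coefficient
print(f"amplitude-modulation correlation R = {R:.2f}")
```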

  15. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  16. Cosmic ray acceleration by large scale galactic shocks

    International Nuclear Information System (INIS)

    Cesarsky, C.J.; Lagage, P.O.

    1987-01-01

    The mechanism of diffusive shock acceleration may account for the existence of galactic cosmic rays; detailed applications to stellar wind shocks and especially to supernova shocks have been developed. Existing models can usually deal with the energetics or the spectral slope, but the observed energy range of cosmic rays is not explained. Therefore it seems worthwhile to examine the effect that large-scale, long-lived galactic shocks may have on galactic cosmic rays, in the frame of the diffusive shock acceleration mechanism. Large-scale fast shocks can only be expected to exist in the galactic halo. We consider three situations where they may arise: expansion of a supernova shock in the halo, a galactic wind, and galactic infall; and we discuss the possible existence of these shocks and their role in accelerating cosmic rays

  17. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application ... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  18. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space
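
    For orientation only, a schematic of the relations being discussed, written in standard notation rather than the authors'; the precise time-dependent factors and the definition of the Lagrangian-space observables are assumptions here.

```latex
% Schematic Eulerian consistency relation (unequal times, linear growth factors D):
\[
\lim_{q \to 0}
\langle \delta_{\mathbf{q}}(\tau)\,\delta_{\mathbf{k}_1}(\tau_1)\cdots\delta_{\mathbf{k}_N}(\tau_N)\rangle'
\;\simeq\;
-\,P(q,\tau)\sum_{i=1}^{N}\frac{D(\tau_i)}{D(\tau)}\,
\frac{\mathbf{k}_i\cdot\mathbf{q}}{q^{2}}\,
\langle \delta_{\mathbf{k}_1}(\tau_1)\cdots\delta_{\mathbf{k}_N}(\tau_N)\rangle' ,
\]
% whereas the Lagrangian-space statement is that the suitably normalized squeezed
% correlation of the corresponding Lagrangian observables vanishes:
\[
\lim_{q \to 0}
\frac{\langle \delta_{\mathbf{q}}(\tau)\,
\mathcal{O}_{\mathbf{k}_1}(\tau_1)\cdots\mathcal{O}_{\mathbf{k}_N}(\tau_N)\rangle'_{\mathrm{Lagr}}}{P(q,\tau)}
\;=\; 0 .
\]
```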

  19. The Large-scale Effect of Environment on Galactic Conformity

    Science.gov (United States)

    Sun, Shuangpeng; Guo, Qi; Wang, Lan; Wang, Jie; Gao, Liang; Lacey, Cedric G.; Pan, Jun

    2018-04-01

    We use a volume-limited galaxy sample from the SDSS Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ˜ 4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In under-dense regions most neighbour galaxies tend to be active, while in over-dense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.

  20. Electron drift in a large scale solid xenon

    International Nuclear Information System (INIS)

    Yoo, J.; Jaskierny, W.F.

    2015-01-01

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phase of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. It is thus demonstrated that the electron drift speed in solid-phase xenon is about a factor of two faster than that in the liquid phase in a large scale solid xenon volume
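
    A quick arithmetic check of the quoted numbers (an illustrative sketch, not part of the measurement):

```python
# Ratio of drift speeds and the implied drift times over the 8.0 cm uniform-field region.
v_liquid = 0.193    # cm/us at 163 K, 900 V/cm
v_solid = 0.397     # cm/us at 157 K, 900 V/cm
drift_length = 8.0  # cm

print(v_solid / v_liquid)        # ~2.06, i.e. roughly a factor of two
print(drift_length / v_liquid)   # ~41.5 us drift time in the liquid phase
print(drift_length / v_solid)    # ~20.2 us drift time in the solid phase
```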

  1. Active power reserves evaluation in large scale PVPPs

    DEFF Research Database (Denmark)

    Crăciun, Bogdan-Ionut; Kerekes, Tamas; Sera, Dezso

    2013-01-01

    The present trend of investing in renewable ways of producing electricity, to the detriment of conventional fossil fuel-based plants, will lead to a point where these plants have to provide ancillary services and contribute to overall grid stability. Photovoltaic (PV) power has the fastest growth among all renewable energies and has managed to reach high penetration levels, creating instabilities which at the moment are corrected by conventional generation. This paradigm will change in future scenarios where most of the power is supplied by large scale renewable plants and parts of the ancillary services have to be shared by the renewable plants. The main focus of the proposed paper is to technically and economically analyze the possibility of having active power reserves in large scale PV power plants (PVPPs) without any auxiliary storage equipment. The provided reserves should...

  2. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
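
    For illustration, a minimal 1D finite-volume sketch in the same Godunov-type spirit, using a Rusanov flux on an everywhere-wet domain; the paper's model is 2D, unstructured and includes wet/dry front handling, none of which is reproduced here.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def physical_flux(h, hu):
    """Shallow-water flux F(U) = [hu, hu^2/h + g h^2/2]."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov_flux(UL, UR):
    """Simple Godunov-type numerical flux at a cell interface."""
    hL, huL = UL
    hR, huR = UR
    smax = max(abs(huL / hL) + np.sqrt(g * hL), abs(huR / hR) + np.sqrt(g * hR))
    return 0.5 * (physical_flux(hL, huL) + physical_flux(hR, huR)) \
         - 0.5 * smax * (np.array(UR) - np.array(UL))

def step(h, hu, dx, dt):
    """One explicit finite-volume update on a 1D wet domain (boundary cells held fixed)."""
    U = np.stack([h, hu], axis=1)
    fluxes = np.array([rusanov_flux(U[i], U[i + 1]) for i in range(len(h) - 1)])
    h_new, hu_new = h.copy(), hu.copy()
    h_new[1:-1] -= dt / dx * (fluxes[1:, 0] - fluxes[:-1, 0])
    hu_new[1:-1] -= dt / dx * (fluxes[1:, 1] - fluxes[:-1, 1])
    return h_new, hu_new

# Dam-break style initial condition on a wet bed.
N, dx = 200, 1.0
h = np.where(np.arange(N) < N // 2, 2.0, 1.0)
hu = np.zeros(N)
for _ in range(100):
    h, hu = step(h, hu, dx, dt=0.05)
```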

  3. Continuum shell-model study of 16O and 40Ca

    International Nuclear Information System (INIS)

    Heil, V.; Stock, W.

    1976-06-01

    Continuum shell-model calculations of the E1 and E2 strengths in 16 O and 40 Ca are presented. A consistent microscopic description of both the giant resonances and the isospin-forbidden E1 transitions between bound states can be achieved through 1) a careful choice of the single-particle potential, 2) the use of a finite-range residual interaction (including the Coulomb particle-hole force), and 3) the removal of spurious states. The results obtained within the separation expansion approximation of Birkholz are in reasonable agreement with measured photonucleon angular distributions and form factors for electroexcitation. The influence of the continuum on the isospin mixing in bound states is found to be very strong. (orig.) [de

  4. Identification of shell-model states in $^{135}$Sb populated via $\\beta^{-}$ decay of $^{135}$Sn

    CERN Document Server

    Shergur, J; Brown, B A; Cederkäll, J; Dillmann, I; Fraile-Prieto, L M; Hoff, P; Joinet, A; Köster, U; Kratz, K L; Pfeiffer, B; Walters, W B; Wöhr, A

    2005-01-01

    The $\\beta$- decay of $^{135}$Sn was studied at CERN/ISOLDE using a resonance ionization laser ion source and mass separator to achieve elemental and mass selectivity, respectively. $\\gamma$-ray singles and $\\gamma\\gamma$ coincidence spectra were collected as a function of time with the laser on and with the laser off. These data were used to establish the positions of new levels in $^{135}$Sb, including new low-spin states at 440 and 798 keV, which are given tentative spin and parity assignments of 3/2$^{+}$ and 9/2$^{+}$, respectively. The observed levels of $^{135}$Sb are compared with shell-model calculations using different single-particle energies and different interactions.

  5. Phases and phase transitions in the algebraic microscopic shell model

    Directory of Open Access Journals (Sweden)

    Georgieva A. I.

    2016-01-01

    Full Text Available We explore the dynamical symmetries of the shell-model number-conserving algebra, which define three types of pairing and quadrupole phases, with the aim of obtaining the prevailing phase or phase transition for real nuclear systems in a single shell. This is achieved by establishing a correspondence between each of the pairing bases and Elliott's SU(3) basis that describes collective rotation of nuclear systems. This allows for a complete classification of the basis states of different numbers of particles in all the limiting cases. The probability distribution of the SU(3) basis states within their corresponding pairing states is also obtained. The relative strengths of the dynamically symmetric quadrupole-quadrupole interaction with respect to the isoscalar, isovector and total pairing interactions define a control parameter, which estimates the importance of each term of the Hamiltonian in the correct reproduction of the experimental data for the considered nuclei.

  6. Some Statistics for Measuring Large-Scale Structure

    OpenAIRE

    Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.

    1993-01-01

    Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two and three dimensional "counts in cell" statistics and a new "discrete genus statistic" are applied to toy versions of several popular theories of structure formation: the random phase cold dark matter model, cosmic string models, and the global texture scenario. All three statistics appear quite promising in terms of differentiating betw...

  7. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  8. PKI security in large-scale healthcare networks

    OpenAIRE

    Mantas, G.; Lymberopoulos, D.; Komninos, N.

    2012-01-01

    During the past few years a lot of PKI (Public Key Infrastructures) infrastructures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKI infrastructures. Especially, there are a lot of challenges for PKI infrastructures deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a ...

  9. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea and the ESPROSE.m prediction of fragmentation rate. (author)

  10. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ∼5000 km s⁻¹ (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)
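
    For readers new to the terminology, the standard relations assumed throughout such reviews (not specific to this one) are the Hubble law and the definition of the line-of-sight peculiar velocity:

```latex
% In a uniformly expanding universe, distance and recession velocity obey v = H_0 d,
% and the peculiar velocity is the departure of the observed redshift velocity cz
% from pure Hubble flow, so its measurement requires redshift-independent distances d:
\[
v = H_0\, d , \qquad v_{\mathrm{pec}} = c z - H_0\, d .
\]
```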

  11. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  12. Large Scale Visual Recommendations From Street Fashion Images

    OpenAIRE

    Jagadeesh, Vignesh; Piramuthu, Robinson; Bhardwaj, Anurag; Di, Wei; Sundaresan, Neel

    2014-01-01

    We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose four data driven models in the form of Complementary Nearest Neighbor Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval and Markov Chain LDA for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive e...

  13. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2, design improvements for further cost reduction and establishment of the plant concept have been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics found in the last fiscal year were examined and the plant concept was modified. Furthermore, fundamental specifications of the main systems and components were set and the economics were evaluated. In addition, as the interim evaluation of the candidate concepts of the FBR fuel cycle is to be conducted, cost effectiveness and achievability of the development goal were evaluated and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  14. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop-type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop-type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  15. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to
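
    A schematic of the Bayesian-inversion setting behind the two strategies, in standard notation assumed here rather than taken from the report:

```latex
% Posterior over parameters m given data d, with forward map F and noise covariance \Sigma:
\[
\pi_{\mathrm{post}}(m \mid d) \;\propto\;
\exp\!\Big(-\tfrac{1}{2}\,\big\| F(m) - d \big\|_{\Sigma^{-1}}^{2}\Big)\,\pi_{\mathrm{prior}}(m).
\]
% "Reduce then sample": replace F by an inexpensive reduced-order surrogate F_r before sampling;
% "sample then reduce": keep F but use adjoint-based gradient/Hessian information of m -> F(m)
% so that the posterior can be explored with far fewer samples.
```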

  16. NASA: Assessments of Selected Large-Scale Projects

    Science.gov (United States)

    2011-03-01

    [Scanned report documentation page; recoverable fragment: "... probes designed to explore the Martian surface, to satellites equipped with advanced sensors to study the earth, to telescopes intended to explore the ..."]

  17. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    Energy Technology Data Exchange (ETDEWEB)

    Membiela, Federico Agustin [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350, (7600) Mar del Plata (Argentina); Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)], E-mail: membiela@mdp.edu.ar; Bellini, Mauricio [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350, (7600) Mar del Plata (Argentina); Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)], E-mail: mbellini@mdp.edu.ar

    2009-04-20

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant {lambda}{sub 0}. Using the gravitoelectromagnetic inflationary formalism with A{sub 0}=0, we obtain the power of spectrums for large-scale magnetic fields and the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  18. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    Science.gov (United States)

    Membiela, Federico Agustín; Bellini, Mauricio

    2009-04-01

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant Λ0. Using the gravitoelectromagnetic inflationary formalism with A0 = 0, we obtain the power of spectrums for large-scale magnetic fields and the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  19. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    International Nuclear Information System (INIS)

    Membiela, Federico Agustin; Bellini, Mauricio

    2009-01-01

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant Λ 0 . Using the gravitoelectromagnetic inflationary formalism with A 0 =0, we obtain the power of spectrums for large-scale magnetic fields and the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  20. Rotation invariant fast features for large-scale recognition

    Science.gov (United States)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  1. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07


  2. On a Game of Large-Scale Projects Competition

    Science.gov (United States)

    Nikonov, Oleg I.; Medvedeva, Marina A.

    2009-09-01

    The paper is devoted to game-theoretical control problems motivated by economic decision making situations arising in realization of large-scale projects, such as designing and putting into operations the new gas or oil pipelines. A non-cooperative two player game is considered with payoff functions of special type for which standard existence theorems and algorithms for searching Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].

  3. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
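
    The flame height versus nondimensional heat release rate scaling mentioned above can be sketched with standard pool-fire correlations; the particular Heskestad-type constants below are common literature values and are not taken from the Sandia report.

```latex
% Nondimensional heat release rate for a pool of diameter D and total heat release \dot{Q}:
\[
\dot{Q}^{*} \;=\; \frac{\dot{Q}}{\rho_{\infty}\, c_{p}\, T_{\infty}\, \sqrt{g\,D}\; D^{2}} ,
\]
% with the flame height L commonly correlated against \dot{Q}^{*}, e.g. a Heskestad-type form:
\[
\frac{L}{D} \;=\; 3.7\,\big(\dot{Q}^{*}\big)^{2/5} \;-\; 1.02 .
\]
```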

  4. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng

    2017-06-20

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.

  5. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng; Xu, Weiyu; Yang, Yang

    2017-01-01

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.

  6. Large-scale nuclear energy from the thorium cycle

    International Nuclear Information System (INIS)

    Lewis, W.B.; Duret, M.F.; Craig, D.S.; Veeder, J.I.; Bain, A.S.

    1973-02-01

    The thorium fuel cycle in CANDU (Canada Deuterium Uranium) reactors challenges breeders and fusion as the simplest means of meeting the world's large-scale demands for energy for centuries. Thorium oxide fuel allows high power density with excellent neutron economy. The combination of thorium fuel with an organic coolant promises easy maintenance and high availability of the whole plant. The total fuelling cost including charges on the inventory is estimated to be attractively low. (author) [fr

  7. Large scale particle image velocimetry with helium filled soap bubbles

    Energy Technology Data Exchange (ETDEWEB)

    Bosbach, Johannes; Kuehn, Matthias; Wagner, Claus [German Aerospace Center (DLR), Institute of Aerodynamics and Flow Technology, Goettingen (Germany)

    2009-03-15

    The application of particle image velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of computational fluid dynamics simulations. (orig.)

  8. Evolutionary leap in large-scale flood risk assessment needed

    OpenAIRE

    Vorogushyn, Sergiy; Bates, Paul D.; de Bruijn, Karin; Castellarin, Attilio; Kreibich, Heidi; Priest, Sally J.; Schröter, Kai; Bagli, Stefano; Blöschl, Günter; Domeneghetti, Alessio; Gouldby, Ben; Klijn, Frans; Lammersen, Rita; Neal, Jeffrey C.; Ridder, Nina

    2018-01-01

    Current approaches for assessing large-scale flood risks contravene the fundamental principles of the flood risk system functioning because they largely ignore basic interactions and feedbacks between atmosphere, catchments, river-floodplain systems and socio-economic processes. As a consequence, risk analyses are uncertain and might be biased. However, reliable risk estimates are required for prioritizing national investments in flood risk mitigation or for appraisal and management of insura...

  9. Large scale particle image velocimetry with helium filled soap bubbles

    Science.gov (United States)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.

  10. Large-scale Health Information Database and Privacy Protection*1

    OpenAIRE

    YAMAMOTO, Ryuichi

    2016-01-01

    Japan was once progressive in the digitalization of healthcare fields but unfortunately has fallen behind in terms of the secondary use of data for public interest. There has recently been a trend to establish large-scale health databases in the nation, and a conflict between data use for public interest and privacy protection has surfaced as this trend has progressed. Databases for health insurance claims or for specific health checkups and guidance services were created according to the law...

  11. Isogeometric shell formulation based on a classical shell model

    KAUST Repository

    Niemi, Antti

    2012-09-04

    This paper constitutes the first steps in our work concerning isogeometric shell analysis. An isogeometric shell model of the Reissner-Mindlin type is introduced and a study of its accuracy in the classical pinched cylinder benchmark problem is presented. In contrast to earlier works [1,2,3,4], the formulation is based on a shell model where the displacement, strain and stress fields are defined in terms of a curvilinear coordinate system arising from the NURBS description of the shell middle surface. The isogeometric shell formulation is implemented using the PetIGA and igakit software packages developed by the authors. The igakit package is a Python package used to generate NURBS representations of geometries that can be utilised by the PetIGA finite element framework. The latter utilises data structures and routines of the portable, extensible toolkit for scientific computation (PETSc) [5,6]. The current shell implementation is valid for static, linear problems only, but the software package is well suited for future extensions to the geometrically and materially nonlinear regime as well as to dynamic problems. We study the accuracy of the approach in the pinched cylinder benchmark problem and present comparisons against the h-version of the finite element method with bilinear elements. Quadratic, cubic and quartic NURBS discretizations are compared against the isoparametric bilinear discretization introduced in [7]. The results show that the quadratic and cubic NURBS approximations exhibit notably slower convergence under uniform mesh refinement as the thickness decreases, but the quartic approximation converges relatively quickly within the standard variational framework. The authors' future work is concerned with building an isogeometric finite element method for modelling the nonlinear structural response of thin-walled shells undergoing large rigid-body motions. The aim is to use the model in an aeroelastic framework for the simulation of flapping wings.
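
    Because the formulation is built on the NURBS description of the shell middle surface, a generic Cox-de Boor evaluation of a NURBS curve may help fix ideas. This is an illustrative sketch, not the PetIGA/igakit implementation; the quarter-circle example is a textbook case.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_curve_point(u, ctrl_pts, weights, p, knots):
    """Evaluate a NURBS curve point as a weighted rational combination of control points."""
    n = len(ctrl_pts)
    N = np.array([bspline_basis(i, p, u, knots) for i in range(n)])
    w = N * np.asarray(weights)
    return (w[:, None] * np.asarray(ctrl_pts)).sum(axis=0) / w.sum()

# Quadratic NURBS quarter circle (exact conic representation).
knots = [0, 0, 0, 1, 1, 1]
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, np.sqrt(2) / 2, 1.0]
print(nurbs_curve_point(0.5, ctrl, weights, p=2, knots=knots))  # lies on the unit circle
```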

  12. Large-scale preparation of hollow graphitic carbon nanospheres

    International Nuclear Information System (INIS)

    Feng, Jun; Li, Fu; Bai, Yu-Jun; Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning; Lu, Xi-Feng

    2013-01-01

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of 10 nm or so and a wall thickness of a few graphenes. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on large scale by the simple reaction between glucose and Mg at 550 °C, which exhibit superior electrochemical performance to graphite. Highlights: ► Hollow graphitic carbon nanospheres (HGCNSs) were prepared on large scale at 550 °C. ► The preparation is simple, effective and eco-friendly. ► The in situ yielded MgO nanocrystals promote the graphitization. ► The HGCNSs exhibit superior electrochemical performance to graphite.

  13. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.

  14. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models which are related to weather forecast and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how the precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North America gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing performance of spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  15. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
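
    The bilateral rank-1 update can be illustrated on a toy problem. The sketch below is essentially a Frank-Wolfe-style update with a line search over the mixing coefficient of the two-term combination, in the spirit of the strategy described above; it is not the BILGO implementation, and the objective and matrix sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n))
C = (A + A.T) / 2                        # symmetric data matrix

def f(X):
    """Toy concave objective over the spectrahedron {X >= 0, tr X = 1}."""
    return np.sum(C * X) - 0.5 * np.sum(X * X)

X = np.eye(n) / n                        # feasible starting point
for k in range(200):
    grad = C - X                         # gradient of f at X
    # leading eigenvector of the gradient gives the best rank-1 direction
    w, V = np.linalg.eigh(grad)
    v = V[:, -1:]
    D = v @ v.T
    # line search over the mixing coefficient of the two-term combination
    betas = np.linspace(0.0, 1.0, 101)
    vals = [f((1 - b) * X + b * D) for b in betas]
    beta = betas[int(np.argmax(vals))]
    X = (1 - beta) * X + beta * D        # remains PSD with unit trace
```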

  16. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show this, we adapted to the visual domain an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G., Jr. [Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9 (1998) 4167-4170] and Sussman, E., and Gumenyuk, V. [Organization of sequential sounds in auditory memory. NeuroReport, 16 (2005) 1519-1523], presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual sensory system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. © 2010 Elsevier B.V. All rights reserved.

  17. Large-scale preparation of hollow graphitic carbon nanospheres

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jun; Li, Fu [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Bai, Yu-Jun, E-mail: byj97@126.com [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); State Key laboratory of Crystal Materials, Shandong University, Jinan 250100 (China); Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Lu, Xi-Feng [Lunan Institute of Coal Chemical Engineering, Jining 272000 (China)

    2013-01-15

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of 10 nm or so and a wall thickness of a few graphenes. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on large scale by the simple reaction between glucose and Mg at 550 °C, which exhibit superior electrochemical performance to graphite. Highlights: ► Hollow graphitic carbon nanospheres (HGCNSs) were prepared on large scale at 550 °C. ► The preparation is simple, effective and eco-friendly. ► The in situ yielded MgO nanocrystals promote the graphitization. ► The HGCNSs exhibit superior electrochemical performance to graphite.

  18. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutronic sensor during three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large scale channel and a brief description of it are given. The results obtained so far in that domain are presented. - The fluctuation mode is thoroughly studied and the improvements to be made are defined. The study of a fluctuation linear channel with automatic scale switching is described and the results of the tests are given. In this large scale channel, the data processing method is analogue. - To become independent of the problems generated by the use of analogue processing of the fluctuation signal, a digital data processing method is tested. The validity of that method is verified. The results obtained on a test system realized according to this method are given and a preliminary plan for further research is defined [fr

  19. Critical thinking, politics on a large scale and media democracy

    Directory of Open Access Journals (Sweden)

    José Antonio IBÁÑEZ-MARTÍN

    2015-06-01

    Full Text Available A first look at current social reality offers numerous grounds for concern. The spectacle of violence and immorality can easily frighten us. More worrying still is to verify that the horizon of conviviality, peace and wellbeing that Europe had been developing since the Treaty of Rome of 1957 has been seriously compromised by the economic crisis. Today we face an assault on democratic politics, which is characterized by the media democracy as an exhausted system that must be changed into a new and great politics, a politics on a large scale. The article analyses the concept of a politics on a large scale, primarily with reference to Nietzsche, and notes its connection with great philosophy and great education. The study of Nietzsche's texts leads us to the conclusion that in them we often find an interesting analysis of the problems together with misguided proposals for solutions. We cannot claim to suggest solutions to all the problems, but we outline various proposals for changes to political activity that can reasonably be defended against the media democracy. In conclusion, we point out that a politics on a large scale requires statesmen able to suggest modes of life in common that can structure long-term coexistence.

  20. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  1. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
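
    For reference, a minimal serial sketch of the standard affinity propagation message updates (responsibilities and availabilities with damping); the paper's contribution is the parallel, memory-shared and distributed organization, which is not reproduced here, and the toy similarities below are placeholders.

```python
import numpy as np

def affinity_propagation(S, n_iter=200, damping=0.9):
    """Minimal serial sketch of the affinity propagation message updates.
    S is the n x n similarity matrix; diagonal entries hold the preferences."""
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibilities
    A = np.zeros((n, n))   # availabilities
    for _ in range(n_iter):
        # responsibility update: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second_max = AS.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second_max
        R = damping * R + (1 - damping) * R_new
        # availability update: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    return np.argmax(A + R, axis=1)   # exemplar index for each point

# Toy usage: negative squared distances as similarities, median preference.
X = np.random.default_rng(0).normal(size=(100, 2))
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, np.median(S))
labels = affinity_propagation(S)
```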

  2. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they are concerned with: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sector.

  3. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng; Yuan, Ganzhao; Ghanem, Bernard

    2013-01-01

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.

  4. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  5. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one method that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI to visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improve the performance and scalability of our approach.
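
    A CPU-side sketch of the per-pixel linked list idea follows; the array names, the tiny framebuffer, and the stored per-fragment attributes are illustrative stand-ins (a GPU implementation would use a head-pointer buffer and an atomically appended node buffer).

```python
import numpy as np

W, H = 4, 4                                      # tiny framebuffer for illustration
head = np.full(W * H, -1, dtype=np.int64)        # per-pixel head pointer (-1 = empty list)
next_ptr, seg_id, depth, value = [], [], [], []  # node buffer: one entry per fragment

def insert(px, py, segment, z, scalar):
    """Prepend a pathline-segment fragment to the linked list of pixel (px, py)."""
    node = len(seg_id)
    next_ptr.append(head[py * W + px])
    seg_id.append(segment)
    depth.append(z)
    value.append(scalar)
    head[py * W + px] = node

def fragments(px, py):
    """Walk a pixel's list, e.g. to filter or re-color without touching the raw data."""
    node = head[py * W + px]
    while node != -1:
        yield seg_id[node], depth[node], value[node]
        node = next_ptr[node]

insert(1, 2, segment=7, z=0.35, scalar=0.9)
insert(1, 2, segment=3, z=0.10, scalar=0.2)
print(sorted(fragments(1, 2), key=lambda f: f[1]))  # depth-sorted fragments at pixel (1, 2)
```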

  6. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication with the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, time pressure, compounded by adverse weather conditions or darkness, is enormous. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are advisable, as is the analysis of previous large-scale Alpine accidents.

  7. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques that use models other than Newton's: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
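
    A minimal sketch of the Broyden idea mentioned above is given on a two-equation toy system; the system, starting point, and finite-difference seed Jacobian are placeholders, and the report's limited-memory variant stores the update pairs rather than the dense matrix updated here.

```python
import numpy as np

def F(x):
    """Toy nonlinear system with solution (1, 2); stands in for a large-scale residual."""
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def fd_jacobian(x, eps=1e-7):
    """One-time finite-difference Jacobian used to seed the secant approximation."""
    J, f0 = np.zeros((len(x), len(x))), F(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (F(xp) - f0) / eps
    return J

def broyden(x, tol=1e-10, max_iter=50):
    J, f = fd_jacobian(x), F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -f)                   # quasi-Newton step with approximate Jacobian
        x = x + dx
        f_new = F(x)
        if np.linalg.norm(f_new) < tol:
            break
        df = f_new - f
        J += np.outer(df - J @ dx, dx) / (dx @ dx)    # rank-1 secant (Broyden) update
        f = f_new
    return x

print(broyden(np.array([1.5, 1.5])))                  # converges to approximately (1, 2)
```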

  8. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  9. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

    As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology. Moreover, the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  10. Large-Scale Corrections to the CMB Anisotropy from Asymptotic de Sitter Mode

    Science.gov (United States)

    Sojasi, A.

    2018-01-01

    In this study, large-scale effects of an asymptotic de Sitter mode on the CMB anisotropy are investigated. Besides the slow variation of the Hubble parameter at the onset of the last stage of inflation, the recent observational constraints from Planck and WMAP on the spectral index confirm that the geometry of the universe cannot be pure de Sitter in this era. Motivated by this evidence, we use this mode to calculate the power spectrum of the CMB anisotropy on large scales. It is found that the CMB spectrum depends on the index ν of the Hankel function; in the de Sitter limit ν → 3/2, the power spectrum reduces to the scale-invariant result. Also, the result shows that the spectrum of anisotropy depends on the angular scale and the slow-roll parameter, and these additional corrections are swept away by a cutoff scale parameter H ≪ M_* < M_P.
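
    As a hedged illustration of the dependence on the Hankel index ν (normalization and higher-order slow-roll corrections omitted), the standard quasi-de Sitter mode-function result gives a primordial power spectrum of the form

```latex
\mathcal{P}(k) \;\propto\; k^{\,3 - 2\nu}, \qquad n_s - 1 = 3 - 2\nu ,
```

    so the exact de Sitter limit ν → 3/2 recovers the scale-invariant spectrum, while a small departure of ν from 3/2 tilts the large-scale anisotropy, in line with the Planck and WMAP spectral-index constraints cited in the abstract.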

  11. Large-scale Cosmic-Ray Anisotropy as a Probe of Interstellar Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Giacinti, Gwenael; Kirk, John G. [Max-Planck-Institut für Kernphysik, Postfach 103980, D-69029 Heidelberg (Germany)

    2017-02-01

    We calculate the large-scale cosmic-ray (CR) anisotropies predicted for a range of Goldreich–Sridhar (GS) and isotropic models of interstellar turbulence, and compare them with IceTop data. In general, the predicted CR anisotropy is not a pure dipole; the cold spots reported at 400 TeV and 2 PeV are consistent with a GS model that contains a smooth deficit of parallel-propagating waves and a broad resonance function, though some other possibilities cannot, as yet, be ruled out. In particular, isotropic fast magnetosonic wave turbulence can match the observations at high energy, but cannot accommodate an energy dependence in the shape of the CR anisotropy. Our findings suggest that improved data on the large-scale CR anisotropy could provide a valuable probe of the properties—notably the power-spectrum—of the interstellar turbulence within a few tens of parsecs from Earth.

  12. THE DECAY OF A WEAK LARGE-SCALE MAGNETIC FIELD IN TWO-DIMENSIONAL TURBULENCE

    Energy Technology Data Exchange (ETDEWEB)

    Kondić, Todor; Hughes, David W.; Tobias, Steven M., E-mail: t.kondic@leeds.ac.uk [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2016-06-01

    We investigate the decay of a large-scale magnetic field in the context of incompressible, two-dimensional magnetohydrodynamic turbulence. It is well established that a very weak mean field, of strength significantly below the equipartition value, induces a small-scale field strong enough to inhibit the process of turbulent magnetic diffusion. In light of ever-increasing computer power, we revisit this problem to investigate fluid and magnetic Reynolds numbers that were previously inaccessible. Furthermore, by exploiting the relation between the turbulent diffusion of the magnetic potential and that of the magnetic field, we are able to calculate the turbulent magnetic diffusivity extremely accurately through the imposition of a uniform mean magnetic field. We confirm the strong dependence of the turbulent diffusivity on the product of the magnetic Reynolds number and the energy of the large-scale magnetic field. We compare our findings with various theoretical descriptions of this process.
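
    One common way to extract a turbulent diffusivity from such runs is to fit the exponential decay of the largest-scale mode. The sketch below does this on a synthetic time series; the assumed decay law exp(-η_T k² t), the wavenumber, and the noise level are illustrative, not the paper's diagnostic.

```python
import numpy as np

k = 2.0 * np.pi / 1.0                        # wavenumber of the large-scale mode (box size 1)
t = np.linspace(0.0, 5.0, 200)
eta_true = 3e-3                              # "true" diffusivity used to build the fake data
noise = 1.0 + 0.02 * np.random.default_rng(0).normal(size=t.size)
amplitude = np.exp(-eta_true * k**2 * t) * noise   # stand-in for the measured mode amplitude

slope = np.polyfit(t, np.log(np.abs(amplitude)), 1)[0]   # d ln|B_k| / dt
eta_T = -slope / k**2
print(eta_T)                                 # recovers ~3e-3 up to the injected noise
```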

  13. Observing the temperature of the big bang through large scale structure

    Science.gov (United States)

    Ferreira, Pedro G.; Magueijo, João

    2008-09-01

    It is an interesting possibility that the Universe underwent a period of thermal equilibrium at very early times. One expects a residue of this primordial state to be imprinted on the large scale structure of space time. In this paper, we study the morphology of this thermal residue in a universe whose early dynamics is governed by a scalar field. We calculate the amplitude of fluctuations on large scales and compare it with the imprint of vacuum fluctuations. We then use the observed power spectrum of fluctuations on the cosmic microwave background to place a constraint on the temperature of the Universe before and during inflation. We also present an alternative scenario, where the fluctuations are predominantly thermal and near scale-invariant.

  14. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    Energy Technology Data Exchange (ETDEWEB)

    RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy for the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which renders the scholarly process amenable to statistical analysis and computational support. The article presents the ontology, discusses its instantiation, and provides some example inference rules for calculating various scholarly artifact metrics.
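
    The kind of data model the abstract describes might be sketched as follows; the class and field names are illustrative stand-ins, not the vocabulary defined by the article's ontology.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Journal:
    name: str

@dataclass
class Author:
    name: str

@dataclass
class Article:
    title: str
    journal: Journal
    authors: list = field(default_factory=list)

@dataclass
class UsageEvent:                      # one of the ~1 billion events in the instantiation
    article: Article
    session: str
    timestamp: datetime

def journal_usage(events):
    """Toy 'inference rule': aggregate article-level usage events up to journals."""
    counts = {}
    for ev in events:
        counts[ev.article.journal.name] = counts.get(ev.article.journal.name, 0) + 1
    return counts

j = Journal("J. Scholarly Metrics")
a = Article("A practical ontology", j, [Author("Rodriguez"), Author("Bollen")])
events = [UsageEvent(a, "s1", datetime(2007, 1, 30)), UsageEvent(a, "s2", datetime(2007, 2, 2))]
print(journal_usage(events))           # {'J. Scholarly Metrics': 2}
```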

  15. Symplectic integrators for large scale molecular dynamics simulations: A comparison of several explicit methods

    International Nuclear Information System (INIS)

    Gray, S.K.; Noid, D.W.; Sumpter, B.G.

    1994-01-01

    We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large-scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long-time trajectories of a 1000-unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are identified. We also discuss certain variations on the basic symplectic integration technique.
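
    As an illustration of the kind of explicit symplectic scheme being compared, here is the velocity Verlet integrator applied to a small harmonic chain; the chain and force constants are stand-ins, and the study's polyethylene force field is not reproduced.

```python
import numpy as np

def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
    """Velocity Verlet: a simple second-order explicit symplectic integrator."""
    f = force(q)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f        # half kick
        q = q + dt * p / mass       # drift
        f = force(q)
        p = p + 0.5 * dt * f        # half kick
    return q, p

def chain_force(q, k=1.0):
    """Nearest-neighbour harmonic springs: a toy stand-in for the polymer force field."""
    f = np.zeros_like(q)
    f[:-1] -= k * (q[:-1] - q[1:])
    f[1:] -= k * (q[1:] - q[:-1])
    return f

def energy(q, p, k=1.0, mass=1.0):
    return 0.5 * p @ p / mass + 0.5 * k * np.sum((q[:-1] - q[1:]) ** 2)

rng = np.random.default_rng(0)
q0, p0 = 0.1 * rng.normal(size=50), np.zeros(50)
q1, p1 = velocity_verlet(q0, p0, chain_force, dt=0.05, n_steps=20000)
print(energy(q0, p0), energy(q1, p1))   # near-conservation of energy over long trajectories
```

    The near-conservation of the total energy over very long trajectories, visible in the final print, is the practical hallmark that distinguishes symplectic schemes from the nonsymplectic integrators mentioned in the abstract.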

  16. Large-scale compositional heterogeneity in the Earth's mantle

    Science.gov (United States)

    Ballmer, M.

    2017-12-01

    Seismic imaging of subducted Farallon and Tethys lithosphere in the lower mantle has been taken as evidence for whole-mantle convection, and efficient mantle mixing. However, cosmochemical constraints point to a lower-mantle composition that has a lower Mg/Si compared to upper-mantle pyrolite. Moreover, geochemical signatures of magmatic rocks indicate the long-term persistence of primordial reservoirs somewhere in the mantle. In this presentation, I establish geodynamic mechanisms for sustaining large-scale (primordial) heterogeneity in the Earth's mantle using numerical models. Mantle flow is controlled by rock density and viscosity. Variations in intrinsic rock density, such as due to heterogeneity in basalt or iron content, can induce layering or partial layering in the mantle. Layering can be sustained in the presence of persistent whole mantle convection due to active "unmixing" of heterogeneity in low-viscosity domains, e.g. in the transition zone or near the core-mantle boundary [1]. On the other hand, lateral variations in intrinsic rock viscosity, such as due to heterogeneity in Mg/Si, can strongly affect the mixing timescales of the mantle. In the extreme case, intrinsically strong rocks may remain unmixed through the age of the Earth, and persist as large-scale domains in the mid-mantle due to focusing of deformation along weak conveyor belts [2]. That large-scale lateral heterogeneity and/or layering can persist in the presence of whole-mantle convection can explain the stagnation of some slabs, as well as the deflection of some plumes, in the mid-mantle. These findings indeed motivate new seismic studies for rigorous testing of model predictions. [1] Ballmer, M. D., N. C. Schmerr, T. Nakagawa, and J. Ritsema (2015), Science Advances, doi:10.1126/sciadv.1500815. [2] Ballmer, M. D., C. Houser, J. W. Hernlund, R. Wentzcovitch, and K. Hirose (2017), Nature Geoscience, doi:10.1038/ngeo2898.

  17. Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-10-01

    Between late fall and early spring, Mars’ middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring presents in the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.

  18. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation, on the other hand, is the early-time accelerated era, so the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and in physics in general. There are several well-defined approaches to solve this problem. One of them is an assumption concerning the existence of dark energy in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, here two groups of cosmological models are discussed. In the first group the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named the varying ghost dark energy. On the other hand, the second group contains cosmological models addressing the same problem involving either new parameterizations of the equation of state parameter of dark energy (like a varying polytropic gas) or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is due to general relativity) is demonstrated as well. Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  19. Large-Scale Traveling Weather Systems in Mars Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-01-01

    Between late fall and early spring, Mars' middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring presents in the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.

  20. Nonlinear evolution of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-01-01

    Using N-body simulations we study the nonlinear development of primordial density perturbations in an Einstein-de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function ξ(r) and the visual appearance of our adiabatic (or "pancake") models better match the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of ξ(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be λ_min/r_0 = 5.1; its expected value in a neutrino-dominated universe is 4(Ωh)^-1 (H_0 = 100h km s^-1 Mpc^-1). At early epochs these models predict a negligible amplitude for ξ(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω < 1.
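
    A brute-force sketch of how a two-point correlation function like ξ(r) is estimated from a point catalogue follows; the Peebles-Hauser estimator, the toy clustered catalogue, and the bin choices are illustrative assumptions (production N-body analyses use optimized pair counting and typically the Landy-Szalay estimator).

```python
import numpy as np

rng = np.random.default_rng(1)

def pair_counts(a, b, bins):
    """Histogram of pair separations between point sets a and b (brute force)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).ravel()
    return np.histogram(d, bins=bins)[0]

def xi_natural(data, n_random, bins):
    """Peebles-Hauser estimator: xi(r) = (DD/RR) * (N_r/N_d)^2 - 1."""
    rand = rng.random((n_random, 3))
    dd = pair_counts(data, data, bins).astype(float)
    rr = pair_counts(rand, rand, bins).astype(float)
    return dd / rr * (n_random / len(data)) ** 2 - 1.0

# Toy 'clustered' catalogue: points scattered around a few cluster centres in a unit box.
centres = rng.random((20, 3))
data = (centres[rng.integers(0, 20, 1000)] + 0.02 * rng.normal(size=(1000, 3))) % 1.0
bins = np.linspace(0.01, 0.3, 15)
print(xi_natural(data, 1000, bins))   # strongly positive on small scales, falling toward zero
```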