WorldWideScience

Sample records for computing zero-point energy

  1. Zero-point energy of confined fermions

    International Nuclear Information System (INIS)

    Milton, K.A.

    1980-01-01

    A closed form for the reduced Green's function of massless fermions in the interior of a spherical bag is obtained. In terms of this Green's function, the corresponding zero-point or Casimir energy is computed. It is proposed that a resulting quadratic divergence can be absorbed by renormalizing a suitable parameter in the bag model (that is, absorbed by a contact term). The residual Casimir stress is attractive, but smaller than the repulsive Casimir stress of gluons in the model. The result for the total zero-point energy is in substantial disagreement with bag model phenomenological values

  2. Zero Point Energy of Renormalized Wilson Loops

    OpenAIRE

    Hidaka, Yoshimasa; Pisarski, Robert D.

    2009-01-01

    The quark antiquark potential, and its associated zero point energy, can be extracted from lattice measurements of the Wilson loop. We discuss a unique prescription to renormalize the Wilson loop, for which the perturbative contribution to the zero point energy vanishes identically. A zero point energy can arise nonperturbatively, which we illustrate by considering effective string models. The nonperturbative contribution to the zero point energy vanishes in the Nambu model, but is nonzero wh...

  3. Gravity and Zero Point Energy

    Science.gov (United States)

    Massie, U. W.

When Planck introduced the (1/2)hν term to his 1911 black body equation, he showed that there is a residual energy remaining at zero degrees Kelvin after all thermal energy has ceased. Other investigators, including Lamb, Casimir, and Dirac, added to this information. Today zero point energy (ZPE) is accepted as an established condition. The purpose of this paper is to demonstrate that the density of the ZPE is given by the gravity constant (G) and that the characteristics of its particles are revealed by the cosmic microwave background (CMB). Eddies of ZPE particles created by flow around mass bodies reduce the pressure normal to the eddy flow and are responsible for the force of gravity. Helium atoms resonate with ZPE particles at low temperature to produce superfluid helium. High-velocity micro vortices of ZPE particles about a basic particle or particles are responsible for electromagnetic forces. The speed of light is the speed of the wave front in the ZPE, and its value is a function of the temperature and density of the ZPE.

  4. Zero point energy of renormalized Wilson loops

    International Nuclear Information System (INIS)

    Hidaka, Yoshimasa; Pisarski, Robert D.

    2009-01-01

    The quark-antiquark potential, and its associated zero point energy, can be extracted from lattice measurements of the Wilson loop. We discuss a unique prescription to renormalize the Wilson loop, for which the perturbative contribution to the zero point energy vanishes identically. A zero point energy can arise nonperturbatively, which we illustrate by considering effective string models. The nonperturbative contribution to the zero point energy vanishes in the Nambu model, but is nonzero when terms for extrinsic curvature are included. At one loop order, the nonperturbative contribution to the zero point energy is negative, regardless of the sign of the extrinsic curvature term.

  5. Gravitational Zero Point Energy induces Physical Observables

    OpenAIRE

    Garattini, Remo

    2010-01-01

We consider the contribution of Zero Point Energy to the induced Cosmological Constant and to the induced Electric/Magnetic charge in the absence of matter fields. The method is applicable to every spherically symmetric background. Extensions to a generic $f(R)$ theory are also allowed. Only the graviton appears to be fundamental to the determination of Zero Point Energy.

  6. Zero-point energy in spheroidal geometries

    OpenAIRE

    Kitson, A. R.; Signal, A. I.

    2005-01-01

    We study the zero-point energy of a massless scalar field subject to spheroidal boundary conditions. Using the zeta-function method, the zero-point energy is evaluated for small ellipticity. Axially symmetric vector fields are also considered. The results are interpreted within the context of QCD flux tubes and the MIT bag model.

  7. Zero-point energy in bag models

    International Nuclear Information System (INIS)

    Milton, K.A.

    1979-01-01

    The zero-point (Casimir) energy of free vector (gluon) fields confined to a spherical cavity (bag) is computed. With a suitable renormalization the result for eight gluons is E = + 0.51/a. This result is substantially larger than that for a spherical shell (where both interior and exterior modes are present), and so affects Johnson's model of the QCD vacuum. It is also smaller than, and of opposite sign to, the value used in bag model phenomenology, so it will have important implications there. 1 figure

  8. Zero Point Energy and the Dirac Equation

    OpenAIRE

    Forouzbakhsh, Farshid

    2007-01-01

Zero Point Energy (ZPE) describes the random electromagnetic oscillations that are left in the vacuum after all other energy has been removed. One way to explain this is by means of the uncertainty principle of quantum physics, which implies that it is impossible to have a zero-energy condition. In this article, the ZPE is explained by using a novel description of the graviton. This is based on the behavior of photons in a gravitational field, leading to a new definition of the graviton. In effec...

  9. Zero point energy of polyhedral water clusters.

    Science.gov (United States)

    Anick, David J

    2005-06-30

Polyhedral water clusters (PWCs) are cage-like (H2O)n clusters where every O participates in exactly three H bonds. For a database of 83 PWCs, the zero point energy (ZPE) was calculated at the B3LYP/6-311++G** level. ZPE correlates negatively with electronic energy (E0): each increase of 1 kcal/mol in E0 corresponds to a decrease of about 0.11 kcal/mol in ZPE. For each n, a set of four connectivity parameters accounts for 98% or more of the variance in ZPE. Linear regression of ZPE against n and this set gives an RMS error of 0.13 kcal/mol. The contributions to ZPE from stretch modes only (ZPE(S)) and from torsional modes only (ZPE(T)) also correlate strongly with E0 and with each other.
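The regression described in the abstract, ZPE against cluster size n plus connectivity parameters, can be sketched with ordinary least squares. The data, coefficients, and noise level below are fabricated for illustration (only the ~0.13 kcal/mol residual scale echoes the abstract); they are not the paper's database.

```python
import numpy as np

# Synthetic stand-in for the 83-cluster database: cluster sizes n and
# four connectivity parameters per cluster (all values made up).
rng = np.random.default_rng(0)
n = rng.integers(8, 25, size=83).astype(float)       # cluster sizes
c = rng.normal(size=(83, 4))                         # connectivity parameters
true_coeffs = np.array([13.0, 0.5, -0.3, 0.2, 0.1])  # [per-n, per-parameter]
zpe = n * true_coeffs[0] + c @ true_coeffs[1:] + rng.normal(scale=0.13, size=83)

# Design matrix [n, connectivity params]; least-squares fit as in the abstract.
X = np.column_stack([n, c])
coeffs, *_ = np.linalg.lstsq(X, zpe, rcond=None)
rms = np.sqrt(np.mean((X @ coeffs - zpe) ** 2))
print(coeffs[0], rms)  # recovered per-molecule slope and residual RMS
```

With noise at the 0.13 kcal/mol level and 83 points, the fit recovers the per-molecule slope essentially exactly, mirroring the small RMS error reported.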

  10. Tapping the zero-point energy as an energy source

    International Nuclear Information System (INIS)

    King, M.B.

    1991-01-01

    This paper reports that the hypothesis for tapping the zero-point energy (ZPE) arises by combining the theories of the ZPE with the theories of system self-organization. The vacuum polarization of atomic nuclei might allow their synchronous motion to activate a ZPE coherence. Experimentally observed plasma ion-acoustic anomalies as well as inventions utilizing cycloid ion motions may offer supporting evidence. The suggested experiment of rapidly circulating a charged plasma in a vortex ring might induce a sufficient zero-point energy interaction to manifest a gravitational anomaly. An invention utilizing abrupt E field rotation to create virtual charge exhibits excessive energy output

  11. Vibrational zero point energy for H-doped silicon

    Science.gov (United States)

    Karazhanov, S. Zh.; Ganchenkova, M.; Marstein, E. S.

    2014-05-01

Most studies addressing the computation of hydrogen parameters in semiconductor systems, such as silicon, are performed at zero temperature T = 0 K and do not account for the contribution of vibrational zero point energy (ZPE). For light atoms such as hydrogen (H), however, the magnitude of this parameter might not be negligible. This Letter is devoted to clarifying the importance of accounting for zero-point vibrations when analyzing hydrogen behavior in silicon and its effect on silicon electronic properties. For this, we estimate the ZPE for different locations and charge states of H in Si. We show that the main contribution to the ZPE comes from vibrations along the Si-H bonds, whereas contributions from other Si atoms apart from the direct Si-H bonds play no role. It is demonstrated that accounting for the ZPE reduces the hydrogen formation energy by ~0.17 eV, meaning that by neglecting the ZPE at low temperatures one can underestimate hydrogen solubility by a few orders of magnitude. In contrast, the effect of the ZPE on the ionization energy of H in Si is negligible. The results can have important implications for characterization of vibrational properties of Si by inelastic neutron scattering, as well as for theoretical estimations of H concentration in Si.
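The claimed orders-of-magnitude effect on solubility follows from simple Boltzmann arithmetic: if the equilibrium concentration goes as exp(-E_f/kT), lowering the formation energy E_f by 0.17 eV multiplies it by exp(0.17 eV / kT). A quick check (the temperatures are chosen arbitrarily for illustration):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def solubility_boost(delta_e_ev, temperature_k):
    """Factor by which equilibrium solubility grows when the formation
    energy drops by delta_e_ev, assuming c ~ exp(-E_f / kT)."""
    return math.exp(delta_e_ev / (K_B * temperature_k))

for t in (300.0, 500.0):
    print(t, solubility_boost(0.17, t))
```

At 300 K the factor is already several hundred, and it grows rapidly as the temperature drops, consistent with the abstract's "few orders of magnitude" at low temperatures.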

  12. A Systematic Approach for Computing Zero-Point Energy, Quantum Partition Function, and Tunneling Effect Based on Kleinert's Variational Perturbation Theory.

    Science.gov (United States)

    Wong, Kin-Yiu; Gao, Jiali

    2008-09-09

In this paper, we describe an automated integration-free path-integral (AIF-PI) method, based on Kleinert's variational perturbation (KP) theory, to treat internuclear quantum-statistical effects in molecular systems. We have developed an analytical method to obtain the centroid potential as a function of the variational parameter in the KP theory, which avoids numerical difficulties in path-integral Monte Carlo or molecular dynamics simulations, especially in the limit of zero temperature. Consequently, the variational calculations using the KP theory can be efficiently carried out beyond the first order, i.e., the Giachetti-Tognetti-Feynman-Kleinert variational approach, for realistic chemical applications. By making use of the approximation of independent instantaneous normal modes (INM), the AIF-PI method can readily be applied to many-body systems. Previously, we have shown that in the INM approximation, the AIF-PI method is accurate for computing the quantum partition function of a water molecule (3 degrees of freedom) and the quantum correction factor for the collinear H(3) reaction rate (2 degrees of freedom). In this work, the accuracy and properties of the KP theory are further investigated by using the first three orders of perturbation on an asymmetric double-well potential, the bond vibrations of H(2), HF, and HCl represented by the Morse potential, and a proton-transfer barrier modeled by the Eckart potential. The zero-point energy, quantum partition function, and tunneling factor for these systems have been determined and are found to be in excellent agreement with the exact quantum results. Using our new analytical results at the zero-temperature limit, we show that the minimum value of the computed centroid potential in the KP theory is in excellent agreement with the ground state energy (zero-point energy) and that the position of the centroid potential minimum is the expectation value of particle position in wave mechanics. The fast convergent property

  13. Conversion of zero point energy into high-energy photons

    Energy Technology Data Exchange (ETDEWEB)

    Ivlev, B. I. [Universidad Autonoma de San Luis Potosi, Instituto de Fisica, Av. Manuel Nava No. 6, Zona Universitaria, 78290 San Luis Potosi, SLP (Mexico)

    2016-11-01

An unusual phenomenon observed in experiments is studied. X-ray laser bursts of keV energy are emitted from a metal where long-living states, resulting in population inversion, are totally unexpected. Anomalous electron-photon states are revealed to be formed inside the metal. These states are associated with a narrow (10⁻¹¹ cm) potential well created by the local reduction of zero point electromagnetic energy. In contrast to the analogous van der Waals potential well, leading to attraction of two hydrogen atoms, the depth of the anomalous well is on the order of 1 MeV. The states in that well are long-living, which results in the population inversion and subsequent laser generation observed. The X-ray emission, occurring in transitions to lower levels, is due to the conversion of zero point electromagnetic energy. (Author)

  14. Energy and thermodynamic considerations involving electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Cole, Daniel C.

    1999-01-01

There has been recent speculation and controversy regarding whether electromagnetic zero-point radiation might be the next candidate in the progression of plentiful energy sources, such as hydrodynamic, chemical, and nuclear sources. Certainly, however, extracting energy from the vacuum seems counterintuitive to most people. Here, these ideas are clarified, drawing on simple and common examples. Known properties of electromagnetic zero-point energy are qualitatively discussed. An outlook on the success of utilizing this energy source is then discussed

  15. Zero-point energy and the Eoetvoes experiment

    International Nuclear Information System (INIS)

    Ross, D.K.

    1999-01-01

The paper shows that the modification of the electromagnetic zero-point energy inside a solid aluminum ball is large enough to be detected by a feasible Eoetvoes-type experiment improved by only a factor of 100 over earlier experiments. Because of the uncertainties surrounding the relationship of the zero-point energy to the cosmological constant and to renormalization effects in general relativity, such an experiment might give a non-null result. This would be a test of the weak equivalence principle and of general relativity itself in regard to a very special, purely quantum-mechanical form of energy

  16. Zero-point energy in early quantum theory

    International Nuclear Information System (INIS)

    Milonni, P.W.; Shih, M.-L.

    1991-01-01

In modern physics the vacuum is not a tranquil void but a quantum state with fluctuations having observable consequences. The present concept of the vacuum has its roots in the zero-point energy of harmonic oscillators and the electromagnetic field, and arose before the development of the formalism of quantum mechanics. This article discusses these roots in the blackbody research of Planck and Einstein in 1912-1913, and the relation to Bose-Einstein statistics and the first indication of wave-particle duality uncovered by Einstein's fluctuation formula. Also considered are the Einstein-Stern theory of specific heats, which invoked zero-point energy in a way which turned out to be incorrect, and the experimental implications of zero-point energy recognized by Mulliken and Debye in vibrational spectroscopy and x-ray diffraction

  17. Uncertainty relations, zero point energy and the linear canonical group

    Science.gov (United States)

    Sudarshan, E. C. G.

    1993-01-01

    The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.

  18. Zero-point quantum fluctuations and dark energy

    International Nuclear Information System (INIS)

    Maggiore, Michele

    2011-01-01

In the Hamiltonian formulation of general relativity, the energy associated to an asymptotically flat space-time with metric g_μν is related to the Hamiltonian H_GR by E = H_GR[g_μν] - H_GR[η_μν], where the subtraction of the flat-space contribution is necessary to get rid of an otherwise divergent boundary term. This classic result indicates that the energy associated to flat space does not gravitate. We apply the same principle to study the effect of the zero-point fluctuations of quantum fields in cosmology, proposing that their contribution to cosmic expansion is obtained by computing the vacuum energy of quantum fields in a Friedmann-Robertson-Walker space-time with Hubble parameter H(t) and subtracting from it the flat-space contribution. Then the term proportional to Λ_c⁴ (where Λ_c is the UV cutoff) cancels, and the remaining (bare) value of the vacuum energy density is proportional to Λ_c² H²(t). After renormalization, this produces a renormalized vacuum energy density ~M² H²(t), where M is the scale where quantum gravity sets in, so for M of the order of the Planck mass a vacuum energy density of the order of the critical density can be obtained without any fine-tuning. The counterterms can be chosen so that the renormalized energy density and pressure satisfy p = wρ, with w a parameter that can be fixed by comparison to the observed value, so, in particular, one can choose w = -1. An energy density evolving in time as H²(t) is however observationally excluded as an explanation for the dominant dark energy component that is responsible for the observed acceleration of the Universe. We rather propose that zero-point vacuum fluctuations provide a new subdominant "dark" contribution to the cosmic expansion that, for a UV scale M slightly smaller than the Planck mass, is consistent with existing limits and potentially detectable.

  19. Progress and results in Zero-Point Energy research

    International Nuclear Information System (INIS)

    King, M.B.

    1992-01-01

    This paper reports that the vacuum polarization of atomic nuclei may trigger a coherence in the zero-point energy (ZPE) whenever a large number of nuclei undergo abrupt, synchronous motion. Experimental evidence arises from the energy anomalies observed in heavy-ion collisions, ion-acoustic plasma oscillations, sonoluminescence, fractoemission, large charge density plasmoids, abrupt electric discharges, and light water cold fusion experiments. Further evidence arises from inventions that utilize coherent ion-acoustic activity to output anomalously excessive power

  20. Isotope effect on the zero point energy shift upon condensation

    International Nuclear Information System (INIS)

    Kornblum, Z.C.

    1977-01-01

The various isotope-dependent and independent atomic and molecular properties that pertain to the isotopic difference between the zero point energy (ZPE) shifts upon condensation have been derived. The theoretical development of the change of the ZPE associated with the internal molecular vibrations, due to the condensation of the gaseous molecules, has been presented on the basis of Wolfsberg's second-order perturbation treatment of the isotope-dependent London dispersion forces between liquid molecules. The isotope effect on the ZPE shift is related to the difference between the sums of the integrated intensities of the infrared absorption bands of the two gaseous isotopic molecules. Each intensity sum is expressed, in part, in terms of partial derivatives of the molecular dipole moment with respect to atomic cartesian coordinates. These derivatives are related to the isotope-independent effective charges of the atoms, which are theoretically calculated by means of a modified CNDO/2 computer program. The effective atomic charges are also calculated from available experimental infrared intensity data. The effects of isotopic substitutions of carbon-13 for carbon-12 and/or deuterium for protium, in ethylene, methane, and the fluorinated methanes, CH₃F, CH₂F₂, CHF₃, and CF₄, on the ZPE shift upon condensation are calculated. These results compare well with the Bigeleisen B-factors, which are experimentally obtained from vapor pressure measurements of the isotopic species. Each of the following molecular properties will tend to increase the isotopic difference between the ZPE shifts upon condensation: (1) large number of highly polar bonds, (2) high molecular weight, (3) non-polar (preferably) or massive molecule, (4) non-hydrogenous molecule, and (5) closely packed liquid molecules. These properties will result in stronger dispersion forces in the liquid phase between the lighter molecules than between the isotopically heavier molecules

  1. Atom-surface interaction: Zero-point energy formalism

    International Nuclear Information System (INIS)

    Paranjape, V.V.

    1985-01-01

The interaction energy between an atom and a surface formed by a polar medium is derived with use of a new approach based on the zero-point energy formalism. It is shown that the energy depends on the separation Z between the atom and the surface. With increasing Z, the energy decreases according to 1/Z³, while with decreasing Z the energy saturates to a finite value. It is also shown that the energy is affected by the velocity of the atom, but this correction is small. Our result for large Z is consistent with the work of Manson and Ritchie [Phys. Rev. B 29, 1084 (1984)], who follow a more traditional approach to the problem

  2. Zero-point energy effects in anion solvation shells.

    Science.gov (United States)

    Habershon, Scott

    2014-05-21

    By comparing classical and quantum-mechanical (path-integral-based) molecular simulations of solvated halide anions X(-) [X = F, Cl, Br and I], we identify an ion-specific quantum contribution to anion-water hydrogen-bond dynamics; this effect has not been identified in previous simulation studies. For anions such as fluoride, which strongly bind water molecules in the first solvation shell, quantum simulations exhibit hydrogen-bond dynamics nearly 40% faster than the corresponding classical results, whereas those anions which form a weakly bound solvation shell, such as iodide, exhibit a quantum effect of around 10%. This observation can be rationalized by considering the different zero-point energy (ZPE) of the water vibrational modes in the first solvation shell; for strongly binding anions, the ZPE of bound water molecules is larger, giving rise to faster dynamics in quantum simulations. These results are consistent with experimental investigations of anion-bound water vibrational and reorientational motion.

  3. Isotope effect on the zero point energy shift upon condensation

    International Nuclear Information System (INIS)

    Kornblum, Z.C.; Ishida, T.

    1977-07-01

The various isotope-dependent and independent atomic and molecular properties that pertain to the isotopic difference between the zero point energy (ZPE) shifts upon condensation were derived. The theoretical development of the change of the ZPE associated with the internal molecular vibrations, due to the condensation of the gaseous molecules, is presented on the basis of Wolfsberg's second-order perturbation treatment of the isotope-dependent London dispersion forces between liquid molecules. The isotope effect on the ZPE shift is related to the difference between the sums of the integrated intensities of the infrared absorption bands of the two gaseous isotopic molecules. The effective atomic charges are also calculated from available experimental infrared intensity data. The effects of isotopic substitutions of carbon-13 for carbon-12 and/or deuterium for protium, in ethylene, methane, and the fluorinated methanes, CH₃F, CH₂F₂, CHF₃, and CF₄, on the ZPE shift upon condensation are calculated. These results compare well with the Bigeleisen B-factors, which are experimentally obtained from vapor pressure measurements of the isotopic species. Each of the following molecular properties will tend to increase the isotopic difference between the ZPE shifts upon condensation: (1) large number of highly polar bonds, (2) high molecular weight, (3) non-polar (preferably) or massive molecule, (4) non-hydrogenous molecule, and (5) closely packed liquid molecules. These properties will result in stronger dispersion forces in the liquid phase between the lighter molecules than between the isotopically heavier molecules. 36 tables, 9 figures

  4. Lorentz invariance and the zero-point stress-energy tensor

    OpenAIRE

    Visser, Matt

    2016-01-01

    Some 65 years ago (1951) Wolfgang Pauli noted that the net zero-point energy density could be set to zero by a carefully fine-tuned cancellation between bosons and fermions. In the current article I will argue in a slightly different direction: The zero-point energy density is only one component of the zero-point stress energy tensor, and it is this tensor quantity that is in many ways the more fundamental object of interest. I shall demonstrate that Lorentz invariance of the zero-point stres...

  5. Transmission of Helium Isotopes through Graphdiyne Pores: Tunneling versus Zero Point Energy Effects.

    Science.gov (United States)

    Hernández, Marta I; Bartolomei, Massimiliano; Campos-Martínez, José

    2015-10-29

Recent progress in the production of new two-dimensional (2D) nanoporous materials is attracting considerable interest for applications to isotope separation in gases. In this paper we report a computational study of the transmission of ⁴He and ³He through the (subnanometer) pores of graphdiyne, a recently synthesized 2D carbon material. The He-graphdiyne interaction is represented by a force field parametrized upon ab initio calculations, and the ⁴He/³He selectivity is analyzed by tunneling-corrected transition state theory. We have found that both zero point energy (of the in-pore degrees of freedom) and tunneling effects play an extraordinary role at low temperatures (≈20-30 K). However, the two quantum features work in opposite directions, in such a way that the selectivity ratio does not reach an acceptable value. Nevertheless, the zero point energy effect is in general larger, so that ⁴He tends to diffuse faster than ³He through the graphdiyne membrane, with a maximum performance at 23 K. Moreover, it is found that the transmission rates are too small in the studied temperature range, precluding practical applications. It is concluded that the role of the in-pore degrees of freedom should be included in computations of the transmission probabilities of molecules through nanoporous materials.

  6. Resolved-sideband Raman cooling of a bound atom to the 3D zero-point energy

    International Nuclear Information System (INIS)

    Monroe, C.; Meekhof, D.M.; King, B.E.; Jefferts, S.R.; Itano, W.M.; Wineland, D.J.; Gould, P.

    1995-01-01

We report laser cooling of a single ⁹Be⁺ ion held in a rf (Paul) ion trap to where it occupies the quantum-mechanical ground state of motion. With the use of resolved-sideband stimulated Raman cooling, the zero point of motion is achieved 98% of the time in 1D and 92% of the time in 3D. Cooling to the zero-point energy appears to be a crucial prerequisite for future experiments such as the realization of simple quantum logic gates applicable to quantum computation. copyright 1995 The American Physical Society

  7. Zero-point energy conservation in classical trajectory simulations: Application to H2CO

    Science.gov (United States)

    Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.

    2018-05-01

A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on whether, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined, with four, on average, giving values within 4 kJ/mol (chemical accuracy) for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates, and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost to the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff, and classical simulations of kinetic energy release should therefore be viewed with caution.
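The idea of a ZPE-corrected surface, V_eff = V plus a local harmonic ZPE estimated from the curvature of V, can be illustrated in one dimension. The Morse parameters, reduced units (hbar = mass = 1), and finite-difference curvature below are illustrative assumptions, not the paper's H2CO construction:

```python
import math

HBAR, MASS = 1.0, 1.0  # illustrative reduced units

def v(q):
    """Illustrative 1D Morse potential with D = 0.5, a = 1.0."""
    return 0.5 * (1.0 - math.exp(-q)) ** 2

def local_zpe(vfun, q, h=1e-4):
    """0.5*hbar*omega(q) from the numerical curvature of vfun at q;
    taken as zero where the curvature is not positive."""
    k = (vfun(q + h) - 2.0 * vfun(q) + vfun(q - h)) / (h * h)
    return 0.5 * HBAR * math.sqrt(k / MASS) if k > 0.0 else 0.0

def v_eff(q):
    """ZPE-corrected potential: bare potential plus local harmonic ZPE."""
    return v(q) + local_zpe(v, q)
```

At the minimum q = 0 the curvature is 2Da² = 1, so v_eff(0) ≈ 0.5, the harmonic ZPE; far out on the dissociation plateau the curvature (and hence the correction) vanishes, so a dissociation threshold measured on v_eff is shifted relative to v, which is the effect the abstract exploits for roaming channels.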

  8. Collective mass and zero-point energy in the generator-coordinate method

    International Nuclear Information System (INIS)

    Fiolhais, C.

    1982-01-01

The aim of the present thesis is the study of the collective mass parameters and the zero-point energies in the GCM framework, with special regard to the fission process. After the derivation of the collective Schroedinger equation in the framework of the Gaussian overlap approximation, the inertia parameters are compared with those of the adiabatic time-dependent Hartree-Fock method. Then the kinetic and the potential zero-point energy occurring in this formulation are studied. Thereafter the practical application of the described formalism is discussed. Finally, a numerical calculation of the GCM mass parameter and the zero-point energy for the fission process on the basis of a two-center shell model with a pairing force in the BCS approximation is presented. (HSI)

  9. Zero-point energy constraint in quasi-classical trajectory calculations.

    Science.gov (United States)

    Xie, Zhen; Bowman, Joel M

    2006-04-27

    A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.
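One way to realize "smoothly eliminating the coupling terms" is to scale the coupling by a switching function of each mode's energy. The tanh form, threshold, and width below are illustrative assumptions, not necessarily the authors' choice; the cubic coupling is the standard Henon-Heiles form with a commonly used strength:

```python
import math

LAM = 0.1118  # commonly used Henon-Heiles coupling strength

def mode_energy(p, q):
    """Harmonic energy of one mode (unit mass and frequency)."""
    return 0.5 * p * p + 0.5 * q * q

def switch(e_mode, e_zpe, width=0.05):
    """Smoothly 0 below the threshold e_zpe, 1 well above it."""
    return 0.5 * (1.0 + math.tanh((e_mode - e_zpe) / width))

def hamiltonian(p1, q1, p2, q2, e_zpe=0.5):
    """Henon-Heiles Hamiltonian with the coupling term switched off
    whenever either mode's energy falls below e_zpe."""
    e1, e2 = mode_energy(p1, q1), mode_energy(p2, q2)
    s = switch(e1, e_zpe) * switch(e2, e_zpe)
    coupling = LAM * (q1 * q1 * q2 - q2 ** 3 / 3.0)
    return e1 + e2 + s * coupling
```

When both modes are well above the threshold the full coupled dynamics is recovered; when either mode approaches its ZPE the modes decouple, so no further energy can leak out of it, which is the constraint the abstract describes.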

  10. Zero-point energy of N perfectly conducting concentric cylindrical shells

    International Nuclear Information System (INIS)

    Tatur, K.; Woods, L.M.

    2008-01-01

    The zero-point (Casimir) energy of N perfectly conducting, infinitely long, concentric cylindrical shells is calculated utilizing the mode summation technique. The obtained convergent expression is studied as a function of size, curvature and number of shells. Limiting cases, such as infinitely close shells or infinite radius shells are also investigated

  11. Phase holonomy, zero-point energy cancellation and supersymmetric quantum mechanics

    International Nuclear Information System (INIS)

    Iida, Shinji; Kuratsuji, Hiroshi

    1987-01-01

    We show that the zero-point energy of bosons is cancelled out by the phase holonomy which is induced by the adiabatic deformation of a boson system coupled with a fermion. This mechanism results in a supersymmetric quantum mechanics as a special case and presents a possible dynamical origin of supersymmetry. (orig.)

  12. Gaussian-3 theory using density functional geometries and zero-point energies

    International Nuclear Information System (INIS)

    Baboul, A.G.; Curtiss, L.A.; Redfern, P.C.; Raghavachari, K.

    1999-01-01

A variation of Gaussian-3 (G3) theory is presented in which the geometries and zero-point energies are obtained from B3LYP density functional theory [B3LYP/6-31G(d)] instead of geometries from second-order perturbation theory [MP2(FU)/6-31G(d)] and zero-point energies from Hartree-Fock theory [HF/6-31G(d)]. This variation, referred to as G3//B3LYP, is assessed on 299 energies (enthalpies of formation, ionization potentials, electron affinities, proton affinities) from the G2/97 test set [J. Chem. Phys. 109, 42 (1998)]. The G3//B3LYP average absolute deviation from experiment for the 299 energies is 0.99 kcal/mol, compared to 1.01 kcal/mol for G3 theory. Generally, the results from the two methods are similar, with some exceptions. G3//B3LYP theory gives significantly improved results for several cases for which MP2 theory is deficient for optimized geometries, such as CN and O₂⁺. However, G3//B3LYP does poorly for ionization potentials that involve a Jahn-Teller distortion in the cation (CH₄⁺, BF₃⁺, BCl₃⁺) because of the B3LYP/6-31G(d) geometries. The G3(MP2) method is also modified to use B3LYP/6-31G(d) geometries and zero-point energies. This variation, referred to as G3(MP2)//B3LYP, has an average absolute deviation of 1.25 kcal/mol, compared to 1.30 kcal/mol for G3(MP2) theory. Thus, use of density functional geometries and zero-point energies in G3 and G3(MP2) theories is a useful alternative to MP2 geometries and HF zero-point energies. copyright 1999 American Institute of Physics

  13. Accurate Anharmonic Zero-Point Energies for Some Combustion-Related Species from Diffusion Monte Carlo.

    Science.gov (United States)

    Harding, Lawrence B; Georgievskii, Yuri; Klippenstein, Stephen J

    2017-06-08

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. The resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).
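    The diffusion Monte Carlo (DMC) approach mentioned above can be illustrated with a minimal one-dimensional sketch. This is not the authors' code: the harmonic potential (exact ZPE = 0.5 in units ħ = m = ω = 1) is used only because the answer is known, and the walker count, time step, and population-control coefficient are illustrative choices.

```python
import numpy as np

def dmc_zero_point_energy(potential, n_walkers=2000, n_steps=5000, dt=0.01, seed=1):
    """Estimate the ground-state (zero-point) energy of a 1D potential by
    simple diffusion Monte Carlo with birth/death branching (no importance
    sampling), in units hbar = m = 1."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)                 # all walkers start at the origin
    e_ref = float(potential(x).mean())      # reference energy, updated each step
    history = []
    for _ in range(n_steps):
        # Free diffusion for one imaginary-time step
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)
        # Branching weight relative to the reference energy
        w = np.exp(-(potential(x) - e_ref) * dt)
        copies = (w + rng.random(x.size)).astype(int)
        x = np.repeat(x, copies)
        if x.size == 0:
            raise RuntimeError("walker population died out; reduce dt")
        # Gentle feedback keeps the walker count near its target
        e_ref = float(potential(x).mean()) + 10.0 * (1.0 - x.size / n_walkers)
        history.append(e_ref)
    # Average the reference energy after an equilibration period
    return float(np.mean(history[len(history) // 2:]))

# Harmonic oscillator V = x^2/2; exact zero-point energy is 0.5
zpe = dmc_zero_point_energy(lambda x: 0.5 * x**2)
```

    The same loop applies to an anharmonic potential: only the `potential` callable changes, at the cost of a larger time-step bias and statistical error.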

  14. Generalized zero point anomaly

    International Nuclear Information System (INIS)

    Nogueira, Jose Alexandre; Maia Junior, Adolfo

    1994-01-01

    The Zero Point Anomaly (ZPA) is defined as the difference between the Effective Potential (EP) and the Zero Point Energy (ZPE). It is shown, for a massive and interacting scalar field, that under very general conditions the renormalized ZPA vanishes, so that the renormalized EP and ZPE coincide. (author). 3 refs

  15. Uncertainties in scaling factors for ab initio vibrational zero-point energies

    Science.gov (United States)

    Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger

    2009-03-01

    Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
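    The scaling recipe and its uncertainty propagation amount to one multiplication each, sketched below. The factor 0.9757 ± 0.0224 for B3LYP/6-31G(d) is taken from the abstract; the function name, unit conversion, and the water-like frequencies in the last line are illustrative assumptions.

```python
# Scale harmonic frequencies to predict a ZPE, and propagate the standard
# uncertainty of the empirical scaling factor (it enters linearly).
CM1_PER_KCAL_MOL = 349.755  # 1 kcal/mol expressed in cm^-1

def scaled_zpe(freqs_cm1, scale=0.9757, u_scale=0.0224):
    """Return (ZPE, uncertainty) in kcal/mol from harmonic frequencies in cm^-1."""
    harmonic_zpe_cm1 = 0.5 * sum(freqs_cm1)        # harmonic ZPE = sum of hv/2
    zpe = scale * harmonic_zpe_cm1 / CM1_PER_KCAL_MOL
    u_zpe = u_scale * harmonic_zpe_cm1 / CM1_PER_KCAL_MOL
    return zpe, u_zpe

# Illustrative (not literature) frequencies for a water-like triatomic
zpe, u = scaled_zpe([1600.0, 3650.0, 3750.0])
```

    Note that the relative uncertainty of the predicted ZPE equals u_scale/scale, about 2.3% here, regardless of the molecule.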

  16. Understanding zero-point energy in the context of classical electromagnetism

    International Nuclear Information System (INIS)

    Boyer, Timothy H

    2016-01-01

    Today’s textbooks of electromagnetism give the particular solution to Maxwell’s equations involving the integral over the charge and current sources at retarded times. However, the texts fail to emphasise that the choice of the incoming-wave boundary conditions corresponding to solutions of the homogeneous Maxwell equations must be made based upon experiment. Here we discuss the role of these incoming-wave boundary conditions for an experimenter with a hypothetical charged harmonic oscillator as his equipment. We describe the observations of the experimenter when located near a radio station or immersed in thermal radiation at temperature T. The classical physicists at the end of the 19th century chose the incoming-wave boundary conditions for the homogeneous Maxwell equations based upon the experimental observations of Lummer and Pringsheim, who measured only the thermal radiation which exceeded the random radiation surrounding their measuring equipment; the physicists concluded that they could take the homogeneous solutions to vanish at zero temperature. Today, at the beginning of the 21st century, classical physicists must choose the incoming-wave boundary conditions for the homogeneous Maxwell equations to correspond to the full radiation spectrum revealed by the recent Casimir force measurements, which detect all the radiation surrounding conducting parallel plates, including the radiation absorbed and emitted by the plates themselves. The random classical radiation spectrum revealed by the Casimir force measurements includes electromagnetic zero-point radiation, which is missing from the spectrum measured by Lummer and Pringsheim, and which cannot be eliminated by going to zero temperature. This zero-point radiation leads to zero-point energy for all systems which have electromagnetic interactions. Thus the choice of the incoming-wave boundary conditions on the homogeneous Maxwell equations is intimately related to the ideas of zero-point energy.

  17. What measurable zero point fluctuations can(not) tell us about dark energy

    International Nuclear Information System (INIS)

    Doran, M.

    2006-05-01

    We show that laboratory experiments cannot measure the absolute value of dark energy. All known experiments rely on electromagnetic interactions. They are thus insensitive to particles and fields that interact only weakly with ordinary matter. In addition, Josephson junction experiments only measure differences in vacuum energy, similar to Casimir force measurements. Gravity, however, couples to the absolute value. Finally, we note that Casimir force measurements have tested zero point fluctuations up to energies of ≈ 10 eV, well above the dark energy scale of ≈ 0.01 eV. Hence, the proposed cut-off in the fluctuation spectrum is ruled out experimentally. (Orig.)

  18. Zero-point energies in the two-center shell model. II

    International Nuclear Information System (INIS)

    Reinhard, P.-G.

    1978-01-01

    The zero-point energy (ZPE) contained in the potential-energy surface of a two-center shell model (TCSM) is evaluated. In extension of previous work, the author uses here the full TCSM with l.s force, smoothing and asymmetry. The results show a critical dependence on the height of the potential barrier between the centers. The ZPE turns out to be non-negligible along the fission path for 236 U, and even more so for lighter systems. It is negligible for surface quadrupole motion and it is just on the fringe of being negligible for motion along the asymmetry coordinate. (Auth.)

  19. Zero-point energies in the two-center shell model

    International Nuclear Information System (INIS)

    Reinhard, P.G.

    1975-01-01

    The zero-point energies (ZPE) contained in the potential-energy surfaces (PES) of a two-center shell model are evaluated. For the c.m. motion of the system as a whole the kinetic ZPE was found to be negligible, whereas it varies appreciably for the rotational and oscillation modes (about 5-9 MeV). For the latter two modes the ZPE also depends sensitively on the changing pairing structure, which can induce strong local fluctuations, particularly in light nuclei. The potential ZPE is very small for heavy nuclei, but might just become important in light nuclei. (Auth.)

  20. Valence coordinate contributions to zero-point energy shifts due to hydrogen isotope substitutions

    International Nuclear Information System (INIS)

    Oi, Takao; Ishida, Takanobu

    1986-01-01

    The orthogonal approximation method for the zero-point energy (ZPE) developed previously has been applied to analyze the shifts in the ZPE, δ(ZPE), due to monodeuterium substitutions in methane, ethylene, ethane and benzene in terms of elements of F and G matrices. The δ(ZPE) can be expressed with a reasonable precision as a sum of contributions of individual valence coordinates and correction terms consisting of the first-order interactions between the coordinates. A further refinement in the precision is achieved by a set of small number of second-order terms, which can be estimated by a simple procedure. (author)

  1. On the zero point energy of the electromagnetic field in the presence of material media

    International Nuclear Information System (INIS)

    Ferreira, L.A.

    1980-12-01

    The Van der Waals force between two semi-infinite material media separated by a slab of a third material is calculated. The calculation generalizes earlier work on this problem by including the retardation of the radiation field and imposing no particular frequency dependence on the electric permittivity and magnetic permeability. The zero-point energy of the electromagnetic field in the presence of rectangular cavities with perfectly conducting walls (ε → ∞) and/or infinitely permeable walls (μ → ∞) is also calculated. Two kinds of regularization are employed. In view of the results obtained, modifications to Casimir's model for the electron are suggested. [pt]

  2. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    The stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles. The scattered radiation has spectral density ρ(ω,r) = ρ(ω)σ(ω)/4πr². The spectral density of the dipole radiation of the Universe is ρ = ∫₀ᴿ nρ(ω,r)4πr²dr. If σ_atom ≈ σ_T (the Thomson cross section), then ρ ≈ ρ(ω)σ_T Rn. Moreover, if ρ = ρ(ω), then σ_T Rn = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the condition σ_T Rn = 1 is equivalent to R/r_e = e²/Gm_p m_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)
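    The closing coincidence is easy to check numerically. The sketch below evaluates e²/(Gm_p m_e) from standard CGS constants and recovers the familiar Dirac large number of order 10³⁹; nothing here comes from the paper itself beyond the formula.

```python
# Evaluate R/r_e = e^2 / (G m_p m_e), the Dirac large-number ratio of
# electric to gravitational attraction between a proton and an electron,
# in Gaussian (CGS) units where e^2/r has units of energy.
e = 4.803e-10      # elementary charge, esu
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
m_p = 1.6726e-24   # proton mass, g
m_e = 9.1094e-28   # electron mass, g

ratio = e**2 / (G * m_p * m_e)  # dimensionless, ~2.3e39
```

    The ratio is independent of the separation, since both forces scale as 1/r².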

  3. Ab initio calculation of the zero-point energy in dense hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Takezawa, Tomoki [Division of Materials Physics, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan); Nagara, Hitose [Division of Materials Physics, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan); Nagao, Kazutaka [Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY (United States)

    2002-11-11

    We have studied the vibrational modes and their frequencies in both atomic and molecular phases of dense hydrogen to find the stable structures and evaluated the zero-point energies (ZPEs) and the effect on molecular dissociation. The most probable structure in the atomic phase is Cs IV, whose vibrational modes have real frequencies over the whole Brillouin zone. The structure in the molecular phase is very close to Cmca, whose vibrational modes with imaginary frequencies work as guides to the stable structure. Our estimates of the ZPE are very close to those of Kagan et al (Kagan Yu, Pushkarev V V and Kholas A 1977 Sov. Phys.-JETP 46 511). Adding the ZPE to the static energy, we estimated its effect on the pressure of the molecular dissociation. The reduction of the dissociation pressure due to the inclusion of the ZPE exceeds 100 GPa.

  4. Ab initio calculation of the zero-point energy in dense hydrogen

    International Nuclear Information System (INIS)

    Takezawa, Tomoki; Nagara, Hitose; Nagao, Kazutaka

    2002-01-01

    We have studied the vibrational modes and their frequencies in both atomic and molecular phases of dense hydrogen to find the stable structures and evaluated the zero-point energies (ZPEs) and the effect on molecular dissociation. The most probable structure in the atomic phase is Cs IV, whose vibrational modes have real frequencies over the whole Brillouin zone. The structure in the molecular phase is very close to Cmca, whose vibrational modes with imaginary frequencies work as guides to the stable structure. Our estimates of the ZPE are very close to those of Kagan et al (Kagan Yu, Pushkarev V V and Kholas A 1977 Sov. Phys.-JETP 46 511). Adding the ZPE to the static energy, we estimated its effect on the pressure of the molecular dissociation. The reduction of the dissociation pressure due to the inclusion of the ZPE exceeds 100 GPa.

  5. The zero-point field. On the search for the cosmic basic energy

    International Nuclear Information System (INIS)

    McTaggart, L.

    2007-02-01

    Does an inexhaustible energy source exist from which all life is fed? A form of energy which penetrates all dead and living expressions of life? Is there a logical, scientific explanation for parapsychological phenomena like clairvoyance, telepathy, ghost healing and synchronicity, and a model for the mode of action of homeopathy? Are there serious researchers, and scientific studies to be taken seriously, which not only address these questions but have also found answers? The British science journalist Lynne McTaggart researched this subject for eight years. ''The zero-point field'' is the result of numerous conversations with renowned physicists, biophysicists, neuroscientists, biologists and consciousness researchers all over the world, who have independently discovered phenomena that combine like puzzle pieces into a fascinating total picture.

  6. Understanding the reaction between muonium atoms and hydrogen molecules: zero point energy, tunnelling, and vibrational adiabaticity

    Science.gov (United States)

    Aldegunde, J.; Jambrina, P. G.; García, E.; Herrero, V. J.; Sáez-Rábanos, V.; Aoiz, F. J.

    2013-11-01

    The advent of very precise measurements of rate coefficients in reactions of muonium (Mu), the lightest hydrogen isotope, with H2 in its ground and first excited vibrational states, and of kinetic isotope effects with respect to heavier isotopes, has triggered a renewed interest in the field of muonic chemistry. The aim of the present article is to review the most recent results on the dynamics and mechanism of the Mu+H2 reaction, to shed light on the importance of quantum effects such as tunnelling, the preservation of zero point energy, and vibrational adiabaticity. In addition to accurate quantum mechanical (QM) calculations, quasiclassical trajectory (QCT) calculations have been run in order to check the reliability of this method for this isotopic variant. It has been found that the reaction with H2(v=0) is dominated by the high zero point energy (ZPE) of the products and that tunnelling is largely irrelevant. Accordingly, both QCT calculations that preserve the products' ZPE and those based on the Ring Polymer Molecular Dynamics methodology can reproduce the QM rate coefficients. However, when the hydrogen molecule is vibrationally excited, QCT calculations fail completely in predicting the huge vibrational enhancement of the reactivity. This failure is attributed to tunnelling, which plays a decisive role in breaking the vibrational adiabaticity when v=1. From the analysis of the results, it can be concluded that tunnelling takes place through the ν1=1 collinear barrier. Somehow, the tunnelling that is missing in the Mu+H2(v=0) reaction is found in Mu+H2(v=1).

  7. A simple model for correcting the zero point energy problem in classical trajectory simulations of polyatomic molecules

    International Nuclear Information System (INIS)

    Miller, W.H.; Hase, W.L.; Darling, C.L.

    1989-01-01

    A simple model is proposed for correcting problems with zero point energy in classical trajectory simulations of dynamical processes in polyatomic molecules. The ''problems'' referred to are that classical mechanics allows the vibrational energy in a mode to decrease below its quantum zero point value, and since the total energy is conserved classically, this can allow too much energy to pool in other modes. The proposed model introduces hard-sphere-like terms in action-angle variables that prevent the vibrational energy in any mode from falling below its zero point value. The algorithm which results is quite simple in terms of the cartesian normal modes of the system: if the energy in a mode k, say, decreases below its zero point value at time t, then at this time the momentum P_k for that mode has its sign changed, and the trajectory continues. This is essentially a time reversal for mode k (only!), and it conserves the total energy of the system. One can think of the model as supplying impulsive ''quantum kicks'' to a mode whose energy attempts to fall below its zero point value, a kind of ''Planck demon'' analogous to a Brownian-like random force. The model is illustrated by application to a model of CH overtone relaxation
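    The constraint step described above, a momentum sign flip for any mode below its zero-point value, can be sketched in a few lines. The function name and the normal-mode array layout are assumptions, with ħ = 1 and mass-weighted coordinates.

```python
import numpy as np

def apply_zpe_constraint(p, q, omega):
    """Momentum-reversal ZPE kick in the spirit of the model above: reverse
    the momentum of every normal mode whose instantaneous classical energy
    has fallen below its zero-point value hbar*omega/2 (hbar = 1).
    p, q, omega are arrays over mass-weighted normal modes."""
    e_mode = 0.5 * p**2 + 0.5 * omega**2 * q**2  # classical mode energies
    zpe = 0.5 * omega                            # quantum zero-point values
    # Sign flip is a time reversal for that mode; total energy is unchanged
    return np.where(e_mode < zpe, -p, p)

# Mode 0 (tiny momentum at the origin) is below its ZPE and gets flipped;
# mode 1 has ample energy and is left alone.
p_new = apply_zpe_constraint(np.array([0.1, 2.0]),
                             np.array([0.0, 0.0]),
                             np.array([1.0, 1.0]))
```

    In a full trajectory code this check would run every integration step, after updating p and q.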

  8. Zero-Point Energy Leakage in Quantum Thermal Bath Molecular Dynamics Simulations.

    Science.gov (United States)

    Brieuc, Fabien; Bronstein, Yael; Dammak, Hichem; Depondt, Philippe; Finocchi, Fabio; Hayoun, Marc

    2016-12-13

    The quantum thermal bath (QTB) has been presented as an alternative to path-integral-based methods to introduce nuclear quantum effects in molecular dynamics simulations. The method has proved to be efficient, yielding accurate results for various systems. However, the QTB method is prone to zero-point energy leakage (ZPEL) in highly anharmonic systems. This is a well-known problem in methods based on classical trajectories where part of the energy of the high-frequency modes is transferred to the low-frequency modes leading to a wrong energy distribution. In some cases, the ZPEL can have dramatic consequences on the properties of the system. Thus, we investigate the ZPEL by testing the QTB method on selected systems with increasing complexity in order to study the conditions and the parameters that influence the leakage. We also analyze the consequences of the ZPEL on the structural and vibrational properties of the system. We find that the leakage is particularly dependent on the damping coefficient and that increasing its value can reduce and, in some cases, completely remove the ZPEL. When using sufficiently high values for the damping coefficient, the expected energy distribution among the vibrational modes is ensured. In this case, the QTB method gives very encouraging results. In particular, the structural properties are well-reproduced. The dynamical properties should be regarded with caution although valuable information can still be extracted from the vibrational spectrum, even for large values of the damping term.

  9. Dissociation energies of six NO2 isotopologues by laser induced fluorescence spectroscopy and zero point energy of some triatomic molecules.

    Science.gov (United States)

    Michalski, G; Jost, R; Sugny, D; Joyeux, M; Thiemens, M

    2004-10-15

    We have measured the rotationless photodissociation threshold of six isotopologues of NO2 containing 14N, 15N, 16O, and 18O isotopes using laser induced fluorescence detection and jet-cooled NO2 (to avoid rotational congestion). For each isotopologue, the spectrum is very dense below the dissociation energy, while fluorescence disappears abruptly above it. The six dissociation energies ranged from 25 128.56 cm⁻¹ for 14N16O2 to 25 171.80 cm⁻¹ for 15N18O2. The zero point energy for the NO2 isotopologues was determined from experimental vibrational energies, application of the Dunham expansion, and from canonical perturbation theory using several potential energy surfaces. Using the experimentally determined dissociation energies and the calculated zero point energies of the parent NO2 isotopologue and of the NO product(s), we determined that there is a common De = 26 051.17 ± 0.70 cm⁻¹ within the Born-Oppenheimer approximation. Canonical perturbation theory was then used to calculate the zero point energy of all stable isotopologues of SO2, CO2, and O3, which are compared with previous determinations.
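    For a diatomic, the Dunham-expansion route to a ZPE is just the v = 0 term value G(0). A minimal sketch follows; the NO-like spectroscopic constants in the last line are illustrative, not the paper's fitted values.

```python
def diatomic_zpe(we, wexe, weye=0.0):
    """Zero-point energy G(0) from the vibrational term value
    G(v) = we*(v+1/2) - wexe*(v+1/2)^2 + weye*(v+1/2)^3,
    with all spectroscopic constants and the result in cm^-1."""
    return we / 2.0 - wexe / 4.0 + weye / 8.0

# Illustrative NO-like constants (cm^-1); real values come from a Dunham fit
zpe = diatomic_zpe(1904.0, 14.1)
```

    Anharmonicity lowers the ZPE below the harmonic value we/2, here by wexe/4.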

  10. Mode-by-mode summation for the zero point electromagnetic energy of an infinite cylinder

    International Nuclear Information System (INIS)

    Milton, K.A.; Nesterenko, A.V.; Nesterenko, V.V.

    1999-01-01

    Using the mode-by-mode summation technique, the zero point energy of the electromagnetic field is calculated for boundary conditions given on the surface of an infinite solid cylinder. It is assumed that the dielectric and magnetic characteristics of the material which makes up the cylinder (ε₁, μ₁) and of that which makes up the surroundings (ε₂, μ₂) obey the relation ε₁μ₁ = ε₂μ₂. With this assumption all the divergences cancel. The divergences are regulated by making use of zeta function techniques. Numerical calculations are carried out for a dilute dielectric-diamagnetic cylinder and for a perfectly conducting cylindrical shell. The Casimir energy in the first case vanishes, and in the second is in complete agreement with that obtained by DeRaad and Milton, who employed a Green's function technique with an ultraviolet regulator. copyright 1999 The American Physical Society

  11. Zero-Point Energy Constraint for Unimolecular Dissociation Reactions. Giving Trajectories Multiple Chances To Dissociate Correctly.

    Science.gov (United States)

    Paul, Amit K; Hase, William L

    2016-01-28

    A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.

  12. Zero-point energy, tunnelling, and vibrational adiabaticity in the Mu + H2 reaction

    Science.gov (United States)

    Mielke, Steven L.; Garrett, Bruce C.; Fleming, Donald G.; Truhlar, Donald G.

    2015-01-01

    Isotopic substitution of muonium for hydrogen provides an unparalleled opportunity to deepen our understanding of quantum mass effects on chemical reactions. A recent topical review in this journal of the thermal and vibrationally state-selected reaction of Mu with H2 raises a number of issues that are addressed here. We show that some earlier quantum mechanical calculations of the Mu + H2 reaction, which are highlighted in this review, and which have been used to benchmark approximate methods, are in error by as much as 19% in the low-temperature limit. We demonstrate that an approximate treatment of the Born-Oppenheimer diagonal correction that was used in some recent studies is not valid for treating the vibrationally state-selected reaction. We also discuss why vibrationally adiabatic potentials that neglect bend zero-point energy are not a useful analytical tool for understanding reaction rates, and why vibrationally non-adiabatic transitions cannot be understood by considering tunnelling through vibrationally adiabatic potentials. Finally, we present calculations on a hierarchical family of potential energy surfaces to assess the sensitivity of rate constants to the quality of the potential surface.

  13. Is zero-point energy physical? A toy model for Casimir-like effect

    International Nuclear Information System (INIS)

    Nikolić, Hrvoje

    2017-01-01

    Zero-point energy is generally known to be unphysical. Casimir effect, however, is often presented as a counterexample, giving rise to a conceptual confusion. To resolve the confusion we study foundational aspects of Casimir effect at a qualitative level, but also at a quantitative level within a simple toy model with only 3 degrees of freedom. In particular, we point out that Casimir vacuum is not a state without photons, and not a ground state for a Hamiltonian that can describe Casimir force. Instead, Casimir vacuum can be related to the photon vacuum by a non-trivial Bogoliubov transformation, and it is a ground state only for an effective Hamiltonian describing Casimir plates at a fixed distance. At the fundamental microscopic level, Casimir force is best viewed as a manifestation of van der Waals forces. - Highlights: • A toy model for Casimir-like effect with only 3 degrees of freedom is constructed. • Casimir vacuum can be related to the photon vacuum by a non-trivial Bogoliubov transformation. • Casimir vacuum is a ground state only for an effective Hamiltonian describing Casimir plates at a fixed distance. • At the fundamental microscopic level, Casimir force is best viewed as a manifestation of van der Waals forces.

  14. On zero-point energy, stability and Hagedorn behavior of Type IIB strings on pp-waves

    International Nuclear Information System (INIS)

    Bigazzi, F.; Cotrone, A.L.

    2003-06-01

    Type IIB strings on many pp-wave backgrounds, supported either by 5-form or 3-form fluxes, have negative light-cone zero-point energy. This raises the question of their stability and poses possible problems in the definition of their thermodynamic properties. After having pointed out the correct way of calculating the zero-point energy, an issue not fully discussed in the literature, we show that these Type IIB strings are classically stable and have well-defined thermal properties, exhibiting a Hagedorn behavior. (author)

  15. The ground state tunneling splitting and the zero point energy of malonaldehyde: a quantum Monte Carlo determination.

    Science.gov (United States)

    Viel, Alexandra; Coutinho-Neto, Maurício D; Manthe, Uwe

    2007-01-14

    Quantum dynamics calculations of the ground state tunneling splitting and of the zero point energy of malonaldehyde on the full-dimensional potential energy surface proposed by Yagi et al. [J. Chem. Phys. 115, 10647 (2001)] are reported. The exact diffusion Monte Carlo and the projection operator imaginary time spectral evolution methods are used to compute accurate benchmark results for this 21-dimensional ab initio potential energy surface. A tunneling splitting of 25.7 ± 0.3 cm⁻¹ is obtained, and the vibrational ground state energy is found to be 15 122 ± 4 cm⁻¹. Isotopic substitution of the tunneling hydrogen reduces the tunneling splitting to 3.21 ± 0.09 cm⁻¹ and shifts the vibrational ground state energy to 14 385 ± 2 cm⁻¹. The computed tunneling splittings are slightly higher than the experimental values, as expected from the potential energy surface, which slightly underestimates the barrier height, and they are slightly lower than the results from instanton theory obtained using the same potential energy surface.

  16. On the contribution of intramolecular zero point energy to the equation of state of solid H2

    Science.gov (United States)

    Chandrasekharan, V.; Etters, R. D.

    1978-01-01

    Experimental evidence shows that the internal zero-point energy of the H2 molecule exhibits a relatively strong pressure dependence in the solid as well as changing considerably upon condensation. It is shown that these effects contribute about 6% to the total sublimation energy and to the pressure in the solid state. Methods to modify the ab initio isolated pair potential to account for these environmental effects are discussed.

  17. Semiclassical wave packet treatment of scattering resonances: application to the delta zero-point energy effect in recombination reactions.

    Science.gov (United States)

    Vetoshkin, Evgeny; Babikov, Dmitri

    2007-09-28

    For the first time Feshbach-type resonances important in recombination reactions are characterized using the semiclassical wave packet method. This approximation allows us to determine the energies, lifetimes, and wave functions of the resonances and also to observe a very interesting correlation between them. Most important is that this approach permits description of a quantum delta-zero-point energy effect in recombination reactions and reproduces the anomalous rates of ozone formation.

  18. Bound state potential energy surface construction: ab initio zero-point energies and vibrationally averaged rotational constants.

    Science.gov (United States)

    Bettens, Ryan P A

    2003-01-15

    Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.

  19. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    Science.gov (United States)

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  20. Thermodynamic Stability of Ice II and Its Hydrogen-Disordered Counterpart: Role of Zero-Point Energy.

    Science.gov (United States)

    Nakamura, Tatsuya; Matsumoto, Masakazu; Yagasaki, Takuma; Tanaka, Hideki

    2016-03-03

    We investigate why no hydrogen-disordered form of ice II has been found in nature, despite the fact that most hydrogen-ordered ices have hydrogen-disordered counterparts. The thermodynamic stability of a set of hydrogen-disordered ice II variants relative to ice II is evaluated theoretically. It is found that ice II is more stable than disordered variants generated so as to satisfy the simple ice rule, owing to the lower zero-point energy as well as the lower pair interaction energy. The residual entropy of the disordered ice II phase gradually compensates for the unfavorable free energy with increasing temperature. The crossover, however, occurs at a high temperature, well above the melting point of ice III. Consequently, the hydrogen-disordered phase does not exist in nature. The thermodynamic stability of partially hydrogen-disordered ices is also scrutinized by examining the free-energy components of several variants obtained by systematic inversion of OH directions in ice II. The potential energy of one variant is lower than that of the ice II structure, but its Gibbs free energy is slightly higher than that of ice II due to the zero-point energy. The slight difference in thermodynamic stability leaves open the possibility of partial hydrogen disorder in real ice II.
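    The entropy-driven crossover described above can be estimated with the Pauling residual entropy k_B ln(3/2) per molecule: disorder wins once T exceeds roughly ΔE/ΔS. The 1 kJ/mol energy penalty used below is an assumed illustrative value, not the paper's number.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def crossover_temperature(delta_e_kj_mol):
    """Temperature at which T * S_residual (Pauling residual entropy,
    k_B ln(3/2) per molecule) overtakes an energy penalty delta_e
    (kJ/mol) paid by the hydrogen-disordered phase."""
    s_residual = K_B * N_A * math.log(1.5)  # ~3.37 J mol^-1 K^-1
    return delta_e_kj_mol * 1e3 / s_residual

# With an assumed 1 kJ/mol penalty, the crossover lands near 300 K
t_star = crossover_temperature(1.0)
```

    A larger penalty pushes the crossover proportionally higher, consistent with the abstract's finding that it sits well above the melting point of ice III.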

  1. Correlation of zero-point energy with molecular structure and molecular forces. 1. Development of the approximation

    International Nuclear Information System (INIS)

    Oi, T.; Ishida, T.

    1983-01-01

    An approximation formula for the zero-point energy (ZPE) has been developed on the basis of Lanczos' tau method, in which the ZPE is expressed in terms of the traces of positive integral powers of the FG matrix. It requires two approximation parameters: a normalization reference point in the domain of vibrational eigenvalues and a range for the expansion. These parameters have been determined for two special cases as well as for the general situation at various values of a weighting-function parameter. The approximation method has been tested on water, carbon dioxide, formaldehyde, and methane. The relative errors are 3% or less for the molecules examined, and the best choice of the parameters depends moderately on the frequency distribution. 25 references, 2 figures, 9 tables
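The central idea of the record above, recovering a harmonic ZPE from traces of powers of the FG matrix rather than from its eigenvalues, can be sketched numerically. The snippet below is a toy reconstruction, not the authors' tau method: it fits √λ with a low-order polynomial over an assumed eigenvalue range, and the 5×5 matrix and the range [4, 10] (reduced units) are invented stand-ins.

```python
import numpy as np

# Toy illustration: approximate the harmonic ZPE, (1/2) * sum_i sqrt(lambda_i)
# in reduced units, from traces of powers of a stand-in "FG matrix", by
# fitting sqrt(x) with a cubic over the assumed eigenvalue range [lo, hi].
rng = np.random.default_rng(0)

# Random symmetric positive-definite 5x5 matrix standing in for FG,
# with eigenvalues placed in [4, 10] (reduced units).
q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
M = q @ np.diag(np.linspace(4.0, 10.0, 5)) @ q.T

lo, hi = 4.0, 10.0
grid = np.linspace(lo, hi, 200)
c = np.polyfit(grid, np.sqrt(grid), 3)        # sqrt(x) ~ sum_k c_k x^k

# Traces of M^0..M^3 then give the ZPE without any diagonalization.
traces = [np.trace(np.linalg.matrix_power(M, k)) for k in range(4)]
zpe_approx = 0.5 * sum(ck * traces[k] for k, ck in enumerate(c[::-1]))

zpe_exact = 0.5 * np.sqrt(np.linalg.eigvalsh(M)).sum()
print(zpe_exact, zpe_approx)
```

The quality of such a scheme depends on how well the polynomial tracks √λ over the eigenvalue range, which is why the record's two parameters (reference point and expansion range) matter.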

  2. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    Science.gov (United States)

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of a multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.
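The sampling step described above, drawing initial conditions from a Gaussian built on a projected Hessian, can be illustrated in a hypothetical 3-D toy problem. This is a sketch of the general idea only, not the authors' algorithm: the Hessian, the constrained direction, and the harmonic widths (ħ = m = 1) are all invented.

```python
import numpy as np

# Sample coordinates from a Gaussian whose covariance comes from a Hessian
# projected onto the subspace orthogonal to one constrained direction.
rng = np.random.default_rng(42)

H = np.array([[4.0, 0.5, 0.0],
              [0.5, 3.0, 0.2],
              [0.0, 0.2, 2.0]])                 # stand-in mass-weighted Hessian
u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # constrained direction

P = np.eye(3) - np.outer(u, u)                 # projector onto allowed subspace
Hp = P @ H @ P                                 # projected Hessian

w, V = np.linalg.eigh(Hp)
keep = w > 1e-10                               # drop the null (constrained) mode
# Harmonic ground-state widths: sigma_i^2 = 1/(2*sqrt(w_i)) with hbar = m = 1.
sigma = (2.0 * np.sqrt(w[keep])) ** -0.5

# Draw samples in the unconstrained normal modes, map back to Cartesians.
y = rng.normal(size=(10000, int(keep.sum()))) * sigma
x = y @ V[:, keep].T

print(np.abs(x @ u).max())                     # component along u stays ~0
```

Because the nonzero-eigenvalue eigenvectors of the projected Hessian are exactly orthogonal to the constrained direction, every sample automatically satisfies the (linearized) constraint.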

  3. A rapid method for measuring maximum density temperatures in water and aqueous solutions for the study of quantum zero point energy effects in these liquids

    International Nuclear Information System (INIS)

    Deeney, F A; O'Leary, J P

    2008-01-01

    The connection between quantum zero-point fluctuations and a density maximum in water and in liquid helium-4 has recently been established. Here we present a description of a simple and rapid method of determining the temperatures at which maximum density occurs in water and aqueous solutions. The technique allows experiments to be carried out in a single session of an undergraduate laboratory, thereby introducing students to the concept of quantum zero-point energy

  4. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: Application to SSSH

    Science.gov (United States)

    Kolmann, Stephen J.; Jordan, Meredith J. T.

    2010-02-01

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol⁻¹ at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol⁻¹ dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol⁻¹ lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol⁻¹ lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol⁻¹ thermochemical accuracy, ZPEs should be calculated using correlated methods with as large a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry-equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.

  5. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: application to SSSH.

    Science.gov (United States)

    Kolmann, Stephen J; Jordan, Meredith J T

    2010-02-07

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol⁻¹ at the CCSD(T)/6-31G* level of theory, has a 4 kJ mol⁻¹ dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol⁻¹ lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol⁻¹ lower in energy at the CCSD(T)/6-31G* level of theory. Ideally, for sub-kJ mol⁻¹ thermochemical accuracy, ZPEs should be calculated using correlated methods with as large a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry-equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.
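The finding that anharmonic ZPEs sit slightly below harmonic ones has a textbook analogue in the Morse oscillator, whose ZPE is known in closed form. The constants below are hypothetical round numbers, not SSSH values:

```python
# For a Morse oscillator, E_n = we*(n + 1/2) - wexe*(n + 1/2)^2, so the
# anharmonic ZPE lies below the harmonic one by exactly wexe/4.
we, wexe = 3000.0, 50.0                  # cm^-1, hypothetical

zpe_harm = 0.5 * we                      # 1500.0 cm^-1
zpe_anh = 0.5 * we - 0.25 * wexe         # 1487.5 cm^-1
print(zpe_harm - zpe_anh)                # 12.5 cm^-1
```

Real polyatomics mix many coupled anharmonic modes, which is why the record resorts to diffusion Monte Carlo rather than a closed-form correction.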

  6. Density functional theory calculations of the lowest energy quintet and triplet states of model hemes: role of functional, basis set, and zero-point energy corrections.

    Science.gov (United States)

    Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman

    2008-04-24

    We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and of the B97-1, OLYP, and TPSS functionals with the 6-31G and 6-31G* basis sets. Only the hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (ΔEel), in several cases resulting in an inversion of the sign of ΔEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state when the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, the effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with the small 6-31G and 6-31G* basis sets. Deviation of the computed frequency of the Fe-Im stretching mode from the experimental value decreased with the basis set in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions. Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results

  7. The Effect of de-Sitter Like Background on Increasing the Zero Point Budget of Dark Energy

    Directory of Open Access Journals (Sweden)

    Haidar Sheikhahmadi

    2016-01-01

    In this work, using a subtraction renormalization mechanism, zero-point quantum fluctuations for bosonic scalar fields in a de Sitter-like background are investigated. By virtue of the observed value of the spectral index ns(k), the best value of the first slow-roll parameter ϵ is obtained for a massive scalar field. In addition, the energy density of vacuum quantum fluctuations for a massless scalar field is obtained. The effects of these fluctuations on the other components of the universe are studied. By solving the conservation equation, the energy density of the different components of the universe is obtained for several examples. For the case in which all components of the universe interact, different dissipation functions Q̃i are considered. The time evolution of ρDE(z)/ρcri(z) shows that Q̃ = 3γH(t)ρm gives the best agreement with observational data, including the CMB, BAO, and SNeIa data sets.

  8. ACS Zero Point Verification

    Science.gov (United States)

    Dolphin, Andrew

    2005-07-01

    The uncertainties in the photometric zero points create a fundamental limit to the accuracy of photometry. The current state of the ACS calibration is surprisingly poor, with zero point uncertainties of 0.03 magnitudes. The reason for this is that the ACS calibrations are based primarily on semi-empirical synthetic zero points and on observations of fields too crowded for accurate ground-based photometry. I propose to remedy this problem by obtaining ACS images of the omega Cen standard field in all nine broadband ACS/WFC filters. This will permit the direct determination of the ACS zero points by comparison with excellent ground-based photometry, and should reduce their uncertainties to less than 0.01 magnitudes. A second benefit is that it will facilitate the comparison of the WFPC2 and ACS photometric systems, which will be important as WFPC2 is phased out and ACS becomes HST's primary imager. Finally, three of the filters will be repeated from my Cycle 12 observations, allowing for a measurement of any change in sensitivity.
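A photometric zero point in the sense of this record is the constant tying instrumental count rates to calibrated magnitudes. The sketch below uses the standard magnitude convention m = -2.5·log10(count rate) + ZP with invented numbers; it is a generic illustration, not ACS calibration data.

```python
import math

def zero_point(m_std, counts, exptime):
    """Zero point inferred from one standard star of known magnitude."""
    return m_std + 2.5 * math.log10(counts / exptime)

# A hypothetical standard star of magnitude 14.20 yields 85000 counts in 10 s:
zp = zero_point(14.20, 85000.0, 10.0)

# The zero point then calibrates any other star on the same image:
m = -2.5 * math.log10(85000.0 / 10.0) + zp    # recovers 14.20 for the standard
print(zp, m)
```

A 0.03-magnitude zero point error, as quoted above, shifts every calibrated magnitude by the same 0.03, which is why it sets a floor on absolute photometric accuracy.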

  9. Isotope effect on the zero point energy shift upon condensation. I. Formulation and application to ethylene, methane, and fluoromethanes

    International Nuclear Information System (INIS)

    Kornblum, Z.C.; Ishida, T.

    1978-01-01

    A method of evaluating the isotope effect (IE) on the zero-point energy (ZPE) shift upon condensation due to the London dispersion forces in the liquid has been formulated. It is expressed, to first order, as a product of an isotope-independent liquid factor and a factor containing isotopic differences in gas-phase properties. The theory has been tested by calculating the effective atomic charges for carbon and hydrogen in ethylene according to the CNDO/2 molecular orbital algorithm, and it correctly predicts the magnitude of the IE on the ZPE shift and the first-order sum rules involving the isotopic ethylenes. However, it fails to explain the difference in vapor pressures of isotopic isomers. The theory has also been applied to the D/H and 13C/12C isotope effects in methane and the fluoromethanes. The results obtained from the CNDO/2 calculations have been compared with the experimental values of the total infrared absorption intensities and of the IE on the ZPE shift of the isotopic methanes. Based on these calculations, the molecular properties that produce stronger dispersion forces in the liquid phase between the lighter molecules than between the isotopically heavier molecules, and hence favor a large IE on the ZPE shift, have been deduced

  10. Zero point energy leakage in condensed phase dynamics: An assessment of quantum simulation methods for liquid water

    Science.gov (United States)

    Habershon, Scott; Manolopoulos, David E.

    2009-12-01

    The approximate quantum mechanical ring polymer molecular dynamics (RPMD) and linearized semiclassical initial value representation (LSC-IVR) methods are compared and contrasted in a study of the dynamics of the flexible q-TIP4P/F water model at room temperature. For this water model, a RPMD simulation gives a diffusion coefficient that is only a few percent larger than the classical diffusion coefficient, whereas a LSC-IVR simulation gives a diffusion coefficient that is three times larger. We attribute this discrepancy to the unphysical leakage of initially quantized zero point energy (ZPE) from the intramolecular to the intermolecular modes of the liquid as the LSC-IVR simulation progresses. In spite of this problem, which is avoided by construction in RPMD, the LSC-IVR may still provide a useful approximation to certain short-time dynamical properties which are not so strongly affected by the ZPE leakage. We illustrate this with an application to the liquid water dipole absorption spectrum, for which the RPMD approximation breaks down at frequencies in the O-H stretching region owing to contamination from the internal modes of the ring polymer. The LSC-IVR does not suffer from this difficulty and it appears to provide quite a promising way to calculate condensed phase vibrational spectra.

  11. Zero point energy leakage in condensed phase dynamics: an assessment of quantum simulation methods for liquid water.

    Science.gov (United States)

    Habershon, Scott; Manolopoulos, David E

    2009-12-28

    The approximate quantum mechanical ring polymer molecular dynamics (RPMD) and linearized semiclassical initial value representation (LSC-IVR) methods are compared and contrasted in a study of the dynamics of the flexible q-TIP4P/F water model at room temperature. For this water model, a RPMD simulation gives a diffusion coefficient that is only a few percent larger than the classical diffusion coefficient, whereas a LSC-IVR simulation gives a diffusion coefficient that is three times larger. We attribute this discrepancy to the unphysical leakage of initially quantized zero point energy (ZPE) from the intramolecular to the intermolecular modes of the liquid as the LSC-IVR simulation progresses. In spite of this problem, which is avoided by construction in RPMD, the LSC-IVR may still provide a useful approximation to certain short-time dynamical properties which are not so strongly affected by the ZPE leakage. We illustrate this with an application to the liquid water dipole absorption spectrum, for which the RPMD approximation breaks down at frequencies in the O-H stretching region owing to contamination from the internal modes of the ring polymer. The LSC-IVR does not suffer from this difficulty and it appears to provide quite a promising way to calculate condensed phase vibrational spectra.
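The leakage mechanism blamed in the two records above can be reproduced in miniature with purely classical trajectories. The toy model below is an illustration only, not the q-TIP4P/F simulations: two oscillators are coupled by a cubic term, V = ω₁²q₁²/2 + ω₂²q₂²/2 + λq₁²q₂, mode 1 is started with its quantum ZPE (ħ = m = 1), and with 2ω₁ = ω₂ the coupling resonantly drains that "zero-point" energy into mode 2, exactly the kind of unphysical intramolecular-to-other-mode flow described above. All parameters are invented.

```python
import numpy as np

w1, w2, lam = 1.0, 2.0, 0.1
dt, nsteps = 0.002, 15000                      # integrate to t = 30

q = np.array([0.0, 0.0])
p = np.array([np.sqrt(2.0 * 0.5 * w1), 0.0])   # mode 1 holds E1 = hbar*w1/2 = 0.5

def force(q):
    f1 = -w1**2 * q[0] - 2.0 * lam * q[0] * q[1]
    f2 = -w2**2 * q[1] - lam * q[0]**2
    return np.array([f1, f2])

def energies(q, p):
    e1 = 0.5 * p[0]**2 + 0.5 * w1**2 * q[0]**2
    e2 = 0.5 * p[1]**2 + 0.5 * w2**2 * q[1]**2
    etot = e1 + e2 + lam * q[0]**2 * q[1]
    return e1, e2, etot

f = force(q)
e0 = energies(q, p)[2]
e2_max = 0.0
for _ in range(nsteps):                        # velocity Verlet
    p = p + 0.5 * dt * f
    q = q + dt * p
    f = force(q)
    p = p + 0.5 * dt * f
    e1, e2, etot = energies(q, p)
    e2_max = max(e2_max, e2)
print(e2_max, etot - e0)                       # mode 2 gains energy; total conserved
```

Nothing in classical mechanics pins the initial 0.5 quantum to mode 1, which is the essence of ZPE leakage; RPMD avoids the problem by construction, as the records note.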

  12. Gaussian-2 theory: Use of higher level correlation methods, quadratic configuration interaction geometries, and second-order Møller-Plesset zero-point energies

    International Nuclear Information System (INIS)

    Curtiss, L.A.; Raghavachari, K.; Pople, J.A.

    1995-01-01

    The performance of Gaussian-2 theory is investigated when higher level theoretical methods are included for correlation effects, geometries, and zero-point energies. A higher level of correlation treatment is examined using the Brueckner doubles [BD(T)] and coupled cluster [CCSD(T)] methods rather than quadratic configuration interaction [QCISD(T)]. The use of geometries optimized at the QCISD level rather than the second-order Møller-Plesset level (MP2), and the use of scaled MP2 zero-point energies rather than scaled Hartree-Fock (HF) zero-point energies, have also been examined. The set of 125 energies used for validation of G2 theory [J. Chem. Phys. 94, 7221 (1991)] is used to test these variations of G2 theory. Inclusion of higher levels of correlation treatment has little effect except in the case of multiply bonded systems, where better agreement is obtained in some cases and poorer agreement in others, so there is no improvement in overall performance. The use of QCISD geometries yields significantly better agreement with experiment for several cases, including the ionization potentials of CS and O2, the electron affinity of CN, and the dissociation energies of N2, O2, CN, and SO2. This leads to slightly better agreement with experiment overall. The MP2 zero-point energies give no overall improvement. These methods may be useful for specific systems.
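The scaled-ZPE ingredient of G2-style schemes is simple to illustrate. The 0.8929 factor is the scale conventionally applied to HF/6-31G* harmonic frequencies in G2 theory; the frequencies below are invented placeholders for a triatomic, not results for any real molecule.

```python
# Harmonic ZPE = (1/2) * sum(h*c*nu) over modes, optionally scaled, in kJ/mol.
H, C, NA = 6.62607015e-34, 2.99792458e10, 6.02214076e23  # J*s, cm/s, 1/mol

def zpe_kj_per_mol(freqs_cm, scale=1.0):
    return scale * 0.5 * sum(freqs_cm) * H * C * NA / 1000.0

freqs = [1600.0, 3650.0, 3750.0]        # cm^-1, hypothetical triatomic
print(zpe_kj_per_mol(freqs))            # unscaled harmonic ZPE
print(zpe_kj_per_mol(freqs, 0.8929))    # G2-style scaled ZPE
```

The empirical scale factor compensates for the systematic overestimation of harmonic frequencies at the HF level; the record's question is whether scaled MP2 frequencies do any better, and it finds they do not overall.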

  13. Communication: A new ab initio potential energy surface for HCl-H2O, diffusion Monte Carlo calculations of D0 and a delocalized zero-point wavefunction.

    Science.gov (United States)

    Mancini, John S; Bowman, Joel M

    2013-03-28

    We report a global, full-dimensional, ab initio potential energy surface describing the HCl-H2O dimer. The potential is constructed from a permutationally invariant fit, using Morse-like variables, to over 44,000 CCSD(T)-F12b/aug-cc-pVTZ energies. The surface describes the complex and dissociated monomers with a total RMS fitting error of 24 cm⁻¹. The normal modes of the minima, low-energy saddle point and separated monomers, the double-minimum isomerization pathway, and the electronic dissociation energy are accurately described by the surface. Rigorous quantum mechanical diffusion Monte Carlo (DMC) calculations are performed to determine the zero-point energy and wavefunction of the complex and the separated fragments. The calculated zero-point energies, together with a De value calculated from CCSD(T) with a complete basis set extrapolation, give a D0 value of 1348 ± 3 cm⁻¹, in good agreement with the recently reported experimental value of 1334 ± 10 cm⁻¹ [B. E. Casterline, A. K. Mollner, L. C. Ch'ng, and H. Reisler, J. Phys. Chem. A 114, 9774 (2010)]. Examination of the DMC wavefunction allows the zero-point geometry to be confidently characterized as dominated by the C2v double-well saddle point rather than the Cs global minimum. Additional support for the delocalized zero-point geometry is given by numerical solutions of the 1D Schrödinger equation along the imaginary-frequency out-of-plane bending mode, where the zero-point energy is calculated to be 52 cm⁻¹ above the isomerization barrier. The D0 of the fully deuterated isotopologue is calculated to be 1476 ± 3 cm⁻¹, which we hope will stand as a benchmark for future experimental work.
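The diffusion Monte Carlo technique used above to extract zero-point energies can be demonstrated on the one system where the answer is known exactly. The sketch below is a minimal, unguided DMC for a 1-D harmonic oscillator (ħ = m = ω = 1, exact ZPE = 0.5) with invented run parameters; it is not the production code used for HCl-H2O.

```python
import numpy as np

def dmc_zpe(n_walkers=2000, n_steps=4000, dt=0.01, seed=1):
    """Estimate the ground-state (zero-point) energy by random-walk DMC."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)                    # all walkers start at the origin
    e_ref, e_hist = 0.0, []
    for step in range(n_steps):
        x = x + rng.normal(0.0, np.sqrt(dt), size=n_walkers)   # free diffusion
        w = np.exp(-(0.5 * x**2 - e_ref) * dt)                 # branching weights
        e_ref -= np.log(w.mean()) / dt         # growth-estimator update of E_ref
        idx = rng.choice(n_walkers, size=n_walkers, p=w / w.sum())
        x = x[idx]                             # resample to a fixed population
        if step >= n_steps // 2:               # average after equilibration
            e_hist.append(e_ref)
    return float(np.mean(e_hist))

zpe = dmc_zpe()
print(zpe)    # ~0.5
```

The same machinery, with a many-dimensional potential in place of 0.5·x², yields the ZPEs that enter the D0 = De − ΔZPE bookkeeping of the record.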

  14. Changes in the Zero-Point Energy of the Protons as the Source of the Binding Energy of Water to A-Phase DNA

    International Nuclear Information System (INIS)

    Reiter, G. F.; Senesi, R.; Mayers, J.

    2010-01-01

    The measured changes in the zero-point kinetic energy of the protons are entirely responsible for the binding energy of water molecules to A phase DNA at the concentration of 6 water molecules/base pair. The changes in kinetic energy can be expected to be a significant contribution to the energy balance in intracellular biological processes and the properties of nano-confined water. The shape of the momentum distribution in the dehydrated A phase is consistent with coherent delocalization of some of the protons in a double well potential, with a separation of the wells of 0.2 Å.

  15. Changes in the zero-point energy of the protons as the source of the binding energy of water to A-phase DNA.

    Science.gov (United States)

    Reiter, G F; Senesi, R; Mayers, J

    2010-10-01

    The measured changes in the zero-point kinetic energy of the protons are entirely responsible for the binding energy of water molecules to A phase DNA at the concentration of 6  water molecules/base pair. The changes in kinetic energy can be expected to be a significant contribution to the energy balance in intracellular biological processes and the properties of nano-confined water. The shape of the momentum distribution in the dehydrated A phase is consistent with coherent delocalization of some of the protons in a double well potential, with a separation of the wells of 0.2 Å.

  16. Referential Zero Point

    Directory of Open Access Journals (Sweden)

    Matjaž Potrč

    2016-04-01

    Perhaps the most important controversy in which ordinary language philosophy was involved is that over definite descriptions, which presents the referential act as a community-involving, communication-intention endeavor, thereby opposing the acquaintance-based, logical-proper-names-inspired account of reference aimed at securing the truth conditions of referential expressions. The problem of reference is that of obtaining access to matters in the world. This access may come through the senses or through descriptions. A review of how the problem of reference is handled shows, though, that one main practice is to rely on relations of acquaintance supporting logical proper names, demonstratives, indexicals, and causal or historical chains. This testifies that the problem of reference involves the zero point, and with it the phenomenology of intentionality. Communication-intention is but one dimension of the rich phenomenology that constitutes an agent's experiential space, his experiential world. The zero point is another constitutive aspect of the phenomenology involved in the referential relation. Realizing that the problem of reference is phenomenology-based opens a new perspective on the contribution of analytical philosophy in this area, reconciling it with the continental approach and demonstrating variations of the impossibility related to the real. Chromatic illumination from the cognitive background empowers the referential act, in the best tradition of ordinary language philosophy.

  17. The ground states of iron(III) porphines: Role of entropy–enthalpy compensation, Fermi correlation, dispersion, and zero-point energies

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2011-01-01

    Porphyrins are much studied due to their biochemical relevance and many applications. The density functional TPSSh has previously accurately described the energy of close-lying electronic states of transition metal systems such as porphyrins, but a recent study questioned this conclusion based on calculations of five iron(III) porphines. Here, we compute the geometries of 80 different electronic configurations and the free energies of the most stable configurations with the functionals TPSSh, TPSS, and B3LYP. Zero-point energies and entropy favor high-spin by ~4 kJ/mol and 0–10 kJ/mol, respectively. When these effects are included, and all electronic configurations are evaluated, TPSSh correctly predicts the spin of all four difficult phenylporphine cases. Dispersion computed with DFT-D3 favors low-spin by 3–53 kJ/mol (TPSSh) or 4–15 kJ/mol (B3LYP) due to the attractive r⁻⁶ term and the shorter distances in low-spin. The very large and diverse corrections from TPSS and TPSSh seem less consistent with the similarity of the systems than when calculated from B3LYP. If the functional-specific corrections are used, B3LYP and TPSSh are of equal accuracy, and TPSS is much worse, whereas if the physically reasonable B3LYP-computed dispersion effect is used for all functionals, TPSSh is accurate for all systems. B3LYP is significantly more accurate when dispersion is added, confirming previous results.

  18. Zero-point energy of massless scalar fields in the presence of soft and semihard boundaries in D dimensions

    International Nuclear Information System (INIS)

    Caruso, F.; De Paola, R.; Svaiter, N.F.

    1998-06-01

    The renormalized energy density of a massless scalar field defined in a D-dimensional flat spacetime is computed in the presence of 'soft' and 'semihard' boundaries, modeled by smoothly increasing potential functions. The sign of the renormalized energy densities for these different confining situations is investigated. The dependence of this energy on D for the cases of 'hard' and 'soft/semihard' boundaries is compared. (author)

  19. The ground states of iron(III) porphines: role of entropy-enthalpy compensation, Fermi correlation, dispersion, and zero-point energies.

    Science.gov (United States)

    Kepp, Kasper P

    2011-10-01

    Porphyrins are much studied due to their biochemical relevance and many applications. The density functional TPSSh has previously accurately described the energy of close-lying electronic states of transition metal systems such as porphyrins. However, a recent study questioned this conclusion based on calculations of five iron(III) porphines. Here, we compute the geometries of 80 different electronic configurations and the free energies of the most stable configurations with the functionals TPSSh, TPSS, and B3LYP. Zero-point energies and entropy favor high-spin by ~4 kJ/mol and 0–10 kJ/mol, respectively. When these effects are included, and all electronic configurations are evaluated, TPSSh correctly predicts the spin of all four difficult phenylporphine cases and is within the lower bound of uncertainty of any known theoretical method for the fifth, iron(III) chloroporphine. Dispersion computed with DFT-D3 favors low-spin by 3–53 kJ/mol (TPSSh) or 4–15 kJ/mol (B3LYP) due to the attractive r⁻⁶ term and the shorter distances in low-spin. The very large and diverse corrections from TPSS and TPSSh seem less consistent with the similarity of the systems than when calculated from B3LYP. If the functional-specific corrections are used, B3LYP and TPSSh are of equal accuracy, and TPSS is much worse, whereas if the physically reasonable B3LYP-computed dispersion effect is used for all functionals, TPSSh is accurate for all systems. B3LYP is significantly more accurate when dispersion is added, confirming previous results. Copyright © 2011 Elsevier Inc. All rights reserved.
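The bookkeeping behind such spin-state assignments, summing electronic, zero-point, entropic, and dispersion terms, can be made concrete. All numbers below are invented for illustration (only the ~4 kJ/mol ZPE preference and the 3–53 kJ/mol dispersion range echo magnitudes quoted in the abstract); this is not the paper's data.

```python
# Free-energy gap dG = G(quintet) - G(triplet); dG > 0 means low-spin wins.
T = 298.15                       # K
R = 8.314462618e-3               # kJ/(mol*K), unused here but sets the units

dE_el = -6.0     # kJ/mol, electronic part favoring high-spin (hypothetical)
dZPE = -4.0      # kJ/mol, ZPE favors high-spin by ~4 kJ/mol (abstract scale)
dS = 0.020       # kJ/(mol*K), extra entropy of high-spin (hypothetical)
dE_disp = 20.0   # kJ/mol, dispersion favoring low-spin (hypothetical)

dG = dE_el + dZPE - T * dS + dE_disp
print(dG)        # positive here: dispersion tips the balance to low-spin
```

Because each correction is of the same few-kJ/mol order as the electronic gap itself, omitting any one of them (as the abstract argues for dispersion) can flip the predicted ground state.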

  20. Zero-point oscillations, zero-point fluctuations, and fluctuations of zero-point oscillations

    International Nuclear Information System (INIS)

    Khalili, Farit Ya

    2003-01-01

    Several physical effects and methodological issues relating to the ground state of an oscillator are considered. Even in the simplest case of an ideal lossless harmonic oscillator, its ground state exhibits properties that are unusual from the classical point of view. In particular, the mean value of the product of two non-negative observables, kinetic and potential energies, is negative in the ground state. It is shown that semiclassical and rigorous quantum approaches yield substantially different results for the ground state energy fluctuations of an oscillator with finite losses. The dependence of zero-point fluctuations on the boundary conditions is considered. Using this dependence, it is possible to transmit information without emitting electromagnetic quanta. Fluctuations of electromagnetic pressure of zero-point oscillations are analyzed, and the corresponding mechanical friction is considered. This friction can be viewed as the most fundamental mechanism limiting the quality factor of mechanical oscillators. Observation of these effects exceeds the possibilities of contemporary experimental physics but almost undoubtedly will be possible in the near future. (methodological notes)

  1. The zero-point field. On the search for the cosmic basic energy; Das Nullpunkt-Feld. Auf der Suche nach der kosmischen Ur-Energie

    Energy Technology Data Exchange (ETDEWEB)

    McTaggart, L.

    2007-02-15

    Does an inexhaustible energy source exist from which all life is fed, a form of energy that penetrates every expression of life, living or dead? Is there a logical, scientific explanation for parapsychological phenomena like clairvoyance, telepathy, spiritual healing, and synchronicity, and a model for the mode of action of homeopathy? Are there serious researchers and scientific studies, to be taken in earnest, that not only address these questions but have also found answers? The British science journalist Lynne McTaggart spent eight years researching them. ''The zero-point field'' is the result of numerous conversations with renowned physicists, biophysicists, neuroscientists, biologists, and consciousness researchers around the world, who have independently discovered phenomena that combine like puzzle pieces into a fascinating total picture.

  2. Zero-point field in curved spaces

    International Nuclear Information System (INIS)

    Hacyan, S.; Sarmiento, A.; Cocho, G.; Soto, F.

    1985-01-01

    Boyer's conjecture that the thermal effects of acceleration are manifestations of the zero-point field is further investigated within the context of quantum field theory in curved spaces. The energy-momentum current for a spinless field is defined rigorously and used as the basis for investigating the energy density observed in a noninertial frame. The following examples are considered: (i) uniformly accelerated observers, (ii) two-dimensional Schwarzschild black holes, (iii) the Einstein universe. The energy spectra which have been previously calculated appear in the present formalism as an additional contribution to the energy of the zero-point field, but particle creation does not occur. It is suggested that the radiation produced by gravitational fields or by acceleration is a manifestation of the zero-point field and of the same nature (whether real or virtual)

  3. Direct dynamics trajectory study of the reaction of formaldehyde cation with D2: vibrational and zero-point energy effects on quasiclassical trajectories.

    Science.gov (United States)

    Liu, Jianbo; Song, Kihyung; Hase, William L; Anderson, Scott L

    2005-12-22

    Quasiclassical, direct dynamics trajectories have been used to study the reaction of formaldehyde cation with molecular hydrogen, simulating the conditions in an experimental study of H2CO+ vibrational effects on this reaction. Effects of five different H2CO+ modes were probed, and we also examined different approaches to treating zero-point energy in quasiclassical trajectories. The calculated absolute cross-sections are in excellent agreement with experiments, and the results provide insight into the reaction mechanism, product scattering behavior, and energy disposal, and how they vary with impact parameter and reactant state. The reaction is sharply orientation-dependent, even at high collision energies, and both trajectories and experiment find that H2CO+ vibration inhibits reaction. On the other hand, the trajectories do not reproduce the anomalously strong effect of ν2+ (the CO stretch). The origin of the discrepancy and approaches for minimizing such problems in quasiclassical trajectories are discussed.

  4. Chemical Reaction Rates from Ring Polymer Molecular Dynamics: Zero Point Energy Conservation in Mu + H2 → MuH + H.

    Science.gov (United States)

    Pérez de Tudela, Ricardo; Aoiz, F J; Suleimanov, Yury V; Manolopoulos, David E

    2012-02-16

    A fundamental issue in the field of reaction dynamics is the inclusion of the quantum mechanical (QM) effects such as zero point energy (ZPE) and tunneling in molecular dynamics simulations, and in particular in the calculation of chemical reaction rates. In this work we study the chemical reaction between a muonium atom and a hydrogen molecule. The recently developed ring polymer molecular dynamics (RPMD) technique is used, and the results are compared with those of other methods. For this reaction, the thermal rate coefficients calculated with RPMD are found to be in excellent agreement with the results of an accurate QM calculation. The very minor discrepancies are within the convergence error even at very low temperatures. This exceptionally good agreement can be attributed to the dominant role of ZPE in the reaction, which is accounted for extremely well by RPMD. Tunneling only plays a minor role in the reaction.

  5. Correlation of zero-point energy with molecular structure and molecular forces. 3. Approximation for H/D isotope shifts and linear frequency sum rule

    International Nuclear Information System (INIS)

    Oi, T.; Ishida, T.

    1984-01-01

    The approximation methods for the zero-point energy (ZPE) previously developed using the Lanczos tau method have been applied to the shifts in ZPE due to hydrogen isotope substitutions. Six types of approximation methods have been compared and analyzed on the basis of a weighting function Ω(λ) ∝ λ^k and the actual eigenvalue shift spectra. The method generated by the most general optimization treatment yields a predictable and generally satisfactory precision of the order of 1% or better. A linear frequency sum rule has been derived, which approximately holds for the sets of isotopic molecules which satisfy the second-order frequency sum rule. 19 references, 3 figures, 3 tables.
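The harmonic picture behind such H/D shifts (not the Lanczos tau machinery of the paper) can be illustrated with the reduced-mass scaling ν_D ≈ ν_H·sqrt(μ_H/μ_D) for a diatomic-like X-H oscillator. The numbers below (a C-H stretch near 3000 cm⁻¹) are hypothetical:

```python
import math

# Harmonic estimate of the ZPE lowering on H -> D substitution, treating the
# X-H stretch as a diatomic oscillator. The 3000 cm^-1 input is illustrative.
CM1_TO_KCAL = 2.85914e-3  # 1 cm^-1 in kcal/mol

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def zpe_shift_on_deuteration(nu_h_cm1, m_x=12.0, m_h=1.008, m_d=2.014):
    """Return (nu_D in cm^-1, ZPE lowering in kcal/mol) for X-H -> X-D."""
    nu_d = nu_h_cm1 * math.sqrt(reduced_mass(m_x, m_h) / reduced_mass(m_x, m_d))
    delta_zpe_cm1 = 0.5 * (nu_h_cm1 - nu_d)   # ZPE = (1/2) nu per harmonic mode
    return nu_d, delta_zpe_cm1 * CM1_TO_KCAL

nu_d, dzpe = zpe_shift_on_deuteration(3000.0)
print(f"nu(C-D) ~ {nu_d:.0f} cm^-1, ZPE lowering ~ {dzpe:.2f} kcal/mol")
```

This order-of-magnitude (roughly 1 kcal/mol per C-H stretch) is why ZPE shifts dominate H/D isotope effects.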

  6. An ab initio potential energy surface for the formic acid dimer: zero-point energy, selected anharmonic fundamental energies, and ground-state tunneling splitting calculated in relaxed 1-4-mode subspaces.

    Science.gov (United States)

    Qu, Chen; Bowman, Joel M

    2016-09-14

    We report a full-dimensional, permutationally invariant potential energy surface (PES) for the cyclic formic acid dimer. This PES is a least-squares fit to 13 475 CCSD(T)-F12a/haTZ (VTZ for H and aVTZ for C and O) energies. The energy-weighted, root-mean-square fitting error is 11 cm(-1) and the barrier for the double-proton transfer on the PES is 2848 cm(-1), in good agreement with the directly calculated ab initio value of 2853 cm(-1). The zero-point vibrational energy of 15 337 ± 7 cm(-1) is obtained from diffusion Monte Carlo calculations. Energies of the fundamentals of fifteen modes are calculated using the vibrational self-consistent field and virtual-state configuration interaction method. The ground-state tunneling splitting is computed using a reduced-dimensional Hamiltonian with relaxed potentials. The highest-level, four-mode coupled calculation gives a tunneling splitting of 0.037 cm(-1), which is roughly twice the experimental value. The tunneling splittings of (DCOOH)2 and (DCOOD)2 from one- to three-mode calculations are, as expected, smaller than that for (HCOOH)2 and consistent with experiment.
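Diffusion Monte Carlo of the kind used here for the zero-point energy can be demonstrated on a toy problem. The sketch below (my own minimal example, not the authors' code) uses unguided walker diffusion with stochastic branching and population-control feedback to recover the ZPE of a 1D harmonic oscillator, E0 = 0.5 in units with ħ = m = ω = 1:

```python
import numpy as np

# Minimal unguided diffusion Monte Carlo for V(x) = x^2/2 (hbar = m = omega = 1).
# The time-averaged reference energy converges to the zero-point energy E0 = 0.5.
rng = np.random.default_rng(1)
n_target = 2000                     # desired walker population
x = np.zeros(n_target)              # walker positions
dt = 0.01
e_ref = 0.5                         # running reference energy
samples = []

for step in range(4000):
    x = x + rng.normal(0.0, np.sqrt(dt), size=x.size)   # free diffusion step
    v = 0.5 * x**2
    w = np.exp(-(v - e_ref) * dt)                       # branching weights
    counts = (w + rng.random(x.size)).astype(int)       # stochastic replicate/kill
    x = np.repeat(x, counts)
    # Population-control feedback keeps the walker count near n_target.
    e_ref = v.mean() + (1.0 - x.size / n_target)
    if step >= 2000:                                    # discard equilibration
        samples.append(e_ref)

e0 = float(np.mean(samples))
print(f"DMC zero-point energy ~ {e0:.3f} (exact 0.5)")
```

Production DMC codes add importance sampling and careful time-step extrapolation; this sketch only shows the diffusion-plus-branching core that yields the ground-state (zero-point) energy.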

  7. A thermodynamically consistent quasi-particle model without temperature-dependent infinity of the vacuum zero point energy

    International Nuclear Information System (INIS)

    Cao Jing; Jiang Yu; Sun Weimin; Zong Hongshi

    2012-01-01

    In this Letter, an improved quasi-particle model is presented. Unlike previous approaches to establishing quasi-particle models, we introduce a classical background field (allowed to depend on the temperature) to deal with the infinity of the thermal vacuum energy present in previous quasi-particle models. After taking into account the effect of this classical background field, the partition function of the quasi-particle system is well defined. On this basis, and following standard ensemble theory, we construct a thermodynamically consistent quasi-particle model without the need for any reformulation of statistical mechanics or of the thermodynamic consistency relation. As an application, we apply the model to the case of (2+1)-flavor QGP at zero chemical potential and finite temperature and obtain a good fit to the recent lattice simulation results of Borsányi et al. A comparison of the results of our model with earlier calculations using other models is also presented. It is shown that our method is general and can be generalized to the case where the effective mass depends not only on the temperature but also on the chemical potential.

  8. Solving discrete zero point problems

    NARCIS (Netherlands)

    van der Laan, G.; Talman, A.J.J.; Yang, Z.F.

    2004-01-01

    In this paper an algorithm is proposed to find a discrete zero point of a function on the collection of integral points in the n-dimensional Euclidean space IRn. Starting with a given integral point, the algorithm generates a finite sequence of adjacent integral simplices of varying dimension and

  9. ACS Photometric Zero Point Verification

    Science.gov (United States)

    Dolphin, Andrew

    2003-07-01

    The uncertainties in the photometric zero points create a fundamental limit to the accuracy of photometry. The current state of the ACS calibration is surprisingly poor, with zero point uncertainties of 0.03 magnitudes in the Johnson filters. The reason for this is that ACS observations of excellent ground-based standard fields, such as the omega Cen field used for WFPC2 calibrations, have not been obtained. Instead, the ACS photometric calibrations are based primarily on semi-empirical synthetic zero points and observations of fields too crowded for accurate ground-based photometry. I propose to remedy this problem by obtaining ACS broadband images of the omega Cen standard field with both the WFC and HRC. This will permit the direct determination of the ACS transformations, and is expected to double the accuracy to which the ACS zero points are known. A second benefit is that it will facilitate the comparison of the WFPC2 and ACS photometric systems, which will be important as WFPC2 is phased out and ACS becomes HST's primary imager.

  10. Full-dimensional quantum calculations of the dissociation energy, zero-point, and 10 K properties of H7+/D7+ clusters using an ab initio potential energy surface.

    Science.gov (United States)

    Barragán, Patricia; Pérez de Tudela, Ricardo; Qu, Chen; Prosmiti, Rita; Bowman, Joel M

    2013-07-14

    Diffusion Monte Carlo (DMC) and path-integral Monte Carlo computations of the vibrational ground state and 10 K equilibrium state properties of the H7+/D7+ cations are presented, using an ab initio full-dimensional potential energy surface. The DMC zero-point energies of the dissociated fragments H5+(D5+) + H2(D2) are also calculated, and from these results and the electronic dissociation energy, dissociation energies, D0, of 752 ± 15 and 980 ± 14 cm(-1) are reported for H7+ and D7+, respectively. Due to the known error in the electronic dissociation energy of the potential surface, these quantities are underestimated by roughly 65 cm(-1). These values are rigorously determined for the first time, and are compared with previous theoretical estimates from electronic structure calculations using standard harmonic analysis, and with available experimental measurements. Probability density distributions are also computed for the ground vibrational and 10 K states of H7+ and D7+. These are qualitatively described as a central H3+/D3+ core surrounded by "solvent" H2/D2 molecules that nearly freely rotate.

  11. The effect of zero-point energy differences on the isotope dependence of the formation of ozone: a classical trajectory study.

    Science.gov (United States)

    Schinke, Reinhard; Fleurat-Lessard, Paul

    2005-03-01

    The effect of zero-point energy differences (ΔZPE) between the possible fragmentation channels of highly excited O3 complexes on the isotope dependence of the formation of ozone is investigated by means of classical trajectory calculations and a strong-collision model. ΔZPE is incorporated in the calculations in a phenomenological way by adjusting the potential energy surface in the product channels so that the correct exothermicities and endothermicities are matched. The model contains two parameters, the frequency of stabilizing collisions ω and an energy-dependent parameter Δdamp, which favors the lower energies in the Maxwell-Boltzmann distribution. The stabilization frequency is used to adjust the pressure dependence of the absolute formation rate, while Δdamp is utilized to control its isotope dependence. The calculations for several isotope combinations of oxygen atoms show a clear dependence of relative formation rates on ΔZPE. The results are similar to those of Gao and Marcus [J. Chem. Phys. 116, 137 (2002)] obtained within a statistical model. In particular, as in the statistical approach, an ad hoc parameter η ≈ 1.14, which effectively reduces the formation rates of the symmetric ABA ozone molecules, has to be introduced in order to obtain good agreement with the measured relative rates of Janssen et al. [Phys. Chem. Chem. Phys. 3, 4718 (2001)]. The temperature dependence of the recombination rate is also addressed.

  12. Trajectory dynamics study of the Ar + CH4 dissociation reaction at high temperatures: the importance of zero-point-energy effects.

    Science.gov (United States)

    Marques, J M C; Martínez-Núñez, E; Fernandez-Ramos, A; Vazquez, S A

    2005-06-23

    Large-scale classical trajectory calculations have been performed to study the reaction Ar + CH4 --> CH3 + H + Ar at temperatures of 2500 K and above. The potential energy surface used for ArCH4 is the sum of the nonbonding pairwise potentials of Hase and collaborators (J. Chem. Phys. 2001, 114, 535), which model the intermolecular interaction, and the CH4 intramolecular potential of Duchovic et al. (J. Phys. Chem. 1984, 88, 1339), which has been modified to account for the H-H repulsion at small bending angles. The thermal rate coefficient has been calculated, and the zero-point energy (ZPE) of the CH3 product molecule has been taken into account in the analysis of the results; in addition, two approaches have been applied for discarding predissociative trajectories. In both cases, good agreement is observed between the experimental and trajectory results after imposing the ZPE of CH3. The energy-transfer parameters have also been obtained from trajectory calculations and compared with available values estimated from experiment using the master-equation formalism; in general, the agreement is good.

  13. Full dimensional (15-dimensional) quantum-dynamical simulation of the protonated water-dimer III: Mixed Jacobi-valence parametrization and benchmark results for the zero point energy, vibrationally excited states, and infrared spectrum.

    Science.gov (United States)

    Vendrell, Oriol; Brill, Michael; Gatti, Fabien; Lauvergnat, David; Meyer, Hans-Dieter

    2009-06-21

    Quantum dynamical calculations are reported for the zero point energy, several low-lying vibrational states, and the infrared spectrum of the H5O2+ cation. The calculations are performed by the multiconfiguration time-dependent Hartree (MCTDH) method. A new vector parametrization based on a mixed Jacobi-valence description of the system is presented. With this parametrization the potential energy surface coupling is reduced with respect to a full Jacobi description, providing a better convergence of the n-mode representation of the potential. However, new coupling terms appear in the kinetic energy operator. These terms are derived and discussed. A mode-combination scheme based on six combined coordinates is used, and the representation of the 15-dimensional potential in terms of a six-combined mode cluster expansion including up to some 7-dimensional grids is discussed. A statistical analysis of the accuracy of the n-mode representation of the potential at all orders is performed. Benchmark, fully converged results are reported for the zero point energy, which lie within the statistical uncertainty of the reference diffusion Monte Carlo result for this system. Some low-lying vibrationally excited eigenstates are computed by block improved relaxation, illustrating the applicability of the approach to large systems. Benchmark calculations of the linear infrared spectrum are provided, and convergence with increasing size of the time-dependent basis and as a function of the order of the n-mode representation is studied. The calculations presented here make use of recent developments in the parallel version of the MCTDH code, which are briefly discussed. We also show that the infrared spectrum can be computed, to a very good approximation, within D2d symmetry, instead of the G16 symmetry used before, in which the complete rotation of one water molecule with respect to the other is allowed, thus simplifying the dynamical problem.

  14. Zero-point length from string fluctuations

    International Nuclear Information System (INIS)

    Fontanini, Michele; Spallucci, Euro; Padmanabhan, T.

    2006-01-01

    One of the leading candidates for quantum gravity, viz. string theory, has the following features incorporated in it: (i) the full spacetime is higher-dimensional, with (possibly) compact extra dimensions; (ii) there is a natural minimal length below which the concept of continuum spacetime needs to be modified by some deeper concept. On the other hand, the existence of a minimal length (zero-point length) in four-dimensional spacetime, with obvious implications as a UV regulator, has often been conjectured as a natural aftermath of any correct quantum theory of gravity. We show that one can incorporate these apparently unrelated pieces of information (zero-point length, extra dimensions, string T-duality) in a consistent framework. This is done in terms of a modified Kaluza-Klein theory that interpolates between (high-energy) string theory and (low-energy) quantum field theory. In this model, the zero-point length in four dimensions is a 'virtual memory' of the length scale of the compact extra dimensions. Such a scale turns out to be determined by the T-duality inherited from the underlying fundamental string theory. From a low-energy perspective, short-distance infinities are cut off by a minimal length which is proportional to the square root of the string slope, i.e., √α′. Thus, we bridge the gap between the string theory domain and the low-energy arena of point-particle quantum field theory.

  15. Mixed quantum/classical investigation of the photodissociation of NH3(Ã) and a practical method for maintaining zero-point energy in classical trajectories

    International Nuclear Information System (INIS)

    Bonhommeau, David; Truhlar, Donald G.

    2008-01-01

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2 = 0,...,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2 = 0 and n2 > 1, as observed in experiments. Distributions obtained for n2 = 1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2 = 0 and n2 = 6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  16. Frequency and zero-point vibrational energy scale factors for double-hybrid density functionals (and other selected methods): can anharmonic force fields be avoided?

    Science.gov (United States)

    Kesharwani, Manoj K; Brauer, Brina; Martin, Jan M L

    2015-03-05

    We have obtained uniform frequency scaling factors λ(harm) (for harmonic frequencies), λ(fund) (for fundamentals), and λ(ZPVE) (for zero-point vibrational energies, ZPVEs) for the Weigend-Ahlrichs and other selected basis sets for MP2, SCS-MP2, and a variety of DFT functionals including double hybrids. For selected levels of theory, we have also obtained scaling factors for true anharmonic fundamentals and ZPVEs obtained from quartic force fields. For harmonic frequencies, the double hybrids B2PLYP, B2GP-PLYP, and DSD-PBEP86 clearly yield the best performance, at RMSD = 10-12 cm(-1) for def2-TZVP and larger basis sets, compared to 5 cm(-1) at the CCSD(T) basis set limit. For ZPVEs, again, the double hybrids are the best performers, reaching root-mean-square deviations (RMSDs) as low as 0.05 kcal/mol, but even mainstream functionals like B3LYP can get down to 0.10 kcal/mol. Explicitly anharmonic ZPVEs are only marginally more accurate. For fundamentals, however, simple uniform scaling is clearly inadequate.
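Applying such a scaling factor is simple: the harmonic ZPVE is half the sum of the harmonic wavenumbers, multiplied by λ(ZPVE). The sketch below uses illustrative, roughly water-like frequencies and a generic placeholder factor of 0.98; the actual factors are method- and basis-set-dependent and are tabulated in the paper:

```python
# Applying a uniform ZPVE scaling factor to harmonic frequencies.
# Frequencies and the 0.98 scale factor below are illustrative placeholders,
# not values taken from the paper.
CM1_TO_KCAL = 2.85914e-3  # 1 cm^-1 in kcal/mol

def scaled_zpve(freqs_cm1, lam_zpve):
    """Harmonic ZPVE = (1/2) * sum(nu_i), scaled by lambda(ZPVE), in kcal/mol."""
    harmonic_cm1 = 0.5 * sum(freqs_cm1)
    return lam_zpve * harmonic_cm1 * CM1_TO_KCAL

freqs = [1600.0, 3650.0, 3750.0]   # hypothetical bend + two stretches
zpve = scaled_zpve(freqs, 0.98)
print(f"scaled ZPVE ~ {zpve:.2f} kcal/mol")
```

Because ZPVE errors enter thermochemistry directly, a well-chosen λ(ZPVE) recovers most of the anharmonic correction at harmonic cost.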

  17. A density functional theory study on the effect of zero-point energy corrections on the methanation profile on Fe(100).

    Science.gov (United States)

    Govender, Ashriti; Ferré, Daniel Curulla; Niemantsverdriet, J W Hans

    2012-04-23

    The thermodynamics and kinetics of the surface hydrogenation of adsorbed atomic carbon to methane, following the reaction sequence C + 4H --> CH4 on Fe(100), have been studied both with and without zero-point energy (ZPE) corrections. The C, CH and CH2 species are most stable at the fourfold hollow site, while CH3 prefers the twofold bridge site. Atomic hydrogen is adsorbed at both the twofold bridge and fourfold hollow sites. Methane is physisorbed on the surface and shows neither orientation nor site preference. It is easily desorbed to the gas phase once formed. The incorporation of ZPE corrections has a very slight, if any, effect on the adsorption energies and does not alter the trends with regard to the most stable adsorption sites. The successive addition of hydrogen to atomic carbon is endothermic up to the addition of the third hydrogen atom, resulting in the methyl species, but exothermic in the final hydrogenation step, which leads to methane. The overall methanation reaction is endothermic when starting from atomic carbon and hydrogen on the surface. Zero-point energy corrections are rarely provided in the literature. Since they are derived from C-H bonds with characteristic vibrations on the order of 2500-3000 cm(-1), the equivalent ZPE of 1/2 hν is on the order of 0.2-0.3 eV and its effect on adsorption energy can in principle be significant. Particularly in reactions between CHx and H, the ZPE correction is expected to be significant, as additional C-H bonds are formed. In this instance, the methanation reaction energy of +0.77 eV increased to +1.45 eV with the inclusion of ZPE corrections, that is, it became less favourable. Therefore, it is crucial to include ZPE corrections when reporting reactions involving hydrogen-containing species. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
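The 1/2 hν estimate quoted above is easy to reproduce. The sketch below converts a single vibrational wavenumber to its per-mode zero-point energy in eV; the 2900 cm⁻¹ input is an illustrative C-H stretch value, not a number from the paper:

```python
# Per-mode zero-point energy E = (1/2) h c nu~, from a wavenumber in cm^-1.
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e10        # speed of light, cm/s
EV = 1.602176634e-19     # J per eV

def zpe_ev(wavenumber_cm1):
    """ZPE of one harmonic mode, (1/2) h c nu~, in eV."""
    return 0.5 * H * C * wavenumber_cm1 / EV

# Illustrative C-H stretch wavenumber:
e_mode = zpe_ev(2900.0)
print(f"1/2 h nu at 2900 cm^-1 ~ {e_mode:.2f} eV")
```

Summed over several newly formed C-H bonds, corrections of this size readily shift a reaction energy by several tenths of an eV, as the abstract reports.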

  18. The zero-point field in non-inertial frames

    International Nuclear Information System (INIS)

    Hacyan, S.

    1985-01-01

    The energy spectrum of the zero-point field as seen in non-inertial frames is investigated. Uniformly accelerated frames and black holes are considered. It is suggested that the radiation produced by black holes or acceleration is a manifestation of the zero-point field and of the same nature (whether real or virtual).

  19. Zero Point of Historical Time

    Directory of Open Access Journals (Sweden)

    R.S. Khakimov

    2014-02-01

    Historical studies are based on the assumption that there is a reference starting point of space-time: the Zero Point of the coordinate system. Due to the bifurcation in the Zero Point, the course of social processes changes sharply and probabilistic causality replaces the deterministic one. For this reason, changes occur in the structure of social relations and in the form of statehood, as well as in the course of ethnic processes. In such a way a new discourse of national behavior emerges. With regard to the history of the Tatars and Tatarstan, such bifurcation points occurred in the periods of the formation (1) of the Turkic Khaganate, which existed from the 6th century onward and became a qualitatively new state system that reformatted old elements in a new matrix, introducing a new discourse of behavior; (2) of the Volga-Kama Bulgaria, where the rivers (Kama, Volga, Vyatka) became the most important trade routes determining the singularity of this state; here the nomadic culture was connected with the settled one, and Islam became the official religion in 922; and (3) of the Golden Horde, a powerful state with a remarkable system of communication, migration of huge human resources over thousands of kilometers, and extensive trade, which caused severe "mutations" in ethnic terms and a huge mixing of ethnic groups. Given the dwelling space of the Tatar population and its evolution within Russia, it can be argued that the Zero Point of Tatar history, which has conveyed the cultural invariants until today, begins in the Golden Horde. Neither in the Turkic Khaganate nor in the Bulgar state, but namely in the Golden Horde. Despite radical changes, the Russian Empire failed to transform the Tatars into Russians. Therefore, contemporary Tatars have preserved the Golden Horde tradition as a cultural invariant.

  1. Mixed quantum/classical investigation of the photodissociation of NH3(Ã) and a practical method for maintaining zero-point energy in classical trajectories

    Science.gov (United States)

    Bonhommeau, David; Truhlar, Donald G.

    2008-07-01

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  2. Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4 : Calculated dehydrogenation enthalpy, including zero point energy, and the structure of the phonon spectra

    NARCIS (Netherlands)

    Marashdeh, A.; Frankcombe, T.J.

    2008-01-01

    The dehydrogenation enthalpies of Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4 have been calculated using density functional theory calculations at the generalized gradient approximation level. Harmonic phonon zero point energy (ZPE) corrections have been included using Parlinski's direct method.

  3. High-resolution studies of tropolone in the S0 and S1 electronic states: isotope driven dynamics in the zero-point energy levels.

    Science.gov (United States)

    Keske, John C; Lin, Wei; Pringle, Wallace C; Novick, Stewart E; Blake, Thomas A; Plusquellic, David F

    2006-02-21

    Rotationally resolved microwave (MW) and ultraviolet (UV) spectra of jet-cooled tropolone have been obtained in the S0 and S1 electronic states using Fourier-transform microwave and UV-laser/molecular-beam spectrometers. In the ground electronic state, the MW spectra of all heavy-atom isotopomers, including one 18O and four 13C isotopomers, were observed in natural abundance. The OD isotopomer was obtained from isotopically enriched samples. The two lowest tunneling states of each isotopomer except 18O have been assigned. The observed inversion splitting for the OD isotopomer is 1523.227(5) MHz. For the asymmetric 13C structures, the magnitudes of tunneling-rotation interactions are found to diminish with decreasing distance between the heavy atom and the tunneling proton. In the limit of closest approach, the 0+ state of 18O was well fitted to an asymmetric rotor Hamiltonian, reflecting significant changes in the tautomerization dynamics. Comparisons of the substituted-atom coordinates with theoretical predictions at the MP2/aug-cc-pVTZ level of theory suggest that the localized 0+ and 0- wave functions of the heavier isotopes favor the C-OH and C=O forms of tropolone, respectively. The only exception occurs for the 13C-OH and 13C=O structures, which correlate to the 0- and 0+ states, respectively. These preferences reflect kinetic isotope effects, as quantitatively verified by the calculated zero-point energy differences between members of the asymmetric atom pairs. From rotationally resolved data of the 0+ <-- 0+ and 0- <-- 0- bands in S1, line-shape fits have yielded Lorentzian linewidths that differ by 12.2(16) MHz over the 19.88(4) cm(-1) interval in S1. The fluorescence decay rates together with previously reported quantum yield data give nonradiative decay rates of 7.7(5) × 10^8 and 8.5(5) × 10^8 s(-1) for the 0+ and 0- levels of the S1 state of tropolone.

  4. Zero-point field in a circular-motion frame

    International Nuclear Information System (INIS)

    Kim, S.K.; Soh, K.S.; Yee, J.H.

    1987-01-01

    The energy spectrum of zero-point fields of a massless scalar field observed by a detector in circular motion is studied by analyzing the Wightman function. It is shown to be quite different from the Planck spectrum which would have been expected from the result of a uniformly accelerated detector. In a nonrelativistic limit zero-point fields with frequencies only up to the first harmonics of the circular-motion frequency contribute dominantly. In an extremely relativistic case the energy spectrum is dominated by a particular pole in the complex proper-time plane

  5. Zero-point Energy is Needed in Molecular Dynamics Calculations to Access the Saddle Point for H+HCN→H2CN* and cis/trans-HCNH* on a New Potential Energy Surface.

    Science.gov (United States)

    Wang, Xiaohong; Bowman, Joel M

    2013-02-12

    We calculate the probabilities for the association reactions H+HCN→H2CN* and cis/trans-HCNH*, using quasiclassical trajectory (QCT) and classical trajectory (CT) calculations, on a new global ab initio potential energy surface (PES) for H2CN including the reaction channels. The surface is a linear least-squares fit of roughly 60 000 CCSD(T)-F12b/aug-cc-pVDZ electronic energies, using a permutationally invariant basis with Morse-type variables. The reaction probabilities are obtained at a variety of collision energies and impact parameters. Large differences in the threshold energies in the two types of dynamics calculations are traced to the absence of zero-point energy in the CT calculations. We argue that the QCT threshold energy is the realistic one. In addition, trajectories find a direct pathway to trans-HCNH, even though there is no obvious transition state (TS) for this pathway. Instead the saddle point (SP) for the addition to cis-HCNH is evidently also the TS for direct formation of trans-HCNH.

  6. XZP + 1d and XZP + 1d-DKH basis sets for second-row elements: application to CCSD(T) zero-point vibrational energy and atomization energy calculations.

    Science.gov (United States)

    Campos, Cesar T; Jorge, Francisco E; Alves, Júlia M A

    2012-09-01

    Recently, segmented all-electron contracted double, triple, quadruple, quintuple, and sextuple zeta valence plus polarization function (XZP, X = D, T, Q, 5, and 6) basis sets for the elements from H to Ar were constructed for use in conjunction with nonrelativistic and Douglas-Kroll-Hess Hamiltonians. In this work, in order to obtain a better description of some molecular properties, the XZP sets for the second-row elements were augmented with high-exponent d "inner polarization functions," which were optimized in the molecular environment at the second-order Møller-Plesset level. At the coupled cluster level of theory, the inclusion of tight d functions for these elements was found to be essential to improve the agreement between theoretical and experimental zero-point vibrational energies (ZPVEs) and atomization energies. For all of the molecules studied, the ZPVE errors were always smaller than 0.5%. The atomization energies were also improved by applying corrections due to core/valence correlation and atomic spin-orbit effects. This led to estimates for the atomization energies of various compounds in the gaseous phase. The largest error (1.2 kcal mol(-1)) was found for SiH4.

  7. A modified variation-perturbation approach to zero-point vibrational motion

    DEFF Research Database (Denmark)

    Åstrand, Per-Olof; Ruud, K.; Sundholm, D.

    2000-01-01

    We present a detailed investigation of the perturbation approach for calculating zero-point vibrational contributions to molecular properties. It is demonstrated that if the sum of the potential energy and the zero-point vibrational energy is regarded as an effective potential energy, the leading...

  8. Fast space travel by vacuum zero-point field perturbations

    International Nuclear Information System (INIS)

    Froning, H. D. Jr.

    1999-01-01

    Forces acting upon an accelerating vehicle that is 'warping' its surrounding space are estimated using the techniques of computational gas/fluid dynamics. Disturbances corresponding to perturbation of the spacetime metric and vacuum zero-point fields by electromagnetic discharges are modeled as changes in the electric permittivity and magnetic permeability of the vacuum of space. It is assumed that resistance to acceleration (vehicle inertia) is, in part, a consequence of zero-point radiation pressure field anisotropy in the warped space region surrounding the craft. The paper shows that resistance to vehicle acceleration can be diminished by spacetime warping that increases the light propagation speed within the warped region. If sufficient warping is achieved, ship speed is slower than light speed within the region that surrounds it, even if the ship is moving faster than light with respect to Earth.

  9. Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4: Calculated dehydrogenation enthalpy, including zero point energy, and the structure of the phonon spectra.

    Science.gov (United States)

    Marashdeh, Ali; Frankcombe, Terry J

    2008-06-21

    The dehydrogenation enthalpies of Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4 have been calculated using density functional theory calculations at the generalized gradient approximation level. Harmonic phonon zero-point energy (ZPE) corrections have been included using Parlinski's direct method. The dehydrogenation of Ca(AlH4)2 is exothermic, indicating a metastable hydride. Calculations for CaAlH5 including ZPE effects indicate that it is not stable enough for a hydrogen storage system operating near ambient conditions. The destabilized combination of LiBH4 with CaH2 is a promising system after ZPE-corrected enthalpy calculations. The calculations confirm that including ZPE effects in the harmonic approximation for the dehydrogenation of Ca(AlH4)2, CaAlH5, and CaH2+6LiBH4 has a significant effect on the calculated reaction enthalpy. The contribution of ZPE to the dehydrogenation enthalpies of Ca(AlH4)2 and CaAlH5 calculated by the direct-method phonon analysis was compared to that calculated by the frozen-phonon method. The crystal structure of CaAlH5 is presented in the more useful standard setting of P21/c symmetry, and the phonon density of states of CaAlH5, significantly different from that of other common complex metal hydrides, is rationalized.
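
The ZPE correction scheme amounts to adding the phonon zero-point difference to the electronic reaction energy. A minimal sketch with placeholder numbers (not the paper's DFT or phonon results):

```python
# Minimal sketch of a ZPE-corrected dehydrogenation enthalpy,
# ΔH ≈ ΔE_elec + ΔZPE. All numbers below are placeholders,
# not the DFT/phonon results of the paper above.
def zpe_corrected_enthalpy(e_products, e_reactants, zpe_products, zpe_reactants):
    """Return (ΔE_elec, ΔZPE, ΔH), all in the same energy unit."""
    d_e = e_products - e_reactants
    d_zpe = zpe_products - zpe_reactants
    return d_e, d_zpe, d_e + d_zpe

# Hypothetical values (kJ/mol per H2 released)
dE, dZPE, dH = zpe_corrected_enthalpy(
    e_products=-10.0, e_reactants=-40.0,     # electronic total energies
    zpe_products=55.0, zpe_reactants=70.0)   # phonon zero-point energies
print(dE, dZPE, dH)  # 30.0 -15.0 15.0
```

Releasing H2 typically lowers the ZPE of the solid phases, so the correction can shift the enthalpy by a non-negligible amount, which is the paper's point.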

  10. Ubiquity of quantum zero-point fluctuations in dislocation glide

    Science.gov (United States)

    Landeiro Dos Reis, Marie; Choudhury, Anshuman; Proville, Laurent

    2017-03-01

    Modeling dislocation glide through atomic-scale simulations in Al, Cu, and Ni and in the solid solution alloys Al(Mg) and Cu(Ag), we show that in the course of plastic deformation the variation of the crystal zero-point energy (ZPE) and the variation of the dislocation potential energy barrier have opposite signs. The multiplicity of situations in which we have observed the same trend allows us to conclude that quantum fluctuations, which give rise to the crystal ZPE, facilitate dislocation glide in most materials, even those composed of atoms heavier than H and He.

  11. Determination of the zero point of charge of kaolin waste from the Northeast of Para, BR

    International Nuclear Information System (INIS)

    Pinto, R.L.S.; Maia, R.F.S.; Felipe, A.M.P.F.

    2012-01-01

    The state of Pará contributes more than 50% of the national production of kaolin, of which 12.5% corresponds to the waste generated. This waste has a composition similar to beneficiated kaolin and can be used as an adsorbent for heavy metals. The viscosity influences the design of equipment that could reuse the waste; the pH changes the pulp viscosity, and determination of the zero point of charge can estimate this variation. This study analyzes the influence of pH on the pulp rheology through determination of the zero point of charge. Tests were made by Scanning Electron Microscopy, Energy Dispersive Spectroscopy, potentiometric titration, and rheological analysis. The results showed a zero point of charge equal to 3.7 and confirmed that the viscosity increases at pH values near the zero point of charge and decreases at pH values away from it. (author)

  12. Zero-point length, extra-dimensions and string T-duality

    OpenAIRE

    Spallucci, Euro; Fontanini, Michele

    2005-01-01

    In this paper, we are going to put into a single consistent framework apparently unrelated pieces of information: zero-point length, extra dimensions, and string T-duality. In more detail, we are going to introduce a modified Kaluza-Klein theory interpolating between (high-energy) string theory and (low-energy) quantum field theory. In our model, the zero-point length is a four-dimensional ``virtual memory'' of the compact extra-dimensions length scale. Such a scale turns out to be determined by T-dual...

  13. Nuclear dynamics of zero point fluctuations in ordinary and in gauge space

    International Nuclear Information System (INIS)

    Broglia, R.A.; Barranco, F.; Gallardo, M.

    1985-01-01

    The change of the nuclear density due to the zero-point fluctuations associated with surface modes is calculated making use of field-theoretical many-body techniques. For medium-heavy nuclei the density renormalizations (vertex corrections) are much smaller than the potential renormalizations (self-energy contributions). The microscopic results agree well with the results of the collective model. Zero-point fluctuations associated with pairing vibrations renormalize the properties of strongly rotating nuclei around the critical frequency at which the pairing phase transition takes place. Fluctuations of the pairing field also play an important role in the sub-barrier fusion cross section of the 58Ni + 64Ni reaction. (orig.)

  14. Zero-point vibrational effects on optical rotation

    DEFF Research Database (Denmark)

    Ruud, K.; Taylor, P.R.; Åstrand, P.-O.

    2001-01-01

    We investigate the effects of molecular vibrations on the optical rotation in two chiral molecules, methyloxirane and trans-2,3-dimethylthiirane. It is shown that the magnitude of zero-point vibrational corrections increases as the electronic contribution to the optical rotation increases....... Vibrational effects thus appear to be important for an overall estimate of the molecular optical rotation, amounting to about 20-30% of the electronic counterpart. We also investigate the special case of chirality introduced in a molecule through isotopic substitution. In this case, the zero-point vibrational...

  15. DISTANCE SCALE ZERO POINTS FROM GALACTIC RR LYRAE STAR PARALLAXES

    Energy Technology Data Exchange (ETDEWEB)

    Benedict, G. Fritz; McArthur, Barbara E.; Barnes, Thomas G. [McDonald Observatory, University of Texas, Austin, TX 78712 (United States); Feast, Michael W. [Centre for Astrophysics, Cosmology and Gravitation, Astronomy Department, University of Cape Town, Rondebosch 7701 (South Africa); Harrison, Thomas E. [Department of Astronomy, New Mexico State University, Las Cruces, NM 88003 (United States); Bean, Jacob L.; Kolenberg, Katrien [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Menzies, John W.; Laney, C. D. [South African Astronomical Observatory, Observatory 7935 (South Africa); Chaboyer, Brian [Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755 (United States); Fossati, Luca [Department of Physics and Astronomy, Open University, Milton Keynes MK7 6AA (United Kingdom); Nesvacil, Nicole [Institute of Astronomy, University of Vienna, A-1180 Vienna (Austria); Smith, Horace A. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Kochukhov, Oleg [Department of Physics and Astronomy, Uppsala University, 75120 Uppsala (Sweden); Nelan, Edmund P.; Taylor, Denise [STScI, Baltimore, MD 21218 (United States); Shulyak, D. V. [Institute of Astrophysics, Georg-August-University, Friedrich-Hund-Platz 1, D-37077 Goettingen (Germany); Freedman, Wendy L. [The Observatories, Carnegie Institution of Washington, Pasadena, CA 91101 (United States)

    2011-12-15

    We present new absolute trigonometric parallaxes and proper motions for seven Population II variable stars: five RR Lyr variables (RZ Cep, XZ Cyg, SU Dra, RR Lyr, and UV Oct) and two type 2 Cepheids (VY Pyx and κ Pav). We obtained these results with astrometric data from Fine Guidance Sensors, white-light interferometers on the Hubble Space Telescope. We find absolute parallaxes in milliarcseconds: RZ Cep, 2.12 ± 0.16 mas; XZ Cyg, 1.67 ± 0.17 mas; SU Dra, 1.42 ± 0.16 mas; RR Lyr, 3.77 ± 0.13 mas; UV Oct, 1.71 ± 0.10 mas; VY Pyx, 6.44 ± 0.23 mas; and κ Pav, 5.57 ± 0.28 mas; an average σ_π/π = 5.4%. With these parallaxes, we compute absolute magnitudes in the V and K bandpasses corrected for interstellar extinction and Lutz-Kelker-Hanson bias. Using these RR Lyrae variable star absolute magnitudes, we then derive zero points for the M_V-[Fe/H] and M_K-[Fe/H]-log P relations. The technique of reduced parallaxes corroborates these results. We employ our new results to determine distances and ages of several Galactic globular clusters and the distance of the Large Magellanic Cloud. The latter is close to that previously derived from Classical Cepheids uncorrected for any metallicity effect, indicating that any such effect is small. We also discuss the somewhat puzzling results obtained for our two type 2 Cepheids.

  16. Muon zero point motion and the hyperfine field in nickel

    International Nuclear Information System (INIS)

    Elzain, M.E.

    1984-09-01

    It is argued that the effect of zero point motion of muons in Ni is to induce local vibrations of the neighbouring Ni atoms. This local vibration reduces the Hubbard correlation and hence decreases the net spin per atom. This acts back to reduce the hyperfine field at the muon site. (author)

  17. Electronic zero-point fluctuation forces inside circuit components

    Science.gov (United States)

    Shahmoon, Ephraim; Leonhardt, Ulf

    2018-01-01

    Among the most intriguing manifestations of quantum zero-point fluctuations are the van der Waals and Casimir forces, often associated with vacuum fluctuations of the electromagnetic field. We study generalized fluctuation potentials acting on internal degrees of freedom of components in electrical circuits. These electronic Casimir-like potentials are induced by the zero-point current fluctuations of any general conductive circuit. For realistic examples of an electromechanical capacitor and a superconducting qubit, our results reveal the possibility of tunable forces between the capacitor plates, or the level shifts of the qubit, respectively. Our analysis suggests an alternative route toward the exploration of Casimir-like fluctuation potentials, namely, by characterizing and measuring them as a function of parameters of the environment. These tunable potentials may be useful for future nanoelectromechanical and quantum technologies. PMID:29719863

  19. The Apparent Lack of Lorentz Invariance in Zero-Point Fields with Truncated Spectra

    Directory of Open Access Journals (Sweden)

    Daywitt W. C.

    2009-01-01

    The integrals that describe the expectation values of the zero-point quantum-field-theoretic vacuum state are semi-infinite, as are the integrals for the stochastic electrodynamic vacuum. The unbounded upper limit to these integrals leads in turn to infinite energy densities and renormalization masses. A number of models have been put forward to truncate the integrals so that these densities and masses are finite. Unfortunately the truncation apparently destroys the Lorentz invariance of the integrals. This note argues that the integrals are naturally truncated by the graininess of the negative-energy Planck vacuum state from which the zero-point vacuum arises, and are thus automatically Lorentz invariant.

  20. The Casimir effect physical manifestations of zero-point energy

    CERN Document Server

    Milton, K A

    2001-01-01

    In its simplest manifestation, the Casimir effect is a quantum force of attraction between two parallel uncharged conducting plates. More generally, it refers to the interaction - which may be either attractive or repulsive - between material bodies due to quantum fluctuations in whatever fields are relevant. It is a local version of the van der Waals force between molecules. Its sweep ranges from perhaps its being the origin of the cosmological constant to its being responsible for the confinement of quarks. This monograph develops the theory of such forces, based primarily on physically tran

  1. Zero-Point Corrections for Isotropic Coupling Constants for Cyclohexadienyl Radical, C6H7 and C6H6Mu: Beyond the Bond Length Change Approximation

    Directory of Open Access Journals (Sweden)

    Bruce S. Hudson

    2013-04-01

    Zero-point vibrational level averaging for electron spin resonance (ESR) and muon spin resonance (µSR) hyperfine coupling constants (HFCCs) is computed for the H and Mu isotopomers of the cyclohexadienyl radical. A local mode approximation previously developed for computation of the effect of replacement of H by D on 13C-NMR chemical shifts is used. DFT methods are used to compute the change in energy and HFCCs when the geometry is displaced from its equilibrium values along the stretch and both bend degrees of freedom. This variation is then averaged over the probability distribution for each degree of freedom. The method is tested using data for the methylene group of C6H7, the cyclohexadienyl radical, and its Mu analog. Good agreement is found for the difference between the HFCCs for Mu and H of CHMu, and between those for H of CHMu and for CH2 of the parent radical methylene group. Since all three of these HFCCs are the same in the absence of the zero-point average, a one-parameter fit of the static HFCC, a(0), can be computed. That value, 45.2 Gauss, is compared to the results of several fixed-geometry electronic structure computations. The HFCC values for the ortho, meta and para H atoms are then discussed.
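
The level-averaging step can be sketched numerically: a property expanded about equilibrium is averaged over the harmonic ground-state probability density using Gauss-Hermite quadrature. The property coefficients and width parameter below are hypothetical, chosen only to show that the linear term averages to zero while the quadratic term shifts the mean:

```python
import numpy as np

def vibrational_average(prop, alpha, npts=20):
    """<A> = integral of A(q) |psi_0(q)|^2 dq for a harmonic ground state
    with |psi_0(q)|^2 = sqrt(alpha/pi) * exp(-alpha*q^2),
    evaluated by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(npts)
    q = x / np.sqrt(alpha)              # change of variable: alpha*q^2 = x^2
    return np.sum(w * prop(q)) / np.sqrt(np.pi)

# Hypothetical HFCC model a(q) = a0 + a1*q + a2*q^2 about equilibrium
a0, a1, a2 = 45.2, 3.0, -8.0
avg = vibrational_average(lambda q: a0 + a1 * q + a2 * q**2, alpha=25.0)
# The odd (linear) term averages to zero; the quadratic term shifts
# the mean by a2 * <q^2> = a2 / (2*alpha).
print(avg)
```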

  2. Effect of zero-point oscillations of nuclear surface on observable properties of nuclei

    International Nuclear Information System (INIS)

    Masterov, V.S.; Rabotnov, N.S.

    1982-01-01

    The possible effect of zero-point oscillations of the nuclear surface on such observable characteristics of a nucleus as the ground-state mass, surface diffuseness, and fission barrier height is considered. Within the framework of a drop model, the binding energy per nucleon of even-even nuclei with mass numbers 8 <= A <= 60 is calculated as a function of A. It is shown that taking even quadrupole and octupole oscillations into account results in marked effects that must be considered when comparing results of model calculations with experiment.

  3. Effect of the ground state correlations in the density distribution and zero point fluctuations

    International Nuclear Information System (INIS)

    Barranco, F.; Broglia, R.A.

    1985-01-01

    The existence of collective vibrations in the spectrum implies that the description of the ground state in an independent-particle model must be corrected. This is because of the zero-point fluctuations induced by the collective vibrations, so that ground-state correlations have to be included. These are taken into account via the diagrammatic expansion of Nuclear Field Theory (NFT), giving rise to a renormalization of the different properties of the ground state. As far as the density distribution is concerned, in a consistent NFT calculation the largest contributions arise from diagrams that cannot be expressed in terms of backward-going amplitudes of the phonon RPA wave function. For a given multipolarity the main correction comes from the low-lying state; the giant resonance is of smaller relevance since it lies at larger energies in the response function. The octupole modes give the dominant contribution, and the effect on average becomes smaller as the multipolarity increases. These results agree quite well with those obtained by taking into account the zero-point fluctuations of the nuclear surface in the collective model with the Esbensen and Bertsch prescription, which the authors use to explain the anomalous behaviour of the mean square radii of the calcium isotopes

  4. The Electromagnetic Zero-Point Field and the Flat Polarizable Vacuum Representation

    CERN Document Server

    Desiato, J T

    2003-01-01

    There are several interpretations of the Polarizable Vacuum (PV). One is the variable speed of light (VSL) approach, that has been shown to be isomorphic to General Relativity (GR) within experimental limits. However, another interpretation is representative of flat geometry, in which intervals of time and distance are measured in local inertial reference frames where the speed of light remains constant. The Flat PV approach leads to variable impedance transformations, governed by the spectral energy content of the Quantum Vacuum’s Electromagnetic (EM) Zero-Point Field (ZPF). The EM ZPF consists of photons. An unlimited number of photons may occupy the same quantum state at an arbitrary set of coordinates. Therefore, the spectral energy of the ZPF may be varied smoothly, represented by a superposition of EM waves with a large number of photons per cubic wavelength. Utilizing the Flat PV representation, a family of frequency dependent solutions of Poisson’s equation are derived, that may be applied as tool...

  5. Threshold-adaptive canny operator based on cross-zero points

    Science.gov (United States)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges are separated from the background. Usually, two static values are chosen as the thresholds based on developer experience[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
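
The paper's cross-zero-point interpolation is not reproduced here, but the idea of deriving both hysteresis thresholds automatically from image statistics can be sketched with the common median-based heuristic:

```python
import numpy as np

def auto_thresholds(gray, sigma=0.33):
    """Median-based automatic low/high thresholds for Canny-style hysteresis.
    This is a common heuristic, not the cross-zero-point method of the paper."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                 # gradient magnitude per pixel
    m = float(np.median(mag))
    low = max(0.0, (1.0 - sigma) * m)
    high = (1.0 + sigma) * m
    return low, high

# Synthetic test image: noisy background with a bright square
rng = np.random.default_rng(0)
img = rng.normal(0.0, 10.0, (64, 64))
img[16:48, 16:48] += 255.0
low, high = auto_thresholds(img)
print(low, high)
```

Because the thresholds track the image's own gradient statistics, they adapt to global illumination changes, which is the same motivation as the adaptive scheme above.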

  6. Zero-point motion in the bag description of the nucleon

    International Nuclear Information System (INIS)

    Brown, G.E.; Durso, J.W.; Johnson, M.B.

    1983-01-01

    In the bag model, confinement of quarks is accomplished by introduction of a boundary condition at some definite radius R, where the energy of the total system is a minimum. This minimum is, however, relatively shallow and energies for substantially different bag radii are not much larger than this minimum value. This indicates that the zero-point motion of the bag surface may be important. In this paper, quantization of the bag surface motion is carried out in a somewhat ad hoc fashion, modelled after the generator coordinate theory in nuclear physics. This procedure unifies a number of ideas previously in the literature; it stresses the anharmonicity of the collective motion. As in earlier treatments, the Roper resonance emerges as a breathing-mode type of excitation of the nucleon. The one- and two-pion decays of the Roper resonance are calculated and the widths are found to fall short of the empirical ones. It is pointed out, however, that decays involving intermediate states containing virtual rho-mesons will enhance the widths. Pion-nucleon scattering in the P 11 channel is constructed in our model and found to agree roughly with experiment. A crucial term in the driving force involves the pion coupling to the nucleon through a virtual rho-meson. With introduction of zero-point motion of the bag surface, the motion of 'bag radius' becomes dependent on precisely which moment of the radius is measured. Our development gives a model for cutting off smoothly the pion-exchange term in the nucleon-nucleon interaction. (orig.)

  7. Effects of zero point vibration on the reaction dynamics of water dimer cations following ionization.

    Science.gov (United States)

    Tachikawa, Hiroto

    2017-06-30

    Reactions of the water dimer cation (H2O)2+ following ionization have been investigated by means of a direct ab initio molecular dynamics method. In particular, the effects of zero-point vibration and zero-point energy (ZPE) on the reaction mechanism were considered in this work. Trajectories were run on two electronic potential energy surfaces (PESs) of (H2O)2+: the ground state (2A″-like state) and the first excited state (2A′-like state). All trajectories on the ground-state PES lead to the proton-transferred product: H2O+(Wd)-H2O(Wa) → OH(Wd)-H3O+(Wa), where Wd and Wa refer to the proton-donor and proton-acceptor water molecules, respectively. The time of proton transfer (PT) varied widely from 15 to 40 fs (average PT time = 30.9 fs). The trajectories on the excited-state PES gave two products: an intermediate complex with a face-to-face structure, (H2O-OH2)+, and a PT product. However, the proton was transferred in the opposite direction; the reverse PT was found on the excited-state PES: H2O(Wd)-H2O+(Wa) → H3O+(Wd)-OH(Wa). This difference occurred because the ionized water molecule in the dimer switched between the ground and excited states. The reaction mechanism of (H2O)2+ and the effects of ZPE are discussed on the basis of these results. © 2017 Wiley Periodicals, Inc.

  8. Investigating the significance of zero-point motion in small molecular clusters of sulphuric acid and water

    International Nuclear Information System (INIS)

    Stinson, Jake L.; Ford, Ian J.; Kathmann, Shawn M.

    2014-01-01

    The nucleation of particles from trace gases in the atmosphere is an important source of cloud condensation nuclei, and these are vital for the formation of clouds in view of the high supersaturations required for homogeneous water droplet nucleation. The methods of quantum chemistry have increasingly been employed to model nucleation due to their high accuracy and efficiency in calculating configurational energies, and nucleation rates can be obtained from the associated free energies of particle formation. However, even in such advanced approaches, it is typically assumed that the nuclei have a classical nature, which is questionable for some systems. The importance of zero-point motion (also known as quantum nuclear dynamics) in modelling small clusters of sulphuric acid and water is tested here using the path integral molecular dynamics method at the density functional level of theory. The general effect of zero-point motion is to distort the mean structure slightly and to promote the extent of proton transfer with respect to classical behaviour. In a particular configuration of one sulphuric acid molecule with three waters, the range of positions explored by a proton between a sulphuric acid and a water molecule at 300 K (a broad range, in contrast to the confinement suggested by geometry optimisation at 0 K) is clearly affected by the inclusion of zero-point motion, and similar effects are observed for other configurations

  9. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Wind turbine yaw control plays an important role in increasing wind turbine production and also in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller, and its accuracy is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error of wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error can help wind farm operators carry out prompt and cost-effective maintenance on yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is first defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to evaluate the power performance in detail. An indicator is proposed in this method for power performance evaluation under each yaw angle, and the yaw angle with the largest indicator is taken as the yaw angle measurement error. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms proved the effectiveness of the proposed method in detecting the zero-point shifting fault and in improving wind turbine performance. The results can help wind farm operators make prompt adjustments when a large yaw angle measurement error exists.
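
The binning idea can be sketched as follows; the synthetic data, the cos³ power model, and the simple per-bin indicator are assumptions for illustration, not the paper's exact indicator:

```python
import numpy as np

# Sketch of the binning idea: partition (wind speed, yaw error) samples into
# bins, score power performance per yaw-error bin, and flag the best-performing
# offset as the suspected zero-point shift. The data, the cos^3 misalignment
# model, and the indicator are all synthetic assumptions for illustration.
rng = np.random.default_rng(1)
n = 5000
wind = rng.uniform(4.0, 12.0, n)              # wind speed, m/s
yaw_err = rng.choice([-8, -4, 0, 4, 8], n)    # measured yaw error bins, deg
true_shift = 4.0                              # injected sensor zero-point shift
# Power follows a cos^3 law of the *actual* misalignment (a common model)
power = wind**3 * np.cos(np.radians(yaw_err - true_shift))**3

def best_yaw_bin(wind, yaw_err, power, bins=(-8, -4, 0, 4, 8)):
    """Return the yaw-error bin with the highest mean normalized power."""
    scores = {b: power[yaw_err == b].mean() / (wind[yaw_err == b]**3).mean()
              for b in bins}
    return max(scores, key=scores.get)

detected = best_yaw_bin(wind, yaw_err, power)
print(detected)  # should recover the injected shift
```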

  10. A Genuine Jahn-Teller System with Compressed Geometry and Quantum Effects Originating from Zero-Point Motion

    DEFF Research Database (Denmark)

    Aramburu, José Antonio; García-Fernández, Pablo; García Lastra, Juan Maria

    2016-01-01

    that the anomalous positive g∥ shift (g∥−g0=0.065) measured at T=20 K obeys the superposition of the |3 z2−r2⟩ and |x2−y2⟩ states driven by quantum effects associated with the zero-point motion, a mechanism first put forward by O'Brien for static Jahn–Teller systems and later extended by Ham to the dynamic Jahn...... of the calculated energy barriers for different Jahn–Teller systems allowed us to explain the origin of the compressed geometry observed for CaO:Ni+....

  11. Quantal and thermal zero point motion formulae of barrier transmission probability

    International Nuclear Information System (INIS)

    Takigawa, N.; Alhassid, Y.; Balantekin, A.B.

    1992-01-01

    A Green's function method is developed to derive quantal zero point motion formulae for the barrier transmission probability in heavy ion fusion reactions corresponding to various nuclear intrinsic degrees of freedom. In order to apply to the decay of a hot nucleus, the formulae are then generalized to the case where the intrinsic degrees of freedom are in thermal equilibrium with a heat bath. A thermal zero point motion formula for vibrational coupling previously obtained through the use of influence functional methods naturally follows, and the effects of rotational coupling are found to be independent of temperature if the deformation is rigid

  12. Zero point and zero suffix methods with robust ranking for solving fully fuzzy transportation problems

    Science.gov (United States)

    Ngastiti, P. T. B.; Surarso, Bayu; Sutimin

    2018-05-01

    The transportation problem concerns distributing a commodity or goods from supply points to demand points while minimizing the transportation cost. A fuzzy transportation problem is one in which the transport costs, supply, and demand are fuzzy quantities. In a case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets with more than one distributor, we use the zero point and zero suffix methods to find the minimum transportation cost. In implementing both methods, we use the robust ranking technique for the defuzzification process. The study results show that the zero suffix method requires fewer iterations than the zero point method.
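
The robust-ranking defuzzification step can be sketched for a triangular fuzzy number, where the λ-cut average reduces to (a1 + 2·a2 + a3)/4; the numerical integration below just confirms the closed form:

```python
# Sketch of robust-ranking defuzzification for a triangular fuzzy number
# (a1, a2, a3). The λ-cut is [a1 + (a2-a1)λ, a3 - (a3-a2)λ] and the rank is
# R(a) = 0.5 * ∫_0^1 (lower(λ) + upper(λ)) dλ = (a1 + 2*a2 + a3) / 4.
def robust_rank(a1, a2, a3, steps=1000):
    total = 0.0
    for i in range(steps):
        lam = (i + 0.5) / steps          # midpoint rule on [0, 1]
        lower = a1 + (a2 - a1) * lam
        upper = a3 - (a3 - a2) * lam
        total += 0.5 * (lower + upper)
    return total / steps

# Fuzzy transport cost (1, 2, 5) defuzzifies to (1 + 2*2 + 5)/4 = 2.5
print(robust_rank(1, 2, 5))
```

Once every fuzzy cost, supply, and demand is ranked to a crisp number this way, the zero point or zero suffix method proceeds as in the crisp transportation problem.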

  13. Computed potential energy surfaces for chemical reactions

    Science.gov (United States)

    Walch, Stephen P.

    1988-01-01

    The minimum energy path for the addition of a hydrogen atom to N2 is characterized in CASSCF/CCI calculations using the (4s3p2d1f/3s2p1d) basis set, with additional single-point calculations at the stationary points of the potential energy surface using the (5s4p3d2f/4s3p2d) basis set. These calculations represent the most extensive set of ab initio calculations completed to date, yielding a zero-point-corrected barrier for HN2 dissociation of approximately 8.5 kcal mol⁻¹. The lifetime of the HN2 species is estimated from the calculated geometries and energetics using both conventional Transition State Theory and a method that utilizes an Eckart barrier to compute one-dimensional quantum mechanical tunneling effects. It is concluded that the lifetime of the HN2 species is very short, greatly limiting its role in both termolecular recombination reactions and combustion processes.
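
The Eckart-barrier tunneling estimate can be sketched with the standard transmission formula for a symmetric Eckart barrier; the barrier height and width parameters below are illustrative, not the computed HN2 values:

```python
import math

# Hedged sketch: quantum transmission through a symmetric Eckart barrier
# V(x) = V0 / cosh^2(alpha*x), the 1-D model used to estimate tunneling
# corrections to transition state theory. Units: hbar = m = 1 (dimensionless);
# V0 and alpha are illustrative, not the HN2 barrier parameters.
def eckart_transmission(E, V0, alpha=1.0):
    if E <= 0.0:
        return 0.0
    k = math.sqrt(2.0 * E)                  # incident wavenumber
    s = 8.0 * V0 / alpha**2                 # barrier-strength parameter
    num = math.sinh(math.pi * k / alpha) ** 2
    if s >= 1.0:
        c = math.cosh(0.5 * math.pi * math.sqrt(s - 1.0)) ** 2
    else:
        c = math.cos(0.5 * math.pi * math.sqrt(1.0 - s)) ** 2
    return num / (num + c)

V0 = 5.0
below = eckart_transmission(0.5 * V0, V0)   # tunneling below the barrier top
above = eckart_transmission(2.0 * V0, V0)   # nearly classical above it
print(below, above)
```

The nonzero transmission below the barrier top is what shortens the estimated HN2 lifetime relative to a purely classical over-the-barrier picture.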

  14. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  15. Influence of the zero point oscillation on the charge transfer in the heavy-ion deep inelastic collisions

    Energy Technology Data Exchange (ETDEWEB)

    Lin-xiao, Ge; Wen-qing, Shen; Chao-fan, Yu

    1982-01-01

    We discuss the variance of the charge distribution in heavy-ion deep inelastic collisions (DIC) on the basis of the Langevin equation. In order to explain the difference in the initial slope (early stage) of the charge distribution for different reaction systems and different bombarding energies, an initial condition for the charge drift in the early stage of DIC is introduced. It is given by harmonic or anharmonic motion around the zero point and depends closely on the nuclear structure and incident energy. The difference in the inertial mass and stiffness parameters may be one of the reasons for the difference in charge transfer. In addition, we analyse the characteristics of the inertial mass parameter.

  16. The Schrödinger Equation, the Zero-Point Electromagnetic Radiation, and the Photoelectric Effect

    Science.gov (United States)

    França, H. M.; Kamimura, A.; Barreto, G. A.

    2016-04-01

    A Schrödinger-type equation for a mathematical probability amplitude Ψ(x, t) is derived from the generalized phase space Liouville equation valid for the motion of a microscopic particle, with mass M and charge e, moving in a potential V(x). The particle phase space probability density is denoted Q(x, p, t), and the entire system is immersed in the "vacuum" zero-point electromagnetic radiation. We show, in the first part of the paper, that the generalized Liouville equation is reduced to a simpler Liouville equation in the equilibrium limit where the small radiative corrections cancel each other approximately. This leads us to a simpler Liouville equation that will facilitate the calculations in the second part of the paper. Within this second part, we address ourselves to the following task: Since the Schrödinger equation depends on ħ, and the zero-point electromagnetic spectral distribution, given by ρ₀(ω) = ħω³/2π²c³, also depends on ħ, it is interesting to verify the possible dynamical connection between ρ₀(ω) and the Schrödinger equation. We shall prove that Planck's constant, present in the momentum operator of the Schrödinger equation, is deeply related to the ubiquitous zero-point electromagnetic radiation with spectral distribution ρ₀(ω). For simplicity, we do not use the hypothesis of the existence of the L. de Broglie matter waves. The implications of our study for the standard interpretation of the photoelectric effect are discussed by considering the main characteristics of the phenomenon. We also mention, briefly, the effects of the zero-point radiation in the tunneling phenomenon and the Compton effect.

  17. Progress in establishing a connection between the electromagnetic zero-point field and inertia

    International Nuclear Information System (INIS)

    Haisch, Bernhard; Rueda, Alfonso

    1999-01-01

    We report on the progress of a NASA-funded study being carried out at the Lockheed Martin Advanced Technology Center in Palo Alto and the California State University in Long Beach to investigate the proposed link between the zero-point field of the quantum vacuum and inertia. It is well known that an accelerating observer will experience a bath of radiation resulting from the quantum vacuum which mimics that of a heat bath, the so-called Davies-Unruh effect. We have further analyzed this problem of an accelerated object moving through the vacuum and have shown that the zero-point field will yield a non-zero Poynting vector to an accelerating observer. Scattering of this radiation by the quarks and electrons constituting matter would result in an acceleration-dependent reaction force that would appear to be the origin of inertia of matter (Rueda and Haisch 1998a, 1998b). In the subrelativistic case this inertia reaction force is exactly Newtonian, and in the relativistic case it exactly reproduces the well-known relativistic extension of Newton's law. This analysis demonstrates that both the ordinary, F = ma, and the relativistic forms of Newton's equation of motion may be derived from Maxwell's equations as applied to the electromagnetic zero-point field. We expect to be able to extend this analysis in the future to more general versions of the quantum vacuum than just the electromagnetic one discussed herein.

  18. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  19. Radiological zero point in the Taaone lagoon before starting running the Jacques Chirac hospital in Tahiti

    International Nuclear Information System (INIS)

    Bouisset, P.

    2009-01-01

    Within the perspective of the implementation of a radiotherapy and diagnosis department in the future Jacques Chirac hospital in Tahiti, and with the aim of having a radio-ecological zero point in the lagoon where liquid effluents from the hospital will be rejected, this report describes the sampling campaign performed in the Taaone' bay in Tahiti by collecting water and sediments as well as seaweeds and fishes. It also describes how these samples have been prepared and analysed, gives and comments results of radioactivity and ionizing radiations measurements performed on these samples

  20. Random electrodynamics: the theory of classical electrodynamics with classical electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Boyer, T.H.

    1975-01-01

    The theory of classical electrodynamics with classical electromagnetic zero-point radiation is outlined here under the title random electrodynamics. The work represents a reanalysis of the bounds of validity of classical electron theory which should sharpen the understanding of the connections and distinctions between classical and quantum theories. The new theory of random electrodynamics is a classical electron theory involving Newton's equations for particle motion due to the Lorentz force, and Maxwell's equations for the electromagnetic fields with point particles as sources. However, the theory departs from the classical electron theory of Lorentz in that it adopts a new boundary condition on Maxwell's equations. It is assumed that the homogeneous boundary condition involves random classical electromagnetic radiation with a Lorentz-invariant spectrum, classical electromagnetic zero-point radiation. The implications of random electrodynamics for atomic structure, atomic spectra, and particle-interference effects are discussed on an order-of-magnitude or heuristic level. Some detailed mathematical connections and some merely heuristic connections are noted between random electrodynamics and quantum theory. (U.S.)

  1. Quantum features derived from the classical model of a bouncer-walker coupled to a zero-point field

    International Nuclear Information System (INIS)

    Schwabl, H; Mesa Pascasio, J; Fussy, S; Grössing, G

    2012-01-01

    In our bouncer-walker model a quantum is a nonequilibrium steady-state maintained by a permanent throughput of energy. Specifically, we consider a 'particle' as a bouncer whose oscillations are phase-locked with those of the energy-momentum reservoir of the zero-point field (ZPF), and we combine this with the random-walk model of the walker, again driven by the ZPF. Starting with this classical toy model of the bouncer-walker we were able to derive fundamental elements of quantum theory. Here this toy model is revisited with special emphasis on the mechanism of emergence. In particular, the derivation of the total energy ħω₀ and the coupling to the ZPF are clarified. For this we make use of a sub-quantum equipartition theorem. It can further be shown that the couplings of both bouncer and walker to the ZPF are identical. Then we follow this path in accordance with Ref. [2], expanding the view from the particle in its rest frame to a particle in motion. The basic features of ballistic diffusion are derived, especially the diffusion constant D, thus providing a missing link between the different approaches of our previous works.

  2. Derivation of the blackbody radiation spectrum from the equivalence principle in classical physics with classical electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Boyer, T.H.

    1984-01-01

    A derivation of Planck's spectrum including zero-point radiation is given within classical physics from recent results involving the thermal effects of acceleration through classical electromagnetic zero-point radiation. A harmonic electric-dipole oscillator undergoing a uniform acceleration a through classical electromagnetic zero-point radiation responds as would the same oscillator in an inertial frame when not in zero-point radiation but in a different spectrum of random classical radiation. Since the equivalence principle tells us that the oscillator supported in a gravitational field g = -a will respond in the same way, we see that in a gravitational field we can construct a perpetual-motion machine based on this different spectrum unless the different spectrum corresponds to that of thermal equilibrium at a finite temperature. Therefore, assuming the absence of perpetual-motion machines of the first kind in a gravitational field, we conclude that the response of an oscillator accelerating through classical zero-point radiation must be that of a thermal system. This then determines the blackbody radiation spectrum in an inertial frame which turns out to be exactly Planck's spectrum including zero-point radiation
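    The spectrum Boyer derives, Planck's spectrum including zero-point radiation, can be written compactly as ρ(ω, T) = (ħω³/2π²c³) coth(ħω/2kT), which splits exactly into the temperature-independent zero-point term ρ₀(ω) = ħω³/2π²c³ plus the thermal Planck term. A minimal numerical check of that identity, in natural units (ħ = c = k = 1):

```python
import math

def rho_zero_point(w):
    """Zero-point spectral density rho_0 = hbar w^3 / (2 pi^2 c^3), natural units."""
    return w ** 3 / (2.0 * math.pi ** 2)

def rho_planck_thermal(w, T):
    """Thermal (Planck) part: hbar w^3 / (pi^2 c^3) / (exp(hbar w / kT) - 1)."""
    return w ** 3 / (math.pi ** 2 * (math.exp(w / T) - 1.0))

def rho_total(w, T):
    """Full spectrum in the coth form: rho_0(w) * coth(hbar w / (2 kT))."""
    return rho_zero_point(w) * (1.0 / math.tanh(w / (2.0 * T)))

# coth form == zero-point term + thermal Planck term, for any w, T:
w, T = 1.3, 0.7
print(rho_total(w, T))
print(rho_zero_point(w) + rho_planck_thermal(w, T))
```

As T → 0 the thermal term vanishes and ρ(ω, T) reduces to ρ₀(ω), the residual zero-point spectrum that drives the argument in the abstract.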

  3. Evaluation of parameters for particles acceleration by the zero-point field of quantum electrodynamics

    Science.gov (United States)

    Rueda, A.

    1985-01-01

    That particles may be accelerated by vacuum effects in quantum field theory has been repeatedly proposed in the last few years. A natural upshot of this is a mechanism for cosmic rays (CR) primaries acceleration. A mechanism for acceleration by the zero-point field (ZPE) when the ZPE is taken in a realistic sense (in opposition to a virtual field) was considered. Originally the idea was developed within a semiclassical context. The classical Einstein-Hopf model (EHM) was used to show that free isolated electromagnetically interacting particles performed a random walk in phase space and more importantly in momentum space when submitted to the perennial action of the so-called classical electromagnetic ZPE.

  4. Evaluation of parameters for particles acceleration by the zero-point field of quantum electrodynamics

    International Nuclear Information System (INIS)

    Rueda, A.

    1985-01-01

    That particles may be accelerated by vacuum effects in quantum field theory has been repeatedly proposed in the last few years. A natural upshot of this is a mechanism for cosmic rays (CR) primaries acceleration. A mechanism for acceleration by the zero-point field (ZPE) when the ZPE is taken in a realistic sense (in opposition to a virtual field) was considered. Originally the idea was developed within a semiclassical context. The classical Einstein-Hopf model (EHM) was used to show that free isolated electromagnetically interacting particles performed a random walk in phase space and more importantly in momentum space when submitted to the perennial action of the so-called classical electromagnetic ZPE

  5. The environmental zero-point problem in evolutionary reaction norm modeling.

    Science.gov (United States)

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.
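    The zero-point problem can be seen in a toy linear reaction norm z = a + b(ε − ε_ref): shifting the reference environment ε_ref by δ while shifting the intercept a by b·δ yields identical phenotypes in every environment, so ε_ref is not identifiable from phenotypic data alone and behaves as an internal population property. A minimal sketch (illustrative parameter values, not from the paper):

```python
def reaction_norm(a, b, eps, eps_ref):
    """Linear reaction norm: phenotype = intercept + slope * (env - reference env)."""
    return a + b * (eps - eps_ref)

envs = [-1.0, 0.0, 0.5, 2.0]
a, b, eps_ref, delta = 1.0, 0.4, 0.0, 3.0

original = [reaction_norm(a, b, e, eps_ref) for e in envs]
# Shift the reference environment by delta and the intercept by b*delta:
shifted = [reaction_norm(a + b * delta, b, e, eps_ref + delta) for e in envs]

# Identical phenotypes in every environment: (a, eps_ref) and
# (a + b*delta, eps_ref + delta) cannot be distinguished from data.
same = all(abs(x - y) < 1e-12 for x, y in zip(original, shifted))
print(same)  # True
```

This confounding is why the abstract argues the reference values should be modeled explicitly, e.g. as evolvable mean traits, rather than fixed implicitly at zero.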

  6. Mixed quantum-classical electrodynamics: Understanding spontaneous decay and zero-point energy

    Science.gov (United States)

    Li, Tao E.; Nitzan, Abraham; Sukharev, Maxim; Martinez, Todd; Chen, Hsing-Ta; Subotnik, Joseph E.

    2018-03-01

    The dynamics of an electronic two-level system coupled to an electromagnetic field are simulated explicitly for one- and three-dimensional systems through semiclassical propagation of the Maxwell-Liouville equations. We consider three flavors of mixed quantum-classical dynamics: (i) the classical path approximation (CPA), (ii) Ehrenfest dynamics, and (iii) symmetrical quasiclassical (SQC) dynamics. Our findings are as follows: (i) The CPA fails to recover a consistent description of spontaneous emission, (ii) a consistent "spontaneous" emission can be obtained from Ehrenfest dynamics, provided that one starts in an electronic superposition state, and (iii) spontaneous emission is always obtained using SQC dynamics. Using the SQC and Ehrenfest frameworks, we further calculate the dynamics following an incoming pulse, but here we find very different responses: SQC and Ehrenfest dynamics deviate sometimes strongly in the calculated rate of decay of the transient excited state. Nevertheless, our work confirms the earlier observations by Miller [J. Chem. Phys. 69, 2188 (1978), 10.1063/1.436793] that Ehrenfest dynamics can effectively describe some aspects of spontaneous emission and highlights interesting possibilities for studying light-matter interactions with semiclassical mechanics.

  7. IAU 2015 Resolution B2 on Recommended Zero Points for the Absolute and Apparent Bolometric Magnitude Scales

    DEFF Research Database (Denmark)

    Mamajek, E. E.; Torres, G.; Prsa, A.

    2015-01-01

    The XXIXth IAU General Assembly in Honolulu adopted IAU 2015 Resolution B2 on recommended zero points for the absolute and apparent bolometric magnitude scales. The resolution was proposed by the IAU Inter-Division A-G Working Group on Nominal Units for Stellar and Planetary Astronomy after consulting with a broad spectrum of researchers from the astronomical community. Resolution B2 resolves the long-standing absence of an internationally-adopted zero point for the absolute and apparent bolometric magnitude scales. Resolution B2 defines the zero point of the absolute bolometric magnitude scale such that a radiation source with $M_{\rm Bol}$ = 0 has luminosity L$_{\circ}$ = 3.0128e28 W. The zero point of the apparent bolometric magnitude scale ($m_{\rm Bol}$ = 0) corresponds to irradiance $f_{\circ}$ = 2.518021002e-8 W/m$^2$. The zero points were chosen so that the nominal solar luminosity (3.828e26 W...
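    The adopted zero points convert luminosities and irradiances into bolometric magnitudes directly via M = −2.5 log₁₀(L/L₀) and m = −2.5 log₁₀(f/f₀). A short sketch using the Resolution B2 constants; the total solar irradiance value of 1361 W/m² is an assumed illustrative input, not part of the resolution text quoted above.

```python
import math

L0 = 3.0128e28        # W, IAU 2015 zero point, absolute bolometric magnitude scale
F0 = 2.518021002e-8   # W/m^2, zero point, apparent bolometric magnitude scale

def M_bol(L):
    """Absolute bolometric magnitude of a source with luminosity L (watts)."""
    return -2.5 * math.log10(L / L0)

def m_bol(f):
    """Apparent bolometric magnitude for received irradiance f (W/m^2)."""
    return -2.5 * math.log10(f / F0)

print(M_bol(3.828e26))  # nominal solar luminosity -> ~4.74
print(m_bol(1361.0))    # assumed solar irradiance -> ~-26.83
```

Recovering the familiar solar values M_bol ≈ 4.74 and m_bol ≈ −26.83 is precisely the consistency the zero points were chosen to ensure.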

  8. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  9. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  10. Toward an interstellar mission: Zeroing in on the zero-point-field inertia resonance

    International Nuclear Information System (INIS)

    Haisch, Bernhard; Rueda, Alfonso

    2000-01-01

    While still an admittedly remote possibility, the concept of an interstellar mission has become a legitimate topic for scientific discussion as evidenced by several recent NASA activities and programs. One approach is to extrapolate present-day technologies by orders of magnitude; the other is to find new regimes in physics and to search for possible new laws of physics. Recent work on the zero-point field (ZPF), or electromagnetic quantum vacuum, is promising in regard to the latter, especially concerning the possibility that the inertia of matter may, at least in part, be attributed to interaction between the quarks and electrons in matter and the ZPF. A NASA-funded study (independent of the BPP program) of this concept has been underway since 1996 at the Lockheed Martin Advanced Technology Center in Palo Alto and the California State University at Long Beach. We report on a new development resulting from this effort: that for the specific case of the electron, a resonance for the inertia-generating process at the Compton frequency would simultaneously explain both the inertial mass of the electron and the de Broglie wavelength of a moving electron as first measured by Davisson and Germer in 1927. This line of investigation is leading to very suggestive connections between electrodynamics, inertia, gravitation and the wave nature of matter
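    Whatever one makes of the proposed inertia resonance, the de Broglie wavelength the abstract refers to is fixed by λ = h/p; for the 54 eV electrons of the 1927 Davisson-Germer experiment this gives roughly 1.67 Å, comparable to crystal lattice spacings. A quick check (CODATA constants):

```python
import math

h = 6.62607015e-34      # J s, Planck constant
m_e = 9.1093837015e-31  # kg, electron mass
eV = 1.602176634e-19    # J per electronvolt

def de_broglie_wavelength(E_eV):
    """Non-relativistic electron de Broglie wavelength, lambda = h / sqrt(2 m E)."""
    p = math.sqrt(2.0 * m_e * E_eV * eV)
    return h / p

print(de_broglie_wavelength(54.0) * 1e10)  # in angstroms, ~1.67
```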

  11. Magnetic fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  12. Computation of free energy

    NARCIS (Netherlands)

    van Gunsteren, WF; Daura, [No Value; Mark, AE

    2002-01-01

    Many quantities that are standardly used to characterize a chemical system are related to free-energy differences between particular states of the system. By statistical mechanics, free-energy differences may be expressed in terms of averages over ensembles of atomic configurations for the molecular

  13. Computational approaches to energy materials

    CERN Document Server

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process.   Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  14. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking; vector and parallel processing; and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical High Energy Physics. In the experimental environment the large amounts of data to be processed offer special problems on-line as well as off-line. For on-line data reduction, embedded special purpose computers, which are often used for trigger applications are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume

  15. Energy Dissipation in Quantum Computers

    OpenAIRE

    Granik, A.; Chapline, G.

    2003-01-01

    A method is described for calculating the heat generated in a quantum computer due to loss of quantum phase information. Amazingly enough, this heat generation can take place at zero temperature, and may explain why it is impossible to extract energy from vacuum fluctuations. Implications for optical computers and quantum cosmology are also briefly discussed.

  16. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  17. Electronic zero-point oscillations in the strong-interaction limit of density functional theory

    NARCIS (Netherlands)

    Gori Giorgi, P.; Vignale, G.; Seidl, M.

    2009-01-01

    The exchange-correlation energy in Kohn-Sham density functional theory can be expressed exactly in terms of the change in the expectation of the electron-electron repulsion operator when, in the many-electron Hamiltonian, this same operator is multiplied by a real parameter λ varying between 0

  18. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements by the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of the Grid computing, the R and D status of the high energy physics grid computing technology, the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in Chinese high energy physics community is introduced at last. (authors)

  19. Engineering computations at the national magnetic fusion energy computer center

    International Nuclear Information System (INIS)

    Murty, S.

    1983-01-01

    The National Magnetic Fusion Energy Computer Center (NMFECC) was established by the U.S. Department of Energy's Division of Magnetic Fusion Energy (MFE). The NMFECC headquarters is located at Lawrence Livermore National Laboratory. Its purpose is to apply large-scale computational technology and computing techniques to the problems of controlled thermonuclear research. In addition to providing cost effective computing services, the NMFECC also maintains a large collection of computer codes in mathematics, physics, and engineering that is shared by the entire MFE research community. This review provides a broad perspective of the NMFECC, and a list of available codes at the NMFECC for engineering computations is given

  20. Perturbative evaluation of the zero-point function for self-interacting scalar field on a manifold with boundary

    International Nuclear Information System (INIS)

    Tsoupros, George

    2002-01-01

    The character of quantum corrections to the gravitational action of a conformally invariant field theory for a self-interacting scalar field on a manifold with boundary is considered at third loop-order in the perturbative expansion of the zero-point function. Diagrammatic evaluations and higher loop-order renormalization can be best accomplished on a Riemannian manifold of positive constant curvature accommodating a boundary of constant extrinsic curvature. The associated spherical formulation for diagrammatic evaluations reveals a non-trivial effect which the topology of the manifold has on the vacuum processes and which ultimately dissociates the dynamical behaviour of the quantized field from its behaviour in the absence of a boundary. The first surface divergence is evaluated and the necessity for simultaneous renormalization of volume and surface divergences is shown

  1. A dual-unit pressure sensor for on-chip self-compensation of zero-point temperature drift

    International Nuclear Information System (INIS)

    Wang, Jiachou; Li, Xinxin

    2014-01-01

    A novel dual-unit piezoresistive pressure sensor, consisting of a sensing unit and a dummy unit, is proposed and developed for on-chip self-compensation of zero-point temperature drift. With an MIS (microholes inter-etch and sealing) process implemented only from the front side of single (1 1 1) silicon wafers, a pressure-sensitive unit and an identically structured pressure-insensitive dummy unit are compactly integrated on-chip to eliminate unbalance-factor induced zero-point temperature drift by mutual compensation between the two units. Besides, both units are physically suspended from the silicon substrate to further suppress packaging-stress induced temperature drift. A simultaneously processed ventilation hole-channel structure is connected with the pressure reference cavity of the dummy unit to make it insensitive to the detected pressure. In spite of the additional dummy unit, the sensor chip dimensions are still as small as 1.2 mm × 1.2 mm × 0.4 mm. The proposed dual-unit sensor is fabricated and tested, with a measured sensitivity of 0.104 mV kPa⁻¹ (3.3 V)⁻¹, nonlinearity of less than 0.08% · FSO and overall accuracy error of ±0.18% · FSO. Without using any extra compensation method, the sensor features an ultra-low temperature coefficient of offset (TCO) of 0.002% °C⁻¹ · FSO that is much better than the performance of conventional pressure sensors. The highly stable and small-sized sensors are promising for low-cost production and applications. (paper)
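    The compensation principle can be captured in a toy signal model (illustrative only, not the paper's circuit or numbers): both units see the same temperature-induced offset because they share structure and packaging, but only the sensing unit responds to pressure, so subtracting the dummy output cancels the drift while keeping the pressure signal.

```python
def bridge_output(pressure_kPa, temp_C, sensitivity=0.104, drift=0.05,
                  pressure_sensitive=True):
    """Toy piezoresistive bridge output in mV.

    drift: common zero-point temperature drift (mV per degC), assumed
    identical for both units (illustrative values, not measured data).
    """
    signal = sensitivity * pressure_kPa if pressure_sensitive else 0.0
    return signal + drift * temp_C

def compensated_output(pressure_kPa, temp_C):
    sense = bridge_output(pressure_kPa, temp_C, pressure_sensitive=True)
    dummy = bridge_output(pressure_kPa, temp_C, pressure_sensitive=False)
    return sense - dummy  # common drift cancels; pressure signal survives

# Same pressure at two temperatures -> same compensated reading (~10.4 mV):
print(compensated_output(100.0, 25.0), compensated_output(100.0, 85.0))
```

The on-chip version of this subtraction is what lets the reported sensor reach a TCO far below that of a single uncompensated bridge.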

  2. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has a strong demand, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and BESIII elastic cloud, are also described briefly in the paper. (authors)

  3. Computational force, mass, and energy

    International Nuclear Information System (INIS)

    Numrich, R.W.

    1997-01-01

    This paper describes a correspondence between computational quantities commonly used to report computer performance measurements and mechanical quantities from classical Newtonian mechanics. It defines a set of three fundamental computational quantities that are sufficient to establish a system of computational measurement. From these quantities, it defines derived computational quantities that have analogous physical counterparts. These computational quantities obey three laws of motion in computational space. The solutions to the equations of motion, with appropriate boundary conditions, determine the computational mass of the computer. Computational forces, with magnitudes specific to each instruction and to each computer, overcome the inertia represented by this mass. The paper suggests normalizing the computational mass scale by picking the mass of a register on the CRAY-1 as the standard unit of mass

  4. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  5. Fusion energy division computer systems network

    International Nuclear Information System (INIS)

    Hammons, C.E.

    1980-12-01

    The Fusion Energy Division of the Oak Ridge National Laboratory (ORNL) operated by Union Carbide Corporation Nuclear Division (UCC-ND) is primarily involved in the investigation of problems related to the use of controlled thermonuclear fusion as an energy source. The Fusion Energy Division supports investigations of experimental fusion devices and related fusion theory. This memo provides a brief overview of the computing environment in the Fusion Energy Division and the computing support provided to the experimental effort and theory research

  6. Computational Screening of Energy Materials

    DEFF Research Database (Denmark)

    Pandey, Mohnish

    ......it is the need of the hour to search for environmentally benign renewable energy resources. The biggest source of renewable energy is our sun, and the immense energy it provides can be used to power the whole planet. However, an efficient way to harvest solar energy to meet all the energy demand has not been realized yet. A promising way to utilize solar energy is photon-assisted water splitting: sunlight is absorbed by a semiconducting material (a photoabsorber), and the generated electron-hole pair can be used to produce hydrogen by splitting water. However...... an accurate description of the energies with first-principles calculations. Therefore, along this line, the accuracy and predictability of the meta-generalized gradient approximation functional with Bayesian error estimation is also assessed.

  7. Magnetic-fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  8. Intelligent computing for sustainable energy and environment

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang [Queen' s Univ. Belfast (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Li, Shaoyuan; Li, Dewei [Shanghai Jiao Tong Univ., Shanghai (China). Dept. of Automation; Niu, Qun (eds.) [Shanghai Univ. (China). School of Mechatronic Engineering and Automation

    2013-07-01

    Fast track conference proceedings. State of the art research. Up to date results. This book constitutes the refereed proceedings of the Second International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2012, held in Shanghai, China, in September 2012. The 60 full papers presented were carefully reviewed and selected from numerous submissions and present theories and methodologies as well as the emerging applications of intelligent computing in sustainable energy and environment.

  9. Computed tomography in severe protein energy malnutrition.

    OpenAIRE

    Househam, K C; de Villiers, J F

    1987-01-01

    Computed tomography of the brain was performed on eight children aged 1 to 4 years with severe protein energy malnutrition. Clinical features typical of kwashiorkor were present in all the children studied. Severe cerebral atrophy or brain shrinkage according to standard radiological criteria was present in every case. The findings of this study suggest considerable cerebral insult associated with severe protein energy malnutrition.

  10. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  11. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  12. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  13. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  14. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  15. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, and data acquisition systems. New approaches to computer architectures are also discussed

  16. Energy Aware Computing in Cooperative Wireless Networks

    DEFF Research Database (Denmark)

    Olsen, Anders Brødløs; Fitzek, Frank H. P.; Koch, Peter

    2005-01-01

    In this work the idea of cooperation is applied to wireless communication systems. It is generally accepted that energy consumption is a significant design constraint for mobile handheld systems. We propose a novel method of cooperative task computing by distributing tasks among terminals over the unreliable wireless link. Principles of multi-processor energy aware task scheduling are used, exploiting performance-scalable technologies such as Dynamic Voltage Scaling (DVS). We introduce a novel mechanism referred to as D2VS, and it is shown by means of simulation that savings of 40% can be achieved.

  17. Computer Architecture for Energy Efficient SFQ

    Science.gov (United States)

    2014-08-27

    IBM Corporation (T.J. Watson Research Laboratory), 1101 Kitchawan Road, Yorktown Heights, NY 10598. ABSTRACT: ...accomplished during this ARO-sponsored project at IBM Research to identify and model an energy-efficient SFQ-based computer architecture. The... IBM Windsor Blue (WB), illustrated schematically in Figure 2. The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit

  18. Magnetic fusion energy and computers: the role of computing in magnetic fusion energy research and development

    International Nuclear Information System (INIS)

    1979-10-01

    This report examines the role of computing in the Department of Energy magnetic confinement fusion program. The present status of the MFECC and its associated network is described. The third part of this report examines the role of computer models in the main elements of the fusion program and discusses their dependence on the most advanced scientific computers. A review of requirements at the National MFE Computer Center was conducted in the spring of 1976. The results of this review led to the procurement of the CRAY 1, the most advanced scientific computer available, in the spring of 1978. The utilization of this computer in the MFE program has been very successful and is also described in the third part of the report. A new study of computer requirements for the MFE program was conducted during the spring of 1979 and the results of this analysis are presented in the forth part of this report

  19. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: The future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousands) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource provider is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, which is especially used to compare different systems (local resource managers, other grid software e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  20. Energy Efficiency in Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    As manufacturers improve the silicon process, truly low energy computing is becoming a reality - both in servers and in the consumer space. This series of lectures covers a broad spectrum of aspects related to energy efficient computing - from circuits to datacentres. We will discuss common trade-offs and basic components, such as processors, memory and accelerators. We will also touch on the fundamentals of modern datacenter design and operation. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic L...

  1. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  2. Computational materials design for energy applications

    Science.gov (United States)

    Ozolins, Vidvuds

    2013-03-01

    General adoption of sustainable energy technologies depends on the discovery and development of new high-performance materials. For instance, waste heat recovery and electricity generation via the solar thermal route require bulk thermoelectrics with a high figure of merit (ZT) and thermal stability at high-temperatures. Energy recovery applications (e.g., regenerative braking) call for the development of rapidly chargeable systems for electrical energy storage, such as electrochemical supercapacitors. Similarly, use of hydrogen as vehicular fuel depends on the ability to store hydrogen at high volumetric and gravimetric densities, as well as on the ability to extract it at ambient temperatures at sufficiently rapid rates. We will discuss how first-principles computational methods based on quantum mechanics and statistical physics can drive the understanding, improvement and prediction of new energy materials. We will cover prediction and experimental verification of new earth-abundant thermoelectrics, transition metal oxides for electrochemical supercapacitors, and kinetics of mass transport in complex metal hydrides. Research has been supported by the US Department of Energy under grant Nos. DE-SC0001342, DE-SC0001054, DE-FG02-07ER46433, and DE-FC36-08GO18136.
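    The thermoelectric figure of merit ZT mentioned in the abstract above has a standard textbook definition that can be made concrete. The sketch below uses the usual formula ZT = S²σT/κ; the material parameters are illustrative placeholders, not values from the cited work.

    ```python
    # Sketch of the thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa,
    # with Seebeck coefficient S (V/K), electrical conductivity sigma (S/m),
    # absolute temperature T (K) and thermal conductivity kappa (W/(m K)).
    # The parameter values below are illustrative only.

    def figure_of_merit(seebeck, sigma, kappa, temperature):
        return seebeck ** 2 * sigma * temperature / kappa

    # Hypothetical material: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K
    zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
    print(zt)  # → 0.8
    ```

    The formula makes the design tension visible: a good thermoelectric needs a large Seebeck coefficient and electrical conductivity while keeping thermal conductivity low, which is what computational screening of candidate materials targets.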

  3. Nuclear Computational Low Energy Initiative (NUCLEI)

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Sanjay K. [University of Washington

    2017-08-14

    This is the final report for University of Washington for the NUCLEI SciDAC-3. The NUCLEI project, as defined by the scope of work, will develop, implement and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics to be studied include the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques to be used include Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program will emphasize areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS and FRIB (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrino-less double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).

  4. Energy Efficiency in Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    We will start the second day of our energy efficient computing series with a brief discussion of software and the impact it has on energy consumption. A second major point of this lecture will be the current state of research and a few future technologies, ranging from mainstream (e.g. the Internet of Things) to exotic. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic Lectures are recorded. No webcast! Because of a problem of the recording equipment, this lecture will be repeated for recording pu...

  5. Determinação do ponto de carga zero em solos Determination of the zero point of charge in soils

    Directory of Open Access Journals (Sweden)

    Bernardo van Raij

    1973-01-01

    Full Text Available. The fundamentals and two methods for determining the zero point of charge (ZPC) in soils are presented. In the first method, the ZPC was determined as the pH at the crossing point of titration curves of the soils in NaCl solutions of 1, 0.1, 0.01, and 0.001 N. In the second method, the ZPC was determined by extrapolating or interpolating the net charge of the soils, determined by ion retention in 0.2 N NaCl, 0.01 N CaCl2, and 0.01 N MgSO4 solutions, to the pH at which the net charge was zero. The zero point of charge (ZPC) of soils was determined by the crossing point of acid-base potentiometric titration curves in different concentrations of NaCl. Alternatively, the ZPC was found by extrapolating or interpolating the net electric charge of the soils, determined by direct adsorption of ions from solutions of NaCl, CaCl2 and MgSO4, to the pH of zero charge.
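    The second method described above, interpolating the net charge to the pH at which it vanishes, is a simple root-finding problem. The sketch below is a minimal illustration with made-up measurements (the data values are not from the cited study):

    ```python
    # Sketch of the interpolation method for the zero point of charge (ZPC):
    # given net surface charge measured at several pH values, find the pH at
    # which the net charge crosses zero. Data values are illustrative only.

    def zpc_from_net_charge(ph, net_charge):
        """Linearly interpolate the pH at which the net charge is zero."""
        for (ph1, q1), (ph2, q2) in zip(zip(ph, net_charge),
                                        zip(ph[1:], net_charge[1:])):
            if q1 == 0:
                return ph1
            if q1 * q2 < 0:  # sign change: zero crossing between ph1 and ph2
                return ph1 + (ph2 - ph1) * (-q1) / (q2 - q1)
        raise ValueError("net charge does not cross zero in the measured range")

    # Hypothetical measurements: net positive charge at low pH, negative at high pH
    ph_values = [3.0, 4.0, 5.0, 6.0]
    charge = [2.1, 0.8, -0.4, -1.5]  # net charge, arbitrary units

    print(round(zpc_from_net_charge(ph_values, charge), 2))  # → 4.67
    ```

    Extrapolation (when the zero crossing lies just outside the measured range) would use the same line through the two nearest points; in practice one would measure finely enough to bracket the crossing.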

  6. Adiabatic physics of an exchange-coupled spin-dimer system: Magnetocaloric effect, zero-point fluctuations, and possible two-dimensional universal behavior

    International Nuclear Information System (INIS)

    Brambleby, J.; Goddard, P. A.; Singleton, John; Jaime, Marcelo; Lancaster, T.

    2017-01-01

    We present the magnetic and thermal properties of the bosonic-superfluid phase in a spin-dimer network using both quasistatic and rapidly changing pulsed magnetic fields. The entropy derived from a heat-capacity study reveals that the pulsed-field measurements are strongly adiabatic in nature and are responsible for the onset of a significant magnetocaloric effect (MCE). In contrast to previous predictions we show that the MCE is not just confined to the critical regions, but occurs for all fields greater than zero at sufficiently low temperatures. We explain the MCE using a model of the thermal occupation of exchange-coupled dimer spin states and highlight that failure to take this effect into account inevitably leads to incorrect interpretations of experimental results. In addition, the heat capacity in our material is suggestive of an extraordinary contribution from zero-point fluctuations and appears to indicate universal behavior with different critical exponents at the two field-induced critical points. Finally, the data at the upper critical point, combined with the layered structure of the system, are consistent with a two-dimensional nature of spin excitations in the system.

  7. Soft computing trends in nuclear energy system

    International Nuclear Information System (INIS)

    Paramasivan, B.

    2012-01-01

    Despite many advancements in the power and energy sector over the last two decades, delivering quality power with due consideration for planning, coordination, marketing, safety, stability, optimality and reliability remains critical. Though it appears simple from the outside, the internal structure of large-scale power systems is so complex that event management and decision making require formidable preliminary preparation, and the situation worsens in the presence of uncertainties and contingencies. These aspects have attracted several researchers to carry out continued research in this field, and their contributions have significantly helped newcomers understand the evolutionary growth of this sector, from phenomena, tools and methodologies to strategies for ensuring smooth, stable, safe, reliable and economic operation. The use of soft computing would accelerate interaction between the energy and technology research communities, with the aim of fostering unified development in the next generation. Monitoring the mechanical impact of a loose (detached or drifting) part in the reactor coolant system of a nuclear power plant is one of the essential functions for operation and maintenance of the plant. Large data tables are generated during this monitoring process, and this data can be 'mined' to reveal latent patterns of interest to operation and maintenance. Rough set theory has been applied successfully to data mining. It can be used in the nuclear power industry and elsewhere to identify classes in datasets, find dependencies in relations and discover rules hidden in databases. An important role may be played by nuclear energy, provided that the major safety, waste and proliferation issues affecting current nuclear reactors are satisfactorily addressed. In this respect, a large effort has been under way for some years towards the development of advanced nuclear systems that would use

  8. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, ''Can computer science help?'' always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current state of computer science within high energy physics. (orig.)

  9. Norwegian computers in European energy research project

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    16 NORD computers have been ordered for the JET data acquisition and storage system. The computers will be arranged in a 'double star' configuration, developed by CERN. Two control consoles each have their own computer. All computers for communication, control, diagnostics, consoles and testing are NORD-100s, while the computer for data storage and analysis is a NORD-500. The operating system is SINTRAN; a CAMAC serial highway with fibre optics is to be used for long communication paths. The programming languages FORTRAN, NODAL, NORD PL, PASCAL and BASIC may be used. The JET project and TOKAMAK-type machines are briefly described. (JIW)

  10. Personal computers in high energy physics

    International Nuclear Information System (INIS)

    Quarrie, D.R.

    1987-01-01

    The role of personal computers within HEP is expanding as their capabilities increase and their cost decreases. Already they offer greater flexibility than many low-cost graphics terminals for a comparable cost and in addition they can significantly increase the productivity of physicists and programmers. This talk will discuss existing uses for personal computers and explore possible future directions for their integration into the overall computing environment. (orig.)

  11. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
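    The analytic quantity in question, the average energy of neutrons inducing fission, is conventionally a fission-rate-weighted mean: E_avg = ∫ E φ(E) σ_f(E) dE / ∫ φ(E) σ_f(E) dE. The sketch below evaluates this on a coarse energy grid; the flux and cross-section values are illustrative placeholders, not BeRP-ball data or MCNP output.

    ```python
    # Sketch of a fission-rate-weighted average energy:
    #   E_avg = sum(E * phi(E) * sigma_f(E)) / sum(phi(E) * sigma_f(E))
    # on a coarse group structure. All values below are illustrative only.

    def average_fission_energy(energies, flux, sigma_f):
        rate = [p * s for p, s in zip(flux, sigma_f)]     # fission reaction rate per group
        num = sum(e * r for e, r in zip(energies, rate))  # energy-weighted rate
        den = sum(rate)
        return num / den

    e = [0.1, 0.5, 1.0, 2.0, 5.0]    # MeV, toy group structure
    phi = [1.0, 2.0, 3.0, 2.0, 0.5]  # arbitrary flux units
    sig = [1.3, 1.2, 1.2, 1.3, 1.1]  # barns, roughly flat fast-fission cross section

    print(round(average_fission_energy(e, phi, sig), 3))  # → 1.233
    ```

    In MCNP terms, the numerator and denominator correspond to energy-weighted and unweighted fission-rate tallies; a discrepancy between two tally routes (e.g. FMULT vs. F4/FM) can be checked against this kind of hand evaluation.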

  12. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime.  The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de

  13. Bringing Advanced Computational Techniques to Energy Research

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  14. Office of Fusion Energy computational review

    International Nuclear Information System (INIS)

    Cohen, B.I.; Cohen, R.H.; Byers, J.A.

    1996-01-01

    The LLNL MFE Theory and Computations Program supports computational efforts in the following areas: (1) Magnetohydrodynamic equilibrium and stability; (2) Fluid and kinetic edge plasma simulation and modeling; (3) Kinetic and fluid core turbulent transport simulation; (4) Comprehensive tokamak modeling (CORSICA Project) - transport, MHD equilibrium and stability, edge physics, heating, turbulent transport, etc. and (5) Other: ECRH ray tracing, reflectometry, plasma processing. This report discusses algorithms and codes pertaining to these areas

  15. Energy Consumption Management of Virtual Cloud Computing Platform

    Science.gov (United States)

    Li, Lin

    2017-11-01

    Research on energy consumption management for virtual cloud computing platforms requires a deeper understanding of the energy consumption of both virtual machines and the cloud platform itself; only then can the problems facing energy consumption management be solved. The key problem lies in data centers with high energy consumption, which calls for new scientific techniques. Virtualization technology and cloud computing have become powerful tools in everyday life, work and production because of their strength and many advantages, and both are developing rapidly, with very high resource utilization rates. The presence of virtualization and cloud computing technologies is therefore essential in the constantly developing information age. This paper summarizes, explains and further analyzes the energy consumption management questions of the virtual cloud computing platform, giving the reader a clearer understanding of energy consumption management of virtual cloud computing platforms.

  16. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...

  17. Energy-efficient computing and networking. Revised selected papers

    Energy Technology Data Exchange (ETDEWEB)

    Hatziargyriou, Nikos; Dimeas, Aris [Ethnikon Metsovion Polytechneion, Athens (Greece); Weidlich, Anke (eds.) [SAP Research Center, Karlsruhe (Germany); Tomtsi, Thomai

    2011-07-01

    This book constitutes the postproceedings of the First International Conference on Energy-Efficient Computing and Networking, E-Energy, held in Passau, Germany in April 2010. The 23 revised papers presented were carefully reviewed and selected for inclusion in the post-proceedings. The papers are organized in topical sections on energy market and algorithms, ICT technology for the energy market, implementation of smart grid and smart home technology, microgrids and energy management, and energy efficiency through distributed energy management and buildings. (orig.)

  18. Optimum energies for dual-energy computed tomography

    International Nuclear Information System (INIS)

    Talbert, A.J.; Brooks, R.A.; Morgenthaler, D.G.

    1980-01-01

    By performing a dual-energy scan, separate information can be obtained on the Compton and photoelectric components of attenuation for an unknown material. This procedure has been analysed for the optimum energies, and for the optimum dose distribution between the two scans. It was found that an equal dose at both energies was a good compromise, compared with optimising the dose distribution for either the Compton or photoelectric components individually. For monoenergetic beams, it was found that a low energy of 40 keV produced minimum noise when used with high-energy beams of 80 to 100 keV. This was true whether one maintained constant integral dose or constant surface dose. A low energy of 50 keV, which is more nearly attainable in practice, produced almost as good a degree of accuracy. The analysis can be extended to polyenergetic beams by the inclusion of a noise factor. The above results were qualitatively unchanged, although the noise was increased by about 20% with integral dose equivalence and 50% with surface dose equivalence. It is very important to make the spectra as narrow as possible, especially at the low energy, in order to minimise the noise. (author)
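    The two-component model underlying this analysis can be sketched numerically. The snippet below is an illustration, not the authors' method: it models the linear attenuation coefficient as a Compton term with a Klein-Nishina energy dependence plus a photoelectric term scaling approximately as E^-3, and recovers the two component weights from measurements at 40 and 80 keV by solving the resulting 2x2 linear system. The component weights are arbitrary illustrative numbers.

```python
import numpy as np

def klein_nishina(E_keV):
    """Klein-Nishina energy dependence of the total Compton cross section
    (in units of 2*pi*r_e^2; only the energy dependence matters here)."""
    a = E_keV / 511.0  # photon energy in electron rest-mass units
    t1 = (1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
    t2 = np.log(1 + 2 * a) / (2 * a)
    t3 = -(1 + 3 * a) / (1 + 2 * a) ** 2
    return t1 + t2 + t3

def decompose(mu_low, mu_high, E_low=40.0, E_high=80.0):
    """Recover Compton and photoelectric component weights from two
    attenuation measurements by inverting the 2x2 system."""
    A = np.array([[klein_nishina(E_low),  E_low ** -3],
                  [klein_nishina(E_high), E_high ** -3]])
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

# round-trip check: synthesise attenuation from known weights, then recover them
a_c, a_p = 0.2, 5.0e4  # arbitrary illustrative component weights
mu40 = a_c * klein_nishina(40.0) + a_p * 40.0 ** -3
mu80 = a_c * klein_nishina(80.0) + a_p * 80.0 ** -3
rec = decompose(mu40, mu80)
```

    In a real scanner the noise analysis in the abstract then determines how dose should be split between the two measurements.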

  19. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering

    2011-07-01

    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research is proceeding at jet speed on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally-intensive, non-linear and complex as well as involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal in handling the noise, imprecision, and uncertainty in the data, and yet achieve robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing applications in the study of renewable energy systems. Researchers, practitioners, undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  20. Computer Profile of School Facilities Energy Consumption.

    Science.gov (United States)

    Oswalt, Felix E.

    This document outlines a computerized management tool designed to enable building managers to identify energy consumption as related to types and uses of school facilities for the purpose of evaluating and managing the operation, maintenance, modification, and planning of new facilities. Specifically, it is expected that the statistics generated…

  1. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.]

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  2. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, standard tool for a long time in the High Energy Physics community, is being slowly introduced at CERN in the mechanical engineering field. The first major application was structural analysis followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful data base. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors

  3. Alternative energy technologies an introduction with computer simulations

    CERN Document Server

    Buxton, Gavin

    2014-01-01

    Introduction to Alternative Energy Sources; Global Warming; Pollution; Solar Cells; Wind Power; Biofuels; Hydrogen Production and Fuel Cells; Introduction to Computer Modeling; Brief History of Computer Simulations; Motivation and Applications of Computer Models; Using Spreadsheets for Simulations; Typing Equations into Spreadsheets; Functions Available in Spreadsheets; Random Numbers; Plotting Data; Macros and Scripts; Interpolation and Extrapolation; Numerical Integration and Diffe

  4. Energy expenditure in adolescents playing new generation computer games.

    Science.gov (United States)

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2008-07-01

    To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Cross sectional comparison of four computer games in a research laboratory setting. Six boys and five girls aged 13-15 years participated. Participants were fitted with a monitoring device validated to predict energy expenditure and played four computer games for 15 minutes each; one of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Predicted energy expenditure was compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min) and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing the sedentary game (125.5 (13.7) kJ/kg/min). Playing new generation active computer games uses significantly more energy than playing sedentary computer games, but not as much energy as playing the sport itself; the energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.

  5. Large scale computing in the Energy Research Programs

    International Nuclear Information System (INIS)

    1991-05-01

    The Energy Research Supercomputer Users Group (ERSUG) comprises all investigators using resources of the Department of Energy Office of Energy Research supercomputers. At the December 1989 meeting held at Florida State University (FSU), the ERSUG executive committee determined that the continuing rapid advances in computational sciences and computer technology demanded a reassessment of the role computational science should play in meeting DOE's commitments. Initial studies were to be performed for four subdivisions: (1) Basic Energy Sciences (BES) and Applied Mathematical Sciences (AMS), (2) Fusion Energy, (3) High Energy and Nuclear Physics, and (4) Health and Environmental Research. The first two subgroups produced formal subreports that provided a basis for several sections of this report. Additional information provided in the AMS/BES is included as Appendix C in an abridged form that eliminates most duplication. Additionally, each member of the executive committee was asked to contribute area-specific assessments; these assessments are included in the next section. In the following sections, brief assessments are given for specific areas, a conceptual model is proposed that the entire computational effort for energy research is best viewed as one giant nation-wide computer, and then specific recommendations are made for the appropriate evolution of the system

  6. Improvements in high energy computed tomography

    International Nuclear Information System (INIS)

    Burstein, P.; Krieger, A.; Annis, M.

    1984-01-01

    In computerized axial tomography of large, relatively dense objects such as a solid fuel rocket engine, using high energy x-rays from, for example, a 15 MeV source, a collimator is employed with an acceptance angle substantially less than 1°, in a preferred embodiment 7 minutes of arc. In a preferred embodiment, the collimator may be located between the object and the detector, although in other embodiments a pre-collimator may also be used, that is, between the x-ray source and the object being illuminated. (author)

  7. A review of residential computer oriented energy control systems

    Energy Technology Data Exchange (ETDEWEB)

    North, Greg

    2000-07-01

    The purpose of this report is to bring together as much information on Residential Computer Oriented Energy Control Systems as possible within a single document. This report identifies the main elements of the system and is intended to provide many technical options for the design and implementation of various energy related services.

  8. Asymmetric energy flow in liquid alkylbenzenes: A computational study

    International Nuclear Information System (INIS)

    Leitner, David M.; Pandey, Hari Datt

    2015-01-01

    Ultrafast IR-Raman experiments on substituted benzenes [B. C. Pein et al., J. Phys. Chem. B 117, 10898–10904 (2013)] reveal that energy can flow more efficiently in one direction along a molecule than in others. We carry out a computational study of energy flow in the three alkyl benzenes, toluene, isopropylbenzene, and t-butylbenzene, studied in these experiments, and find an asymmetry in the flow of vibrational energy between the two chemical groups of the molecule due to quantum mechanical vibrational relaxation bottlenecks, which give rise to a preferred direction of energy flow. We compare energy flow computed for all modes of the three alkylbenzenes over the relaxation time into the liquid with energy flow through the subset of modes monitored in the time-resolved Raman experiments and find qualitatively similar results when using the subset compared to all the modes

  9. COMPUTER MODELLING OF ENERGY SAVING EFFECTS

    Directory of Open Access Journals (Sweden)

    Marian JANCZAREK

    2016-09-01

    The paper presents the analysis of the dynamics of heat transfer through the outer wall of thermal technical spaces, taking into account the sinusoidal nature of the changes in atmospheric temperature. These temporal variations of the input on the outer surface of the chamber divider produce, at the output on the inner wall of the room, a sinusoidal change that is suitably damped and shifted in phase. A properly selected phase shift is clearly important for saving the energy used to maintain a specific heat regime inside the thermal technical chamber. Laboratory tests of the model and of the actual object allowed optimal design of the chamber with respect to the structure of the partition as well as the geographical orientation of the chamber.
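    The damping and phase shift described above follow from the classical periodic-conduction solution for a sinusoidal surface temperature: at depth x the amplitude is attenuated by exp(-x/δ) and lags by x/δ radians, with penetration depth δ = sqrt(2α/ω). A minimal sketch, where the wall thickness and thermal diffusivity are assumed example values, not the paper's data:

```python
import math

def periodic_wall_response(x, alpha, period_s=86400.0):
    """Damping factor and time lag of a sinusoidal surface temperature at
    depth x in a semi-infinite wall (classical periodic conduction solution)."""
    omega = 2 * math.pi / period_s
    delta = math.sqrt(2 * alpha / omega)   # penetration depth [m]
    damping = math.exp(-x / delta)         # amplitude ratio, inner/outer
    lag_rad = x / delta                    # phase shift [rad]
    lag_h = lag_rad / omega / 3600.0       # time lag [hours]
    return damping, lag_h

# example: a 0.3 m wall with thermal diffusivity ~5e-7 m^2/s (assumed value),
# driven by the daily outdoor temperature cycle
damping, lag_h = periodic_wall_response(0.3, 5e-7)
```

    For these example numbers the inner-surface oscillation is reduced to under a tenth of the outdoor amplitude and arrives nearly ten hours later, which is exactly the kind of phase shift the abstract exploits for energy saving.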

  10. Dual energy computed tomography for the head.

    Science.gov (United States)

    Naruto, Norihito; Itoh, Toshihide; Noguchi, Kyo

    2018-02-01

    Dual energy CT (DECT) is a promising technology that provides better diagnostic accuracy in several brain diseases. DECT can generate various types of CT images from a single acquisition data set at high kV and low kV based on material decomposition algorithms. The two-material decomposition algorithm can separate bone/calcification from iodine accurately. The three-material decomposition algorithm can generate a virtual non-contrast image, which helps to identify conditions such as brain hemorrhage. A virtual monochromatic image has the potential to eliminate metal artifacts by reducing beam-hardening effects. DECT also enables exploration of advanced imaging to make diagnosis easier. One such novel application of DECT is the X-Map, which helps to visualize ischemic stroke in the brain without using iodine contrast medium.

  11. Parallel Computing: Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  12. High resolution measurements supported by electronic structure calculations of two naphthalene derivatives: [1,5]- and [1,6]-naphthyridine—Estimation of the zero point inertial defect for planar polycyclic aromatic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Gruet, S., E-mail: sebastien.gruet@synchrotron-soleil.fr, E-mail: manuel.goubet@univ-lille1.fr; Pirali, O. [AILES Beamline, Synchrotron SOLEIL, Saint-Aubin, 91192 Gif-sur-Yvette (France); Institut des Sciences Moléculaires d’Orsay, UMR 8214 CNRS – Université Paris Sud, 91405 Orsay Cedex (France); Goubet, M., E-mail: sebastien.gruet@synchrotron-soleil.fr, E-mail: manuel.goubet@univ-lille1.fr [Laboratoire de Physique des Lasers, Atomes et Molécules, UMR 8523 CNRS – Université Lille 1, 59655 Villeneuve d’Ascq Cedex (France)

    2014-06-21

    the semi-empirical relations to estimate the zero-point inertial defect (Δ{sub 0}) of polycyclic aromatic molecules and confirmed the contribution of low frequency out-of-plane vibrational modes to the GS inertial defects of PAHs, which is indeed a key parameter to validate the analysis of such large molecules.

  13. Survey of Energy Computing in the Smart Grid Domain

    OpenAIRE

    Rajesh Kumar; Arun Agarwala

    2013-01-01

    Resource optimization with advanced computing tools improves the efficient use of energy resources. Renewable energy resources are instantaneous and need to be conserved at the same time. Optimizing the process in real time requires a complex design that includes resource planning and control for effective utilization. Advances in information and communication technology tools enable data formatting and analysis, resulting in optimized use of renewable resources for sustainable energy solution on s...

  14. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review of the parallel programming package CPS (Cooperative Processes Software) developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment is given. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for management, control and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given
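    The farming idea, many independent events distributed over a pool of workers, can be sketched in a few lines. This is not the CPS API; it is a portable illustration using a thread pool, whereas a real farm runs worker processes on separate machines:

```python
from multiprocessing.pool import ThreadPool

def reconstruct(event):
    """Stand-in for per-event reconstruction work: here just sum the raw hits."""
    return event["id"], sum(event["hits"])

def farm(events, workers=4):
    """Farm independent events out to a pool of workers. Events carry no
    dependencies on each other, which is what makes this embarrassingly
    parallel and a natural fit for farm architectures."""
    with ThreadPool(workers) as pool:
        return dict(pool.map(reconstruct, events))

# synthetic "detector data": 100 independent events
events = [{"id": i, "hits": list(range(i + 1))} for i in range(100)]
results = farm(events)
```

    The key property, as in the Fermilab farms, is that throughput scales with the number of workers because events never need to communicate during reconstruction.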

  15. Dual-Energy Computed Tomography: Image Acquisition, Processing, and Workflow.

    Science.gov (United States)

    Megibow, Alec J; Kambadakone, Avinash; Ananthakrishnan, Lakshmi

    2018-07-01

    Dual energy computed tomography has been available for more than 10 years; however, it is currently on the cusp of widespread clinical use. The way dual energy data are acquired and assembled must be appreciated at the clinical level so that the various reconstruction types can extend its diagnostic power. The type of scanner that is present in a given practice dictates the way in which the dual energy data can be presented and used. This article compares and contrasts how dual source, rapid kV switching, and spectral technologies acquire and present dual energy reconstructions to practicing radiologists. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, computation time and memory requirements of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended
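    The paper's FORTRAN sources are not reproduced here, but the underlying task, numerically computing an electrostatic energy and checking it against an analytic benchmark, can be sketched for the simplest axially symmetric body, a uniformly charged sphere, whose self-energy is analytically (3/5)kQ²/R:

```python
import math

def sphere_self_energy(Q=1.0, R=1.0, k=1.0, n=20000):
    """Electrostatic self-energy of a uniformly charged sphere by radial
    midpoint-rule quadrature of W = (1/2) * integral(rho * V dV).
    Analytic value: (3/5) k Q^2 / R."""
    rho = Q / (4.0 / 3.0 * math.pi * R**3)      # uniform charge density
    dr = R / n
    W = 0.0
    for i in range(n):
        r = (i + 0.5) * dr                       # midpoint of radial shell
        V = k * Q / (2 * R) * (3 - (r / R) ** 2) # interior potential
        W += 0.5 * rho * V * 4 * math.pi * r**2 * dr
    return W

W = sphere_self_energy()  # should approach 0.6 for k = Q = R = 1
```

    The same accuracy-versus-grid-size comparison against closed-form benchmarks is what the paper performs for its three methods on the overlapping-spheres and spheroid shapes.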

  17. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  18. Complexity vs energy: theory of computation and theoretical physics

    International Nuclear Information System (INIS)

    Manin, Y I

    2014-01-01

    This paper is a survey based upon the talk at the satellite QQQ conference to ECM6, 3Quantum: Algebra Geometry Information, Tallinn, July 2012. It is dedicated to the analogy between the notions of complexity in theoretical computer science and energy in physics. This analogy is not metaphorical: I describe three precise mathematical contexts, suggested recently, in which mathematics related to (un)computability is inspired by and to a degree reproduces formalisms of statistical physics and quantum field theory.

  19. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  20. New Challenges for Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Santoro, Alberto

    2003-01-01

    In view of the new scientific programs established for the LHC (Large Hadron Collider) era, the way to face the technological challenges in computing was to develop a new concept of GRID computing. We show some examples and, in particular, a proposal for high energy physicists in countries like Brazil. Due to the large amount of data and the need for close collaboration, it will be impossible to work in research centers and universities very far from Fermilab or CERN unless a GRID architecture is built. An important effort is being made by the international community to update their computing infrastructure and networks

  1. Opportunities for discovery: Theory and computation in Basic Energy Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Harmon, Bruce; Kirby, Kate; McCurdy, C. William

    2005-01-11

    New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES). The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES. The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership class computing facilities. The Subcommittee's Findings and Recommendations are presented in Section VII of this report.

  2. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Tryggvason, T.

    1998-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...... simulation program requires a detailed description of the energy flow in the air movement which can be obtained by a CFD program. The paper describes an energy consumption calculation in a large building, where the building energy simulation program is modified by CFD predictions of the flow between three...... zones connected by open areas with pressure and buoyancy driven air flow. The two programs are interconnected in an iterative procedure. The paper shows also an evaluation of the air quality in the main area of the buildings based on CFD predictions. It is shown that an interconnection between a CFD...

  3. A novel dual energy method for enhanced quantitative computed tomography

    Science.gov (United States)

    Emami, A.; Ghadiri, H.; Rahmim, A.; Ay, M. R.

    2018-01-01

    Accurate assessment of bone mineral density (BMD) is critically important in clinical practice, and conveniently enabled via quantitative computed tomography (QCT). Meanwhile, dual-energy QCT (DEQCT) enables enhanced detection of small changes in BMD relative to single-energy QCT (SEQCT). In the present study, we aimed to investigate the accuracy of QCT methods, with particular emphasis on a new dual-energy approach, in comparison to single-energy and conventional dual-energy techniques. We used a sinogram-based analytical CT simulator to model the complete chain of CT data acquisitions, and assessed performance of SEQCT and different DEQCT techniques in quantification of BMD. We demonstrate a 120% reduction in error when using a proposed dual-energy Simultaneous Equation by Constrained Least-squares method, enabling more accurate bone mineral measurements.
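    The simultaneous-equation idea can be sketched as follows. The mass-attenuation matrix below contains illustrative stand-in numbers rather than tabulated data, and a simple nonnegativity clip after an ordinary least-squares solve only gestures at the paper's constrained least-squares method:

```python
import numpy as np

# assumed mass-attenuation coefficients [cm^2/g] for bone mineral and soft
# tissue at two effective energies (illustrative numbers, not tabulated data)
MU = np.array([[0.60, 0.25],    # low-kV:  [bone, tissue]
               [0.30, 0.20]])   # high-kV: [bone, tissue]

def densities(mu_meas):
    """Solve the simultaneous equations mu_meas = MU @ [rho_bone, rho_tissue]
    in the least-squares sense, then enforce the physical constraint that
    densities are nonnegative."""
    sol, *_ = np.linalg.lstsq(MU, mu_meas, rcond=None)
    return np.clip(sol, 0.0, None)

true = np.array([1.2, 1.0])   # g/cm^3, assumed ground truth
meas = MU @ true              # noiseless synthetic measurement
est = densities(meas)
```

    With noisy sinogram data and more than two energy bins, the system becomes overdetermined and the choice of constraint is what distinguishes the proposed method from conventional dual-energy inversion.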

  4. A directory of computer software applications: energy. Report for 1974--1976

    International Nuclear Information System (INIS)

    Grooms, D.W.

    1977-04-01

    The computer programs or the computer program documentation cited in this directory have been developed for a variety of applications in the field of energy. The cited computer software includes applications in solar energy, petroleum resources, batteries, electrohydrodynamic generators, magnetohydrodynamic generators, natural gas, nuclear fission, nuclear fusion, hydroelectric power production, and geothermal energy. The computer software cited has been used for simulation and modeling, calculations of future energy requirements, calculations of energy conservation measures, and computations of economic considerations of energy systems

  5. Quantum computing applied to calculations of molecular energies

    Czech Academy of Sciences Publication Activity Database

    Pittner, Jiří; Veis, L.

    2011-01-01

    Roč. 241 (2011), 151-phys. ISSN 0065-7727. [National Meeting and Exposition of the American Chemical Society (ACS) /241./, 27.03.2011-31.03.2011, Anaheim]. Institutional research plan: CEZ:AV0Z40400503. Keywords: molecular energies * quantum computers. Subject RIV: CF - Physical; Theoretical Chemistry

  6. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model, in which we consider only the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs
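    The averaging step itself is a simple intensity-weighted mean. The subshell energies and intensities below are assumed illustrative values, not evaluated nuclear data:

```python
def average_energy(energies_keV, intensities):
    """Intensity-weighted mean energy of an electron group: the quantity a
    counting-efficiency model needs as its single 'averaged' input."""
    total = sum(intensities)
    return sum(e * p for e, p in zip(energies_keV, intensities)) / total

# three subshell energies [keV] and relative intensities for a hypothetical
# nuclide (assumed illustrative numbers)
E = [5.19, 4.85, 4.56]
P = [0.79, 0.17, 0.04]
Ebar = average_energy(E, P)
```

    The weighted mean necessarily lies between the lowest and highest subshell energies, which is a quick sanity check on any tabulated input.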

  7. Octopus: embracing the energy efficiency of handheld multimedia computers

    NARCIS (Netherlands)

    Havinga, Paul J.M.; Smit, Gerardus Johannes Maria

    1999-01-01

    In the MOBY DICK project we develop and define the architecture of a new generation of mobile hand-held computers called Mobile Digital Companions. The Companions must meet several major requirements: high performance, energy efficient, a notion of Quality of Service (QoS), small size, and low

  8. Computing energy budget within a crop canopy from Penmann's ...

    Indian Academy of Sciences (India)

    Mohan, Mahendra; Srivastava, K. K.

    Computing energy budget within a crop canopy from Penmann's formulae. Mahendra Mohan, Radio and Atmospheric Science Division, National Physical Laboratory, New Delhi 110012, India; K K Srivastava, Department of Chemical Engineering, Institute of Technology, Banaras Hindu University, Varanasi.

  9. Energy consumption program: A computer model simulating energy loads in buildings

    Science.gov (United States)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

    The JPL energy consumption computer program, developed as a useful tool in the on-going building modification studies in the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of the computations is not sacrificed, however, since the results lie within a ±10 percent margin compared with readings from energy meters. The program is carefully structured to reduce both the user's time and running cost by requesting minimal information from the user and reducing many internal time-consuming computational loops. Many unique features were added to handle two-level electronics control rooms not found in any other program.
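    The core of such a load calculation is conduction loss summed over time. The toy sketch below is far simpler than the JPL program, with all parameters assumed: a single heat-loss coefficient, a fixed setpoint, and a sinusoidal outdoor temperature profile over one day:

```python
import math

def hourly_loads(U=0.5, A=500.0, T_set=21.0, rate_per_kwh=0.12):
    """Toy building heating-load simulation: conduction loss U*A*dT [W]
    accumulated hour by hour over one day, converted to kWh and cost.
    All parameters are assumed example values."""
    kwh = 0.0
    for h in range(24):
        # assumed outdoor profile: mean 5 degC, 8 degC swing, coolest pre-dawn
        T_out = 5.0 + 8.0 * math.sin(2 * math.pi * (h - 9) / 24)
        dT = max(0.0, T_set - T_out)       # no load when outdoors is warmer
        kwh += U * A * dT / 1000.0         # W over one hour -> kWh
    return kwh, kwh * rate_per_kwh

energy_kwh, cost = hourly_loads()
```

    A real program like JPL's layers solar gains, internal loads, HVAC plant models and control logic on top of this basic conduction balance, which is how it stays within ±10 percent of metered consumption.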

  10. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also...... be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural...

  11. Symbolic computation and its application to high energy physics

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1981-01-01

    It is clear that we are in the middle of an electronic revolution whose effect will be as profound as the industrial revolution. The continuing advances in computing technology will provide us with devices which will make present day computers appear primitive. In this environment, the algebraic and other non-numerical capabilities of such devices will become increasingly important. These lectures will review the present state of the field of algebraic computation and its potential for problem solving in high energy physics and related areas. We shall begin with a brief description of the available systems and examine the data objects which they consider. As an example of the facilities which these systems can offer, we shall then consider the problem of analytic integration, since this is so fundamental to many of the calculational techniques used by high energy physicists. Finally, we shall study the implications which the current developments in hardware technology hold for scientific problem solving. (orig.)

  12. Exascale for Energy: The Role of Exascale Computing in Energy Security

    OpenAIRE

    Authors, Various

    2010-01-01

    How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing, expected to be available within the next eight to ten years, may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerat...

  13. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin-devices like magnetic tunnel junctions (MTJ's), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. Analog characteristics of spin currents facilitate non-Boolean computation, such as majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve "mixed-mode" processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state of the art CMOS design, for optimal spin-device parameters.

  14. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    Science.gov (United States)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness at 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening ~ 60%), which is acceptable compared to diffusers with 6 mm (~ 40%) and 12 mm (~ 80%) openings. In conclusion, computational analysis is a very useful method for studying the performance of thermal energy storage (TES).
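
Thermocline thickness is commonly quantified as the vertical extent over which a dimensionless temperature lies between two cutoffs. A minimal sketch of that calculation follows; the 0.1-0.9 cutoffs, tank height, temperatures and linear profile are illustrative assumptions, not values from this paper:

```python
def thermocline_thickness(heights, temps, t_cold, t_hot, lo=0.1, hi=0.9):
    """Estimate thermocline thickness as the vertical extent over which the
    dimensionless temperature Theta = (T - Tc) / (Th - Tc) falls between
    `lo` and `hi`. The 0.1-0.9 cutoffs are a common convention, assumed
    here; the paper does not state its criterion."""
    theta = [(t - t_cold) / (t_hot - t_cold) for t in temps]
    zone = [h for h, th in zip(heights, theta) if lo <= th <= hi]
    return max(zone) - min(zone) if zone else 0.0

# Idealised linear profile in a 4 m tank: cold (7 C) bottom, warm (14 C) top.
heights = [i / 4 for i in range(17)]                 # 0.0 .. 4.0 m, 0.25 m steps
temps = [7.0 + (14.0 - 7.0) * h / 4.0 for h in heights]
print(thermocline_thickness(heights, temps, 7.0, 14.0))  # -> 3.0
```

A sharper (better-performing) thermocline yields a smaller value; a fully mixed tank approaches the tank height.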

  15. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    International Nuclear Information System (INIS)

    Adib, M A H M; Ismail, A R; Kardigama, K; Salaam, H A; Ahmad, Z; Johari, N H; Anuar, Z; Azmi, N S N; Adnan, F

    2012-01-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness at 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening ∼ 60%), which is acceptable compared to diffusers with 6 mm (∼ 40%) and 12 mm (∼ 80%) openings. In conclusion, computational analysis is a very useful method for studying the performance of thermal energy storage (TES).

  16. The Casimir effect as a candidate of dark energy

    OpenAIRE

    Matsumoto, Jiro

    2013-01-01

    It is known that the naively evaluated zero point energy of quantum fields deviates enormously from the observed value of the dark energy density. In this paper, we consider whether the Casimir energy, which is the zero point energy brought about by boundary conditions, can cause the accelerating expansion of the Universe, by using a proper renormalization method and introducing fermions of finite temperature living in $3+n+1$ space-time. We show that the zero temperature Casimir energ...

  17. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. 
The report includes

  18. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  19. CAISSE (Computer Aided Information System on Solar Energy) technical manual

    Energy Technology Data Exchange (ETDEWEB)

    Cantelon, P E; Beinhauer, F W

    1979-01-01

    The Computer Aided Information System on Solar Energy (CAISSE) was developed to provide the general public with information on solar energy and its potential uses and costs for domestic consumption. CAISSE is an interactive computing system which illustrates solar heating concepts through the use of 35 mm slides, text displays on a screen and a printed report. The user communicates with the computer by responding to questions about his home and heating requirements through a touch sensitive screen. The CAISSE system contains a solar heating simulation model which calculates the heating load capable of being supplied by a solar heating system and uses this information to illustrate installation costs, fuel savings and a 20 year life-cycle analysis of cost and benefits. The system contains several sets of radiation and weather data for Canada and USA. The selection of one of four collector models is based upon the requirements input during the computer session. Optimistic and pessimistic fuel cost forecasts are made for oil, natural gas, electricity, or propane; and the forecasted fuel cost is made the basis of the life cycle cost evaluation for the solar heating application chosen. This manual is organized so that each section describes one major aspect of the use of solar energy systems to provide energy for domestic consumption. The sources of data and technical information and the method of incorporating them into the CAISSE display system are described in the same order as the computer processing. Each section concludes with a list of future developments that could be included to make CAISSE outputs more regionally specific and more useful to designers. 19 refs., 1 tab.

  20. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guide line to future development of such a Monte Carlo code is given

  1. Computed tomography with energy-resolved detection: a feasibility study

    Science.gov (United States)

    Shikhaliev, Polad M.

    2008-03-01

    The feasibility of computed tomography (CT) with energy-resolved x-ray detection has been investigated. A breast CT design with multi-slit multi-slice (MSMS) data acquisition was used for this study. The MSMS CT includes linear arrays of photon counting detectors separated by gaps. This CT configuration allows for efficient scatter rejection and 3D data acquisition. The energy-resolved CT images were simulated using a digital breast phantom and the design parameters of the proposed MSMS CT. The phantom had a 14 cm diameter and 50/50 adipose/glandular composition, and included carcinoma, adipose, blood, iodine and CaCO3 as contrast elements. The x-ray technique was 90 kVp tube voltage with 660 mR skin exposure. Photon counting, charge (energy) integrating and photon energy weighting CT images were generated. The contrast-to-noise ratio (CNR) improvement with photon energy weighting was quantified. The dual energy subtracted images of CaCO3 and iodine were generated using a single CT scan at a fixed x-ray tube voltage. The x-ray spectrum was electronically split into low- and high-energy parts by a photon counting detector. The CNR of the energy weighting CT images of carcinoma, blood, adipose, iodine, and CaCO3 was higher by a factor of 1.16, 1.20, 1.21, 1.36 and 1.35, respectively, as compared to CT with a conventional charge (energy) integrating detector. Photon energy weighting was applied to CT projections prior to dual energy subtraction and reconstruction. Photon energy weighting improved the CNR in dual energy subtracted CT images of CaCO3 and iodine by a factor of 1.35 and 1.33, respectively. The combination of CNR improvements due to scatter rejection and energy weighting was in the range of 1.71-2 depending on the type of the contrast element. The tilted-angle CZT detector was considered as the detector of choice. Experiments were performed to test the effect of the tilting angle on the energy spectrum.
Using the CZT detector with 20° tilting angle decreased the

  2. Computation of Hemagglutinin Free Energy Difference by the Confinement Method

    Science.gov (United States)

    2017-01-01

    Hemagglutinin (HA) mediates membrane fusion, a crucial step during influenza virus cell entry. How many HAs are needed for this process is still subject to debate. To aid in this discussion, the confinement free energy method was used to calculate the conformational free energy difference between the extended intermediate and postfusion state of HA. Special care was taken to comply with the general guidelines for free energy calculations, thereby obtaining convergence and demonstrating reliability of the results. The energy that one HA trimer contributes to fusion was found to be 34.2 ± 3.4kBT, similar to the known contributions from other fusion proteins. Although computationally expensive, the technique used is a promising tool for the further energetic characterization of fusion protein mechanisms. Knowledge of the energetic contributions per protein, and of conserved residues that are crucial for fusion, aids in the development of fusion inhibitors for antiviral drugs. PMID:29151344

  3. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    Full Text Available As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated SLA, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is compounded by the increasing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emission on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale, while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate a suite of shadow processes that execute concurrently with the main process, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction, and is better suited for compute-intensive execution models, where up to 30% more profit can be achieved due to reduced energy consumption.
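
A back-of-the-envelope sketch of the energy argument behind running shadows slowly; the cube-rule power model (P ~ f^3) and the 0.4 shadow speed are our assumptions, and failure recovery is omitted entirely:

```python
def energy(speed, work):
    """Toy DVFS model: power ~ speed**3 and time = work / speed, so
    energy = power * time = work * speed**2 (cube-rule assumption)."""
    return work * speed ** 2

def traditional_replication_energy(work=1.0):
    # Classic replication: two replicas both execute at full speed.
    return 2 * energy(1.0, work)

def shadow_replication_energy(work=1.0, shadow_speed=0.4):
    # Shadow Replication, failure-free case only: the main runs at full
    # speed while the shadow makes slow progress; the shadow's partial work
    # is discarded once the main succeeds. Speeding the shadow up after a
    # main failure is not modeled in this sketch.
    main = energy(1.0, work)
    shadow = energy(shadow_speed, shadow_speed * work)  # runs for work/1.0 time
    return main + shadow

print(traditional_replication_energy())       # -> 2.0
print(round(shadow_replication_energy(), 3))  # -> 1.064
```

Under these assumptions the slow shadow costs only a few percent of the main's energy, which is the intuition behind the reported savings.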

  4. Exascale for Energy: The Role of Exascale Computing in Energy Security

    Energy Technology Data Exchange (ETDEWEB)

    Authors, Various

    2010-07-15

    How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing - expected to be available within the next eight to ten years - may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerate the improvement of existing energy technologies and the development of new transformational technologies by pre-selecting the designs most likely to be successful for experimental validation, rather than relying on trial and error. The predictive understanding of complex engineered systems made possible by computational modeling will also reduce the construction and operations costs, optimize performance, and improve safety. Exascale computing will make possible fundamentally new approaches to quantifying the uncertainty of safety and performance engineering. This report discusses potential contributions of exa-scale modeling in four areas of energy production and distribution: nuclear power, combustion, the electrical grid, and renewable sources of energy, which include hydrogen fuel, bioenergy conversion, photovoltaic solar energy, and wind turbines. Examples of current research are taken from projects funded by the U.S. Department of Energy (DOE) Office of Science at universities and national laboratories, with a special focus on research conducted at Lawrence Berkeley National Laboratory.

  5. Computational methods for planning and evaluating geothermal energy projects

    International Nuclear Information System (INIS)

    Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.

    1999-01-01

    In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)

  6. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to make fast analysis of high amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 Mb in size. The time taken to analyze the data and obtain fast results depends on high computational power. The main advantage of using GPU (Graphic Processor Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

  7. A review of computer tools for analysing the integration of renewable energy into various energy systems

    DEFF Research Database (Denmark)

    Connolly, D.; Lund, Henrik; Mathiesen, Brian Vad

    2010-01-01

    This paper includes a review of the different computer tools that can be used to analyse the integration of renewable energy. Initially 68 tools were considered, but 37 were included in the final analysis, which was carried out in collaboration with the tool developers or recommended points of contact. The results in this paper provide the information necessary to identify a suitable energy tool for analysing the integration of renewable energy into various energy-systems under different objectives. It is evident from this paper that there is no energy tool that addresses all issues related to integrating renewable energy, but instead the ‘ideal’ energy tool is highly dependent on the specific objectives that must be fulfilled. The typical applications for the 37 tools reviewed (from analysing single-building systems to national energy-systems), combined with numerous other factors...

  8. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding of, and ability to compute, scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  9. An energy-efficient failure detector for vehicular cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excess expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, the existing failure detection algorithms are not designed to save the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, has been proposed specifically for vehicular cloud computing. 2E-FD not only provides an acceptable failure detection service, but also saves the battery consumption of RSUs. Through comparative experiments, the results show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
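
The abstract does not describe the 2E-FD algorithm itself, so the sketch below shows only the generic heartbeat mechanism such detectors build on; the class name, node names and timeout value are hypothetical:

```python
class HeartbeatFailureDetector:
    """Minimal heartbeat-style failure detector: a node is suspected if no
    heartbeat has arrived within `timeout` seconds. This is a generic
    textbook mechanism, not the 2E-FD algorithm."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now):
        # Record the latest heartbeat time for this node.
        self.last_seen[node] = now

    def suspected(self, node, now):
        # Never-seen nodes and silent nodes past the timeout are suspected.
        last = self.last_seen.get(node)
        return last is None or now - last > self.timeout

fd = HeartbeatFailureDetector(timeout=2.0)
fd.heartbeat("rsu-1", now=0.0)
print(fd.suspected("rsu-1", now=1.5))  # -> False: heartbeat is recent
print(fd.suspected("rsu-1", now=3.0))  # -> True: silent past the timeout
```

An energy-aware variant would additionally adapt the heartbeat rate to the RSU's battery state, which is the kind of tradeoff the paper targets.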

  10. Providing a computing environment for a high energy physics workshop

    International Nuclear Information System (INIS)

    Nicholls, J.

    1991-03-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail over leased lines. This presentation describes the pioneering effort by the Computing Department/Division at Fermilab in providing a local computing facility with world-wide networking capability for the Physics at Fermilab in the 1990's workshop held in Breckenridge, Colorado, in August 1989, as well as the enhanced facilities provided for the 1990 Summer Study on High Energy Physics at Snowmass, Colorado, in June/July 1990. Issues discussed include the type and sizing of the facilities, advance preparations, shipping, and on-site support, as well as an evaluation of the value of the facility to the workshop participants.

  11. KEYNOTE: Simulation, computation, and the Global Nuclear Energy Partnership

    Science.gov (United States)

    Reis, Victor, Dr.

    2006-01-01

    Dr. Victor Reis delivered the keynote talk at the closing session of the conference. The talk was forward looking and focused on the importance of advanced computing for large-scale nuclear energy goals such as Global Nuclear Energy Partnership (GNEP). Dr. Reis discussed the important connections of GNEP to the Scientific Discovery through Advanced Computing (SciDAC) program and the SciDAC research portfolio. In the context of GNEP, Dr. Reis talked about possible fuel leasing configurations, strategies for their implementation, and typical fuel cycle flow sheets. A major portion of the talk addressed lessons learnt from ‘Science Based Stockpile Stewardship’ and the Accelerated Strategic Computing Initiative (ASCI) initiative and how they can provide guidance for advancing GNEP and SciDAC goals. Dr. Reis’s colorful and informative presentation included international proverbs, quotes and comments, in tune with the international flavor that is part of the GNEP philosophy and plan. He concluded with a positive and motivating outlook for peaceful nuclear energy and its potential to solve global problems. An interview with Dr. Reis, addressing some of the above issues, is the cover story of Issue 2 of the SciDAC Review and available at http://www.scidacreview.org This summary of Dr. Reis’s PowerPoint presentation was prepared by Institute of Physics Publishing, the complete PowerPoint version of Dr. Reis’s talk at SciDAC 2006 is given as a multimedia attachment to this summary.

  12. Exascale for Energy: The Role of Exascale Computing in Energy Security

    International Nuclear Information System (INIS)

    2010-01-01

    How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing - expected to be available within the next eight to ten years - may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerate the improvement of existing energy technologies and the development of new transformational technologies by pre-selecting the designs most likely to be successful for experimental validation, rather than relying on trial and error. The predictive understanding of complex engineered systems made possible by computational modeling will also reduce the construction and operations costs, optimize performance, and improve safety. Exascale computing will make possible fundamentally new approaches to quantifying the uncertainty of safety and performance engineering. This report discusses potential contributions of exa-scale modeling in four areas of energy production and distribution: nuclear power, combustion, the electrical grid, and renewable sources of energy, which include hydrogen fuel, bioenergy conversion, photovoltaic solar energy, and wind turbines.

  13. Energy-resolved computed tomography: first experimental results

    International Nuclear Information System (INIS)

    Shikhaliev, Polad M

    2008-01-01

    First experimental results with energy-resolved computed tomography (CT) are reported. The contrast-to-noise ratio (CNR) in CT has been improved with x-ray energy weighting for the first time. Further, x-ray energy weighting improved the CNR in material decomposition CT when applied to CT projections prior to dual-energy subtraction. The existing CT systems use an energy (charge) integrating x-ray detector that provides a signal proportional to the energy of the x-ray photon. Thus, the x-ray photons with lower energies are scored less than those with higher energies. This underestimates the contribution of lower energy photons that would provide higher contrast. The highest CNR can be achieved if the x-ray photons are scored by a factor that would increase as the x-ray energy decreases. This could be performed by detecting each x-ray photon separately and measuring its energy. The energy selective CT data could then be saved, and any weighting factor could be applied digitally to a detected x-ray photon. The CT system includes a photon counting detector with linear arrays of pixels made from cadmium zinc telluride (CZT) semiconductor. A cylindrical phantom with 10.2 cm diameter made from tissue-equivalent material was used for CT imaging. The phantom included contrast elements representing calcifications, iodine, adipose and glandular tissue. The x-ray tube voltage was 120 kVp. The energy selective CT data were acquired, and used to generate energy-weighted and material-selective CT images. The energy-weighted and material decomposition CT images were generated using a single CT scan at a fixed x-ray tube voltage. For material decomposition the x-ray spectrum was digitally split into low- and high-energy parts and dual-energy subtraction was applied. The x-ray energy weighting resulted in CNR improvement of calcifications and iodine by a factor of 1.40 and 1.63, respectively, as compared to conventional charge integrating CT.
The x-ray energy weighting was also applied
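
The scoring rule described in this record can be sketched as follows; the specific weight w(E) = E^-3 is a choice commonly used in the energy-weighting literature and is our assumption, not a value given in the abstract:

```python
def weighted_signal(photon_energies_keV, mode="weighting"):
    """Score a list of detected photon energies under three detector models:
    'integrating' -> each photon contributes its energy (charge integration),
    'counting'    -> each photon contributes 1 (photon counting),
    'weighting'   -> each photon contributes E**-3, a low-energy-emphasising
                     weight common in the literature (assumed here)."""
    if mode == "integrating":
        return sum(photon_energies_keV)
    if mode == "counting":
        return float(len(photon_energies_keV))
    return sum(e ** -3 for e in photon_energies_keV)

# Low-energy photons, which carry more subject contrast, dominate the
# weighted signal; charge integration does the opposite.
photons = [30.0, 60.0, 90.0]
print(weighted_signal(photons, "integrating"))  # -> 180.0
print(weighted_signal(photons, "counting"))     # -> 3.0
```

In a real photon counting detector each event's measured energy is binned, and the weighting can be applied digitally after acquisition, exactly as the abstract describes.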

  14. An accessibility solution of cloud computing by solar energy

    Directory of Open Access Journals (Sweden)

    Zuzana Priščáková

    2013-01-01

    Full Text Available Cloud computing is a modern, innovative technology for solving problems of data storage, data processing, company infrastructure building and so on. Many companies worry about the changes brought by implementing this solution, because these changes could have a negative impact on the company or, in the case of newly established companies, because of an unfamiliar environment. Data accessibility, integrity and security are among the basic problems of cloud computing. The aim of this paper is to offer, and scientifically confirm, a proposal for improving the accessibility of the cloud by implementing solar energy as a primary source. Problems with accessibility arise from power failures, during which data may be stolen or lost. Since a cloud is usually run from a server, the server's dependence on power is strong. Modern conditions offer us a new, more innovative solution that is ecological as well as economical for the company. The Sun as a steady source of energy offers the possibility to produce the necessary energy with a solar technique - solar panels. Connecting a solar panel as the primary source of energy for a server would remove its dependence on grid power as well as possible failures; grid power would remain as a secondary source. Such an ecological solution would also influence the economic side of the company, because energy consumption would be lower. Besides a proposal of an accessibility solution, this paper involves a physical and mathematical solution to the calculation of solar energy incident on the Earth, a calculation of the panel size by the cosine method, and a simulation of these calculations in MATLAB.
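
The cosine-method calculation mentioned at the end can be sketched as below; the 1000 W/m^2 beam value, 60 degree incidence angle and 20% module efficiency are illustrative assumptions, and the paper's own MATLAB model is not reproduced here:

```python
import math

def panel_irradiance(direct_normal_wm2, incidence_deg):
    """Plane-of-array irradiance by the cosine method: the beam component is
    scaled by cos(theta), where theta is the angle between the sun direction
    and the panel normal. Diffuse and reflected components are ignored."""
    return direct_normal_wm2 * math.cos(math.radians(incidence_deg))

def panel_area(load_w, irradiance_wm2, efficiency=0.2):
    """Panel area needed to cover an electrical load `load_w`; the 20%
    module efficiency is an assumed round number, not a paper value."""
    return load_w / (irradiance_wm2 * efficiency)

poa = panel_irradiance(1000.0, 60.0)     # 1000 W/m^2 beam at 60 deg incidence
print(round(poa, 1))                     # -> 500.0
print(round(panel_area(400.0, poa), 2))  # -> 4.0 (m^2, for a 400 W server)
```

A full sizing calculation would integrate this over the day and year and add battery storage for night-time operation, which is where the secondary grid source comes in.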

  15. Computed Potential Energy Surfaces and Minimum Energy Pathway for Chemical Reactions

    Science.gov (United States)

    Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)

    1994-01-01

    Computed potential energy surfaces are often required for computation of such observables as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method with the Dunning correlation consistent basis sets to obtain accurate energetics, gives useful results for a number of chemically important systems. Applications to complex reactions leading to NO and soot formation in hydrocarbon combustion are discussed.

  16. Computing at the leading edge: Research in the energy sciences

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Van Dyke, P.T. [eds.

    1994-02-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both of the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys.

  17. Computing at the leading edge: Research in the energy sciences

    International Nuclear Information System (INIS)

    Mirin, A.A.; Van Dyke, P.T.

    1994-01-01

The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both of the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is a prerequisite to ensuring their future progress in solving the Grand Challenges. Articles included in this publication are: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; and electronic structure and phase stability of random alloys.

  18. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  19. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  20. Report of the Federal Office of Public Health (O.F.S.P.) in collaboration with the Institute of Radiation Protection and Nuclear Safety (I.R.S.N.) concerning the zero point of CERN

    International Nuclear Information System (INIS)

    2007-01-01

    The radiation monitoring of the environment carried out in the vicinity of CERN, in the framework of the zero point before the Large Hadron Collider (L.H.C.) begins operation, aims to establish an initial state of the radiological situation in the atmosphere, soils and water bodies, in order to meet the following objectives: to provide precise knowledge of the current levels of environmental radioactivity, so that any impact of the operation of the L.H.C. facilities, possibly a contamination process, can be detected very early; to check that the impact of radioactive releases and external irradiation remains within the 0.3 mSv/year dose target and does not exceed 1 mSv/year; and to provide a methodology and monitoring means allowing efficient handling of metrological problems in case of an increase in radioactivity. (N.C.)

  1. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  2. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  3. Determinação do ponto de carga zero da bauxita da região nordeste do Pará Determination of the zero point of charge of the northeast of Pará bauxite

    Directory of Open Access Journals (Sweden)

    R. L. S. Pinto

    2012-12-01

    Full Text Available In the northeast of Pará, Brazil, bauxite pulp is transported through a pipeline, viscosity being a fundamental control parameter for pumping the ore. This study illustrates the influence of pH on pulp rheology through determination of the zero point of charge (pHpcz) of the bauxite. Particle size analysis, scanning electron microscopy (SEM), EDS, potentiometric tests evaluating ammonium chloride as an indifferent electrolyte, and rheological tests at different pH values were performed. The results revealed that ammonium chloride can be used as an indifferent electrolyte for this type of analysis, and that viscosity decreases at pH values far from the zero point of charge.

  4. Computed Potential Energy Surfaces and Minimum Energy Pathways for Chemical Reactions

    Science.gov (United States)

    Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)

    1994-01-01

    Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. For some dynamics methods, global potential energy surfaces are required. In this case, it is necessary to obtain the energy at a complete sampling of all the energetically accessible arrangements of the nuclei, and then a fitting function must be obtained to interpolate between the computed points. In other cases, characterization of the stationary points and the reaction pathway connecting them is sufficient. These properties may be readily obtained using analytical derivative methods. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications, including global potential energy surfaces for H + O2, H + N2, and O(3P) + H2, and reaction pathways for complex reactions, including reactions leading to NO and soot formation in hydrocarbon combustion.

  5. The Role of Energy Reservoirs in Distributed Computing: Manufacturing, Implementing, and Optimizing Energy Storage in Energy-Autonomous Sensor Nodes

    Science.gov (United States)

    Cowell, Martin Andrew

    The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. As every aspect of global economies integrates distributed computing into its industrial systems, these systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensure the continued operation of these networks, and is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes will increase their lifetimes by an order of magnitude. This work investigates the coupling of high-power energy storage with energy-harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First we consider the energy storage mechanisms of supercapacitors and batteries, and identify favorable characteristics in both reservoir types. We then discuss experimental methods used to manufacture high-power supercapacitors in our labs. We go on to detail the integration of our fabricated devices with collaborating labs to create functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components. In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to

  6. Aiding Design of Wave Energy Converters via Computational Simulations

    Science.gov (United States)

    Jebeli Aqdam, Hejar; Ahmadi, Babak; Raessi, Mehdi; Tootkaboni, Mazdak

    2015-11-01

    With the increasing interest in renewable energy sources, wave energy converters will continue to gain attention as a viable alternative to current electricity production methods. It is therefore crucial to develop computational tools for the design and analysis of wave energy converters. A successful design requires balance between design performance and cost. Here an analytical solution is used for the approximate analysis of interactions between a flap-type wave energy converter (WEC) and waves. The method is verified using other flow solvers and experimental test cases. The model is then used in conjunction with a powerful heuristic optimization engine, Charged System Search (CSS), to explore the WEC design space. CSS is inspired by the behavior of charged particles. It searches the design space by treating candidate answers as charged particles and moving them according to Coulomb's law of electrostatics and Newton's laws of motion to find the global optimum. Finally the impacts of changes in different design parameters on the power take-off of the superior WEC designs are investigated. National Science Foundation, CBET-1236462.
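The charged-particle search described in this record can be sketched in a few lines. The toy minimizer below is an illustrative assumption, not the authors' implementation: the charge formula, damping factor, and step scaling are simplified choices, and the objective is a plain sphere function rather than a WEC power model.

```python
import random

def css_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Toy Charged System Search: candidate solutions act as charged
    particles whose charge grows with fitness, so the swarm is pulled
    toward better regions under a Coulomb-like attraction."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        fit = [f(p) for p in pos]
        lo, hi = min(fit), max(fit)
        for i, fi in enumerate(fit):                  # archive the best solution seen
            if fi < best_f:
                best_f, best_x = fi, pos[i][:]
        q = [(hi - fi) / (hi - lo + 1e-12) for fi in fit]  # charge in [0, 1]
        for i in range(n_particles):
            force = [0.0] * dim
            for j in range(n_particles):
                if i == j:
                    continue
                diff = [pos[j][d] - pos[i][d] for d in range(dim)]
                r = sum(x * x for x in diff) ** 0.5 + 1e-12
                for d in range(dim):
                    force[d] += q[j] * diff[d] / r**2  # Coulomb-like pull
            for d in range(dim):
                vel[i][d] = 0.5 * vel[i][d] + 0.1 * rng.random() * force[d]
                pos[i][d] += vel[i][d]
    return best_f, best_x

best_f, best_x = css_minimize(lambda x: sum(v * v for v in x), dim=2)
print(best_f)
```

Because the best-seen solution is archived every iteration, the returned value can only improve on the initial random population.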

  7. The Stranger: Adventures at Zero Point

    Science.gov (United States)

    Heraud, Richard

    2013-01-01

    In one of his notebooks, Albert Camus describes, "The stranger," "The myth of Sisyphus," "Caligula" and "The misunderstanding" as pertaining to a series; a schema that suggests that if one were to write about one of these literary works, one would be writing about parts of a whole unless one also engaged…

  8. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2016-01-01

    Full Text Available Cloud computing is a new technology which supports resource sharing on a "pay as you go" basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we have devised a methodology which analyzes the behavior of the given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated using asymptotic notation. Using a best-fit strategy the appropriate host is identified and the incoming job is allocated to it. From the measured time complexity, the required clock frequency of the host is determined, and the CPU frequency is scaled up or down accordingly using the DVFS scheme, enabling up to 55% of total power consumption to be saved.
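The frequency-selection step this record describes (estimate the job's work, then scale the clock just high enough to meet its deadline) can be sketched as follows. This is a minimal sketch under assumptions: the cubic power model, the frequency table, and all numbers are illustrative, not the paper's measurements.

```python
def pick_frequency(est_cycles, deadline_s, freqs_hz):
    """Pick the lowest available frequency that still meets the deadline;
    running slower at lower voltage cuts dynamic power."""
    for f in sorted(freqs_hz):
        if est_cycles / f <= deadline_s:
            return f
    return max(freqs_hz)  # deadline infeasible: run flat out

def dynamic_power(f_hz, f_max_hz, p_max_w):
    """Dynamic CMOS power is ~ C * V^2 * f; assuming voltage scales
    linearly with frequency, power scales roughly with f^3."""
    return p_max_w * (f_hz / f_max_hz) ** 3

freqs = [1.0e9, 1.5e9, 2.0e9, 2.6e9]           # hypothetical DVFS steps
f = pick_frequency(est_cycles=3.0e9, deadline_s=2.5, freqs_hz=freqs)
print(f)                                        # → 1500000000.0
print(round(dynamic_power(f, 2.6e9, 95.0), 1))  # → 18.2
```

At 1.5 GHz the 3-gigacycle job takes 2.0 s, inside the 2.5 s deadline, while the cubic model estimates power drops from 95 W to about 18 W.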

  9. Computational design of RNAs with complex energy landscapes.

    Science.gov (United States)

    Höner zu Siederdissen, Christian; Hammer, Stefan; Abfalter, Ingrid; Hofacker, Ivo L; Flamm, Christoph; Stadler, Peter F

    2013-12-01

    RNA has become an integral building material in synthetic biology. Dominated by their secondary structures, which can be computed efficiently, RNA molecules are amenable not only to in vitro and in vivo selection, but also to rational, computation-based design. While the inverse folding problem of constructing an RNA sequence with a prescribed ground-state structure has received considerable attention for nearly two decades, there have been few efforts to design RNAs that can switch between distinct prescribed conformations. We introduce a user-friendly tool for designing RNA sequences that fold into multiple target structures. The underlying algorithm makes use of a combination of graph coloring and heuristic local optimization to find sequences whose energy landscapes are dominated by the prescribed conformations. A flexible interface allows the specification of a wide range of design goals. We demonstrate that bi- and tri-stable "switches" can be designed easily with moderate computational effort for the vast majority of compatible combinations of desired target structures. RNAdesign is freely available under the GPL-v3 license. Copyright © 2013 Wiley Periodicals, Inc.

  10. The implementation of CP1 computer code in the Honeywell Bull computer in Brazilian Nuclear Energy Commission (CNEN)

    International Nuclear Information System (INIS)

    Couto, R.T.

    1987-01-01

    The implementation of the CP1 computer code on the Honeywell Bull computer at the Brazilian Nuclear Energy Commission is presented. CP1 is a computer code used to solve the point-kinetics equations with Doppler feedback from the system temperature variation, based on Newton's law of cooling. [pt]

  11. Analytical Computation of Energy-Energy Correlation at Next-to-Leading Order in QCD.

    Science.gov (United States)

    Dixon, Lance J; Luo, Ming-Xing; Shtabovenko, Vladyslav; Yang, Tong-Zhi; Zhu, Hua Xing

    2018-03-09

    The energy-energy correlation (EEC) between two detectors in e^{+}e^{-} annihilation was computed analytically at leading order in QCD almost 40 years ago, and numerically at next-to-leading order (NLO) starting in the 1980s. We present the first analytical result for the EEC at NLO, which is remarkably simple, and facilitates analytical study of the perturbative structure of the EEC. We provide the expansion of the EEC in the collinear and back-to-back regions through next-to-leading power, information which should aid resummation in these regions.

  12. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and energy have been identified as a first-order challenge for future extreme-scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but to make the best use of the solutions in an HPC environment, periodic tuning by facility operators and software components will likely be required. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide the HPC community with a common understanding of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  13. Thrifty: An Exascale Architecture for Energy Proportional Computing

    Energy Technology Data Exchange (ETDEWEB)

    Torrellas, Josep [Univ. of Illinois, Champaign, IL (United States)

    2014-12-23

    The objective of this project is to design different aspects of a novel exascale architecture called Thrifty. Our goal is to focus on the challenges of power/energy efficiency, performance, and resiliency in exascale systems. The project includes work on computer architecture (Josep Torrellas from University of Illinois), compilation (Daniel Quinlan from Lawrence Livermore National Laboratory), runtime and applications (Laura Carrington from University of California San Diego), and circuits (Wilfred Pinfold from Intel Corporation). In this report, we focus on the progress at the University of Illinois during the last year of the grant (September 1, 2013 to August 31, 2014). We also point to the progress in the other collaborating institutions when needed.

  14. Computer-aided safety systems of industrial high energy objects

    International Nuclear Information System (INIS)

    Topolsky, N.G.; Gordeev, S.G.

    1995-01-01

    Modern facilities of the fuel, energy, and chemical industries are characterized by high power consumption; by the presence of large quantities of combustible and explosive substances used in technological processes; by extensive supply systems for liquid and gaseous reagents, lubricants and coolants, processing products, and production wastes; by extensive ventilation and pneumatic transport; and by complex control systems for energy, material, and information flows. Such facilities have advanced infrastructures, including a significant number of engineering buildings intended for the storage, transportation, and processing of combustible liquids, gaseous fuels, and solid materials. Examples of such facilities are nuclear and thermal power stations, chemical plants, machine-building factories, iron and steel enterprises, etc. Many tasks and functions characterizing the problem of fire safety at these facilities can be accomplished only through the development of special Computer-Aided Fire Safety Systems (CAFSS). The CAFSS for these facilities are intended to reduce the hazard of disastrous accidents, both those causing fires and those caused by them. The tasks of fire prevention and rescue work at large-scale industrial facilities are analyzed within the bounds of the recommended conception. A functional structure of the CAFSS with a list of its main subsystems has been proposed.

  15. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need to develop capabilities to handle the large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.), and so on, to get meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments, as well as thoughts on future research directions, for high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternative energy sources.

  16. Modeling molecular boiling points using computed interaction energies.

    Science.gov (United States)

    Peterangelo, Stephen C; Seybold, Paul G

    2017-12-20

    The noncovalent van der Waals interactions between molecules in liquids are typically described in textbooks as occurring between the total molecular dipoles (permanent, induced, or transient) of the molecules. This notion was tested by examining the boiling points of 67 halogenated hydrocarbon liquids using quantum chemically calculated molecular dipole moments, ionization potentials, and polarizabilities obtained from semi-empirical (AM1 and PM3) and ab initio Hartree-Fock [HF 6-31G(d), HF 6-311G(d,p)], and density functional theory [B3LYP/6-311G(d,p)] methods. The calculated interaction energies and an empirical measure of hydrogen bonding were employed to model the boiling points of the halocarbons. It was found that only terms related to London dispersion energies and hydrogen bonding proved significant in the regression analyses, and the performances of the models generally improved at higher levels of quantum chemical computation. An empirical estimate for the molecular polarizabilities was also tested, and the best models for the boiling points were obtained using either this empirical polarizability itself or the polarizabilities calculated at the B3LYP/6-311G(d,p) level, along with the hydrogen-bonding parameter. The results suggest that the cohesive forces are more appropriately described as resulting from highly localized interactions rather than interactions between the global molecular dipoles.
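A regression of the kind this record describes (boiling point against a polarizability term and a hydrogen-bonding measure) can be sketched with an ordinary least-squares fit. The data below are invented toy values for illustration, not the paper's 67-halocarbon set, and the coefficients have no physical authority.

```python
import numpy as np

# Toy illustrative data (NOT the paper's dataset): polarizability alpha
# (arbitrary units), a hydrogen-bonding count hb, and boiling point (K).
alpha = np.array([4.5, 6.5, 8.5, 10.5, 6.0, 8.0])
hb    = np.array([0.0, 0.0, 0.0, 0.0,  1.0, 1.0])
bp    = np.array([249.0, 290.0, 330.0, 370.0, 300.0, 340.0])

# Least-squares fit bp ≈ c0 + c1*alpha + c2*hb via a design matrix.
X = np.column_stack([np.ones_like(alpha), alpha, hb])
coef, *_ = np.linalg.lstsq(X, bp, rcond=None)

pred = X @ coef
r2 = 1.0 - ((bp - pred) ** 2).sum() / ((bp - bp.mean()) ** 2).sum()
print(coef.round(2), round(r2, 4))
```

With data constructed to be nearly linear in both predictors, the fit recovers a slope of about 20 K per unit polarizability and an r² close to 1; real halocarbon data would of course scatter far more.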

  17. Clean Energy Use for Cloud Computing Federation Workloads

    Directory of Open Access Journals (Sweden)

    Yahav Biran

    2017-08-01

    Full Text Available Cloud providers seek to maximize their market share. Traditionally, they deploy datacenters with sufficient capacity to accommodate their entire computing demand while maintaining geographical affinity to their customers. Achieving these goals by a single cloud provider is increasingly unrealistic from a cost-of-ownership perspective. Moreover, the carbon emissions from underutilized datacenters place an increasing demand on electricity and are a growing factor in the cost of cloud provider datacenters. Cloud-based systems may be classified into two categories: serving systems and analytical systems. We studied two primary workload types, on-demand video streaming as a serving system and MapReduce jobs as an analytical system, and suggest two distinct energy-mix strategies, one for each workload. The recognition that on-demand video streaming now constitutes the bulk of traffic to Internet consumers provides a path to mitigate rising energy demand. On-demand video is usually served through Content Delivery Networks (CDN), often scheduled in backend and edge datacenters. This publication describes a CDN deployment solution that utilizes green energy to supply the on-demand streaming workload. A cross-cloud-provider collaboration will allow cloud providers to both operate near their customers and reduce operational costs, primarily by lowering the datacenter deployments per provider ratio. Our approach optimizes cross-datacenter deployment. Specifically, we model an optimized CDN-edge instance allocation system that maximizes, under a set of realistic constraints, green energy utilization. The architecture of this cross-cloud coordinator service is based on Ubernetes, an open source container cluster manager that is a federation of Kubernetes clusters. It is shown how, under reasonable constraints, it can reduce the projected datacenter carbon emissions growth by 22% from the currently reported consumption. We also suggest operating

  18. Assessment of proposed electromagnetic quantum vacuum energy extraction methods

    OpenAIRE

    Moddel, Garret

    2009-01-01

    In research articles and patents several methods have been proposed for the extraction of zero-point energy from the vacuum. None has been reliably demonstrated, but the proposals remain largely unchallenged. In this paper the feasibility of these methods is assessed in terms of underlying thermodynamics principles of equilibrium, detailed balance, and conservation laws. The methods are separated into three classes: nonlinear processing of the zero-point field, mechanical extraction using Cas...

  19. A primer on the energy efficiency of computing

    Energy Technology Data Exchange (ETDEWEB)

    Koomey, Jonathan G. [Research Fellow, Steyer-Taylor Center for Energy Policy and Finance, Stanford University (United States)

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  20. Computation techniques and computer programs to analyze Stirling cycle engines using characteristic dynamic energy equations

    Science.gov (United States)

    Larson, V. H.

    1982-01-01

    The basic equations that are used to describe the physical phenomena in a Stirling cycle engine are the general energy equations and equations for the conservation of mass and conversion of momentum. These equations, together with the equation of state, an analytical expression for the gas velocity, and an equation for mesh temperature are used in this computer study of Stirling cycle characteristics. The partial differential equations describing the physical phenomena that occurs in a Stirling cycle engine are of the hyperbolic type. The hyperbolic equations have real characteristic lines. By utilizing appropriate points along these curved lines the partial differential equations can be reduced to ordinary differential equations. These equations are solved numerically using a fourth-fifth order Runge-Kutta integration technique.
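Once the method of characteristics has reduced the hyperbolic PDEs to ordinary differential equations, a Runge-Kutta integrator advances them along the characteristic lines. As a sketch, here is the classic fourth-order scheme on a toy ODE; the report's fourth-fifth order variant additionally carries an embedded fifth-order estimate for step-size control, and the test equation below is illustrative rather than a Stirling-engine equation.

```python
import math

def rk4(f, t0, y0, t1, n_steps):
    """Classic 4th-order Runge-Kutta integration of dy/dt = f(t, y)
    from t0 to t1 with a fixed step (no embedded error control)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)  # weighted slope average
        t += h
    return y

# Toy ODE dy/dt = -y along a characteristic; exact solution is exp(-t).
y1 = rk4(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
print(abs(y1 - math.exp(-1.0)) < 1e-8)  # → True
```

With 100 steps the fourth-order global error is on the order of h⁴, far below the 1e-8 tolerance checked here; an embedded 4(5) pair would instead pick h adaptively from the difference between the two orders.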

  1. Development of optimized segmentation map in dual energy computed tomography

    Science.gov (United States)

    Yamakawa, Keisuke; Ueki, Hironori

    2012-03-01

    Dual energy computed tomography (DECT) has been widely used in clinical practice and has been particularly effective for tissue diagnosis. In DECT, the difference between the two attenuation coefficients acquired at two X-ray energies enables tissue segmentation. One problem in conventional DECT is that the segmentation deteriorates in some cases, such as bone removal. This is due to two reasons. Firstly, the segmentation map is optimized without considering the X-ray condition (tube voltage and current). If we consider the tube voltage, it is possible to create an optimized map, but unfortunately we cannot consider the tube current. Secondly, the X-ray condition is not optimized. The condition can be set empirically, but this means that the optimized condition is not used correctly. To solve these problems, we have developed methods for optimizing the map (Method-1) and the condition (Method-2). In Method-1, the map is optimized to minimize segmentation errors, with the distribution of the attenuation coefficient modeled by considering the tube current. In Method-2, the optimized condition is chosen to minimize segmentation errors over tube voltage-current combinations while keeping the total exposure constant. We evaluated the effectiveness of Method-1 in a phantom experiment under a fixed condition, and of Method-2 in a phantom experiment under different combinations calculated from the constant total exposure. When Method-1 was combined with Method-2, the segmentation error was reduced from 37.8 to 13.5%. These results demonstrate that our developed methods can achieve highly accurate segmentation while keeping the total exposure constant.
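The core idea of a DECT segmentation map (classify each voxel by where its low-kVp and high-kVp attenuation pair falls) can be sketched as a nearest-centroid rule. Everything here is an illustrative assumption: the material names, centroid values, and the noise-weighting sigma standing in for the tube-current-dependent model are made up, not the paper's optimized map.

```python
# Hypothetical material centroids in the (mu_low_kVp, mu_high_kVp) plane
# (made-up attenuation values, cm^-1).
CENTROIDS = {"bone": (0.60, 0.45), "iodine": (0.55, 0.30)}

def classify(mu_low, mu_high, sigma=(1.0, 1.0)):
    """Assign a voxel to the nearest material centroid. The sigma weights
    crudely stand in for a tube-current-dependent noise model: a noisier
    axis gets a larger sigma and so counts less in the distance."""
    def dist2(c):
        return ((mu_low - c[0]) / sigma[0]) ** 2 + ((mu_high - c[1]) / sigma[1]) ** 2
    return min(CENTROIDS, key=lambda m: dist2(CENTROIDS[m]))

print(classify(0.59, 0.44))  # → bone
print(classify(0.54, 0.31))  # → iodine
```

In this picture, "optimizing the map" amounts to choosing the decision boundary (here implied by the centroids and sigmas) that minimizes expected misclassification under the noise the chosen tube settings produce.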

  2. Science panel to study mega-computers to assess potential energy contributions

    CERN Multimedia

    Jones, D

    2003-01-01

    "Energy Department advisers plan to examine high-end computing in the coming year and assess how computing power could be used to further DOE's basic research agenda on combustion, fusion and other topics" (1 page).

  3. A theology of matter. The strong interaction at strong resonance at the meeting point of I and not-I. Conjectures about oscillating strings and fluctuating vacuum energy; Eine Theologie der Materie. Die starke Wechselwirkung bei starker Resonanz am Begegnungs-Ort von Ich und Nicht-Ich. Mutmassungen ueber oszillierende Strings und fluktuierende Vakuum-Energie

    Energy Technology Data Exchange (ETDEWEB)

    Boes, Roderick H.

    2011-07-01

    This book shows that matter and consciousness are intertwined and mutually produce each other. Quantum vacuum fluctuations ensure that the latent energy of each event is present as zero-point energy simultaneously at all points of the cosmos.

  4. Computational fluid dynamics simulation of indoor climate in low energy buildings: Computational set up

    Directory of Open Access Journals (Sweden)

    Risberg Daniel

    2017-01-01

    Full Text Available In this paper CFD was used for simulation of the indoor climate in a part of a low energy building. The focus of the work was on investigating the computational set up, such as grid size and boundary conditions, in order to solve the indoor climate problems in an accurate way. Future work is to model a complete building, with reasonable calculation time and accuracy. A limited number of grid elements and knowledge of boundary settings are therefore essential. A grid edge size of around 0.1 m was sufficient to predict the climate accurately, according to a grid-independence study. Different turbulence models were compared, with only small differences in the indoor air velocities and temperatures. The models show that radiation between building surfaces has a large impact on the temperature field inside the building, with the largest differences at the floor level. Simplifying the simulations by modelling the radiator as a surface in the outer wall of the room is appropriate for the calculations. The overall indoor climate is finally compared between three different cases for the outdoor air temperature. The results show a good indoor climate for a low energy building throughout the year.

  5. Concept and computation of radiation dose at high energies

    International Nuclear Information System (INIS)

    Sarkar, P.K.

    2010-01-01

    Computational dosimetry, a subdiscipline of computational physics devoted to radiation metrology, is the determination of absorbed dose and other dose-related quantities by numerical methods. Computations are done separately for external and internal dosimetry. The methodology used in external beam dosimetry is necessarily a combination of experimental radiation dosimetry and theoretical dose computation, since it is not feasible to plan physical dose measurements from inside a living human body.

  6. Energy-efficient cloud computing : autonomic resource provisioning for datacenters

    OpenAIRE

    Tesfatsion, Selome Kostentinos

    2018-01-01

    Energy efficiency has become an increasingly important concern in data centers because of issues associated with energy consumption, such as capital costs, operating expenses, and environmental impact. While energy loss due to suboptimal use of facilities and non-IT equipment has largely been reduced through the use of best-practice technologies, addressing energy wastage in IT equipment still requires the design and implementation of energy-aware resource management systems. This thesis focu...

  7. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    Science.gov (United States)

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, theoretic analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.
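    The finite improvement property exploited by the COD algorithm can be illustrated with a toy congestion game: each sensor chooses between a fixed local-execution cost and an offloading cost that grows with the number of offloaders, and sensors revise their choices one at a time until no one benefits from switching, i.e. a Nash equilibrium. The cost model and constants below are hypothetical, not the paper's.

```python
def offload_cost(k):
    """Cost of offloading when k sensors share the cloudlet/AP (illustrative congestion model)."""
    return 2.0 + 1.5 * k      # delay + energy cost grows with contention

LOCAL_COST = 10.0             # cost of executing locally (illustrative)

def best_response_dynamics(n):
    """Finite-improvement iteration: sensors switch one at a time to the cheaper action.
    Terminates because the game admits a potential function."""
    choice = [False] * n                        # False = local, True = offload
    improved = True
    while improved:
        improved = False
        for i in range(n):
            k = sum(choice)                     # current number of offloaders
            k_if_offload = k if choice[i] else k + 1
            better = offload_cost(k_if_offload) < LOCAL_COST
            if better != choice[i]:
                choice[i] = better
                improved = True
    return sum(choice)                          # offloaders at equilibrium

equilibrium_offloaders = best_response_dynamics(10)
```

With these numbers the dynamics settle at five offloaders: a sixth would push the shared cost above the local cost, so no sensor has a profitable deviation.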

  8. The green computing book tackling energy efficiency at large scale

    CERN Document Server

    Feng, Wu-chun

    2014-01-01

    Low-Power, Massively Parallel, Energy-Efficient Supercomputers (The Blue Gene Team); Compiler-Driven Energy Efficiency (Mahmut Kandemir and Shekhar Srikantaiah); An Adaptive Run-Time System for Improving Energy Efficiency (Chung-Hsing Hsu, Wu-chun Feng, and Stephen W. Poole); Energy-Efficient Multithreading through Run-Time Adaptation; Exploring Trade-Offs between Energy Savings and Reliability in Storage Systems (Ali R. Butt, Puranjoy Bhattacharjee, Guanying Wang, and Chris Gniady); Cross-Layer Power Management (Zhikui Wang and Parthasarathy Ranganathan); Energy-Efficient Virtualized Systems (Ripal Nathuji and K

  9. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  10. Saving Energy and Money: A Lesson in Computer Power Management

    Science.gov (United States)

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…

  11. Computational Support for the Selection of Energy Saving Building Components

    NARCIS (Netherlands)

    De Wilde, P.J.C.J.

    2004-01-01

    Buildings use energy for heating, cooling and lighting, contributing to the problems of exhaustion of fossil fuel supplies and environmental pollution. In order to make buildings more energy-efficient an extensive set of 'energy saving building components' has been developed that contributes to

  12. Center for Advanced Energy Studies: Computer Assisted Virtual Environment (CAVE)

    Data.gov (United States)

    Federal Laboratory Consortium — The laboratory contains a four-walled 3D computer assisted virtual environment - or CAVE TM — that allows scientists and engineers to literally walk into their data...

  13. Computing for magnetic fusion energy research: The next five years

    International Nuclear Information System (INIS)

    Mann, L.; Glasser, A.; Sauthoff, N.

    1991-01-01

    This report considers computing needs in magnetic fusion for the next five years. It is the result of two and a half years of effort by representatives of all aspects of the magnetic fusion community. The report also factors in the results of a survey that was distributed to the laboratories and universities that support fusion. There are four areas of computing support discussed: theory, experiment, engineering, and systems

  14. Computing for magnetic fusion energy research: An updated vision

    International Nuclear Information System (INIS)

    Henline, P.; Giarrusso, J.; Davis, S.; Casper, T.

    1993-01-01

    This Fusion Computing Council perspective is written to present the primary concerns of the fusion computing community at the time of publication; it is necessarily a summary of the information contained in the individual sections. These concerns reflect FCC discussions during final review of contributions from the various working groups and portray our latest information. This report should itself be considered dynamic, requiring periodic updating in an attempt to track the rapid evolution of the computer industry relevant to the requirements of magnetic fusion research. The most significant common concern among the Fusion Computing Council working groups is networking capability. All groups see an increasing need for network services due to the use of workstations, distributed computing environments, increased use of graphics services, X-window usage, remote experimental collaborations, remote data access for specific projects, and other collaborations. Other areas of concern include support for workstations, enhanced infrastructure to support collaborations, the User Service Centers, NERSC and future massively parallel computers, and FCC-sponsored workshops

  15. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    OpenAIRE

    Xing Liu; Chaowei Yuan; Zhen Yang; Enda Peng

    2015-01-01

    Mobile cloud computing (MCC) combines cloud computing and mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users could not only improve the computational capability of MDs but also save operation consumption by offloading the mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of ener...

  16. Energy Consumption and Indoor Environment Predicted by a Combination of Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm

    2003-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution is introduced for improvement of the predictions of both the energy consumption and the indoor environment.The article describes a calculation...

  17. An urban energy performance evaluation system and its computer implementation.

    Science.gov (United States)

    Wang, Lei; Yuan, Guan; Long, Ruyin; Chen, Hong

    2017-12-15

    To improve the urban environment and effectively reflect and promote urban energy performance, an urban energy performance evaluation system was constructed, thereby strengthening urban environmental management capabilities. From the perspectives of internalization and externalization, a framework of evaluation indicators and key factors that determine urban energy performance and explore the reasons for differences in performance was proposed according to established theory and previous studies. Using the improved stochastic frontier analysis method, an urban energy performance evaluation and factor analysis model was built that brings performance evaluation and factor analysis into the same stage for study. According to data obtained for the Chinese provincial capitals from 2004 to 2013, the coefficients of the evaluation indicators and key factors were calculated by the urban energy performance evaluation and factor analysis model. These coefficients were then used to compile the program file. The urban energy performance evaluation system developed in this study was designed in three parts: a database, a distributed component server, and a human-machine interface. Its functions were designed as login, addition, edit, input, calculation, analysis, comparison, inquiry, and export. On the basis of these contents, an urban energy performance evaluation system was developed using Microsoft Visual Studio .NET 2015. The system can effectively reflect the status of and any changes in urban energy performance. Beijing was considered as an example to conduct an empirical study, which further verified the applicability and convenience of this evaluation system.

  18. Computer program for sizing residential energy recovery ventilator

    International Nuclear Information System (INIS)

    Koontz, M.D.; Lee, S.M.; Spears, J.W.; Kesselring, J.P.

    1991-01-01

    Energy recovery ventilators offer the prospect of tighter control over residential ventilation rates than manual methods, such as opening windows, with a lesser energy penalty. However, the appropriate size of such a ventilator is not readily apparent in most situations. Software for sizing energy recovery ventilators was developed to calculate the size of ventilator necessary to satisfy ASHRAE Standard 62-1989, Ventilation for Acceptable Indoor Air Quality, or a user-specified air exchange rate. Inputs to the software include house location, structural characteristics, house operations and energy costs, ventilation characteristics, and HVAC system COP/efficiency. Based on these inputs, the program estimates the existing air exchange rate for the house, the ventilation rate required to meet the ASHRAE standard or user-specified air exchange rate, the size of the ventilator needed to meet the requirement, and the expected changes in indoor air quality and energy consumption. An illustrative application of the software is provided in this paper
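    A minimal sketch of the sizing arithmetic such software performs, assuming the ASHRAE 62-1989 residential target of 0.35 air changes per hour (but at least 15 cfm per person) and subtracting the house's existing infiltration; the real program also models location, structure, and energy costs, which are omitted here.

```python
def ventilator_size_cfm(volume_ft3, occupants, infiltration_ach,
                        target_ach=0.35, cfm_per_person=15.0):
    """Required mechanical ventilation (cfm): the ASHRAE 62-1989 residential target
    (0.35 ach, but at least 15 cfm/person) minus existing infiltration."""
    required_cfm = max(target_ach * volume_ft3 / 60.0,   # ach -> cfm
                       cfm_per_person * occupants)
    existing_cfm = infiltration_ach * volume_ft3 / 60.0
    return max(required_cfm - existing_cfm, 0.0)

# Hypothetical 12,000 ft^3 house with 4 occupants and 0.2 ach of infiltration
size = ventilator_size_cfm(12000.0, 4, 0.2)
```

Here the 0.35 ach target (70 cfm) governs rather than the per-person minimum (60 cfm), and the 40 cfm of infiltration leaves a 30 cfm ventilator requirement.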

  19. Casimir energy of a BEC: from moderate interactions to the ideal gas

    International Nuclear Information System (INIS)

    Schiefele, J; Henkel, C

    2009-01-01

    Considering the Casimir effect due to phononic excitations of a weakly interacting dilute Bose-Einstein condensate (BEC), we derive a renormalized expression for the zero-temperature Casimir energy E_C of a BEC confined to a parallel plate geometry with periodic boundary conditions. Our expression is formally equivalent to the free energy of a bosonic field at finite temperature, with a nontrivial density of modes that we compute analytically. As a function of the interaction strength, E_C smoothly describes the transition from the weakly interacting Bogoliubov regime to the non-interacting ideal BEC. For the weakly interacting case, E_C reduces to leading order to the Casimir energy due to zero-point fluctuations of massless phonon modes. In the limit of an ideal Bose gas, our result correctly describes the Casimir energy going to zero
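    To make the phonon-limit statement concrete: the Bogoliubov dispersion is linear at small k, so the zero-point sum over these modes is that of a massless scalar field. A schematic sketch using standard textbook forms (the familiar Dirichlet parallel-plate coefficient is shown for orientation; the paper itself treats periodic boundary conditions, which change the numerical factor):

```latex
% Bogoliubov dispersion of a weakly interacting BEC, with gn = m c^2:
\varepsilon(k) = \sqrt{\frac{\hbar^2 k^2}{2m}\left(\frac{\hbar^2 k^2}{2m} + 2gn\right)}
               = \hbar c\, k \sqrt{1 + \tfrac{1}{2}(k\xi)^2},
\qquad c = \sqrt{gn/m}, \quad \xi = \frac{\hbar}{\sqrt{2}\, m c}.

% Small-k (phonon) limit: \varepsilon \approx \hbar c k, i.e. a massless mode.
% Zero-point energy of a massless scalar between Dirichlet plates
% (separation L, area A), the standard result:
\frac{E_C}{A} = -\,\frac{\pi^2 \hbar c}{1440\, L^3}.
```

The crossover the abstract describes corresponds to the (k\xi) correction becoming important: as the interaction g → 0, the sound speed c → 0 and the phononic Casimir energy vanishes with it.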

  20. 2003 Conference for Computing in High Energy and Nuclear Physics

    International Nuclear Information System (INIS)

    Schalk, T.

    2003-01-01

    The conference was subdivided into the following separate tracks. Electronic presentations and/or videos are provided on the main website link. Sessions: Plenary Talks and Panel Discussion; Grid Architecture, Infrastructure, and Grid Security; HENP Grid Applications, Testbeds, and Demonstrations; HENP Computing Systems and Infrastructure; Monitoring; High Performance Networking; Data Acquisition, Triggers and Controls; First Level Triggers and Trigger Hardware; Lattice Gauge Computing; HENP Software Architecture and Software Engineering; Data Management and Persistency; Data Analysis Environment and Visualization; Simulation and Modeling; and Collaboration Tools and Information Systems

  1. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  2. A survey of energy saving techniques for mobile computers

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Havinga, Paul J.M.

    1997-01-01

    Portable products such as pagers, cordless and digital cellular telephones, personal audio equipment, and laptop computers are increasingly being used. Because these applications are battery powered, reducing power consumption is vital. In this report we first give a survey of techniques for

  3. ND6600 computer in fusion-energy research

    International Nuclear Information System (INIS)

    Young, K.G.

    1982-12-01

    The ND6600, a computer-based multichannel analyzer with eight ADCs, is used to acquire x-ray data. This manual introduces a user to the Nuclear Data system and contains the information necessary for the user to acquire, display, and record data. The manual also guides the programmer in the hardware and software maintenance of the system

  4. ND6600 computer in fusion-energy research

    Energy Technology Data Exchange (ETDEWEB)

    Young, K.G.

    1982-12-01

    The ND6600, a computer-based multichannel analyzer with eight ADCs, is used to acquire x-ray data. This manual introduces a user to the Nuclear Data system and contains the information necessary for the user to acquire, display, and record data. The manual also guides the programmer in the hardware and software maintenance of the system.

  5. Minimizing energy consumption for wireless computers in Moby Dick

    NARCIS (Netherlands)

    Havinga, Paul J.M.; Smit, Gerardus Johannes Maria

    1997-01-01

    The Moby Dick project is a joint European project to develop and define the architecture of a new generation of mobile hand-held computers, called Pocket Companions. The Pocket Companion is a hand-held device that is resource-poor, i.e. small amount of memory, limited battery life, low processing

  6. National Energy Research Scientific Computing Center 2007 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23

    This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

  7. Review of the Fusion Theory and Computing Program. Fusion Energy Sciences Advisory Committee (FESAC)

    International Nuclear Information System (INIS)

    Antonsen, Thomas M.; Berry, Lee A.; Brown, Michael R.; Dahlburg, Jill P.; Davidson, Ronald C.; Greenwald, Martin; Hegna, Chris C.; McCurdy, William; Newman, David E.; Pellegrini, Claudio; Phillips, Cynthia K.; Post, Douglass E.; Rosenbluth, Marshall N.; Sheffield, John; Simonen, Thomas C.; Van Dam, James

    2001-01-01

    At the November 14-15, 2000, meeting of the Fusion Energy Sciences Advisory Committee, a Panel was set up to address questions about the Theory and Computing program, posed in a charge from the Office of Fusion Energy Sciences (see Appendix A). This area of theory and computing/simulations had been considered at the FESAC Knoxville meeting of 1999 and in the deliberations of the Integrated Program Planning Activity (IPPA) in 2000. A National Research Council committee provided a detailed review of the scientific quality of the fusion energy sciences program, including theory and computing, in 2000.

  8. Magnetic fusion energy and computers. The role of computing in magnetic fusion energy research and development (second edition)

    International Nuclear Information System (INIS)

    1983-01-01

    This report documents the structure and uses of the MFE Network and presents a compilation of future computing requirements. Its primary emphasis is on the role of supercomputers in fusion research. One of its key findings is that with the introduction of each successive class of supercomputer, qualitatively improved understanding of fusion processes has been gained. At the same time, even the current Class VI machines severely limit the attainable realism of computer models. Many important problems will require the introduction of Class VII or even larger machines before they can be successfully attacked

  9. Advanced computational simulations of water waves interacting with wave energy converters

    Science.gov (United States)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.
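    The Froude scaling mentioned in the last sentence relates model-scale WEC measurements to full scale: for a geometric scale factor λ, times scale as √λ, forces as λ³, and power as λ^3.5. A quick sketch with illustrative numbers (not the paper's data):

```python
import math

def froude_scale(scale, period_s=None, force_N=None, power_W=None):
    """Scale model measurements to full scale under Froude similarity.
    scale = L_full / L_model; time ~ sqrt(scale), force ~ scale^3, power ~ scale^3.5."""
    out = {}
    if period_s is not None:
        out["period_s"] = period_s * math.sqrt(scale)
    if force_N is not None:
        out["force_N"] = force_N * scale**3
    if power_W is not None:
        out["power_W"] = power_W * scale**3.5
    return out

# Hypothetical 1:25 model flap: 1.5 s wave period and 40 W mean absorbed power
full = froude_scale(25, period_s=1.5, power_W=40.0)
```

The same relations explain why validating against model experiments, as done above, is informative: if Froude scaling holds, the 1.5 s model wave corresponds to a realistic 7.5 s ocean wave at full scale.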

  10. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim ePalmer

    2015-10-01

    Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  11. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  12. Nonlinear generalization of special relativity at very high energies

    International Nuclear Information System (INIS)

    Winterberg, F.

    1984-01-01

    It is shown that the introduction of a fundamental length constant into the operator representation of the quantum mechanical commutation relations, as suggested by Bagge, leads to a nonlinear generalization of the Lorentz transformations. The theory requires the introduction of a substratum (ether), which can be identified with the zero-point vacuum energy. At very high energies a non-Lorentz-invariant behaviour of the cross sections between elementary particles is predicted. Using the Einstein clock synchronisation definition, the velocity of light is also constant and equal to c in the new theory, but the zero-point vacuum energy becomes finite, as are all other quantities which are divergent in Lorentz-invariant quantum field theories. In the limiting case where the length constant is set equal to zero, the zero-point vacuum energy diverges and special relativity is recovered. (orig.) [de

  13. Could dark energy be measured in the lab?

    International Nuclear Information System (INIS)

    Beck, Christian; Mackey, Michael C.

    2005-01-01

    The experimentally measured spectral density of current noise in Josephson junctions provides direct evidence for the existence of zero-point fluctuations. Assuming that the total vacuum energy associated with these fluctuations cannot exceed the presently measured dark energy of the universe, we predict an upper cutoff frequency of ν_c = (1.69 ± 0.05) × 10^12 Hz for the measured frequency spectrum of zero-point fluctuations in the Josephson junction. The largest frequencies that have been reached in the experiments are of the same order of magnitude as ν_c and provide a lower bound on the dark energy density of the universe. It is shown that suppressed zero-point fluctuations above a given cutoff frequency can lead to 1/f noise. We propose an experiment which may help to measure some of the properties of dark energy in the lab
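    The quoted cutoff follows from equating the integrated zero-point spectral density of the electromagnetic field to the observed dark-energy density; a sketch of that standard estimate (the numerical value taken for the dark-energy density is approximate):

```latex
% Zero-point energy density of the electromagnetic field up to a cutoff \nu_c:
\rho_{\mathrm{vac}}
  = \int_0^{\nu_c} \frac{h\nu}{2}\,\frac{8\pi \nu^2}{c^3}\, d\nu
  = \frac{\pi h}{c^3}\,\nu_c^{\,4}.

% Equating this to the observed dark-energy density
% \rho_\Lambda \approx 6.5 \times 10^{-10}\,\mathrm{J\,m^{-3}} gives
\nu_c = \left(\frac{\rho_\Lambda\, c^3}{\pi h}\right)^{1/4}
      \approx 1.7 \times 10^{12}\,\mathrm{Hz},
% consistent with the cutoff quoted in the abstract.
```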

  14. High Energy Physics Computer Networking: Report of the HEPNET Review Committee

    International Nuclear Information System (INIS)

    1988-06-01

    This paper discusses the computer networks available to high energy physics facilities for transmission of data. Topics covered in this paper are: Existing and planned networks and HEPNET requirements

  15. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  16. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xing Liu

    2015-01-01

    Full Text Available Mobile cloud computing (MCC combines cloud computing and mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs. In MCC, mobile users could not only improve the computational capability of MDs but also save operation consumption by offloading the mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of energy-efficient scheduling for wireless uplink in MCC. By introducing Lyapunov optimization, we first propose a scheduling algorithm that can dynamically choose channel to transmit data based on queue backlog and channel statistics. Then, we show that the proposed scheduling algorithm can make a tradeoff between queue backlog and energy consumption in a channel-aware MCC system. Simulation results show that the proposed scheduling algorithm can reduce the time average energy consumption for offloading compared to the existing algorithm.
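    The drift-plus-penalty recipe behind such Lyapunov scheduling can be sketched as follows: in each slot the scheduler picks the channel minimizing V·(energy) − Q·(rate), transmits only when that weighted cost is negative, and then updates the backlog Q. The channel rates, energies, and arrival process below are invented for illustration, not the paper's model.

```python
def drift_plus_penalty_schedule(slots, channels, arrival_bits, V=50.0):
    """Lyapunov drift-plus-penalty sketch: per slot, choose the channel minimizing
    V*energy - Q*rate and transmit only if that weighted cost is negative.
    `channels` maps name -> (bits served per slot, energy per slot); V trades
    energy against backlog."""
    Q, energy_total = 0.0, 0.0
    for _ in range(slots):
        Q += arrival_bits                      # new data queued this slot
        best = min(channels,
                   key=lambda ch: V * channels[ch][1] - Q * channels[ch][0])
        rate, energy = channels[best]
        if V * energy - Q * rate < 0:          # transmitting is worth the energy
            Q = max(Q - rate, 0.0)
            energy_total += energy
    return Q, energy_total

# Illustrative channels: (bits/slot, joules/slot)
channels = {"wifi": (800.0, 2.0), "cellular": (300.0, 5.0)}
backlog, used = drift_plus_penalty_schedule(1000, channels, arrival_bits=400.0)
```

Raising V makes the scheduler wait for larger backlogs (better channels per joule) at the cost of delay, which is the backlog/energy tradeoff the abstract describes.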

  17. Computing and Systems Applied in Support of Coordinated Energy, Environmental, and Climate Planning

    Science.gov (United States)

    This talk focuses on how Dr. Loughlin is applying Computing and Systems models, tools and methods to more fully understand the linkages among energy systems, environmental quality, and climate change. Dr. Loughlin will highlight recent and ongoing research activities, including: ...

  18. Computer calculations of activation energy for pyrolysis from thermogravimetric curves

    International Nuclear Information System (INIS)

    Hussain, R.

    1994-01-01

    A BASIC programme to determine energy of activation for the degradation of polymers has been described. The calculations are based on the results of thermogravimetric curves. This method is applicable for those polymers which produce volatile products upon thermal degradation. (author)
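    The kind of computation such a programme performs can be sketched with an Arrhenius fit: from mass-loss rates at several temperatures, ln(rate) plotted against 1/T is a straight line of slope −Ea/R. The data below are synthetic, generated from an assumed Ea, not taken from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def activation_energy(temps_K, rates):
    """Least-squares slope of ln(rate) vs 1/T gives -Ea/R (Arrhenius plot)."""
    x = [1.0 / T for T in temps_K]
    y = [math.log(r) for r in rates]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R  # J/mol

# Synthetic thermogravimetric rates from an assumed Ea = 120 kJ/mol
temps = [600.0, 650.0, 700.0, 750.0]
rates = [1e6 * math.exp(-120000.0 / (R * T)) for T in temps]
Ea = activation_energy(temps, rates)
```

Real thermogravimetric curves give mass versus temperature, so a degradation-rate model (e.g. Coats-Redfern) is applied first; the linear fit above is only the final step.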

  19. Energy Use and Power Levels in New Monitors and Personal Computers; TOPICAL

    International Nuclear Information System (INIS)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-01-01

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC)

  20. Ion range estimation by using dual energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Huenemohr, Nora; Greilich, Steffen [German Cancer Research Center (DKFZ), Heidelberg (Germany). Medical Physics in Radiation Oncology; Krauss, Bernhard [Siemens AG, Forchheim (Germany). Imaging and Therapy; Dinkel, Julien [German Cancer Research Center (DKFZ), Heidelberg (Germany). Radiology; Massachusetts General Hospital, Boston, MA (United States). Radiology; Gillmann, Clarissa [German Cancer Research Center (DKFZ), Heidelberg (Germany). Medical Physics in Radiation Oncology; University Hospital Heidelberg (Germany). Radiation Oncology; Ackermann, Benjamin [Heidelberg Ion-Beam Therapy Center (HIT), Heidelberg (Germany); Jaekel, Oliver [German Cancer Research Center (DKFZ), Heidelberg (Germany). Medical Physics in Radiation Oncology; Heidelberg Ion-Beam Therapy Center (HIT), Heidelberg (Germany); University Hospital Heidelberg (Germany). Radiation Oncology

    2013-07-01

    Inaccurate conversion of CT data to water-equivalent path length (WEPL) is one of the most important uncertainty sources in ion treatment planning. Dual energy CT (DECT) imaging might help to reduce CT number ambiguities with the additional information. In our study we scanned a series of materials (tissue substitutes, aluminum, PMMA, and other polymers) in the dual source scanner (Siemens Somatom Definition Flash). Based on the 80 kVp/140Sn kVp dual energy images, the electron densities Q_e and effective atomic numbers Z_eff were calculated. We introduced a new lookup table that translates the Q_e to the WEPL. The WEPL residuals from the calibration were significantly reduced for the investigated tissue surrogates compared to the empirical Hounsfield-look-up table (single energy CT imaging) from (-1.0 ± 1.8)% to (0.1 ± 0.7)% and for non-tissue equivalent PMMA from -7.8% to -1.0%. To assess the benefit of the new DECT calibration, we conducted a treatment planning study for three different idealized cases based on tissue surrogates and PMMA. The DECT calibration yielded a significantly higher target coverage in tissue surrogates and phantom material (i.e. PMMA cylinder, mean target coverage improved from 62% to 98%). To verify the DECT calibration for real tissue, ion ranges through a frozen pig head were measured and compared to predictions calculated by the standard single energy CT calibration and the novel DECT calibration. By using this method, an improvement of ion range estimation from -2.1% water-equivalent thickness deviation (single energy CT) to 0.3% (DECT) was achieved. If one excludes raypaths located on the edge of the sample, which are accompanied by high uncertainties, no significant difference could be observed. (orig.)
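The electron-density-to-WEPL translation the authors introduce is a one-dimensional lookup table. A minimal sketch of such a table is below; all calibration pairs are hypothetical placeholders, not values from the paper, and `wepl_factor` is an illustrative name:

```python
import numpy as np

# Hypothetical calibration pairs: electron density relative to water ->
# WEPL relative to geometric path length. Real tables are built from
# measured tissue substitutes and non-tissue materials such as PMMA.
rho_e = np.array([0.20, 0.50, 0.95, 1.00, 1.05, 1.60])
wepl  = np.array([0.21, 0.50, 0.96, 1.00, 1.04, 1.55])

def wepl_factor(rho):
    """Piecewise-linear lookup from DECT electron density to WEPL factor."""
    return float(np.interp(rho, rho_e, wepl))

# Water-equivalent thickness of a 10 cm slab with relative density 1.05:
wet_cm = 10.0 * wepl_factor(1.05)
```

A single-energy Hounsfield lookup table has the same shape; the claimed improvement comes from the input variable (electron density instead of a CT number) being less ambiguous across materials.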

  1. COMPUTATIONAL MODELS USED FOR MINIMIZING THE NEGATIVE IMPACT OF ENERGY ON THE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Oprea D.

    2012-04-01

    Full Text Available Optimizing energy systems is a problem that has been studied extensively for many years by scientists. This problem can be studied from different viewpoints and using different computer programs. The work characterizes one of the calculation methods used in Europe for modelling and optimizing power systems. The method is based on reducing the impact of the energy system on the environment. The computer program used and characterized in this article is GEMIS.

  2. Significant decimal digits for energy representation on short-word computers

    International Nuclear Information System (INIS)

    Sartori, E.

    1989-01-01

    The general belief that single precision floating point numbers always have at least seven significant decimal digits on short-word computers such as IBM machines is erroneous. Seven significant digits are, however, required for representing the energy variable in nuclear cross-section data sets containing sharp p-wave resonances at 0 K. It is suggested either that the energy variable be stored in double precision or that cross-section resonances be reconstructed to room temperature or higher on short-word computers
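The digit limit is easy to demonstrate. IEEE 754 single precision has a 24-bit significand, so values that differ only in the eighth (and sometimes the seventh) significant decimal digit collapse to the same representation; a short sketch:

```python
import numpy as np

# float32 has a 24-bit significand: roughly seven significant decimal digits.
# 16777216 = 2**24 is the first point where consecutive integers collapse:
assert np.float32(16777216) == np.float32(16777217)   # indistinguishable
assert np.float64(16777216) != np.float64(16777217)   # double keeps them apart

# Two energy grid points around a narrow resonance, differing in the
# eighth significant digit (values are illustrative):
e1, e2 = 1000000.01, 1000000.02   # eV
assert np.float32(e1) == np.float32(e2)   # the energy grid degenerates
assert np.float64(e1) != np.float64(e2)
```

When adjacent grid points of a reconstructed 0 K resonance become equal, interpolation on the energy grid breaks down, which is exactly the failure mode the abstract warns about.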

  3. Energy-Efficient Abundant-Data Computing: The N3XT 1,000X

    OpenAIRE

    Aly Mohamed M. Sabry; Gao Mingyu; Hills Gage; Lee Chi-Shuen; Pinter Greg; Shulaker Max M.; Wu Tony F.; Asheghi Mehdi; Bokor Jeff; Franchetti Franz; Goodson Kenneth E.; Kozyrakis Christos; Markov Igor; Olukotun Kunle; Pileggi Larry

    2015-01-01

    Next generation information technologies will process unprecedented amounts of loosely structured data that overwhelm existing computing systems. N3XT improves the energy efficiency of abundant-data applications 1,000-fold by using new logic and memory technologies, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory.

  4. Evaluation of four building energy analysis computer programs against ASHRAE standard 140-2007

    CSIR Research Space (South Africa)

    Szewczuk, S

    2014-08-01

    Full Text Available ) standard or code of practice. Agrément requested the CSIR to evaluate a range of building energy simulation computer programs. The standard against which these computer programs were to be evaluated was developed by the American Society of Heating...

  5. On the energy benefit of compute-and-forward on the hexagonal lattice

    NARCIS (Netherlands)

    Ren, Zhijie; Goseling, Jasper; Weber, Jos; Gastpar, Michael; Skoric, B.; Ignatenko, T.

    2014-01-01

    We study the energy benefit of applying compute-and-forward on a wireless hexagonal lattice network with multiple unicast sessions with a specific session placement. Two compute-and-forward based transmission schemes are proposed, which allow the relays to exploit both the broadcast and

  6. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
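The teaching idea, finding equilibrium by minimizing the total free energy subject to material balance, can be illustrated on the simplest possible case, an ideal A ⇌ B isomerization at fixed temperature and pressure. The standard free energies below are hypothetical and expressed in units of RT:

```python
import numpy as np

# Standard molar free energies in units of RT (hypothetical values).
g_A, g_B = 0.0, -1.0

# Material balance: n_A + n_B = 1 mol, so one composition variable remains.
nB = np.linspace(1e-6, 1.0 - 1e-6, 100001)
nA = 1.0 - nB

# Total Gibbs energy of an ideal mixture, including the mixing entropy term.
G = nA * g_A + nB * g_B + nA * np.log(nA) + nB * np.log(nB)

# The minimum of G locates the equilibrium composition; analytically,
# nB/nA = exp(-(g_B - g_A)) = e for these numbers.
nB_eq = nB[np.argmin(G)]
```

A general equilibrium solver replaces the one-dimensional grid with constrained minimization over all species amounts, but the governing principle is the same curve-bottom search shown here.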

  7. Computer simulation studies of high energy collision cascades

    International Nuclear Information System (INIS)

    Robinson, M.T.

    1991-07-01

    A modified binary collision approximation allowing the proper order of the collisions in time was used to study cascades in Cu and Au at primary kinetic energies up to 100 keV. Nonlinearities were approximated by letting already-stopped cascade atoms become targets in later collisions, using an improved method of locating potential targets to extend the calculations to energies much higher than heretofore. Besides the effect of the approximate nonlinearity, the effect of thermal disorder in the targets was examined. Target redisplacements reduce the damage in Cu by 3% at most, but in Au they reduce it by amounts up to 20% at 100 keV. Thermal disorder is also important: by disrupting crystal effects, it reduces the damage significantly. 11 refs., 4 figs

  8. Energy-efficient cloud computing application solutions and architectures

    OpenAIRE

    Salama, Abdallah lssa

    2012-01-01

    Environmental issues are receiving unprecedented attention from business and governments around the world. As concern for greenhouse, climate change and sustainability continue to grow; businesses are grappling with improving their environmental impacts while remaining profitable. Many businesses have discovered that Green IT initiatives and strategies can reform the organization, comply with laws and regulations, enhance the public appearance of the organization, save energy cost, and improv...

  9. Computing more proper covariances of energy dependent nuclear data

    International Nuclear Information System (INIS)

    Vanhanen, R.

    2016-01-01

    Highlights: • We present conditions for covariances of energy dependent nuclear data to be proper. • We provide methods to detect non-positive and inconsistent covariances in ENDF-6 format. • We propose methods to find nearby more proper covariances. • The methods can be used as a part of a quality assurance program. - Abstract: We present conditions for covariances of energy dependent nuclear data to be proper in the sense that the covariances are positive, i.e., their eigenvalues are non-negative, and consistent with respect to the sum rules of nuclear data. For the ENDF-6 format covariances we present methods to detect non-positive and inconsistent covariances. These methods would be useful as a part of a quality assurance program. We also propose methods that can be used to find nearby more proper energy dependent covariances. These methods can be used to remove unphysical components, while preserving most of the physical components. We consider several different senses in which the nearness can be measured. These methods could be useful if a re-evaluation of improper covariances is not feasible. Two practical examples are processed and analyzed. These demonstrate some of the properties of the methods. We also demonstrate that the ENDF-6 format covariances of linearly dependent nuclear data should usually be encoded with the derivation rules.
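One standard way to find a nearby positive covariance is eigenvalue clipping, which yields the Frobenius-norm-nearest positive semidefinite matrix. The paper considers several nearness measures; the function name and the example matrix below are illustrative, not taken from it:

```python
import numpy as np

def nearest_positive_covariance(cov):
    """Frobenius-nearest positive semidefinite matrix: symmetrize,
    then clip negative eigenvalues of the spectral decomposition to zero."""
    sym = 0.5 * (cov + cov.T)              # enforce exact symmetry
    w, v = np.linalg.eigh(sym)
    return (v * np.clip(w, 0.0, None)) @ v.T

# An improper "covariance" with a negative eigenvalue (illustrative numbers):
c = np.array([[1.0,  0.9,  0.9],
              [0.9,  1.0, -0.9],
              [0.9, -0.9,  1.0]])
c_fixed = nearest_positive_covariance(c)
```

Clipping removes the unphysical negative modes while leaving the positive spectral components untouched, which matches the stated goal of preserving most of the physical content; it does not by itself restore consistency with the sum rules, which needs the separate checks the paper describes.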

  10. Creating Very True Quantum Algorithms for Quantum Energy Based Computing

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc

    2018-04-01

    An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, for fast determination of a homogeneous linear function f(x) := s·x = s_1 x_1 + s_2 x_2 + ⋯ + s_N x_N is proposed. Here x = (x_1, …, x_N), x_j ∈ ℝ, and the coefficients s = (s_1, …, s_N), s_j ∈ ℕ. Given the interpolation values (f(1), f(2), …, f(N)) = y⃗, the unknown coefficients s = (s_1(y⃗), …, s_N(y⃗)) of the linear function shall be determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Our method is based on the generalization of the Bernstein-Vazirani algorithm to qudit systems. Next, by using M parallel quantum systems, M homogeneous linear functions are determined simultaneously. The speed of obtaining the set of M homogeneous linear functions is shown to outperform the classical case by a factor of N × M.
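For contrast, the classical baseline against which the factor-N speedup is measured can be sketched as N oracle queries, one per coefficient. This is a classical illustration only; the quantum (Bernstein-Vazirani-style) algorithm itself is not reproduced here, and the coefficient values are arbitrary:

```python
# Hidden coefficients of the homogeneous linear function f(x) = s . x
s_hidden = [3, 1, 4, 1, 5]

def oracle(x):
    """Black-box evaluation of f(x) = s_1 x_1 + ... + s_N x_N."""
    return sum(si * xi for si, xi in zip(s_hidden, x))

def recover_classically(f, n):
    # One query per standard basis vector e_j gives f(e_j) = s_j, so n
    # queries are needed classically -- the factor the quantum method removes.
    return [f([1 if i == j else 0 for i in range(n)]) for j in range(n)]

s_recovered = recover_classically(oracle, len(s_hidden))
```

Each query reveals exactly one coefficient, so no classical strategy with fewer than N deterministic queries can pin down all of s; the quantum algorithm extracts all N in a single oracle call.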

  11. Casimir energy of massless fermions in the Slab-bag

    International Nuclear Information System (INIS)

    Paola, R.D.M. de; Rodrigues, R.B.; Svaiter, N.F.

    1999-04-01

    The zero-point energy of a massless fermion field in the interior of two parallel plates in a D-dimensional space-time at zero temperature is calculated. In order to regularize the model, a mix of dimensional and zeta-function regularization procedures is used, and it is found that the regularized zero-point energy density is finite for any number of space-time dimensions. We present a general expression for the Casimir energy of the fermionic field in such a situation. (author)

  12. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  13. BigData and computing challenges in high energy and nuclear physics

    International Nuclear Information System (INIS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-01-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. This will evolve in the future when moving from LHC to HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R and D computing projects started recently in the National Research Center "Kurchatov Institute"

  14. BigData and computing challenges in high energy and nuclear physics

    Science.gov (United States)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. This will evolve in the future when moving from LHC to HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success and the inclusion of new super-computing facilities, cloud computing and volunteer computing for the future is a big challenge, which we are successfully mastering with a considerable contribution from many super-computing centres around the world, academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute"

  15. Polymer Nanocomposites for Wind Energy Applications: Perspectives and Computational Modeling

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Zhou, H.W.; Peng, R.D.

    2013-01-01

    Strength and reliability of wind blades produced from polymer composites are the important preconditions for the successful development of wind energy. One of the ways to increase the reliability and lifetime of polymer matrix composites is the nanoengineering of matrix or fiber/matrix interfaces … in these composites. The potential and results of nanoclay reinforcements for the improvement of the mechanical properties of polymer composites are investigated using continuum mechanics and micromechanics methods and an effective phase model. It is demonstrated that nanoreinforcement makes it possible to increase the stiffness…

  16. Comparison of Langevin dynamics and direct energy barrier computation

    International Nuclear Information System (INIS)

    Dittrich, Rok; Schrefl, Thomas; Thiaville, Andre; Miltat, Jacques; Tsiantos, Vassilios; Fidler, Josef

    2004-01-01

    Two complementary methods to study thermal effects in micromagnetics are compared. On short time scales Langevin dynamics gives insight into the thermally activated dynamics. For longer time scales the 'nudged elastic band' method is applied. The method calculates a highly probable thermal switching path between two local energy minima of a micromagnetic system. Comparing the predicted thermal transition rates between ground states in small soft magnetic elements up to a size of 90x90x4.5 nm³ shows good agreement between the methods
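The short-time-scale side of the comparison, Langevin dynamics, can be sketched in its simplest overdamped one-dimensional form on a double-well potential. This is a generic toy model, not the micromagnetic system of the paper; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # U(x) = (x^2 - 1)^2: two minima at x = +/-1 separated by a barrier at 0
    return 4.0 * x * (x * x - 1.0)

def overdamped_langevin(x0, kT, dt, steps):
    """Euler-Maruyama integration of dx = -U'(x) dt + sqrt(2 kT dt) dW."""
    x = x0
    for _ in range(steps):
        x += -grad_U(x) * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    return x

# At low temperature the trajectory dwells near one minimum for a long time;
# barrier crossings are rare events -- which is why a path method like the
# nudged elastic band is needed on long time scales.
x_end = overdamped_langevin(-1.0, kT=0.05, dt=1e-3, steps=20000)
```

The rarity of the stochastic crossings is exactly the sampling problem: Langevin dynamics resolves the fast fluctuations but must wait exponentially long for a switch, whereas the elastic-band method finds the switching path and barrier directly.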

  17. Factors Affecting Energy Barriers for Pyramidal Inversion in Amines and Phosphines: A Computational Chemistry Lab Exercise

    Science.gov (United States)

    Montgomery, Craig D.

    2013-01-01

    An undergraduate exercise in computational chemistry that investigates the energy barrier for pyramidal inversion of amines and phosphines is presented. Semiempirical calculations (PM3) of the ground-state and transition-state energies for NR¹R²R³ and PR¹R²R³ allow…

  18. Application of cadmium telluride detectors to high energy computed tomography

    International Nuclear Information System (INIS)

    Glasser, F.; Thomas, G.; Cuzin, M.; Verger, L.

    1991-01-01

    Fifteen years ago, cadmium telluride detectors were investigated in our laboratory as possible detectors for medical scanners [1]. Today most of these machines use high-pressure xenon gas multicell detectors; BGO or CdWO₄ scintillators are used for industrial computerized tomography. Xenon gas detectors are well suited for the detection of 100 keV X-rays and make it possible to build a 1000-cell homogeneous detector with a dynamic range of 3 decades. BGO and CdWO₄ scintillators, associated with photomultipliers or photodiodes, are used for higher energies (400 keV). They present a low afterglow and a dynamic range of 4 to 5 decades. Non-destructive testing of very absorbing objects (e.g., a 2 m diameter solid rocket motor) by X-ray tomography requires much higher energy X-rays (16 MeV) and doses up to 12000 rads/min at 1 meter. For this application cadmium telluride detectors operating as photoconductors are well suited. A prototype tomograph, able to scan 0.5 m diameter high-density objects, has been realized with 25 CdTe detectors (25x15x0.8 mm³). It produces good-quality 1024x1024 tomographic images

  19. Theoretical studies of potential energy surfaces and computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, R. [Argonne National Laboratory, IL (United States)

    1993-12-01

    This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

  20. Computer technology: its potential for industrial energy conservation. A technology applications manual

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-01-01

    Today, computer technology is within the reach of practically any industrial corporation regardless of product size. This manual highlights a few of the many applications of computers in the process industry and provides the technical reader with a basic understanding of computer technology, terminology, and the interactions among the various elements of a process computer system. The manual has been organized to separate process applications and economics from computer technology. Chapter 1 introduces the present status of process computer technology and describes the four major applications - monitoring, analysis, control, and optimization. The basic components of a process computer system also are defined. Energy-saving applications in the four major categories defined in Chapter 1 are discussed in Chapter 2. The economics of process computer systems is the topic of Chapter 3, where the historical trend of process computer system costs is presented. Evaluating a process for the possible implementation of a computer system requires a basic understanding of computer technology as well as familiarity with the potential applications; Chapter 4 provides enough technical information for an evaluation. Computer and associated peripheral costs and the logical sequence of steps in the development of a microprocessor-based process control system are covered in Chapter 5.

  1. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    Energy Technology Data Exchange (ETDEWEB)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik [School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.
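The analog summing-and-thresholding primitive that the spin-torque switch is said to mimic is the classic artificial-neuron operation; a schematic model (not a device simulation, all numbers arbitrary) is just:

```python
def threshold_neuron(inputs, weights, threshold=0.0):
    """Weighted analog sum followed by a hard threshold -- the operation a
    current-mode spin-torque switch performs in the magnetic domain."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Example: weighted sum = 0.5*2.0 + (-1.0)*1.0 + 0.25*4.0 = 1.0 >= 0
out = threshold_neuron([0.5, -1.0, 0.25], [2.0, 1.0, 4.0])
```

The energy argument in the abstract is that a magneto-metallic switch performs this sum-and-compare in the current domain at millivolt swings, rather than with a power-hungry CMOS analog comparator.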

  2. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  3. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high performance stand alone computer system based on the Motorola 68000 microprocessor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  4. Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.

    Science.gov (United States)

    Williams, Daniel R; Tang, Yinshan

    2013-05-07

    Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites were tested and modeled, some cloud services were found to consume more energy than the traditional form. The developed model in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end user device. Comparable products from each suite were selected and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud based Outlook (8%) and Excel (17%) was lower than their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package into the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
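The three-stage accounting in the model reduces to a simple sum per task. The sketch below uses hypothetical placeholder numbers throughout (the real data-center figures were confidential), so only the structure, not the values, reflects the study:

```python
def task_energy_wh(dc_wh, mb_transferred, network_wh_per_mb, device_w, seconds):
    """Energy of one computing task summed over the three measured stages:
    data center, network transmission, and end-user device."""
    return dc_wh + mb_transferred * network_wh_per_mb + device_w * seconds / 3600.0

# Hypothetical comparison of a cloud task against its standalone equivalent:
cloud = task_energy_wh(dc_wh=0.5, mb_transferred=20.0,
                       network_wh_per_mb=0.05, device_w=30.0, seconds=600.0)
standalone = task_energy_wh(dc_wh=0.0, mb_transferred=0.0,
                            network_wh_per_mb=0.0, device_w=45.0, seconds=600.0)
```

Whether the cloud task wins depends on whether the data-center and network terms it adds are smaller than the device-power reduction it buys, which is exactly why the study found different signs for Word than for Excel and Outlook.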

  5. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), have been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer Aided Science (CASC) to promote the Atomic Energy Research and Investigation (AERI). This article reviews achievements in R and D of grid computing technology obtained so far. (T. Tanaka)

  6. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  7. Connecting free energy surfaces in implicit and explicit solvent: an efficient method to compute conformational and solvation free energies.

    Science.gov (United States)

    Deng, Nanjie; Zhang, Bin W; Levy, Ronald M

    2015-06-09

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
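The cycle arithmetic itself is simple: the explicit-solvent conformational free energy follows from the cheap implicit-solvent leg plus the two solvation-switching legs that close the cycle. A sketch with hypothetical leg values (kcal/mol; the sign convention follows the cycle direction written in the comments, not any table in the paper):

```python
# Hypothetical free energy legs of the implicit/explicit thermodynamic cycle.
dG_implicit = 2.4     # A -> B, sampled cheaply in implicit solvent (kcal/mol)
dG_couple_A = -10.1   # switch basin A from implicit to explicit solvation
dG_couple_B = -9.0    # switch basin B from implicit to explicit solvation

# Traverse A(explicit) -> A(implicit) -> B(implicit) -> B(explicit):
# the barrier is never crossed in explicit solvent, only in implicit.
dG_explicit = -dG_couple_A + dG_implicit + dG_couple_B
```

Because free energy is a state function, the sum around the cycle is path independent; the computational saving comes entirely from confining the slow barrier-crossing step to the implicit-solvent leg.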

  8. Cloud computing platform for real-time measurement and verification of energy performance

    International Nuclear Information System (INIS)

    Ke, Ming-Tsun; Yeh, Chia-Hung; Su, Cheng-Jie

    2017-01-01

Highlights: • Application of the PSO algorithm can improve the accuracy of the baseline model. • The M&V cloud platform automatically calculates energy performance. • The M&V cloud platform can be applied to all energy conservation measures. • Real-time operational performance can be monitored through the proposed platform. • The M&V cloud platform facilitates the development of EE programs and ESCO industries. - Abstract: Nations worldwide are vigorously promoting policies to improve energy efficiency. The use of measurement and verification (M&V) procedures to quantify energy performance is an essential topic in this field. Currently, energy performance M&V is accomplished via a combination of short-term on-site measurements and engineering calculations. This requires extensive amounts of time and labor and can result in a discrepancy between actual energy savings and calculated results. In addition, because the M&V period typically lasts several months or up to a year, the failure to immediately detect abnormal energy performance not only reduces energy savings but also prevents timely corrections and misses the best opportunity to adjust or repair equipment and systems. In this study, a cloud computing platform for the real-time M&V of energy performance is developed. On this platform, particle swarm optimization and multivariate regression analysis are used to construct accurate baseline models. Instantaneous and automatic calculations of the energy performance, and access to long-term, cumulative information about the energy performance, are provided via a feature that allows direct uploads of the energy consumption data. Finally, the feasibility of this real-time M&V cloud platform is tested in a case study involving improvements to a cold storage system in a hypermarket. The cloud computing platform for real-time energy performance M&V is applicable to any industry and energy conservation measure. With the M&V cloud platform, real
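The baseline-model idea behind regression-based M&V can be illustrated with ordinary least squares standing in for the paper's PSO-optimised multivariate regression. All data below are invented, and the single-variable model is a deliberate simplification.

```python
# Minimal sketch of regression-based M&V: fit a baseline model on
# pre-retrofit data, then report avoided energy use on post-retrofit data.
# Data and the one-variable model are hypothetical; the paper uses
# particle swarm optimisation plus multivariate regression.
import numpy as np

# Pre-retrofit period: daily outdoor temperature (degC) vs. energy use (kWh)
temp_pre = np.array([20.0, 24.0, 28.0, 32.0, 36.0])
kwh_pre  = np.array([310., 390., 470., 555., 640.])

# Ordinary least squares baseline: kWh = a*T + b
a, b = np.polyfit(temp_pre, kwh_pre, 1)

# Post-retrofit period: measured consumption under known weather
temp_post = np.array([22.0, 30.0, 34.0])
kwh_post  = np.array([300., 420., 480.])

baseline = a * temp_post + b            # what the old system would have used
savings  = float(np.sum(baseline - kwh_post))
print(f"avoided energy use: {savings:.0f} kWh")
```

A real M&V baseline would regress on several drivers (occupancy, humidity, production volume) and be tuned against a goodness-of-fit criterion, which is where the PSO step enters.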

  9. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    Science.gov (United States)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable power sources cause frequent power failures, so the data being processed need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows a reduction in energy of a few orders of magnitude in comparison with a volatile processor with SRAM.

  10. Building Energy Assessment and Computer Simulation Applied to Social Housing in Spain

    Directory of Open Access Journals (Sweden)

    Juan Aranda

    2018-01-01

Full Text Available The actual energy consumption and simulated energy performance of a building usually differ. This gap widens in social housing, owing to the characteristics of these buildings and the consumption patterns of economically vulnerable households affected by energy poverty. The aim of this work is to characterise the energy poverty of households representative of those residing in social housing, specifically in blocks of apartments in Southern Europe. The main variables that affect energy consumption and costs are analysed, and the models developed for software energy-performance simulations (which are applied to predict energy consumption in social housing) are validated against actual energy-consumption values. The results demonstrate that this type of household usually lives in surroundings at a temperature below the average thermal comfort level. We have taken into account that assuming a standard thermal comfort level may lead to significant differences between computer-aided building energy simulation and actual consumption data (which are 40–140% lower than simulated consumption). This fact is of integral importance, as we use computer simulation to predict building energy performance in social housing.

  11. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing makes it possible to run simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
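As a toy illustration of the Monte Carlo estimation discussed above: an integral is estimated by uniform random sampling, with the statistical uncertainty shrinking as 1/sqrt(N). This is purely illustrative, not an HEP event generator.

```python
# Monte Carlo integration in its simplest form: average f over uniform
# samples and multiply by the interval width.
import random

random.seed(1)  # fixed seed for a reproducible estimate

def mc_integral(f, a, b, n):
    """Estimate the integral of f over [a, b] from n uniform samples."""
    width = b - a
    total = sum(f(a + width * random.random()) for _ in range(n))
    return width * total / n

# The integral of x^2 on [0, 1] is exactly 1/3
est = mc_integral(lambda x: x * x, 0.0, 1.0, 100_000)
print(round(est, 3))  # close to 0.333
```

The same average-over-samples structure underlies the far more elaborate HEP simulations, where f becomes a differential cross-section and the sampling is importance-weighted.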

  12. Computer simulation of high-energy recoils in FCC metals: cascade shapes and sizes

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1981-01-01

    Displacement cascades in copper generated by primary knock-on atoms with energies from 1 keV to 500 keV were produced with the computer code MARLOWE. The sizes and other features of the point defect distributions were measured as a function of energy. In the energy range from 30 keV to 50 keV there is a transition from compact single damage regions to chains of generally closely-spaced, but distinct multiple damage regions. The average spacing between multiple damage regions remains constant with energy. Only a small fraction of the recoils from fusion neutrons is expected to produce widely separated subcascades

  13. Feasibility of dual-energy computed tomography in radiation therapy planning

    Science.gov (United States)

    Sheen, Heesoon; Shin, Han-Back; Cho, Sungkoo; Cho, Junsang; Han, Youngyih

    2017-12-01

In this study, the noise level, effective atomic number (Zeff), accuracy of the computed tomography (CT) number, and the CT number to relative electron density (RED) conversion curve were estimated for virtual monochromatic and polychromatic energies. These values were compared with theoretically predicted values to investigate the feasibility of using dual-energy CT in routine radiation therapy planning. The accuracies of the parameters were within the range of acceptability. These results can serve as a stepping stone toward the routine use of dual-energy CT in radiotherapy planning.
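A CT-number-to-RED conversion curve of the kind estimated in the study is typically a piecewise linear calibration, with one slope below water (negative HU) and a shallower one above. The sketch below uses hypothetical calibration constants, not the paper's measured data.

```python
# Sketch of a two-segment CT number (HU) to relative electron density
# (RED, water = 1.0) conversion curve. Calibration points are invented.

def hu_to_red(hu):
    """Map a CT number (HU) to relative electron density."""
    if hu <= 0:
        # air (-1000 HU, RED ~0) up to water (0 HU, RED 1.0)
        return max(0.0, 1.0 + hu / 1000.0)
    # water up to dense bone: shallower slope above water
    return 1.0 + 0.6 * hu / 1000.0

for hu in (-1000, 0, 500, 1500):
    print(hu, round(hu_to_red(hu), 2))
```

In treatment planning this lookup is applied voxel-by-voxel so that the dose engine sees electron densities rather than raw HU values; dual-energy CT aims to make the curve less dependent on the scanned material's composition.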

  14. Energy conservation in ICT-businesses. Green computing in the USA; Energiereductie topprioriteit ICT-bedrijven. Green computing is hot in de USA

    Energy Technology Data Exchange (ETDEWEB)

    Hulsebos, M.

    2007-09-15

A brief overview of the initiatives of ICT businesses in the USA to save energy, also known as 'green computing'.

  15. Estimation of numerical uncertainty in computational fluid dynamics simulations of a passively controlled wave energy converter

    DEFF Research Database (Denmark)

    Wang, Weizhi; Wu, Minghao; Palm, Johannes

    2018-01-01

The wave loads and the resulting motions of floating wave energy converters are traditionally computed using linear radiation–diffraction methods. Yet for certain cases, such as survival conditions, phase control, and wave energy converters operating in the resonance region, more complete models are needed, and computational fluid dynamics simulations have largely been overlooked in the wave energy sector. In this article, we apply formal verification and validation techniques to computational fluid dynamics simulations of a passively controlled point absorber. The phase control causes the motion response to be highly nonlinear even for almost linear incident waves. First, we show that the computational fluid dynamics simulations have acceptable agreement to experimental data. We then present a verification and validation study focusing on the solution verification covering spatial and temporal discretization, and iterative and domain errors.

  16. Computer based workstation for development of software for high energy physics experiments

    International Nuclear Information System (INIS)

    Ivanchenko, I.M.; Sedykh, Yu.V.

    1987-01-01

The methodological principles and results of a successful attempt to create, on the basis of an IBM PC/AT personal computer, effective means for the development of software for high energy physics experiments are analysed. The results obtained make it possible to combine the best properties and the positive experience accumulated on existing time-sharing systems with the high quality of data representation, reliability and convenience of personal computer applications.

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  18. Free energy minimization to predict RNA secondary structures and computational RNA design.

    Science.gov (United States)

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
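The dynamic-programming approach to folding described above can be illustrated with a minimal sketch. For brevity it maximises base pairs (the Nussinov recursion) rather than minimising a full empirical energy model, but the recursive structure over subsequences is the same; the sequence is an arbitrary example.

```python
# Nussinov-style dynamic programming: dp[i][j] holds the maximum number
# of base pairs in seq[i..j], built up from shorter subsequences.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq, min_loop=3):
    """Maximum base pairs with at least min_loop unpaired bases per hairpin."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case 1: j unpaired
            for k in range(i, j - min_loop):     # case 2: j pairs with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_base_pairs("GGGAAAUCC"))  # -> 3
```

Energy-minimising folders such as those built on the Turner parameters replace the "+1 per pair" score with stacking, loop, and bulge energies, but fill the same triangular table.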

  19. Operating Wireless Sensor Nodes without Energy Storage: Experimental Results with Transient Computing

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2016-12-01

Full Text Available Energy harvesting is increasingly used for powering wireless sensor network nodes. Recently, it has been suggested to combine it with the concept of transient computing, whereby the wireless sensor nodes operate without energy storage capabilities. This new combined approach brings benefits, for instance ultra-low power nodes and reduced maintenance, but also raises new challenges, foremost dealing with nodes that may be left without power for various time periods. Although transient computing has been demonstrated on microcontrollers, reports on experiments with wireless sensor nodes are still scarce in the literature. In this paper, we describe our experiments with solar, thermal, and RF energy harvesting sources that are used to power sensor nodes (including wireless ones) without energy storage, but with transient computing capabilities. The results show that the selected solar and thermal energy sources can operate both the wired and wireless nodes without energy storage, whereas in our specific implementation, the developed RF energy source can only be used for the selected nodes without wireless connectivity.

  20. Intelligent battery energy management and control for vehicle-to-grid via cloud computing network

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Javadi, Bahman; Goscinski, Andrzej; Stojcevski, Alex; Bab-Hadiashar, Alireza

    2013-01-01

Highlights: • The intelligent battery energy management substantially reduces the interactions of PEVs with parking lots. • The intelligent battery energy management improves energy efficiency. • The intelligent battery energy management predicts the road load demand for vehicles. - Abstract: Plug-in Electric Vehicles (PEVs) provide new opportunities to reduce fuel consumption and exhaust emissions. PEVs need to draw and store energy from an electrical grid to supply propulsive energy for the vehicle. As a result, it is important to know when PEV batteries are available for charging and discharging. Furthermore, battery energy management and control are imperative for PEVs, as the vehicle operation and even the safety of passengers depend on the battery system. Thus, scheduling grid power with parking lots is needed for efficient charging and discharging of PEV batteries. This paper proposes a new intelligent battery energy management and control scheduling service for charging that utilizes cloud computing networks. The proposed intelligent vehicle-to-grid scheduling service offers the computational scalability required to make the decisions necessary to allow PEV battery energy management systems to operate efficiently when the number of PEVs and charging devices is large. Experimental analyses of the proposed scheduling service, as compared to a traditional scheduling service, are conducted through simulations. The results show that the proposed intelligent battery energy management scheduling service substantially reduces the required number of interactions of PEVs with parking lots and the grid, as well as predicting the load demand in advance with regard to their limitations. They also show that the intelligent scheduling service using a cloud computing network is more efficient than the traditional scheduling service network for battery energy management and control.

  1. The application of AFS in the high energy physics computing system

    International Nuclear Information System (INIS)

    Xu Dong; Yan Xiaofei; Chen Yaodong; Chen Gang; Yu Chuansong

    2010-01-01

With the development of high energy physics, physics experiments are producing large amounts of data. The workload of data analysis is very large, and the analysis must be carried out by many scientists together. The computing system must therefore provide more secure user management and a higher level of data-sharing ability. The article introduces a solution based on AFS for the high energy physics computing system, which not only makes user management safer, but also makes data sharing easier. (authors)

  2. Cloud computing for energy management in smart grid - an application survey

    International Nuclear Information System (INIS)

    Naveen, P; Ing, Wong Kiing; Danquah, Michael Kobina; Sidhu, Amandeep S; Abu-Siada, Ahmed

    2016-01-01

The smart grid is an emerging energy system in which information technology, tools, and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud based economic power dispatch for the smart grid. (paper)

  3. TRANGE: computer code to calculate the energy beam degradation in target stack

    International Nuclear Information System (INIS)

    Bellido, Luis F.

    1995-07-01

A computer code to calculate the projectile energy degradation along a target stack was developed for an IBM or compatible personal microcomputer. A comparison of protons and deuterons bombarding uranium and aluminium targets was made. The results showed that the data obtained with TRANGE were in agreement with other computer codes such as TRIM and EDP, and with the Williamson and Janni range and stopping power tables. TRANGE can be used for any charged particle ion, for energies from 1 to 100 MeV, in metal foils and solid compound targets. (author). 8 refs., 2 tabs
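The kind of calculation a code like TRANGE performs can be sketched by stepping a projectile's energy down through a stack of layers with a stopping-power function. The 1/E stopping-power form, the constants, and the layer stack below are hypothetical stand-ins for tabulated data (e.g. Williamson/Janni tables), chosen only to show the integration loop.

```python
# Illustrative energy degradation through a target stack by simple
# forward integration of dE/dx. All numbers are invented.

def degrade(e_mev, thickness_um, stopping, steps=1000):
    """Energy (MeV) remaining after one layer of given thickness."""
    dx = thickness_um / steps
    for _ in range(steps):
        e_mev -= stopping(e_mev) * dx
        if e_mev <= 0:
            return 0.0          # particle stopped inside the layer
    return e_mev

# Hypothetical stopping power in MeV/um, falling with energy roughly as 1/E
stop = lambda e: 0.002 * (10.0 / max(e, 0.1))

energy = 30.0  # illustrative 30 MeV projectile
for name, t_um in [("Al window", 500.0), ("U target", 200.0), ("Al backing", 500.0)]:
    energy = degrade(energy, t_um, stop)
    print(f"after {name}: {energy:.2f} MeV")
```

With this 1/E form the loop can be checked analytically (E^2 decreases linearly with depth), which is a useful sanity test before swapping in real stopping-power tables.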

  4. Computational Modelling of Materials for Wind Turbine Blades: Selected DTU Wind Energy Activities.

    Science.gov (United States)

    Mikkelsen, Lars Pilgaard; Mishnaevsky, Leon

    2017-11-08

    Computational and analytical studies of degradation of wind turbine blade materials at the macro-, micro-, and nanoscale carried out by the modelling team of the Section Composites and Materials Mechanics, Department of Wind Energy, DTU, are reviewed. Examples of the analysis of the microstructural effects on the strength and fatigue life of composites are shown. Computational studies of degradation mechanisms of wind blade composites under tensile and compressive loading are presented. The effect of hybrid and nanoengineered structures on the performance of the composite was studied in computational experiments as well.

  5. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware and the desire of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semiconductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  6. Report of the Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel

    International Nuclear Information System (INIS)

    1984-09-01

    The Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel (HEPAP) was formed in July 1984 to make recommendations concerning the need for state-of-the-art computing for theoretical studies. The specific Charge to the Subpanel is attached as Appendix A, and the full membership is listed in Appendix B. For the purposes of this study, theoretical computing was interpreted as encompassing both investigations in the theory of elementary particles and computation-intensive aspects of accelerator theory and design. Many problems in both areas are suited to realize the advantages of vectorized processing. The body of the Subpanel Report is organized as follows. The Introduction, Section I, explains some of the goals of computational physics as it applies to elementary particle theory and accelerator design. Section II reviews the availability of mainframe supercomputers to researchers in the United States, in Western Europe, and in Japan. Other promising approaches to large-scale computing are summarized in Section III. Section IV details the current computing needs for problems in high energy theory, and for beam dynamics studies. The Subpanel Recommendations appear in Section V. The Appendices attached to this Report give the Charge to the Subpanel, the Subpanel membership, and some background information on the financial implications of establishing a supercomputer center

  7. Optimizing the Number of Cooperating Terminals for Energy Aware Task Computing in Wireless Networks

    DEFF Research Database (Denmark)

    Olsen, Anders Brødløs; Fitzek, Frank H. P.; Koch, Peter

    2005-01-01

It is generally accepted that energy consumption is a significant design constraint for mobile handheld systems, so methods that optimize energy consumption and make better use of the restricted battery resources are clearly motivated. A novel concept of distributed task computing (D2VS) has previously been proposed, based on the idea of selectively distributing tasks among terminals. In this paper the optimal number of terminals for cooperative task computing in a wireless network is investigated. The paper presents an energy model for the proposed scheme, taking into account the energy consumption of the terminals with respect to their workload and the overhead of distributing tasks among terminals. The paper shows that the number of cooperating terminals is in general limited to a few, though it varies with the various system parameters.
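The qualitative finding that only a few cooperating terminals pay off can be reproduced with a toy energy model: per-terminal computation energy falls as the work is split k ways, while the distribution overhead grows with k. The constants and the model form below are invented for illustration and are not taken from the paper.

```python
# Hypothetical energy model for cooperative task computing: splitting a
# task across k terminals divides the compute energy but adds a
# per-terminal distribution overhead. Constants are illustrative only.

def total_energy(k, work_j=10.0, overhead_j=0.8):
    """Total energy (joules) for one task split across k terminals."""
    return work_j / k + overhead_j * (k - 1)

best_k = min(range(1, 11), key=total_energy)
print(best_k)  # -> 4 with these illustrative constants
```

Any model with a decreasing compute term and an increasing coordination term produces the same shape: a shallow minimum at a small k, shifting with the work-to-overhead ratio.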

  8. Vacuum energy of the electromagnetic field in a rotating system

    International Nuclear Information System (INIS)

    Hacyan, S.; Sarmiento, A.

    1986-01-01

    The vacuum energy of the electromagnetic field is calculated for a uniformly rotating observer. The spectrum of vacuum fluctuations is composed of the zero-point energy with a modified density of states and a contribution due to the rotation which is not thermal. (orig.)

  9. Trends, visions and reality. Cloud computing in the energy industry; Trends, Visionen und Wirklichkeit. Cloud Computing in der Energiewirtschaft

    Energy Technology Data Exchange (ETDEWEB)

    Reuther, Achim [Energy Solution Center (Ensoc) e.V., Karlsruhe (Germany); Maurer, Marion; Pohling, Matthias [Bridging IT GmbH, Mannheim (Germany)

    2011-08-22

The topic of cloud computing is not just a temporary hype in the information technology market, but a true paradigm shift in the supply and use of information technology services. A lasting change in the information technology of the energy sector is expected. The authors of the contribution under consideration present current cloud research projects with relevance to the energy industry. Some important criteria are presented that should be considered in the selection and use of cloud services. Anything from the selective use of cloud services up to the outsourcing of an electric utility's entire business processes into the cloud may provide added value. Both current approaches and research projects are suitable for the optimization of processes and resources. The numerous possibilities have to be matched to a company's own general conditions.

  10. Assessing Power Monitoring Approaches for Energy and Power Analysis of Computers

    OpenAIRE

    El Mehdi Diouria, Mohammed; Dolz Zaragozá, Manuel Francisco; Glückc, Olivier; Lefèvre, Laurent; Alonso, Pedro; Catalán Pallarés, Sandra; Mayo, Rafael; Quintana Ortí, Enrique S.

    2014-01-01

Large-scale distributed systems (e.g., datacenters, HPC systems, clouds, large-scale networks, etc.) consume, and will continue to consume, enormous amounts of energy. Therefore, accurately monitoring the power dissipation and energy consumption of these systems is increasingly unavoidable. The main novelty of this contribution is the analysis and evaluation of different external and internal power monitoring devices, tested using two different computing systems, a server and a desktop machine. Furthermore, we prov...

  11. Neural chips, neural computers and application in high and superhigh energy physics experiments

    International Nuclear Information System (INIS)

    Nikityuk, N.M.; )

    2001-01-01

The architectural peculiarities and characteristics of a series of neural chips and neural computers used in scientific instruments are considered. Trends in their development and their use in high energy and superhigh energy physics experiments are described. Comparative data characterizing the efficient use of neural chips for useful event selection, the classification of elementary particles, the reconstruction of charged-particle tracks, and the search for hypothetical Higgs particles are given. The characteristics of native neural chips and accelerated neural boards are also considered

  12. Computational screening of new inorganic materials for highly efficient solar energy conversion

    DEFF Research Database (Denmark)

    Kuhar, Korina

    2017-01-01

Despite the vast amounts of energy at our disposal, we are not able to harvest solar energy efficiently. Currently, there are a few ways of converting solar power into usable energy, such as photovoltaics (PV) or photoelectrochemical generation of fuels (PC). PV processes in solar cells convert solar energy into electricity, and PC uses harvested energy to conduct chemical reactions, such as splitting water into oxygen and, more importantly, hydrogen, also known as the fuel of the future. Further progress in both the PV and PC fields is mostly limited by the flaws of the materials that we have access to. In this work a high-throughput computational search for suitable absorbers for PV and PC applications is presented. A set of descriptors has been developed, such that each descriptor targets an important property or issue of a good solar energy conversion material. The screening study ...

  13. A new computer code for quantitative analysis of low-energy ion scattering data

    NARCIS (Netherlands)

    Dorenbos, G; Breeman, M; Boerma, D.O

    We have developed a computer program for the full analysis of low-energy ion scattering (LEIS) data, i.e. an analysis that is equivalent to the full calculation of the three-dimensional trajectories of beam particles through a number of layers in the solid, and ending in the detector. A dedicated

  14. The use of symbolic computation in radiative, energy, and neutron transport calculations. Final report

    International Nuclear Information System (INIS)

    Frankel, J.I.

    1997-01-01

This investigation used symbolic manipulation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular, integral and integro-differential equations which appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules.

  15. On convergence generation in computing the electro-magnetic Casimir force

    International Nuclear Information System (INIS)

    Schuller, F.

    2008-01-01

    We tackle the very fundamental problem of zero-point energy divergence in the context of the Casimir effect. We calculate the Casimir force due to field fluctuations by using standard cavity radiation modes. The validity of convergence generation by means of an exponential energy cut-off factor is discussed in detail. (orig.)

  16. Proceeding of 29th domestic symposium on computational science and nuclear energy in the 21st century

    International Nuclear Information System (INIS)

    2001-10-01

As the 29th domestic symposium of the Atomic Energy Research Committee of the Japan Welding Engineering Society, a symposium titled 'Computational science and nuclear energy in the 21st century' was held. A keynote speech was delivered, titled 'Nuclear power plant safety secured by computational science in the 21st century'. Three speakers gave lectures titled 'Materials design and computational science', 'Development of advanced reactors in the 21st century' and 'Application of computational science to operation and maintenance management of plants'. The lectures were followed by a panel discussion titled 'Computational science and nuclear energy in the 21st century'. (T. Tanaka)

  17. Computer-controlled system for plasma ion energy auto-analyzer

    International Nuclear Information System (INIS)

    Wu Xianqiu; Chen Junfang; Jiang Zhenmei; Zhong Qinghua; Xiong Yuying; Wu Kaihua

    2003-01-01

A computer-controlled system for a plasma ion energy auto-analyzer was technically studied for rapid and online measurement of plasma ion energy distributions. The system intelligently controls all the equipment via an RS-232 port, a printer port and a home-built circuit. The software, designed in the LabVIEW G language, automatically fulfils all of the tasks such as system initialization, adjustment of the scanning voltage, measurement of weak currents, data processing, and graphic export. Using the system, only a few minutes are needed to acquire the whole ion energy distribution, rapidly providing important parameters for plasma process techniques based on semiconductor devices and microelectronics.

  18. The use of symbolic computation in radiative, energy, and neutron transport calculations

    Science.gov (United States)

    Frankel, J. I.

    This investigation uses symbolic computation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular, integral and integro-differential equations which appear in radiative and combined mode energy transport. This technical report summarizes the research conducted during the first nine months of the present investigation. The use of Chebyshev polynomials augmented with symbolic computation has clearly been demonstrated in problems involving radiative (or neutron) transport, and mixed-mode energy transport. Theoretical issues related to convergence, errors, and accuracy have also been pursued. Three manuscripts have resulted from the funded research. These manuscripts have been submitted to archival journals. At the present time, an investigation involving a conductive and radiative medium is underway. The mathematical formulation leads to a system of nonlinear, weakly-singular integral equations involving the unknown temperature and various Legendre moments of the radiative intensity in a participating medium. Some preliminary results are presented illustrating the direction of the proposed research.

  19. MeReg: Managing Energy-SLA Tradeoff for Green Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rahul Yadav

    2017-01-01

    Full Text Available Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly increases electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during periods of low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms are required for VM consolidation whenever host underload or overload is detected. After resources have been allocated to all VMs, underloaded hosts should be switched to an energy-saving mode in order to minimize power consumption. To address this issue, we proposed an adaptive heuristic energy-aware algorithm, which creates an upper CPU utilization threshold from recent CPU utilization history to detect overloaded hosts, together with dynamic VM selection algorithms to consolidate VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize quality of service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
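The abstract describes an adaptive upper CPU-utilization threshold derived from recent utilization history. A minimal sketch of that idea, assuming a mean-minus-variability rule (the paper's actual statistic may differ, e.g. a robust estimator such as MAD):

```python
from statistics import stdev

def upper_threshold(history, safety=1.0):
    """Hypothetical adaptive threshold: back off from full utilization
    by a safety multiple of the recent variability. More volatile
    workloads get a lower (more conservative) threshold."""
    return min(1.0, 1.0 - safety * stdev(history))

def is_overloaded(history, current_util, safety=1.0):
    """Flag a host as overloaded when its current CPU utilization
    exceeds the adaptive threshold computed from recent history."""
    return current_util > upper_threshold(history, safety)
```

Hosts flagged this way become sources for VM migration; hosts well below the threshold become candidates for consolidation and switch-off.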

  20. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.

  1. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.
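The DVFS strategy highlighted above trades clock frequency and supply voltage against power. A sketch of the standard idealized CMOS model it builds on (parameter values here are illustrative, not from the paper):

```python
def dynamic_power(capacitance, voltage, freq_hz):
    """Idealized CMOS dynamic power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * freq_hz

def task_energy(cycles, capacitance, voltage, freq_hz):
    """Energy to run a fixed number of CPU cycles at (V, f):
    E = P * t with t = cycles / f, hence E = C * V^2 * cycles."""
    runtime = cycles / freq_hz
    return dynamic_power(capacitance, voltage, freq_hz) * runtime
```

Note that for a fixed cycle count, scaling f alone leaves dynamic energy unchanged (E = C·V²·cycles); the savings come from lowering V together with f, which is why DVFS governors adjust both.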

  2. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

    Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.

  3. Simple prescription for computing the interparticle potential energy for D-dimensional gravity systems

    International Nuclear Information System (INIS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F E; Herdy, Wallace

    2015-01-01

    A straightforward prescription for computing the D-dimensional potential energy of gravitational models, which is strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, respectively, in D = 5 and D = 6, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for computing M_NR is obtained from our prescription as an added bonus. (paper)
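The equivalence invoked above, between the prescription and the first Born approximation, can be made schematic: to leading order the static potential energy is, up to normalization conventions not fixed by the abstract, the (D−1)-dimensional Fourier transform of the nonrelativistic amplitude,

```latex
E(r) \;\propto\; \int \frac{d^{D-1}q}{(2\pi)^{D-1}}\,
      e^{i\,\mathbf{q}\cdot\mathbf{r}}\,\mathcal{M}_{\mathrm{NR}}(\mathbf{q}).
```

The finiteness of E(r) at the origin in D = 5 and D = 6 then traces back to the improved large-q falloff of the higher-derivative propagators entering M_NR.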

  4. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  5. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    International Nuclear Information System (INIS)

    Johnstad, H.

    1991-01-01

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards

  6. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT In the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  7. Reduction of energy consumption peaks in a greenhouse by computer control

    Energy Technology Data Exchange (ETDEWEB)

    Amsen, M.G.; Froesig Nielsen, O.; Jacobsen, L.H. (Danish Research Service for Plant and Soil Science, Research Centre for Horticulture, Department of Horticultural Engineering, Aarslev (DK))

    1990-01-01

    The results of using a computer for environmental control in one greenhouse are compared in this paper with using modified analogue control equipment in another. Energy consumption peaks can be almost entirely prevented by properly applying computer control and strategy. Both treatments were based upon negative DIF, i.e. low day and high night minimum set points (14 deg. C/22 deg. C) for room temperature. No difference in production time and quality was observed in six different pot plant species. Only Kalanchoe showed a significant increase in fresh weight and dry weight. By applying computer control, the lack of flexibility of analogue control can be avoided and a more accurate room temperature control can be obtained. (author).

  8. Applied & Computational MathematicsChallenges for the Design and Control of Dynamic Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L; Burns, J A; Collis, S; Grosh, J; Jacobson, C A; Johansen, H; Mezic, I; Narayanan, S; Wetter, M

    2011-03-10

    The Energy Independence and Security Act of 2007 (EISA) was passed with the goal 'to move the United States toward greater energy independence and security.' Energy security and independence cannot be achieved unless the United States addresses the issue of energy consumption in the building sector and significantly reduces energy consumption in buildings. Commercial and residential buildings account for approximately 40% of the U.S. energy consumption and emit 50% of CO{sub 2} emissions in the U.S., which is more than twice the total energy consumption of the entire U.S. automobile and light truck fleet. A 50%-80% improvement in building energy efficiency in both new construction and in retrofitting existing buildings could significantly reduce U.S. energy consumption and mitigate climate change. Reaching these aggressive building efficiency goals will not happen without significant Federal investments in areas of computational and mathematical sciences. Applied and computational mathematics are required to enable the development of algorithms and tools to design, control and optimize energy efficient buildings. The challenge has been issued by the U.S. Secretary of Energy, Dr. Steven Chu (emphasis added): 'We need to do more transformational research at DOE including computer design tools for commercial and residential buildings that enable reductions in energy consumption of up to 80 percent with investments that will pay for themselves in less than 10 years.' On July 8-9, 2010 a team of technical experts from industry, government and academia was assembled in Arlington, Virginia to identify the challenges associated with developing and deploying new computational methodologies and tools that will address building energy efficiency. These experts concluded that investments in fundamental applied and computational mathematics will be required to build enabling technology that can be used to realize the target of 80% reductions in energy

  9. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    International Nuclear Information System (INIS)

    Brun, Rene; Carminati, Federico; Galli Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  10. Structural analysis of magnetic fusion energy systems in a combined interactive/batch computer environment

    International Nuclear Information System (INIS)

    Johnson, N.E.; Singhal, M.K.; Walls, J.C.; Gray, W.H.

    1979-01-01

    A system of computer programs has been developed to aid in the preparation of input data for and the evaluation of output data from finite element structural analyses of magnetic fusion energy devices. The system utilizes the NASTRAN structural analysis computer program and a special set of interactive pre- and post-processor computer programs, and has been designed for use in an environment wherein a time-share computer system is linked to a batch computer system. In such an environment, the analyst must only enter, review and/or manipulate data through interactive terminals linked to the time-share computer system. The primary pre-processor programs include NASDAT, NASERR and TORMAC. NASDAT and TORMAC are used to generate NASTRAN input data. NASERR performs routine error checks on this data. The NASTRAN program is run on a batch computer system using data generated by NASDAT and TORMAC. The primary post-processing programs include NASCMP and NASPOP. NASCMP is used to compress the data initially stored on magnetic tape by NASTRAN so as to facilitate interactive use of the data. NASPOP reads the data stored by NASCMP and reproduces NASTRAN output for selected grid points, elements and/or data types

  11. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  12. Computer usage and national energy consumption: Results from a field-metering study

    Energy Technology Data Exchange (ETDEWEB)

    Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery; Pratt, Stacy; Willem, Henry; Claybaugh, Erin; Beraki, Bereket; Nagaraju, Mythri; Price, Sarah; Young, Scott [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division]

    2014-12-01

    The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflective of laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power
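The reported unit AEC figures imply an average draw that can be back-computed from the usage hours; a quick arithmetic check of the abstract's numbers (the wattages below are derived here, not stated in the report):

```python
def implied_average_power_w(aec_kwh_per_yr, hours_per_day):
    """Average power draw (W) implied by annual energy consumption
    and mean daily usage hours: AEC / annual on-hours."""
    annual_hours = hours_per_day * 365
    return aec_kwh_per_yr * 1000 / annual_hours

desktop_w = implied_average_power_w(194, 7.3)  # ~73 W average while in use
laptop_w = implied_average_power_w(75, 4.8)    # ~43 W average while in use
```

These implied averages blend On, idle, and sleep power, so they sit below typical rated On-mode draw.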

  13. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan Agarwal

    2016-01-01

    Full Text Available The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
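The parallel read described above computes, in one analog step, the vector-matrix product that a digital memory would need O(N²) fetches for: each cell passes current by Ohm's law and each column sums currents by Kirchhoff's law. A plain-Python sketch of the two idealized kernels (device nonidealities ignored):

```python
def crossbar_read(conductances, voltages):
    """Ideal crossbar vector-matrix multiply.
    conductances[i][j]: conductance of the cell at row i, column j (S).
    voltages[i]: voltage applied to row i (V).
    Returns column currents I_j = sum_i V_i * G_ij (A)."""
    n_rows = len(conductances)
    n_cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(n_rows))
            for j in range(n_cols)]

def rank1_update(conductances, row_v, col_v, rate=1.0):
    """Ideal parallel write: G_ij += rate * row_v[i] * col_v[j],
    i.e. an outer-product (rank-1) update applied in place."""
    for i, g_row in enumerate(conductances):
        for j in range(len(g_row)):
            g_row[j] += rate * row_v[i] * col_v[j]
```

The physical crossbar performs both loops simultaneously in the analog domain, which is the source of the energy advantage claimed above.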

  14. E-commerce, paper and energy use: a case study concerning a Dutch electronic computer retailer

    Energy Technology Data Exchange (ETDEWEB)

    Hoogeveen, M.J.; Reijnders, L. [Open University Netherlands, Heerlen (Netherlands)

    2002-07-01

    Impacts of the application of e-commerce on paper and energy use are analysed in a case study concerning a Dutch electronic retailer (e-tailer) of computers. The estimated use of paper associated with the e-tailer concerned was substantially reduced if compared with physical retailing or traditional mail-order retailing. However, the overall effect of e-tailing on paper use strongly depends on customer behaviour. Some characteristics of e-commerce, as practised by the e-tailer concerned, such as diminished floor space requirements, reduced need for personal transport and simplified logistics, improve energy efficiency compared with physical retailing. Substitution of paper information by online information has an energetic effect that is dependent on the time of online information perusal and the extent to which downloaded information is printed. Increasing distances from producers to consumers, outsourcing, and increased use of computers, associated equipment and electronic networks are characteristics of e-commerce that may have an upward effect on energy use. In this case study, the upward effects thereof on energy use were less than the direct energy efficiency gains. However, the indirect effects associated with increased buying power and the rebound effect on transport following from freefalling travel time, greatly exceeded direct energy efficiency gains. (author)

  15. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

    Full Text Available Cloud computing is a service model in which basic computing resources, packaged as services, can be accessed through the Internet on demand and are placed in data centers. Data center architectures in cloud computing environments are heterogeneous and distributed, composed of clusters of networked servers whose physical machines differ in computing capacity. Fluctuating demand for and availability of cloud services in the data center can be managed through abstraction with virtualization technology. A virtual machine (VM) represents computing resources that can be dynamically allocated and reallocated on demand. This study considers VM consolidation as an energy conservation measure in private cloud computing systems, targeting the optimization of the VM selection policy and VM migration within the consolidation procedure. In a cloud data center, VMs hosting different application services require different levels of computing resources. Imbalanced use of computing resources across physical servers can be reduced by live VM migration to achieve workload balancing. A practical approach is used to develop an OpenStack-based cloud computing environment, integrating VM placement and VM selection procedures using OpenStack Neat VM consolidation. CPU time samples are used to estimate average CPU utilization in MHz over a given period: the average utilization of a VM is obtained by subtracting the previously sampled CPU time from the current CPU time, multiplying by the maximum CPU frequency, and dividing by the interval between the two sampling times (in milliseconds).
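The utilization estimate described in the abstract reduces to a finite difference over a sampling interval; a minimal sketch (units assumed here: cumulative CPU time and wall-clock time in the same unit, maximum frequency in MHz):

```python
def avg_cpu_utilization_mhz(cpu_time_now, cpu_time_prev,
                            t_now, t_prev, max_freq_mhz):
    """Average CPU utilization (MHz) over the sampling interval:
    (delta cumulative CPU time * max CPU frequency) / (interval length)."""
    return (cpu_time_now - cpu_time_prev) * max_freq_mhz / (t_now - t_prev)
```

For example, a VM that accumulated 2 units of CPU time over a 4-unit interval on a 2000 MHz host averaged 1000 MHz, i.e. 50% of one core.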

  16. Computation of expectation values from vibrational coupled-cluster at the two-mode coupling level

    DEFF Research Database (Denmark)

    Zoccante, Alberto; Seidler, Peter; Christiansen, Ove

    2011-01-01

    In this work we show how the vibrational coupled-cluster method at the two-mode coupling level can be used to calculate zero-point vibrational averages of properties. A technique is presented, where any expectation value can be calculated using a single set of Lagrangian multipliers computed...

  17. Computational Modelling of Materials for Wind Turbine Blades: Selected DTUWind Energy Activities

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Mishnaevsky, Leon

    2017-01-01

    Computational and analytical studies of degradation of wind turbine blade materials at the macro-, micro-, and nanoscale carried out by the modelling team of the Section Composites and Materials Mechanics, Department of Wind Energy, DTU, are reviewed. Examples of the analysis of the microstructural effects on the strength and fatigue life of composites are shown. Computational studies of degradation mechanisms of wind blade composites under tensile and compressive loading are presented. The effect of hybrid and nanoengineered structures on the performance of the composite was studied.

  18. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  19. Evaluation of bone mineral density with dual energy quantitative computed tomography (DEQCT)

    International Nuclear Information System (INIS)

    Ito, Masako; Hayashi, Kuniaki; Yamada, Naoyuki.

    1989-01-01

    The purpose of this study was twofold: to investigate the precision and accuracy of dual energy quantitative computed tomography (QCT) and to investigate age-related changes of bone mineral density (BMD) in patients without metabolic disorders. The rapid kilovolt peak switching system, with which the SOMATOM DR-H CT is equipped, allows dual energy scanning. KV-separated images and material-separated images were calculated from dual energy scan data. KV-separated data were regarded as single energy QCT. In phantom studies, dipotassium hydrogen phosphate solution, water, and ethanol were used to simulate bone mineral, lean soft tissue, and fat, respectively. Values of BMD obtained by the dual energy scanning method had an error of 5.5% per 10% increase of fat, as compared with 12% for BMD values obtained by the single energy scanning method. However, the single energy scanning method had a higher precision than the dual energy scanning method in determining BMD. The selection of the CT section is considered most important in the clinical determination of BMD. In a study of age-related changes of BMD in the vertebral trabecular and cortical bones in 161 patients, BMD was found to have two peaks for women in their twenties and thirties, and one peak for men in their twenties. Bone mineral density declined rapidly among women aged 50 years or more. These results suggest that the content of fat in the trabecular bone may increase progressively after the age of 40, regardless of sex. (N.K.)
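Material separation in dual-energy QCT amounts to solving a 2×2 linear system: the attenuation measured at each kVp is modeled as a linear combination of two basis materials (here bone mineral and soft tissue), with coefficients from phantom calibration. A sketch with entirely hypothetical coefficients, illustrating the decomposition step only:

```python
def decompose_two_materials(mu_low, mu_high, coeffs):
    """Solve for basis-material densities (rho_bone, rho_soft) from
    attenuation measured at two tube energies:
        mu_low  = a11*rho_bone + a12*rho_soft
        mu_high = a21*rho_bone + a22*rho_soft
    coeffs = ((a11, a12), (a21, a22)), from phantom calibration."""
    (a11, a12), (a21, a22) = coeffs
    det = a11 * a22 - a12 * a21  # assumed nonzero for distinct energies
    rho_bone = (mu_low * a22 - mu_high * a12) / det
    rho_soft = (a11 * mu_high - a21 * mu_low) / det
    return rho_bone, rho_soft
```

Because fat perturbs both measurements in a correlated way, the two-energy solve cancels much of its effect, which is consistent with the smaller fat-induced error reported for the dual energy method.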

  20. Computer simulation of energy dissipation from near threshold knock-ons in Fe3Al

    International Nuclear Information System (INIS)

    Schade, G.; Leighly, H.P. Jr.; Edwards, D.R.

    1976-01-01

    A computer program has been developed and used to model a series of knock-ons near the damage energy threshold in a micro-crystallite of the ordered alloy Fe3Al. The primary paths of energy removal from the knock-on site were found to be along the [100] and [111] directions by means of focusing-type collision chains. The relative importance of either direction as an energy removal path varied with the initial knock-on direction and also changed with time during the course of the knock-on event. The time rate of energy removal was found to be greatest in the [111] direction due to the shorter interatomic distances between atoms along this direction.

  1. The role of dual-energy computed tomography in the assessment of pulmonary function

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Hye Jeon [Department of Radiology, Hallym University College of Medicine, Hallym University Sacred Heart Hospital, 22, Gwanpyeong-ro 170beon-gil, Dongan-gu, Anyang-si, Gyeonggi-do 431-796 (Korea, Republic of); Hoffman, Eric A. [Departments of Radiology, Medicine, and Biomedical Engineering, University of Iowa, 200 Hawkins Dr, CC 701 GH, Iowa City, IA 52241 (United States); Lee, Chang Hyun; Goo, Jin Mo [Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Levin, David L. [Department of Radiology, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Kauczor, Hans-Ulrich [Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (DZL), Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Seo, Joon Beom, E-mail: seojb@amc.seoul.kr [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 388-1, Pungnap 2-dong, Songpa-ku, Seoul, 05505 (Korea, Republic of)

    2017-01-15

    Highlights: • The dual-energy CT technique enables the differentiation of contrast materials with a material decomposition algorithm. • Pulmonary functional information can be evaluated using dual-energy CT simultaneously with anatomic CT information. • Pulmonary functional information from dual-energy CT can improve the diagnosis and severity assessment of diseases. - Abstract: The assessment of pulmonary function, including ventilation and perfusion status, is important in addition to the evaluation of structural changes of the lung parenchyma in various pulmonary diseases. The dual-energy computed tomography (DECT) technique can provide pulmonary functional information and high-resolution anatomic information simultaneously. The application of DECT for the evaluation of pulmonary function has been investigated in various pulmonary diseases, such as pulmonary embolism, asthma, and chronic obstructive pulmonary disease. In this review article, we present the principles and technical aspects of DECT, along with clinical applications for the assessment of pulmonary function in various lung diseases.

  2. SIVEH: Numerical Computing Simulation of Wireless Energy-Harvesting Sensor Nodes

    Directory of Open Access Journals (Sweden)

    Pedro Yuste

    2013-09-01

    Full Text Available The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I–V for EH), based on I–V hardware tracking. I–V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time—days, weeks, months or years—using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with dynamic adjustment of the sleep time rate while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach.
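    The sleep-time-rate adjustment toward energy-neutral operation can be sketched as a simple feedback rule. This is an illustrative toy, not SIVEH's actual algorithm: the function name, step size, and energy figures are all assumptions.

```python
def adjust_sleep_rate(sleep_rate, e_harvested, e_consumed, step=0.05):
    """Nudge a node's sleep-time fraction toward energy-neutral operation:
    sleep more when consumption outpaces harvest, less when energy is spare."""
    if e_consumed > e_harvested:
        return min(1.0, sleep_rate + step)
    if e_harvested > e_consumed:
        return max(0.0, sleep_rate - step)
    return sleep_rate

# Hypothetical daily energy budget (joules): a deficit, so the node sleeps more.
new_rate = adjust_sleep_rate(0.50, e_harvested=1.2, e_consumed=1.5)
```

    In a simulator the rule would be applied once per control period, driven by the harvested and consumed energy tallied over that period.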

  3. Utility of single-energy and dual-energy computed tomography in clot characterization: An in-vitro study.

    Science.gov (United States)

    Brinjikji, Waleed; Michalak, Gregory; Kadirvel, Ramanathan; Dai, Daying; Gilvarry, Michael; Duffy, Sharon; Kallmes, David F; McCollough, Cynthia; Leng, Shuai

    2017-06-01

    Background and purpose Because computed tomography (CT) is the most commonly used imaging modality for the evaluation of acute ischemic stroke patients, developing CT-based techniques for improving clot characterization could prove useful. The purpose of this in-vitro study was to determine which single-energy or dual-energy CT techniques provided optimum discrimination between red blood cell (RBC) and fibrin-rich clots. Materials and methods Seven clot types with varying fibrin and RBC densities were made (90% RBC, 99% RBC, 63% RBC, 36% RBC, 18% RBC and 0% RBC with high and low fibrin density) and their composition was verified histologically. Ten of each clot type were created and scanned with a second generation dual source scanner using three single (80 kV, 100 kV, 120 kV) and two dual-energy protocols (80/Sn 140 kV and 100/Sn 140 kV). A region of interest (ROI) was placed over each clot and mean attenuation was measured. Receiver operating characteristic curves were calculated at each energy level to determine the accuracy at differentiating RBC-rich clots from fibrin-rich clots. Results Clot attenuation increased with RBC content at all energy levels. Single-energy at 80 kV and 120 kV and dual-energy 80/Sn 140 kV protocols allowed for distinguishing between all clot types, with the exception of 36% RBC and 18% RBC. On receiver operating characteristic curve analysis, the 80/Sn 140 kV dual-energy protocol had the highest area under the curve for distinguishing between fibrin-rich and RBC-rich clots (area under the curve 0.99). Conclusions Dual-energy CT with 80/Sn 140 kV had the highest accuracy for differentiating RBC-rich and fibrin-rich in-vitro thrombi. Further studies are needed to study the utility of non-contrast dual-energy CT in thrombus characterization in acute ischemic stroke.
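    The receiver operating characteristic analysis used above can be sketched in a few lines: the area under the ROC curve equals the probability that a randomly chosen positive score exceeds a randomly chosen negative one (the rank-sum identity). The attenuation values below are hypothetical, not the study's data.

```python
import numpy as np

def roc_auc(pos, neg):
    """AUC via the rank-sum identity: AUC = P(score_pos > score_neg),
    counting ties as 1/2. pos/neg are scores for the two classes."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (pos.size * neg.size)

# Hypothetical mean clot attenuations (HU): RBC-rich clots scan brighter.
rbc_rich = [65.0, 72.0, 70.0, 68.0, 75.0]
fibrin_rich = [30.0, 35.0, 28.0, 40.0, 33.0]
auc = roc_auc(rbc_rich, fibrin_rich)
```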

  4. Computing the Free Energy along a Reaction Coordinate Using Rigid Body Dynamics.

    Science.gov (United States)

    Tao, Peng; Sodt, Alexander J; Shao, Yihan; König, Gerhard; Brooks, Bernard R

    2014-10-14

    The calculations of potential of mean force along complex chemical reactions or rare events pathways are of great interest because of their importance for many areas in chemistry, molecular biology, and material science. The major difficulty for free energy calculations comes from the great computational cost for adequate sampling of the system in high-energy regions, especially close to the reaction transition state. Here, we present a method, called FEG-RBD, in which the free energy gradients were obtained from rigid body dynamics simulations. Then the free energy gradients were integrated along a reference reaction pathway to calculate free energy profiles. In a given system, the reaction coordinates defining a subset of atoms (e.g., a solute, or the quantum mechanics (QM) region of a quantum mechanics/molecular mechanics simulation) are selected to form a rigid body during the simulation. The first-order derivatives (gradients) of the free energy with respect to the reaction coordinates are obtained through the integration of constraint forces within the rigid body. Each structure along the reference reaction path is separately subjected to such a rigid body simulation. The individual free energy gradients are integrated along the reference pathway to obtain the free energy profile. Test cases provided demonstrate both the strengths and weaknesses of the FEG-RBD method. The most significant benefit of this method comes from the fast convergence rate of the free energy gradient using rigid-body constraints instead of restraints. A correction to the free energy due to approximate relaxation of the rigid-body constraint is estimated and discussed. A comparison with umbrella sampling using a simple test case revealed the improved sampling efficiency of FEG-RBD by a factor of 4 on average. The enhanced efficiency makes this method effective for calculating the free energy of complex chemical reactions when the reaction coordinate can be unambiguously defined by a
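    The final step of FEG-RBD, integrating the free energy gradients along the reference pathway, can be sketched with the trapezoidal rule; a toy harmonic profile stands in for real simulation output here.

```python
import numpy as np

def integrate_profile(s, dF_ds):
    """Trapezoidal integration of free energy gradients along a reference
    path s, returning the profile F(s) - F(s[0])."""
    s, g = np.asarray(s, float), np.asarray(dF_ds, float)
    segments = 0.5 * (g[1:] + g[:-1]) * np.diff(s)
    return np.concatenate(([0.0], np.cumsum(segments)))

# Toy check: a harmonic profile F = 0.5*k*s^2 has gradient k*s.
k = 2.0
s = np.linspace(0.0, 1.0, 101)
F = integrate_profile(s, k * s)
```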

  5. Computation studies into architecture and energy transfer properties of photosynthetic units from filamentous anoxygenic phototrophs

    Energy Technology Data Exchange (ETDEWEB)

    Linnanto, Juha Matti [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Freiberg, Arvi [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu, Estonia and Institute of Molecular and Cell Biology, University of Tartu, Riia 23, 51010 Tartu (Estonia)

    2014-10-06

    We have used different computational methods to study structural architecture, and light-harvesting and energy transfer properties of the photosynthetic unit of filamentous anoxygenic phototrophs. Due to the huge number of atoms in the photosynthetic unit, a combination of atomistic and coarse methods was used for electronic structure calculations. The calculations reveal that the light energy absorbed by the peripheral chlorosome antenna complex transfers efficiently via the baseplate and the core B808–866 antenna complexes to the reaction center complex, in general agreement with the present understanding of this complex system.

  6. Computing energy-optimal trajectories for an autonomous underwater vehicle using direct shooting

    Directory of Open Access Journals (Sweden)

    Inge Spangelo

    1992-07-01

    Full Text Available Energy-optimal trajectories for an autonomous underwater vehicle can be computed using a numerical solution of the optimal control problem. The vehicle is modeled with the six-dimensional nonlinear and coupled equations of motion, controlled with DC motors in all degrees of freedom. The actuators are modeled and controlled with velocity loops. The dissipated energy is expressed in terms of the control variables as a nonquadratic function. Direct shooting methods, including control vector parameterization (CVP), are used in this study. Numerical calculations are performed and good results are achieved.
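    Direct shooting with control vector parameterization can be illustrated on a toy 1-D vehicle: the control is held piecewise constant, the dynamics are integrated forward, and a nonquadratic energy cost is accumulated. The dynamics, drag model, and cost below are illustrative assumptions, not the paper's six-degree-of-freedom model.

```python
import numpy as np

def shoot(u_params, dt=1.0, drag=0.5):
    """Direct shooting: integrate toy 1-D dynamics v' = u - drag*v*|v| under a
    piecewise-constant control (CVP) and accumulate a nonquadratic energy cost."""
    v = x = energy = 0.0
    for u in u_params:
        v += (u - drag * v * abs(v)) * dt
        x += v * dt
        energy += abs(u) ** 1.5 * dt  # illustrative nonquadratic actuator cost
    return energy, x

def cheapest_constant_thrust(target_x, n_steps=10):
    """Crude one-parameter search over constant controls reaching target_x."""
    best = None
    for u in np.linspace(0.05, 2.0, 40):
        energy, x = shoot([u] * n_steps)
        if x >= target_x and (best is None or energy < best[0]):
            best = (energy, u)
    return best

energy, u_best = cheapest_constant_thrust(target_x=5.0)
```

    A real CVP solver would let each control interval vary independently and hand the resulting finite-dimensional problem to a nonlinear programming routine rather than a grid search.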

  7. Computing Relative Free Energies of Solvation using Single Reference Thermodynamic Integration Augmented with Hamiltonian Replica Exchange.

    Science.gov (United States)

    Khavrutskii, Ilja V; Wallqvist, Anders

    2010-11-09

    This paper introduces an efficient single-topology variant of Thermodynamic Integration (TI) for computing relative transformation free energies in a series of molecules with respect to a single reference state. The presented TI variant, which we refer to as Single-Reference TI (SR-TI), combines well-established molecular simulation methodologies into a practical computational tool. Augmented with Hamiltonian Replica Exchange (HREX), the SR-TI variant can deliver enhanced sampling in select degrees of freedom. The utility of the SR-TI variant is demonstrated in calculations of relative solvation free energies for a series of benzene derivatives with increasing complexity. Notably, the SR-TI variant with the HREX option provides converged results in a challenging case of an amide molecule with a high (13-15 kcal/mol) barrier for internal cis/trans interconversion using simulation times of only 1 to 4 ns.
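    The core of any TI variant is the quadrature ΔF = ∫₀¹ ⟨dU/dλ⟩ dλ over the coupling parameter. A minimal trapezoidal sketch follows; the per-window averages are hypothetical stand-ins for simulation output.

```python
import numpy as np

def ti_delta_f(lambdas, dudl_means):
    """Thermodynamic integration: dF = integral of <dU/dlambda> over [0, 1],
    here by the trapezoidal rule over the sampled lambda windows."""
    lam, g = np.asarray(lambdas, float), np.asarray(dudl_means, float)
    return float((0.5 * (g[1:] + g[:-1]) * np.diff(lam)).sum())

# Hypothetical window averages of <dU/dlambda> (kcal/mol) at five lambdas.
dF = ti_delta_f([0.0, 0.25, 0.5, 0.75, 1.0], [-40.0, -25.0, -12.0, -4.0, 0.0])
```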

  8. Visualization of flaws within heavy section ultrasonic test blocks using high energy computed tomography

    International Nuclear Information System (INIS)

    House, M.B.; Ross, D.M.; Janucik, F.X.; Friedman, W.D.; Yancey, R.N.

    1996-05-01

    The feasibility of high energy computed tomography (9 MeV) to detect volumetric and planar discontinuities in large pressure vessel mock-up blocks was studied. The data supplied by the manufacturer of the test blocks on the intended flaw geometry were compared to manual, contact ultrasonic test and computed tomography test data. Subsequently, a visualization program was used to construct fully three-dimensional morphological information enabling interactive data analysis on the detected flaws. Density isosurfaces show the relative shape and location of the volumetric defects within the mock-up blocks. Such a technique may be used to qualify personnel or newly developed ultrasonic test methods without the associated high cost of destructive evaluation. Data are presented showing the capability of the volumetric data analysis program to overlay the computed tomography and destructive evaluation (serial metallography) data for a direct, three-dimensional comparison.

  9. Characterization of breast tissue using energy-dispersive X-ray diffraction computed tomography

    International Nuclear Information System (INIS)

    Pani, S.; Cook, E.J.; Horrocks, J.A.; Jones, J.L.; Speller, R.D.

    2010-01-01

    A method for sample characterization using energy-dispersive X-ray diffraction computed tomography (EDXRDCT) is presented. The procedures for extracting diffraction patterns from the data and the corrections applied are discussed. The procedures were applied to the characterization of breast tissue samples, 6 mm in diameter. Comparison with histological sections of the samples confirmed the possibility of grouping the patterns into five families, corresponding to adipose tissue, fibrosis, poorly differentiated cancer, well differentiated cancer and benign tumour.

  10. Dual energy quantitative computed tomography (QCT). Precision of the mineral density measurements

    International Nuclear Information System (INIS)

    Braillon, P.; Bochu, M.

    1989-01-01

    The improvement that could be obtained in quantitative bone mineral measurements by dual energy computed tomography was tested in vitro. From the results of 15 mineral density measurements (in mg Ca/cm³) done on a precise lumbar spine phantom (Hologic) and referred to the values obtained on the same slices on a Siemens Osteo-CT phantom, the precision found was 0.8%, six times better than the precision calculated from the uncorrected measured values. [fr]
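    Precision figures like the 0.8% above are typically reported as a percent coefficient of variation over repeated measurements. A minimal sketch, with hypothetical phantom readings:

```python
import numpy as np

def precision_cv_percent(measurements):
    """Precision as percent coefficient of variation: 100 * SD / mean."""
    x = np.asarray(measurements, float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical repeated phantom readings (mg Ca/cm3).
cv = precision_cv_percent([160.1, 159.2, 161.0, 158.8, 160.4])
```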

  11. Computer code to predict the heat of explosion of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B.

    2009-01-01

    The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-a-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code viz., LOTUSES, developed by the authors predicts various characteristics of HEMs such as explosion products including balanced explosion reactions, density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, and the results are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with experimental values and are useful for rapid hazard assessment of energetic materials.
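    A reported fit such as R² = 0.9721 with y = 0.9262x + 101.45 corresponds to an ordinary least-squares line through computed-versus-experimental pairs. The sketch below shows the computation on hypothetical data, not the paper's values.

```python
import numpy as np

def linear_fit_r2(x, y):
    """Least-squares line y = a*x + b and the coefficient of determination R^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)
    ss_res = ((y - (a * x + b)) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical computed vs. experimental heats of explosion (arbitrary units).
computed = [4.2, 5.1, 5.8, 6.4, 7.0]
experimental = [4.0, 5.0, 5.9, 6.2, 7.1]
a, b, r2 = linear_fit_r2(computed, experimental)
```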

  12. A novel cost based model for energy consumption in cloud computing.

    Science.gov (United States)

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. The proposed model considers cache interference costs, which depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.
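    A host-level energy model with a cache-interference term can be sketched as below. The linear power model and the penalty coefficient are illustrative assumptions for the sake of the sketch, not the paper's calibrated model.

```python
def host_energy_kwh(p_idle, p_peak, util, hours, n_vms, data_gb,
                    cache_penalty_per_gb=0.02):
    """Hypothetical host model: power linear in CPU utilization, inflated by a
    cache-interference overhead that grows with co-located VMs and data size."""
    base_watts = p_idle + (p_peak - p_idle) * util
    interference = cache_penalty_per_gb * data_gb * max(n_vms - 1, 0)
    return base_watts * (1.0 + interference) * hours / 1000.0

# Four time-shared VMs working on 2 GB each vs. a single VM (hypothetical host).
shared = host_energy_kwh(100.0, 250.0, 0.5, hours=24, n_vms=4, data_gb=2.0)
single = host_energy_kwh(100.0, 250.0, 0.5, hours=24, n_vms=1, data_gb=2.0)
```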

  13. Calculation of free-energy differences from computer simulations of initial and final states

    International Nuclear Information System (INIS)

    Hummer, G.; Szabo, A.

    1996-01-01

    A class of simple expressions of increasing accuracy for the free-energy difference between two states is derived based on numerical thermodynamic integration. The implementation of these formulas requires simulations of the initial and final (and possibly a few intermediate) states. They involve higher free-energy derivatives at these states which are related to the moments of the probability distribution of the perturbation. Given a specified number of such derivatives, these integration formulas are optimal in the sense that they are exact to the highest possible order of free-energy perturbation theory. The utility of this approach is illustrated for the hydration free energy of water. This problem provides a quite stringent test because the free energy is a highly nonlinear function of the charge so that even fourth order perturbation theory gives a very poor estimate of the free-energy change. Our results should prove most useful for complex, computationally demanding problems where free-energy differences arise primarily from changes in the electrostatic interactions (e.g., electron transfer, charging of ions, protonation of amino acids in proteins). copyright 1996 American Institute of Physics
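    The idea of estimating a free-energy difference from moments of the perturbation distribution is captured, at second order, by the cumulant expansion ΔF ≈ ⟨ΔU⟩ − (β/2)Var(ΔU), which is exact when ΔU is Gaussian. A sketch with synthetic samples:

```python
import numpy as np

def free_energy_cumulant(dU, beta=1.0):
    """Second-order cumulant estimate from initial-state sampling:
    dF ~ <dU> - (beta/2) Var(dU); exact when dU is Gaussian."""
    dU = np.asarray(dU, float)
    return dU.mean() - 0.5 * beta * dU.var()

# Synthetic Gaussian perturbation energies: exact answer is 2.0 - 0.5 = 1.5.
rng = np.random.default_rng(0)
dF = free_energy_cumulant(rng.normal(2.0, 1.0, 100000), beta=1.0)
```

    As the abstract notes, strongly nonlinear cases (such as charging) need higher moments or intermediate states; the second-order formula alone can fail badly there.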

  14. A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2016-01-01

    Full Text Available The economic costs caused by electric power take the most significant part in the total cost of a data center; thus energy conservation is an important issue in cloud computing systems. One well-known technique to reduce energy consumption is the consolidation of Virtual Machines (VMs). However, it may lose some performance points on energy saving and the Quality of Service (QoS) for dynamic workloads. Fortunately, Dynamic Voltage and Frequency Scaling (DVFS) is an efficient technique to save energy in dynamic environments. In this paper, combined with the DVFS technology, we propose a cooperative two-tier energy-aware management method including local DVFS control and global VM deployment. The DVFS controller adjusts the frequencies of homogeneous processors in each server at run-time based on practical energy prediction. On the other hand, the Global Scheduler assigns VMs onto the designated servers based on cooperation with the local DVFS controller. The final evaluation results demonstrate the effectiveness of our two-tier method in energy saving.
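    A local DVFS controller of the kind described can be sketched with a cubic dynamic-power law and a lowest-sufficient-frequency rule. The power figures and frequency steps below are hypothetical.

```python
def dvfs_power(f, f_max, p_dyn_max=60.0, p_static=20.0):
    """Dynamic power scales roughly as f^3 (voltage tracks frequency);
    static power is frequency-independent. All figures are hypothetical."""
    return p_static + p_dyn_max * (f / f_max) ** 3

def lowest_sufficient_frequency(load, freqs, f_max):
    """Local DVFS rule: lowest step whose relative capacity covers the load."""
    for f in sorted(freqs):
        if f / f_max >= load:
            return f
    return max(freqs)

freqs = [1.2, 1.6, 2.0, 2.4]  # GHz steps (hypothetical)
f = lowest_sufficient_frequency(0.6, freqs, f_max=2.4)
p = dvfs_power(f, f_max=2.4)
```

    Running at the lowest frequency that still meets demand trades a small latency margin for a large cut in the cubic dynamic-power term.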

  15. Development of a global computable general equilibrium model coupled with detailed energy end-use technology

    International Nuclear Information System (INIS)

    Fujimori, Shinichiro; Masui, Toshihiko; Matsuoka, Yuzuru

    2014-01-01

    Highlights: • Detailed energy end-use technology information is considered within a CGE model. • Aggregated macro results of the detailed model are similar to traditional model. • The detailed model shows unique characteristics in the household sector. - Abstract: A global computable general equilibrium (CGE) model integrating detailed energy end-use technologies is developed in this paper. The paper (1) presents how energy end-use technologies are treated within the model and (2) analyzes the characteristics of the model’s behavior. Energy service demand and end-use technologies are explicitly considered, and the share of technologies is determined by a discrete probabilistic function, namely a Logit function, to meet the energy service demand. Coupling with detailed technology information enables the CGE model to have more realistic representation in the energy consumption. The proposed model in this paper is compared with the aggregated traditional model under the same assumptions in scenarios with and without mitigation roughly consistent with the two degree climate mitigation target. Although the results of aggregated energy supply and greenhouse gas emissions are similar, there are three main differences between the aggregated and the detailed technologies models. First, GDP losses in mitigation scenarios are lower in the detailed technology model (2.8% in 2050) as compared with the aggregated model (3.2%). Second, price elasticity and autonomous energy efficiency improvement are heterogeneous across regions and sectors in the detailed technology model, whereas the traditional aggregated model generally utilizes a single value for each of these variables. Third, the magnitude of emissions reduction and factors (energy intensity and carbon factor reduction) related to climate mitigation also varies among sectors in the detailed technology model. The household sector in the detailed technology model has a relatively higher reduction for both energy
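    The Logit function used to allocate energy service demand across competing end-use technologies can be sketched as follows; the costs and the sensitivity parameter mu are hypothetical.

```python
import numpy as np

def logit_shares(costs, mu=5.0):
    """Logit technology choice: cheaper end-use technologies win larger shares;
    mu sets how sharply shares respond to cost differences (hypothetical value)."""
    c = np.asarray(costs, float)
    w = np.exp(-mu * (c - c.min()))  # shift by the minimum for numerical stability
    return w / w.sum()

# Three competing technologies supplying the same energy service (unit costs).
shares = logit_shares([1.0, 1.2, 1.5])
```

    Larger mu approaches winner-takes-all selection of the cheapest technology; smaller mu spreads demand more evenly, reflecting heterogeneity in consumers and applications.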

  16. Computational scheme for pH-dependent binding free energy calculation with explicit solvent.

    Science.gov (United States)

    Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R

    2016-01-01

    We present a computational scheme to compute the pH-dependence of binding free energy with explicit solvent. Despite the importance of pH, the effect of pH has been generally neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state or releasing a single protonation state to multiple states, the pH dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME), and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while cutoff method results are off by 2 kcal mol⁻¹. We also discuss the characteristics of three long-range interaction calculation methods for constant-pH simulations. © 2015 The Protein Society.
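    The BAR analysis step can be sketched for the equal-sample-size case, where Bennett's self-consistent condition reduces to Σ f(β(w_F − ΔF)) = Σ f(β(w_R + ΔF)) with f the Fermi function. This is an illustrative implementation, not the authors' code; synthetic Gaussian work values (Crooks-consistent, exact ΔF = 1) stand in for simulation output.

```python
import numpy as np

def bar_delta_f(w_fwd, w_rev, beta=1.0, lo=-100.0, hi=100.0, iters=100):
    """Bennett acceptance ratio for equal sample sizes: bisect on dF in
    sum f(beta*(w_F - dF)) = sum f(beta*(w_R + dF)), f(x) = 1/(1+e^x)."""
    w_fwd, w_rev = np.asarray(w_fwd, float), np.asarray(w_rev, float)
    f = lambda x: 1.0 / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))
    g = lambda dF: f(beta * (w_fwd - dF)).sum() - f(beta * (w_rev + dF)).sum()
    for _ in range(iters):  # g is monotonically increasing in dF
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Crooks-consistent Gaussian work values: exact dF = 1.5 - 0.5 = 1.0 at beta = 1.
rng = np.random.default_rng(1)
dF = bar_delta_f(rng.normal(1.5, 1.0, 50000), rng.normal(-0.5, 1.0, 50000))
```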

  17. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. 
To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial

  18. Ab initio calculation of reaction energies. III. Basis set dependence of relative energies on the FH2 and H2CO potential energy surfaces

    International Nuclear Information System (INIS)

    Frisch, M.J.; Binkley, J.S.; Schaefer, H.F. III

    1984-01-01

    The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller–Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F+H2→FH+H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol⁻¹ of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.
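    The zero-point correction mentioned above enters the absolute-zero activation energy as Ea(0 K) = ΔE_elec + ZPE(TS) − ZPE(reactants), with harmonic ZPE = Σᵢ ½hcνᵢ. A sketch with hypothetical frequencies and barrier (the transition-state imaginary mode carries no ZPE and is excluded):

```python
HC_KJ_MOL_CM = 1.19627e-2  # h*c*N_A in kJ/mol per cm^-1

def zpe_kj_mol(freqs_cm1):
    """Harmonic zero-point vibrational energy: ZPE = sum_i (1/2) h c nu_i."""
    return 0.5 * HC_KJ_MOL_CM * sum(freqs_cm1)

def barrier_0k(dE_elec, freqs_reactants, freqs_ts):
    """Absolute-zero activation energy: electronic barrier + ZPE(TS) - ZPE(reactants)."""
    return dE_elec + zpe_kj_mol(freqs_ts) - zpe_kj_mol(freqs_reactants)

# Hypothetical numbers (kJ/mol and cm^-1): one reactant stretch, two real TS modes.
ea = barrier_0k(8.0, freqs_reactants=[4400.0], freqs_ts=[3100.0, 700.0])
```

    Because the transition state usually has softer modes than the reactants, the ZPE correction typically lowers the barrier, which is why assumed rather than calculated ZPEs can noticeably skew a theoretical activation energy.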

  19. Quantitative material decomposition using spectral computed tomography with an energy-resolved photon-counting detector

    International Nuclear Information System (INIS)

    Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

    2014-01-01

    Dual-energy computed tomography (CT) techniques have been used to decompose materials and characterize tissues according to their physical and chemical compositions. However, these techniques are hampered by the limitations of conventional x-ray detectors operated in charge integrating mode. Energy-resolved photon-counting detectors provide spectral information from polychromatic x-rays using multiple energy thresholds. These detectors allow simultaneous acquisition of data in different energy ranges without spectral overlap, resulting in more efficient material decomposition and quantification for dual-energy CT. In this study, a pre-reconstruction dual-energy CT technique based on volume conservation was proposed for three-material decomposition. The technique was combined with iterative reconstruction algorithms by using a ray-driven projector in order to improve the quality of decomposition images and reduce radiation dose. A spectral CT system equipped with a CZT-based photon-counting detector was used to implement the proposed dual-energy CT technique. We obtained dual-energy images of calibration and three-material phantoms consisting of low atomic number materials from the optimal energy bins determined by Monte Carlo simulations. The material decomposition process was accomplished by both the proposed and post-reconstruction dual-energy CT techniques. Linear regression and normalized root-mean-square error (NRMSE) analyses were performed to evaluate the quantitative accuracy of decomposition images. The calibration accuracy of the proposed dual-energy CT technique was higher than that of the post-reconstruction dual-energy CT technique, with fitted slopes of 0.97–1.01 and NRMSEs of 0.20–4.50% for all basis materials. In the three-material phantom study, the proposed dual-energy CT technique decreased the NRMSEs of measured volume fractions by factors of 0.17–0.28 compared to the post-reconstruction dual-energy CT technique. It was concluded that the
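    Volume-conservation-based three-material decomposition amounts to solving a 3×3 linear system: two attenuation equations (one per energy bin) plus the constraint that the volume fractions sum to one. A sketch with hypothetical basis values, not the study's calibration:

```python
import numpy as np

def volume_fractions(mu_low, mu_high, basis):
    """Three-material decomposition under volume conservation: two attenuation
    equations (low/high energy bin) plus sum(f) = 1, solved as a 3x3 system.
    basis maps material name -> (value_low, value_high); numbers hypothetical."""
    names = list(basis)
    A = np.array([[basis[n][0] for n in names],
                  [basis[n][1] for n in names],
                  [1.0, 1.0, 1.0]])
    b = np.array([mu_low, mu_high, 1.0])
    return dict(zip(names, np.linalg.solve(A, b)))

# Hypothetical basis CT numbers (HU) in the two energy bins.
basis = {"water": (0.0, 0.0), "lipid": (-100.0, -80.0), "mineral": (500.0, 300.0)}
f = volume_fractions(mu_low=30.0, mu_high=14.0, basis=basis)
```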

  20. Computing conformational free energy differences in explicit solvent: An efficient thermodynamic cycle using an auxiliary potential and a free energy functional constructed from the end points.

    Science.gov (United States)

    Harris, Robert C; Deng, Nanjie; Levy, Ronald M; Ishizuka, Ryosuke; Matubayasi, Nobuyuki

    2017-06-05

    Many biomolecules undergo conformational changes associated with allostery or ligand binding. Observing these changes in computer simulations is difficult if their timescales are long. These calculations can be accelerated by observing the transition on an auxiliary free energy surface with a simpler Hamiltonian and connecting this free energy surface to the target free energy surface with free energy calculations. Here, we show that the free energy legs of the cycle can be replaced with energy representation (ER) density functional approximations. We compute: (1) The conformational free energy changes for alanine dipeptide transitioning from the right-handed free energy basin to the left-handed basin and (2) the free energy difference between the open and closed conformations of β-cyclodextrin, a "host" molecule that serves as a model for molecular recognition in host-guest binding. β-cyclodextrin contains 147 atoms compared to 22 atoms for alanine dipeptide, making β-cyclodextrin a large molecule for which to compute solvation free energies by free energy perturbation or integration methods and the largest system for which the ER method has been compared to exact free energy methods. The ER method replaced the 28 simulations to compute each coupling free energy with two endpoint simulations, reducing the computational time for the alanine dipeptide calculation by about 70% and for the β-cyclodextrin by > 95%. The method works even when the distribution of conformations on the auxiliary free energy surface differs substantially from that on the target free energy surface, although some degree of overlap between the two surfaces is required. © 2016 Wiley Periodicals, Inc.

  1. Computer simulations of low energy displacement cascades in a face centered cubic lattice

    International Nuclear Information System (INIS)

    Schiffgens, J.O.; Bourquin, R.D.

    1976-09-01

Computer simulations of atomic motion in a copper lattice following the production of primary knock-on atoms (PKAs) with energies from 25 to 200 eV are discussed. In this study, a mixed Moliere-Englert pair potential is used to model the copper lattice. The computer code COMENT, which employs the dynamical method, is used to analyze the motion of up to 6000 atoms per time step during cascade evolution. The atoms are initially at rest on the sites of an ideal lattice. A matrix of 12 PKA directions and 6 PKA energies is investigated. Displacement thresholds in the [110] and [100] directions are calculated to be approximately 17 and 20 eV, respectively. A table showing the stability of isolated Frenkel pairs with different vacancy and interstitial orientations and separations is presented. The numbers of Frenkel pairs and atomic replacements are tabulated as a function of PKA direction for each energy. For PKA energies of 25, 50, 75, 100, 150, and 200 eV, the average numbers of Frenkel pairs per PKA are 0.4, 0.6, 1.0, 1.2, 1.4, and 2.2, and the average numbers of replacements per PKA are 2.4, 4.0, 3.3, 4.9, 9.3, and 15.8, respectively.

  2. Determining Balıkesir’s Energy Potential Using a Regression Analysis Computer Program

    Directory of Open Access Journals (Sweden)

    Bedri Yüksel

    2014-01-01

    Full Text Available Solar power and wind energy are used concurrently during specific periods, while at other times only the more efficient is used, and hybrid systems make this possible. When establishing a hybrid system, the extent to which these two energy sources support each other needs to be taken into account. This paper is a study of the effects of wind speed, insolation levels, and the meteorological parameters of temperature and humidity on the energy potential in Balıkesir, in the Marmara region of Turkey. The relationship between the parameters was studied using a multiple linear regression method. Using a designed-for-purpose computer program, two different regression equations were derived, with wind speed being the dependent variable in the first and insolation levels in the second. The regression equations yielded accurate results. The computer program allowed for the rapid calculation of different acceptance rates. The results of the statistical analysis proved the reliability of the equations. An estimate of identified meteorological parameters and unknown parameters could be produced with a specified precision by using the regression analysis method. The regression equations also worked for the evaluation of energy potential.
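
    The multiple-linear-regression step described above can be sketched with ordinary least squares, taking wind speed as the dependent variable and temperature, humidity, and insolation as predictors. All data and coefficients below are invented for illustration; they are not the Balıkesir measurements:

    ```python
    import numpy as np

    # Synthetic weather data standing in for the measured parameters.
    rng = np.random.default_rng(0)
    n = 100
    temperature = rng.uniform(5, 35, n)     # deg C
    humidity = rng.uniform(20, 90, n)       # %
    insolation = rng.uniform(0, 900, n)     # W/m^2
    wind_speed = (4.0 + 0.05 * temperature - 0.02 * humidity
                  + 0.001 * insolation + rng.normal(0, 0.3, n))  # m/s, with noise

    # Design matrix with an intercept column; solve by ordinary least squares.
    X = np.column_stack([np.ones(n), temperature, humidity, insolation])
    coeffs, *_ = np.linalg.lstsq(X, wind_speed, rcond=None)
    intercept, b_temp, b_hum, b_ins = coeffs
    print(f"wind = {intercept:.2f} + {b_temp:.3f}*T + {b_hum:.3f}*H + {b_ins:.4f}*I")
    ```

    A second regression with insolation as the dependent variable, as in the study, would reuse the same machinery with the columns swapped.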

  3. On the Design of Energy-Efficient Location Tracking Mechanism in Location-Aware Computing

    Directory of Open Access Journals (Sweden)

    MoonBae Song

    2005-01-01

Full Text Available The battery, in contrast to other hardware, is not governed by Moore's Law. In location-aware computing, power is a very limited resource, and a number of promising techniques at various layers have recently been proposed to reduce energy consumption. This paper considers the problem of minimizing the energy used to track the location of a mobile user over a wireless link in mobile computing. An energy-efficient location update protocol reduces the number of location update messages as far as possible and keeps the wireless interface switched off for as long as possible. This can be achieved through the concept of mobility-awareness that we propose. For this purpose, this paper proposes a novel mobility model, called the state-based mobility model (SMM), to provide a more generalized framework for both describing the mobility of complexly moving objects and updating their location information. We also introduce the state-based location update protocol (SLUP) based on this mobility model. Extensive experiments on various synthetic datasets show that the proposed method improves energy efficiency by 2-3 times at an additional imprecision cost of 10%.

  4. Network computing infrastructure to share tools and data in global nuclear energy partnership

    International Nuclear Information System (INIS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    2010-01-01

CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For accessibility, we adopted SSL-VPN (Secure Socket Layer - Virtual Private Network) technology for access beyond firewalls. For security, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism, set a fine-grained access control policy for shared tools and data, and used a shared-key encryption method to protect tools and data against leakage to third parties. For usability, we chose Web browsers as the user interface and developed a Web application providing functions to support the sharing of tools and data. Using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on AEGIS (Atomic Energy Grid Infrastructure), the Grid infrastructure for atomic energy research developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP. (author)

  5. Computer simulation of energy use, greenhouse gas emissions, and process economics of the fluid milk process.

    Science.gov (United States)

    Tomasula, P M; Yee, W C F; McAloon, A J; Nutter, D W; Bonnaillie, L M

    2013-05-01

Energy-savings measures have been implemented in fluid milk plants to lower energy costs and the energy-related carbon dioxide (CO2) emissions. Although these measures have resulted in reductions in steam, electricity, compressed air, and refrigeration use of up to 30%, a benchmarking framework is necessary to examine the implementation of process-specific measures that would lower energy use, costs, and CO2 emissions even further. In this study, using information provided by the dairy industry and equipment vendors, a customizable model of the fluid milk process was developed for use in process design software to benchmark the electrical and fuel energy consumption and CO2 emissions of current processes. It may also be used to test the feasibility of new processing concepts to lower energy and CO2 emissions with calculation of new capital and operating costs. The accuracy of the model in predicting total energy usage of the entire fluid milk process and the pasteurization step was validated using available literature and industry energy data. Computer simulation of small (40.0 million L/yr), medium (113.6 million L/yr), and large (227.1 million L/yr) processing plants predicted the carbon footprint of milk, defined as grams of CO2 equivalents (CO2e) per kilogram of packaged milk, to within 5% of the value of 96 g of CO2e/kg of packaged milk obtained in an industry-conducted life cycle assessment and also showed, in agreement with the same study, that plant size had no effect on the carbon footprint of milk but that larger plants were more cost effective in producing milk. Analysis of the pasteurization step showed that increasing the percentage regeneration of the pasteurizer from 90 to 96% would lower its thermal energy use by almost 60% and that implementation of partial homogenization would lower electrical energy use and CO2e emissions of homogenization by 82 and 5.4%, respectively. It was also demonstrated that implementation of steps to lower non
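
    The quoted pasteurizer figure follows from the fact that, to first order, the external heating duty of a regenerative pasteurizer scales with (1 - regeneration fraction): raising regeneration from 90% to 96% cuts the externally supplied temperature lift from 10% to 4% of the total. A short sketch of that arithmetic (the proportionality is the standard approximation, not the paper's full plant model):

    ```python
    def thermal_duty_fraction(regeneration: float) -> float:
        """External heating duty as a fraction of the no-regeneration duty.

        The incoming cold milk is preheated by the outgoing hot milk, so only
        (1 - regeneration) of the temperature lift needs external heat.
        """
        return 1.0 - regeneration

    saving = 1.0 - thermal_duty_fraction(0.96) / thermal_duty_fraction(0.90)
    print(f"thermal energy saving: {saving:.0%}")
    ```

    The result, a 60% reduction, matches the "almost 60%" reported by the simulation.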

  6. Single- versus dual-energy quantitative computed tomography for spinal densitometry in patients with rheumatoid arthritis

    International Nuclear Information System (INIS)

    Laan, R.F.J.M.; Erning, L.J.Th.O. van; Lemmens, J.A.M.; Putte, L.B.A. van de; Ruijs, S.H.J.; Riel, P.L.C.M. van

    1992-01-01

    Lumbar bone mineral density was measured by both single- and dual-energy quantitative computed tomography in 109 patients with rheumatoid arthritis. The results were corrected for the age-related increase in vertebral fat content by converting them to percentages of expected densities, using sex and energy-level specific regression equations obtained in a normal reference population. The percentages of expected density are approximately 10% lower in the single- than in the dual-energy mode, both in the patients with and without prednisone therapy. This difference is statistically highly significant, and is positively correlated with the duration of the disease and with the degree of radiological joint destruction. The data suggest that the vertebral fat content may be increased in patients with rheumatoid arthritis, as a consequence of disease-dependent mechanisms. (Author)

  7. Can a dual-energy computed tomography predict unsuitable stone components for extracorporeal shock wave lithotripsy?

    Science.gov (United States)

    Ahn, Sung Hoon; Oh, Tae Hoon; Seo, Ill Young

    2015-09-01

To assess the potential of dual-energy computed tomography (DECT) to identify urinary stone components, particularly uric acid and calcium oxalate monohydrate, which are unsuitable for extracorporeal shock wave lithotripsy (ESWL). This clinical study included 246 patients who underwent removal of urinary stones and an analysis of stone components between November 2009 and August 2013. All patients received preoperative DECT using two energy values (80 kVp and 140 kVp). Hounsfield units (HU) were measured and matched to the stone component. Significant differences in HU values were observed between uric acid and nonuric acid stones at the 80 and 140 kVp energy values (p<0.001). DECT improved the characterization of urinary stone components and was a useful method for identifying uric acid and calcium oxalate monohydrate stones, which are unsuitable for ESWL.

  8. An Energy Efficient Neuromorphic Computing System Using Real Time Sensing Method

    DEFF Research Database (Denmark)

    Farkhani, Hooman; Tohidi, Mohammad; Farkhani, Sadaf

    2017-01-01

In spintronic-based neuromorphic computing systems (NCS), the switching of the magnetic moment in a magnetic tunnel junction (MTJ) is used to mimic neuron firing. However, the stochastic switching behavior of the MTJ and the effect of process variations lead to extra stimulation time, which increases the energy consumption and delay of such NCSs. In this paper, a new real-time sensing (RTS) circuit is proposed to track the MTJ state and terminate the stimulation phase immediately after MTJ switching, significantly reducing the energy consumption and delay of the NCS. Simulation results using a 65-nm CMOS technology and a 40-nm MTJ technology confirm that the energy consumption of an RTS-based NCS is improved by 50% in comparison with a typical NCS. Moreover, the RTS circuit improves the overall speed of an NCS by 2.75x.

  9. Energy saving during bulb storage applying modeling with computational fluid dynamics (CFD)

    Energy Technology Data Exchange (ETDEWEB)

    Sapounas, A.A.; Campen, J.B.; Wildschut, J.; Bot, G.P. [Wageningen UR Greenhouse Horticutlure and Applied Plant Research, Wageningen (Netherlands)

    2010-07-01

    Tulip bulbs are stored in ventilated containers to avoid high ethylene concentration between the bulbs. A commercial computational fluid dynamics (CFD) code was used in this study to examine the distribution of air flow between the containers and the potential energy saving by applying simple solutions concerning the design of the air inlet area and the adjustment of the ventilation rate. The variation in container ventilation was calculated to be between 60 and 180 per cent, with 100 per cent being the average flow through the containers. Various improvement measures were examined. The study showed that 7 per cent energy can be saved by smoothing the sharp corners of the entrance channels of the ventilation wall. The most effective and simple improvement was to cover the open top containers. In this case, the variation was between 80 and 120 per cent. The energy saving was about 38 per cent by adjusting the overall ventilation to the container with the minimal acceptable air flow.

  10. Computational study of energy transfer in two-dimensional J-aggregates

    International Nuclear Information System (INIS)

    Gallos, Lazaros K.; Argyrakis, Panos; Lobanov, A.; Vitukhnovsky, A.

    2004-01-01

We perform a computational analysis of the intra- and interband energy transfer in two-dimensional J-aggregates. Each aggregate is represented as a two-dimensional array (LB-film or self-assembled film) of two kinds of cyanine dyes. We consider the J-aggregate whose J-band is located at a shorter wavelength to be a donor and an aggregate or a small impurity with a longer wavelength to be an acceptor. Light absorption in the blue wing of the donor aggregate gives rise to the population of its excitonic states. The depopulation of these states is possible by (a) radiative transfer to the ground state, (b) intraband energy transfer, and (c) interband energy transfer to the acceptor. We study the dependence of energy transfer on properties such as the energy gap, the diagonal disorder, and the exciton-phonon interaction strength. Experimentally observable parameters, such as the position and form of the luminescence spectrum, and the results of kinetic spectroscopy measurements strongly depend upon the density of states in the excitonic bands, the rates of energy exchange between states, and the oscillator strengths for luminescent transitions originating from these states.

  11. Cloud computing-based energy optimization control framework for plug-in hybrid electric bus

    International Nuclear Information System (INIS)

    Yang, Chao; Li, Liang; You, Sixiong; Yan, Bingjie; Du, Xian

    2017-01-01

Considering the complicated characteristics of traffic flow on a city bus route and the nonlinear vehicle dynamics, optimal energy management integrated with clustering and recognition of driving conditions in a plug-in hybrid electric bus is still a challenging problem. Motivated by this issue, this paper presents an innovative energy optimization control framework based on cloud computing for a plug-in hybrid electric bus. This framework comprises an offline part, which performs driving-condition clustering, and an online part, which performs energy management. In the offline part, utilizing the operating data transferred from a bus to the remote monitoring center, the K-means algorithm is adopted to cluster the driving conditions, and Markov probability transfer matrixes are then generated to predict the possible operating demand of the bus driver. In the online part, the current driving condition is identified in real time by a well-trained support vector machine, and Markov chains-based driving behaviors are accordingly selected. With these stochastic inputs, a stochastic receding horizon control method is adopted to obtain the optimized energy management of the hybrid powertrain. Simulations and a hardware-in-loop test are carried out with a real-world city bus route, and the results show that the presented strategy can greatly improve vehicle fuel economy; as the traffic flow data feedback increases, the fuel consumption of every plug-in hybrid electric bus running on a specific route tends to a stable minimum. - Highlights: • Cloud computing-based energy optimization control framework is proposed. • Driving cycles are clustered into 6 types by the K-means algorithm. • A support vector machine is employed for online recognition of the driving condition. • A stochastic receding horizon control-based energy management strategy is designed for the plug-in hybrid electric bus. • The proposed framework is verified by simulation and a hardware-in-loop test.
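
    The offline part described above, K-means clustering of driving conditions followed by generation of Markov probability transfer matrixes, can be illustrated with the second step: estimating a transition matrix from a sequence of cluster labels. The label sequence and function below are hypothetical, not the paper's data or code:

    ```python
    import numpy as np

    def transition_matrix(labels, n_states):
        """Estimate a row-stochastic Markov transition matrix by counting
        consecutive cluster-label pairs."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(labels[:-1], labels[1:]):
            counts[a, b] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        # Normalise each row to probabilities; uniform row if a state never occurs.
        return np.where(row_sums > 0, counts / np.maximum(row_sums, 1), 1.0 / n_states)

    # Made-up sequence of driving-condition clusters produced by K-means:
    labels = [0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1]
    P = transition_matrix(labels, n_states=3)
    print(P.round(2))
    ```

    In the framework, matrixes like `P` drive the stochastic prediction of operating demand that feeds the receding horizon controller.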

  12. Quasiclassical trajectory study of the energy transfer in CO2--rare gas systems

    International Nuclear Information System (INIS)

    Suzukawa, H.H. Jr.; Wolfsberg, M.; Thompson, D.L.

    1978-01-01

Computational methods are presented for the study of collisions between a linear, symmetric triatomic molecule and an atom by three-dimensional quasiclassical trajectory calculations. Application is made to the investigation of translational-to-rotational and translational-to-vibrational energy transfer in the systems CO2--Kr, CO2--Ar, and CO2--Ne. Potential-energy surfaces based on spectroscopic and molecular beam scattering data are used. In most of the calculations, the CO2 molecule is initially in the quantum mechanical zero-point vibrational state and in a rotational state picked from a Boltzmann distribution at 300 K. The energy transfer processes are investigated for translational energies ranging from 0.1 to 10 eV. Translational-to-rotational energy transfer is found to be the major process for CO2--rare gas collisions at these energies. Below 1 eV there is very little translational-to-vibrational energy transfer. The effects of changes in the internal energy of the molecule, in the masses of the collidants, and in the potential-energy parameters are studied in an attempt to gain understanding of the energy transfer processes.
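
    The initial-condition step mentioned above, picking a rotational state from a Boltzmann distribution at 300 K, can be sketched as follows. The rotational constant and the even-J restriction are standard CO2 values assumed here for illustration, not parameters quoted by the paper:

    ```python
    import math
    import random

    B_CM = 0.39    # CO2 rotational constant in cm^-1 (approximate literature value)
    KT_CM = 208.5  # k_B * T / (h c) at 300 K, in cm^-1

    def boltzmann_rotational_weights(j_max=120):
        """Normalised Boltzmann weights over CO2 rotational levels.

        Only even J are allowed for CO2 by nuclear-spin statistics; each level
        carries a (2J + 1) rotational degeneracy.
        """
        js = list(range(0, j_max + 1, 2))
        w = [(2 * j + 1) * math.exp(-B_CM * j * (j + 1) / KT_CM) for j in js]
        total = sum(w)
        return js, [x / total for x in w]

    def sample_rotational_state(rng=random):
        js, probs = boltzmann_rotational_weights()
        return rng.choices(js, weights=probs, k=1)[0]

    print(sample_rotational_state())
    ```

    At 300 K the distribution peaks near J = 16, so most trajectories start with substantial rotational excitation even before the collision.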

  13. Ab Initio and DFT Potential Energy Surfaces for Cyanuric Chloride Reactions

    National Research Council Canada - National Science Library

    Pai, Sharmila

    1998-01-01

... on the potential energy surface were calculated using the 6-31G and 6-311+G basis sets. DFT (B3LYP) geometry optimizations and zero-point corrections for critical points on the potential energy surface were calculated with the 6-31G, 6-311...

  14. Integrated energy efficient data centre management for green cloud computing : the FP7 GENiC project experience

    NARCIS (Netherlands)

    Torrens, J.I.; Mehta, D.; Zavrel, V.; Grimes, D.; Scherer, Th.; Birke, R.; Chen, L.; Rea, S.; Lopez, L.; Pages, E.; Pesch, D.

    2016-01-01

    Energy consumed by computation and cooling represents the greatest percentage of the average energy consumed in a data centre. As these two aspects are not always coordinated, energy consumption is not optimised. Data centres lack an integrated system that jointly optimises and controls all the

  15. Could running experience on SPMD computers contribute to the architectural choices for future dedicated computers for high energy physics simulation?

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.

    1989-01-01

    Results obtained on a strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers. (orig.)

  16. Interdisciplinary Team-Teaching Experience for a Computer and Nuclear Energy Course for Electrical and Computer Engineering Students

    Science.gov (United States)

    Kim, Charles; Jackson, Deborah; Keiller, Peter

    2016-01-01

    A new, interdisciplinary, team-taught course has been designed to educate students in Electrical and Computer Engineering (ECE) so that they can respond to global and urgent issues concerning computer control systems in nuclear power plants. This paper discusses our experience and assessment of the interdisciplinary computer and nuclear energy…

  18. Computer Simulation in Predicting Biochemical Processes and Energy Balance at WWTPs

    Science.gov (United States)

    Drewnowski, Jakub; Zaborowska, Ewa; Hernandez De Vega, Carmen

    2018-02-01

Nowadays, the use of mathematical models and computer simulation allows the analysis of many different technological solutions and the testing of various scenarios in a short time and at low financial cost, in order to simulate a scenario under typical conditions for the real system and help find the best solution in the design or operation process. The aim of the study was to evaluate different concepts of biochemical process and energy balance modelling using the simulation platform GPS-x and the comprehensive model Mantis2. The paper presents an example of the calibration and validation processes in the biological reactor, as well as scenarios showing the influence of operational parameters on the WWTP energy balance. The results of batch tests and a full-scale campaign obtained in earlier work were used to predict biochemical and operational parameters in a newly developed plant model. The model was extended with sludge treatment devices, including an anaerobic digester. Primary sludge removal efficiency was found to be a significant factor determining biogas production and thus renewable energy production in cogeneration. Water and wastewater utilities, which run and control WWTPs, are interested in optimizing the process in order to protect the environment, save their budget, and decrease pollutant emissions to water and air. In this context, computer simulation can be the easiest and a very useful tool to improve efficiency without interfering with the actual process performance.

  19. Computer Simulation in Predicting Biochemical Processes and Energy Balance at WWTPs

    Directory of Open Access Journals (Sweden)

    Drewnowski Jakub

    2018-01-01

    Full Text Available Nowadays, the use of mathematical models and computer simulation allow analysis of many different technological solutions as well as testing various scenarios in a short time and at low financial budget in order to simulate the scenario under typical conditions for the real system and help to find the best solution in design or operation process. The aim of the study was to evaluate different concepts of biochemical processes and energy balance modelling using a simulation platform GPS-x and a comprehensive model Mantis2. The paper presents the example of calibration and validation processes in the biological reactor as well as scenarios showing an influence of operational parameters on the WWTP energy balance. The results of batch tests and full-scale campaign obtained in the former work were used to predict biochemical and operational parameters in a newly developed plant model. The model was extended with sludge treatment devices, including anaerobic digester. Primary sludge removal efficiency was found as a significant factor determining biogas production and further renewable energy production in cogeneration. Water and wastewater utilities, which run and control WWTP, are interested in optimizing the process in order to save environment, their budget and decrease the pollutant emissions to water and air. In this context, computer simulation can be the easiest and very useful tool to improve the efficiency without interfering in the actual process performance.

  20. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    Science.gov (United States)

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  1. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Sonia Yassa

    2013-01-01

Full Text Available We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency; the use of multiple voltages involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
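
    The DVFS compromise both records describe can be made concrete with the usual first-order CMOS model, in which dynamic power scales roughly as V^2 * f while a task's runtime scales as 1/f, so energy per task scales with V^2. All numbers below are hypothetical operating points, not values from the study:

    ```python
    def task_energy_and_time(cycles, voltage, freq_hz, capacitance=1e-9):
        """First-order DVFS model: dynamic power ~ C * V^2 * f, runtime = cycles / f."""
        power = capacitance * voltage**2 * freq_hz   # watts
        time = cycles / freq_hz                      # seconds
        return power * time, time

    # The same 1e9-cycle task at two hypothetical voltage/frequency levels:
    e_hi, t_hi = task_energy_and_time(1e9, voltage=1.2, freq_hz=2.0e9)
    e_lo, t_lo = task_energy_and_time(1e9, voltage=0.9, freq_hz=1.2e9)
    print(f"high: {e_hi:.2f} J in {t_hi:.2f} s; low: {e_lo:.2f} J in {t_lo:.2f} s")
    ```

    Lowering the level saves energy but stretches the schedule, which is exactly the trade-off the multi-objective PSO scheduler must balance against QoS constraints.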

  2. Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency

    Science.gov (United States)

    Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark

    2010-04-01

    This paper discusses a student project at Kettering University focusing on the design and construction of an energy efficient greenhouse climate control system. In order to maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water filled base unit. The heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabView computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable sized unit with a 5' x 5' footprint. Control input sensors were temperature, water level, and humidity sensors and output control devices were fan actuating relays and water fill solenoid valves. A Graphical User Interface was developed to monitor the system, set control parameters, and to provide programmable data recording times and intervals.

  3. Contributing to global computing platform: gliding, tunneling standard services and high energy physics application

    International Nuclear Information System (INIS)

    Lodygensky, O.

    2006-09-01

Centralized computers have been replaced by 'client/server' distributed architectures, which are in turn in competition with new distributed systems known as 'peer to peer'. These new technologies are widely spread, and trade, industry, and the research world have understood the new goals involved and invest massively in these new technologies, known as the 'grid'. One of the fields concerned is computing, which is the subject of the work presented here. At the Paris Orsay University, a synergy emerged between the Computer Science Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) on grid infrastructure, opening new fields of investigation for the former and new high-performance computing perspectives for the latter. The work presented here is the result of this multi-disciplinary collaboration. It is based on XtremWeb, the LRI global computing platform. We first present a state of the art of large-scale distributed systems, their principles, and their service-based architecture. We then introduce XtremWeb and detail the modifications and improvements we had to specify and implement to achieve our goals. We present two different studies: first, interconnecting grids in order to generalize resource sharing; and second, enabling the use of legacy services on such platforms. We finally explain how a research community, such as the high-energy cosmic radiation detection community, can gain access to these services, and we detail the Monte Carlo simulation and data analysis processes over the grids. (author)

  4. Computer simulation study of the displacement threshold-energy surface in Cu

    International Nuclear Information System (INIS)

    King, W.E.; Benedek, R.

    1981-01-01

Computer simulations were performed using the molecular-dynamics technique to determine the directional dependence of the threshold energy for production of stable Frenkel pairs in copper. Sharp peaks were observed in the simulated threshold-energy surface in between the low-index directions. Threshold energies ranged from approximately 25 eV for directions near the low-index axes to 180 eV at the position of the peak between them. The general topographical features of the simulated threshold-energy surface are in good agreement with those determined from an analysis of recent experiments by King et al. on the basis of a Frenkel-pair resistivity rho_F = 2.85 x 10^-4 Ω cm. Evidence is presented in favor of this number as opposed to the usually assumed value, rho_F = 2.00 x 10^-4 Ω cm. The energy dependence of defect production in a number of directions was investigated to determine the importance of nonproductive events above threshold.

  5. Dynamic pricing based on a cloud computing framework to support the integration of renewable energy sources

    Directory of Open Access Journals (Sweden)

    Rajeev Thankappan Nair

    2014-12-01

Full Text Available Integration of renewable energy sources into the electric grid in the domestic sector results in bidirectional energy flow from the supply side of the consumer to the grid. Traditional pricing methods are difficult to implement in such a situation of bidirectional energy flow, and they face operational challenges in the application of price-based demand side management programmes because of the intermittent characteristics of renewable energy sources. In this study, a dynamic pricing method using real-time data based on a cloud computing framework is proposed to address these issues. The case study indicates that the dynamic pricing captures the variation of energy flow in the household. The dynamic renewable factor introduced in the model supports consumer-oriented pricing. A new method is presented in this study to determine the appropriate level of photovoltaic (PV) penetration in the distribution system based on the voltage stability aspect. The load flow study result for the electric grid in Kerala, India, indicates that the overvoltage caused by various PV penetration levels up to 33% is within the voltage limits defined for distribution feeders. The result justifies the selected level of penetration.

  6. Energy, economy and equity interactions in a CGE [Computable General Equilibrium] model for Pakistan

    International Nuclear Information System (INIS)

    Naqvi, Farzana

    1997-01-01

    In the last three decades, Computable General Equilibrium modelling has emerged as an established field of applied economics. This book presents a CGE model developed for Pakistan with the hope that it will lay down a foundation for the application of general equilibrium modelling to policy formation in Pakistan. As the country is being driven swiftly to become an open market economy, it becomes vital to find out the policy measures that can foster the objectives of economic planning, such as social equity, with the minimum loss of the efficiency gains from open market resource allocations. It is not possible to build a model for practical use that can do justice to all sectors of the economy in modelling their peculiar features. The CGE model developed in this book focuses on the energy sector. Energy is considered one of the basic needs and an essential input to economic growth. Hence, energy policy has multiple criteria to meet. In this book, a case study has been carried out to analyse energy pricing policy in Pakistan using this CGE model of energy, economy and equity interactions. Hence, the book also demonstrates how researchers can model the fine details of one sector given the core structure of a CGE model. (UK)

  7. Discovering Unique, Low-Energy Transition States Using Evolutionary Molecular Memetic Computing

    DEFF Research Database (Denmark)

    Ellabaan, Mostafa M Hashim; Ong, Y.S.; Handoko, S.D.

    2013-01-01

    In the last few decades, identification of transition states has experienced significant growth in research interest from various scientific communities. As per transition state theory, reaction paths and landscape analysis as well as many thermodynamic properties of biochemical systems can be accurately identified through the transition states. Transition states describe the paths of molecular systems in transiting across stable states. In this article, we present the discovery of unique, low-energy transition states and showcase the efficacy of their identification using the memetic computing paradigm under a Molecular Memetic Computing (MMC) framework. In essence, the MMC is equipped with the tree-based representation of non-cyclic molecules and the covalent-bond-driven evolutionary operators, in addition to the typical backbone of memetic algorithms. Herein, we employ a genetic algorithm...

  8. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    Science.gov (United States)

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating over low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
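The flavor of such networks can be sketched with the locally competitive algorithm (LCA), a close relative of the dynamics described above: analog internal variables follow gradient-descent-like leaky integration while outputs are thresholded. This is an illustrative stand-in, not the paper's HDA; the dictionary, step size, and threshold are invented.

```python
import numpy as np

# LCA-style sparse coding sketch: u integrates the residual-driven input,
# a is the thresholded (spiking-like) output. All parameters are assumptions.

rng = np.random.default_rng(0)
Phi = rng.standard_normal((16, 32))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary columns
x = Phi @ (np.eye(32)[0] * 3.0)             # signal built from one atom

u = np.zeros(32)                            # internal (membrane-like) variables
lam, dt = 0.5, 0.1
for _ in range(500):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)    # soft threshold
    # du/dt = Phi^T x - u - (Phi^T Phi - I) a, written via the residual:
    u += dt * (Phi.T @ (x - Phi @ a) - (u - a))

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
error = np.linalg.norm(x - Phi @ a) / np.linalg.norm(x)  # relative residual
```

At the fixed point this solves a lasso-type sparse coding problem; the dominant coefficient recovers the generating atom.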

  9. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Vinod [Univ. of Texas, El Paso, TX (United States)

    2017-05-05

    High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank nanofluidized molten salt based thermocline TES system under various concentrations and sizes of the particle suspension. Our objective was to utilize sensible heat storage that operates with the least irreversibility by exploiting nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  10. Expedited Holonomic Quantum Computation via Net Zero-Energy-Cost Control in Decoherence-Free Subspace.

    Science.gov (United States)

    Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao

    2016-11-25

    Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.

  11. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches to making existing High Throughput Computing applications common in High Energy Physics work on cloud-provided resources, as well as opening the possibility of running new applications. The work is divided into two parts: firstly we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  12. Computational and experimental optimization of the exhaust air energy recovery wind turbine generator

    International Nuclear Information System (INIS)

    Tabatabaeikia, Seyedsaeed; Ghazali, Nik Nazri Bin Nik; Chong, Wen Tong; Shahizare, Behzad; Izadyar, Nima; Esmaeilzadeh, Alireza; Fazlizan, Ahmad

    2016-01-01

    Highlights: • Studying the viability of harvesting wasted energy by an exhaust air recovery generator. • Optimizing the design using response surface methodology. • Validation of optimization and computation results by performing experimental tests. • Investigation of flow behaviour using computational fluid dynamic simulations. • Performing the technical and economic study of the exhaust air recovery generator. - Abstract: This paper studies the optimization of an innovative exhaust air recovery wind turbine generator through computational fluid dynamic (CFD) simulations. The optimization strategy aims to optimize the overall system energy generation and simultaneously guarantee that it does not violate the cooling tower performance in terms of decreasing airflow intake or increasing fan motor power consumption. The wind turbine rotor position, modified diffuser plates, and the introduction of separator plates to the design are considered as the variable factors for the optimization. The generated power coefficient is selected as the optimization objective. Unlike most previous optimizations in the field of wind turbines, in this study response surface methodology (RSM), an analytical optimization procedure based on multivariate statistical techniques, has been utilised. A comprehensive study on CFD parameters including the mesh resolution, the turbulence model and transient time step values is presented. The system is simulated using the SST k–ω turbulence model and then both computational and optimization results are validated by experimental data obtained in the laboratory. Results show that the optimization strategy can improve the wind turbine generated power by 48.6% compared to the baseline design. Meanwhile, it is able to enhance the fan intake airflow rate and decrease fan motor power consumption. The obtained optimization equations are also validated by both CFD and experimental results and a negligible deviation in the range of 6–8.5% is observed.
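The core RSM step can be illustrated in a few lines: fit a second-order polynomial to (factor, response) observations and solve for the stationary point of the fitted surface. The two coded factors and the synthetic "power coefficient" response below are invented for illustration, not the paper's data.

```python
import numpy as np

# Response-surface sketch: fit a full quadratic model in two design factors
# (e.g. rotor position and diffuser angle, in coded units) and locate the
# stationary point. The synthetic response peaks at (0.2, -0.1) by design.

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], float)
y = 0.45 - (X[:, 0] - 0.2) ** 2 - 0.5 * (X[:, 1] + 0.1) ** 2

# Design matrix for the model 1, x1, x2, x1^2, x2^2, x1*x2.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: solve grad(fitted quadratic) = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
x_opt = np.linalg.solve(H, g)   # recovers (0.2, -0.1)
```

In the paper the responses would come from CFD runs or experiments rather than a closed-form function, but the fitting and stationary-point algebra are the same.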

  13. Proceedings of the GPU computing in high-energy physics conference 2014 GPUHEP2014

    International Nuclear Information System (INIS)

    Bonati, Claudio; D'Elia, Massimo; Lamanna, Gianluca; Sozzi, Marco

    2015-06-01

    The International Conference on GPUs in High-Energy Physics was held from September 10 to 12, 2014 at the University of Pisa, Italy. It represented a larger scale follow-up to a set of workshops which indicated the rising interest of the HEP community, experimentalists and theorists alike, towards the use of inexpensive and massively parallel computing devices, for very diverse purposes. The conference was organized in plenary sessions of invited and contributed talks, and poster presentations on the following topics: - GPUs in triggering applications - Low-level trigger systems based on GPUs - Use of GPUs in high-level trigger systems - GPUs in tracking and vertexing - Challenges for triggers in future HEP experiments - Reconstruction and Monte Carlo software on GPUs - Software frameworks and tools for GPU code integration - Hard real-time use of GPUs - Lattice QCD simulation - GPUs in phenomenology - GPUs for medical imaging purposes - GPUs in neutron and photon science - Massively parallel computations in HEP - Code parallelization. "GPU computing in High-Energy Physics" attracted 78 registrants to Pisa. The 38 oral presentations included talks on specific topics in experimental and theoretical applications of GPUs, as well as review talks on applications and technology. 5 posters were also presented, and were introduced by a short plenary oral illustration. A company exhibition was hosted on site. The conference consisted of 12 plenary sessions, together with a social program which included a banquet and guided excursions around Pisa. It was overall an enjoyable experience, offering an opportunity to share ideas and opinions, and getting updated on other participants' work in this emerging field, as well as being a valuable introduction for newcomers interested to learn more about the use of GPUs as accelerators for scientific progress on the elementary constituents of matter and energy.

  14. Proceedings of the GPU computing in high-energy physics conference 2014 GPUHEP2014

    Energy Technology Data Exchange (ETDEWEB)

    Bonati, Claudio; D' Elia, Massimo; Lamanna, Gianluca; Sozzi, Marco (eds.)

    2015-06-15

    The International Conference on GPUs in High-Energy Physics was held from September 10 to 12, 2014 at the University of Pisa, Italy. It represented a larger scale follow-up to a set of workshops which indicated the rising interest of the HEP community, experimentalists and theorists alike, towards the use of inexpensive and massively parallel computing devices, for very diverse purposes. The conference was organized in plenary sessions of invited and contributed talks, and poster presentations on the following topics: - GPUs in triggering applications - Low-level trigger systems based on GPUs - Use of GPUs in high-level trigger systems - GPUs in tracking and vertexing - Challenges for triggers in future HEP experiments - Reconstruction and Monte Carlo software on GPUs - Software frameworks and tools for GPU code integration - Hard real-time use of GPUs - Lattice QCD simulation - GPUs in phenomenology - GPUs for medical imaging purposes - GPUs in neutron and photon science - Massively parallel computations in HEP - Code parallelization. "GPU computing in High-Energy Physics" attracted 78 registrants to Pisa. The 38 oral presentations included talks on specific topics in experimental and theoretical applications of GPUs, as well as review talks on applications and technology. 5 posters were also presented, and were introduced by a short plenary oral illustration. A company exhibition was hosted on site. The conference consisted of 12 plenary sessions, together with a social program which included a banquet and guided excursions around Pisa. It was overall an enjoyable experience, offering an opportunity to share ideas and opinions, and getting updated on other participants' work in this emerging field, as well as being a valuable introduction for newcomers interested to learn more about the use of GPUs as accelerators for scientific progress on the elementary constituents of matter and energy.

  15. Dual energy computed tomography. Objective dosimetry, image quality and dose efficiency; Dual Energy Computertomographie. Objektive Dosimetrie, Bildqualitaet und Dosiseffizienz

    Energy Technology Data Exchange (ETDEWEB)

    Schenzle, Jan Christian

    2012-05-24

    The aim of the present studies was an objective assessment of newly developed methods of modern imaging techniques concerning radiation exposure of the human body. Dual Source computed tomography has opened up a broad variety of new diagnostic possibilities. Using two X-ray sources with an angular offset of about 90° in a single gantry, images with a high spatiotemporal resolution can be achieved, for example in patients suffering acute chest pain. The Dual Energy Mode is based on the acquisition of two data sets with two different X-ray spectra which make it possible to identify certain substances with different spectral properties like bone, iodine or other organic material. [6-17] There is no doubt that this technical innovation will make an essential contribution to clinical diagnostics, but it remained to be proven that there is no additional dose. An anthropomorphic phantom and thermoluminescent detectors were used to objectively quantify the radiation dose resulting from the different examination protocols. For Dual Energy CT examinations, it was possible to verify dose neutrality in combination with comparable image quality and even improved contrast-to-noise ratio. Nowadays, this protocol is used in routine clinical examinations, e.g. for the evaluation of pulmonary embolism. A milestone in dose reduction was reached with modern triple rule-out protocols. Causes of acute chest pain such as heart attack, pulmonary embolism or aortic rupture can be differentiated in a single examination with high precision and a fraction of the dose compared to conventional methods.
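The material-identification principle behind the Dual Energy Mode can be sketched as a two-material decomposition: log attenuation measured under two tube spectra is solved as a linear system for equivalent thicknesses of two basis materials. The attenuation coefficients below are invented placeholders, not calibrated values for any scanner.

```python
import numpy as np

# Two-material (water/iodine) decomposition sketch. Rows of MU correspond to
# the low-kV and high-kV spectra; columns to the basis materials (1/cm).
# All mu values are illustrative assumptions.

MU = np.array([[0.25, 8.0],
               [0.20, 3.0]])

def decompose(log_att_low, log_att_high):
    """Return (water_cm, iodine_cm) from two log-attenuation measurements."""
    return np.linalg.solve(MU, [log_att_low, log_att_high])

# Forward-simulate a voxel with 20 cm water and 0.05 cm iodine, then invert.
truth = np.array([20.0, 0.05])
measured = MU @ truth
water, iodine = decompose(*measured)
```

Because iodine's attenuation changes much more steeply between the two spectra than water's, the system is well conditioned and the iodine content is recovered, which is what lets Dual Energy CT separate contrast agent from tissue.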

  16. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute-intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD
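The "bandwidth limited" claim can be checked with a back-of-envelope roofline estimate for a double-precision Wilson D (dslash) kernel. The per-site flop and byte counts below are the commonly quoted values assuming no caching of neighbour spinors, and the hardware figures are the HD 7970's nominal peaks; treat all of them as order-of-magnitude assumptions.

```python
# Roofline sketch for double-precision Wilson dslash on an AMD Radeon HD 7970.
# Per site: 8 neighbour spinors (24 reals each), 8 gauge links (18 reals
# each), 1 result spinor stored; 8 bytes per double. Flop count is the
# commonly quoted 1368 flops/site. All figures are rough assumptions.

FLOPS_PER_SITE = 1368
BYTES_PER_SITE = (8 * 24 + 8 * 18 + 24) * 8          # = 2880 bytes
arithmetic_intensity = FLOPS_PER_SITE / BYTES_PER_SITE  # flop/byte

PEAK_BW = 264e9          # bytes/s, HD 7970 peak memory bandwidth
PEAK_FLOPS = 947e9       # flop/s, HD 7970 double-precision peak

bound = min(PEAK_FLOPS, arithmetic_intensity * PEAK_BW)
# The bandwidth term wins (~125 GFLOPS << 947 GFLOPS peak), consistent with
# the ~120 GFLOPS reported above: the kernel is memory bound.
```

The estimate shows why the thesis focuses on optimization techniques for bandwidth-limited codes rather than raw arithmetic throughput.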

  17. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D - the most compute-intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  18. New energy technology

    Energy Technology Data Exchange (ETDEWEB)

    Michrowski, A [ed.

    1990-01-01

    A conference was held to exchange information on energy systems which draw on natural supply, do not release residue, are inexpensive, and are universally applicable. Some of these systems are still in the theoretical stage and derive from research on the vacuum of space-time, magnetic fields, and ether physics. Papers were presented on fundamentals of zero-point energy or electrogravitational systems, propulsion systems relying on inertial forces, solar collectors, improved internal combustion engines and electric motors, solar cells, aneutronic (nonradioactive) nuclear power development, charged-aerosol air purifiers, and wireless transmission of electrical power. Separate abstracts have been prepared for 16 papers from this conference.

  19. Statistical Multiplexing of Computations in C-RAN with Tradeoffs in Latency and Energy

    DEFF Research Database (Denmark)

    Kalør, Anders Ellersgaard; Agurto Agurto, Mauricio Ignacio; Pratas, Nuno

    2017-01-01

    frame duration, then this may result in additional access latency and limit the energy savings. In this paper we investigate the tradeoff by considering two extreme time-scales for the resource multiplexing: (i) long-term, where the computational resources are adapted over periods much larger than the access frame durations; (ii) short-term, where the adaptation is below the access frame duration. We develop a general C-RAN queuing model that models the access latency and show, for Poisson arrivals, that long-term multiplexing achieves savings comparable to short-term multiplexing, while offering low...
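The tradeoff can be illustrated with a toy slotted-queue simulation: short-term multiplexing provisions capacity per frame (no queueing, energy tracks the load exactly), while long-term multiplexing fixes capacity over a long window (cheaper to manage, but excess jobs wait). The arrival rate, capacity, and queueing proxy below are illustrative assumptions, not the paper's model.

```python
import random

# Toy C-RAN multiplexing sketch: Poisson job arrivals per access frame.
random.seed(1)
FRAMES, LAM = 10_000, 5.0

def poisson(lam):
    # Knuth's method; adequate for small lam.
    limit, k, p = 2.718281828459045 ** -lam, 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

arrivals = [poisson(LAM) for _ in range(FRAMES)]

# Short-term: capacity tracks each frame's load exactly (no extra latency).
short_energy = sum(arrivals)

# Long-term: fixed capacity per frame over the whole window; overflow queues.
cap = 6
long_energy, backlog, delayed = cap * FRAMES, 0, 0
for a in arrivals:
    backlog += a
    backlog -= min(backlog, cap)
    delayed += backlog            # jobs still waiting after this frame

savings_ratio = short_energy / long_energy   # < 1: short-term uses less energy
```

With mean load 5 and fixed capacity 6, short-term provisioning uses roughly 5/6 of the long-term energy, but the long-term scheme occasionally delays jobs, which is the latency cost the paper quantifies.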

  20. A Computer Program for Modeling the Conversion of Organic Waste to Energy

    Directory of Open Access Journals (Sweden)

    Pragasen Pillay

    2011-11-01

    Full Text Available This paper presents a tool for the analysis of conversion of organic waste into energy. The tool is a program that uses waste characterization parameters and mass flow rates at each stage of the waste treatment process to predict the given products. The specific waste treatment process analysed in this paper is anaerobic digestion. The different waste treatment stages of the anaerobic digestion process are: conditioning of input waste, secondary treatment, drying of sludge, conditioning of digestate, treatment of digestate, storage of liquid and solid effluent, disposal of liquid and solid effluents, purification, utilization and storage of combustible gas. The program uses mass balance equations to compute the amount of CH4, NH3, CO2 and H2S produced from anaerobic digestion of organic waste, and hence the energy available. Case studies are also presented.
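A minimal version of the kind of mass balance the program applies is the Buswell equation, which estimates CH4 and CO2 yields from the elemental composition CcHhOo of the digested organic matter. This sketch omits the nitrogen and sulfur terms (NH3, H2S) that the full program accounts for, and the heating value is a rough literature figure.

```python
# Buswell-equation sketch for anaerobic digestion of a substrate CcHhOo:
#   CcHhOo + (c - h/4 - o/2) H2O -> (c/2 + h/8 - o/4) CH4 + (c/2 - h/8 + o/4) CO2

def buswell(c, h, o):
    """Moles of CH4 and CO2 produced per mole of substrate CcHhOo."""
    ch4 = c / 2 + h / 8 - o / 4
    co2 = c / 2 - h / 8 + o / 4
    return ch4, co2

# Example: glucose C6H12O6 digests to 3 CH4 + 3 CO2.
ch4, co2 = buswell(6, 12, 6)

# Energy available, assuming a lower heating value of ~802 kJ per mol CH4.
energy_kj_per_mol_substrate = 802 * ch4
```

Scaling by the mass flow rate at the digester stage then gives the plant-level methane and energy output, which is what the program computes across all treatment stages.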

  1. Segmentation of Synchrotron Radiation micro-Computed Tomography Images using Energy Minimization via Graph Cuts

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Anderson A.M. [Federal University of Western Para (Brazil); Physics Institute, Rio de Janeiro State University (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Almeida, Andre P. de, E-mail: apalmeid@gmail.com [Physics Institute, Rio de Janeiro State University (Brazil); Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro (Brazil); Almeida, Carlos E. de [Radiological Sciences Laboratory, Rio de Janeiro State University (Brazil); Barroso, Regina C. [Physics Institute, Rio de Janeiro State University (Brazil)

    2012-07-15

    The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm, presenting an enormous potential of application in SR-μCT imaging. We describe the application of the algorithm EMvGC with swap move for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.

  2. Segmentation of Synchrotron Radiation micro-Computed Tomography Images using Energy Minimization via Graph Cuts

    International Nuclear Information System (INIS)

    Meneses, Anderson A.M.; Giusti, Alessandro; Almeida, André P. de; Nogueira, Liebert; Braz, Delson; Almeida, Carlos E. de; Barroso, Regina C.

    2012-01-01

    The research on applications of segmentation algorithms to Synchrotron Radiation X-Ray micro-Computed Tomography (SR-μCT) is an open problem, due to the interesting and well-known characteristics of SR images, such as the phase contrast effect. The Energy Minimization via Graph Cuts (EMvGC) algorithm represents a state-of-the-art segmentation algorithm, presenting an enormous potential of application in SR-μCT imaging. We describe the application of the algorithm EMvGC with swap move for the segmentation of bone images acquired at the ELETTRA Laboratory (Trieste, Italy). - Highlights: ► Microstructures of Wistar rats' ribs are investigated with Synchrotron Radiation μCT imaging. ► The present work is part of a research on the effects of radiotherapy on the thoracic region. ► Application of the Energy Minimization via Graph Cuts algorithm for segmentation is described.

  3. An original piecewise model for computing energy expenditure from accelerometer and heart rate signals.

    Science.gov (United States)

    Romero-Ugalde, Hector M; Garnotel, M; Doron, M; Jallon, P; Charpentier, G; Franc, S; Huneker, E; Simon, C; Bonnet, S

    2017-07-28

    Activity energy expenditure (EE) plays an important role in healthcare; therefore, accurate EE measures are required. Currently available reference EE acquisition methods, such as doubly labeled water and indirect calorimetry, are complex, expensive, uncomfortable, and/or difficult to apply in real time. To overcome these drawbacks, the goal of this paper is to propose a model for computing EE in real time (minute-by-minute) from heart rate and accelerometer signals. The proposed model, an original branched model, uses heart rate signals for computing EE during moderate to vigorous physical activities and a linear combination of heart rate and counts per minute for computing EE during light to moderate physical activities. Model parameters were estimated from a given data set composed of 53 subjects performing 25 different physical activities (light-, moderate- and vigorous-intensity), and validated using leave-one-subject-out. A different database (semi-controlled in-city circuit) was used in order to validate the versatility of the proposed model. Comparisons are done versus linear and nonlinear models, which are also used for computing EE from accelerometer and/or HR signals. The proposed piecewise model leads to more accurate EE estimations ([Formula: see text], [Formula: see text] and [Formula: see text] J kg⁻¹ min⁻¹ and [Formula: see text], [Formula: see text], and [Formula: see text] J kg⁻¹ min⁻¹ on each validation database). This original approach, which is more comfortable and less expensive than the reference methods, allows accurate EE estimations, in real time (minute-by-minute), during a large variety of physical activities. Therefore, this model may be used in applications such as computing the time that a given subject spent on light-intensity physical activities and on moderate to vigorous physical activities (binary classification accuracy of 0.8155).
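The branched structure described above can be sketched as a two-branch function: above an intensity threshold, EE is predicted from heart rate alone; below it, from a linear combination of heart rate and accelerometer counts. Every coefficient, threshold, and unit here is an invented placeholder, not the fitted parameters from the paper.

```python
# Minimal branched EE model sketch. All coefficients are assumptions.

def energy_expenditure(hr_bpm, counts_per_min,
                       hr_threshold=110.0,
                       a_hr=1.8, b_hr=-120.0,               # HR-only branch
                       a_mix=0.6, b_mix=0.02, c_mix=10.0):  # mixed branch
    """Return EE in J kg^-1 min^-1 (illustrative units and coefficients)."""
    if hr_bpm >= hr_threshold:
        # Moderate-to-vigorous intensity: heart rate is the better predictor.
        return a_hr * hr_bpm + b_hr
    # Light-to-moderate intensity: combine heart rate and counts per minute.
    return a_mix * hr_bpm + b_mix * counts_per_min + c_mix

light = energy_expenditure(hr_bpm=85, counts_per_min=1500)
vigorous = energy_expenditure(hr_bpm=150, counts_per_min=9000)
```

In the paper the branch parameters are estimated from the 53-subject data set and validated leave-one-subject-out; the sketch only shows the minute-by-minute evaluation path.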

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  5. Comparison of energy expenditure in adolescents when playing new generation and sedentary computer games: cross sectional study

    Science.gov (United States)

    2007-01-01

    Objective To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Design Cross sectional comparison of four computer games. Setting Research laboratories. Participants Six boys and five girls aged 13-15 years. Procedure Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Main outcome measure Predicted energy expenditure, compared using repeated measures analysis of variance. Results Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Conclusions Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children. PMID:18156227
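The headline comparison reduces to a paired analysis: per-subject difference in predicted EE between an active and a sedentary game, summarized by a mean and a t-based 95% confidence interval. The numbers below are invented stand-ins (11 subjects, to mirror the study size), not the study's data.

```python
import math
import statistics

# Paired mean-difference sketch with a 95% t-interval (invented data).
active    = [180, 205, 195, 210, 188, 199, 202, 191, 185, 207, 198]
sedentary = [120, 131, 118, 140, 122, 127, 133, 125, 119, 138, 129]
diff = [a - s for a, s in zip(active, sedentary)]

n = len(diff)
mean = statistics.fmean(diff)
sd = statistics.stdev(diff)
t_crit = 2.228                       # t(0.975, df = 10), hard-coded for n = 11
half_width = t_crit * sd / math.sqrt(n)
ci = (mean - half_width, mean + half_width)
```

A repeated-measures ANOVA, as used in the study, generalizes this pairwise comparison across all four games at once.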

  6. A control method for agricultural greenhouses heating based on computational fluid dynamics and energy prediction model

    International Nuclear Information System (INIS)

    Chen, Jiaoliao; Xu, Fang; Tan, Dapeng; Shen, Zheng; Zhang, Libin; Ai, Qinglin

    2015-01-01

    Highlights: • A novel control method for the heating greenhouse with SWSHPS is proposed. • CFD is employed to predict the priorities of FCU loops for thermal performance. • EPM acts as an on-line tool to predict the total energy demand of the greenhouse. • The CFD–EPM-based method can save energy and improve control accuracy. • The energy savings potential is between 8.7% and 15.1%. - Abstract: As energy for heating is one of the main production costs, many efforts have been made to reduce the energy consumption of agricultural greenhouses. Herein, a novel control method for greenhouse heating using computational fluid dynamics (CFD) and an energy prediction model (EPM) is proposed for energy savings and system performance. Based on the low-Reynolds-number k–ε turbulence principle, a CFD model of the heated greenhouse is developed, applying the discrete ordinates model for the radiative heat transfers and a porous-medium approach for the plants, considering their sensible and latent heat exchanges. The CFD simulations have been validated and used to analyze the greenhouse thermal performance and the priority of fan coil unit (FCU) loops under various heating conditions. According to the heating efficiency and temperature uniformity, the priorities of each FCU loop can be predicted to generate a database of priorities for the control system. The EPM is built up based on the thermal balance, and used to predict and optimize the energy demand of the greenhouse online. Combined with the priorities of FCU loops from offline CFD simulations, we have developed the CFD–EPM-based heating control system for a greenhouse with a surface water source heat pump system (SWSHPS). Compared with the conventional multi-zone independent control (CMIC) method, the energy savings potential is between 8.7% and 15.1%, and the control temperature deviation is decreased to between 0.1 °C and 0.6 °C in the investigated greenhouse. These results show the CFD–EPM-based method can improve system

  7. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    OpenAIRE

    Buyya, Rajkumar; Beloglazov, Anton; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational cos...

  8. Energy-Efficient Caching for Mobile Edge Computing in 5G Networks

    Directory of Open Access Journals (Sweden)

    Zhaohui Luo

    2017-05-01

    Full Text Available Mobile Edge Computing (MEC), considered a promising and emerging paradigm for providing caching capabilities in proximity to mobile devices in 5G networks, enables fast delivery of popular content for delay-sensitive applications within the limited backhaul capacity of mobile networks. Most existing studies focus on cache allocation, mechanism design and coding design for caching. However, a grid power supply delivering fixed power uninterruptedly to a MEC server (MECS) is costly and even infeasible, especially when the load changes dynamically over time. In this paper, we investigate the energy consumption problem of the MECS in cellular networks. Given average download latency constraints, we take the MECS's energy consumption, backhaul capacities and content popularity distributions into account and formulate a joint optimization framework to minimize the energy consumption of the system. As this is a complicated joint optimization problem, we apply a genetic algorithm to solve it. Simulation results show that the proposed solution can effectively determine a near-optimal caching placement and obtain better performance in terms of energy efficiency gains compared with conventional caching placement strategies. In particular, it is shown that the proposed scheme can significantly reduce the joint cost when backhaul capacity is low.
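As a sketch of the genetic-algorithm approach described above, the toy below evolves a binary caching vector under a hard capacity constraint, scoring placements by popularity-weighted miss cost. The cost model, unit content sizes, and GA parameters are invented simplifications, not the paper's formulation:

```python
import random

def fitness(bits, sizes, popularity, capacity):
    """Negative miss cost: content not cached must be fetched over the backhaul."""
    used = sum(s for s, b in zip(sizes, bits) if b)
    if used > capacity:                       # infeasible placement
        return float("-inf")
    hit = sum(p for p, b in zip(popularity, bits) if b)
    return -(1.0 - hit)                       # maximize hit rate = minimize misses

def ga_cache_placement(sizes, popularity, capacity,
                       pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    n = len(sizes)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    score = lambda bits: fitness(bits, sizes, popularity, capacity)
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[:pop_size // 2]       # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1      # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)
```

With unit-size contents and capacity for three items, the search converges on caching the most popular contents.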

  9. Energy from sugarcane bagasse under electricity rationing in Brazil: a computable general equilibrium model

    International Nuclear Information System (INIS)

    Scaramucci, Jose A.; Perin, Clovis; Pulino, Petronio; Bordoni, Orlando F.J.G.; Cunha, Marcelo P. da; Cortez, Luis A.B.

    2006-01-01

    In the midst of the institutional reforms of the Brazilian electric sector initiated in the 1990s, a serious electricity shortage crisis developed in 2001. As an alternative to blackouts, the government instituted an emergency plan aimed at reducing electricity consumption. From June 2001 to February 2002, Brazilians were compelled to curtail electricity use by 20%. Since the late 1990s, but especially after the electricity crisis, energy policy in Brazil has been directed towards increasing thermoelectricity supply and promoting further gains in energy conservation. Two main issues are addressed here. Firstly, we estimate the economic impacts of constraining the supply of electric energy in Brazil. Secondly, we investigate the possible penetration of electricity generated from sugarcane bagasse. A computable general equilibrium (CGE) model is used. The traditional sector of electricity and the remainder of the economy are characterized by a stylized top-down representation as nested CES (constant elasticity of substitution) production functions. The electricity production from sugarcane bagasse is described through a bottom-up activity analysis, with a detailed representation of the required inputs based on engineering studies. The model constructed is used to study the effects of the electricity shortage in the preexisting sector through prices, production and income changes. It is shown that installing capacity to generate electricity surpluses by the sugarcane agroindustrial system could ease the economic impacts of an electric energy shortage crisis on the gross domestic product (GDP)

  10. GAMUT: A computer code for γ-ray energy and intensity analysis

    International Nuclear Information System (INIS)

    Firestone, R.B.

    1991-05-01

    GAMUT is a computer code to analyze γ-ray energies and intensities. It performs a linear least-squares fit of measured γ-ray energies from one or more experiments to the level scheme. GAMUT also performs a non-linear least-squares analysis of branching intensities. For both energy and intensity data, a statistical chi-square analysis is performed with an iterative uncertainty adjustment. The uncertainties of outlying measured values and of sets of measurements with χ²/f > 1 are increased, and the calculation is repeated until the uncertainties are consistent with the fitted values. GAMUT accepts input from standard or special-format ENSDF data sets. The special-format ENSDF data sets were designed to permit analysis of more than one set of measurements associated with a single ENSDF data set. GAMUT prepares a standard ENSDF-format output data set containing the adjusted values. If more than one input ENSDF data set is provided, GAMUT creates an ADOPTED LEVELS, GAMMAS data set containing the adjusted level and γ-ray energies and branching intensities from each level, normalized to 100 for the strongest γ-ray. GAMUT also provides a summary of the results and an extensive log of the iterative analysis. GAMUT is interactive, prompting the user for input and output file names and for default calculation options. This version of GAMUT has adjustable dimensions so that any maximum number of data sets, levels, and γ-rays can be established at the time of implementation. 6 refs
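The iterative uncertainty adjustment described above can be sketched as follows. This pure-Python toy inflates all uncertainties by a common factor whenever χ²/f exceeds 1 and refits until consistency; GAMUT itself adjusts outlying measurements and data sets selectively, so this global version is an assumed simplification:

```python
import math

def adjusted_weighted_mean(values, sigmas, tol=1e-6, max_iter=50):
    """Weighted mean with GAMUT-style iterative uncertainty adjustment:
    while chi^2 per degree of freedom exceeds 1, scale all uncertainties
    by sqrt(chi^2/f) and refit, until data and fit are consistent."""
    sig = list(sigmas)
    f = len(values) - 1                      # degrees of freedom for one fitted value
    for _ in range(max_iter):
        w = [1.0 / s ** 2 for s in sig]
        mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
        chi2 = sum(((v - mean) / s) ** 2 for v, s in zip(values, sig))
        if f <= 0 or chi2 / f <= 1.0 + tol:
            break
        scale = math.sqrt(chi2 / f)          # inflate and refit
        sig = [s * scale for s in sig]
    return mean, math.sqrt(1.0 / sum(1.0 / s ** 2 for s in sig))
```

For two discrepant measurements of the same γ-ray energy, the adjusted uncertainty of the mean grows until it covers the scatter of the inputs.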

  11. Initial experience with visualizing hand and foot tendons by dual-energy computed tomography.

    Science.gov (United States)

    Deng, Kai; Sun, Cong; Liu, Cheng; Ma, Rui

    2009-01-01

    To assess the feasibility of visualizing hand and foot tendons by dual-energy computed tomography (CT). Twenty patients who suffered from hand or foot pain were scanned on a dual-source CT scanner (Definition, Forchheim, Germany) in dual-energy mode at tube voltages of 140 and 80 kV and a corresponding ratio of 1:4 between tube currents. The reconstructed images were postprocessed by volume rendering techniques (VRT) and multiplanar reconstruction (MPR). All of the suspected lesions were confirmed by surgery or follow-up studies. Twelve patients (24 hands and feet in total) were found to be normal and the other eight patients (nine hands and feet in total) were found to be abnormal. Dual-energy techniques are very useful in visualizing tendons of the hands and feet, such as the flexor pollicis longus tendon, flexor digitorum superficialis/profundus tendons, Achilles tendon, extensor hallucis longus tendon, and extensor digitorum longus tendon. The technique can depict the whole shape of the tendons and their fixation points clearly. The peroneus longus tendon in the sole of the foot was not displayed very well. The distal ends of the metacarpophalangeal joints with the extensor digitorum tendon and extensor pollicis longus tendon were poorly shown. Tendon lesions such as thickening and adherence were also shown clearly. Dual-energy CT offers a new method to visualize tendons of the hand and foot. It can clearly display both anatomical structures and pathologic changes of hand and foot tendons.

  12. Projection decomposition algorithm for dual-energy computed tomography via deep neural network.

    Science.gov (United States)

    Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei

    2018-03-15

    Dual-energy computed tomography (DECT) has been widely used to improve identification of substances from different spectral information. Decomposition of mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimation of the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net, with a two-layer structure, fits the nonlinear transform between energy projection and basis material thickness. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases mixed with photon noise. Moreover, the DNN costs only 0.4 s to generate a decomposition solution at a 360 × 512 scale, which is about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solve the nonlinear problem in DECT.
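For context, a minimal baseline for the decomposition function is a least-squares calibration fit. The sketch below assumes an invented mono-energetic forward model, under which the projection-to-thickness mapping is linear and a three-coefficient fit recovers it exactly; with polychromatic spectra the mapping becomes nonlinear, which is what motivates the DNN:

```python
def solve3(A, b):
    """Gauss-Jordan elimination for a small linear system (pure Python)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_decomposition(samples):
    """Least-squares fit of t1 ~ c0 + c1*pL + c2*pH from calibration
    samples (pL, pH, t1) via the normal equations."""
    XtX = [[0.0] * 3 for _ in range(3)]
    Xty = [0.0] * 3
    for pL, pH, t1 in samples:
        x = (1.0, pL, pH)
        for i in range(3):
            Xty[i] += x[i] * t1
            for j in range(3):
                XtX[i][j] += x[i] * x[j]
    return solve3(XtX, Xty)

# toy mono-energetic forward model; attenuation coefficients are invented
muL, muH = (0.5, 0.2), (0.3, 0.15)       # (material 1, material 2) at low/high kV
samples = [(muL[0] * t1 + muL[1] * t2, muH[0] * t1 + muH[1] * t2, float(t1))
           for t1 in range(4) for t2 in range(4)]
c = fit_decomposition(samples)
pL, pH = muL[0] * 2 + muL[1] * 1, muH[0] * 2 + muH[1] * 1
t1_est = c[0] + c[1] * pL + c[2] * pH    # should recover t1 = 2
```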

  13. ENDIX. A computer program to simulate energy dispersive X-ray and synchrotron powder diffraction diagrams

    International Nuclear Information System (INIS)

    Hovestreydt, E.; Karlsruhe Univ.; Parthe, E.; Benedict, U.

    1987-01-01

    A Fortran 77 computer program is described which allows the simulation of energy dispersive X-ray and synchrotron powder diffraction diagrams. The input consists of structural data (space group, unit cell dimensions, atomic positional and displacement parameters) and information on the experimental conditions (chosen Bragg angle, type of X-ray tube and applied voltage or operating power of synchrotron radiation source). The output consists of the normalized intensities of the diffraction lines, listed by increasing energy (in keV), and of an optional intensity-energy plot. The intensities are calculated with due consideration of the wave-length dependence of both the anomalous dispersion and the absorption coefficients. For a better agreement between observed and calculated spectra provision is made to optionally superimpose, on the calculated diffraction line spectrum, all additional lines such as fluorescence and emission lines and escape peaks. The different effects which have been considered in the simulation are discussed in some detail. A sample calculation of the energy dispersive powder diffraction pattern of UPt₃ (Ni₃Sn structure type) is given. Warning: the user of ENDIX should be aware that for a successful application it is necessary to adapt the program to correspond to the actual experimental conditions. Even then, due to the only approximately known values of certain functions, the agreement between observed and calculated intensities will not be as good as for angle dispersive diffraction methods

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  15. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  17. Computer simulation to predict energy use, greenhouse gas emissions and costs for production of fluid milk using alternative processing methods

    Science.gov (United States)

    Computer simulation is a useful tool for benchmarking the electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature short time (HTST) pasteurization was extended to include models for pr...

  18. Computational Study of Environmental Effects on Torsional Free Energy Surface of N-Acetyl-N'-methyl-L-alanylamide Dipeptide

    Science.gov (United States)

    Carlotto, Silvia; Zerbetto, Mirco

    2014-01-01

    We propose an articulated computational experiment in which both quantum mechanics (QM) and molecular mechanics (MM) methods are employed to investigate environment effects on the free energy surface for the backbone dihedral angle rotation of the small dipeptide N-Acetyl-N'-methyl-L-alanylamide. This computational exercise is appropriate for an…

  19. A computationally efficient electricity price forecasting model for real time energy markets

    International Nuclear Information System (INIS)

    Feijoo, Felipe; Silva, Walter; Das, Tapas K.

    2016-01-01

    Highlights: • A fast hybrid forecast model for electricity prices. • An accurate forecast model that combines K-means and machine learning techniques. • Low computational effort through elimination of feature selection techniques. • New benchmark results using market data for years 2012 and 2015. - Abstract: The increased significance of demand response and the proliferation of distributed energy resources will continue to demand faster and more accurate models for forecasting locational marginal prices. This paper presents such a model (named K-SVR). While yielding prediction accuracy comparable with the best known models in the literature, K-SVR requires significantly reduced computational time. The computational reduction is attained by eliminating the use of a feature selection process, which is commonly used by the existing models in the literature. K-SVR is a hybrid model that combines clustering algorithms, support vector machine, and support vector regression. K-SVR is tested using Pennsylvania–New Jersey–Maryland market data from the periods 2005–6, 2011–12, and 2014–15. Market data from 2006 have been used to measure the performance of many existing models. The authors chose these models to compare performance and demonstrate the strengths of K-SVR. Results obtained from K-SVR using the market data from 2012 and 2015 are new, and will serve as benchmarks for future models.
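The cluster-then-regress structure of K-SVR can be illustrated in miniature. This pure-Python toy segments hours by a single load feature with 1-D k-means and fits an ordinary least-squares line per cluster as a stand-in for the SVR stage; the feature and the two price regimes are invented for illustration:

```python
def kmeans(xs, k, iters=50):
    """1-D Lloyd's algorithm with deterministic evenly spaced initialization."""
    lo, hi = min(xs), max(xs)
    centers = [lo] if k == 1 else [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def linfit(pairs):
    """Closed-form least-squares line for one cluster's regressor
    (a linear stand-in for the SVR stage of K-SVR)."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def train(loads, prices, k=2):
    """Cluster hours by load, then fit one price regressor per cluster."""
    centers = kmeans(loads, k)

    def assign(x):
        return min(range(k), key=lambda c: abs(x - centers[c]))

    models = {c: linfit([(x, y) for x, y in zip(loads, prices)
                         if assign(x) == c]) for c in range(k)}

    def predict(x):
        a, b = models[assign(x)]
        return a * x + b
    return predict
```

With two well-separated load regimes following different linear price rules, the per-cluster fits recover each rule exactly.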

  20. Review on the applications of the very high speed computing technique to atomic energy field

    International Nuclear Information System (INIS)

    Hoshino, Tsutomu

    1981-01-01

    The demand for calculation in the atomic energy field is enormous; the physical and technological knowledge obtained by experiments is summarized into mathematical models and accumulated as computer programs for design, safety analysis and operational management. These calculation code systems are classified into reactor physics, reactor technology, operational management and nuclear fusion. In this paper, the demand for calculation speed in the diffusion and transport of neutrons, shielding, technological safety, core control and particle simulation is explained as typical calculations. These calculations are divided into two models: the fluid model, which regards physical systems as a continuum, and the particle model, which regards physical systems as composed of a finite number of particles. The speed of present computers is too slow, and a capability 1000 to 10000 times that of present general-purpose machines is desirable. The calculation techniques of pipeline systems and parallel processor systems are described. As an example of a practical system, the computer network OCTOPUS at the Lawrence Livermore Laboratory is shown. Also, the CHI system at UCLA is introduced. (Kako, I.)

  1. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform full configuration interaction (FCI) energy calculations with polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of the CH₂ molecule. This molecule was chosen as a benchmark, since its two lowest lying ¹A₁ states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
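The iterative phase estimation algorithm (IPEA) underlying such quantum FCI calculations can be simulated classically in the ideal, noiseless case. The sketch below assumes the register holds an exact eigenstate whose eigenphase is representable in m bits, so each measured bit is deterministic; it illustrates the textbook algorithm, not the authors' simulation code:

```python
import math, cmath

def ipea(phase, m):
    """Classical simulation of ideal iterative phase estimation (IPEA).
    The register is assumed to hold an exact eigenstate of U with
    eigenvalue exp(2*pi*i*phase), and `phase` is assumed exactly
    representable in m binary digits: phase = 0.b1 b2 ... bm."""
    bits = [0] * (m + 1)               # bits[k] will hold digit b_k
    for k in range(m, 0, -1):          # least significant digit first
        # phase kicked onto the ancilla by controlled-U^(2^(k-1))
        theta = 2 * math.pi * ((phase * 2 ** (k - 1)) % 1.0)
        # feedback rotation compensating the already-measured lower digits
        omega = 2 * math.pi * sum(bits[k + l] / 2 ** (l + 1)
                                  for l in range(1, m - k + 1))
        amp0 = (1 + cmath.exp(1j * (theta - omega))) / 2   # <0| after final H
        bits[k] = 0 if abs(amp0) ** 2 > 0.5 else 1         # deterministic here
    return bits[1:]
```

For phase 11/16 = 0.1011 in binary, four rounds recover the digits exactly; in the realistic multireference case discussed above, success probabilities are no longer 1, which is where the probability amplification regime matters.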

  2. Modified energy-deposition model, for the computation of the stopping-power ratio for small cavity sizes

    International Nuclear Information System (INIS)

    Janssens, A.C.A.

    1981-01-01

    This paper presents a modification to the Spencer-Attix theory, which allows application of the theory to larger cavity sizes. The modified theory is in better agreement with the actual process of energy deposition by delta rays. In the first part of the paper it is recalled how the Spencer-Attix theory can be derived from basic principles, which allows a physical interpretation of the theory in terms of a function describing the space and direction average of the deposited energy. A realistic model for the computation of this function is described and the resulting expression for the stopping-power ratio is calculated. For the comparison between the Spencer-Attix theory and this modified expression a correction factor to the ''Bragg-Gray inhomogeneous term'' has been defined. This factor has been computed as a function of cavity size for different source energies and mean excitation energies; thus, general properties of this factor have been elucidated. The computations have been extended to include the density effect. It has been shown that the computation of the inhomogeneous term can be performed for any expression describing the energy loss per unit distance of the electrons as a function of their energy. Thus an expression has been calculated which is in agreement with a quadratic range-energy relationship. In conclusion, the concrete procedure for computing the stopping-power ratio is reviewed

  3. Comparison of energy expenditure in adolescents when playing new generation and sedentary computer games: cross sectional study.

    Science.gov (United States)

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2007-12-22

    To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Cross sectional comparison of four computer games. Research laboratories. Six boys and five girls aged 13-15 years. Procedure: participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Predicted energy expenditure, compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.

  4. PREFACE: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015)

    Science.gov (United States)

    Sakamoto, H.; Bonacorsi, D.; Ueda, I.; Lyon, A.

    2015-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences intended to attract physicists and computing professionals to discuss recent developments and trends in software and computing for their research communities. Experts from the high energy and nuclear physics, computer science, and information technology communities attend CHEP events. This conference series provides an international forum to exchange experiences and the needs of a wide community, and to present and discuss recent, ongoing, and future activities. At the beginning of the successful series of CHEP conferences in 1985, the latest developments in embedded systems, networking, vector and parallel processing were presented in Amsterdam. The software and computing ecosystem has massively evolved since then, and along this path each CHEP event has marked a step further. A vibrant community of experts on a wide range of different high-energy and nuclear physics experiments, as well as technology explorers and industry contacts, attend and discuss the present and future challenges, and shape the future of an entire community. In such a rapidly evolving area, aiming to capture the state of the art in software and computing through a collection of proceedings papers in a journal is a big challenge. Due to the large attendance, the final papers appear in the journal a few months after the conference is over. Additionally, the contributions often report on studies at very heterogeneous stages, namely studies that are completed, just started, or yet to be done. It is not uncommon that by the time a specific paper appears in the journal some of the work is over a year old, or the investigation actually proceeded in different directions and with different methodologies than originally presented at the conference just a few months before. And by the time the proceedings appear in journal form, new ideas and explorations have

  5. Microscopic description of fission in odd-mass uranium and plutonium nuclei with the Gogny energy density functional

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Guzman, R. [Kuwait University, Physics Department, Kuwait (Kuwait); Robledo, L.M. [Universidad Autonoma de Madrid, Departamento de Fisica Teorica, Madrid (Spain); Universidad Politecnica de Madrid, Center for Computational Simulation, Boadilla del Monte (Spain)

    2017-12-15

    The parametrization D1M of the Gogny energy density functional is used to study fission in the odd-mass uranium and plutonium isotopes with A = 233, ..., 249 within the framework of the Hartree-Fock-Bogoliubov (HFB) Equal Filling Approximation (EFA). Ground state quantum numbers and deformations, pairing energies, one-neutron separation energies, barrier heights and fission isomer excitation energies are given. Fission paths, collective masses and zero point rotational and vibrational quantum corrections are used to compute the systematics of the spontaneous fission half-lives t_SF, the masses and charges of the fission fragments as well as their intrinsic shapes. Although there exists a strong variance of the predicted fission rates with respect to the details involved in their computation, it is shown that both the specialization energy and the pairing quenching effects, taken into account fully variationally within the HFB-EFA blocking scheme, lead to larger spontaneous fission half-lives in odd-mass U and Pu nuclei as compared with the corresponding even-even neighbors. It is shown that modifications of a few percent in the strengths of the neutron and proton pairing fields can have a significant impact on the collective masses, leading to uncertainties of several orders of magnitude in the predicted t_SF values. Alpha-decay lifetimes have also been computed using a parametrization of the Viola-Seaborg formula. (orig.)

  6. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'09)

    Science.gov (United States)

    Gruntorad, Jan; Lokajicek, Milos

    2010-11-01

    The 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 21-27 March 2009 in Prague, Czech Republic. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing and future activities. Recent conferences were held in Victoria, Canada in 2007, Mumbai, India in 2006, Interlaken, Switzerland in 2004, San Diego, USA in 2003, Beijing, China in 2001, and Padua, Italy in 2000. The CHEP'09 conference had 600 attendees, with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising 200 oral and 300 poster presentations, and an industrial exhibition. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Components, Tools and Databases, Hardware and Computing Fabrics, Grid Middleware and Networking Technologies, Distributed Processing and Analysis, and Collaborative Tools. The conference included excursions to Prague and other Czech cities and castles, and a banquet held at the Zofin palace in Prague. The next CHEP conference will be held in Taipei, Taiwan on 18-22 October 2010. We would like to thank the Ministry of Education, Youth and Sports of the Czech Republic and the EU ACEOLE project for their support of the conference, as well as the commercial sponsors, the International Advisory Committee, and the Local Organizing Committee members representing the five collaborating Czech institutions Jan Gruntorad (co-chair), CESNET, z.s.p.o., Prague Andrej Kugler, Nuclear Physics Institute AS CR v.v.i., Rez Rupert Leitner, Charles University in Prague, Faculty of Mathematics and

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  8. The impact of increased efficiency in the industrial use of energy: A computable general equilibrium analysis for the United Kingdom

    International Nuclear Information System (INIS)

    Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen

    2007-01-01

    The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services. The response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across the board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration and the mechanism through which increased government revenues are recycled back to the economy
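The rebound figures quoted above follow from a simple definition: the share of the engineering-expected energy saving that is offset by the economy's response to cheaper energy services. A minimal sketch (the example percentages are invented for illustration):

```python
def rebound(efficiency_gain, actual_energy_change):
    """Rebound effect: 1 - (actual saving / engineering-expected saving).
    efficiency_gain: fractional efficiency improvement (0.05 for 5%).
    actual_energy_change: observed fractional change in energy use
    (negative means energy use fell)."""
    expected_saving = efficiency_gain        # naive engineering expectation
    actual_saving = -actual_energy_change
    return 1.0 - actual_saving / expected_saving

# a 5% efficiency gain that lowers energy use by only 3% gives a 40% rebound;
# if energy use rises despite the gain, rebound exceeds 100% ("backfire")
```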

  9. Complementary contrast media for metal artifact reduction in dual-energy computed tomography.

    Science.gov (United States)

    Lambert, Jack W; Edic, Peter M; FitzGerald, Paul F; Torres, Andrew S; Yeh, Benjamin M

    2015-07-01

    Metal artifacts have been a problem associated with computed tomography (CT) since its introduction. Recent techniques to mitigate this problem have included utilization of high-energy (keV) virtual monochromatic spectral (VMS) images, produced via dual-energy CT (DECT). A problem with these high-keV images is that contrast enhancement provided by all commercially available contrast media is severely reduced. Contrast agents based on higher atomic number elements can maintain contrast at the higher energy levels where artifacts are reduced. This study evaluated three such candidate elements: bismuth, tantalum, and tungsten, as well as two conventional contrast elements: iodine and barium. A water-based phantom with vials containing these five elements in solution, as well as different artifact-producing metal structures, was scanned with a DECT scanner capable of rapid operating voltage switching. In the VMS datasets, substantial reductions in the contrast were observed for iodine and barium, which suffered from contrast reductions of 97% and 91%, respectively, at 140 versus 40 keV. In comparison under the same conditions, the candidate agents demonstrated contrast enhancement reductions of only 20%, 29%, and 32% for tungsten, tantalum, and bismuth, respectively. At 140 versus 40 keV, metal artifact severity was reduced by 57% to 85% depending on the phantom configuration.

  10. Computer simulation program for medium-energy ion scattering and Rutherford backscattering spectrometry

    Science.gov (United States)

    Nishimura, Tomoaki

    2016-03-01

    A computer simulation program for ion scattering and its graphical user interface (MEISwin) has been developed. Using this program, researchers have analyzed medium-energy ion scattering and Rutherford backscattering spectrometry at Ritsumeikan University since 1998, and at Rutgers University since 2007. The main features of the program are as follows: (1) stopping power can be chosen from five datasets spanning several decades (from 1977 to 2011), (2) straggling can be chosen from two datasets, (3) spectral shape can be selected as Gaussian or exponentially modified Gaussian, (4) scattering cross sections can be selected as Coulomb or screened, (5) simulations adopt the resonant elastic scattering cross section of 16O(4He, 4He)16O, (6) pileup simulation for RBS spectra is supported, (7) natural and specific isotope abundances are supported, and (8) the charge fraction can be chosen from three patterns (fixed, energy-dependent, and ion fraction with charge-exchange parameters for medium-energy ion scattering). This study demonstrates and discusses the simulations and their results.

  11. Computer simulation program for medium-energy ion scattering and Rutherford backscattering spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, Tomoaki, E-mail: t-nishi@hosei.ac.jp

    2016-03-15

    A computer simulation program for ion scattering and its graphical user interface (MEISwin) has been developed. Using this program, researchers have analyzed medium-energy ion scattering and Rutherford backscattering spectrometry at Ritsumeikan University since 1998, and at Rutgers University since 2007. The main features of the program are as follows: (1) stopping power can be chosen from five datasets spanning several decades (from 1977 to 2011), (2) straggling can be chosen from two datasets, (3) spectral shape can be selected as Gaussian or exponentially modified Gaussian, (4) scattering cross sections can be selected as Coulomb or screened, (5) simulations adopt the resonant elastic scattering cross section of 16O(4He, 4He)16O, (6) pileup simulation for RBS spectra is supported, (7) natural and specific isotope abundances are supported, and (8) the charge fraction can be chosen from three patterns (fixed, energy-dependent, and ion fraction with charge-exchange parameters for medium-energy ion scattering). This study demonstrates and discusses the simulations and their results.

  12. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
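The "less than 1 FIT" figure quoted above uses the standard failures-in-time unit: 1 FIT is one failure per 10^9 device-hours. A minimal sketch of that conversion (the failure counts and hours below are made-up inputs for illustration, not values from the study):

```python
# FIT (failures in time) arithmetic used to express soft-error rates:
# 1 FIT = 1 failure per 1e9 device-hours. The inputs below are
# illustrative, not values from the study.

def fit_rate(failures: float, device_hours: float) -> float:
    """Failure rate expressed in FIT (failures per 1e9 device-hours)."""
    return failures / device_hours * 1e9

# One failure observed over 2e9 device-hours -> 0.5 FIT,
# i.e. below the "less than 1 FIT" threshold reported above.
print(fit_rate(1, 2e9))  # 0.5
```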

  13. Grand Challenges of Advanced Computing for Energy Innovation Report from the Workshop Held July 31-August 2, 2012

    Energy Technology Data Exchange (ETDEWEB)

    Larzelere, Alex R.; Ashby, Steven F.; Christensen, Dana C.; Crawford, Dona L.; Khaleel, Mohammad A.; John, Grosh; Stults, B. Ray; Lee, Steven L.; Hammond, Steven W.; Grover, Benjamin T.; Neely, Rob; Dudney, Lee Ann; Goldstein, Noah C.; Wells, Jack; Peltz, Jim

    2013-03-06

    On July 31-August 2 of 2012, the U.S. Department of Energy (DOE) held a workshop entitled Grand Challenges of Advanced Computing for Energy Innovation. This workshop built on three earlier workshops that clearly identified the potential for the Department and its national laboratories to enable energy innovation. The specific goal of the workshop was to identify the key challenges that the nation must overcome to apply the full benefit of taxpayer-funded advanced computing technologies to U.S. energy innovation in the ways that the country produces, moves, stores, and uses energy. Perhaps more importantly, the workshop also developed a set of recommendations to help the Department overcome those challenges. These recommendations provide an action plan for what the Department can do in the coming years to improve the nation’s energy future.

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB of RAW per event. The central collisions are more complex and...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini, and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  16. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010)

    Science.gov (United States)

    Lin, Simon C.; Shen, Stella; Neufeld, Niko; Gutsche, Oliver; Cattaneo, Marco; Fisk, Ian; Panzer-Steindel, Bernd; Di Meglio, Alberto; Lokajicek, Milos

    2011-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at Academia Sinica in Taipei from 18-22 October 2010. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18 month intervals, alternating between Europe, Asia, America and other parts of the world. Recent CHEP conferences have been held in Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, California (2003); Beijing, China (2001); and Padova, Italy (2000). CHEP 2010 was organized by Academia Sinica Grid Computing Centre. There was an International Advisory Committee (IAC) setting the overall themes of the conference, a Programme Committee (PC) responsible for the content, as well as a Conference Secretariat responsible for the conference infrastructure. There were over 500 attendees with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 260 oral and 200 poster presentations, and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Engineering, Data Stores and Databases, Distributed Processing and Analysis, Computing Fabrics and Networking Technologies, Grid and Cloud Middleware, and Collaborative Tools. The conference included excursions to various attractions in Northern Taiwan, including Sanhsia Tsu Shih Temple, Yingko, Chiufen Village, the Northeast Coast National Scenic Area, Keelung, Yehliu Geopark, and Wulai Aboriginal Village

  17. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

    Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution for the nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging efforts. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment shows several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as well suited to computing equilibrium concentrations as they are to serving as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO.Cr2O3(s). The atmosphere consists of the O2, CO and CO2 constituents. The liquid iron
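The method described above, minimizing total Gibbs energy numerically instead of solving equilibrium-constant equations, can be sketched on a toy problem. The example below uses an ideal-gas isomerization A <-> B (a stand-in for the Fe-Cr-O-C-Ni chemistry of the paper; the standard chemical potentials are made-up numbers), searches the feasible composition range for the minimum much as a spreadsheet Solver would, and cross-checks the answer against the equilibrium-constant route:

```python
import math

# Direct Gibbs-energy minimization for a toy ideal-gas isomerization
# A <-> B at 1 bar. mu_A and mu_B are assumed standard chemical
# potentials, not data from the paper.

R = 8.314       # gas constant, J/(mol K)
T = 1000.0      # temperature, K
mu_A = 0.0      # standard chemical potential of A, J/mol (assumed)
mu_B = -5000.0  # standard chemical potential of B, J/mol (assumed)

def gibbs(x: float) -> float:
    """Total G/RT for n_A = 1-x, n_B = x mol; element balance is built in."""
    g = 0.0
    for n, mu in ((1.0 - x, mu_A), (x, mu_B)):
        if n > 0.0:
            g += n * (mu / (R * T) + math.log(n))  # total is 1 mol, so y_i = n_i
    return g

# Crude stand-in for the Solver: scan the feasible range for the minimum.
x_min = min((i / 100000 for i in range(1, 100000)), key=gibbs)

# Cross-check against the equilibrium-constant route: x = K / (1 + K).
K = math.exp(-(mu_B - mu_A) / (R * T))
print(round(x_min, 3), round(K / (1 + K), 3))  # the two routes agree
```

In Excel the same minimization is set up with the mole numbers as decision cells, the total Gibbs energy as the objective, and the elemental mass balances as Solver constraints.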

  18. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event-parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost-effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  19. Towards a self-consistent computation of vacuum energy in 11-dimensional supergravity

    International Nuclear Information System (INIS)

    Randjbar-Daemi, S.; Salam, A.; Strathdee, J.

    1984-02-01

    An attempt is made to balance the negative vacuum energy associated with the Freund-Rubin compactification of the 11-dimensional supergravity theory against the contribution from vacuum fluctuations. We do this in order to obtain a ground state geometry which has four physical (flat) dimensions and is of the form (Minkowski)^4 x B^7, where B^7 is one of the 7-dimensional manifolds S^7, S^5 x S^2, S^4 x S^3, CP^2 x S^3, S^3 x S^2 x S^2, or a 5-parameter family of SU(3)xSU(2)xU(1)-invariant spaces M^(pqrst). We find that all of these solutions are unstable. As a side-issue the facility for computation of the particle spectra, which results from the use of lightcone gauge, is emphasized. (author)

  20. Emerging materials and devices in spintronic integrated circuits for energy-smart mobile computing and connectivity

    International Nuclear Information System (INIS)

    Kang, S.H.; Lee, K.

    2013-01-01

    A spintronic integrated circuit (IC) is made of a combination of a semiconductor IC and a dense array of nanometer-scale magnetic tunnel junctions. This emerging field is of growing scientific and engineering interest, owing to its potential to bring disruptive device innovation to the world of electronics. This technology is currently being pursued not only for scalable non-volatile spin-transfer-torque magnetoresistive random access memory, but also for various forms of non-volatile logic (Spin-Logic). This paper reviews recent advances in spintronic ICs. Key discoveries and breakthroughs in materials and devices are highlighted in light of the broader perspective of their application in low-energy mobile computing and connectivity systems, which have emerged as leading drivers for the prevailing electronics ecosystem