Whorton, E.; Headman, A.; Shean, D. E.; McCann, E.
2017-12-01
Understanding the implications of glacier recession on water resources in the western U.S. requires quantifying glacier mass change across large regions over several decades. Very few glaciers in North America have long-term continuous field measurements of glacier mass balance. However, systematic aerial photography campaigns began in 1957 on many glaciers in the western U.S. and Alaska. These historical, vertical aerial stereo-photographs documenting glacier evolution have recently become publicly available. Digital elevation models (DEMs) of the transient glacier surface preserved in each imagery timestamp can be derived, then differenced to calculate glacier volume and mass change, improving regional geodetic solutions of glacier mass balance. To batch process these data, we use Python-based algorithms and Agisoft Photoscan structure-from-motion (SfM) photogrammetry software to semi-automate DEM creation, and to orthorectify and co-register historical aerial imagery in a high-performance computing environment. Scanned photographs are rotated to reduce scaling issues, cropped to the same size to remove fiducials, and batch histogram equalization is applied with the Python library OpenCV to improve image quality and aid pixel-matching algorithms. Processed photographs are then passed to Photoscan through the Photoscan Python library to create DEMs and orthoimagery. To extend the period of record, the elevation products are co-registered to each other, to airborne LiDAR data, and to DEMs derived from sub-meter commercial satellite imagery. With the exception of the placement of ground control points, the process is entirely automated with Python. Current research is focused on: (1) applying these algorithms to create geodetic mass balance time series for the 90 photographed glaciers in Washington State, and (2) evaluating the minimal amount of positional information required in Photoscan to prevent distortion effects that cannot be addressed during co-registration.
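The image-preprocessing step described above (cropping out fiducials, then histogram equalization) can be sketched in Python. The crop width is an assumption, and the pure-NumPy equalization below is an illustrative stand-in for the production OpenCV call (`cv2.equalizeHist`):

```python
import numpy as np

def equalize_hist(img):
    # Map gray levels through the normalized cumulative histogram
    # (the same operation cv2.equalizeHist performs in the pipeline).
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf.astype(np.uint8)[img]

def preprocess_scan(img, crop_px=200):
    # Crop a fixed border to strip fiducial marks (the border width
    # is an assumption), then equalize to aid pixel matching.
    cropped = img[crop_px:-crop_px, crop_px:-crop_px]
    return equalize_hist(cropped)
```

In the actual pipeline this would run per scanned frame before handing the images to the Photoscan Python API.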
Stanke, Monika; Bralin, Amir; Bubin, Sergiy; Adamowicz, Ludwik
2018-01-01
In this work we report progress in the development and implementation of quantum-mechanical methods for calculating bound ground and excited states of small atomic systems. The work concerns singlet states with the L =1 total orbital angular momentum (P states). The method is based on the finite-nuclear-mass (non-Born-Oppenheimer; non-BO) approach and the use of all-particle explicitly correlated Gaussian functions for expanding the nonrelativistic wave function of the system. The development presented here includes derivation and implementation of algorithms for calculating the leading relativistic corrections for singlet states. The corrections are determined in the framework of the perturbation theory as expectation values of the corresponding effective operators using the non-BO wave functions. The method is tested in the calculations of the ten lowest 1P states of the helium atom and the four lowest 1P states of the beryllium atom.
Firth, David; Bell, Leonard; Squires, Martin; Estdale, Sian; McKee, Colin
2015-09-15
We present the demonstration of a rapid "middle-up" liquid chromatography mass spectrometry (LC-MS)-based workflow for use in the characterization of thiol-conjugated maleimidocaproyl-monomethyl auristatin F (mcMMAF) and valine-citrulline-monomethyl auristatin E (vcMMAE) antibody-drug conjugates. Deconvoluted spectra were generated following a combination of deglycosylation, IdeS (immunoglobulin-degrading enzyme from Streptococcus pyogenes) digestion, and reduction steps that provide a visual representation of the product for rapid lot-to-lot comparison, a means to quickly assess the integrity of the antibody structure and the applied conjugation chemistry by mass. The relative abundance of the detected ions also offers information regarding differences in drug conjugation levels between samples, and the average drug-antibody ratio can be calculated. The approach requires little material (<100 μg) and, thus, is amenable to small-scale process development testing or as an early component of a complete characterization project, facilitating informed decision making regarding which aspects of a molecule might need to be examined in more detail by orthogonal methodologies. Copyright © 2015 Elsevier Inc. All rights reserved.
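The average drug-antibody ratio (DAR) mentioned above is an intensity-weighted mean over the detected conjugate species; a minimal sketch, with hypothetical abundance values:

```python
def average_dar(relative_abundances):
    # relative_abundances: {drug load n: relative ion intensity I_n}
    # DAR = sum(n * I_n) / sum(I_n)
    total = sum(relative_abundances.values())
    return sum(n * i for n, i in relative_abundances.items()) / total

# Hypothetical distribution over species carrying 0, 2, 4, 6 drugs:
dar = average_dar({0: 10.0, 2: 40.0, 4: 40.0, 6: 10.0})  # -> 3.0
```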
Mass: Fortran program for calculating mass-absorption coefficients
International Nuclear Information System (INIS)
Nielsen, Aa.; Svane Petersen, T.
1980-01-01
Determination of mass-absorption coefficients in the x-ray analysis of trace elements is an important and time-consuming part of the arithmetic calculation. In the course of time, different methods have been used. The program MASS calculates the mass-absorption coefficients from a given major element analysis at the x-ray wavelengths normally used in trace element determinations, and lists the chemical analysis and the mass-absorption coefficients. The program is coded in FORTRAN IV and is operational on the IBM 370/165, the UNIVAC 1110 and the PDP 11/05. (author)
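The core arithmetic such a program performs is the standard mixture rule for mass-absorption coefficients, weighting each element's coefficient by its weight fraction. A sketch in Python (the coefficients below are made-up illustrative numbers, not tabulated values):

```python
def mu_rho_mixture(weight_fractions, mu_rho_element):
    # Mixture rule at a fixed wavelength:
    # (mu/rho)_mix = sum_i w_i * (mu/rho)_i
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-6
    return sum(w * mu_rho_element[el] for el, w in weight_fractions.items())

# Illustrative (made-up) coefficients in cm^2/g at one wavelength:
coeffs = {"Si": 60.3, "O": 11.5, "Fe": 308.0}
sample = {"Si": 0.30, "O": 0.55, "Fe": 0.15}   # weight fractions
mu = mu_rho_mixture(sample, coeffs)
```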
Gaseous Nitrogen Orifice Mass Flow Calculator
Ritrivi, Charles
2013-01-01
The Gaseous Nitrogen (GN2) Orifice Mass Flow Calculator was used to determine Space Shuttle Orbiter Water Spray Boiler (WSB) GN2 high-pressure tank source depletion rates for various leak scenarios, and the ability of the GN2 consumables to support cooling of Auxiliary Power Unit (APU) lubrication during entry. The data were used to support flight rationale concerning loss of an orbiter APU/hydraulic system and mission work-arounds. The GN2 mass flow-rate calculator standardizes a method for rapid assessment of GN2 mass flow through various orifice sizes for various discharge coefficients, delta pressures, and temperatures. The calculator utilizes a 0.9-lb (0.4 kg) GN2 source regulated to 40 psia (276 kPa). These parameters correspond to the Space Shuttle WSB GN2 Source and Water Tank Bellows, but can be changed in the spreadsheet to accommodate any system parameters. The calculator can be used to analyze a leak source, leak rate, gas consumables depletion time, and the puncture diameter that simulates the measured GN2 system pressure drop.
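A calculator of this kind typically rests on the standard choked-orifice relation for a compressible gas. The sketch below is that generic textbook formula, not necessarily the exact spreadsheet equation; it applies only when the downstream pressure is below the critical ratio so the flow is sonic at the throat:

```python
import math

def choked_mass_flow(Cd, d_m, P0_pa, T0_k, gamma=1.4, R=296.8):
    # Choked (sonic) mass flow of GN2 through an orifice:
    #   mdot = Cd * A * P0 * sqrt(gamma / (R * T0))
    #          * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))
    # gamma = 1.4 and R = 296.8 J/(kg K) are nitrogen properties.
    A = math.pi * (d_m / 2.0) ** 2
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return Cd * A * P0_pa * math.sqrt(gamma / (R * T0_k)) * crit

# e.g. a 1 mm puncture at the 40 psia (~276 kPa) regulated pressure:
mdot = choked_mass_flow(Cd=0.8, d_m=1e-3, P0_pa=275800.0, T0_k=294.0)
```

Note the linear dependence on upstream pressure, which is what makes depletion-rate estimates straightforward once the orifice size and discharge coefficient are fixed.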
Hartree-Fock calculations of nuclear masses
International Nuclear Information System (INIS)
Quentin, P.
1976-01-01
Hartree-Fock calculations pertaining to the determination of nuclear binding energies throughout the whole chart of nuclides are reviewed. Such an approach is compared with other methods. Main techniques in use are shortly presented. Advantages and drawbacks of these calculations are also discussed with a special emphasis on the extrapolation towards nuclei far from the stability valley. Finally, a discussion of some selected results from light to superheavy nuclei, is given [fr
Bhatnagar, Shashank; Alemu, Lmenew
2018-02-01
In this work we calculate the mass spectra of charmonium for the 1P,…,4P states of 0⁺⁺ and 1⁺⁺, the 1S,…,5S states of 0⁻⁺, and the 1S,…,4D states of 1⁻⁻, along with the two-photon decay widths of the ground and first excited states of 0⁺⁺ quarkonia for the process 0⁺⁺→γγ, in the framework of a QCD-motivated Bethe-Salpeter equation (BSE). In this 4×4 BSE framework, the coupled Salpeter equations are first shown to decouple for the confining part of the interaction (under the heavy-quark approximation) and are analytically solved, and later the one-gluon-exchange interaction is perturbatively incorporated, leading to mass spectral equations for various quarkonia. The analytic forms of wave functions obtained are used for the calculation of the two-photon decay widths of χc0. Our results are in reasonable agreement with data (where available) and other models.
The cellular approach to band structure calculations
International Nuclear Information System (INIS)
Verwoerd, W.S.
1982-01-01
A short introduction to the cellular approach to band structure calculations is given. The linear cellular approach and its potential applicability to surface structure calculations receive particular consideration.
van Stee, Leo L P; Brinkman, Udo A Th
2011-10-28
A method is presented to facilitate the non-target analysis of data obtained in temperature-programmed comprehensive two-dimensional (2D) gas chromatography coupled to time-of-flight mass spectrometry (GC×GC-ToF-MS). One main difficulty of GC×GC data analysis is that each peak is usually modulated several times and therefore appears as a series of peaks (or peaklets) in the one-dimensionally recorded data. The proposed method, 2DAid, uses basic chromatographic laws to calculate the theoretical shape of a 2D peak (a cluster of peaklets originating from the same analyte) in order to define the area in which the peaklets of each individual compound can be expected to show up. Based on analyte-identity information obtained by means of mass spectral library searching, the individual peaklets are then combined into a single 2D peak. The method is applied, amongst others, to a complex mixture containing 362 analytes. It is demonstrated that the 2D peak shapes can be accurately predicted and that clustering and further processing can reduce the final peak list to a manageable size. Copyright © 2011 Elsevier B.V. All rights reserved.
Improving the accuracy of dynamic mass calculation
Directory of Open Access Journals (Sweden)
Oleksandr F. Dashchenko
2015-06-01
With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. In addition to time-series methods, preliminary filtration is used to improve the accuracy of the calculation. The results of the simulation are presented.
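A generic version of the signal-processing idea (pre-filter the noisy strain-gauge samples, then take a robust statistic of the filtered plateau) can be sketched as follows. This is a minimal illustrative scheme, not the paper's actual algorithm, and the window length is an assumption:

```python
import numpy as np

def estimate_static_weight(signal, window=50):
    # Moving-average pre-filter suppresses dynamic oscillations of the
    # moving wagon; the median of the smoothed plateau is then a robust
    # estimate of the static weight.
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="valid")
    return float(np.median(smoothed))
```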
A calculation of the physical mass of sigma meson
International Nuclear Information System (INIS)
Morones-Ibarra, J.R.; Santos-Guevara, Ayax
2007-01-01
We calculate the physical mass and the width of the sigma meson by considering that it couples in vacuum to two virtual pions. The mass is calculated by using the spectral function, and we find that it is about 600 MeV. In addition, we obtain 220 MeV as the value for the width of its spectral function. The value obtained for the mass is in good agreement with that reported in the Particle Data Book for the σ meson, which is also named f₀(600). This result also shows that the σ meson can be considered a two-pion resonance. (author)
Calculation of the minimum critical mass of fissile nuclides
International Nuclear Information System (INIS)
Wright, R.Q.; Hopper, Calvin Mitchell
2008-01-01
The OB-1 method for the calculation of the minimum critical mass of fissile actinides in metal/water systems was described in a previous paper. A fit to the calculated minimum critical mass data using the extended criticality parameter is the basis of the revised method. The solution density (grams/liter) for the minimum critical mass is also obtained by a fit to calculated values. Input to the calculation consists of the Maxwellian averaged fission and absorption cross sections and the thermal values of nubar. The revised method gives more accurate values than the original method does for both the minimum critical mass and the solution densities. The OB-1 method has been extended to calculate the uncertainties in the minimum critical mass for 12 different fissile nuclides. The uncertainties for the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the minimum critical mass, either in percent or grams. Results have been obtained for U-233, U-235, Pu-236, Pu-239, Pu-241, Am-242m, Cm-243, Cm-245, Cf-249, Cf-251, Cf-253, and Es-254. Eight of these 12 nuclides are included in the ANS-8.15 standard.
Calculating the mass fraction of primordial black holes
Energy Technology Data Exchange (ETDEWEB)
Young, Sam; Byrnes, Christian T. [Department of Physics and Astronomy, University of Sussex, North-South Road, Brighton (United Kingdom); Sasaki, Misao, E-mail: sy81@sussex.ac.uk, E-mail: ctb22@sussex.ac.uk, E-mail: misao@yukawa.kyoto-u.ac.jp [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan)
2014-07-01
We reinspect the calculation for the mass fraction of primordial black holes (PBHs) which are formed from primordial perturbations, finding that performing the calculation using the comoving curvature perturbation R_c in the standard way vastly overestimates the number of PBHs, by many orders of magnitude. This is because PBHs form shortly after horizon entry, meaning modes significantly larger than the PBH are unobservable and should not affect whether a PBH forms or not; this important effect is not taken into account by smoothing the distribution in the standard fashion. We discuss alternative methods and argue that the density contrast, Δ, should be used instead, as super-horizon modes are damped by a factor k². We make a comparison between using a Press-Schechter approach and peaks theory, finding that the two are in close agreement in the region of interest. We also investigate the effect of varying the spectral index, and the running of the spectral index, on the abundance of primordial black holes.
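In the Press-Schechter approach mentioned above, the mass fraction at formation follows from integrating the Gaussian tail of the (smoothed) density contrast above a collapse threshold. A minimal sketch; the threshold value is illustrative, not this paper's result:

```python
import math

def pbh_mass_fraction(sigma, delta_c=0.45):
    # Press-Schechter mass fraction for a Gaussian density contrast
    # with smoothed variance sigma**2:
    #   beta = erfc(delta_c / (sqrt(2) * sigma))
    # delta_c ~ 0.45 is a commonly quoted formation threshold
    # (illustrative; the precise value is debated).
    return math.erfc(delta_c / (math.sqrt(2.0) * sigma))
```

The exponential sensitivity of the tail to sigma is why the choice of variable (curvature perturbation versus density contrast) changes the predicted abundance by orders of magnitude.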
Mass formula dependence of calculated spallation reaction product distributions
International Nuclear Information System (INIS)
Nishida, Takahiko; Nakahara, Yasuaki
1990-01-01
A new version of the spallation reaction simulation code NUCLEUS was developed by incorporating Uno and Yamada's mass formula. This version was used to calculate the distribution of products from the spallation of uranium nuclei by high-energy protons. The dependence of the distributions on the mass formula was examined by comparing the results with those from the original version, which is based on Cameron's mass formula and the mass table compiled by Wapstra et al. As regards the fission component of spallation products, the new version reproduces the reaction product data obtained from thin foil experiments much better, especially on the neutron excess side. (orig.) [de
Calculating Cluster Masses via the Sunyaev-Zel'dovich Effect
Lindley, Ashley; Landry, D.; Bonamente, M.; Joy, M.; Bulbul, E.; Carlstrom, J. E.; Culverhouse, T. L.; Gralla, M.; Greer, C.; Hawkins, D.; Lamb, J. W.; Leitch, E. M.; Marrone, D. P.; Miller, A.; Mroczkowski, T.; Muchovej, S.; Plagge, T.; Woody, D.
2012-05-01
Accurate measurements of the total mass of galaxy clusters are key to measuring the cluster mass function and thereby investigating the evolution of the universe. We apply two new methods to measure cluster masses for five galaxy clusters contained within the Brightest Cluster Sample (BCS), an X-ray luminous, statistically complete sample of 35 clusters at z = 0.15-0.30. These methods use only observations of the Sunyaev-Zel'dovich (SZ) effect, whose brightness is redshift independent. At the low redshifts of the BCS, X-ray observations can easily be used to determine cluster masses, providing convenient calibrators for our SZ mass calculations. These clusters have been observed with the Sunyaev-Zel'dovich Array (SZA), an interferometer that is part of the Combined Array for Research in Millimeter-wave Astronomy (CARMA) and has been optimized for accurate measurement of the SZ effect in clusters of galaxies at 30 GHz. One method implements a scaling relation between the integrated pressure, Y, as determined by the SZ observations, and the cluster mass calculated via optical weak lensing. The second method makes use of the virial theorem to determine the mass given the integrated pressure of the cluster. We find that masses calculated with these methods within a radius r500 are consistent with X-ray masses calculated from the surface brightness and temperature data within the same radius, and thus conclude that these are viable methods for determining cluster masses via the SZ effect. We present preliminary results of our analysis for five galaxy clusters.
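The first method amounts to a power-law scaling relation calibrated against weak-lensing masses; self-similar cluster models suggest M ∝ Y^(3/5), which is used here as an assumed slope in a minimal sketch (reference values are placeholders, not the paper's calibration):

```python
def sz_mass(Y, Y_ref, M_ref, slope=3.0 / 5.0):
    # Self-similar scaling M = M_ref * (Y / Y_ref)**(3/5), where
    # (Y_ref, M_ref) come from a calibrator cluster with a
    # weak-lensing (or X-ray) mass.
    return M_ref * (Y / Y_ref) ** slope
```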
Two-loop Higgs mass calculations beyond the MSSM with SARAH and SPheno
Energy Technology Data Exchange (ETDEWEB)
Nickel, Kilian [Physikalisches Institut, Universitaet Bonn (Germany); Staub, Florian [Theory Division, CERN, Geneva (Switzerland); Goodsell, Mark [LPTHE, UPMC Univ. Paris 06 (France)
2015-07-01
We present a recent extension to the Mathematica package SARAH which allows for Higgs mass calculations at the two-loop level in a wide range of supersymmetric models beyond the MSSM. These calculations are based on the effective potential approach. For the numerical evaluation, Fortran code for SPheno is generated by SARAH. This makes it possible to predict the Higgs mass in more complicated SUSY theories with a precision similar to what most state-of-the-art spectrum generators achieve for the MSSM.
Calculation of the collective mass-parameter including RPA corrections
International Nuclear Information System (INIS)
Pal, M.K.; Zawischa, D.; Speth, J.
1975-01-01
A derivation of the vibrational mass-parameter B is given which makes the consistency with RPA calculations explicit. The expected enhancement by the residual particle-hole and particle-particle interaction is demonstrated by solving the quasiparticle-RPA for deformed nuclei in the rare earth region. (orig.) [de
AN IMPROVEMENT ON MASS CALCULATIONS OF SOLAR CORONAL MASS EJECTIONS VIA POLARIMETRIC RECONSTRUCTION
International Nuclear Information System (INIS)
Dai, Xinghua; Wang, Huaning; Huang, Xin; Du, Zhanle; He, Han
2015-01-01
The mass of a coronal mass ejection (CME) is calculated from the measured brightness and an assumed geometry of Thomson scattering. The simplest geometry for mass calculations is to assume that all of the electrons are in the plane of the sky (POS). With additional information, such as the source region or multi-viewpoint observations, the mass can be calculated more precisely under the assumption that the entire CME lies in a plane defined by its trajectory. Polarization measurements provide information on the average angle of the CME electrons along the line of sight of each CCD pixel from the POS, and this can further improve the mass calculations, as discussed here. A CME event initiating on 2012 July 23 at 2:20 UT observed by the Solar Terrestrial Relations Observatory is employed to validate our method.
Calculations of mass and moment of inertia for neutron stars
International Nuclear Information System (INIS)
Moelnvik, T.; Oestgaard, E.
1985-01-01
Masses and moments of inertia for slowly-rotating neutron stars are calculated from the Tolman-Oppenheimer-Volkoff equations and various equations of state for neutron-star matter. We have also obtained pressure and density as functions of the distance from the centre of the star. Generally, two different equations of state are applied, for particle densities n > 0.47 fm⁻³ and n < 0.47 fm⁻³. The maximum mass in our calculations for all equations of state, except for the unrealistic non-relativistic ideal Fermi gas, is given by 1.50 M☉ < M_max, and the maximum moment of inertia lies between 10⁴⁴ g·cm² and 10⁴⁵ g·cm², which also seems to agree very well with 'experimental results'. The radius of the star corresponding to the maximum mass and maximum moment of inertia is given by 8.2 km < R < 10.0 km, but a smaller central density ρ_c will give a larger radius. (orig.)
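The TOV integration underlying such calculations can be sketched with a toy Γ = 2 polytrope in geometrized units (G = c = M☉ = 1). The stiffness constant K and the central density below are illustrative textbook values, not the equations of state used in the paper:

```python
import math

K = 100.0  # polytropic constant for the toy EOS P = K * rho**2

def tov_mass(rho_c, dr=1e-3, r_max=20.0):
    # Forward-Euler integration of the TOV equations
    #   dP/dr = -(rho + P) (m + 4 pi r^3 P) / (r (r - 2 m))
    #   dm/dr = 4 pi r^2 rho
    # outward from central density rho_c until the pressure vanishes.
    # Returns (surface radius, gravitational mass) in geometrized units.
    r, m = dr, 0.0
    rho = rho_c
    P = K * rho ** 2
    while P > 1e-12 and r < r_max:
        dPdr = -(rho + P) * (m + 4.0 * math.pi * r ** 3 * P) / (r * (r - 2.0 * m))
        m += 4.0 * math.pi * r ** 2 * rho * dr
        P += dPdr * dr
        rho = math.sqrt(max(P, 0.0) / K)  # invert the toy EOS
        r += dr
    return r, m
```

With rho_c = 1.28e-3 this toy model yields a mass of roughly 1.4 in solar-mass units, a standard test configuration in the numerical-relativity literature.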
Pfennig, Brian W.; Schaefer, Amy K.
2011-01-01
A general chemistry laboratory experiment is described that introduces students to instrumental analysis using gas chromatography-mass spectrometry (GC-MS), while simultaneously reinforcing the concepts of mass percent and the calculation of atomic mass. Working in small groups, students use the GC to separate and quantify the percent composition…
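The atomic-mass calculation the experiment reinforces is an abundance-weighted average over isotopes, e.g. for chlorine:

```python
def average_atomic_mass(isotopes):
    # isotopes: list of (isotopic mass in u, fractional abundance) pairs
    assert abs(sum(f for _, f in isotopes) - 1.0) < 1e-9
    return sum(m * f for m, f in isotopes)

# Chlorine: 75.76% Cl-35 and 24.24% Cl-37
chlorine = average_atomic_mass([(34.96885, 0.7576), (36.96590, 0.2424)])
# -> approximately 35.45 u, the tabulated atomic mass of Cl
```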
Calculation of Added Mass for Submerged Reactor with Complex Shape
Energy Technology Data Exchange (ETDEWEB)
Sun, Jong-Oh; Kim, Gyeongho; Choo, Yeon-Seok; Yoo, Yeon-Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
The Kijang Research Reactor (KJRR) is currently under construction. Its reactor sits on the bottom of a reactor pool filled with water to a depth of 12 m. Some components are installed on or inside the reactor, and their structural integrity and safety performance need to be verified for seismic situations. For the verification, time-history data or a Floor Response Spectrum (FRS) at their support location, i.e. the reactor, should be obtained. A Finite Element (FE) model with fluid elements can give very accurate results for this purpose; however, it is expensive in computational resources and time for the transient analyses. To make the model more efficient and simple, added masses are often used to simulate the effect of the water instead of fluid elements. Many publications introduce methods to calculate the added mass according to the exterior shape of a structure. In this paper, a method to calculate added masses for a complex-shaped structure is suggested. The proposed method was applied to the RSA for the KJRR, and its accuracy was verified by comparing the natural frequencies of the RSA modeled with fluid elements and with the added masses; the two models differed by less than 1.5%. Finally, it is concluded that the proposed method is quite useful for obtaining added masses for complex-shaped structures.
Nuclear structure calculations in the dynamic-interaction propagator approach
International Nuclear Information System (INIS)
Engelbrecht, C.A.; Hahne, F.J.W.; Heiss, W.D.
1978-01-01
The dynamic-interaction propagator approach provides a natural method for handling energy-dependent effective two-body interactions induced by collective excitations of a many-body system. In this work the technique is applied to the calculation of energy spectra and two-particle strengths in mass-18 nuclei. The energy dependence is induced by the dynamic exchange of the lowest 3⁻ octupole phonon in ¹⁶O, which is described within a normal static particle-hole RPA. This leads to poles in the two-body self-energy, which can be calculated if other fermion lines are restricted to particle states. The two-body interaction parameters are chosen to provide the correct phonon energy and reasonable negative-parity mass-17 and positive-parity mass-18 spectra. The fermion lines must be dressed consistently with the same exchange phonon to avoid redundant solutions or ghosts. The negative-parity states are then calculated in a parameter-free way which gives good agreement with the observed spectra [af
Numerical calculation of hadron masses in lattice quantum chromodynamics
International Nuclear Information System (INIS)
Montvay, I.
1985-07-01
Recent numerical Monte Carlo simulations of the hadron spectrum are reviewed. After a general introduction, different ways of calculating the hadron masses in the "quenched approximation" (i.e. neglecting virtual quark loops) are described and the latest results are summarized. The pseudofermion method and the iterative hopping expansion method for the introduction of dynamical quarks are discussed, and the first results on the hadron spectrum including the effect of virtual quark loops are reviewed. A separate section is devoted to the discussion of questions related to scaling with dynamical quarks. (orig./HSI)
Dynamical gluon masses in perturbative calculations at the loop level
International Nuclear Information System (INIS)
Machado, Fatima A.; Natale, Adriano A.
2013-01-01
In the phenomenology of strong interactions one always has to deal, to some extent, with the interplay between perturbative and non-perturbative QCD. On the one hand, the former has quite well-developed tools, provided by asymptotic freedom. On the other hand, concerning the latter, we nowadays envisage the following scenario: (1) there is strong evidence for a dynamically massive gluon propagator and an infrared-finite coupling constant; (2) there is extensive and successful use of an infrared-finite coupling constant in phenomenological calculations at tree level; (3) the infrared-finite coupling improves the convergence of the perturbative series; (4) the dynamical gluon mass provides a natural infrared cutoff in physical processes at tree level. Considering this scenario, it is natural to ask how these non-perturbative results can be used in perturbative calculations of physical observables at the loop level. Recent papers discuss how off-shell gauge- and renormalization-group-invariant Green functions can be computed with the use of the Pinch Technique (PT), with IR divergences removed by the dynamical gluon mass, and using a well-defined effective charge. In this work we improve on the authors' former results, which evaluated 1-loop corrections to some two- and three-point functions of SU(3) pure Yang-Mills theory, by investigating the dressing of quantities that could account for an extension of loop calculations to the infrared domain of the theory, in a way applicable to phenomenological calculations. One of these improvements is keeping the gluon propagator transverse in such a scheme. (author)
The effect of dynamical quark mass on the calculation of a strange quark star's structure
Institute of Scientific and Technical Information of China (English)
Gholam Hossein Bordbar; Babak Ziaei
2012-01-01
We discuss the dynamical behavior of strange quark matter components, in particular the effects of a density-dependent quark mass on the equation of state of strange quark matter. The dynamical masses of quarks are computed within the Nambu-Jona-Lasinio model; we then perform strange quark matter calculations employing the MIT bag model with these dynamical masses. To compare the dynamical mass interaction with the QCD quark-quark interaction, we consider the one-gluon-exchange term as the effective interaction between quarks in the MIT bag model. Our dynamical approach yields an improvement in the obtained equation-of-state values. We also investigate the structure of the strange quark star using the Tolman-Oppenheimer-Volkoff equations for all applied models. Our results show that the dynamical mass interaction leads to lower values of the gravitational mass.
A Tabular Approach to Titration Calculations
Lim, Kieran F.
2012-01-01
Titrations are common laboratory exercises in high school and university chemistry courses, because they are easy, relatively inexpensive, and they illustrate a number of fundamental chemical principles. While students have little difficulty with calculations involving a single titration step, there is a significant leap in conceptual difficulty…
Calculational approach to ionization spectrometer design
International Nuclear Information System (INIS)
Gabriel, T.A.
1974-01-01
Many factors contribute to the design and overall performance of an ionization spectrometer. These factors include the conditions under which the spectrometer is to be used, the required performance, the development of the hadronic and electromagnetic cascades, leakage and binding energies, saturation effects of densely ionizing particles, nonuniform light collection, sampling fluctuations, etc. The calculational procedures developed at Oak Ridge National Laboratory that have been applied to many spectrometer designs and that include many of the influencing factors in spectrometer design are discussed. The incident-particle types which can be considered with some generality are protons, neutrons, pions, muons, electrons, positrons, and gamma rays. Charged kaons can also be considered but with less generality. The incident-particle energy range can extend into the hundreds of GeV range. The calculations have been verified by comparison with experimental data but only up to approximately 30 GeV. Some comparisons with experimental data are also discussed and presented so that the flexibility of the calculational methods can be demonstrated. (U.S.)
40 CFR 75.83 - Calculation of Hg mass emissions and heat input rate.
2010-07-01
Title 40, Protection of Environment; Air Programs; Continuous Emission Monitoring; Hg Mass Emission Provisions. § 75.83 Calculation of Hg mass emissions and heat input rate. The owner or operator shall calculate Hg mass emissions...
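In generic form, an hourly Hg mass emission is the measured stack gas concentration times the volumetric flow; the rule itself prescribes specific units, averaging periods, and conversion constants. A minimal metric-unit sketch, not the regulatory equation:

```python
def hg_mass_rate_g_per_hr(conc_ug_per_scm, flow_scm_per_hr):
    # Hourly Hg mass = concentration (ug per standard cubic meter)
    # x stack gas flow (scm per hour), converted micrograms -> grams.
    return conc_ug_per_scm * flow_scm_per_hr / 1e6
```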
A meshless approach to radionuclide transport calculations
International Nuclear Information System (INIS)
Perko, J.; Sarler, B.
2005-01-01
Over the past thirty years, numerical modelling has emerged as an interdisciplinary scientific discipline with a significant impact on engineering and design. In the field of numerical modelling of transport phenomena in porous media, many commercial codes exist, based on different numerical methods. Some of them are widely used for performance assessment and safety analysis of radioactive waste repositories and for groundwater modelling. Although they have proved to be accurate and reliable tools, they have certain limitations and drawbacks. Realistic problems often involve complex geometry, which is difficult and time-consuming to discretize. In recent years, meshless methods have attracted much attention due to their flexibility in solving engineering and scientific problems. In meshless methods the cumbersome polygonization of the calculation domain is not necessary, which reduces the discretization time. In addition, the simulation is less dependent on discretization density than in traditional methods because of the lack of polygon interfaces. In this work the fully meshless Diffuse Approximate Method (DAM) is used for the calculation of radionuclide transport. Two cases are considered. First, a 1D comparison of ²²⁶Ra transport and decay solved by commercial Finite Volume Method (FVM) and Finite Element Method (FEM) based packages and by the DAM; this case shows the level of dependence on discretization density. Second, a realistic 2D case of near-field modelling of radionuclide transport from a radioactive waste repository; a comparison is again made between an FVM-based code and the DAM simulation for two radionuclides, long-lived ¹⁴C and short-lived ³H. The comparisons indicate the great capability of meshless methods to simulate complex transport problems and show that they should be seriously considered for future commercial simulation tools. (author)
Gamow's calculation of the neutron star's critical mass revised
International Nuclear Information System (INIS)
Ludwig, Hendrik; Ruffini, Remo
2014-01-01
It has at times been indicated that Landau introduced neutron stars in his classic paper of 1932. This is clearly impossible because the discovery of the neutron by Chadwick was submitted more than one month after Landau's work. According to his calculations, what Landau really studied were white dwarfs, and the critical mass he obtained clearly matched the value derived by Stoner and later by Chandrasekhar. The birth of the concept of a neutron star is still unclear today. Clearly, in 1934, the work of Baade and Zwicky pointed to neutron stars as originating from supernovae. Oppenheimer in 1939 is also well known to have introduced general relativity (GR) into the study of neutron stars. The aim of this note is to point out that the crucial idea for treating the neutron star was advanced in Newtonian theory by Gamow. However, this pioneering work was plagued by mistakes. The critical mass he should have obtained was 6.9 M☉, not the one he declared, namely 1.5 M☉. Probably, he was led to this result by Landau's work on white dwarfs. We revise the calculational and conceptual aspects of Gamow's critical-mass calculation and discuss whether it is justified to consider it the first neutron-star critical mass. We compare Gamow's approach to other early and modern approaches to the problem.
Systematic Approach to Calculate the Concentration of Chemical Species in Multi-Equilibrium Problems
Baeza-Baeza, Juan Jose; Garcia-Alvarez-Coque, Maria Celia
2011-01-01
A general systematic approach is proposed for the numerical calculation of multi-equilibrium problems. The approach involves several steps: (i) the establishment of balances involving the chemical species in solution (e.g., mass balances, charge balance, and stoichiometric balance for the reaction products), (ii) the selection of the unknowns (the…
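A minimal numerical sketch of the balance-based approach for the simplest case, a weak monoprotic acid HA in water; the acid constant and concentration below are illustrative values (close to those of acetic acid), not taken from the article:

```python
# Hedged sketch: one weak monoprotic acid HA (illustrative values).
# Mass balance:   C = [HA] + [A-]
# Charge balance: [H+] = [A-] + [OH-]
# Equilibria:     Ka = [H+][A-]/[HA],  Kw = [H+][OH-]
import math

def hydrogen_concentration(C, Ka, Kw=1.0e-14):
    """Solve the charge-balance residual f([H+]) = 0 by bisection."""
    def f(h):
        a = Ka * C / (Ka + h)   # [A-] from the mass balance and Ka
        oh = Kw / h             # [OH-] from the water equilibrium
        return h - a - oh       # charge-balance residual
    lo, hi = 1.0e-14, 1.0       # f(lo) < 0 < f(hi) on this bracket
    for _ in range(100):        # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

h = hydrogen_concentration(C=0.1, Ka=1.8e-5)  # ~0.1 M weak acid
print(-math.log10(h))                         # pH close to 2.9
```

The same pattern extends to multi-equilibrium problems: each additional species adds one balance equation and one unknown, and the system is solved simultaneously instead of by bisection on a single variable.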
Missing mass calculator as a technique to reconstruct the mass of resonances decaying into tau pairs
Energy Technology Data Exchange (ETDEWEB)
Blumenschein, Ulla; De Maria, Antonio; Quadt, Arnulf; Zinonos, Zinonas [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)
2016-07-01
An accurate reconstruction of the mass of a resonance decaying into a pair of tau leptons is a difficult task because of the multiple undetected neutrinos from the tau decays. The Missing Mass Calculator (MMC) is a sophisticated method to optimise the reconstruction of these events. It is based on the requirement that the mutual orientations of the neutrinos and other decay products be consistent with the mass and decay kinematics of a tau lepton. This is achieved by minimizing a likelihood function defined in the kinematically allowed phase-space region. The MMC was one of the most powerful tools used in SM Higgs to tau tau searches in Run 1 at the LHC. Now, in Run 2, the LHC collides protons at a centre-of-mass energy of √(s) = 13 TeV and at higher luminosity. The analysis tools therefore need to be optimised for the new experimental conditions. Amongst these tools, the MMC has to be retuned in order to again play a key role in searches for the Higgs boson in di-tau final states. This talk outlines the main aspects of the MMC retuning and its impact on performance.
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and the calculation of the underlying chemical formula. Inspection of the data by experts using computational tools that solve partial problems (e.g., chemical formula calculation for individual ions) can disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe a procedure for robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints, and both the spectra interpretation and the chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive- and negative-polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M+H]⁺ or [M−H]⁻) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
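As a hedged illustration of one of the partial problems mentioned, chemical formula calculation for a single ion, a brute-force search over small CHNO compositions can stand in for the optimization; the element set, atom ranges and tolerance below are assumptions for illustration, and the actual RAMSI procedure instead solves the full interpretation problem as one mixed integer linear program:

```python
# Illustrative sketch (not the RAMSI code): enumerate small CHNO
# formulas whose monoisotopic mass matches a measured neutral mass
# within a ppm tolerance.
from itertools import product

MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def candidate_formulas(neutral_mass, tol_ppm=5.0, max_atoms=(10, 20, 5, 6)):
    """Return (nC, nH, nN, nO) tuples within tol_ppm of neutral_mass."""
    tol = neutral_mass * tol_ppm * 1e-6
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_atoms)):
        m = (c * MONOISOTOPIC["C"] + h * MONOISOTOPIC["H"]
             + n * MONOISOTOPIC["N"] + o * MONOISOTOPIC["O"])
        if abs(m - neutral_mass) <= tol:
            hits.append((c, h, n, o))
    return hits

# Glycine (C2H5NO2) has monoisotopic mass ~75.03203 Da.
print(candidate_formulas(75.03203))  # includes (2, 5, 1, 2), i.e. C2H5NO2
```

A MILP formulation replaces this enumeration with linear constraints over the atom counts, which is what makes the joint interpretation of many related ions tractable.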
Rammelmüller, Lukas; Porter, William J.; Drut, Joaquín E.; Braun, Jens
2017-11-01
The calculation of the ground state and thermodynamics of mass-imbalanced Fermi systems is a challenging many-body problem. Even in one spatial dimension, analytic solutions are limited to special configurations and numerical progress with standard Monte Carlo approaches is hindered by the sign problem. The focus of the present work is on the further development of methods to study imbalanced systems in a fully nonperturbative fashion. We report our calculations of the ground-state energy of mass-imbalanced fermions using two different approaches which are also very popular in the context of the theory of the strong interaction (quantum chromodynamics, QCD): (a) the hybrid Monte Carlo algorithm with imaginary mass imbalance, followed by an analytic continuation to the real axis; and (b) the complex Langevin algorithm. We cover a range of on-site interaction strengths that includes strongly attractive as well as strongly repulsive cases which we verify with nonperturbative renormalization group methods and perturbation theory. Our findings indicate that, for strong repulsive couplings, the energy starts to flatten out, implying interesting consequences for short-range and high-frequency correlation functions. Overall, our results clearly indicate that the complex Langevin approach is very versatile and works very well for imbalanced Fermi gases with both attractive and repulsive interactions.
Forensic analysis of explosions: Inverse calculation of the charge mass
Voort, M.M. van der; Wees, R.M.M. van; Brouwer, S.D.; Jagt-Deutekom, M.J. van der; Verreault, J.
2015-01-01
Forensic analysis of explosions consists of determining the point of origin, the explosive substance involved, and the charge mass. Within the EU FP7 project Hyperion, TNO developed the Inverse Explosion Analysis (TNO-IEA) tool to estimate the charge mass and point of origin based on observed damage
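A minimal sketch of the physics behind such an inverse calculation, assuming simple Hopkinson-Cranz cube-root scaling; the damage-level scaled distances below are rough, illustrative values, and the TNO-IEA tool itself is far more sophisticated (and treats uncertainty explicitly):

```python
# Hedged sketch: invert cube-root blast scaling Z = R / W^(1/3) to
# estimate a TNT-equivalent charge mass W from the radius at which a
# given damage level is observed. The Z thresholds are illustrative
# assumptions, not values from the TNO-IEA tool.

DAMAGE_Z = {                       # scaled distance (m / kg^(1/3))
    "window breakage": 20.0,
    "moderate house damage": 7.0,
    "severe structural damage": 3.0,
}

def charge_mass_from_damage(radius_m, damage_level):
    """Estimate TNT-equivalent mass W (kg) from an observed damage radius."""
    z = DAMAGE_Z[damage_level]
    return (radius_m / z) ** 3     # W = (R / Z)^3

# Severe structural damage observed out to ~30 m:
print(charge_mass_from_damage(30.0, "severe structural damage"))  # 1000.0 kg
```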
Ab initio calculation of the neutron-proton mass difference
Borsanyi, Sz.; Durr, S.; Fodor, Z.; Hoelbling, C.; Katz, S. D.; Krieg, S.; Lellouch, L.; Lippert, T.; Portelli, A.; Szabo, K. K.; Toth, B. C.
2015-03-01
The existence and stability of atoms rely on the fact that neutrons are more massive than protons. The measured mass difference is only 0.14% of the average of the two masses. A slightly smaller or larger value would have led to a dramatically different universe. Here, we show that this difference results from the competition between electromagnetic and mass isospin breaking effects. We performed lattice quantum-chromodynamics and quantum-electrodynamics computations with four nondegenerate Wilson fermion flavors and computed the neutron-proton mass splitting with an accuracy of 300 kilo-electron volts; the result is greater than zero by 5 standard deviations. We also determined the splittings in the Σ, Ξ, D, and Ξ_cc isospin multiplets, in some cases exceeding the precision of experimental measurements.
Analytic calculations of masses in Hamiltonian lattice theories
International Nuclear Information System (INIS)
Horn, D.
1985-01-01
The t-expansion of the vacuum energy function is discussed and several relations involving the connected matrix elements of powers of the Hamiltonian are established. On the basis of these relations we show that the masses of the lowest-lying 0⁺⁺ states can be expressed as ratios of derivatives of the energy function. Other sectors of Hilbert space are discussed, and a recent result for the SU(2) glueball mass, derived by using the relations described here, is briefly reviewed. (author)
Implications of improved Higgs mass calculations for supersymmetric models.
Buchmueller, O; Dolan, M J; Ellis, J; Hahn, T; Heinemeyer, S; Hollik, W; Marrouche, J; Olive, K A; Rzehak, H; de Vries, K J; Weiglein, G
We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M_h, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyse the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B_s → μ⁺μ⁻) and ATLAS searches for missing-E_T events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours low tan β, though not in the NUHM1 or NUHM2.
Implications of improved Higgs mass calculations for supersymmetric models
Energy Technology Data Exchange (ETDEWEB)
Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Dolan, M.J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Theory Group; Ellis, J. [King's College, London (United Kingdom). Theoretical Particle Physics and Cosmology Group; and others
2014-03-15
We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M_h, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyze the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B_s → μ⁺μ⁻) and ATLAS searches for missing-E_T events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β
Status of glueball mass calculations in lattice gauge theory
International Nuclear Information System (INIS)
Kronfeld, A.S.
1989-11-01
The status of glueball spectrum calculations in lattice gauge theory is briefly reviewed, with focus on the comparison between Monte Carlo simulations and small-volume analytical calculations in SU(3). The agreement gives confidence that the large-volume Monte Carlo results are accurate, at least in the context of the pure gauge theory. An overview of some of the technical questions, which is aimed at non-experts, serves as an introduction. 19 refs., 1 fig
A review of Higgs mass calculations in supersymmetric models
DEFF Research Database (Denmark)
Draper, P.; Rzehak, H.
2016-01-01
The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those...... related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass...
Ab initio calculation of the neutron-proton mass difference
CERN. Geneva
2015-01-01
The existence and stability of atoms rely on the fact that neutrons are more massive than protons. The mass difference is only 0.14% of the average and has significant astrophysical and cosmological implications. A slightly smaller or larger value would have led to a dramatically different universe. After an introduction to the problem and to lattice quantum chromodynamics (QCD), I will show how this difference can be computed precisely by carefully accounting for electromagnetic and mass isospin breaking effects in lattice computations. I will also report on results for splittings in the Σ, Ξ, D and Ξ_cc isospin multiplets, some of which are predictions. The computations are performed in lattice QCD plus QED with four non-degenerate quark flavors.
International Nuclear Information System (INIS)
Núñez, M A; Mendoza, R
2015-01-01
Several methods to estimate the velocity field of atmospheric flows have been proposed to date for applications such as emergency response systems, transport calculations, and budget studies of all kinds. These applications require a wind field that satisfies the conservation of mass but, in general, estimated wind fields do not exactly satisfy the continuity equation. An approach to reduce the effect of using a divergent wind field as input to the transport-diffusion equations was proposed in the literature. In this work, a local linear analysis of a wind field is used to show analytically that the perturbation of a large-scale nondivergent flow can yield a divergent flow with a substantially different structure. The effects of these structural changes on transport calculations are illustrated by means of analytic solutions of the transport equation.
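The central observation can be illustrated numerically with a hedged sketch: a solid-body rotation (u = -y, v = x) is nondivergent, but a simple linear perturbation of u makes the flow divergent. The grid, the field and the perturbation below are illustrative choices, not taken from the article:

```python
# Hedged sketch: perturbing a nondivergent flow yields a divergent one.
# Divergence is computed by central differences on a uniform grid.

def divergence(u, v, x, y):
    """Central-difference div(u, v) at interior points of a uniform grid."""
    dx = x[1] - x[0]
    dy = y[1] - y[0]
    n, m = len(y), len(x)
    div = [[0.0] * m for _ in range(n)]
    for j in range(1, n - 1):
        for i in range(1, m - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * dx)
            dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * dy)
            div[j][i] = dudx + dvdy
    return div

x = [i * 0.5 for i in range(9)]
y = [j * 0.5 for j in range(9)]
u0 = [[-yj for _ in x] for yj in y]               # rotation: u = -y
v0 = [[xi for xi in x] for _ in y]                # rotation: v = x
up = [[-yj + 0.1 * xi for xi in x] for yj in y]   # perturbed u

print(divergence(u0, v0, x, y)[4][4])  # ~0.0: nondivergent
print(divergence(up, v0, x, y)[4][4])  # ~0.1: divergent after perturbation
```

Feeding a field like `up` into a transport-diffusion solver without a mass-consistent adjustment is exactly the situation whose consequences the paper analyzes.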
Variational approach to thermal masses in compactified models
Energy Technology Data Exchange (ETDEWEB)
Dominici, Daniele [Dipartimento di Fisica e Astronomia Università di Firenze and INFN - Sezione di Firenze,Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Roditi, Itzhak [Centro Brasileiro de Pesquisas Físicas - CBPF/MCT,Rua Dr. Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ (Brazil)
2015-08-20
We investigate by means of a variational approach the effective potential of a 5D U(1) scalar model at finite temperature, compactified on S¹ and S¹/Z₂, as well as the corresponding 4D model obtained through a trivial dimensional reduction. We are particularly interested in the behavior of the thermal masses of the scalar field with respect to the Wilson line phase, and the results obtained are compared with those coming from a one-loop effective potential calculation. We also explore the nature of the phase transition.
Approach to equilibrium calculations for the Dragon HTR design
Energy Technology Data Exchange (ETDEWEB)
Hansen, U
1971-06-10
The calculational methods and the model used to represent the core and the fuel management operations are described. Different layouts of the first core and approach-to-equilibrium schemes for the Dragon HTR design are investigated. A simple fuelling mode is found, and its technological and economic implications are discussed in detail.
Bayesian inference in mass flow simulations - from back calculation to prediction
Kofler, Andreas; Fischer, Jan-Thomas; Hellweger, Valentin; Huber, Andreas; Mergili, Martin; Pudasaini, Shiva; Fellin, Wolfgang; Oberguggenberger, Michael
2017-04-01
Mass flow simulations are an integral part of hazard assessment. Determining the hazard potential requires a multidisciplinary approach, including scientific fields such as geomorphology, meteorology, physics, civil engineering and mathematics. An important task in snow avalanche simulation is to predict process intensities (runout, flow velocity and depth, ...). The application of probabilistic methods allows one to develop a comprehensive simulation concept, ranging from back calculation to forward calculation and finally to prediction of mass flow events. In this context, optimized parameter sets for the simulation model used, or intensities of the modeled mass flow process (e.g. runout distances), are represented by probability distributions. Existing deterministic flow models, in particular with respect to snow avalanche dynamics, contain several parameters (e.g. friction). Some of these parameters are more conceptual than physical, and their direct measurement in the field is hardly possible. Hence, parameters have to be optimized by matching simulation results to field observations. This inverse problem can be solved by a Bayesian approach (Markov chain Monte Carlo). The optimization process yields parameter distributions that can be utilized for probabilistic reconstruction and prediction of avalanche events. Arising challenges include the limited number of observations, correlations among model parameters or observed avalanche characteristics (e.g. velocity and runout), and the accurate handling of ensemble simulations, always taking into account the related uncertainties. Here we present an operational Bayesian simulation framework built on r.avaflow, the open-source GIS simulation model for granular avalanches and debris flows.
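A toy version of the Bayesian back calculation described above, assuming a purely illustrative one-parameter runout model r(mu) = L/mu rather than the real avalanche dynamics of r.avaflow; a Metropolis-Hastings chain recovers a friction-like parameter from an observed runout:

```python
# Hedged sketch: Metropolis-Hastings back calculation of a hypothetical
# friction parameter mu from one observed runout. Model and numbers are
# illustrative assumptions, not the r.avaflow physics.
import math
import random

def log_likelihood(mu, r_obs, L=100.0, sigma=5.0):
    """Gaussian mismatch between modelled runout L/mu and observation."""
    r_model = L / mu
    return -((r_model - r_obs) ** 2) / (2 * sigma ** 2)

def metropolis(r_obs, n_iter=20000, step=0.02, seed=1):
    rng = random.Random(seed)
    mu = 0.5                       # starting value inside the prior range
    chain = []
    for _ in range(n_iter):
        prop = mu + rng.gauss(0.0, step)
        if 0.05 < prop < 1.0:      # flat prior on a physical range
            if math.log(rng.random()) < (log_likelihood(prop, r_obs)
                                         - log_likelihood(mu, r_obs)):
                mu = prop
        chain.append(mu)
    return chain[n_iter // 4:]     # discard burn-in

chain = metropolis(r_obs=250.0)    # a "true" mu of 0.4 gives runout 250
print(sum(chain) / len(chain))     # posterior mean near 0.4
```

The retained chain is a sample from the posterior over mu, so the same samples that reconstruct the back-calculated event can be pushed forward through the model to predict intensities with uncertainty.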
Raznikova, M O; Raznikov, V V
2015-01-01
In this work, information on the charge states of biomolecule ions in solution, obtained using electrospray ionization mass spectrometry of different biopolymers, is analyzed. The data analyses have mainly been carried out by solving an inverse problem: calculating the probabilities of retention of protons and other charge carriers by ionogenic groups of biomolecules with known primary structures. The approach is new and has, to our knowledge, no analogues. A program titled "Decomposition" was developed and used to analyze the charge distribution of ions in mass spectra of native and denatured cytochrome c. The possibility of splitting the charge-state distribution of albumin into normal components, which likely correspond to various conformational states of the biomolecule, has been demonstrated. The applicability criterion for using the previously described method of decomposition of multidimensional charge-state distributions with two charge carriers, e.g., a proton and a sodium ion, to characterize the spatial structure of biopolymers in solution has been formulated. In contrast to known mass-spectrometric approaches, this method does not require enzymatic hydrolysis or collision-induced dissociation of the biopolymers.
Calculation of wind turbine aeroelastic behaviour. The Garrad Hassan approach
Energy Technology Data Exchange (ETDEWEB)
Quarton, D C [Garrad Hassan and Partners Ltd., Bristol (United Kingdom)
1996-09-01
The Garrad Hassan approach to the prediction of wind turbine loading and response has been developed over the last decade. The goal of this development has been to produce calculation methods that contain realistic representation of the wind, include sensible aerodynamic and dynamic models of the turbine and can be used to predict fatigue and extreme loads for design purposes. The Garrad Hassan calculation method is based on a suite of four key computer programs: WIND3D for generation of the turbulent wind field; EIGEN for modal analysis of the rotor and support structure; BLADED for time domain calculation of the structural loads; and SIGNAL for post-processing of the BLADED predictions. The interaction of these computer programs is illustrated. A description of the main elements of the calculation method will be presented. (au)
Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass
Energy Technology Data Exchange (ETDEWEB)
Bahl, Henning; Hollik, Wolfgang [Max-Planck Institut fuer Physik, Munich (Germany); Heinemeyer, Sven [Campus of International Excellence UAM+CSIC, Madrid (Spain); Universidad Autonoma de Madrid, Instituto de Fisica Teorica, (UAM/CSIC), Madrid (Spain); Instituto de Fisica Cantabria (CSIC-UC), Santander (Spain); Weiglein, Georg [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)
2018-01-15
Various methods are used in the literature for predicting the lightest CP-even Higgs boson mass in the Minimal Supersymmetric Standard Model (MSSM). Fixed-order diagrammatic calculations capture all effects at a given order and yield accurate results for scales of supersymmetric (SUSY) particles that are not separated too much from the weak scale. Effective field theory calculations allow a resummation of large logarithmic contributions up to all orders and therefore yield accurate results for a high SUSY scale. A hybrid approach, where both methods have been combined, is implemented in the computer code FeynHiggs. So far, however, at large scales sizeable differences have been observed between FeynHiggs and other pure EFT codes. In this work, the various approaches are analytically compared with each other in a simple scenario in which all SUSY mass scales are chosen to be equal to each other. Three main sources are identified that account for the major part of the observed differences. Firstly, it is shown that the scheme conversion of the input parameters that is commonly used for the comparison of fixed-order results is not adequate for the comparison of results containing a series of higher-order logarithms. Secondly, the treatment of higher-order terms arising from the determination of the Higgs propagator pole is addressed. Thirdly, the effect of different parametrizations in particular of the top Yukawa coupling in the non-logarithmic terms is investigated. Taking into account all of these effects, in the considered simple scenario very good agreement is found for scales above 1 TeV between the results obtained using the EFT approach and the hybrid approach of FeynHiggs. (orig.)
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three glycine residues. We selected glycine labelled with two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% ¹³C₂-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1 : 1, 1 : 3 and 3 : 1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
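The regression idea can be sketched in a stripped-down form, assuming the observed isotope pattern is a linear mix of a natural-abundance pattern and a fully labelled pattern; the three patterns below are made-up numbers, not data from the paper, and the actual method additionally fits the precursor pool enrichment:

```python
# Hedged sketch: fractional synthesis by least squares on the mixing
# model observed = f*labelled + (1-f)*natural. Patterns are invented.

def fractional_synthesis(observed, natural, labelled):
    """Closed-form least-squares estimate of the mixing fraction f."""
    num = sum((o - n) * (l - n)
              for o, n, l in zip(observed, natural, labelled))
    den = sum((l - n) ** 2 for n, l in zip(natural, labelled))
    return num / den

natural  = [0.70, 0.20, 0.10]    # hypothetical isotopologue abundances
labelled = [0.10, 0.30, 0.60]
observed = [0.55, 0.225, 0.225]  # constructed as 25% labelled + 75% natural

print(fractional_synthesis(observed, natural, labelled))  # ~0.25
```

With the precursor pool enrichment also unknown, the labelled pattern itself becomes a function of a second parameter, and the single-fraction fit above generalizes to the multiple linear regression the authors describe.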
Localized-overlap approach to calculations of intermolecular interactions
Rob, Fazle
Symmetry-adapted perturbation theory (SAPT) based on the density functional theory (DFT) description of the monomers [SAPT(DFT)] is one of the most robust tools for computing intermolecular interaction energies. Currently, one can use the SAPT(DFT) method to calculate interaction energies of dimers consisting of about a hundred atoms. To remove the methodological and technical limits and extend the size of the systems that can be calculated with the method, a novel approach has been proposed that redefines the electron densities and polarizabilities in a localized way. In the new method, accurate but computationally expensive quantum-chemical calculations are applied only in the regions where they are necessary; in other regions, where overlap effects of the wave functions are negligible, inexpensive asymptotic techniques are used. Unlike other hybrid methods, this new approach is mathematically rigorous. Its main benefit is that the calculation scales linearly with system size, and the approach is therefore denoted local-overlap SAPT(DFT), or LSAPT(DFT). As a byproduct of developing LSAPT(DFT), some important problems concerning distributed molecular response, in particular the unphysical charge-flow terms, were eliminated. Additionally, to illustrate the capabilities of SAPT(DFT), a potential energy function has been developed for an energetic molecular crystal of 1,1-diamino-2,2-dinitroethylene (FOX-7), where excellent agreement with the experimental data has been found.
Calculability of the n-p mass difference in gauge theories
International Nuclear Information System (INIS)
Kiskis, J.
1980-01-01
The requirement of a calculable n-p mass difference leads to a consideration of unified gauge theories. Future developments in grand unified models may provide a realistic framework for the calculation of the n-p mass difference. The possibility that the relatively soft ultraviolet behavior of QCD softens the divergence in the lowest-order electromagnetic mass shift is considered in detail. It is shown that, if the bare mass and QCD coupling are constrained to be independent of the electromagnetic coupling, as is natural, then the lowest-order electromagnetic shifts of the renormalized mass and QCD coupling are infinite
Precise Higgs mass calculations in (non-)minimal supersymmetry at both high and low scales
Energy Technology Data Exchange (ETDEWEB)
Athron, Peter [Monash Univ., Victoria (Australia). School of Physics and Astronomy; Park, Jae-hyeon [Korea Institute for Advanced Study, Seoul (Korea, Republic of). Quantum Universe Center; Steudtner, Tom; Stoeckinger, Dominik [TU Dresden (Germany). Inst. fuer Kern- und Teilchenphysik; Voigt, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-09-15
We present FlexibleEFTHiggs, a method for calculating the SM-like Higgs pole mass in SUSY (and even non-SUSY) models, which combines an effective field theory approach with a diagrammatic calculation. It thus achieves an all-order resummation of leading logarithms together with the inclusion of all non-logarithmic 1-loop contributions. We implement this method into FlexibleSUSY and study its properties in the MSSM, NMSSM, E₆SSM and MRSSM. In the MSSM, it correctly interpolates between the known results of effective field theory calculations in the literature for a high SUSY scale and fixed-order calculations in the full theory for a sub-TeV SUSY scale. We compare our MSSM results to those from public codes and identify the origin of the most significant deviations between the DR-bar programs. We then perform a similar comparison in the remaining three non-minimal models. For all four models we estimate the theoretical uncertainty of FlexibleEFTHiggs and the fixed-order DR-bar programs, thereby finding that the former becomes more precise than the latter for a SUSY scale above a few TeV. Even for sub-TeV SUSY scales, FlexibleEFTHiggs maintains an uncertainty estimate of around 2-3 GeV, remaining a competitive alternative to existing fixed-order computations.
Precise Higgs mass calculations in (non-)minimal supersymmetry at both high and low scales
Energy Technology Data Exchange (ETDEWEB)
Athron, Peter [ARC Centre of Excellence for Particle Physics at the Terascale,School of Physics and Astronomy, Monash University,Melbourne, Victoria 3800 (Australia); Park, Jae-hyeon [Quantum Universe Center, Korea Institute for Advanced Study,85 Hoegiro Dongdaemungu, Seoul 02455 (Korea, Republic of); Steudtner, Tom; Stöckinger, Dominik [Institut für Kern- und Teilchenphysik, TU Dresden,Zellescher Weg 19, 01069 Dresden (Germany); Voigt, Alexander [Deutsches Elektronen-Synchrotron DESY,Notkestraße 85, 22607 Hamburg (Germany)
2017-01-18
We present FlexibleEFTHiggs, a method for calculating the SM-like Higgs pole mass in SUSY (and even non-SUSY) models, which combines an effective field theory approach with a diagrammatic calculation. It thus achieves an all-order resummation of leading logarithms together with the inclusion of all non-logarithmic 1-loop contributions. We implement this method into FlexibleSUSY and study its properties in the MSSM, NMSSM, E₆SSM and MRSSM. In the MSSM, it correctly interpolates between the known results of effective field theory calculations in the literature for a high SUSY scale and fixed-order calculations in the full theory for a sub-TeV SUSY scale. We compare our MSSM results to those from public codes and identify the origin of the most significant deviations between the DR-bar programs. We then perform a similar comparison in the remaining three non-minimal models. For all four models we estimate the theoretical uncertainty of FlexibleEFTHiggs and the fixed-order DR-bar programs, thereby finding that the former becomes more precise than the latter for a SUSY scale above a few TeV. Even for sub-TeV SUSY scales, FlexibleEFTHiggs maintains the uncertainty estimate around 2–3 GeV, remaining a competitive alternative to existing fixed-order computations.
Calculation of Post-Closure Natural Convection Heat and Mass Transfer in Yucca Mountain Drifts
International Nuclear Information System (INIS)
Webb, S.; Itamura, M.
2004-01-01
Natural convection heat and mass transfer under post-closure conditions has been calculated for Yucca Mountain drifts using the computational fluid dynamics (CFD) code FLUENT. Calculations have been performed for 300, 1000, 3000, and 10,000 years after repository closure. Effective dispersion coefficients that can be used to calculate mass transfer in the drift have been evaluated as a function of time and boundary temperature tilt
A new approach to the electron self energy calculation
International Nuclear Information System (INIS)
Persson, H.; Lindgren, I.; Salomonson, S.
1993-01-01
We present a new practical way to calculate the first-order self energy in any model potential (local or non-local). The main idea is to introduce a new, straightforward way of renormalization that avoids the usual potential expansion, which implies a large number of diagrams in higher-order QED effects. The renormalization procedure is based on defining the divergent mass term in coordinate space and decomposing it into a divergent sum over finite partial wave contributions. The unrenormalized bound self energy is equally decomposed into a partial wave (l) sum. For each partial wave the difference is taken, and the sum becomes convergent, with contributions falling off asymptotically as l⁻³. The method is applied to lithium-like uranium, and the self energy in a Coulomb field, the finite nucleus effect and the screened self energy are calculated to an accuracy of at least one tenth of an eV. (orig.)
Calculation of gas migration in fractured rock - a continuum approach
International Nuclear Information System (INIS)
Braester, C.
1987-09-01
A study of gas migration from low-level radioactive repositories, in which the fractured rock mass was conceptualized as a continuum, was carried out with the aid of a computer program based on a finite difference numerical solution of the equations of flow. The calculations are intended to correspond to the conditions prevailing in the Forsmark low-level repository area, where radioactive waste repository caverns are planned to be located at a depth of about 50 metres below sea level. Calculations were worked out for a constant gas flow rate equivalent to a gas production of 20 000 normal cubic metres per year. The investigated flow domain was a vertical cross-section passing through the repository. The results show that the gas formed in the empty cavern moves almost instantaneously upward and accumulates below the roof of the cavern. (orig./DG)
Fatigue approach for addressing environmental effects in fatigue usage calculation
Energy Technology Data Exchange (ETDEWEB)
Wilhelm, Paul; Rudolph, Juergen [AREVA GmbH, Erlangen (Germany); Steinmann, Paul [Erlangen-Nuremberg Univ., Erlangen (Germany). Chair of Applied Mechanics
2015-04-15
Laboratory tests consider simple trapezoidal, triangular, and sinusoidal signals. However, actual plant components are characterized by complex loading patterns and periods of hold. Fatigue tests in a water environment show that the damage from a realistic strain variation, or the presence of hold times within cyclic loading, results in an environmental reduction factor (Fen) only half that of a simple waveform. This study proposes a new fatigue approach for addressing environmental effects in the fatigue usage calculation for class 1 boiler and pressure vessel reactor components. The currently accepted method of fatigue assessment has been used as a base model, and all cycles comparable with the realistic fatigue tests have been excluded from the code-based fatigue calculation and evaluated directly against the test data. The results presented show that the engineering approach can be successfully integrated into the code-based fatigue assessment. The cumulative usage factor can be reduced considerably.
Fatigue approach for addressing environmental effects in fatigue usage calculation
International Nuclear Information System (INIS)
Wilhelm, Paul; Rudolph, Juergen; Steinmann, Paul
2015-01-01
Laboratory tests consider simple trapezoidal, triangular, and sinusoidal signals. However, actual plant components are characterized by complex loading patterns and periods of hold. Fatigue tests in a water environment show that the damage from a realistic strain variation, or the presence of hold times within cyclic loading, results in an environmental reduction factor (Fen) only half that of a simple waveform. This study proposes a new fatigue approach for addressing environmental effects in the fatigue usage calculation for class 1 boiler and pressure vessel reactor components. The currently accepted method of fatigue assessment has been used as a base model, and all cycles comparable with the realistic fatigue tests have been excluded from the code-based fatigue calculation and evaluated directly against the test data. The results presented show that the engineering approach can be successfully integrated into the code-based fatigue assessment. The cumulative usage factor can be reduced considerably.
A new approach for soil-plant transfer calculations
International Nuclear Information System (INIS)
Dorp, F. van; Eleveld, R.; Frissel, M.J.
1979-01-01
Models to calculate radiation doses to man caused by normal or accidental releases of radionuclides from nuclear industries often include the transfer of these nuclides from soil to plant. This soil-plant transfer is mostly described with a black-box approach using concentration factors. This approach has several disadvantages, the most important being the lack of physical meaning of a concentration factor. We propose to describe the soil-plant transfer of radionuclides as a function of plant and soil parameters, all having a physical meaning. The separate parameters are open to experimental determination, but a realistic estimation of the parameters, or a combination of both, is also possible. Depending on the purpose of the calculation, realistic or conservative values of the parameters can be used, and the degree of conservatism can be indicated. (author)
International Nuclear Information System (INIS)
Marguet, S.D.
1993-01-01
Owing to the prevalence in France of nuclear-generated electricity, the French utility EDF focuses much research on fuel cycle strategy. In this context, the analysis of scenarios combining problems related to planning and economics, but also reactor physics, necessitates a relatively thorough understanding of fuel response to irradiation. The main purpose of the fuel strategy program codes is to predict mass balance modifications with time for the main actinides involved in the cycle, including the minor actinides associated with the current back-end fuel cycle key issues. Considering the large number of calculations performed by a strategy code in an iterative process covering a range of about a hundred years, it was important to develop basic computation modules for both the ''reactor'' and ''fabrication'' items. These had to be high-speed routines, but at an accuracy level compatible with the strategy code efficiency. At the end of 1992, the EDF Research and Development Division (EDF/DER) developed a very simple, extremely fast method of calculating transuranic isotope masses. This approach, which resulted in the STRAPONTIN software, considerably increased the scope of the EDF/DER fuel strategy code TIRELIRE without undue impairment of machine time requirements for a scenario. (author). 2 figs., 2 tabs., 3 refs
Energy Technology Data Exchange (ETDEWEB)
Bahl, Henning; Hollik, Wolfgang [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Munich (Germany)
2016-09-15
In the Minimal Supersymmetric Standard Model heavy superparticles introduce large logarithms in the calculation of the lightest CP-even Higgs-boson mass. These logarithmic contributions can be resummed using effective field theory techniques. For light superparticles, however, fixed-order calculations are expected to be more accurate. To gain a precise prediction also for intermediate mass scales, the two approaches have to be combined. Here, we report on an improvement of this method in various steps: the inclusion of electroweak contributions, of separate electroweakino and gluino thresholds, as well as resummation at the NNLL level. These improvements can lead to significant numerical effects. In most cases, the lightest CP-even Higgs-boson mass is shifted downwards by about 1 GeV. This is mainly caused by higher-order corrections to the MS-bar top-quark mass. We also describe the implementation of the new contributions in the code FeynHiggs. (orig.)
New approaches for metabolomics by mass spectrometry
Energy Technology Data Exchange (ETDEWEB)
Vertes, Akos [George Washington Univ., Washington, DC (United States)
2017-07-10
Small molecules constitute a large part of the world around us, including fossil and some renewable energy sources. Solar energy harvested by plants and bacteria is converted into energy rich small molecules on a massive scale. Some of the worst contaminants of the environment and compounds of interest for national security also fall in the category of small molecules. The development of large scale metabolomic analysis methods lags behind the state of the art established for genomics and proteomics. This is commonly attributed to the diversity of molecular classes included in a metabolome. Unlike nucleic acids and proteins, metabolites do not have standard building blocks, and, as a result, their molecular properties exhibit a wide spectrum. This impedes the development of dedicated separation and spectroscopic methods. Mass spectrometry (MS) is a strong contender in the quest for a quantitative analytical tool with extensive metabolite coverage. Although various MS-based techniques are emerging for metabolomics, many of these approaches include extensive sample preparation that make large scale studies resource intensive and slow. New ionization methods are redefining the range of analytical problems that can be solved using MS. This project developed new approaches for the direct analysis of small molecules in unprocessed samples, as well as pushed the limits of ultratrace analysis in volume limited complex samples. The projects resulted in techniques that enabled metabolomics investigations with enhanced molecular coverage, as well as the study of cellular response to stimuli on a single cell level. Effectively individual cells became reaction vessels, where we followed the response of a complex biological system to external perturbation. We established two new analytical platforms for the direct study of metabolic changes in cells and tissues following external perturbation. For this purpose we developed a novel technique, laser ablation electrospray
A matched expansion approach to practical self-force calculations
International Nuclear Information System (INIS)
Anderson, Warren G; Wiseman, Alan G
2005-01-01
We discuss a practical method of computing the self-force on a particle moving through a curved spacetime. This method involves two expansions to calculate the self-force, one arising from the particle's immediate past and the other from the more distant past. The expansion in the immediate past is a covariant Taylor series and can be carried out for all geometries. The more distant expansion is a mode sum, and may be carried out in those cases where the wave equation for the field mediating the self-force admits a mode expansion of the solution. In particular, this method can be used to calculate the gravitational self-force for a particle of mass μ orbiting a black hole of mass M to order μ², provided μ/M ≪ 1. We discuss how to use these two expansions to construct a full self-force, and in particular investigate criteria for matching the two expansions. As with all methods of computing self-forces for particles moving in black hole spacetimes, one encounters considerable technical difficulty in applying this method; nevertheless, it appears that the convergence of each series is good enough that a practical implementation may be plausible
Orifice Mass Flow Calculation in NASA's W-8 Single Stage Axial Compressor Facility
Bozak, Richard F.
2018-01-01
Updates to the orifice mass flow calculation for the W-8 Single Stage Axial Compressor Facility at NASA Glenn Research Center are provided to include the effect of humidity and incorporate ISO 5167. A methodology for including the effect of humidity in the inlet orifice mass flow calculation is provided. Orifice mass flow calculations given by ASME PTC-19.5-2004, ASME MFC-3M-2004, ASME Fluid Meters, and ISO 5167 are compared for W-8's atmospheric inlet orifice plate. Differences in the expansion factor and discharge coefficient given by these standards yield a variation of about ±75% in mass flow, except for a few cases. A comparison of the calculations with an inlet static pressure mass flow correlation and a fan exit mass flow integration, using test data from a 2017 turbofan rotor test in W-8, shows good agreement between the inlet static pressure mass flow correlation, ISO 5167, and ASME Fluid Meters. While W-8's atmospheric inlet orifice plate violates the pipe diameter limit defined by each of the standards, ISO 5167 is chosen as the primary orifice mass flow calculation to use in the W-8 facility.
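All of the compared standards build on the same general orifice equation, differing mainly in their correlations for the discharge coefficient and expansion factor. A minimal sketch of that equation with fixed coefficients and illustrative numbers (not W-8 data, and not the iterative ISO 5167 discharge-coefficient correlation):

```python
import math

def orifice_mass_flow(C, eps, d, D, dp, rho):
    """General orifice equation shared by the compared standards:
    qm = C / sqrt(1 - beta**4) * eps * (pi/4) * d**2 * sqrt(2 * dp * rho)
    C: discharge coefficient, eps: expansion factor, d: bore [m],
    D: pipe diameter [m], dp: differential pressure [Pa],
    rho: upstream fluid density [kg/m^3] (humidity enters through rho)."""
    beta = d / D
    return (C / math.sqrt(1.0 - beta ** 4)) * eps \
        * (math.pi / 4.0) * d ** 2 * math.sqrt(2.0 * dp * rho)

# Illustrative values only:
qm = orifice_mass_flow(C=0.6, eps=1.0, d=0.1, D=0.2, dp=2500.0, rho=1.2)
```

In practice the standards compute C and eps from Reynolds number, beta ratio and pressure ratio, which is where their mass-flow predictions diverge.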
Effective source approach to self-force calculations
International Nuclear Information System (INIS)
Vega, Ian; Wardell, Barry; Diener, Peter
2011-01-01
Numerical evaluation of the self-force on a point particle is made difficult by the use of delta functions as sources. Recent methods for self-force calculations avoid delta functions altogether, using instead a finite and extended 'effective source' for a point particle. We provide a review of the general principles underlying this strategy, using the specific example of a scalar point charge moving in a black hole spacetime. We also report on two new developments: (i) the construction and evaluation of an effective source for a scalar charge moving along a generic orbit of an arbitrary spacetime, and (ii) the successful implementation of hyperboloidal slicing that significantly improves on previous treatments of boundary conditions used for effective-source-based self-force calculations. Finally, we identify some of the key issues related to the effective source approach that will need to be addressed by future work.
Calculation of isotopic mass and energy production by a matrix operator method
International Nuclear Information System (INIS)
Lee, C.E.
1976-08-01
The Volterra method of the multiplicative integral is used to determine the isotopic density, mass, and energy production in linear systems. The solution method, assumptions, and limitations are discussed. The method allows a rapid accurate calculation of the change in isotopic density, mass, and energy production independent of the magnitude of the time steps, production or decay rates, or flux levels
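A matrix operator solution of the linear depletion equations dN/dt = A·N can be illustrated with a matrix exponential, which is step-size independent in the sense the abstract describes. The two-member decay chain and the series-based exponential below are a hedged sketch under invented parameters, not the Volterra implementation of the report:

```python
import numpy as np

def mat_exp(M, terms=40):
    """Truncated Taylor series for the matrix exponential (adequate for
    small ||M||; production codes would use scaling-and-squaring)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical two-member chain: parent (lam1) -> stable daughter.
lam1 = 0.1                      # decay constant, illustrative units
A = np.array([[-lam1, 0.0],
              [ lam1, 0.0]])    # dN/dt = A @ N
N0 = np.array([1.0, 0.0])       # initial isotopic densities
t = 5.0
N = mat_exp(A * t) @ N0         # one operator step, any step size
```

Because the propagator is exact for the linear system, the result matches the analytic solution N₁(t) = exp(-λ₁t) regardless of how large t is.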
The calculation of nucleus-nucleus interaction cross sections at high energy in the Glauber approach
International Nuclear Information System (INIS)
Gal'perin, A.G.; Uzhinskij, V.V.
1994-01-01
Total, inelastic and elastic cross sections of nucleus-nucleus (AA) interactions at high energy (HE) are calculated on the basis of the Glauber approach. The calculation scheme is realized as a set of routines. The statistical averaging method is used in the calculations. The program runs in an interactive regime. The user is prompted for the charge and mass numbers of the nuclei and the NN-interaction characteristics at the energy of interest: the total cross section, the slope parameter of the differential cross section of elastic scattering, and the ratio of the real to the imaginary part of the elastic scattering amplitude at zero momentum transfer. These data can be extracted from the appropriate compilations. Results of the calculations are displayed and written to a user-defined output file. The program runs on a PC. 21 refs., 1 tab
International Nuclear Information System (INIS)
KLEM, M.J.
2000-01-01
The purpose of these calculations is to develop the material balances for documentation of the Canister Storage Building (CSB) Process Flow Diagram (PFD) and future reference. The attached mass balances were prepared to support revision two of the PFD for the CSB. The calculations refer to diagram H-2-825869
Multiple diagnostic approaches to palpable breast mass
Energy Technology Data Exchange (ETDEWEB)
Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June [Korea Cancer Center Hospital, Seoul (Korea, Republic of)
1985-12-15
The combination of various diagnostic methods for a palpable breast mass has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses examined with x-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital were analyzed. The diagnostic accuracies of each method were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted a more clearly delineated mass with various pathognomonic findings: air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%); air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%); air-cystogram in the cystic type of fibrocystic disease (100%); and vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.
Multiple diagnostic approaches to palpable breast mass
International Nuclear Information System (INIS)
Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June
1985-01-01
The combination of various diagnostic methods for a palpable breast mass has improved diagnostic accuracy. From September 1983 to August 1985, 85 pathologically proven patients with palpable breast masses examined with x-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital were analyzed. The diagnostic accuracies of each method were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted a more clearly delineated mass with various pathognomonic findings: air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phylloides (100%); air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phylloides (100%); air-cystogram in the cystic type of fibrocystic disease (100%); and vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.
On calculating double logarithmical asymptotics of vertex functions defined on the mass shell
International Nuclear Information System (INIS)
Belokurov, V.V.; Usyukina, N.I.
1981-01-01
The essence of the method for calculating double-logarithmic asymptotics of vertex functions defined on the mass shell is presented. Using the method, the asymptotics of the electron form factor is calculated. The ladder and cross-ladder diagrams are asymptotically significant in every order of perturbation theory. The way in which the asymptotics of the fourth-order diagrams is calculated is shown. The diagrams of this order and the reduction procedures for them are given in graphic form. The photon mass μ² ≠ 0 plays the role of a regulator, removing infrared divergences. The double-logarithmic asymptotics of the electron form factor on the mass shell is calculated rigorously in an arbitrary order of perturbation theory [ru
Water-Exit Process Modeling and Added-Mass Calculation of the Submarine-Launched Missile
Directory of Open Access Journals (Sweden)
Yang Jian
2017-11-01
In the process of the submarine-launched missile exiting the water, there is a complex fluid-solid coupling phenomenon, so it is difficult to establish an accurate water-exit dynamic model. In this paper, according to the characteristics of the water-exit motion and based on the traditional method of added mass, a water-exit dynamic model is established that takes the added-mass changing rate into account. With the help of the CFX fluid simulation software, a new calculation method for the added mass suited to the submarine-launched missile is proposed, which can effectively solve the problem of fluid-solid coupling in the modeling process. By this new calculation method, the change law of the added mass in the water-exit process of the missile is obtained. In the simulation analysis of the water-exit process, comparison of the numerical simulation results with the calculation of the theoretical model verifies the effectiveness of the new added-mass calculation method and the accuracy of the water-exit dynamic model that considers the added-mass changing rate.
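A dynamic model that retains the added-mass changing rate can be sketched in one degree of freedom: when the added mass m_a depends on time, momentum conservation contributes an extra (dm_a/dt)·v term. Everything below, including the function shape and all numbers, is an illustrative assumption, not the paper's missile model:

```python
# One-DOF sketch of an equation of motion with a time-varying added mass:
#   (m + m_a(t)) * dv/dt = F - (dm_a/dt) * v
def simulate(m, F, m_a, dm_a_dt, v0=0.0, t_end=1.0, dt=1e-4):
    """Explicit Euler integration of the velocity; m_a and dm_a_dt are
    callables giving the added mass and its changing rate at time t."""
    v, t = v0, 0.0
    while t < t_end:
        a = (F - dm_a_dt(t) * v) / (m + m_a(t))
        v += a * dt
        t += dt
    return v

# With a constant added mass (dm_a/dt = 0) the model reduces to constant
# acceleration: v(t_end) = F / (m + m_a) * t_end.
v = simulate(m=100.0, F=500.0, m_a=lambda t: 25.0, dm_a_dt=lambda t: 0.0)
```

The point of the extra term is visible here: a growing m_a both increases the effective inertia and drains momentum through (dm_a/dt)·v, which the traditional constant-added-mass method omits.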
Calculating the mass distribution of heavy nucleus fission product by neutrons
International Nuclear Information System (INIS)
Gudkov, A.N.; Koldobskij, A.B.; Kolobashkin, V.M.; Semenova, E.V.
1981-01-01
A technique for calculating neutron-induced fission product mass yields, which are needed for the nuclear physics calculations performed in designing nuclear reactor cores, is considered. The technique is based on approximating the fission product mass distribution over the whole mass range by five Gaussian functions. New analytical expressions for determining the energy-dependent weights of the Gaussians used are proposed. Experimental data are compared with calculated fission product mass yields for the chosen reference processes: fission of ²³³U and ²³⁵U by thermal neutrons; fission of ²³²Th, ²³³U, ²³⁵U and ²³⁸U by fission-spectrum neutrons and 14 MeV neutrons; and fission of ²³²Th by 11 MeV neutrons and of ²³⁸U by 7.7 MeV neutrons. On the basis of this analysis it is concluded that the mass yields calculated with the recommended mass distribution parameters agree well with the experimental data [ru
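The five-Gaussian approximation of the mass-yield curve can be sketched directly. The component weights, means and widths below are invented placeholders, not the paper's recommended parameters; the weights are chosen to sum to 200%, since each fission produces two fragments:

```python
import math

def mass_yield(A, components):
    """Fission-product mass-yield curve approximated as a sum of
    Gaussians, one (weight, mean, sigma) triple per component; the
    technique uses five such components per fissioning system."""
    return sum(w / (s * math.sqrt(2.0 * math.pi))
               * math.exp(-0.5 * ((A - mu) / s) ** 2)
               for (w, mu, s) in components)

# Illustrative double-humped five-Gaussian set (NOT fitted parameters):
comps = [(96.0,  96.0, 5.0),   # light peak
         (96.0, 139.0, 5.0),   # heavy peak
         (4.0, 117.5, 8.0),    # symmetric component
         (2.0, 105.0, 6.0),    # inner light shoulder
         (2.0, 130.0, 6.0)]    # inner heavy shoulder

# Summing over integer mass numbers recovers the total weight (~200%):
total = sum(mass_yield(A, comps) for A in range(60, 180))
```

The energy dependence enters through analytical expressions for the weights, so the same five-component shape tracks the filling of the symmetric valley as the incident neutron energy rises.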
Approaches to proton single-event rate calculations
International Nuclear Information System (INIS)
Petersen, E.L.
1996-01-01
This article discusses the fundamentals of proton-induced single-event upsets and of the various methods that have been developed to calculate upset rates. Two types of approaches are used based on nuclear-reaction analysis. Several aspects can be analyzed using analytic methods, but a complete description is not available. The paper presents an analytic description for the component due to elastic-scattering recoils. There have been a number of studies made using Monte Carlo methods. These can completely describe the reaction processes, including the effect of nuclear reactions occurring outside the device-sensitive volume. They have not included the elastic-scattering processes. The article describes the semiempirical approaches that are most widely used. The quality of previous upset predictions relative to space observations is discussed and leads to comments about the desired quality of future predictions. Brief sections treat the possible testing limitation due to total ionizing dose effects, the relationship of proton and heavy-ion upsets, upsets due to direct proton ionization, and relative proton and cosmic-ray upset rates
Physical consequences of the alpha/beta rule which accurately calculates particle masses
Energy Technology Data Exchange (ETDEWEB)
Greulich, Karl Otto [Fritz Lipmann Institute, Beutenbergstr.11, D07745 Jena (Germany)
2015-07-01
Using the fine structure constant α (= 1/137.036), the proton-to-electron mass ratio β (= 1836.2) and integers m and n, the α/β rule m_particle = α⁻ⁿ · βᵐ · 27.2 eV/c² allows almost exact calculation of particle masses (K.O. Greulich, DPG Spring Meeting 2014, Mainz, T99.4). With n=2, m=0 the electron mass becomes 510.79 keV/c² (experimental: 511 keV/c²). With n=2, m=1 the proton mass is 937.9 MeV/c² (literature: 938.3 MeV/c²). For n=3 and m=1 a particle with 128.6 GeV/c², close to the reported Higgs mass, is expected. For n=14 and m=-1 the Planck mass results. The calculated masses for gauge bosons and for quarks have similar accuracy. All masses fit into the same scheme (the α/β rule), indicating that none of these particle masses plays an extraordinary role. In particular, the Higgs boson, often termed the ''God particle'', plays no extraordinary role in this sense. In addition, particle masses are intimately correlated with the fine structure constant α: if particle masses have been constant over all times, α must have been constant over these times. Likewise, the ionization energy of the hydrogen atom (13.6 eV) must have been constant if particle masses have been unchanged, and vice versa. In conclusion, the α/β rule needs to be taken into account when cosmological models are developed.
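The α/β rule is simple enough to check directly. A minimal sketch reproducing the quoted electron and proton values (note the n=3, m=1 case evaluates to about 128.5 GeV with the constants as given):

```python
ALPHA = 1 / 137.036   # fine structure constant
BETA = 1836.2         # proton-to-electron mass ratio
E0 = 27.2             # eV/c^2, twice the hydrogen ionization energy

def mass_eV(n, m):
    """alpha/beta rule: m_particle = alpha**(-n) * beta**m * 27.2 eV/c^2."""
    return ALPHA ** (-n) * BETA ** m * E0

electron = mass_eV(2, 0)   # ~510.8 keV/c^2
proton   = mass_eV(2, 1)   # ~937.9 MeV/c^2
higgs    = mass_eV(3, 1)   # ~128.5 GeV/c^2, near the reported Higgs mass
```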
International Nuclear Information System (INIS)
Faulkner, D.J.; Wood, P.R.
1984-01-01
Evolutionary calculations for nuclei of planetary nebulae (NPN) are described. They were made using assumptions regarding the mass of the NPN, the phase in the helium shell flash cycle at which the NPN leaves the AGB, and the time variation of the mass loss rate. Comparison of the evolutionary tracks with the observational Harman-Seaton sequence indicates that some recently published NPN luminosities may be too low by a factor of three. Comparison of the calculated timescales with the observed properties of NPN and of white dwarfs provides marginal evidence that the PN ejection is initiated by the helium shell flash itself
A new approach to calculating endurance in electric flight and comparing fuel cells and batteries
International Nuclear Information System (INIS)
Donateo, Teresa; Ficarella, Antonio; Spedicato, Luigi; Arista, Alessandro; Ferraro, Marco
2017-01-01
Highlights: • Gross endurance of a UAV calculated with literature correlations. • Net endurance calculated with an innovative mission-based approach. • Three state-of-the-art battery technologies compared to a PEM fuel cell. • Analysis with different values of energy stored on board. • Effect of powertrain mass and volume on aircraft empty mass and wing area. - Abstract: Electric flight is of increasing interest for reducing emissions of pollutants and greenhouse gases in the aviation field, in particular when the takeoff mass is low, as in the case of lightweight cargo transport or remotely controlled drones. The present investigation addresses two key issues in electric flight, namely the correct calculation of the endurance and the comparison between batteries and fuel cells, with a mission-based approach. As a test case, a light Unmanned Aerial Vehicle (UAV) powered exclusively by a Polymer Electrolyte Membrane fuel cell with a gaseous hydrogen tank was compared with the same aircraft powered by different kinds of lithium batteries sized to match the energy stored in the hydrogen tank. The mass and the volume of each powertrain were calculated with literature data about existing technologies for propellers, motors, batteries and fuel cells. The empty mass and the wing area of the UAV were amended with the mass of the proposed powertrain to explore the range of application of the proposed technologies. To evaluate the efficiency of the whole powertrain, simulation software was used instead of considering only level flight. This software allowed an in-depth analysis of the efficiency of all sub-systems along the flight. The secondary power demand for auxiliaries was taken into account along with the propulsive power. The main parameter for the comparison was the endurance, but the takeoff performance, the volume of the powertrain and the environmental impact were also taken into account. The battery-based powertrain was found to be the most
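The distinction the highlights draw between gross endurance (a literature correlation for level flight) and net endurance (a mission-based energy accounting) can be illustrated with a toy segment-by-segment budget. The function shape and every number are assumptions for illustration, not the paper's simulation software:

```python
def net_endurance(usable_Wh, mission, cruise_W):
    """Mission-based endurance sketch: subtract the energy spent in each
    non-cruise segment (power in W, duration in h), then convert the
    remaining usable energy into cruise time at the cruise power."""
    t = 0.0
    energy = usable_Wh
    for power_W, duration_h in mission:
        energy -= power_W * duration_h
        t += duration_h
    return t + max(energy, 0.0) / cruise_W

# Illustrative numbers: 50 Wh usable on board, a high-power takeoff and
# climb, then cruise at 60 W.
hours = net_endurance(50.0, [(500.0, 0.02), (200.0, 0.05)], 60.0)
```

A gross-endurance correlation would instead divide the full stored energy by the cruise power, overstating flight time because takeoff and climb drain the store at much higher power.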
New approaches for calculating Moran's index of spatial autocorrelation.
Chen, Yanguang
2013-01-01
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovative models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
New approaches for calculating Moran's index of spatial autocorrelation.
Directory of Open Access Journals (Sweden)
Yanguang Chen
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovative models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
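The global Moran's index has a compact matrix form, I = (n / ΣW) · (zᵀWz) / (zᵀz), with z the deviations from the mean and W the spatial weight matrix, which is the starting point of the linear-algebra reconstruction described. A minimal sketch with an illustrative four-site example (not the paper's Chinese-cities data):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I: (n / W.sum()) * (z @ W @ z) / (z @ z)."""
    z = x - x.mean()
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Four sites on a line with symmetric adjacency (rook) weights:
x = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
I = morans_i(x, W)   # a monotone trend gives positive autocorrelation
```

The dependence of I on W, visible directly in this form, is the "characteristic parameter of spatial weight matrices" point made in the abstract: changing the weight function changes the index for the same data.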
A practical approach for electron monitor unit calculation
International Nuclear Information System (INIS)
Choi, David; Patyal, Baldev; Cho, Jongmin; Cheng, Ing Y; Nookala, Prashanth
2009-01-01
Electron monitor unit (MU) calculation requires measured beam data such as the relative output factor (ROF) of a cone, the insert correction factor (ICF) and the effective source-to-surface distance (ESD). Measuring beam data to cover all possible clinical cases is not practical for a busy clinic because it takes tremendous time and labor. In this study, we propose a practical approach to reduce the number of measurements without affecting accuracy. It is based on two findings about the dosimetric properties of electron beams. One is that the output ratio of two inserts is independent of the cone used, and the other is that ESD is a function of field size but independent of cone and jaw opening. For the measurements to demonstrate these findings, a parallel-plate ion chamber (Markus, PTW 23343) with an electrometer (Cardinal Health 35040) was used. We measured the outputs to determine the ROF, ICF and ESD at different energies (5-21 MeV). Measurements were made in a Plastic Water(TM) phantom or in water. Three linear accelerators were used: Siemens MD2 (S/N 2689), Siemens Primus (S/N 3305) and Varian Clinac 21-EX (S/N 1495). With these findings, the number of data points to be measured can be reduced to less than 20% of the full set. (note)
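The measured quantities above feed a hand calculation of MU. The sketch below uses a common textbook form, with an inverse-square factor built from the effective SSD for extended-distance setups; the exact formula and all numbers here are assumptions for illustration, not the authors' clinical protocol:

```python
def electron_mu(dose_cgy, rof, icf, ssd_eff, dmax_cm, gap_cm, k_ref=1.0):
    """Hypothetical electron MU calculation:
    MU = dose / (k_ref * ROF * ICF * ISF), where the inverse-square
    factor ISF = ((SSD_eff + dmax) / (SSD_eff + dmax + gap))**2
    corrects the output for an air gap beyond the nominal SSD.
    k_ref is the reference output in cGy/MU (assumed 1.0)."""
    isf = ((ssd_eff + dmax_cm) / (ssd_eff + dmax_cm + gap_cm)) ** 2
    return dose_cgy / (k_ref * rof * icf * isf)

# 200 cGy at nominal distance (gap = 0), with made-up ROF and ICF:
print(round(electron_mu(200, rof=0.97, icf=0.98,
                        ssd_eff=85, dmax_cm=2.1, gap_cm=0), 1))  # → 210.4
```

The cone-independence of the insert output ratio reported in the study is what lets ICF be measured once per insert rather than per cone-insert pair.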
Review: Management of adnexal masses: An age-guided approach ...
African Journals Online (AJOL)
Adnexal masses in different age groups may need different management approaches. By eliminating the types of mass that are less likely in a specific age group, one can identify the most prevalent group, which can guide management. Most important of all, the probability of malignancy must either be identified or ruled out.
International Nuclear Information System (INIS)
KLEM, M.J.
2000-01-01
The purpose of this calculation document is to develop the bases for the material balances of the Multi-Canister Overpack (MCO) Level 1 Process Flow Diagram (PFD). The attached mass balances support revision two of the PFD for the MCO and provide a reference for future work.
Higgs-boson masses and mixing matrices in the NMSSM. Analysis of on-shell calculations
Energy Technology Data Exchange (ETDEWEB)
Drechsel, Peter; Weiglein, Georg [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Groeber, Ramona [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; INFN, Sezione di Roma Tre (Italy); Heinemeyer, Sven [Univ. Autonoma de Madrid (UAM/CSIC) (Spain). Inst. de Fisica Teorica; Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); UAM + CSIC Campus of International Excellence, Madrid (Spain); Muehlleitner, Milada [Karlsruhe Institute of Technology, Karlsruhe (Germany). Inst. for Theoretical Physics; Rzehak, H. [Univ. of Southern Denmark, Odense (Denmark). CP3-Origins
2016-12-22
We analyze the Higgs-boson masses and mixing matrices in the NMSSM based on an on-shell (OS) renormalization of the gauge-boson and Higgs-boson masses and the parameters of the top/scalar top sector. We compare the implementation of the OS calculations in the codes NMSSMCALC and NMSSM-FeynHiggs up to O(α_t α_s). We identify the sources of discrepancies at the one- and at the two-loop level. Finally we compare the OS and DR-bar evaluations as implemented in NMSSMCALC. The results are important ingredients for an estimate of the theoretical precision of Higgs-boson mass calculations in the NMSSM.
Higgs-boson masses and mixing matrices in the NMSSM: analysis of on-shell calculations
Energy Technology Data Exchange (ETDEWEB)
Drechsel, P.; Weiglein, G. [DESY, Hamburg (Germany); Groeber, R. [Durham University, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); INFN, Sezione di Roma Tre, Rome (Italy); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Universidad Autonoma de Madrid, Instituto de Fisica Teorica, (UAM/CSIC), Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Muehlleitner, M. [Karlsruhe Institute of Technology, Institute for Theoretical Physics, Karlsruhe (Germany); Rzehak, H. [University of Southern Denmark, CP3-Origins, Odense M (Denmark)
2017-06-15
We analyze the Higgs-boson masses and mixing matrices in the NMSSM based on an on-shell (OS) renormalization of the gauge-boson and Higgs-boson masses and the parameters of the top/scalar top sector. We compare the implementation of the OS calculations in the codes NMSSMCALC and NMSSM-FeynHiggs up to O(α_t α_s). We identify the sources of discrepancies at the one- and at the two-loop level. Finally we compare the OS and DR-bar evaluations as implemented in NMSSMCALC. The results are important ingredients for an estimate of the theoretical precision of Higgs-boson mass calculations in the NMSSM. (orig.)
Critical mass calculations for 241Am, 242mAm and 243Am
International Nuclear Information System (INIS)
Dias, Hemanth; Tancock, Nigel; Clayton, Angela
2003-01-01
Critical mass calculations are reported for 241Am, 242mAm and 243Am using the MONK and MCNP computer codes with the UKNDL, JEF-2.2, ENDF/B-VI and JENDL-3.2 nuclear data libraries. Results are reported for spheres of americium metal and dioxide in bare, water-reflected and steel-reflected systems. Comparison of the results led to the identification of a serious inconsistency in the 241Am ENDF/B-VI DICE library used by MONK; this demonstrates the importance of using different codes to verify critical mass calculations. The 241Am critical mass estimates obtained using UKNDL and ENDF/B-VI show good agreement with experimentally inferred data, whilst both JEF-2.2 and JENDL-3.2 produce higher estimates of the critical mass. The computed critical mass estimates for 242mAm obtained using ENDF/B-VI are lower than the results produced using the other nuclear data libraries; the ENDF/B-VI fission cross-section for 242mAm is significantly higher than the other evaluations in the fast region and is not supported by recent experimental data. There is wide variation in the computed 243Am critical mass estimates, suggesting that there is still considerable uncertainty in the 243Am nuclear data. (author)
Structure function of off-mass-shell pions and the calculation of the Sullivan process
International Nuclear Information System (INIS)
Shakin, C.M.; Sun, W.
1994-01-01
We construct a model for the pion (valence) structure function that fits the experimental data obtained in the study of the Drell-Yan process. The model may also be used to calculate the structure function of off-mass-shell pions. We apply our model in the study of deep-inelastic scattering from off-mass-shell pions found in the nucleon and are thus able to resolve a problem encountered in the standard analysis of such processes. The usual analysis is made using the structure function of on-mass-shell pions and requires the use of a soft πNN form factor that is inconsistent with standard nuclear physics phenomenology. The use of our off-mass-shell structure functions allows for a fit to the data for nonperturbative aspects of the nucleon "sea" with a pion-nucleon form factor of the standard form.
International Nuclear Information System (INIS)
Hosokawa, Takashi; Omukai, Kazuyuki
2009-01-01
The final mass of a newborn star is set at the epoch when the mass accretion onto the star is terminated. We study the evolution of accreting protostars and the limits of accretion in low-metallicity environments under spherical symmetry. Accretion rates onto protostars are estimated via the temperature evolution of prestellar cores with different metallicities. The derived rates increase with decreasing metallicity, from Ṁ ≅ 10^-6 M_⊙ yr^-1 at Z = Z_⊙ to 10^-3 M_⊙ yr^-1 at Z = 0. With the derived accretion rates, the protostellar evolution is numerically calculated. We find that, at lower metallicity, the protostar has a larger radius and reaches the zero-age main sequence (ZAMS) at higher stellar mass. Using this protostellar evolution, we evaluate the upper stellar mass limit above which mass accretion is hindered by radiative feedback. We consider the effects of radiation pressure exerted on the accreting envelope, and expansion of an H II region. The mass accretion is finally terminated by radiation pressure on dust grains in the envelope for Z ≳ 10^-3 Z_⊙ and by the expanding H II region at lower metallicity. The mass limit from these effects increases with decreasing metallicity, from M_* ≅ 10 M_⊙ at Z = Z_⊙ to ≅ 300 M_⊙ at Z = 10^-6 Z_⊙. The termination of accretion occurs after the central star arrives at the ZAMS at all metallicities, which allows us to neglect protostellar evolution effects in discussing the upper mass limit set by stellar feedback. The fragmentation induced by line cooling in low-metallicity clouds yields prestellar cores with masses large enough that the final stellar mass is set by the feedback effects. Although relaxing the assumption of spherical symmetry will alter the feedback effects, our results will be a benchmark for more realistic evolution to be explored in future studies.
Calculation of the mass transfer coefficient for the combustion of a carbon particle
Energy Technology Data Exchange (ETDEWEB)
Scala, Fabrizio [Istituto di Ricerche sulla Combustione - CNR, P.le Tecchio 80, 80125 Napoli (Italy)
2010-01-15
In this paper we address the calculation of the mass transfer coefficient around a burning carbon particle in an atmosphere of O₂, N₂, CO₂, CO and H₂O. The complete set of Stefan-Maxwell equations is analytically solved under the assumption of no homogeneous reaction in the boundary layer. An expression linking the oxygen concentration and the oxygen flux at the particle surface (as a function of the bulk gas composition) is derived which can be used to calculate the mass transfer coefficient. A very simple approximate explicit expression is also given for the mass transfer coefficient, which is shown to be valid in the low-oxygen-flux limit or when the primary combustion product is CO₂. The results are given in terms of a correction factor to the equimolar counter-diffusion mass transfer coefficient, which is typically available in the literature for specific geometries and/or fluid-dynamic conditions. The significance of the correction factor and the accuracy of the different available expressions are illustrated for several cases of practical interest. Results show that under typical combustion conditions the use of the equimolar counter-diffusion mass transfer coefficient can lead to errors of up to 10%. Larger errors are possible in oxygen-enriched conditions, while the error is generally low in oxy-combustion. (author)
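The full multicomponent Stefan-Maxwell solution is the paper's contribution, but the much simpler stagnant-film (Stefan flow) correction conveys why the equimolar counter-diffusion coefficient errs by roughly 10% in air-like atmospheres. This is a film-theory sketch with assumed mole fractions, not the paper's expression:

```python
import math

def film_correction(y_bulk, y_surf):
    """Stagnant-film (Stefan flow) correction to the equimolar
    counter-diffusion mass transfer coefficient. The diffusing-species
    flux is N = k_eq * (y_bulk - y_surf) / y_lm, where y_lm is the
    log-mean of the inert-gas mole fractions (1 - y) at the surface
    and in the bulk; the returned factor multiplies k_eq."""
    a, b = 1.0 - y_surf, 1.0 - y_bulk
    y_lm = (a - b) / math.log(a / b) if a != b else a
    return 1.0 / y_lm

# 21% O2 in the bulk, ~0 at a fast-burning surface: a ~12% effect,
# of the same order as the ~10% errors quoted in the abstract.
print(round(film_correction(0.21, 0.0), 3))
```

In oxygen-enriched atmospheres y_bulk grows and so does the correction, consistent with the larger errors reported there.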
Comparison of different source calculations in two-nucleon channel at large quark mass
Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu
2018-03-01
We investigate a systematic error coming from higher excited-state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations, with exponential and wall sources. Since it is hard to obtain a clear signal from the wall-source correlation function in a plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.
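The energy shift in such studies is conventionally extracted from the ratio R(t) = C_NN(t) / C_N(t)², whose effective energy plateaus at ΔE = E_NN - 2m_N; whether this matches the authors' exact definition is an assumption here. A sketch on synthetic single-exponential correlators:

```python
import math

def energy_shift(c_nn, c_n, t):
    """Effective energy shift from the ratio R(t) = C_NN(t) / C_N(t)^2:
    dE(t) = log(R(t) / R(t+1)), which in a plateau region approaches
    Delta_E = E_NN - 2*m_N (ground-state dominance assumed)."""
    r = lambda s: c_nn[s] / c_n[s] ** 2
    return math.log(r(t) / r(t + 1))

# Synthetic correlators with m_N = 0.8 and Delta_E = 0.01 (lattice units)
m_n, d_e = 0.8, 0.01
c_n = [math.exp(-m_n * t) for t in range(10)]
c_nn = [math.exp(-(2 * m_n + d_e) * t) for t in range(10)]
print(round(energy_shift(c_nn, c_n, 4), 6))  # → 0.01
```

With real data, excited-state contamination makes dE(t) depend on t at small times, which is exactly the source-dependent systematic error the abstract studies.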
A new calculation of atmospheric neutrino flux: the FLUKA approach
International Nuclear Information System (INIS)
Battistoni, G.; Bloise, C.; Cavalli, D.; Ferrari, A.; Montaruli, T.; Rancati, T.; Resconi, S.; Ronga, F.; Sala, P.R.
1999-01-01
Preliminary results from a full 3-D calculation of atmospheric neutrino fluxes using the FLUKA interaction model are presented and compared to existing calculations. This effort is motivated mainly by the 3-D capability and the satisfactory degree of accuracy of the hadron-nucleus models embedded in the FLUKA code. Here we show examples of benchmarking tests of the model against cosmic ray experiment results. A comparison of our calculation of the atmospheric neutrino flux with that of the Bartol group, for E_ν > 1 GeV, is presented.
Calculation of mass discharge of the Greenland ice sheet in the Earth System Model
Directory of Open Access Journals (Sweden)
O. O. Rybak
2016-01-01
Full Text Available Mass discharge calculation is a challenging task for ice sheet modeling aimed at evaluating the contribution of ice sheets to global sea level rise during past interglacials, as well as under future climate change. In Greenland, ablation is the major source of fresh-water runoff. It is approximately equal to the dynamical discharge (iceberg calving), and its share may have been still larger during past interglacials, when the margins of the GrIS retreated inland. Refreezing of meltwater and its retention are two poorly known processes that counteract melting and thus influence runoff. Interaction of ice sheets and climate is driven by energy and mass exchange processes and is complicated by numerous feedbacks. To study this complex of processes, coupling of an ice sheet model and a climate model (i.e. models of the atmosphere and the ocean) in one model is required; such a combined model is often called an Earth System Model (ESM). Formalization of the processes of interaction between ice sheets and climate within an ESM requires elaboration of special techniques to deal with dramatic differences in the spatial and temporal scales of variability within each of the three ESM blocks. In this paper, we focus on the method of coupling a Greenland ice sheet model (GrISM) with the climate model INMCM developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences. Our coupling approach consists in applying a special buffer model, which serves as an interface between GrISM and INMCM. A simple energy and water exchange model (EWBM-G) allows a realistic description of surface air temperature and precipitation fields adjusted to the elevation of the GrIS surface. In a series of diagnostic numerical experiments with the present-day GrIS geometry and the modeled climate we studied the sensitivity of the modeled surface mass balance and runoff to the key EWBM-G parameters and compared
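EWBM-G itself is not specified in this record, but the way ablation, refreezing/retention and accumulation combine into a surface mass balance can be sketched with a toy positive-degree-day model. The degree-day factor and the Reeh-style retention fraction below are assumed values, not the paper's parameters:

```python
def surface_mass_balance(precip_we, monthly_temps, ddf=0.008, pmax=0.6):
    """Toy surface mass balance (m w.e. per year): accumulation minus
    runoff, where melt is proportional to positive degree-days and a
    fraction pmax of the melt is refrozen/retained rather than
    contributing to runoff.
    ddf: degree-day factor, m w.e. per positive degree-day (assumed);
    monthly_temps: 12 mean monthly temperatures in deg C."""
    pdd = sum(max(t, 0.0) * 30.0 for t in monthly_temps)  # ~30 days/month
    melt = ddf * pdd
    runoff = (1.0 - pmax) * melt
    return precip_we - runoff

# 0.4 m w.e. accumulation, three months above freezing: net mass loss
temps = [-20, -10, -2, 1, 4, 2] + [-15] * 6
print(round(surface_mass_balance(0.4, temps), 3))  # → -0.272
```

Raising pmax (more retention) directly reduces runoff, which is the sensitivity the diagnostic experiments in the abstract probe.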
Calculation of the fissile mass of a graphite moderated critical assembly using 93% enriched uranium
International Nuclear Information System (INIS)
Correa, F.; Marzo, M.A.S.; Collussi, I.; Ferreira, A.C.A.
1976-01-01
The critical mass of uranium has been calculated for a graphite-moderated assembly fueled with 93% enriched uranium, to be mounted on the Instituto de Energia Atomica split-table Zero Power Reactor. The core composition was optimized to permit the maximum number of configurations to be studied. Analysis of three core compositions shows that 8 kg of uranium enriched to 93% U-235 (by weight) and 100 kg of thorium would be sufficient for criticality experiments.
Calculation of stresses in a rock mass and lining in stagewise face drivage
Seryakov, VM; Zhamalova, BR
2018-03-01
Using a method for calculating the mechanical state of a rock mass under the conditions of stagewise drivage of a production face in large cross-section excavations, the specific features of stress redistribution in the lining of excavations are found. The zones of tensile stresses in the lining are detected. The authors discuss the influence of the initial stress state of the rocks on the tensile stress zones induced in the lining in the course of the heading advance.
Energy Technology Data Exchange (ETDEWEB)
Kneur, J.L
2006-06-15
This document is divided into two parts. The first part describes a particular re-summation technique for perturbative series that can give non-perturbative results in some cases. We detail some applications in field theory and in condensed matter, such as the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that we can infer from the models.
Critical and subcritical mass calculations of fissionable nuclides based on JENDL-3.2+
International Nuclear Information System (INIS)
Okuno, H.
2002-01-01
We calculated critical and subcritical masses of 10 fissionable actinides (233U, 235U, 238Pu, 239Pu, 241Pu, 242mAm, 243Cm, 244Cm, 249Cf and 251Cf) in metal and in metal-water mixtures (except for 238Pu and 244Cm). The calculation was made with a combination of a continuous-energy Monte Carlo neutron transport code, MCNP-4B2, and the latest released version of the Japanese Evaluated Nuclear Data Library, JENDL-3.2. Other evaluated nuclear data files, ENDF/B-VI, JEF-2.2, and a preliminary version of JENDL-3.3, were also applied to find differences in results originating from different nuclear data files. For the so-called big three fissiles (233U, 235U and 239Pu), the code-library combination was validated by analyzing the criticality experiments cited in the ICSBEP Handbook, and calculation errors were consequently evaluated. Estimated critical and lower-limit critical masses of the big three in a sphere with/without a water or SS-304 reflector were supplied, and they were compared with the subcritical mass limits of ANS-8.1. (author)
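Monte Carlo transport is far beyond a few lines, but the way nuclear data determine a critical size can be illustrated with one-group diffusion theory. The constants below are illustrative placeholders, not evaluated library data:

```python
import math

def bare_sphere_critical_radius(nu_sig_f, sig_a, diff_coef):
    """One-group diffusion-theory estimate: a bare sphere is critical
    when the geometric buckling (pi/R)^2 equals the material buckling
    B^2 = (nu*Sigma_f - Sigma_a) / D (extrapolation length neglected).
    Inputs in cm^-1 (cross sections) and cm (diffusion coefficient)."""
    b2 = (nu_sig_f - sig_a) / diff_coef
    return math.pi / math.sqrt(b2)

# Made-up one-group constants for a fast metal system:
r_c = bare_sphere_critical_radius(nu_sig_f=3.0, sig_a=1.4, diff_coef=1.0)
print(round(r_c, 2))  # critical radius in cm for these assumed constants
```

Small changes in the fission cross-section shift B² and hence the critical radius and mass, which is why the library-to-library spread reported above translates directly into critical mass uncertainty.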
Gu, Huidong; Wang, Jian; Aubry, Anne-Françoise; Jiang, Hao; Zeng, Jianing; Easter, John; Wang, Jun-sheng; Dockens, Randy; Bifano, Marc; Burrell, Richard; Arnold, Mark E
2012-06-05
A methodology for the accurate calculation and mitigation of isotopic interferences in liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS) assays and its application in supporting microdose absolute bioavailability studies are reported for the first time. For simplicity, this calculation methodology and the strategy to minimize the isotopic interference are demonstrated using a simple molecular entity, then applied to actual development drugs. The exact isotopic interferences calculated with this methodology were often much less than the traditionally used, overestimated isotopic interferences based simply on the molecular isotope abundance. One application of the methodology is the selection of a stable isotopically labeled internal standard (SIL-IS) for an LC-MS/MS bioanalytical assay. The second application is the selection of an SIL analogue for use in intravenous (i.v.) microdosing for the determination of absolute bioavailability. In the case of microdosing, the traditional approach of calculating isotopic interferences can result in selecting a labeling scheme that overlabels the i.v.-dosed drug or leads to incorrect conclusions on the feasibility of using an SIL drug and analysis by LC-MS/MS. The methodology presented here can guide the synthesis by accurately calculating the isotopic interferences when labeling at different positions, using different selective reaction monitoring (SRM) transitions or adding more labeling positions. This methodology has been successfully applied to the selection of the labeled i.v.-dosed drugs for use in two microdose absolute bioavailability studies, before initiating the chemical synthesis. With this methodology, significant time and cost savings can be achieved in supporting microdose absolute bioavailability studies with stable-labeled drugs.
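The "traditional" molecular-abundance estimate that the authors say tends to overestimate the interference can be sketched with a binomial model over carbon atoms (¹³C only; the 20-carbon drug is hypothetical). The paper's exact per-transition calculation is not reproduced here:

```python
from math import comb

def c13_isotopologue_abundance(n_carbons, k, p13=0.0107):
    """Probability that exactly k of n carbon atoms are 13C (binomial
    model with natural 13C abundance p13): a first-order estimate of
    the M+k isotopologue abundance, ignoring H, N, O isotopes."""
    return comb(n_carbons, k) * p13 ** k * (1 - p13) ** (n_carbons - k)

# For a hypothetical 20-carbon drug, the natural M+3 abundance is the
# naive estimate of interference on a 13C3-labeled internal standard:
print(f"{c13_isotopologue_abundance(20, 3):.2e}")
```

Adding more labeling positions (say ¹³C₅ instead of ¹³C₃) pushes the label further from the natural isotope envelope, which is the mitigation strategy the abstract describes.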
International Nuclear Information System (INIS)
Boccaccini, L.V.
1986-07-01
To take advantage of semi-implicit computer models for solving the two-phase-flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases, the correlations normally used for the source terms - not time averaged - fail when using the theoretical time step that arises from the linear stability analysis applied to the right-hand side. Such a time-averaging procedure is developed here with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.)
Statistical approach for calculating opacities of high-Z plasmas
International Nuclear Information System (INIS)
Nishikawa, Takeshi; Nakamura, Shinji; Takabe, Hideaki; Mima, Kunioki
1992-01-01
For simulating the X-ray radiation from laser-produced high-Z plasma, an appropriate atomic model is necessary. Based on the average-ion model, we have used a rather simple atomic model for opacity calculation in a hydrodynamic code and obtained fairly good agreement with experiment on the X-ray spectra from laser-produced plasmas. We have investigated the accuracy of the atomic model used in the hydrodynamic code. It is found that the transition energies of 4p-4d, 4d-4f, 4p-5d, 4d-5f and 4f-5g, which are important in laser-produced high-Z plasma, can be given within an error of 15% compared to the values from the Hartree-Fock-Slater (HFS) calculation, and that their oscillator strengths obtained by the HFS calculation vary by a factor of two according to the difference in charge state. We also propose a statistical method to carry out detailed configuration accounting for the electronic states by use of the population of bound electrons calculated with the average-ion model. The statistical method is relatively simple and provides much improvement in calculating the spectral opacities of line radiation when the average-ion model is used to determine the electronic state. (author)
Spherical bootstrap calculation of qqq-baryon and multiquark-hadron masses
International Nuclear Information System (INIS)
Balazs, L.A.P.; Nicolescu, B.
1979-12-01
Recently a way of implementing the dual-topological-unitarization program has been found, in which baryons and other multiquark hadrons are put on the sphere and appear at the same topological-complexity level as ordinary q̄q mesons. This permits one to have a lowest-order 'spherical bootstrap', within which unitarity, duality and crossing can be consistently satisfied. In the present paper, this framework is used to calculate hadron masses by imposing duality on an infinite sum of ladder graphs generated from spherical unitarity. By making a certain simple dynamical approximation, an explicit generic Regge-trajectory formula is derived for any given process. If one then makes certain reasonable dynamical assumptions and requires simultaneous consistency for entire sets of processes, it is possible to calculate the masses of all the lowest states and the Regge trajectories associated with each of them. The only arbitrary parameter is the mass of the ρ, which merely serves to set the mass scale.
Approaches to reducing photon dose calculation errors near metal implants
Energy Technology Data Exchange (ETDEWEB)
Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)
2016-09-15
Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact
Approaches to reducing photon dose calculation errors near metal implants
International Nuclear Information System (INIS)
Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F.; Liu, Xinming; Stingo, Francesco C.
2016-01-01
Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact
Amputations in natural disasters and mass casualties: staged approach.
Wolfson, Nikolaj
2012-10-01
Amputation is a commonly performed procedure during natural disasters and mass casualties related to industrial accidents and military conflicts where large civilian populations are subjected to severe musculoskeletal trauma. Crush injuries and crush syndrome, an often-overwhelming number of casualties, delayed presentations, regional cultural and other factors, all can mandate a surgical approach to amputation that is different than that typically used under non-disaster conditions. The following article will review the subject of amputation during natural disasters and mass casualties with emphasis on a staged approach to minimise post-surgical complications, especially infection.
A new approach to calculating spatial impulse responses
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
1997-01-01
Using linear acoustics the emitted and scattered ultrasound field can be found by using spatial impulse responses as developed by Tupholme (1969) and Stepanishen (1971). The impulse response is calculated by the Rayleigh integral by summing the spherical waves emitted from all of the aperture...
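The Rayleigh-integral picture described above, summing spherical waves emitted from every point of the aperture, can be discretized directly. This is a crude time-domain sketch with assumed sampling and geometry, not the analytic solutions of Tupholme and Stepanishen or the paper's new approach:

```python
import numpy as np

def spatial_impulse_response(field_pt, src_pts, src_area,
                             fs=100e6, c=1540.0, nt=512):
    """Discretized Rayleigh integral: split the aperture into small
    elements, radiate a spherical wave from each, and histogram the
    arrival times at the field point into a sampled time trace,
    weighting each contribution by dA / (2*pi*R)."""
    h = np.zeros(nt)
    for s in src_pts:
        r = np.linalg.norm(field_pt - s)
        idx = int(round(r / c * fs))          # arrival-time sample index
        if idx < nt:
            h[idx] += src_area / (2.0 * np.pi * r)
    return h

# 1 mm square aperture sampled on a 10x10 grid, field point 5 mm on axis
xs = np.linspace(-0.5e-3, 0.5e-3, 10)
grid = np.array([[x, y, 0.0] for x in xs for y in xs])
h = spatial_impulse_response(np.array([0.0, 0.0, 5e-3]),
                             grid, (1e-3 / 10) ** 2)
print(h.sum() > 0)  # → True
```

The trace is zero until the closest aperture point's wave arrives, which is the defining causal feature of a spatial impulse response.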
Large-scale subduction of continental crust implied by India-Asia mass-balance calculation
Ingalls, Miquela; Rowley, David B.; Currie, Brian; Colman, Albert S.
2016-11-01
Continental crust is buoyant compared with its oceanic counterpart and resists subduction into the mantle. When two continents collide, the mass balance for the continental crust is therefore assumed to be maintained. Here we use estimates of pre-collisional crustal thickness and convergence history derived from plate kinematic models to calculate the crustal mass balance in the India-Asia collisional system. Using the current best estimates for the timing of the diachronous onset of collision between India and Eurasia, we find that about 50% of the pre-collisional continental crustal mass cannot be accounted for in the crustal reservoir preserved at Earth's surface today, represented by the mass preserved in the thickened crust that makes up the Himalaya, Tibet and much of adjacent Asia, as well as southeast Asian tectonic escape and exported eroded sediments. This implies large-scale subduction of continental crust during the collision, with a mass equivalent to about 15% of the total oceanic crustal subduction flux since 56 million years ago. We suggest that similar contamination of the mantle by direct input of radiogenic continental crustal materials during past continent-continent collisions is reflected in some ocean crust and ocean island basalt geochemistry. The subduction of continental crust may therefore contribute significantly to the evolution of mantle geochemistry.
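The mass-balance bookkeeping behind the 50% figure is simple arithmetic: compare the pre-collisional crustal mass with the mass now preserved at the surface. The areas, thicknesses and density below are illustrative round numbers, not the paper's reconstructed values:

```python
def crustal_mass(area_km2, thickness_km, rho=2.8e12):
    """Crustal mass in kg; rho = 2.8e12 kg/km^3 (i.e. 2800 kg/m^3,
    an assumed average crustal density)."""
    return area_km2 * thickness_km * rho

# Illustrative numbers: if the preserved reservoir holds half the
# pre-collisional crustal volume, 50% is missing from the surface.
pre_collision = crustal_mass(8.0e6, 40.0)   # hypothetical input crust
preserved = crustal_mass(4.0e6, 40.0)       # hypothetical surviving crust
print(round(1 - preserved / pre_collision, 2))  # → 0.5
```

The missing fraction is insensitive to the assumed density, since the same rho multiplies both terms; the real uncertainty lies in the reconstructed areas and thicknesses.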
Directory of Open Access Journals (Sweden)
M.A. Zayed
2017-03-01
Full Text Available Naproxen (C14H14O3) is a non-steroidal anti-inflammatory drug (NSAID). It is important to investigate its structure to identify the active groups and weak bonds responsible for its medical activity. In the present study, naproxen was investigated by mass spectrometry (MS) and thermal analysis (TA) measurements (TG/DTG and DTA), and the results were confirmed by semi-empirical molecular orbital (MO) calculation using the PM3 procedure. These calculations included bond lengths, bond orders, bond strains, partial charge distributions, ionization energy and heat of formation (ΔHf). The mass-spectral and thermal-analysis fragmentation pathways were proposed and compared to select the most suitable scheme representing the correct fragmentation pathway of the drug in both techniques. The PM3 procedure reveals that the primary cleavage site of the charged molecule is the rupture of the COOH group (lowest bond order and high strain), which is followed by CH3 loss from the methoxy group. Thermal analysis of the neutral drug reveals a high response to temperature variation at a very fast rate. It decomposed in several sequential steps in the temperature range 80–400 °C. These mass losses appear as two endothermic peaks and one exothermic peak, which required energy values of 255.42, 10.67 and 371.49 J g−1, respectively. The initial thermal ruptures are similar to those obtained by mass-spectral fragmentation (COOH rupture), followed by the loss of the methyl group and finally by ethylene loss. Therefore, comparison between MS and TA helps in selecting the proper pathway representing the fragmentation. This comparison is successfully confirmed by the MO calculation.
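The COOH and CH3 losses described above can be checked arithmetically from the molecular formula. The sketch uses average (not monoisotopic) atomic masses, so these are molecular weights rather than exact m/z values:

```python
MASSES = {"C": 12.011, "H": 1.00794, "O": 15.9994}  # average atomic masses

def mw(formula):
    """Molecular weight from an {element: count} dict."""
    return sum(MASSES[el] * n for el, n in formula.items())

naproxen = {"C": 14, "H": 14, "O": 3}    # M ~ 230.26
after_cooh = {"C": 13, "H": 13, "O": 1}  # after loss of COOH (~45)
after_ch3 = {"C": 12, "H": 10, "O": 1}   # after further CH3 loss (~15)

print(round(mw(naproxen), 2))  # → 230.26
```

The ~45 and ~15 mass differences match the COOH and methyl losses proposed in both the MS and TA fragmentation schemes.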
The principal axis approach to value-added calculation.
He, Q.; Tymms, P.
2014-01-01
The assessment of the achievement of students and the quality of schools has drawn increasing attention from educational researchers, policy makers, and practitioners. Various test-based accountability and feedback systems involving the use of value-added techniques have been developed for evaluating the effectiveness of individual teaching professionals and schools. A variety of models have been employed for calculating value-added measures, including the use of linear regression models whic...
An approach to the neck mass | Thandar | Continuing Medical ...
African Journals Online (AJOL)
An approach to the neck mass. MA Thandar, NE Jonas. No abstract available.
Grid-based electronic structure calculations: The tensor decomposition approach
Energy Technology Data Exchange (ETDEWEB)
Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)
2016-05-01
We present a fully grid-based approach for solving the Hartree–Fock and all-electron Kohn–Sham equations, based on a low-rank approximation of the three-dimensional electron orbitals. Owing to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows the use of fine grids, e.g. 8192³, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.
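The scaling argument can be illustrated with the simplest possible case: a separable (rank-1) function on a 3-D grid, stored as 1-D factors. This is only a sketch of why low-rank structure turns O(n³) storage into O(n), not the paper's algorithm:

```python
import numpy as np

# A Gaussian orbital-like function exp(-(x^2+y^2+z^2)) is exactly rank-1
# separable, so an n^3 grid can be stored as three 1-D factors of length n;
# storage (and many operations) then scales linearly in n.
n = 64
x = np.linspace(-4.0, 4.0, n)
fx = np.exp(-x**2)               # identical 1-D factor along each axis

# Full tensor (only formed here to check the factorization).
full = fx[:, None, None] * fx[None, :, None] * fx[None, None, :]

# Low-rank storage: 3*n numbers instead of n**3.
compression = (3 * n) / n**3
print(f"storage ratio: {compression:.2e}")

# Evaluate the factored form at an arbitrary grid point and compare.
i, j, k = 10, 20, 30
assert np.isclose(full[i, j, k], fx[i] * fx[j] * fx[k])
```

Real orbitals are not exactly separable, which is why the method relies on low-rank *approximation* with rank greater than one.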
Model-Based Systems Engineering Approach to Managing Mass Margin
Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris
2012-01-01
When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and the mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually trivial, maintaining consistent and correct MELs and mass margins can be challenging because current practice keeps duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach in which we allow only a single source of truth. In this paper we describe the modeling patterns used to capture the single source of truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).
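A minimal sketch of the bookkeeping a MEL-derived mass margin involves, with invented component names, masses and contingencies (the actual EHM model is far richer):

```python
# Minimal sketch of a single-source-of-truth mass equipment list (MEL) and
# the derived mass margin; component names and masses are invented.
mel = {
    "structure":  {"cbe_kg": 120.0, "contingency": 0.30},
    "avionics":   {"cbe_kg": 45.0,  "contingency": 0.15},
    "propulsion": {"cbe_kg": 200.0, "contingency": 0.20},
}
allocation_kg = 500.0  # mass allocated to this flight system

# MEV (maximum expected value) = CBE * (1 + contingency) per component.
mev_kg = sum(c["cbe_kg"] * (1 + c["contingency"]) for c in mel.values())
margin_kg = allocation_kg - mev_kg
margin_pct = margin_kg / allocation_kg * 100
print(f"MEV = {mev_kg:.2f} kg, margin = {margin_kg:.2f} kg ({margin_pct:.1f}%)")
```

The MBSE point is that the dictionary above (the model) is the only place masses live; every table, diagram or report view is generated from it.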
Subcriticality calculations for the FFTF reverse approach to critical experiment
International Nuclear Information System (INIS)
Selby, D.L.; Flanagan, G.F.
1975-01-01
Reverse approach to critical (RAC) experiments were performed in the ZPR-IX critical facility at Argonne National Laboratory. A major objective of this project is to determine the adequacy of the low-level flux monitor (LLFM) detectors for the initial loading of the Fast Flux Test Facility (FFTF). 5 references
Comparison of Calculation Approaches for Monopiles for Offshore Wind Turbines
DEFF Research Database (Denmark)
Augustesen, Anders Hust; Sørensen, Søren Peder Hyldal; Ibsen, Lars Bo
2010-01-01
Large-diameter (4 to 6m) monopiles are often used as foundations for offshore wind turbines. The monopiles are subjected to large horizontal forces and overturning moments and they are traditionally designed based on the p-y curve method (Winkler type approach). The p-y curves recommended in offs...
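For context, a single p-y curve of the Winkler type can be sketched with the hyperbolic-tangent form widely cited from the API recommended practice for sand; all parameter values here are illustrative, not design values:

```python
import math

# Hedged sketch of one p-y curve (soil resistance p vs. lateral pile
# deflection y) in the tanh form commonly quoted from the API recommended
# practice for sand:  p(y) = A * p_u * tanh(k * z * y / (A * p_u)).
# All numbers below are illustrative placeholders.
def p_y(y, p_u, k, z, A):
    """Soil resistance p [kN/m] at lateral deflection y [m]."""
    return A * p_u * math.tanh(k * z * y / (A * p_u))

p_u = 800.0   # ultimate lateral resistance at depth z [kN/m]
k = 22000.0   # initial modulus of subgrade reaction [kN/m^3]
z = 10.0      # depth below seabed [m]
A = 0.9       # empirical factor (static loading)

for y in (0.001, 0.01, 0.1):
    print(f"y = {y:5.3f} m -> p = {p_y(y, p_u, k, z, A):7.1f} kN/m")
```

The curve is linear with stiffness k·z at small deflections and saturates at A·p_u, which is exactly the behaviour a Winkler spring model discretises along the pile.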
Green's function approach to calculate spin injection in quantum dot
International Nuclear Information System (INIS)
Tan, S.G.; Jalil, M.B.A.; Liew, Thomas; Teo, K.L.
2006-01-01
We present a theoretical model to study spin injection (η) through a quantum dot (QD) system sandwiched between two ferromagnetic contacts. The effect of contact magnetization on η was studied using Green's function descriptions of the density of states. Green's function models have the advantage that coherent effects of temperature, electron occupation in the QD, and lead perturbation on the state wave function, and hence on the current, can be formally included in the calculations. In addition, a self-consistent treatment of the current with the applied electrochemical potential or lead conductivity, a necessary step that has not been considered in previous works, has also been implemented in our model.
Simplified approach for quantitative calculations of optical pumping
International Nuclear Information System (INIS)
Atoneche, Fred; Kastberg, Anders
2017-01-01
We present a simple and pedagogical method for quickly calculating optical pumping processes based on linearised population rate equations. The method can easily be implemented on mathematical software run on modest personal computers, and can be generalised to any number of concrete situations. We also show that the method is still simple with realistic experimental complications taken into account, such as high level degeneracy, impure light polarisation, and an added external magnetic field. The method and the associated mathematical toolbox should be of value in advanced physics teaching, and can also facilitate the preparation of research tasks. (paper)
Simplified approach for quantitative calculations of optical pumping
Atoneche, Fred; Kastberg, Anders
2017-07-01
We present a simple and pedagogical method for quickly calculating optical pumping processes based on linearised population rate equations. The method can easily be implemented on mathematical software run on modest personal computers, and can be generalised to any number of concrete situations. We also show that the method is still simple with realistic experimental complications taken into account, such as high level degeneracy, impure light polarisation, and an added external magnetic field. The method and the associated mathematical toolbox should be of value in advanced physics teaching, and can also facilitate the preparation of research tasks.
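A linearised population rate-equation model of the kind described can be sketched in a few lines; the level scheme and rates below are toy assumptions, not taken from the paper:

```python
import numpy as np

# Toy linearised rate-equation model of optical pumping among three ground
# sublevels; state 2 is dark (not coupled to the light).  Rates are
# illustrative.  dN/dt = M @ N, and the columns of M sum to zero so the
# total population is conserved.
R = 1.0e5  # pump rate out of the bright states [1/s]
M = np.array([
    [-R,    0.0, 0.0],   # state 0: pumped away
    [R / 2, -R,  0.0],   # state 1: receives half the decay, also pumped
    [R / 2,  R,  0.0],   # state 2: dark state accumulates population
])

N = np.array([1 / 3, 1 / 3, 1 / 3])  # start unpolarised
dt = 1.0e-7
for _ in range(2000):                # simple explicit Euler integration
    N = N + dt * (M @ N)

print(N)  # nearly all population ends in the dark state
```

Complications such as level degeneracy, impure polarisation or a magnetic field enter this picture only through the entries of M, which is why the linearised approach stays simple.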
Bootstrap calculation of the dynamical quark mass in QCD4 at finite temperature
International Nuclear Information System (INIS)
Cabo, A.; Kalashnikov, O.K.; Veliev, E.Kh.
1988-01-01
Nonperturbative calculations of the dynamical quark mass m(T) are given in QCD4, based on the bootstrap solution of the Schwinger-Dyson equation for the quark Green function at finite temperatures. A closed nonlinear equation is obtained for m(T), whose solution is found under some simplifying assumptions. We used a particular approximation for the effective charge and the nonperturbative expressions for the gluon magnetic and electric masses. The singular behavior of m(T) is established and its parameters are determined numerically. The singularity found is shown to correctly reproduce the chiral phase transition, and the temperature limits obtained for m(T) are qualitatively correct. The complete phase diagram of QCD4 in the (μ,T) plane is briefly discussed. (orig.)
g-factor calculations from the generalized seniority approach
Maheshwari, Bhoomika; Jain, Ashok Kumar
2018-05-01
The generalized seniority approach that we proposed to understand the B(E1)/B(E2)/B(E3) properties of semi-magic nuclei has been widely successful in explaining them and has led to an expansion in the scope of seniority isomers. In the present paper, we apply the generalized seniority scheme to understand the behavior of g-factors in semi-magic nuclei. We find that the magnetic moments and g-factors show a particle-number-independent behavior, as expected, and the understanding is consistent with the explanation of the transition probabilities.
Eigenvalues calculation algorithms for λ-modes determination. Parallelization approach
Energy Technology Data Exchange (ETDEWEB)
Vidal, V. [Universidad Politecnica de Valencia (Spain). Departamento de Sistemas Informaticos y Computacion; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Departamento de Ingenieria Quimica y Nuclear; Ginestart, D. [Universidad Politecnica de Valencia (Spain). Departamento de Matematica Aplicada
1997-03-01
In this paper, we review two methods to obtain the λ-modes of a nuclear reactor, the Subspace Iteration method and Arnoldi's method, which are popular methods for solving the partial eigenvalue problem for a given matrix. In the application developed for the neutron diffusion equation we include improved acceleration techniques for both methods. We also propose two parallelization approaches for these methods, a coarse-grain parallelization and a fine-grain one. We have tested the developed algorithms on two realistic problems, focusing on the efficiency of the methods according to the CPU times. (author).
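Both methods reviewed here generalise the basic power iteration for a dominant eigenpair, which can be sketched as follows (the matrix is a random stand-in, not a discretised diffusion operator):

```python
import numpy as np

# Minimal power-iteration sketch for the dominant eigenpair, the building
# block that subspace iteration and Arnoldi's method generalise.
rng = np.random.default_rng(0)
A = rng.random((50, 50))         # random nonnegative stand-in matrix

v = np.ones(A.shape[0])
for _ in range(500):
    w = A @ v
    v = w / np.linalg.norm(w)    # repeatedly apply A and renormalise
lam = v @ A @ v                  # Rayleigh-quotient eigenvalue estimate

# Compare against a direct dense solve.
exact = max(np.linalg.eigvals(A).real)
print(lam, exact)
```

Subspace iteration carries a block of vectors instead of one, and Arnoldi's method builds an orthonormal Krylov basis from the same matrix-vector products, which is what makes both attractive for large sparse reactor problems.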
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that, within a measurement series from a specific laboratory, the reported viscosity values are often overestimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published high-temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects these errors. The model standard error was 9-17°C, with R² = 0.985-0.989. The 95% prediction confidence interval for glass in mass production depends largely on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.
Endoscopic endonasal approach for mass resection of the pterygopalatine fossa
Directory of Open Access Journals (Sweden)
Jan Plzák
Full Text Available OBJECTIVES: Access to the pterygopalatine fossa is very difficult due to its complex anatomy. Therefore, an open approach is traditionally used, but morbidity is unavoidable. To overcome this problem, an endoscopic endonasal approach was developed as a minimally invasive procedure. The surgical aim of the present study was to evaluate the utility of the endoscopic endonasal approach for the management of both benign and malignant tumors of the pterygopalatine fossa. METHOD: We report our experience with the endoscopic endonasal approach for the management of both benign and malignant tumors and summarize recent recommendations. A total of 13 patients underwent surgery via the endoscopic endonasal approach for pterygopalatine fossa masses from 2014 to 2016. This case group consisted of 12 benign tumors (10 juvenile nasopharyngeal angiofibromas and two schwannomas) and one malignant tumor. RESULTS: No recurrent tumor developed during the follow-up period. One residual tumor (juvenile nasopharyngeal angiofibroma) that remained in the cavernous sinus was stable. There were no significant complications. Typical sequelae included hypesthesia of the maxillary nerve, trismus, and dry eye syndrome. CONCLUSION: The low frequency of complications together with the high efficacy of resection support the use of the endoscopic endonasal approach as a feasible, safe, and beneficial technique for the management of masses in the pterygopalatine fossa.
Endoscopic endonasal approach for mass resection of the pterygopalatine fossa
Plzák, Jan; Kratochvil, Vít; Kešner, Adam; Šurda, Pavol; Vlasák, Aleš; Zvěřina, Eduard
2017-01-01
OBJECTIVES: Access to the pterygopalatine fossa is very difficult due to its complex anatomy. Therefore, an open approach is traditionally used, but morbidity is unavoidable. To overcome this problem, an endoscopic endonasal approach was developed as a minimally invasive procedure. The surgical aim of the present study was to evaluate the utility of the endoscopic endonasal approach for the management of both benign and malignant tumors of the pterygopalatine fossa. METHOD: We report our experience with the endoscopic endonasal approach for the management of both benign and malignant tumors and summarize recent recommendations. A total of 13 patients underwent surgery via the endoscopic endonasal approach for pterygopalatine fossa masses from 2014 to 2016. This case group consisted of 12 benign tumors (10 juvenile nasopharyngeal angiofibromas and two schwannomas) and one malignant tumor. RESULTS: No recurrent tumor developed during the follow-up period. One residual tumor (juvenile nasopharyngeal angiofibroma) that remained in the cavernous sinus was stable. There were no significant complications. Typical sequelae included hypesthesia of the maxillary nerve, trismus, and dry eye syndrome. CONCLUSION: The low frequency of complications together with the high efficacy of resection support the use of the endoscopic endonasal approach as a feasible, safe, and beneficial technique for the management of masses in the pterygopalatine fossa. PMID:29069259
A Variational Approach to Enhanced Sampling and Free Energy Calculations
Parrinello, Michele
2015-03-01
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling which is based on the addition of an external bias which helps overcoming the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
Variational Approach to Enhanced Sampling and Free Energy Calculations
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
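As a hedged sketch of the functional described (using the symbols as they are commonly defined: $F(s)$ the free energy along the collective variables $s$, $V(s)$ the bias, $p(s)$ a chosen target distribution, and $\beta$ the inverse temperature), the variational principle can be written:

```latex
\Omega[V] \;=\; \frac{1}{\beta}\,
\ln \frac{\int \mathrm{d}s \; e^{-\beta\left[F(s)+V(s)\right]}}
         {\int \mathrm{d}s \; e^{-\beta F(s)}}
\;+\; \int \mathrm{d}s \; p(s)\, V(s)
```

At the minimum the bias satisfies $V(s) = -F(s) - \beta^{-1}\ln p(s)$ up to an irrelevant constant, so minimizing $\Omega[V]$ recovers the free energy surface directly from the optimal bias.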
International Nuclear Information System (INIS)
Yu-Min, Liu; Zhong-Yuan, Yu; Xiao-Min, Ren
2009-01-01
Calculations of electronic structures about the semiconductor quantum dot and the semiconductor quantum ring are presented in this paper. To reduce the calculation costs, for the quantum dot and the quantum ring, their simplified axially symmetric shapes are utilized in our analysis. The energy dependent effective mass is taken into account in solving the Schrödinger equations in the single band effective mass approximation. The calculated results show that the energy dependent effective mass should be considered only for relatively small volume quantum dots or small quantum rings. For large size quantum materials, both the energy dependent effective mass and the parabolic effective mass can give the same results. The energy states and the effective masses of the quantum dot and the quantum ring as a function of geometric parameters are also discussed in detail. (general)
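The self-consistency loop implied by an energy-dependent effective mass can be sketched for the simplest case, an infinite well in the single-band approximation with a first-order nonparabolicity correction m(E) = m0(1 + αE); the parameters are illustrative, not fitted to a real material:

```python
import math

# Sketch of a self-consistent single-band calculation with an
# energy-dependent effective mass m(E) = m0 * (1 + alpha * E), solved for
# the ground state of an infinite well by fixed-point iteration.
hbar = 1.054571817e-34                    # J s
m0 = 0.067 * 9.1093837015e-31             # GaAs-like band-edge mass [kg]
alpha = 1.0 / (1.424 * 1.602176634e-19)   # ~1/E_gap [1/J], illustrative
L = 5e-9                                  # well width [m]

def E_box(m):
    """Ground-state energy of an infinite well for mass m."""
    return (hbar * math.pi / L) ** 2 / (2 * m)

E = E_box(m0)                 # parabolic-band starting guess
for _ in range(100):          # iterate until m(E) and E are consistent
    E = E_box(m0 * (1 + alpha * E))

eV = 1.602176634e-19
print(E_box(m0) / eV * 1e3, E / eV * 1e3)  # meV, parabolic vs. nonparabolic
```

Consistent with the abstract, the correction matters for small (strongly confined) structures, where E is large, and fades for large ones, where αE is negligible.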
UAV-based NDVI calculation over grassland: An alternative approach
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers, such as MODIS, with a moderate ground resolution of up to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small and light instruments are particularly suitable for mounting on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolutions on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. Therefore, we propose an alternative, considerably cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum thanks to the removal of its internal infrared filter, with a mounted optical filter additionally blocking all wavelengths below 700 nm; (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons. First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola). All imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
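Once co-registered NIR and red reflectance maps are available, the NDVI computation itself is a one-liner; a sketch with synthetic arrays standing in for the camera-derived rasters:

```python
import numpy as np

# NDVI from co-registered reflectance rasters; the arrays here are
# synthetic stand-ins for the two-camera reflectance maps.
nir = np.array([[0.45, 0.50], [0.60, 0.30]])  # near-infrared reflectance
red = np.array([[0.08, 0.10], [0.05, 0.25]])  # red reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi)  # dense vegetation approaches 1, sparse cover or soil nears 0
```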
Prediction of fission mass-yield distributions based on cross section calculations
International Nuclear Information System (INIS)
Hambsch, F.-J.; G.Vladuca; Tudora, Anabella; Oberstedt, S.; Ruskov, I.
2005-01-01
For the first time, fission mass-yield distributions have been predicted based on an extended statistical model for fission cross-section calculations. In this model, the concept of the multi-modality of the fission process has been incorporated. The three most dominant fission modes are taken into account: the two asymmetric modes, standard I (S1) and standard II (S2), and the symmetric superlong (SL) mode. De-convoluted fission cross sections for the S1, S2 and SL modes of 235,238U(n,f) and 237Np(n,f), based on experimental branching ratios, were calculated for the first time in the incident-neutron energy range from 0.01 to 5.5 MeV, providing good agreement with the experimental fission cross-section data. The branching ratios obtained from the modal fission cross-section calculations have been used to deduce the corresponding fission-yield distributions, including mean values, also for incident neutron energies hitherto not accessible to experiment.
A simple approach to calculate active power of electrosurgical units
Directory of Open Access Journals (Sweden)
André Luiz Regis Monteiro
Full Text Available Abstract Introduction: Despite more than a hundred years of electrosurgery, only a few electrosurgical equipment manufacturers have developed methods to regulate the active power delivered to the patient, usually around an arbitrary setpoint. In fact, no manufacturer has a method to measure the active power actually delivered to the load. Measuring the delivered power and computing it fast enough to avoid injury to the organic tissue is challenging. If voltage and current signals can be sampled in time and discretized in the frequency domain, a simple and very fast multiplication process can be used to determine the active power. Methods: This paper presents an approach for measuring active power at the output power stage of electrosurgical units, with mathematical shortcuts based on a simple multiplication procedure of discretized variables (frequency-domain vectors) obtained through the Discrete Fourier Transform (DFT) applied to time-sampled voltage and current vectors. Results: Comparative results between simulations and a practical experiment are presented, all in accordance with the requirements of the applicable industry standards. Conclusion: An analysis is presented comparing the active power obtained analytically from well-known voltage and current signals against a computational methodology based on vector manipulation, using the DFT only for the time-to-frequency domain transformation. The greatest advantage of this method is that it determines the active power of noisy and phase-shifted signals without complex DFT or ordinary transform methodologies, and without sophisticated computing techniques such as convolution. All results presented errors substantially lower than the thresholds defined by the applicable standards.
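The core idea, active power from the product of frequency-domain voltage and current vectors, can be sketched and checked against the time-domain definition P = mean of v(t)*i(t); the signals below are synthetic, not from the paper's experiments:

```python
import numpy as np

# Sketch of active power computed two ways: the time-domain mean of
# v(t)*i(t), and a frequency-domain cross-power sum (Parseval's relation).
# Signals are synthetic; an integer number of periods is sampled.
fs, n = 100_000, 1000
t = np.arange(n) / fs
v = 100 * np.sin(2 * np.pi * 500 * t)               # volts
i = 2 * np.sin(2 * np.pi * 500 * t - np.pi / 4)     # amps, phase-shifted

# Time-domain reference: P = <v * i>.
p_time = np.mean(v * i)

# Frequency domain: with DFT vectors normalised by n, the active power is
# the real part of the sum over bins of V_k * conj(I_k).
V = np.fft.fft(v) / n
I = np.fft.fft(i) / n
p_freq = np.sum(V * np.conj(I)).real

print(p_time, p_freq)
```

For these sinusoids both routes give V·I/2·cos(45°) ≈ 70.7 W; the frequency-domain form is the one that extends naturally to noisy, multi-harmonic electrosurgical waveforms.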
Calculation of the stationary mass velocity of steam mixtures and of the recoil forces occurring
International Nuclear Information System (INIS)
Pana, P.
1976-11-01
The best-known theories for steam flow (e.g. after a pipe rupture within the primary coolant loop of a nuclear power plant) are discussed in depth: the theory of the modified Bernoulli equation for the subcooled region, the Moody theory and the homogeneous equilibrium theory for the steam-water region, and the perfect gas theory for the superheated region. The calculated mass velocity and thrust coefficient are shown for the whole h-s chart, including various initial pressures and zeta values as parameters. The comparison of the results leads to important conclusions concerning the conservatism and appropriateness of the considered theories, for flow with and without friction. (orig./HP) [de]
Martínez-Cifuentes, Maximiliano; Clavijo-Allancan, Graciela; Zuñiga-Hormazabal, Pamela; Aranda, Braulio; Barriga, Andrés; Weiss-López, Boris; Araya-Maturana, Ramiro
2016-07-05
A series of a new type of tetracyclic carbazolequinones incorporating a carbonyl group at the ortho position relative to the quinone moiety was synthesized and analyzed by tandem electrospray ionization mass spectrometry (ESI/MS-MS), using collision-induced dissociation (CID) to dissociate the protonated species. Theoretical parameters such as the molecular electrostatic potential (MEP), local Fukui functions and the local Parr function for electrophilic attack, as well as proton affinity (PA) and gas-phase basicity (GB), were used to explain the preferred protonation sites. Transition states of some main fragmentation routes were obtained, and the energies calculated at the density functional theory (DFT) B3LYP level were compared with those obtained by ab initio quadratic configuration interaction with single and double excitation (QCISD). The results are in accordance with the observed distribution of ions. The nature of the substituents on the aromatic ring has a notable impact on the fragmentation routes of the molecules.
Energy Technology Data Exchange (ETDEWEB)
Bencs, László, E-mail: bencs.laszlo@wigner.mta.hu [Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest (Hungary); Laczai, Nikoletta [Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest (Hungary); Ajtony, Zsolt [Institute of Food Science, University of West Hungary, H-9200 Mosonmagyaróvár, Lucsony utca 15–17 (Hungary)
2015-07-01
A combination of earlier convective–diffusive vapor-transport models is described to extend the calculation scheme for sensitivity (characteristic mass, m0) in graphite furnace atomic absorption spectrometry (GFAAS). This approach encompasses the influence of forced convection of the internal furnace gas (mini-flow), combined with concentration diffusion of the analyte atoms, on the residence time in a spatially isothermal furnace, i.e., the standard design of the transversely heated graphite atomizer (THGA). A couple of relationships for the diffusional and convectional residence times were studied and compared, including factors accounting for the effects of the sample/platform dimension and the dosing hole. These model approaches were then applied to the particular cases of the analytes Ag, As, Cd, Co, Cr, Cu, Fe, Hg, Mg, Mn, Mo, Ni, Pb, Sb, Se, Sn, V and Zn. To verify the accuracy of the calculations, the experimental m0 values were determined with a standard THGA furnace, operating under either stopped flow or mini-flow (50 cm³ min⁻¹) of the internal sheath gas during atomization. The theoretical and experimental ratios of m0(mini-flow) to m0(stop-flow) were closely similar for each analyte studied. Likewise, the calculated m0 data gave fairly good agreement with the corresponding experimental m0 values for stopped and mini-flow conditions, i.e., the ratio ranged between 0.62 and 1.8 with an average of 1.05 ± 0.27. This indicates the usability of the current model calculations for checking the operation of a given GFAAS instrument and the applied methodology. - Highlights: • A calculation scheme for convective–diffusive vapor loss in GFAAS is described. • Residence time (τ) formulas were compared for sensitivity (m0) in a THGA furnace. • Effects of the sample/platform dimension and dosing hole on τ were assessed. • Theoretical m0 of 18 analytes were
Thomas-Fermi approach to nuclear mass formula. Pt. 1
International Nuclear Information System (INIS)
Dutta, A.K.; Arcoragi, J.P.; Pearson, J.M.; Tondeur, F.
1986-01-01
With a view to having a more secure basis for the nuclear mass formula than is provided by the drop(let) model, we make a preliminary study of the possibilities offered by the Skyrme-ETF method. Two ways of incorporating shell effects are considered: the 'Strutinsky-integral' method of Chu et al. and the 'expectation-value' method of Brack et al. Each of these methods is compared with the HF method in an attempt to see how reliably they extrapolate from the known region of the nuclear chart out to the neutron-drip line. The Strutinsky-integral method is shown to perform particularly well and to offer a promising approach to a more reliable mass formula. (orig.)
International Nuclear Information System (INIS)
Janecky, D.R.
1988-01-01
A computational modeling code (EQPS←S) has been developed to examine sulfur isotopic distribution pathways coupled with calculations of chemical mass-transfer pathways. A post-processor approach to EQ6 calculations was chosen so that a variety of isotopic pathways could be examined for each reaction pathway. Two types of major bounding conditions were implemented: (1) equilibrium isotopic exchange between sulfate and sulfide species, or exchange only accompanying chemical reduction and oxidation events; and (2) existence or lack of isotopic exchange between solution species and precipitated minerals, parallel to the open and closed chemical-system formulations of chemical mass-transfer modeling codes. All of the chemical data necessary to explicitly calculate isotopic distribution pathways are generated by most mass-transfer modeling codes and can be input to the EQPS code. Routines are built in to directly handle EQ6 tabular files. Chemical reaction models of seafloor hydrothermal vent processes and the accompanying sulfur isotopic distribution pathways illustrate the capabilities of coupling EQPS←S with EQ6 calculations, including the extent of the differences that can arise from the isotopic bounding-condition assumptions described above. 11 refs., 2 figs
Trépanier, Sylvain; Mathieu, Lucie; Daigneault, Réal; Faure, Stéphane
2016-04-01
This study proposes an artificial neural network-based method for predicting the unaltered (precursor) chemical compositions of hydrothermally altered volcanic rocks. The method aims at predicting the precursor's major-component contents (SiO2, FeOT, MgO, CaO, Na2O, and K2O). The prediction is based on ratios of elements generally immobile during alteration processes, i.e. Zr, TiO2, Al2O3, Y, Nb, Th, and Cr, which are provided as inputs to the neural networks. Multi-layer perceptron neural networks were trained on a large dataset of least-altered volcanic rock samples that documents a wide range of volcanic rock types, tectonic settings and ages. The precursors thus predicted are then used to perform mass-balance calculations. Various statistics were calculated to validate the predictions of the precursors' major components; these indicate that, overall, the predictions are precise and accurate. For example, rank-based correlation coefficients were calculated to compare predicted and analysed values from a least-altered test dataset that had not been used to train the networks. Coefficients over 0.87 were obtained for all components except Na2O (0.77), indicating that predictions for the alkalis may be less reliable. Predictions are also satisfactory for most volcanic rock compositions, except for ultrapotassic (ultra-K) rocks. The proposed method provides an easy and rapid solution to the often difficult task of determining appropriate volcanic precursor compositions for rocks modified by hydrothermal alteration. It is intended for large volcanic rock databases and is most useful, for example, in mineral exploration performed in complex or poorly known volcanic settings. The method is implemented as a simple C++ console program.
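The mass-balance step that follows the precursor prediction can be sketched with the classic single-precursor immobile-element method (here using Zr as the immobile monitor); all compositions below are invented for illustration:

```python
# Immobile-element (single-precursor) mass-balance sketch: the altered
# composition is rescaled by the precursor/altered ratio of an element
# assumed immobile (Zr), and gains/losses are read off per component.
# All numbers are invented placeholders, not data from the study.
precursor = {"SiO2": 58.0, "Na2O": 3.5, "K2O": 1.2, "Zr_ppm": 150.0}
altered   = {"SiO2": 52.0, "Na2O": 0.4, "K2O": 4.0, "Zr_ppm": 180.0}

# Reconstruction factor from the immobile element.
ef = precursor["Zr_ppm"] / altered["Zr_ppm"]

gains_losses = {
    ox: ef * altered[ox] - precursor[ox]   # + = gained, - = lost
    for ox in ("SiO2", "Na2O", "K2O")
}
for ox, d in gains_losses.items():
    print(f"{ox}: {d:+.2f} g per 100 g of precursor")
```

The neural network's role in the paper is precisely to supply the `precursor` composition when no least-altered equivalent is available in the field.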
40 CFR 75.19 - Optional SO2, NOX, and CO2 emissions calculation for low mass emissions (LME) units.
2010-07-01
§ 75.19 Optional SO2, NOX, and CO2 emissions calculation for low mass emissions (LME) units. (a... input, NOX, SO2, and CO2 mass emissions, and NOX emission rate under this part. If the owner or operator...
Downscaling of the global climate model data for the mass balance calculation of mountain glaciers
Directory of Open Access Journals (Sweden)
P. A. Morozova
2017-01-01
Full Text Available In this paper, we consider a hybrid method for downscaling GCM-generated meteorological fields to the characteristic spatial resolution usually used for modeling the mass balance of a single mountain glacier. The main purpose of the study is to develop a reliable forecasting method to evaluate the future state of mountain glaciation under changing climatic conditions. The method consists of two stages. In the first, dynamical stage, we use the results of calculations of the regional numerical model HadRM3P for the Black Sea-Caspian region with a spatial resolution of 25 km [22]. Initial conditions for HadRM3P were provided by the GCM developed at the Institute of Numerical Mathematics of RAS (INMCM4) [18]. Calculations were carried out for two time periods: the present climate (1971–2000) and the climate of the late 21st century (2071–2100), according to the greenhouse gas emission scenario RCP 8.5. In the second stage of downscaling, further regionalization is achieved by projecting the RCM-generated data onto a high-resolution (25 m) digital altitude model in a domain enclosing a target glacier. Altitude gradients of the surface air temperature and precipitation were derived from the model data. Both were then corrected using observational data. Incoming shortwave radiation was calculated separately in the mass-balance model, taking into account the characteristics of the slope, i.e. the exposure and shading of each cell. The method was then tested for the glaciers Marukh (Western Caucasus) and Jankuat (Central Caucasus), both for the present-day and for future climates. At the end of the 21st century, the air temperature rise predicted for the summer months was calculated to be about 5–6 °C, and for the winter about minus 2–3 °C. The change in annual precipitation is not significant, less than 10%. The increase in total shortwave radiation will be about 5%. These changes will result in the fact that
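The second, statistical stage reduces to projecting the coarse model fields onto the fine DEM with altitude gradients; a sketch for temperature, with invented numbers (the real method also handles precipitation and slope-dependent shortwave radiation):

```python
import numpy as np

# Second-stage downscaling sketch: project a coarse RCM temperature onto a
# fine DEM using an altitude gradient (lapse rate).  All values invented.
t_rcm = 270.0          # RCM grid-cell mean air temperature [K]
z_rcm = 2200.0         # mean elevation of that RCM cell [m]
lapse = -0.0065        # temperature gradient [K/m], derived from model data

dem = np.array([[2100.0, 2500.0],
                [2900.0, 3300.0]])   # fine-resolution DEM elevations [m]

t_fine = t_rcm + lapse * (dem - z_rcm)  # warmer below, colder above z_rcm
print(t_fine)
```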
Gohel, M C; Sarvaiya, K G; Shah, A R; Brahmbhatt, B K
2009-03-01
The objective of the present work was to propose a method for calculating the weight in the Moore and Flanner equation. The percentage coefficient of variation of the reference and test formulations at each time point was used to calculate the weight. Literature-reported data are used to demonstrate the applicability of the method. The advantages and applications of the new approach are described. The results show a drop in the value of the similarity factor compared with the approach proposed in earlier work. Scientists who need high accuracy in the calculation may use this approach.
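For reference, a weighted form of the Moore and Flanner similarity factor f2 can be sketched as below. The weighting hook shown (one weight per time point, e.g. derived from the %CV) is an illustrative assumption, not the exact formula proposed in the paper:

```python
import math

def weighted_f2(ref, test, weights):
    """Similarity factor f2 with per-time-point weights.

    The classic (unweighted) f2 uses w = 1 at every time point:
        f2 = 50 * log10(100 / sqrt(1 + mean((R - T)^2)))
    Here the squared differences are weighted, e.g. by the percentage
    coefficient of variation at each time point.
    """
    wsum = sum(weights)
    msd = sum(w * (r - t) ** 2
              for r, t, w in zip(ref, test, weights)) / wsum
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

ref = [20.0, 45.0, 70.0, 90.0]   # cumulative % dissolved, reference
tst = [18.0, 42.0, 68.0, 88.0]   # test formulation
f2_plain = weighted_f2(ref, tst, [1.0] * 4)   # reduces to classic f2
```

With unit weights the function reduces to the conventional f2 (values above 50 indicate similar dissolution profiles); non-uniform weights shift the result, which is the effect the paper quantifies.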
International Nuclear Information System (INIS)
Haffner, D.R.
1976-01-01
1 - Description of problem or function: PACTOLUS is a code for computing nuclear power costs using the discounted cash flow method. The cash flows are generated from input unit costs, time schedules and burnup data. CLOTHO calculates and communicates to PACTOLUS mass flow data to match a specified load factor history. 2 - Method of solution: Plant lifetime power costs are calculated using the discounted cash flow method. 3 - Restrictions on the complexity of the problem - Maxima of: 40 annual time periods into which all costs and mass flows are accumulated, 20 isotopic mass flows charged into and discharged from the reactor model
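In its simplest form, the discounted-cash-flow idea behind such power-cost codes reduces to a levelized unit cost: discounted total cost divided by discounted total output. The sketch below is a generic illustration, not the PACTOLUS input model:

```python
def levelized_cost(costs, energies, rate):
    """Levelized unit cost by the discounted-cash-flow method:
    present value of all costs divided by the present value of all
    energy production over the plant lifetime."""
    pv_cost = sum(c / (1.0 + rate) ** t
                  for t, c in enumerate(costs, start=1))
    pv_energy = sum(e / (1.0 + rate) ** t
                    for t, e in enumerate(energies, start=1))
    return pv_cost / pv_energy

# Three annual periods, constant cost and output, 5% discount rate;
# with constant ratios the discount factors cancel:
lc = levelized_cost([100.0] * 3, [50.0] * 3, 0.05)
```

In the real code the annual cash flows would be assembled from unit costs, time schedules, and the mass flow data supplied by CLOTHO.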
Piermattei, Livia; Carturan, Luca; Calligaro, Simone; Blasone, Giacomo; Guarnieri, Alberto; Tarolli, Paolo; Dalla Fontana, Giancarlo; Vettore, Antonio
2014-05-01
Digital elevation models (DEMs) of glaciated terrain are commonly used to measure changes in geometry and hence infer the mass balance of glaciers. Different tools and methods exist to obtain information about the 3D geometry of terrain. Recent improvements in the quality and performance of digital cameras for close-range photogrammetry, and the development of automatic digital photogrammetric processing, make the 'structure from motion' (SfM) photogrammetric technique competitive for high-quality 3D model production compared with efficient but expensive and logistically demanding survey technologies such as airborne and terrestrial laser scanning (TLS). The purpose of this work is to test the SfM approach, using a consumer-grade SLR camera and the low-cost computer-vision-based software package Agisoft Photoscan (Agisoft LLC), to monitor the mass balance of the Montasio Occidentale glacier, a 0.07 km², low-altitude, debris-covered glacier located in the Eastern Italian Alps. The quality of the 3D models produced by the SfM process has been assessed by comparison with digital terrain models obtained through TLS surveys carried out on the same dates. The TLS technique has indeed proved very effective in determining the volume change of this glacier in recent years. Our results show that the photogrammetric approach can produce point cloud densities comparable to those derived from TLS measurements. Furthermore, the horizontal and vertical accuracies are of the same order of magnitude as for TLS (centimetric to decimetric). The effect of different landscape characteristics (e.g. distance from the camera or terrain gradient) and of different substrata (rock, debris, ice, snow and firn) was also evaluated in terms of the accuracy of the SfM reconstruction relative to TLS. Given the good results obtained on the Montasio Occidentale glacier, it can be concluded that terrestrial photogrammetry, with the advantageous features of portability, ease of use and, above all, low costs
REFLOS, Fuel Loading and Cost from Burnup and Heavy Atomic Mass Flow Calculation in HWR
International Nuclear Information System (INIS)
Boettcher, W.; Schmidt, E.
1969-01-01
1 - Nature of physical problem solved: REFLOS is a programme for the evaluation of fuel-loading schemes in heavy water moderated reactors. The problems involved in this study are: a) Burn-up calculation for the reactor cell. b) Determination of reactivity behaviour, power distribution, attainable burn-up for both the running-in period and the equilibrium of a 3-dimensional heterogeneous reactor model; investigation of radial fuel movement schemes. c) Evaluation of mass flows of heavy atoms through the reactor and fuel cycle costs for the running-in, the equilibrium, and the shut down of a power reactor. If the subroutine for treating the reactor cell were replaced by a suitable routine, other reactors with weakly absorbing moderators could be analyzed. 2 - Method of solution: Nuclear constants and isotopic compositions of the different fuels in the reactor are calculated by the cell-burn-up programme and tabulated as functions of the burn-up rate (MWD/T). Starting from a known state of the reactor, the 3-dimensional heterogeneous reactor programme (applying an extension of the technique of Feinberg and Galanin) calculates reactivity and neutron flux distribution using one thermal and one or two fast neutron groups. After a given irradiation time, the new state of the reactor is determined, and new nuclear constants are assigned to the various defined locations in the reactor. Reloading of fuel may occur if the prescribed life of the reactor is reached or if the effective multiplication factor or the power form factor falls below a specified level. The scheme of reloading to be carried out is specified by a load vector, giving the number of channels to be discharged, the kind of movement from one to another channel and the type of fresh fuel to be charged for each single reloading event. After having determined the core states characterizing the equilibrium period, and having decided the fuel reloading scheme for the running-in period of the reactor life, the fuel
Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca
2018-05-08
Values of the mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective, so clinical laboratories must provide results as accurately as possible. Measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole-blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single-laboratory validation approach and the proficiency-testing approach. For the single-laboratory validation approach, we estimated the uncertainties associated with intermediate imprecision (using long-term internal quality control data) and with bias (using a certified reference material). We then combined them, together with the uncertainty of the calibrator-assigned values, into a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency-testing approach, the uncertainty was estimated similarly, but using data from internal and external quality control schemes to estimate the uncertainty related to bias. The estimated expanded uncertainties for single-laboratory validation and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. The results of the two top-down approaches were quite similar, which would confirm that either approach could be used to estimate the measurement uncertainty of whole-blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
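The top-down combination step described above follows the usual quadrature rule: combine the relative standard uncertainties, then multiply by a coverage factor. A minimal sketch, with illustrative component values rather than the study's data:

```python
import math

def expanded_uncertainty(u_imprecision, u_bias, u_cal, k=2.0):
    """Combine relative standard uncertainties (in %) from intermediate
    imprecision, bias, and calibrator-assigned values in quadrature,
    then apply a coverage factor k (k = 2 for roughly 95% coverage)."""
    u_combined = math.sqrt(u_imprecision ** 2 + u_bias ** 2 + u_cal ** 2)
    return k * u_combined

# Illustrative relative standard uncertainties (%):
U = expanded_uncertainty(4.0, 3.0, 2.5)
```

The two approaches in the paper differ only in how the bias component is obtained (certified reference material vs. internal/external quality control data), which is why their final expanded uncertainties come out so close.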
Differential population synthesis approach to mass segregation in M92
International Nuclear Information System (INIS)
Tobin, W.J.
1979-01-01
Spectra are presented of 26 low-metal stars and of the center and one-quarter-intensity positions of M92. Spectral coverage is from 390 to 870 nm, with resolution better than 1 nm in the blue and 2 nm in the red. Individual pixel signal-to-noise is about 100. Dwarf features are notably absent from the M92 spectra. Numerical estimates of 36 absorption features are extracted from every spectrum, as are two continuum indices. Mathematical models are constructed describing each feature's dependence on stellar color, luminosity, and metal content, and then used to estimate the metal content of 6 of the stars for which it is not known. For 10 features reliably measured at M92's center and edge, a mass segregation sensitivity parameter is derived from each feature's deduced luminosity dependence. The ratios of feature equivalent widths at cluster edge and center are compared to this sensitivity: no convincing evidence of mass segregation is seen. The only possible edge-to-center difference seen is in the Mg b 517.4 nm feature. Three of the 10 cluster features can be of interstellar origin, at least in part; in particular, the luminosity-sensitive Na D line cannot be used as a segregation indicator. The experience gained suggests that an integrated-spectrum approach to globular cluster mass segregation is very difficult. An appendix describes in detail the capabilities of the Pine Bluff Observatory 0.91 m telescope, Cassegrain grating spectrograph, and intensified Reticon dual diode-array detector. It is possible to determine a highly consistent wavelength calibration
A Real-Time Temperature Data Transmission Approach for Intelligent Cooling Control of Mass Concrete
Directory of Open Access Journals (Sweden)
Peng Lin
2014-01-01
Full Text Available The primary aim of the study presented in this paper is to propose a real-time temperature data transmission approach for intelligent cooling control of mass concrete. A mathematical description of a digital temperature control model is introduced in detail. Based on pipe-mounted and electrically linked temperature sensors, together with post-processing hardware and software, a stable, real-time, highly effective temperature data transmission solution is developed and utilized within the intelligent mass concrete cooling control system. Once the user has issued the relevant command, the proposed programmable logic controller (PLC) code performs all necessary steps without further interaction. The code can control the hardware, acquire and read data, perform calculations, and display the data accurately. Hardening concrete is an aggregate of complex physicochemical processes, including the liberation of heat. In an application case study, the proposed control system prevented unwanted structural change within the massive concrete blocks caused by these exothermic processes. In conclusion, the proposed temperature data transmission approach has proved very useful for the temperature monitoring of a high arch dam and is able to control thermal stresses in mass concrete for similar projects involving mass concrete.
A new approach for accurate mass assignment on a multi-turn time-of-flight mass spectrometer.
Hondo, Toshinobu; Jensen, Kirk R; Aoki, Jun; Toyoda, Michisato
2017-12-01
A simple, effective accurate-mass assignment procedure for a time-of-flight mass spectrometer is desirable. External mass calibration using a calibration standard together with an internal mass reference (lock mass) is a common technique for mass assignment; however, polynomial fitting can result in mass-dependent errors. Using the multi-turn time-of-flight mass spectrometer infiTOF-UHV, we were able to obtain multiple time-of-flight measurements of a monitored ion at several different numbers of laps, which were then used to calculate a mass calibration equation. We have developed a data acquisition system that simultaneously monitors spectra at several different lap conditions, with on-the-fly centroid determination and estimation of the scan law, which is a function of acceleration voltage, flight path, and instrumental time delay. Mass errors of less than 0.9 mDa were observed for assigned mass-to-charge ratios (m/z) between 4 and 134 using only ⁴⁰Ar⁺ as a reference. Estimating the scan law on the fly was also observed to provide excellent mass drift compensation.
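The core idea — that flight time grows linearly with lap number, with a per-lap period proportional to sqrt(m/z) — can be sketched as below. This is a simplified scan-law illustration with synthetic times, not the instrument's exact calibration model:

```python
import numpy as np

def fit_slope(laps, times):
    """Least-squares slope and intercept of flight time vs lap number.
    For a multi-turn TOF, t(n) = t0 + n * tau, where the per-lap
    period tau scales with sqrt(m/z) for a fixed circuit length and
    acceleration voltage."""
    slope, intercept = np.polyfit(laps, times, 1)
    return slope, intercept

def assign_mass(mz_ref, slope_ref, slope_unknown):
    """Since tau is proportional to sqrt(m/z), the m/z of an unknown
    ion follows from the ratio of fitted slopes against a single
    reference ion (here 40Ar+)."""
    return mz_ref * (slope_unknown / slope_ref) ** 2

laps = np.array([10, 20, 40, 80])
t_ar = 1.0 + 2.0 * laps   # synthetic flight times for 40Ar+ (arb. units)
t_x = 1.0 + 3.0 * laps    # synthetic flight times for an unknown ion
slope_ar, _ = fit_slope(laps, t_ar)
slope_x, _ = fit_slope(laps, t_x)
mz_x = assign_mass(39.962, slope_ar, slope_x)
```

Monitoring one ion at several lap counts over-determines the fit, which is what lets a single ⁴⁰Ar⁺ reference anchor assignments across a wide m/z range.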
A mass balance approach to investigate arsenic cycling in a petroleum plume.
Ziegler, Brady A; Schreiber, Madeline E; Cozzarelli, Isabelle M; Crystal Ng, G-H
2017-12-01
Natural attenuation of organic contaminants in groundwater can give rise to a series of complex biogeochemical reactions that release secondary contaminants to groundwater. In a crude-oil-contaminated aquifer, biodegradation of petroleum hydrocarbons is coupled with the reduction of ferric iron (Fe(III)) hydroxides in aquifer sediments. As a result, naturally occurring arsenic (As) adsorbed to Fe(III) hydroxides in the aquifer sediment is mobilized from sediment into groundwater. However, Fe(III) in the sediment of other zones of the aquifer has the capacity to attenuate dissolved As via resorption. In order to better evaluate how long-term biodegradation coupled with Fe reduction and As mobilization can redistribute As mass in a contaminated aquifer, we quantified the mass partitioning of Fe and As in the aquifer based on field observation data. Results show that Fe and As are spatially correlated in both groundwater and aquifer sediments. Mass partitioning calculations demonstrate that 99.9% of Fe and 99.5% of As are associated with aquifer sediment. The sediments act as both sources and sinks for As, depending on the redox conditions in the aquifer. Calculations reveal that at least 78% of the original As in sediment near the oil has been mobilized into groundwater over the 35-year lifespan of the plume. However, the calculations also show that only a small percentage of As (∼0.5%) remains in groundwater, due to resorption onto sediment. At the leading edge of the plume, where groundwater is suboxic, sediments sequester Fe and As, causing As to accumulate to concentrations 5.6 times greater than background concentrations. Current As sinks can serve as future sources of As as the plume evolves over time. The mass balance approach used in this study can be applied to As cycling in other aquifers where groundwater As results from biodegradation of an organic carbon point source coupled with Fe reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
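The sediment-vs-groundwater partitioning underlying numbers like "99.5% of As is associated with sediment" can be sketched per unit aquifer volume. The parameter values below are illustrative assumptions, not the study's field data:

```python
def mass_partition(c_sediment, c_water, bulk_density, porosity):
    """Fraction of total mass sorbed on sediment vs dissolved in
    groundwater, per litre of aquifer.

    c_sediment   : sorbed concentration (mg per kg sediment)
    c_water      : dissolved concentration (mg per L water)
    bulk_density : dry bulk density (kg sediment per L aquifer)
    porosity     : water-filled fraction of the aquifer volume
    """
    m_sed = c_sediment * bulk_density   # mg per L aquifer, on sediment
    m_water = c_water * porosity        # mg per L aquifer, dissolved
    total = m_sed + m_water
    return m_sed / total, m_water / total

# Illustrative values: 5 mg/kg sorbed As, 0.05 mg/L dissolved As:
f_sed, f_water = mass_partition(5.0, 0.05, 1.8, 0.3)
```

Even modest sorbed concentrations dominate the budget because a litre of aquifer holds far more sediment mass than water, which is why sediment acts as both the main source and the main sink in the plume.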
International Nuclear Information System (INIS)
Silva Neto, A.J. da; Alvim, A.C.M.
1989-01-01
This work describes the thermal-hydraulics code CROSS, designed for microcomputer calculation of heat and mass flow distributions in LWR cores using the Hardy Cross method. Equations for calculating the pressure variations in the coolant channels are presented, along with the derivation of a linear system of equations for the energy balance. This system is solved by the Benachievicz method. A case study is presented, showing that the methodology developed in this work can be used in place of forward-marching multi-channel codes. (author) [pt
International Nuclear Information System (INIS)
Kumar, K.
1990-03-01
It is shown that the quasi-classical condition for the validity of the WKB approximation is satisfied although the mass associated with the spontaneous fission of ²⁴⁰Pu varies by a factor of 12. A general numerical estimate of the allowed variations in such a mass is also given. (author)
Classification of mass matrices and the calculability of the Cabibbo angle
International Nuclear Information System (INIS)
Rizzo, T.G.
1981-01-01
We have analyzed all possible 2 × 2 mass matrices with two nonzero elements in an attempt to find which matrices yield a reasonable value of the Cabibbo angle upon diagonalization. We do not concern ourselves with the origin of these mass matrices (spontaneous symmetry breaking, bare-mass term, etc.). We find that, in the limit m_u/m_c → 0, only four possible relationships exist between sin²θ_C and the quark mass ratio m_d/m_s, only one of which is reasonable for the usual value of m_d/m_s (≈ 1/20). This limits the possible forms of the quark mass matrix to two in number, both of which have been discussed previously in the literature
Modelling of large sodium fires: A coupled experimental and calculational approach
International Nuclear Information System (INIS)
Astegiano, J.C.; Balard, F.; Cartier, L.; De Pascale, C.; Forestier, A.; Merigot, C.; Roubin, P.; Tenchine, D.; Bakouta, N.
1996-01-01
The consequences of large sodium leaks in the secondary circuit of Super-Phenix have been studied mainly with the FEUMIX code, on the basis of sodium fire experiments. This paper presents the status of the coupled AIRBUS (water experiment) FEUMIX approach, under development in order to strengthen the extrapolation made for the Super-Phenix secondary circuit calculations at large leakage flow rates. FEUMIX is a point code based on the concept of a global interfacial area between sodium and air; mass and heat transfers through this global area are assumed to be similar. The global interfacial transfer coefficient Sih is therefore an important parameter of the model. Correlations for the interfacial area are extracted from a large number of sodium tests. For the studies of a hypothetical large sodium leak in the secondary circuit of Super-Phenix, flow rates of more than 1 t/s have been considered, and extrapolation was made from the existing results (maximum flow rate 225 kg/s). In order to strengthen the extrapolation, a water test was devised on the basis of thermal-hydraulic similarity. The principle is to measure the interfacial area of a hot water jet in air, then to transpose Sih to sodium without combustion, and to use this value in FEUMIX with combustion modelling. The AIRBUS test section is a parallelepipedic gastight tank of 106 m³ (5.7 × 3.7 × 5 m), internally insulated. The water jet is injected from a heated external auxiliary tank into the cell using a pressurized air tank and a specific valve. The main measurements performed during each test are injected flow rate, air pressure, water temperature, and gas temperature. A first series of tests was performed in order to qualify the methodology: typical FCA and IGNA sodium fire tests were reproduced in AIRBUS, and a comparison of the FEUMIX calculation using Sih values deduced from the water experiments shows satisfactory agreement. A second series of tests at large flow rates, corresponding to a large sodium leak in the secondary circuit of Super
A fully relativistic approach for calculating atomic data for highly charged ions
Energy Technology Data Exchange (ETDEWEB)
Zhang, Hong Lin [Los Alamos National Laboratory; Fontes, Christopher J [Los Alamos National Laboratory; Sampson, Douglas H [PENNSYLVANIA STATE UNIV
2009-01-01
We present a review of our fully relativistic approach to calculating atomic data for highly charged ions, highlighting a research effort that spans twenty years. Detailed discussions of both theoretical and numerical techniques are provided. Our basic approach is expected to provide accurate results for ions that range from approximately half ionized to fully stripped. Options for improving the accuracy and range of validity of this approach are also discussed. In developing numerical methods for calculating data within this framework, considerable emphasis is placed on techniques that are robust and efficient. A variety of fundamental processes are considered including: photoexcitation, electron-impact excitation, electron-impact ionization, autoionization, electron capture, photoionization and photorecombination. Resonance contributions to a variety of these processes are also considered, including discussions of autoionization, electron capture and dielectronic recombination. Ample numerical examples are provided in order to illustrate the approach and to demonstrate its usefulness in providing data for large-scale plasma modeling.
A semi-mechanistic approach to calculate the probability of fuel defects
International Nuclear Information System (INIS)
Tayal, M.; Millen, E.; Sejnoha, R.
1992-10-01
In this paper the authors describe the status of a semi-mechanistic approach to the calculation of the probability of fuel defects. This approach expresses the defect probability in terms of fundamental parameters such as local stresses, local strains, and fission product concentration. The calculations of defect probability continue to reflect the influences of the conventional parameters such as power ramp, burnup and CANLUB. In addition, the new approach provides a mechanism to account for the impacts of additional factors involving detailed fuel design and reactor operation, for example pellet density, pellet shape and size, sheath diameter and thickness, pellet/sheath clearance, and coolant temperature and pressure. The approach has been validated against a previous empirical correlation. An illustrative example shows how the defect thresholds are influenced by changes in the internal design of the element and in the coolant pressure. (Author) (7 figs., tab., 12 refs.)
Sibiriakova Olena Oleksandrivna
2015-01-01
In this research the author examines changes in approaches to the observation of mass communication. As a result of a systematization of the key theoretical models of communication, the author concludes that ideas about the process of measuring mass communication have evolved from linear models to multisided, multiple ones.
An algorithm for mass matrix calculation of internally constrained molecular geometries.
Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz
2008-01-28
Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases, depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The pre-exponential factor for this reaction is computed based on the harmonic model.
A first look at maximally twisted mass lattice QCD calculations at the physical point
Energy Technology Data Exchange (ETDEWEB)
Abdel-Rehim, A. [The Cyprus Institute, Nicosia (Cyprus). CaSToRC; Boucaud, P. [Paris XI Univ., Orsay (France). Laboratoire de Physique Theorique; Carrasco, N. [Valencia-CSIC Univ. (Spain). Dept. de Fisica Teorica; IFIC, Valencia (Spain); and others
2013-11-15
In this contribution, a first look at simulations using maximally twisted mass Wilson fermions at the physical point is presented. A lattice action including clover and twisted mass terms is presented, and the Monte Carlo histories of one run with two mass-degenerate flavours at a single lattice spacing are shown. Measurements from the light and heavy-light pseudoscalar sectors are compared to previous N_f = 2 results and their phenomenological values. Finally, the strategy for extending simulations to N_f = 2+1+1 is outlined.
Directory of Open Access Journals (Sweden)
In-Sung Cho
2017-11-01
Full Text Available The calculation of the thermophysical properties of stainless steel castings and its application to casting simulation is discussed. Accurate thermophysical properties of the casting alloys are necessary for valid simulation of the casting processes. Whereas previous thermophysical calculation software requires specific knowledge of thermodynamics, the calculation method proposed in the present study requires no special knowledge of thermodynamics, only the composition of the alloy. The proposed calculator is based on the CALPHAD approach to the modelling of multi-component alloys, especially stainless steels. It can calculate the thermophysical properties of an eight-component iron-based system (Fe-C-Si-Cr-Mn-Ni-Cu-Mo), and several Korean standard stainless steel alloys were calculated and discussed. The calculator can evaluate properties of the alloys such as density, heat capacity, enthalpy, and latent heat, based on the full Gibbs energy of each phase. It is expected that the proposed method can help casting experts devise casting designs and processes easily, not only for stainless steels but also for other alloy systems such as aluminum, copper, and zinc.
A modified CAS-CI approach for an efficient calculation of magnetic exchange coupling constants
Fink, Karin; Staemmler, Volker
2013-09-01
A modification of the conventional wavefunction-based CAS-CI method for the calculation of magnetic exchange coupling constants J in small molecules and transition metal complexes is presented. In general, CAS-CI approaches yield much too small values for J, since the energies of the important charge-transfer configurations are calculated with the ground-state orbitals and are therefore much too high. In the present approach we improve these energies by accounting for the relaxation of the orbitals in the charge-transfer configurations. The necessary relaxation energies R can be obtained in separate calculations using mononuclear or binuclear model systems. The method is applied to a few examples: small molecules, binuclear transition metal complexes, and bulk NiO. It allows fairly reliable estimates of J to be obtained at costs no higher than those of conventional CAS-CI calculations. Therefore, extended and very time-consuming perturbation theory (PT2), configuration interaction (CI), or coupled cluster (CC) schemes on top of the CAS-CI calculation can be avoided, and the modified CAS-CI (MCAS-CI) approach can be applied to rather large systems.
A semi-empirical approach to calculate gamma activities in environmental samples
International Nuclear Information System (INIS)
Palacios, D.; Barros, H.; Alfonso, J.; Perez, K.; Trujillo, M.; Losada, M.
2006-01-01
We propose a semi-empirical method to calculate radionuclide concentrations in environmental samples without the use of reference materials and avoiding the typical complexity of Monte Carlo codes. The calculation of total efficiencies was carried out from a relative efficiency curve (obtained from the gamma spectra data) and from the geometric (simulated by Monte Carlo), absorption, sample, and intrinsic efficiencies at energies between 130 and 3000 keV. The absorption and sample efficiencies were determined from the mass absorption coefficients, obtained with the web program XCOM. Deviations between computed results and measured efficiencies for the RGTh-1 reference material are mostly within 10%. Radionuclide activities in marine sediment samples calculated by the proposed method and by the experimental relative method were in satisfactory agreement. The developed method can be used for routine environmental monitoring when efficiency uncertainties of 10% are acceptable. (Author)
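The decomposition of the total efficiency into geometric, absorption, sample, and intrinsic factors can be sketched as a simple product; all numerical values below are illustrative assumptions, not the paper's calibration data:

```python
def gamma_activity(net_counts, live_time, emission_prob,
                   eff_geometric, eff_absorption, eff_sample,
                   eff_intrinsic):
    """Activity (Bq) from a net peak area, treating the total
    detection efficiency as the product of geometric, absorption,
    sample, and intrinsic efficiencies."""
    eff_total = (eff_geometric * eff_absorption
                 * eff_sample * eff_intrinsic)
    return net_counts / (eff_total * emission_prob * live_time)

# 10000 net counts in 3600 s for a gamma line emitted in 85% of
# decays, with illustrative partial efficiencies:
A = gamma_activity(10000, 3600, 0.85, 0.05, 0.9, 0.95, 0.6)
```

In the semi-empirical scheme, each factor comes from a different source (Monte Carlo geometry simulation, XCOM mass absorption coefficients, the relative efficiency curve), which is what removes the need for a matched reference material.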
Error estimates for ice discharge calculated using the flux gate approach
Navarro, F. J.; Sánchez Gámez, P.
2017-12-01
Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground-penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to obtain the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
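Combining the area and velocity errors into a discharge error follows standard quadrature propagation for independent errors. A minimal sketch; the gate geometry, velocity, and error percentages below are illustrative assumptions:

```python
import math

def ice_discharge(area, velocity, sigma_area, sigma_velocity,
                  density=917.0):
    """Ice discharge through a flux gate, Q = rho * A * v, with the
    relative error propagated in quadrature from the cross-sectional
    area and velocity errors (assumed independent).

    area     : gate cross-sectional area (m^2)
    velocity : mean ice velocity through the gate (m/yr)
    Returns (Q, sigma_Q) in kg/yr.
    """
    q = density * area * velocity
    rel_err = math.sqrt((sigma_area / area) ** 2
                        + (sigma_velocity / velocity) ** 2)
    return q, q * rel_err

# 1.5e5 m^2 gate, 100 m/yr velocity, 20% area and 5% velocity errors:
q, sigma_q = ice_discharge(1.5e5, 100.0, 0.2 * 1.5e5, 5.0)
```

Because the relative errors add in quadrature, a 20% area error dominates a 5% velocity error, which is why the choice of cross-section approximation (parabolic vs. quartic) matters so much for individual glaciers.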
Generic model for calculating carbon footprint of milk using four different LCA modelling approaches
DEFF Research Database (Denmark)
Dalgaard, Randi; Schmidt, Jannick Højrup; Flysjö, Anna
2014-01-01
The aim of the study is to develop a tool, which can be used for calculation of carbon footprint (using a life cycle assessment (LCA) approach) of milk both at a farm level and at a national level. The functional unit is ‘1 kg energy corrected milk (ECM) at farm gate’ and the applied methodology...
Walitt, L.
1982-01-01
The VANS successive-approximation numerical method was extended to the computation of three-dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and for a transonic centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was performed on a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with the corresponding measurements. The transonic impeller computation was conducted to test the newly developed, locally mass-flux-conservative cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triple-point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that a cross-sectional computation with a locally mass-flux-conservative continuity equation is required to compute the shock waves in regions of supersonic relative flow.
Microscopic calculation of level densities: the shell model Monte Carlo approach
International Nuclear Information System (INIS)
Alhassid, Yoram
2012-01-01
The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in a recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10^29. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of ¹⁶²Dy and found it to agree well with experiments.
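The distinction in (v) between state densities and level densities can be sketched numerically. The spin-cutoff Gaussian used below is a standard textbook assumption for the spin distribution, not the SMMC spin projection itself:

```python
import math

def level_density_from_state_density(rho_state: float, sigma: float, j_max: int = 30) -> float:
    """Distribute a total state density over integer spins J = 0..j_max
    with the spin-cutoff model, then count levels: each level of spin J
    contains 2J+1 degenerate states, so the level density is smaller."""
    # unnormalized weight of *states* carrying spin J
    w = [(2 * j + 1) * math.exp(-(j + 0.5) ** 2 / (2.0 * sigma ** 2))
         for j in range(j_max + 1)]
    total = sum(w)
    states_per_spin = [rho_state * wj / total for wj in w]
    levels_per_spin = [s / (2 * j + 1) for j, s in enumerate(states_per_spin)]
    return sum(levels_per_spin)

rho_level = level_density_from_state_density(1.0e6, sigma=4.0)
print(f"{rho_level:.3e}")  # always below the state density of 1.0e6
```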
Risk-oriented approach application at planning and organizing antiepidemic provision of mass events
Directory of Open Access Journals (Sweden)
D.V. Efremenko
2017-03-01
Full Text Available Mass events tend to become more and more dangerous for population health, as they cause various health risks, including risks of infectious pathologies. Our research goal was to work out scientifically grounded approaches to assessing and managing epidemiologic risks, and to analyze how these approaches were applied during the preparation for and staging of the Olympics-2014 and other mass events held in 2014-2016. We assessed the risks of epidemiologic complications using diagnostic test-systems and a new technique that allows for the peculiarities of mass events. The technique is based on ranking infections into 3 potential danger categories according to criteria representing quantitative and qualitative predictive parameters (predictors). Application of a risk-oriented approach and multi-factor analysis allowed us to determine the maximum possible requirements for providing sanitary-epidemiologic welfare for each separate nosologic form. Enhancing our laboratory base with test-systems for specific indication, as per the accomplished calculations, enabled us, on the one hand, to secure the required preparations and, on the other hand, to avoid unnecessary expenditures. To facilitate decision-making during the Olympics-2014 we used an innovative product, namely a computer program based on a geoinformation system (GIS). It helped us to simplify and accelerate information exchange within intra- and interdepartmental interaction. A "dynamic epidemiologic threshold" was calculated daily for measles, chickenpox, acute enteric infections and acute respiratory viral infections of various etiology. If it was exceeded, or if an "epidemiologic spot" became possible for one or several nosologies, an automatic warning appeared in the GIS. Planning prevention activities regarding feral herd infections and zoogenous extremely dangerous infections which were endemic
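The abstract does not give the formula behind the "dynamic epidemiologic threshold"; a minimal sketch of one plausible construction (baseline mean of recent daily counts plus a multiple of their standard deviation, an assumption made here for illustration) is:

```python
from statistics import mean, stdev

def dynamic_threshold(daily_counts, k=2.0):
    """Hypothetical dynamic threshold: recent-baseline mean plus k
    standard deviations. The GIS product's actual rule is not stated
    in the abstract; this only illustrates the alerting idea."""
    return mean(daily_counts) + k * stdev(daily_counts)

history = [3, 5, 4, 6, 5, 4, 5]   # e.g. last week's daily measles reports
today = 12
if today > dynamic_threshold(history):
    print("automatic warning: threshold exceeded")
```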
Calculation of Complexity Costs – An Approach for Rationalizing a Product Program
DEFF Research Database (Denmark)
Hansen, Christian Lindschou; Mortensen, Niels Henrik; Hvam, Lars
2012-01-01
This paper proposes an operational method for rationalizing a product program based on the calculation of complexity costs. The method takes its starting point in the calculation of complexity costs on a product program level. This is done throughout the value chain, ranging from component inventories at the factory sites all the way to the distribution of finished goods from distribution centers to the customers. The method proposes a step-wise approach including the analysis, quantification and allocation of product program complexity costs by means of identifying a number ... of a product program. These findings represent an improved decision basis for the planning of reactive and proactive initiatives of rationalizing a product program.
Dynamics calculation with variable mass of mountain self-propelled chassis
Directory of Open Access Journals (Sweden)
R.M. Makharoblidze
2016-12-01
Full Text Available Many technological processes in the field of agricultural production mechanization, such as sowing grain crops, planting root-tuber crops, fertilizing, spraying and dusting, pressing feed materials, harvesting of various cultures, etc., are performed by machine-tractor units with variable mass of links or of processed media and materials. In recent years, systems for the automatic control, adjustment and monitoring of technological processes and working members in agricultural production have also been developing. The dynamics of the transition processes of a mountain self-propelled chassis with variable mass is studied, for real disconnection or joining of masses, most often taken as a function of movement (m(t) = ct). Formulas are derived for the change of the velocity of movement as a function of the displacement of the unit, and the dependence of this velocity on tractor and technological machine performance is established, taking into account the gradual addition or removal of agricultural material masses. From the resulting equation the linear movement of the machine-tractor unit can be defined. From the obtained expressions we can define the basic operating parameters of a machine-tractor unit with variable mass. The results of this research could be applied to the definition of unit characteristics and to the development of new agricultural tractors.
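The velocity of a unit whose mass grows linearly can be sketched from a simple momentum balance. The closed form below assumes material is picked up at zero relative velocity and a constant net force, which is an illustrative simplification of the paper's model, not its actual derivation:

```python
def velocity_variable_mass(t, m0, v0, F, c):
    """Velocity of a unit gaining mass at rate c (kg/s) at zero relative
    velocity under constant net force F (N). Momentum balance
    d(m v)/dt = F with m(t) = m0 + c*t gives
    v(t) = (m0*v0 + F*t) / (m0 + c*t).  Sketch only; the paper's
    m(t) = ct model and force terms may differ."""
    return (m0 * v0 + F * t) / (m0 + c * t)

# 5-tonne unit at 2 m/s, 1 kN net force, loading 50 kg/s for 10 s
print(round(velocity_variable_mass(10.0, 5000.0, 2.0, 1000.0, 50.0), 3))
```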
Mass Society/Culture/Media: An Eclectic Approach.
Clavner, Jerry B.
Instructors of courses in mass society, culture, and communication start out facing three types of difficulties: the historical orientation of learning, the parochialism of various disciplines, and negative intellectually elitist attitudes toward mass culture/media. Added to these problems is the fact that many instructors have little or no…
International Nuclear Information System (INIS)
Ali, Ahmed; Parkhomenko, Alexander Ya.; Rusov, Aleksey V.
2013-12-01
We present a precise calculation of the dilepton invariant-mass spectrum and the decay rate for B^± → π^± l^+ l^- (l = e, μ) in the Standard Model (SM) based on the effective Hamiltonian approach for the b → d l^+ l^- transitions. With the Wilson coefficients already known to next-to-next-to-leading logarithmic (NNLL) accuracy, the remaining theoretical uncertainty in the short-distance contribution resides in the form factors f_+(q^2), f_0(q^2) and f_T(q^2). Of these, f_+(q^2) is well measured in the charged-current semileptonic decays B → π l ν_l and we use the B-factory data to parametrize it. The corresponding form factors for the B → K transitions have been calculated in the Lattice-QCD approach for large q^2 and extrapolated to the entire q^2 region using the so-called z-expansion. Using an SU(3)_F-breaking Ansatz, we calculate the B → π tensor form factor, which is consistent with the recently reported lattice B → π analysis obtained at large q^2. The prediction for the total branching fraction B(B^± → π^± μ^+ μ^-) = (1.88 +0.32/-0.21) × 10^-8 is in good agreement with the experimental value obtained by the LHCb collaboration. In the low-q^2 region, the Heavy-Quark Symmetry (HQS) relates the three form factors to each other. Accounting for the leading-order symmetry-breaking effects, and using data from the charged-current process B → π l ν_l to determine f_+(q^2), we calculate the dilepton invariant-mass distribution in the low-q^2 region in the B^± → π^± l^+ l^- decay. This provides a model-independent and precise calculation of the partial branching ratio for this decay.
Traino, Antonio C.; Di Martino, Fabio; Grosso, Mariano; Monzani, Fabio; Dardano, Angela; Caraccio, Nadia; Mariani, Giuliano; Lazzeri, Mauro
2005-05-01
Substantial reductions in thyroid volume (up to 70-80%) after radioiodine therapy of Graves' hyperthyroidism are common and have been reported in the literature. A relationship between thyroid volume reduction and outcome of 131I therapy of Graves' disease has been reported by some authors. This important result could be used to decide individually the optimal radioiodine activity A0 (MBq) to administer to the patient, but a predictive model relating the change in gland volume to A0 is required. Recently, a mathematical model of thyroid mass reduction during the clearance phase (30-35 days) after 131I administration to patients with Graves' disease has been published and used as the basis for prescribing the therapeutic thyroid absorbed dose. It is well known that the thyroid volume reduction goes on until 1 year after therapy. In this paper, a mathematical model to predict the final mass of Graves' diseased thyroids submitted to 131I therapy is presented. This model represents a tentative explanation of what occurs macroscopically after the end of the clearance phase of radioiodine in the gland (the so-called second-order effects). It is shown that the final thyroid mass depends on its basal mass, on the radiation dose absorbed by the gland and on a constant value α typical of thyroid tissue. α has been evaluated based on a set of measurements made in 15 reference patients affected by Graves' disease and submitted to 131I therapy. A predictive equation for the calculation of the final mass of thyroid is presented. It is based on macroscopic parameters measurable after a diagnostic 131I capsule administration (0.37-1.85 MBq), before giving the therapy. The final mass calculated using this equation is compared to the final mass of thyroid measured 1 year after therapy administration in 22 Graves' diseased patients. The final masses calculated and measured 1 year after therapy are in fairly good agreement (R = 0.81). The possibility, for the physician, to decide a
Calculation code of mass and heat transfer in a pulsed column for Purex process
International Nuclear Information System (INIS)
Tsukada, Takeshi; Takahashi, Keiki
1993-01-01
A calculation code for extraction behavior analysis in a pulsed column, employed in the extraction process of a reprocessing plant, was developed. This code was also combined with our previously developed calculation code for axial temperature profiles in a pulsed column. The one-dimensional dispersion model was employed for both the extraction behavior analysis and the axial temperature profile analysis. The reported values of the fluid characteristics coefficient, the transfer coefficient and the diffusivities in the pulsed column were used. The calculated steady-state concentration profiles of HNO₃, U and Pu agree well with the reported experimental results. The concentration and temperature profiles were calculated under operating conditions which induce abnormal U extraction behavior, i.e. the U extraction zone moves to the bottom of the column. Though there is a slight difference between calculated and experimental values, it appears that our code can be applied to simulation under normal operating conditions and relatively slowly transient conditions. Pu accumulation phenomena were analyzed with this code and the accumulation tendency is similar to the reported analysis results. (author)
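The one-dimensional dispersion model the code is built on can be sketched for a single solute in one phase. The finite-difference fragment below solves only the steady-state skeleton E c'' - u c' - k (c - c_eq) = 0 with assumed parameter values; the actual Purex code couples HNO₃, U and Pu across two phases with temperature feedback:

```python
import numpy as np

def axial_dispersion_profile(E, u, k, c_in, c_eq, L=4.0, n=200):
    """Steady-state 1-D axial dispersion with first-order mass transfer:
        E c'' - u c' - k (c - c_eq) = 0,  c(0) = c_in,  c'(L) = 0.
    Central differences on a uniform grid; returns the axial profile."""
    h = L / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0                       # inlet: fixed concentration
    b[0] = c_in
    for i in range(1, n - 1):
        A[i, i - 1] = E / h**2 + u / (2 * h)
        A[i, i] = -2 * E / h**2 - k
        A[i, i + 1] = E / h**2 - u / (2 * h)
        b[i] = -k * c_eq
    A[-1, -1] = 1.0                     # outlet: zero gradient
    A[-1, -2] = -1.0
    b[-1] = 0.0
    return np.linalg.solve(A, b)

c = axial_dispersion_profile(E=0.01, u=0.05, k=0.5, c_in=1.0, c_eq=0.2)
print(round(float(c[-1]), 3))           # outlet relaxes toward c_eq
```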
Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.
2010-01-01
Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be up to 20% and even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
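The quantile-based bootstrapping compared in the study can be sketched as follows; the synthetic sample below merely stands in for the 81 complement factor B observations, and the "average the resampled percentiles" choice is one simple variant of the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_reference_interval(values, n_boot=2000):
    """Quantile-based bootstrap RI: resample with replacement, take the
    2.5th and 97.5th percentiles of each resample, and average them."""
    values = np.asarray(values)
    lows, highs = [], []
    for _ in range(n_boot):
        s = rng.choice(values, size=values.size, replace=True)
        lows.append(np.percentile(s, 2.5))
        highs.append(np.percentile(s, 97.5))
    return float(np.mean(lows)), float(np.mean(highs))

sample = rng.normal(100.0, 10.0, size=81)   # stand-in data, not factor B
lo, hi = bootstrap_reference_interval(sample)
print(round(lo, 1), round(hi, 1))           # roughly mean ± 2 SD
```

The study's observation that this estimator is biased at small sample sizes can be reproduced by shrinking `size=81` and comparing against the parametric interval.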
Puzach, S. V.; Suleykin, E. V.; Akperov, R. G.; Nguyen, T. D.
2017-11-01
A new experimental-theoretical approach to assessing toxic gas concentrations in case of fire indoors is offered. Analytical formulas for the calculation of the CO average volume density are derived. These formulas do not contain the geometrical sizes of the room or the surface dimensions of the combustible materials and are therefore valid for both small-scale and large-scale fires. A small-scale experimental installation for modeling fire thermal and gas dynamics in a closed or open thermodynamic system has been designed. The results of experiments determining the dependence of the CO average volume density on the average volume temperature and the oxygen average volume density, as well as the dependence of the specific coefficients of CO emission and the specific mass rates of combustible material gasification on test time, during the burning of wood, transformer oil and PVC cable sheathing, are presented. The results of numerical experiments on CO density calculation in small- and large-scale rooms, using the proposed analytical solutions and integral, zone and field models of fire thermal and gas dynamics, are presented. A comparison with the experimental data obtained by the authors and given in the literature has been performed. It is shown that the CO density calculation in a full-scale room at the incipient stage of a fire can be carried out taking into account only the experimental dependences of CO on temperature or O2 density obtained from small-scale experiments, so that solving the carbon monoxide mass conservation equation is not necessary.
International Nuclear Information System (INIS)
Weinhorst, Bastian; Fischer, Ulrich; Lu, Lei; Qiu, Yuefeng; Wilson, Paul
2015-01-01
Highlights: • Comparison of different approaches for the use of CAD geometry for Monte Carlo transport calculations. • Comparison with regard to user-friendliness and computation performance. • Three approaches, namely conversion with McCad, unstructured mesh feature of MCNP6 and DAGMC. • Installation most complex for DAGMC, model preparation worst for McCad, computation performance worst for MCNP6. • Installation easiest for McCad, model preparation best for MCNP6, computation speed fastest for McCad. - Abstract: Computer aided design (CAD) is an important industrial way to produce high quality designs. Therefore, CAD geometries are in general used for engineering and the design of complex facilities like the ITER tokamak. Although Monte Carlo codes like MCNP are well suited to handle the complex 3D geometry of ITER for transport calculations, they rely on their own geometry description and are in general not able to directly use the CAD geometry. In this paper, three different approaches for the use of CAD geometries with MCNP calculations are investigated and assessed with regard to calculation performance and user-friendliness. The first method is the conversion of the CAD geometry into MCNP geometry employing the conversion software McCad developed by KIT. The second approach utilizes the MCNP6 mesh geometry feature for the particle tracking and relies on the conversion of the CAD geometry into a mesh model. The third method employs DAGMC, developed by the University of Wisconsin-Madison, for the direct particle tracking on the CAD geometry using a patched version of MCNP. The obtained results show that each method has its advantages depending on the complexity and size of the model, the calculation problem considered, and the expertise of the user.
On the stress calculation within phase-field approaches: a model for finite deformations
Schneider, Daniel; Schwab, Felix; Schoof, Ephraim; Reiter, Andreas; Herrmann, Christoph; Selzer, Michael; Böhlke, Thomas; Nestler, Britta
2017-08-01
Numerical simulations based on phase-field methods are indispensable in order to investigate interesting and important phenomena in the evolution of microstructures. Microscopic phase transitions are highly affected by mechanical driving forces and therefore the accurate calculation of the stresses in the transition region is essential. We present a method for stress calculations within the phase-field framework, which satisfies the mechanical jump conditions corresponding to sharp interfaces, although the sharp interface is represented as a volumetric region using the phase-field approach. This model is formulated for finite deformations, is independent of constitutive laws, and allows using any type of phase inherent inelastic strains.
Calculation of weighted averages approach for the estimation of Ping tolerance values
Silalom, S.; Carter, J.L.; Chantaramongkol, P.
2010-01-01
A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
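The weighted-averages idea can be sketched directly: a taxon's tolerance value is the abundance-weighted mean of the scores of the sites where it occurs. The numbers below are hypothetical, and the single `site_scores` axis stands in for the study's six chemical constituents:

```python
def weighted_average_tolerance(abundances, site_scores):
    """Weighted-average position of a taxon along a pollution gradient:
    sites where the taxon is abundant pull the value toward their
    score. Scores are assumed pre-scaled to the 0-10 Ping range."""
    num = sum(a * s for a, s in zip(abundances, site_scores))
    den = sum(abundances)
    return num / den

# hypothetical taxon found mostly at clean (low-score) sites
print(round(weighted_average_tolerance([40, 10, 2], [1.0, 4.0, 9.0]), 2))
```

A low result marks the taxon as pollution-sensitive; a taxon abundant mainly at degraded sites would score near 10.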
[Advances in mass spectrometry-based approaches for neuropeptide analysis].
Ji, Qianyue; Ma, Min; Peng, Xin; Jia, Chenxi
2017-07-25
Neuropeptides are an important class of endogenous bioactive substances involved in the function of the nervous system, connecting the brain with other neural and peripheral organs. Mass spectrometry-based neuropeptidomics is designed to study neuropeptides in a large-scale manner and to obtain important molecular information to further understand the mechanism of nervous system regulation and the pathogenesis of neurological diseases. This review summarizes the basic strategies for the study of neuropeptides using mass spectrometry, including sample preparation and processing, qualitative and quantitative methods, and mass spectrometry imaging.
Gorodilov, LV; Rasputina, TB
2018-03-01
A liquid–solid hydrodynamic model is used to determine the shapes and sizes of craters generated by impact rupture of rocks. Near the impact location the rock is modeled as an ideal incompressible liquid; farther away, as an absolute solid. The calculated data are compared with experimental results obtained under impact treatment of marble by a wedge-shaped tool.
The charged Higgs boson mass of the MSSM in the Feynman-diagrammatic approach
Energy Technology Data Exchange (ETDEWEB)
Frank, M. [Karlsruhe Univ. (Germany). Inst. fuer Theoretische Physik; Galeta, L.; Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Hahn, T.; Hollik, W. [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Muenchen (Germany); Rzehak, H. [CERN, Geneva (Switzerland); Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-06-15
The interpretation of the Higgs signal at ≈126 GeV within the Minimal Supersymmetric Standard Model (MSSM) depends crucially on the predicted properties of the other Higgs states of the model, such as the mass of the charged Higgs boson, M_{H±}. This mass is calculated in the Feynman-diagrammatic approach within the MSSM with real parameters. The result includes the complete one-loop contributions and the two-loop contributions of O(α_t α_s). The one-loop contributions lead to sizable shifts in the M_{H±} prediction, reaching up to ≈8 GeV for relatively small values of M_A. Even larger effects can occur depending on the sign and size of the μ parameter that enters the corrections affecting the relation between the bottom-quark mass and the bottom Yukawa coupling. The two-loop O(α_t α_s) terms can shift M_{H±} by more than 2 GeV. The two-loop contributions amount to typically about 30% of the one-loop corrections for the examples that we have studied. These effects can be relevant for precision analyses of the charged MSSM Higgs boson.
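The baseline that these loop corrections shift is the well-known tree-level MSSM relation M_{H±}² = M_A² + M_W². A minimal numerical sketch (leading order only; the paper's point is precisely that one- and two-loop terms move this by up to several GeV):

```python
import math

def m_charged_higgs_tree(m_A: float, m_W: float = 80.4) -> float:
    """Tree-level MSSM charged Higgs mass (GeV):
    M_{H±}^2 = M_A^2 + M_W^2. Loop corrections are not included."""
    return math.sqrt(m_A**2 + m_W**2)

print(round(m_charged_higgs_tree(200.0), 1))
```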
Neutrino Mass Matrix Textures: A Data-driven Approach
Bertuzzo, E; Machado, P A N
2013-01-01
We analyze the neutrino mass matrix entries and their correlations in a probabilistic fashion, constructing probability distribution functions using the latest results from neutrino oscillation fits. Two cases are considered: the standard three neutrino scenario as well as the inclusion of a new sterile neutrino that potentially explains the reactor and gallium anomalies. We discuss the current limits and future perspectives on the mass matrix elements that can be useful for model building.
Mackie, Jane E; Bruce, Catherine D
2016-05-01
Accurate calculation of medication dosages can be challenging for nursing students. Specific interventions related to types of errors made by nursing students may improve the learning of this important skill. The objective of this study was to determine areas of challenge for students in performing medication dosage calculations in order to design interventions to improve this skill. Strengths and weaknesses in the teaching and learning of medication dosage calculations were assessed. These data were used to create online interventions which were then measured for the impact on student ability to perform medication dosage calculations. The setting of the study is one university in Canada. The qualitative research participants were 8 nursing students from years 1-3 and 8 faculty members. Quantitative results are based on test data from the same second year clinical course during the academic years 2012 and 2013. Students and faculty participated in one-to-one interviews; responses were recorded and coded for themes. Tests were implemented and scored, then data were assessed to classify the types and number of errors. Students identified conceptual understanding deficits, anxiety, low self-efficacy, and numeracy skills as primary challenges in medication dosage calculations. Faculty identified long division as a particular content challenge, and a lack of online resources for students to practice calculations. Lessons and online resources designed as an intervention to target mathematical and concepts and skills led to improved results and increases in overall pass rates for second year students for medication dosage calculation tests. This study suggests that with concerted effort and a multi-modal approach to supporting nursing students, their abilities to calculate dosages can be improved. The positive results in this study also point to the promise of cross-discipline collaborations between nursing and education.
International Nuclear Information System (INIS)
Andersson, Eva; Sobek, Sebastian
2006-01-01
Carbon budgets are frequently used in order to understand the pathways of organic matter in ecosystems, and they also have an important function in the risk assessment of harmful substances. We compared two approaches, mass balance calculations and an ecosystem budget, to describe carbon processing in a shallow, oligotrophic hardwater lake. Both approaches come to the same main conclusion, namely that the lake is a net autotrophic ecosystem, in spite of its high dissolved organic carbon and low total phosphorus concentrations. However, there were several differences between the carbon budgets, e.g. in the rate of sedimentation and the air-water flux of CO₂. The largest uncertainty in the mass balance is the contribution of emergent macrophytes to the carbon cycling of the lake, while the ecosystem budget is very sensitive to the choice of conversion factors and literature values. While the mass balance calculations produced more robust results, the ecosystem budget gave valuable insights into the pathways of organic matter transfer in the ecosystem. We recommend that when using an ecosystem budget for the risk assessment of harmful substances, mass balance calculations should be performed in parallel in order to increase the robustness of the conclusions.
International Nuclear Information System (INIS)
Hermanns, H.J.
1977-04-01
Using the example of light-water-cooled nuclear reactors, the state of the available calculation methods for mass flow and steam quality distribution (sub-channel analysis) is described. Particular attention was paid to the transport phenomena occurring in reactor fuel elements in the two-phase flow regime. Experimentally determined values were compared with recalculations of these experiments using the sub-channel code COBRA; from the results of these comparative calculations, conclusions could be drawn about the suitability of this code for specific applications. Limits of reliability could be determined to some extent. Based on the experience gained and on the study of individual physical model concepts recognized as important, a sub-channel model was drawn up and the corresponding numerical computer code (SIEWAS) developed. Experiments made at GE could be reproduced with the code SIEWAS with sufficient accuracy. (orig.) [de]
Learning Approach on the Ground State Energy Calculation of Helium Atom
International Nuclear Information System (INIS)
Shah, Syed Naseem Hussain
2010-01-01
This research investigated the role of a learning approach based on the ground-state energy calculation of the helium atom in improving the concepts of science teachers at the university level. Since an exact solution is not possible for systems of several particles, approximation methods must be used. With such a method one can easily follow the calculation of the ground-state energy for any given trial function. The variational method is one of the most useful approximation methods for estimating the energy eigenvalues of the ground state and the first few excited states of a system for which we have only a qualitative idea of the wave function. The objective of this approach is to introduce university teachers to new research, to improve their classroom practices, and to enable them to foster critical thinking in students.
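As an illustration of the variational method described above, the classic one-parameter trial function for helium (a product of hydrogen-like orbitals with effective nuclear charge Z) gives, in Hartree atomic units, the textbook energy expectation E(Z) = Z² − 27Z/8. The sketch below minimizes this expression numerically; the formula and numbers are the standard textbook values, not taken from the abstract:

```python
def energy(Z):
    # <H>(Z) = Z**2 - 27*Z/8 (hartree) for the screened hydrogenic trial function
    return Z**2 - 27.0 * Z / 8.0

# brute-force scan of the variational parameter Z (effective nuclear charge)
Z_opt = min((0.5 + i * 1e-4 for i in range(25001)), key=energy)
E_min = energy(Z_opt)

print(Z_opt)  # ~1.6875 (= 27/16, the screened effective charge)
print(E_min)  # ~-2.8477 hartree; the experimental value is about -2.9037
```

The minimum falls at Z = 27/16, below the bare nuclear charge of 2, reflecting the screening of one electron by the other; the variational energy lies above the true ground-state energy, as the variational principle guarantees.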
The (water + acetonitrile) mixture revisited: A new approach for calculating partial molar volumes
International Nuclear Information System (INIS)
Carmen Grande, Maria del; Julia, Jorge Alvarez; Barrero, Carmen R.; Marschoff, Carlos M.; Bianchi, Hugo L.
2006-01-01
Density and viscosity of (water + acetonitrile) mixtures were measured over the whole composition range at the temperatures (298.15, 303.15, 308.15, 313.15, and 318.15) K. A new mathematical approach was developed that allows calculation of the derivatives of density with respect to composition while avoiding the appearance of local discontinuities. Thus, reliable partial molar volumes and thermal expansion coefficients were obtained.
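The route from density data to partial molar volumes can be sketched with the standard thermodynamic relations V̄1 = Vm + (1 − x1)·dVm/dx1 and V̄2 = Vm − x1·dVm/dx1. The example below uses a hypothetical ideal-volume mixture (the pure-component values are illustrative assumptions, not data from the paper), for which the partial molar volumes must reduce to the pure molar volumes:

```python
# Molar masses (g/mol) and pure-component molar volumes (cm^3/mol);
# the numerical values are illustrative assumptions, not data from the paper.
M1, M2 = 18.015, 41.053          # water, acetonitrile
V1, V2 = 18.07, 52.86            # assumed pure molar volumes

def density(x1):
    """Density (g/cm^3) of a hypothetical ideal-volume mixture."""
    return (x1 * M1 + (1 - x1) * M2) / (x1 * V1 + (1 - x1) * V2)

def molar_volume(x1):
    return (x1 * M1 + (1 - x1) * M2) / density(x1)

def partial_molar_volumes(x1, h=1e-5):
    """V̄1 = Vm + (1-x1)*dVm/dx1 and V̄2 = Vm - x1*dVm/dx1 (central difference)."""
    dVm = (molar_volume(x1 + h) - molar_volume(x1 - h)) / (2 * h)
    Vm = molar_volume(x1)
    return Vm + (1 - x1) * dVm, Vm - x1 * dVm

Vb1, Vb2 = partial_molar_volumes(0.5)
print(Vb1, Vb2)  # recovers the pure molar volumes for ideal mixing
```

With real density data the derivative dVm/dx1 would come from a smooth fit rather than a finite difference, which is exactly where the paper's approach to avoiding local discontinuities matters.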
International Nuclear Information System (INIS)
Tan, Zhiqiang; Wilson, D.; Varghese, P.L.
1997-01-01
We consider an extension of the ordinary Riemann problem and present an efficient approximate solution that can be used to improve the calculations of aerodynamic forces on an accelerating body. The method is demonstrated with one-dimensional examples where the Euler equations and the body motion are solved in the non-inertial co-ordinate frame fixed to the accelerating body. 8 refs., 6 figs
Seiler, Christian; Evers, Ferdinand
2016-10-01
A formalism for electronic-structure calculations is presented that is based on the functional renormalization group (FRG). The traditional FRG has been formulated for systems that exhibit a translational symmetry with an associated Fermi surface, which can provide the organizing principle for the renormalization group (RG) procedure. We here advance an alternative formulation, where the RG flow is organized in the energy domain rather than in k-space. This has the advantage that it can also be applied to inhomogeneous matter lacking a band structure, such as disordered metals or molecules. The energy-domain FRG (εFRG) presented here accounts for Fermi-liquid corrections to quasiparticle energies and particle-hole excitations. It goes beyond the state-of-the-art GW-BSE, because in εFRG the Bethe-Salpeter equation (BSE) is solved in a self-consistent manner. An efficient implementation of the approach, tested against exact diagonalization calculations and calculations based on the density matrix renormalization group, is presented. Similar to the conventional FRG, the εFRG is able to signal the vicinity of an instability of the Fermi-liquid fixed point via runaway flow of the corresponding interaction vertex. Building on this fact, in an application of εFRG to the spinless disordered Hubbard model we calculate its phase boundary in the plane spanned by the interaction and disorder strength. Finally, an extension of the approach to finite temperatures and spin S = 1/2 is also given.
Energy Technology Data Exchange (ETDEWEB)
Laureau, A., E-mail: laureau.axel@gmail.com; Heuer, D.; Merle-Lucotte, E.; Rubiolo, P.R.; Allibert, M.; Aufiero, M.
2017-05-15
Highlights: • Neutronic 'Transient Fission Matrix' approach coupled to the CFD OpenFOAM code. • Fission matrix interpolation model for fast-spectrum homogeneous reactors. • Application to coupled calculations of the Molten Salt Fast Reactor. • Load-following, over-cooling and reactivity-insertion transient studies. • Validation of the reactor's intrinsic stability for normal and accidental transients. - Abstract: In this paper we present transient studies of the Molten Salt Fast Reactor (MSFR). This Generation IV reactor is characterized by a liquid fuel circulating in the core cavity, requiring specific simulation tools. An innovative neutronic approach called "Transient Fission Matrix" is used to perform spatial kinetic calculations at a reduced computational cost through a pre-calculation of the Monte Carlo spatial and temporal response of the system. Coupled to this neutronic approach, the Computational Fluid Dynamics code OpenFOAM is used to model the complex flow pattern in the core. An accurate interpolation model, developed to take into account the thermal-hydraulics feedback on the neutronics, including reactivity and neutron flux variation, is presented. Finally, different transient studies of the reactor in normal and accidental operating conditions are detailed, such as reactivity insertion and load-following capacities. The results of these studies illustrate the excellent behavior of the MSFR during such transients.
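The fission-matrix idea underlying the approach above can be illustrated with a static toy model: if F[i][j] is the mean number of fission neutrons produced in region i per fission neutron born in region j (pre-tabulated, e.g., by Monte Carlo), then k_eff is the dominant eigenvalue of F and the steady fission source is its eigenvector. A minimal power-iteration sketch with a made-up 3-region matrix (the numbers are illustrative, not MSFR data, and the transient machinery of the paper is not represented):

```python
def power_iteration(F, iters=200):
    """Dominant eigenvalue (k_eff) and eigenvector (fission source) of F."""
    n = len(F)
    s = [1.0 / n] * n                       # initial flat fission source
    k = 1.0
    for _ in range(iters):
        s_new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(s_new)                      # source is normalised to 1 each step
        s = [x / k for x in s_new]
    return k, s

# Illustrative 3-region fission matrix (neutrons born in region j -> fissions in i).
F = [[0.60, 0.20, 0.05],
     [0.20, 0.55, 0.20],
     [0.05, 0.20, 0.60]]

k_eff, source = power_iteration(F)
print(k_eff, source)
```

The pre-computed matrix replaces repeated Monte Carlo transport during the coupled calculation, which is the source of the reduced computational cost mentioned in the abstract.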
On a mass independent approach leading to planetary orbit discretization
International Nuclear Information System (INIS)
Oliveira Neto, Marcal de
2007-01-01
The present article discusses a possible fractal approach for understanding orbit configurations around a central force field in well-known systems of our infinitely small and infinitely large universes, based on quantum atomic models. This approach is supported by recent important theoretical investigations reported in the literature. As an application, a study of the triple star system HD 188753 Cygni is presented, in an approach similar to that employed in molecular quantum mechanics investigations.
Fragment approach to constrained density functional theory calculations using Daubechies wavelets
International Nuclear Information System (INIS)
Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry
2015-01-01
In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions is optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as for an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset onto which the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions of the template fragments, represented on the underlying wavelet grid, are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the value of this approach for highly precise and efficient calculations aimed at preparing diabatic states and at the computational setup of systems in complex environments.
Mudie, Kurt L; Gupta, Amitabh; Green, Simon; Hobara, Hiroaki; Clothier, Peter J
2017-02-01
This study assessed the agreement between K_vert calculated from 4 different methods of estimating vertical displacement of the center of mass (COM) during single-leg hopping. Healthy participants (N = 38) completed a 10-s single-leg hopping effort on a force plate, with 3D motion of the lower limb, pelvis, and trunk captured. Derived variables were calculated for a total of 753 hop cycles using 4 methods: double integration of the vertical ground reaction force, the law of falling bodies, a marker cluster on the sacrum, and a segmental analysis method. Bland-Altman plots demonstrated that K_vert calculated using the segmental analysis and double integration methods has a relatively small bias (0.93 kN·m⁻¹) and 95% limits of agreement (-1.89 to 3.75 kN·m⁻¹). In contrast, a greater bias was revealed between the sacral marker cluster and segmental analysis (-2.32 kN·m⁻¹), the sacral marker cluster and double integration (-3.25 kN·m⁻¹), and the law of falling bodies compared with all other methods (17.26-20.52 kN·m⁻¹). These findings suggest the segmental analysis and double integration methods can be used interchangeably for the calculation of K_vert during single-leg hopping. The authors propose that the segmental analysis method be considered the gold standard for the calculation of K_vert during single-leg, on-the-spot hopping.
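Bland-Altman agreement statistics of the kind reported above are simple to compute: the bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 × SD of the differences. A sketch with made-up paired stiffness values (the data are invented for illustration, not taken from the study):

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)           # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired vertical-stiffness estimates from two methods (kN·m^-1)
segmental = [24.1, 26.5, 23.8, 25.2, 27.0, 24.9]
double_int = [23.5, 25.9, 23.1, 24.0, 26.2, 24.5]

bias, (lo, hi) = bland_altman(segmental, double_int)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A small bias with narrow limits of agreement, as between the segmental and double-integration methods in the study, is what justifies treating two methods as interchangeable.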
Mass spectrometry imaging enriches biomarker discovery approaches with candidate mapping.
Scott, Alison J; Jones, Jace W; Orschell, Christie M; MacVittie, Thomas J; Kane, Maureen A; Ernst, Robert K
2014-01-01
Integral to the characterization of radiation-induced tissue damage is the identification of unique biomarkers. Biomarker discovery is a challenging and complex endeavor requiring both sophisticated experimental design and accessible technology. The resources within the National Institute of Allergy and Infectious Diseases (NIAID)-sponsored consortium, Medical Countermeasures Against Radiological Threats (MCART), allow for leveraging robust animal models with novel molecular imaging techniques. One such imaging technique, MALDI (matrix-assisted laser desorption ionization) mass spectrometry imaging (MSI), allows for the direct spatial visualization of lipids, proteins, small molecules, and drugs or drug metabolites (that is, candidate biomarkers) in an unbiased manner. MALDI-MSI acquires mass spectra directly from an intact tissue slice at discrete locations across an x, y grid that are then rendered into a spatial distribution map composed of ion mass and intensity. The unique mass signals can be plotted to generate a spatial map of biomarkers that reflects pathology and molecular events. The crucial unanswered questions that can be addressed with MALDI-MSI include identification of biomarkers for radiation damage that reflect the response to radiation dose over time and the efficacy of therapeutic interventions. Techniques in MALDI-MSI also enable integration of biomarker identification among diverse animal models. Analysis of early, sublethally irradiated tissue injury samples from diverse mouse tissues (lung and ileum) shows membrane phospholipid signatures correlated with histological features of these unique tissues. This paper will discuss the application of MALDI-MSI for use in a larger biomarker discovery pipeline.
Constraining East Antarctic mass trends using a Bayesian inference approach
Martin-Español, Alba; Bamber, Jonathan L.
2016-04-01
East Antarctica is an order of magnitude larger than its western neighbour and the Greenland ice sheet. It has the greatest potential to contribute to sea level rise of any source, including non-glacial contributors. It is, however, the most challenging ice mass to constrain because of a range of factors, including the relative paucity of in-situ observations and the poor signal-to-noise ratio of Earth Observation data such as satellite altimetry and gravimetry. A recent study using satellite radar and laser altimetry (Zwally et al. 2015) concluded that the East Antarctic Ice Sheet (EAIS) had been accumulating mass at a rate of 136±28 Gt/yr for the period 2003-08. Here, we use a Bayesian hierarchical model, which has been tested on, and applied to, the whole of Antarctica, to investigate the impact of different assumptions regarding the origin of elevation changes of the EAIS. We combined GRACE, satellite laser and radar altimeter data and GPS measurements to solve simultaneously for surface processes (primarily surface mass balance, SMB), ice dynamics and glacio-isostatic adjustment over the period 2003-13. The hierarchical model partitions mass trends between SMB and ice dynamics based on physical principles and measures of statistical likelihood. Without imposing the division between these processes, the model apportions about a third of the mass trend to ice dynamics, +18 Gt/yr, and two thirds, +39 Gt/yr, to SMB. The total mass trend for that period for the EAIS was 57±20 Gt/yr. Over the period 2003-08, we obtain an ice dynamic trend of 12 Gt/yr and an SMB trend of 15 Gt/yr, with a total mass trend of 27 Gt/yr. We then imposed the condition that the surface mass balance is tightly constrained by the regional climate model RACMO2.3 and allowed height changes due to ice dynamics to occur only in areas of low surface velocities, seeking a solution that satisfies all the input data, given these constraints. By imposing these conditions, over the period 2003-13 we obtained a mass
First-principles X-ray absorption dose calculation for time-dependent mass and optical density.
Berejnov, Viatcheslav; Rubinstein, Boris; Melo, Lis G A; Hitchcock, Adam P
2018-05-01
A dose integral of time-dependent X-ray absorption under conditions of variable photon energy and changing sample mass is derived from first principles, starting with the Beer-Lambert (BL) absorption model. For a given photon energy the BL dose integral D(e, t) reduces to the product of an effective time integral T(t) and a dose rate R(e). Two approximations of the time-dependent optical density, i.e. exponential A(t) = c + a·exp(-bt) for first-order kinetics and hyperbolic A(t) = c + a/(b + t) for second-order kinetics, were considered for BL dose evaluation. For both models three methods of evaluating the effective time integral are considered: analytical integration, approximation by a function, and calculation of the asymptotic behaviour at large times. Data for poly(methyl methacrylate) and perfluorosulfonic acid polymers measured by scanning transmission soft X-ray microscopy were used to test the BL dose calculation. It was found that a previous method of calculating time-dependent dose underestimates the dose in mass-loss situations, depending on the applied exposure time. All the methods here show that the BL dose is proportional to the exposure time, D(e, t) ≃ K(e)t.
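The contrast between analytical integration and numerical evaluation mentioned above can be illustrated on the first-order-kinetics optical density A(t) = c + a·exp(-bt): its running integral has the closed form ∫₀ᵗ A(τ)dτ = ct + (a/b)(1 - e^(-bt)), which a simple quadrature should reproduce. The constants below are arbitrary, and the actual dose integrand in the paper may weight A(t) differently:

```python
import math

def A(t, a=0.8, b=0.05, c=0.2):
    """First-order-kinetics optical density model A(t) = c + a*exp(-b*t)."""
    return c + a * math.exp(-b * t)

def T_analytic(t, a=0.8, b=0.05, c=0.2):
    """Closed-form integral of A from 0 to t."""
    return c * t + (a / b) * (1.0 - math.exp(-b * t))

def T_numeric(t, n=10000):
    """Composite trapezoidal rule for the same integral."""
    h = t / n
    s = 0.5 * (A(0.0) + A(t)) + sum(A(i * h) for i in range(1, n))
    return s * h

t = 120.0  # arbitrary exposure time
print(T_analytic(t), T_numeric(t))
```

At large t the integral grows as ct plus a constant a/b, the kind of asymptotic behaviour the third evaluation method in the abstract exploits.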
Energy Technology Data Exchange (ETDEWEB)
Gamble, John King [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nielsen, Erik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Baczewski, Andrew David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moussa, Jonathan Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gao, Xujiao [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Salinger, Andrew G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Muller, Richard P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-07-01
This paper describes our work over the past few years to use tools from quantum chemistry to describe the electronic structure of nanoelectronic devices. These devices, dubbed "artificial atoms", comprise a few electrons, confined by semiconductor heterostructures, impurities, and patterned electrodes, and are of intense interest due to potential applications in quantum information processing, quantum sensing, and extreme-scale classical logic. We detail two approaches we have employed, finite-element and Gaussian basis sets, exploring the interesting complications that arise when techniques that were intended to apply to atomic systems are instead used for artificial, solid-state devices.
Calculation of the top quark mass in the flipped SU(5)xU(1) superstring model
Energy Technology Data Exchange (ETDEWEB)
Leontaris, G.K.; Rizos, J.; Tamvakis, K. (Ioannina Univ. (Greece). Dept. of Physics)
1990-11-08
We present a complete renormalization group calculation of the top-quark mass in the SU(5)×U(1) superstring model. We solve the coupled renormalization group equations for the gauge and Yukawa couplings in the two-loop approximation and obtain the top-quark mass as a function of two parameters of the model, which can be chosen to be ratios of singlet VEVs associated with the surplus (U(1))⁴ breaking. We obtain a heavy top quark with 150 GeV ≤ m_t < 200 GeV for most of the parameter space, while lower values are possible only in a very small extremal region. We also compute the allowed range of unification parameters (M_x, sin²θ_w, α_3(M_W)) in the presence of a heavy top quark. (orig.)
A practical approach for the calculation of the activation energy of the sintering
Directory of Open Access Journals (Sweden)
Pouchly Vaclav
2016-01-01
Newly developed software for the calculation of the activation energy of sintering (Qs) using the Wang and Raj model is presented. To demonstrate the practical potential of the software and to evaluate the behaviour of Qs during the sintering process, alumina and cubic zirconia ceramic compacts were prepared from nanometric powders. The results obtained with both materials are in agreement with previously published data calculated by different approaches. In the interval of interest (relative densities from 60% to almost 100% of theoretical density), both materials show similar behaviour. Three distinct regions can be seen: initial constant Qs values of 868 kJ/mol and 762 kJ/mol for alumina and cubic zirconia, respectively; a region containing a linear drop of Qs; and a final region of constant Qs values of 625 kJ/mol and 645 kJ/mol for alumina and cubic zirconia, respectively.
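In the Wang-Raj construction, the densification rate measured at one fixed density for several constant heating rates obeys ln(T·dρ/dt) = const - Qs/(RT), so Qs follows from the slope of an Arrhenius plot. A sketch with synthetic data generated from an assumed activation energy (the value and prefactor are illustrative, not from the paper):

```python
import math

R = 8.314       # gas constant, J/(mol*K)
Q_true = 700e3  # assumed activation energy, J/mol
A = 1e12        # assumed prefactor

# Synthetic (T, densification rate) pairs at one fixed relative density,
# one point per heating rate, obeying T*drho/dt = A*exp(-Q/(R*T)).
temps = [1450.0, 1500.0, 1550.0, 1600.0]  # K
rates = [A * math.exp(-Q_true / (R * T)) / T for T in temps]

# Least-squares slope of ln(T*drho/dt) versus 1/T equals -Q/R.
xs = [1.0 / T for T in temps]
ys = [math.log(T * r) for T, r in zip(temps, rates)]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
Q_est = -slope * R
print(Q_est)  # recovers the assumed ~700 kJ/mol
```

Repeating this fit at each density level is what produces the Qs-versus-density curves with the three regions described in the abstract.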
International Nuclear Information System (INIS)
Voloshchenko, A.M.; Russkov, A.A.; Gurevich, M.I.; Olejnik, D.S.
2008-01-01
We analyze the possibility of using geometry approximations that conserve the local mass balance of materials in every mesh cell, by introducing mixtures in cells containing several feed materials, to perform kinetic calculations of reactor core neutron fields. The 3D geometry of the reactor core is specified using the combinatorial geometry methods implemented in the MCI program for solving the diffusion equations by the Monte Carlo method; the ray-tracing method is used to convert the combinatorial description of the geometry into the mesh representation. Calculations of the WWER-1000 reactor core and simulations of a spent fuel storage facility show that the described procedure compares favorably with the conventional geometry approximations. (orig.) [ru]
A finite element approach to self-consistent field theory calculations of multiblock polymers
Energy Technology Data Exchange (ETDEWEB)
Ackerman, David M. [Department of Mechanical Engineering, Iowa State University, Ames, IA 50011 (United States); Delaney, Kris; Fredrickson, Glenn H. [Materials Research Laboratory, University of California, Santa Barbara (United States); Ganapathysubramanian, Baskar, E-mail: baskarg@iastate.edu [Department of Mechanical Engineering, Iowa State University, Ames, IA 50011 (United States)
2017-02-15
Self-consistent field theory (SCFT) has proven to be a powerful tool for modeling equilibrium microstructures of soft materials, particularly for multiblock polymers. A very successful approach to numerically solving the SCFT set of equations is based on using a spectral approach. While widely successful, this approach has limitations especially in the context of current technologically relevant applications. These limitations include non-trivial approaches for modeling complex geometries, difficulties in extending to non-periodic domains, as well as non-trivial extensions for spatial adaptivity. As a viable alternative to spectral schemes, we develop a finite element formulation of the SCFT paradigm for calculating equilibrium polymer morphologies. We discuss the formulation and address implementation challenges that ensure accuracy and efficiency. We explore higher order chain contour steppers that are efficiently implemented with Richardson Extrapolation. This approach is highly scalable and suitable for systems with arbitrary shapes. We show spatial and temporal convergence and illustrate scaling on up to 2048 cores. Finally, we illustrate confinement effects for selected complex geometries. This has implications for materials design for nanoscale applications where dimensions are such that equilibrium morphologies dramatically differ from the bulk phases.
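Richardson extrapolation, used above for the chain-contour steppers, combines two step sizes of a low-order scheme to cancel the leading error term: for a first-order method, 2·y(h/2) - y(h) is second-order accurate. A generic sketch on a scalar ODE (not the SCFT propagator itself):

```python
import math

def euler(f, y0, t1, n):
    """Explicit Euler with n steps on [0, t1] (first-order accurate)."""
    h, y, t = t1 / n, y0, 0.0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y          # dy/dt = -y, exact solution exp(-t)
exact = math.exp(-1.0)

coarse = euler(f, 1.0, 1.0, 50)
fine = euler(f, 1.0, 1.0, 100)
extrap = 2.0 * fine - coarse  # Richardson: cancels the O(h) error term

print(abs(coarse - exact), abs(fine - exact), abs(extrap - exact))
```

Halving the step roughly halves the Euler error, while the extrapolated value is orders of magnitude closer; the same trick raises the effective order of a contour stepper at the cost of one extra solve.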
International Nuclear Information System (INIS)
Illing, Bjoern
2014-01-01
Driven by energy policy, the decentralized German energy market is changing. One major target of the government is to increase the contribution of renewable generation to gross electricity consumption. Achieving this target brings disadvantages such as an increased need for capacity management. Load reduction and variable grid fees offer the grid operator ways to realize capacity management by influencing the load profile. An evolution of the current grid fees towards more cost causality is required to support these approaches. Two calculation approaches are developed in this work. On the one hand, multivariable grid fees keep the current components, the demand charge and the energy charge; in addition to the grid costs, grid-load-dependent parameters such as the amount of decentralized feed-in, time and local circumstances, as well as grid capacities are considered. On the other hand, the grid-fee flat rate represents a demand-based model on a monthly level. Both approaches are designed to meet the criteria for future grid fees. By means of a case study, the effects of the grid fees on the load profile in the low-voltage grid are simulated. The consumption is represented by different behaviour models and the results are scaled to the benchmark grid area. The resulting load curve is analyzed with respect to peak load reduction as well as the integration of renewable energy sources. Additionally, the combined effect of grid fees and electricity tariffs is evaluated. Finally, the work discusses the introduction of such grid fees in the tense environment of politics, legislation and grid operation. The results of this work are two calculation approaches designed for grid operators to define grid fees. Multivariable grid fees are based on the current calculation scheme, with demand and energy charges weighted by time-, location- and load-related dependencies. The grid-fee flat rate defines a limit on demand extraction. Different demand levels
The mass balance calculation of hydrothermal alteration in Sarcheshmeh porphyry copper deposit
Directory of Open Access Journals (Sweden)
Mohammad Maanijou
2013-10-01
The Sarcheshmeh porphyry copper deposit is located 65 km southwest of Rafsanjan in Kerman province. The Sarcheshmeh deposit belongs to the southeastern part of the Urumieh-Dokhtar magmatic assemblage (i.e., the Dehaj-Sarduyeh zone). Intrusion of the Sarcheshmeh granodiorite stock into faulted and thrusted early-Tertiary volcano-sedimentary deposits led to mineralization in the Miocene. In this research, the mass changes and element mobilities during hydrothermal alteration were studied for the potassic zone relative to fresh rock from the deeper parts of the plutonic body, phyllic relative to potassic, argillic relative to phyllic, and the propylitic alteration relative to the fresh andesites surrounding the deposit. In the potassic zone, enrichment in Fe2O3 and K2O is clear, reflecting Fe released by biotite alteration and the presence of K-feldspar, respectively. Copper and molybdenum enrichments resulted from chalcopyrite, bornite and molybdenite mineralization in this zone. Enrichment of SiO2 and depletion of CaO, MgO, Na2O and K2O in the phyllic zone resulted from leaching of sodium, calcium and magnesium from the aluminosilicate rocks and alteration of K-feldspar to sericite and quartz. In the argillic zone, Al2O3, CaO, MgO, Na2O and MnO have also been enriched, where the increase in Al2O3 may reflect kaolinite and illite formation. Also, enrichment in SiO2, Al2O3 and CaO in the propylitic alteration zone can be attributed to the formation of chlorite, epidote and calcite as the indicative minerals of this zone.
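Mass-change calculations of this kind are commonly done with a Grant-style isocon (Gresens-type) balance: assuming one component is immobile, the altered/fresh mass ratio follows from its concentrations, and the gain or loss of every other component follows in turn. The analyses below are hypothetical, invented for illustration, not the Sarcheshmeh data:

```python
# Hypothetical whole-rock analyses (wt%); not the Sarcheshmeh data.
fresh   = {"SiO2": 62.0, "Al2O3": 15.5, "Fe2O3": 4.8, "K2O": 2.6, "TiO2": 0.55}
altered = {"SiO2": 64.5, "Al2O3": 15.0, "Fe2O3": 6.1, "K2O": 4.0, "TiO2": 0.60}

def isocon_mass_change(fresh, altered, immobile="TiO2"):
    """Grant-style isocon mass balance.

    Assuming `immobile` is conserved, the altered/fresh mass ratio is
    M_A/M_O = C_imm(fresh) / C_imm(altered), and the gain or loss of each
    component per 100 g of fresh rock is
    dC_i = (M_A/M_O) * C_i(altered) - C_i(fresh).
    """
    ratio = fresh[immobile] / altered[immobile]
    return {ox: ratio * altered[ox] - fresh[ox] for ox in fresh}

changes = isocon_mass_change(fresh, altered)
for oxide, dc in changes.items():
    print(f"{oxide}: {dc:+.2f} g per 100 g fresh rock")
```

With these made-up numbers the immobile TiO2 shows zero change by construction, K2O is gained and SiO2 lost; the signs of such gains and losses are what distinguish the potassic, phyllic, argillic and propylitic zones in the study.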
International Nuclear Information System (INIS)
Chan Heangping; Wei Jun; Zhang Yiheng; Helvie, Mark A.; Moore, Richard H.; Sahiner, Berkman; Hadjiiski, Lubomir; Kopans, Daniel B.
2008-01-01
The authors are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis (DBT) mammograms. Three approaches were evaluated in this study. In the first approach, mass candidate identification and feature analysis are performed in the reconstructed three-dimensional (3D) DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant analysis (LDA) classifier. Mass detection is determined by a decision threshold applied to the mass likelihood score. A free-response receiver operating characteristic (FROC) curve that describes the detection sensitivity as a function of the number of false positives (FPs) per breast is generated by varying the decision threshold over a range. In the second approach, mass candidate prescreening and feature analysis are first performed on the individual two-dimensional (2D) projection view (PV) images. A mass likelihood score is estimated for each mass candidate using an LDA classifier trained on the 2D features. The mass likelihood images derived from the PVs are backprojected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The FROC curve for mass detection can again be generated by varying the decision threshold on the 3D mass likelihood scores merged by backprojection. In the third approach, the mass likelihood scores estimated by the 3D and 2D approaches described above are combined at the corresponding 3D location and evaluated using FROC analysis. A data set of 100 DBT cases acquired with a GE prototype system at the Breast Imaging Laboratory at Massachusetts General Hospital was used for comparison of the three approaches. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at average FP rates of 1.94 and 3.40 per breast, respectively. With the
Coupling-matrix approach to the Chern number calculation in disordered systems
International Nuclear Information System (INIS)
Zhang Yi-Fu; Ju Yan; Sheng Li; Shen Rui; Xing Ding-Yu; Yang Yun-You; Sheng Dong-Ning
2013-01-01
The Chern number is often used to distinguish different topological phases of matter in two-dimensional electron systems. A fast and efficient coupling-matrix method is designed to calculate the Chern number in finite crystalline and disordered systems. To show its effectiveness, we apply the approach to the Haldane model and the lattice Hofstadter model, and obtain the correct quantized Chern numbers. The disorder-induced topological phase transition is well reproduced, when the disorder strength is increased beyond the critical value. We expect the method to be widely applicable to the study of topological quantum numbers. (rapid communication)
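The coupling-matrix method itself is specialized, but the quantized Chern numbers it reproduces for the Haldane model can be cross-checked with the standard discretized Berry-curvature (Fukui-Hatsugai-Suzuki) lattice algorithm. The sketch below uses that textbook method, not the paper's coupling-matrix construction, with conventional Haldane-model parameters chosen inside the topological phase:

```python
import numpy as np

def haldane_chern(t1=1.0, t2=0.1, phi=np.pi / 2, M=0.0, N=24):
    """Chern number of the lower Haldane band via the
    Fukui-Hatsugai-Suzuki discretized Berry-curvature algorithm."""
    a1, a2 = np.array([np.sqrt(3), 0.0]), np.array([np.sqrt(3) / 2, 1.5])
    G = 2 * np.pi * np.linalg.inv(np.column_stack((a1, a2)))  # rows: G1, G2
    b = [a1, a2 - a1, -a2]                                    # NNN hop directions

    def ham(k):
        f = t1 * (1 + np.exp(-1j * k @ a1) + np.exp(-1j * k @ a2))  # NN hopping
        gA = 2 * t2 * sum(np.cos(k @ bi + phi) for bi in b)         # NNN, sublattice A
        gB = 2 * t2 * sum(np.cos(k @ bi - phi) for bi in b)         # NNN, sublattice B
        return np.array([[M + gA, f], [np.conj(f), -M + gB]])

    # lower-band eigenvectors on an N x N grid covering the Brillouin zone torus
    u = np.empty((N, N, 2), dtype=complex)
    for i in range(N):
        for j in range(N):
            k = (i / N) * G[0] + (j / N) * G[1]
            _, vec = np.linalg.eigh(ham(k))
            u[i, j] = vec[:, 0]

    # sum plaquette phases of the normalized link variables
    c = 0.0
    for i in range(N):
        for j in range(N):
            ux  = np.vdot(u[i, j], u[(i + 1) % N, j])
            uy  = np.vdot(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N])
            ux2 = np.vdot(u[i, (j + 1) % N], u[(i + 1) % N, (j + 1) % N])
            uy2 = np.vdot(u[i, j], u[i, (j + 1) % N])
            c += np.angle(ux * uy / (ux2 * uy2))
    return c / (2 * np.pi)

C = haldane_chern()
print(round(C))  # quantized to +1 or -1 in the topological phase
```

The algorithm is gauge invariant by construction and returns an exactly quantized integer even on a coarse k-grid, which is also why it is a common reference against which real-space methods such as the coupling-matrix approach are validated.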
International Nuclear Information System (INIS)
Belov, S M; Avdonina, N B; Felfli, Z; Marletta, M; Msezane, A Z; Naboko, S N
2004-01-01
A simple semiclassical approach, based on the investigation of anti-Stokes line topology, is presented for calculating Regge poles for nonsingular (Thomas-Fermi type) potentials, namely potentials with singularities at the origin weaker than order -2. The anti-Stokes lines for Thomas-Fermi potentials have a more complicated structure than those of singular potentials and require careful application of complex analysis. The explicit solution of the Bohr-Sommerfeld quantization condition is used to obtain approximate Regge poles. We introduce and employ three hypotheses to obtain several terms of the Regge pole approximation
Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach
Reznichenko, A. V.; Terekhov, I. S.
2018-04-01
We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimations at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, thereby increasing the earlier calculated capacity for a nondispersive nonlinear optical fiber channel in the intermediate power region. Also, for the small-dispersion case we find analytical expressions for simple correlators of the output signals in our noisy channel.
Energy Technology Data Exchange (ETDEWEB)
Morgans, D. L. [CH2M Hill Plateau Remediation Company, Richland, WA (United States); Lindberg, S. L. [Intera Inc., Austin, TX (United States)
2017-09-20
The purpose of this technical approach document (TAD) is to document the assumptions, equations, and methods used to perform the groundwater pathway radiological dose calculations for the revised Hanford Site Composite Analysis (CA). DOE M 435.1-1 states: “The composite analysis results shall be used for planning, radiation protection activities, and future use commitments to minimize the likelihood that current low-level waste disposal activities will result in the need for future corrective or remedial actions to adequately protect the public and the environment.”
Calculation of the tunneling time using the extended probability of the quantum histories approach
International Nuclear Information System (INIS)
Rewrujirek, Jiravatt; Hutem, Artit; Boonchui, Sutee
2014-01-01
The dwell time of quantum tunneling was derived by Steinberg (1995) [7] as a function of the transmission and reflection times τ_t and τ_r, weighted by the transmissivity and the reflectivity. In this paper, we reexamine the dwell time using the extended probability approach. The dwell time is calculated as the weighted average of three mutually exclusive events. We also consider the scattering process due to a resonance potential in the long-time limit. The results show that the dwell time can be expressed as the weighted sum of transmission, reflection and internal probabilities.
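As a toy illustration of the weighted-average structure underlying Steinberg's relation (not the authors' extended-probability calculation; the barrier numbers below are invented), the two-channel dwell time can be sketched as:

```python
def dwell_time(T2, tau_t, tau_r):
    """Steinberg-type dwell time as a weighted average of the transmission
    and reflection times, weighted by the transmissivity |T|^2 and the
    reflectivity |R|^2 = 1 - |T|^2."""
    R2 = 1.0 - T2
    return T2 * tau_t + R2 * tau_r

# Hypothetical barrier: 25% transmission, tau_t = 2.0, tau_r = 1.0 (arbitrary units)
print(dwell_time(0.25, 2.0, 1.0))  # 1.25
```

The extended-probability version described in the abstract generalizes this to a weighted sum over three mutually exclusive events (transmission, reflection, internal dwelling) rather than two.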
A new approach for calculation of volume confined by ECR surface and its area in ECR ion source
International Nuclear Information System (INIS)
Filippov, A.V.
2007-01-01
The volume confined by the resonance surface and its area are important parameters of the balance-equations model used to calculate the ion charge-state distribution (CSD) in an electron-cyclotron resonance (ECR) ion source. A new approach for calculating these parameters is given; it allows one to reduce the number of parameters in the balance-equations model.
Effective Lagrangian approach to the fermion mass problem
International Nuclear Information System (INIS)
Shaw, D.S.; Volkas, R.R.
1994-01-01
An effective theory is proposed, combining the standard gauge group SU(3)_C ⊗ SU(2)_L ⊗ U(1)_Y with a horizontal discrete symmetry. By assigning appropriate charges under this discrete symmetry to the various fermion fields and to (at least) two Higgs doublets, the broad spread of the fermion mass and mixing-angle spectrum can be explained as a result of suppressed, non-renormalizable terms. A particular model is constructed which achieves the above while simultaneously suppressing neutral-Higgs-induced flavour-changing processes.
Wu, Yu; Zhang, Hongpeng
2017-12-01
A new microfluidic chip is presented to enhance the sensitivity of a micro inductive sensor, and an approach to calculating the coil inductance change is introduced for metal particle detection in lubrication oil. Electromagnetic theory is used to establish a mathematical model of an inductive sensor for metal particle detection, and the analytic expression for the coil inductance change is obtained via the magnetic vector potential. Experimental verification was carried out. The results show that copper particles 50-52 µm in diameter were detected; the relative errors between the theoretical and experimental values are 7.68% and 10.02% at particle diameters of 108-110 µm and 50-52 µm, respectively. The approach presented here can provide a theoretical basis for inductive sensors in metal particle detection in oil and other areas of application.
Directory of Open Access Journals (Sweden)
Uğur YALÇIN
2004-02-01
In this study, quasi-optical scattering of finite-source electromagnetic waves from a dielectric-coated cylindrical surface is analysed with the Physical Optics (PO) approach. A linear electrical current source is chosen as the finite source. The reflection coefficient of the cylindrical surface is derived using the Geometrical Theory of Diffraction (GTD). Then, with the help of this coefficient, the fields scattered from the surface are obtained. These field expressions are used in the PO approach and the surface scattering integral is determined. Evaluating this integral asymptotically, the fields reflected from the surface and the surface divergence coefficient are calculated. Finally, the results obtained in this study are evaluated numerically and the effect of the surface impedance on the scattered fields is analysed. The time factor is taken as e^{jωt} in this study.
Winkler, Robert
2010-02-01
Electrospray ionization (ESI) ion trap mass spectrometers with relatively low resolution are frequently used for the analysis of natural products and peptides. Although ESI spectra of multiply charged protein molecules can also be measured on this type of device, only average spectra are produced for the majority of naturally occurring proteins. Evaluating such ESI protein spectra would provide valuable information about the native state of the investigated proteins. However, no suitable, freely available software could be found that allows charge state determination and molecular weight calculation of single proteins from average ESI-MS data. Therefore, an algorithm based on standard deviation optimization (scatter minimization) was implemented for the analysis of protein ESI-MS data. The resulting software, ESIprot, was tested with ESI-MS data of six intact reference proteins between 12.4 and 66.7 kDa. In all cases, the correct charge states could be determined. The obtained absolute mass errors were in a range between -0.2 and 1.2 Da, with relative errors below 30 ppm. The achievable mass accuracy allows valid conclusions about the actual condition of proteins. Moreover, the ESIprot algorithm demonstrates extraordinary robustness and allows spectral interpretation from as few as two peaks, given sufficient quality of the provided m/z data, without the need for peak intensity data. ESIprot is independent of the raw data format and the computer platform, making it a versatile tool for mass spectrometrists. The program code was released under the open-source GPLv3 license to support future developments of mass spectrometry software. Copyright 2010 John Wiley & Sons, Ltd.
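A minimal sketch of the scatter-minimization idea (my own illustration, not the published ESIprot code; the peak list is synthetic): consecutive peaks of a multiply charged protein are assigned trial charge states, and the assignment whose implied neutral masses have the smallest standard deviation wins.

```python
import statistics

PROTON = 1.007276  # proton mass in Da

def charge_and_mass(mz_peaks, z_min=1, z_max=50):
    """Assign consecutive charge states to an m/z peak series (sorted
    high to low m/z, so charge increases along the series) by minimizing
    the standard deviation of the implied neutral masses
    M = z * (m/z - m_proton) for [M + zH]^z+ ions."""
    peaks = sorted(mz_peaks, reverse=True)
    best = None
    for z0 in range(z_min, z_max + 1):
        masses = [(z0 + i) * (mz - PROTON) for i, mz in enumerate(peaks)]
        spread = statistics.stdev(masses)
        if best is None or spread < best[0]:
            best = (spread, z0, statistics.mean(masses))
    _, z_first, mass = best
    return z_first, mass

# Synthetic average spectrum of a hypothetical 12360 Da protein, charges 10+..13+
peaks = [(12360.0 + z * PROTON) / z for z in (10, 11, 12, 13)]
z, m = charge_and_mass(peaks)
```

With the correct charge assignment all implied masses coincide, so the scatter collapses toward zero; wrong assignments spread the masses by tens of Da, which is why the method is robust with as few as two peaks and no intensity data.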
(S)fermion masses and lepton flavor violation. A democratic approach
International Nuclear Information System (INIS)
Hamaguchi, K.; Kakizaki, Mitsuru; Yamaguchi, Masahiro
2004-01-01
It is well-known that flavor mixing among the sfermion masses must be quite suppressed to survive various FCNC experimental bounds. One of the solutions to this supersymmetric FCNC problem is an alignment mechanism in which sfermion masses and fermion masses have some common origin and thus they are somehow aligned to each other. We propose a democratic approach to realize this idea, and illustrate how it has different predictions in slepton masses as well as lepton flavor violation from a more conventional minimal supergravity approach. This talk is based on our work in Ref. 1. (author)
Energy Technology Data Exchange (ETDEWEB)
Borges, A.; Solomon, G. C. [Department of Chemistry and Nano-Science Center, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø (Denmark)
2016-05-21
Single molecule conductance measurements are often interpreted through computational modeling, but the complexity of these calculations makes it difficult to link them directly to simpler concepts and models. Previous work has attempted to make this connection using maximally localized Wannier functions and symmetry-adapted basis sets, but their use can be ambiguous and non-trivial. Starting from a Hamiltonian and overlap matrix written in a hydrogen-like basis set, we demonstrate a simple approach to obtain a new basis set that is chemically more intuitive and allows interpretation in terms of simple concepts and models. By diagonalizing the Hamiltonians corresponding to each atom in the molecule, we obtain a basis set that can be partitioned into pseudo-σ and pseudo-π components, allows partitioning of the Landauer-Büttiker transmission, and permits the construction of simple Hückel models that reproduce the key features of the full calculation. This method provides a link between complex calculations and simple concepts and models, providing intuition or extracting parameters for more complex model systems.
Righter, K.; Ghiorso, M.
2009-01-01
Calculation of oxygen fugacity in high pressure and temperature experiments on metal-silicate systems is usually approximated from the mole fractions of Fe in the metal and FeO in the silicate melt: ΔIW = 2·log10(X_FeO/X_Fe), where IW is the iron-wustite reference oxygen buffer. Although this is a quick and easy calculation to make, it has been applied to a huge variety of metallic (Fe-Ni-S-C-O-Si) and silicate (SiO2-Al2O3-TiO2-FeO-MgO-CaO-Na2O-K2O) liquids. This approach has surely led to values that have little meaning, yet are applied with great confidence, for example, to a terrestrial mantle at "IW-2". Although fO2 can be circumvented in some cases by consideration of Fe-M distribution coefficients, these do not eliminate the effects of alloy or silicate liquid compositional variation, or the specific chemical effects of S in the silicate liquid, for example. In order to address the issue of what the actual value of fO2 is in any given experiment, we have calculated fO2 from the equilibrium 2Fe (metal) + SiO2 (liq) + O2 = Fe2SiO4 (liq).
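For orientation, the buffer-relative shortcut the authors critique can be sketched as follows (a sketch under ideal-solution assumptions with invented mole fractions; sign conventions vary in the literature, and the form below gives negative values, e.g. "IW-2", for reduced, metal-rich systems, consistent with the abstract's usage):

```python
import math

def delta_iw(x_fe_metal, x_feo_silicate):
    """Oxygen fugacity relative to the iron-wustite buffer under the
    ideal-solution approximation: dIW = 2 * log10(X_FeO / X_Fe).
    Activity coefficients are ignored, which is exactly the
    simplification the abstract warns about."""
    return 2.0 * math.log10(x_feo_silicate / x_fe_metal)

# Hypothetical experiment: X_Fe(metal) = 0.85, X_FeO(silicate melt) = 0.06
print(delta_iw(0.85, 0.06))  # ~ -2.3, i.e. "IW-2.3"
```

The factor of 2 arises because the buffer reaction Fe + 1/2 O2 = FeO involves half a mole of O2 per mole of Fe; replacing mole fractions with proper activities is what the thermodynamic treatment in the abstract is meant to restore.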
Ananthakrishna, G.; K, Srikanth
2018-03-01
It is well known that plastic deformation is a highly nonlinear, dissipative, irreversible phenomenon of considerable complexity. As a consequence, little progress has been made in modeling some well-known size-dependent properties of plastic deformation, for instance, calculating hardness as a function of indentation depth independently. Here, we devise a method of calculating hardness by computing the residual indentation depth and then taking the hardness as the ratio of the load to the residual imprint area. Recognizing that dislocations are the basic defects controlling the plastic component of the indentation depth, we set up a system of coupled nonlinear time-evolution equations for the mobile, forest, and geometrically necessary dislocation densities. Within our approach, the geometrically necessary dislocations are considered immobile, since they contribute to additional hardness. The model includes dislocation multiplication, storage, and recovery mechanisms. The growth of the geometrically necessary dislocation density is controlled by the number of loops that can be activated under the contact area and the mean strain gradient. The equations are then coupled to the load-rate equation. Our approach can incorporate experimental parameters such as the indentation rate and the geometrical parameters defining the Berkovich indenter, including the nominal tip radius. The residual indentation depth is obtained by integrating the Orowan expression for the plastic strain rate, which is then used to calculate the hardness. Consistent with experimental observations, the increase in hardness with decreasing indentation depth in our model arises from limited dislocation sources at small indentation depths, and the model therefore avoids the divergence in the limit of small depths reported in the Nix-Gao model. We demonstrate that, for a range of parameter values that physically represent different materials, the model predicts the three characteristic
Mass spectrometric based approaches in urine metabolomics and biomarker discovery.
Khamis, Mona M; Adamko, Darryl J; El-Aneed, Anas
2017-03-01
Urine metabolomics has recently emerged as a prominent field for the discovery of non-invasive biomarkers that can detect subtle metabolic discrepancies in response to a specific disease or therapeutic intervention. Urine, compared to other biofluids, is characterized by its ease of collection, its richness in metabolites and its ability to reflect imbalances of all biochemical pathways within the body. Following urine collection for metabolomic analysis, samples must be immediately frozen to quench any biogenic and/or non-biogenic chemical reactions. Depending on the aim of the experiment, sample preparation can vary from simple procedures such as filtration to more specific extraction protocols such as liquid-liquid extraction. Due to the lack of comprehensive studies on urine metabolome stability, higher storage temperatures (i.e. 4°C) and repetitive freeze-thaw cycles should be avoided. To date, among all analytical techniques, mass spectrometry (MS) provides the best sensitivity, selectivity and identification capabilities for analyzing the majority of the metabolite composition of urine. Combined with the qualitative and quantitative capabilities of MS, and owing to the continuous improvements in its related technologies (i.e. ultra-high-performance liquid chromatography [UPLC] and hydrophilic interaction liquid chromatography [HILIC]), liquid chromatography (LC)-MS is unequivocally the most utilized and the most informative analytical tool employed in urine metabolomics. Furthermore, differential isotope tagging techniques have provided a solution to ion suppression from the urine matrix, thus allowing quantitative analysis. In addition to LC-MS, other MS-based technologies have been utilized in urine metabolomics, including direct injection (infusion)-MS, capillary electrophoresis-MS and gas chromatography-MS. In this article, we review the current progress of different MS-based techniques in exploring the urine metabolome as well as the recent findings in providing
Exciton scattering approach for optical spectra calculations in branched conjugated macromolecules
International Nuclear Information System (INIS)
Li, Hao; Wu, Chao; Malinin, Sergey V.; Tretiak, Sergei; Chernyak, Vladimir Y.
2016-01-01
The exciton scattering (ES) technique is a multiscale approach based on the concept of a particle in a box and developed for efficient calculations of excited-state electronic structure and optical spectra in low-dimensional conjugated macromolecules. Within the ES method, electronic excitations in molecular structure are attributed to standing waves representing quantum quasi-particles (excitons), which reside on the graph whose edges and nodes stand for the molecular linear segments and vertices, respectively. Exciton propagation on the linear segments is characterized by the exciton dispersion, whereas exciton scattering at the branching centers is determined by the energy-dependent scattering matrices. Using these ES energetic parameters, the excitation energies are then found by solving a set of generalized “particle in a box” problems on the graph that represents the molecule. Similarly, unique energy-dependent ES dipolar parameters permit calculations of the corresponding oscillator strengths, thus completing optical spectra modeling. Both the energetic and dipolar parameters can be extracted from quantum-chemical computations in small molecular fragments and tabulated in the ES library for further applications. Subsequently, spectroscopic modeling for any macrostructure within a considered molecular family could be performed with negligible numerical effort. We demonstrate the ES method application to molecular families of branched conjugated phenylacetylenes and ladder poly-para-phenylenes, as well as structures with electron donor and acceptor chemical substituents. Time-dependent density functional theory (TD-DFT) is used as a reference model for electronic structure. The ES calculations accurately reproduce the optical spectra compared to the reference quantum chemistry results, and make it possible to predict spectra of complex macromolecules, where conventional electronic structure calculations are unfeasible.
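As a cartoon of the "generalized particle in a box" step (my own toy example, not the ES library: a single linear segment terminated by two identical scattering centers, with an assumed energy-dependent scattering phase φ(k) = arctan(k/k0)), the allowed exciton wavevectors solve kL + φ(k) = nπ:

```python
import math

def segment_modes(L, phase, n_modes, k_max=50.0):
    """Bisection solve of k*L + phase(k) = n*pi for one conjugated segment
    of length L with two identical termini. phase(k) must keep the
    left-hand side monotonically increasing in k."""
    roots = []
    for n in range(1, n_modes + 1):
        f = lambda k: k * L + phase(k) - n * math.pi
        lo, hi = 1e-9, k_max
        for _ in range(80):  # bisection on the bracketing interval
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

# Assumed toy scattering phase; units are arbitrary
ks = segment_modes(1.0, lambda k: math.atan(k / 2.0), 3)
```

In the full ES method the phases come from tabulated, energy-dependent scattering matrices at each vertex, and the quantization condition is solved jointly on the whole molecular graph rather than on one edge.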
Exciton scattering approach for optical spectra calculations in branched conjugated macromolecules
Energy Technology Data Exchange (ETDEWEB)
Li, Hao [Department of Chemistry, University of Houston, Houston, TX 77204 (United States); Wu, Chao [Electronic Structure Lab, Center of Microscopic Theory and Simulation, Frontier Institute of Science and Technology, Xian Jiaotong University, Xian 710054 (China); Malinin, Sergey V. [Department of Chemistry, Wayne State University, 5101 Cass Avenue, Detroit, MI 48202 (United States); Tretiak, Sergei, E-mail: serg@lanl.gov [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Chernyak, Vladimir Y., E-mail: chernyak@chem.wayne.edu [Department of Chemistry, Wayne State University, 5101 Cass Avenue, Detroit, MI 48202 (United States)
2016-12-20
Numerical probabilistic analysis for slope stability in fractured rock masses using DFN-DEM approach
Directory of Open Access Journals (Sweden)
Alireza Baghbanan
2017-06-01
Due to uncertainties in the input geometrical properties of fractures, there is no unique solution for assessing the stability of slopes in jointed rock masses, so applying probabilistic analysis in these cases is inevitable. In this study, a probabilistic analysis procedure together with the relevant algorithms is developed using a Discrete Fracture Network-Distinct Element Method (DFN-DEM) approach. In the right abutment of Karun 4 dam and downstream of the dam body, five joint sets and one major joint have been identified. According to the geometrical properties of fractures in the Karun river valley, instability is probable in this abutment. In order to evaluate the stability of the rock slope, different combinations of joint-set geometrical parameters are selected, and a series of numerical DEM simulations are performed on generated and validated DFN models in the DFN-DEM approach to determine the minimum required support patterns in dry and saturated conditions. Results indicate that the distribution of required bolt length is well fitted by a lognormal distribution in both circumstances. In dry conditions, the calculated mean value is 1125.3 m, and more than 80 percent of models need only 1614.99 m of bolts, which corresponds to a bolt pattern with 2 m spacing and 12 m length. For slopes in the saturated condition, the calculated mean value is 1821.8 m, and more than 80 percent of models need only 2653.49 m of bolts, which is equivalent to a bolt pattern with 15 m length and 1.5 m spacing. Comparison of the results obtained with the numerical and empirical methods shows that investigating slope stability with different DFN realizations over different block patterns is more efficient than the empirical methods.
Approach and methods to evaluate the uncertainty in system thermalhydraulic calculations
International Nuclear Information System (INIS)
D'Auria, F.
2004-01-01
The evaluation of uncertainty constitutes the necessary supplement to Best Estimate (BE) calculations performed to understand accident scenarios in water-cooled nuclear reactors. The need comes from the imperfection of computational tools on the one hand and, on the other, from the interest in using such tools to obtain more precise evaluations of safety margins. In the present paper the approaches to uncertainty are outlined and the CIAU (Code with capability of Internal Assessment of Uncertainty) method proposed by the University of Pisa is described, including the ideas at its basis and results from applications. An activity in progress at the International Atomic Energy Agency (IAEA) is considered. Two approaches are distinguished, characterized as 'propagation of code input uncertainty' and 'propagation of code output errors'. For both methods, the thermal-hydraulic code is at the centre of the process of uncertainty evaluation: in the former case the code itself is adopted to compute the error bands and to propagate the input errors, while in the latter case the errors in code application to relevant measurements are used to derive the error bands. The CIAU method exploits the idea of the 'status approach' for identifying the thermalhydraulic conditions of an accident in any Nuclear Power Plant (NPP). Errors in predicting such status are derived from the comparison between predicted and measured quantities and, in the stage of the application of the method, are used to compute the uncertainty. (author)
Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.
Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A
2015-10-26
Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
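The thermodynamic integration backbone that both IT-TI and sesTI share can be sketched as a simple quadrature over λ windows (a generic sketch, not the authors' protocol; the per-window averages below are invented):

```python
def ti_free_energy(lambdas, dudl_means):
    """Trapezoidal estimate of dF = integral from 0 to 1 of <dU/dlambda>
    over lambda, from per-window ensemble averages of dU/dlambda."""
    dF = 0.0
    for i in range(len(lambdas) - 1):
        dF += 0.5 * (dudl_means[i] + dudl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dF

# Five invented lambda windows with <dU/dlambda> = 2*lambda (exact dF = 1)
lams = [0.0, 0.25, 0.5, 0.75, 1.0]
print(ti_free_energy(lams, [2 * l for l in lams]))  # 1.0
```

The methodological point of the paper lies upstream of this quadrature: sesMD couples the replicas within each λ window so that the ensemble averages ⟨dU/dλ⟩ are converged even when conformational substates exchange slowly.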
Effective Approach to Calculate Analysis Window in Infinite Discrete Gabor Transform
Directory of Open Access Journals (Sweden)
Rui Li
2018-01-01
The long-periodic/infinite discrete Gabor transform (DGT) is more effective than the periodic/finite one in many applications. In this paper, a fast and effective approach is presented to efficiently compute the Gabor analysis window for an arbitrary given synthesis window in the DGT of long-periodic/infinite sequences. A new orthogonality constraint between the analysis window and the synthesis window in the DGT for long-periodic/infinite sequences is derived and proved to be equivalent to the completeness condition of the long-periodic/infinite DGT. By using the property of the delta function, the original orthogonality can be expressed as a certain number of linear equation sets in both the critical-sampling case and the oversampling case, which can be calculated quickly and efficiently by the fast Fourier transform (FFT). The computational complexity of the proposed approach is analyzed and compared with that of existing canonical algorithms. The numerical results indicate that the proposed approach is efficient and fast for computing the Gabor analysis window in both the critical-sampling case and the oversampling case in comparison to existing algorithms.
Fuchs, Susanne I; Junge, Sibylle; Ellemunter, Helmut; Ballmann, Manfred; Gappa, Monika
2013-05-01
Volumetric capnography, reflecting the course of CO2 exhalation, is used to assess ventilation inhomogeneity. Calculation of the slope of expiratory phase 3 and the capnographic index (KPIv) from expirograms allows quantification of the extent and severity of small airway impairment. However, technical limitations have hampered more widespread use of this technique. Using expiratory molar mass-volume curves sampled with a handheld ultrasonic flow sensor during tidal breathing is a novel approach to extract similar information from expirograms in a simpler manner, possibly qualifying as a screening tool for clinical routine. The aim of the present study was to evaluate calculation of the KPIv based on molar mass-volume curves sampled with an ultrasonic flow sensor in patients with CF and controls, by assessing feasibility, reproducibility and comparability with the Lung Clearance Index (LCI) derived from multiple breath washout (MBW), used as the reference method. Measurements were performed in patients with CF and healthy controls during a single test occasion using the EasyOne Pro, MBW Module (ndd Medical Technologies, Switzerland). Capnography and MBW were performed in 87/96 patients with CF and 38/42 controls, with a success rate of 90.6% for capnography. Mean age (range) was 12.1 (4-25) years. Mean (SD) KPIv was 6.94 (3.08) in CF and 5.10 (2.06) in controls (p=0.001). Mean (SD) LCI was 8.0 (1.4) in CF and 6.2 (0.4) in controls (p = …). Calculation of the KPIv based on molar mass-volume curves is feasible. KPIv is significantly different between patients with CF and controls and correlates with the LCI. However, individual data revealed a relevant overlap between patients and controls, requiring further evaluation before this method can be recommended for clinical use. Copyright © 2012 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach
Fleury, Benoit; Labbe, Julien
2014-08-01
The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% gain of mass is expected using a battery/supercapacitors active hybridization instead of a single battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution.
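The sizing problem can be illustrated as a two-variable linear program: minimize m_batt + m_sc subject to mission energy and power requirements (my own sketch with invented specific-energy/specific-power figures, not the paper's model or its 40% result). With two variables the optimum sits at a vertex of the feasible region, so exhaustive vertex checking suffices:

```python
def hybrid_mass(e_b, p_b, e_sc, p_sc, energy_req, power_req):
    """Minimize total mass m_b + m_sc subject to
    e_b*m_b + e_sc*m_sc >= energy_req and p_b*m_b + p_sc*m_sc >= power_req
    (m_b, m_sc >= 0) by enumerating the vertices of the feasible region."""
    cands = [
        (max(energy_req / e_b, power_req / p_b), 0.0),    # battery only
        (0.0, max(energy_req / e_sc, power_req / p_sc)),  # supercaps only
    ]
    det = e_b * p_sc - e_sc * p_b
    if det != 0.0:  # intersection of the two constraint boundaries (Cramer's rule)
        mb = (energy_req * p_sc - power_req * e_sc) / det
        ms = (e_b * power_req - p_b * energy_req) / det
        if mb >= 0.0 and ms >= 0.0:
            cands.append((mb, ms))
    return min((mb + ms, mb, ms) for mb, ms in cands)

# Invented figures: battery 150 Wh/kg, 300 W/kg; supercaps 5 Wh/kg, 5000 W/kg;
# mission needs 2000 Wh of energy and a 50 kW power peak
total, m_b, m_sc = hybrid_mass(150, 300, 5, 5000, energy_req=2000, power_req=50000)
```

With these made-up numbers the battery-only solution is sized by the power constraint (about 167 kg), while the hybrid optimum (about 22 kg) lets the battery carry the energy and the supercapacitors carry the power peak, which is the qualitative mechanism behind the mass gain reported in the paper.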
Hüsken, G.
2010-01-01
This thesis provides a multifunctional design approach for sustainable concrete, particularly earth-moist concrete (EMC), with application to concrete mass products. EMC is a concrete with low water content and stiff consistency that is used for the production of concrete mass products, such as
Deference, Denial, and Beyond: A Repertoire Approach to Mass Media and Schooling
Rymes, Betsy
2011-01-01
In this article, the author outlines two general research approaches, within the education world, to these mass-mediated formations: "Deference" and "Denial." Researchers who recognize the social practices that give local meaning to mass media formations and ways of speaking do not attempt to recontextualize youth media in their own social…
Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A
2014-12-01
Quantitative computed tomographic analysis (qCTA) is an accurate but time-intensive method used to quantify the volume, mass and aeration of the lungs. The aim of this study was to validate a time-efficient interpolation technique for application of qCTA in ponies. Forty-one thoracic computed tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated, defined based on the attenuation in Hounsfield units) were determined for the entire lung from all 5 mm thick CT images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT scan using the interpolation method was 1.5-2 h, compared to 8 h for analysis of all images of one complete thoracic CT scan. The calculation of pulmonary qCTA data by interpolation from 12 CT images was applicable to equine lung CT scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.
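The interpolation idea, estimating a whole-lung quantity from a subset of slices, can be sketched as trapezoidal integration of per-slice values along the scan axis (a generic sketch with a synthetic area profile, not the validated clinical algorithm):

```python
import math

def volume_from_slices(positions_cm, areas_cm2):
    """Trapezoidal integration of cross-sectional areas along the scan
    axis; works for any subset of slice positions, dense or sparse."""
    v = 0.0
    for i in range(len(positions_cm) - 1):
        v += 0.5 * (areas_cm2[i] + areas_cm2[i + 1]) * (positions_cm[i + 1] - positions_cm[i])
    return v

# Synthetic smooth "lung": area(z) = 100*sin(pi*z/30) cm^2 over a 30 cm axis
area = lambda z: 100.0 * math.sin(math.pi * z / 30.0)
dense = [i * 0.5 for i in range(61)]   # all 61 slice positions, 5 mm apart
subset = dense[::5]                    # every 5th slice: 13 images
v_dense = volume_from_slices(dense, [area(z) for z in dense])
v_sub = volume_from_slices(subset, [area(z) for z in subset])
```

Because lung cross-sections vary smoothly along the axis, the sparse estimate stays within a percent or two of the full-stack value for this toy profile, mirroring the small biases reported in the study.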
International Nuclear Information System (INIS)
Colaninno, Robin C.; Vourlidas, Angelos
2009-01-01
The twin Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) COR2 coronagraphs of the Solar Terrestrial Relations Observatory (STEREO) provide images of the solar corona from two viewpoints in the solar system. Since their launch in late 2006, the STEREO Ahead (A) and Behind (B) spacecraft have been slowly separating from Earth at a rate of 22.5° per year. By the end of 2007, the two spacecraft were separated by more than 40° from each other. At that time, we began to see large-scale differences in the morphology and total intensity between coronal mass ejections (CMEs) observed with SECCHI-COR2 on STEREO-A and B. Due to the effects of the Thomson scattering geometry, the intensity of an observed CME depends on the angle it makes with the observed plane of the sky. From the intensity images, we can calculate the integrated line-of-sight electron density and mass. We demonstrate that it is possible to simultaneously derive the direction and true total mass of the CME if we make the simple assumption that the same mass should be observed in COR2-A and B.
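The equal-mass constraint can be illustrated with a toy deprojection. The projection factor f(theta) = cos(theta) below is only a stand-in for the full Thomson-scattering geometry used in the paper, and all numbers are invented:

```python
import math

def recover_cme(mass_a, mass_b, sep_deg):
    """Return (theta_deg, true_mass): theta is the CME angle from A's plane
    of sky at which both observed masses deproject to the same true mass.
    Toy projection factor cos(theta); the real analysis uses Thomson scattering."""
    sep = math.radians(sep_deg)

    def mismatch(theta):
        return mass_a / math.cos(theta) - mass_b / math.cos(theta - sep)

    lo, hi = math.radians(-45.0), math.radians(60.0)  # bracket containing the root
    for _ in range(60):                               # plain bisection
        mid = 0.5 * (lo + hi)
        if mismatch(lo) * mismatch(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    theta = 0.5 * (lo + hi)
    return math.degrees(theta), mass_a / math.cos(theta)

# Observed (projected) masses in arbitrary units, spacecraft separated by 40 degrees.
theta, m_true = recover_cme(0.9848, 0.8660, 40.0)
print(f"theta = {theta:.1f} deg, true mass = {m_true:.3f}")
```

With these inputs the solver recovers a propagation direction of about 10° from A's plane of sky and a common true mass, mirroring the paper's argument that two viewpoints fix both unknowns at once.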
International Nuclear Information System (INIS)
Fuente P, A. de la; Dumenigo G, C.; Quevedo G, J.R.; Lopez F, Y.
2006-01-01
The evaluation of construction license applications for new radiotherapy services has occupied a significant share of the activity of the National Center of Nuclear Safety (CNSN) over the last two years. This paper presents the authors' experience in evaluating the shielding required for rooms housing cobalt therapy units and medical linear accelerators, discusses the practical problems encountered when applying the recommended methodologies in both cases, and reviews which input assumptions the Regulatory Authority has accepted for these shielding calculations. The accumulated experience shows that realistic input data, combined with sound engineering judgment, make it possible to design radiotherapy rooms that meet the dose constraints established in the legislation in force in Cuba without an excessive expenditure of construction materials. (Author)
International Nuclear Information System (INIS)
Caldwell, J.
1984-01-01
Martinelli and Morini have used an analytical method for calculating the values and distribution of the magnetic field in superconducting magnets. Using Fourier series, the magnetic field is determined by a series expansion of the current density distribution of the system of coils. This Fourier method can be modified to include axial iron to far greater accuracy (for finite permeability) by incorporating the image series approach of Caldwell and Zisserman; an exact solution can also be obtained for the case of infinite permeability. A comparison of the results derived from the expansion of Martinelli and Morini with the exact solution of Caldwell and Zisserman shows excellent agreement for the iron-free case, but the accuracy deteriorates as the permeability μ_z increases. The exact solution should be used for infinite permeability, and it also gives satisfactory results for μ_z > 100. A symmetric geometry is used throughout the communication for simplicity of presentation.
New approach to invariant-embedding methods in reactor physics calculations
International Nuclear Information System (INIS)
Forsbacka, M.J.; Rydin, R.A.
1997-01-01
Invariant-embedding methods offer an alternative approach to modeling physical phenomena and solving mathematical problems: invariant embedding allows one to express traditional boundary-value problems as initial-value problems, effectively reformulating a problem in terms of an embedding parameter. In this paper, a hybrid method is presented in which Monte Carlo-generated response functions, describing the neutronic properties of local spatial cells, are coupled together in a global reactor model using the invariant-embedding methodology, with the system multiplication factor k_eff as the embedding parameter. Thus, k_eff is computed directly rather than as the result of a secondary eigenvalue calculation. Because the response functions can represent any arbitrary material distribution within a local cell, the method shows promise for accurately assessing the change in reactivity due to core-disruptive accidents and other changes in system configuration, such as changing control rod positions. This paper reports a series of proof-of-concept calculations that assess the method.
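The idea of treating k_eff as the embedding parameter, rather than the output of a secondary eigenvalue solve, can be sketched on a toy response-matrix model. The matrix entries below are hypothetical stand-ins for the Monte Carlo-generated response functions:

```python
import numpy as np

# Toy global response matrix: M[i, j] = neutrons emerging from cell i per
# neutron entering from cell j (hypothetical numbers).
M = np.array([[0.4, 0.3, 0.0],
              [0.3, 0.5, 0.3],
              [0.0, 0.3, 0.4]])

def k_eff_by_embedding(M, tol=1e-10):
    """Bisect on the embedding parameter k until the k-scaled global
    response M/k is exactly self-sustaining (dominant eigenvalue 1)."""
    lo, hi = 1e-6, 10.0
    while hi - lo > tol:
        k = 0.5 * (lo + hi)
        v = np.ones(M.shape[0])
        for _ in range(200):          # power iteration on M / k
            v = M @ v / k
            v /= np.linalg.norm(v)
        lam = v @ (M @ v) / k         # Rayleigh quotient
        if lam > 1.0:                 # supercritical: k too small
            lo = k
        else:
            hi = k
    return 0.5 * (lo + hi)

k = k_eff_by_embedding(M)
print(k)
```

The converged k equals the spectral radius of M, which is exactly what the direct embedding formulation delivers without ever posing a separate eigenvalue problem.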
Suryadarma, I. G. P.; Handziko, Rio Christy
2017-08-01
The presence and distribution of various cicadas serve Javanese and Balinese farmers as indicators of the changing seasons; understanding season change is an effort to optimize crop yields and thereby support basic needs. Farming communities read the behaviour of cicadas against the seasonal conditions. Traditionally, the seasonal calendar, called pranotomongso, is based on the position of the sun relative to the equator and on the position of the moon. Cicadas are one indicator of the transition between the rainy and the dry season, and two species were selected here: garengpung as an indicator of the dry season and tenggoreknong as an indicator of the rainy season. This research aimed to discover the presence and growth dynamics of garengpung and tenggoreknong as season-change indicator insects. The study is explorative, with data taken through field observation of the appearance and dynamics of the two insects, conducted in Yogyakarta Special Province and Bali Province during 2015-2016. The correlation between the emergence dynamics of garengpung and tenggoreknong calls could be used as an indicator of the change between rainy and dry seasons, and the timing of this change is consistent with the pranotomongso calculation. The presence and dynamics of cicada emergence thus remain relevant ecological information within an ethnoecological approach.
Comparison of different approaches to the numerical calculation of the LMJ focal spot
Directory of Open Access Journals (Sweden)
Bourgeade A.
2013-11-01
Beam smoothing in the focal plane of high-power lasers is of particular importance to laser-plasma interaction studies, in order to minimize parametric plasma and hydrodynamic instabilities on the target. Here we investigate the focal-spot structure in different geometrical configurations where the standard paraxial hypotheses are no longer verified. We present numerical studies for a single flat-top square beam, an LMJ quadruplet, and a complete ring of quads with a large azimuth angle. Calculations are made both with a Fresnel diffraction propagation model in the paraxial approximation and with the full vector Maxwell equations. The first model is based on a Fourier-transform near-to-far-field method. The second model first decomposes the spherical wave into plane waves with a Fourier transform and propagates them to the focal spot. These two approaches are compared with Miró [1] modeling results using the paraxial or Feit-and-Fleck options. The methods presented here are generic for focal-spot calculations; they can be used for other complex geometric configurations and various smoothing techniques, and the results will be used as boundary conditions in plasma interaction computations.
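The first method mentioned, a Fourier-transform near-to-far-field propagation, can be sketched for the flat-top square beam case. Grid and beam sizes below are illustrative, not LMJ parameters, and the model is scalar and paraxial:

```python
import numpy as np

# Near field: a flat-top square beam on an N x N grid of physical width L.
N, L = 512, 10e-3                        # grid points, grid width (m)
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)
a = 2e-3                                 # half-width of the square beam (m)
near = ((np.abs(X) <= a) & (np.abs(Y) <= a)).astype(float)

# Far field (focal plane) by FFT; for a square aperture this is a
# separable sinc^2 pattern with its peak on axis.
far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(near)))
intensity = np.abs(far) ** 2
intensity /= intensity.max()

peak = tuple(int(i) for i in np.unravel_index(intensity.argmax(), intensity.shape))
print(peak)
```

The peak lands at the grid centre, the on-axis maximum of the sinc² focal spot; the full vector Maxwell treatment in the abstract is needed precisely where this paraxial shortcut breaks down.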
International Nuclear Information System (INIS)
Jancauskas, J.R.
1994-01-01
The purpose of degraded voltage relays (DVRs) is to ensure that adequate voltage is available to operate all Class 1E loads at all voltage distribution levels. Should voltage drop below the setpoint of the DVRs, the Class 1E power system is disconnected from its supply and resequenced onto the diesel generators in order to restore system voltages to acceptable levels. These relays represent one of the two levels of voltage protection required for the onsite power system. Determining the proper setpoint for degraded voltage relays in nuclear generating stations is a complex task which requires a complete understanding of the Class 1E power distribution system. Despite the importance of degraded voltage relay setpoint calculations, most of the available references only give clues on how not to set these relays rather than guidance on how to determine the appropriate setpoint. This paper presents an approach to performing these calculations which attempts to ensure that all of the relevant design issues are addressed.
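The core constraint in such a calculation is that the relay dropout setting must fit in a window between two analytical limits. The sketch below illustrates only that windowing arithmetic; the numbers and the simple tolerance model are hypothetical, and a real setpoint study needs the plant's load-flow analysis:

```python
def dvr_setpoint_window(v_min_analytical, v_min_operating, relay_tol):
    """Dropout must sit above the minimum voltage that still starts and runs
    all Class 1E loads, and below the minimum expected operating voltage,
    with relay tolerance eaten out of both ends (all values per-unit)."""
    low = v_min_analytical * (1 + relay_tol)
    high = v_min_operating * (1 - relay_tol)
    if low >= high:
        raise ValueError("no viable setpoint window")
    return low, high

# Hypothetical per-unit values: 0.90 pu analytical minimum, 0.95 pu minimum
# expected operating voltage, 1% relay tolerance.
low, high = dvr_setpoint_window(0.90, 0.95, 0.01)
print(f"dropout setpoint between {low:.3f} and {high:.3f} pu")
```

If the two limits cross, no setpoint exists and the distribution system itself must be modified, which is one of the design issues the paper's approach is meant to surface.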
International Nuclear Information System (INIS)
Quigg, Chris
2007-01-01
In the classical physics we inherited from Isaac Newton, mass does not arise, it simply is. The mass of a classical object is the sum of the masses of its parts. Albert Einstein showed that the mass of a body is a measure of its energy content, inviting us to consider the origins of mass. The protons we accelerate at Fermilab are prime examples of Einsteinian matter: nearly all of their mass arises from stored energy. Missing mass led to the discovery of the noble gases, and a new form of missing mass leads us to the notion of dark matter. Starting with a brief guided tour of the meanings of mass, the colloquium will explore the multiple origins of mass. We will see how far we have come toward understanding mass, and survey the issues that guide our research today.
Mass transfer and slag-metal reaction in ladle refining: a CFD approach
Ramström, Eva
2009-01-01
In order to optimise the ladle treatment, mass transfer modelling of aluminium addition and homogenisation time was carried out. It was stressed that incorporating slag-metal reactions into the mass transfer modelling would strongly enhance the reliability and amount of information to be analyzed from the CFD calculations. In the present work, a thermodynamic model taking all the involved slag-metal reactions into consideration was incorporated into a 2-D fluid flow model of an argon stirr...
A Recommended New Approach on Motorization Ratio Calculations of Stepper Motors
Nalbandian, Ruben; Blais, Thierry; Horth, Richard
2014-01-01
Stepper motors are widely used in spacecraft mechanisms requiring repeatable and reliable performance. The unique detent torque characteristics of these types of motors make them behave differently when subjected to low-duty-cycle excitations, where the applied driving pulses are only energized for a fraction of the pulse duration. This phenomenon is even more pronounced in the discrete permanent magnet stepper motors used in the space industry. While the inherent high detent properties of discrete permanent magnets provide desirable unpowered holding performance, they result in unique behavior, especially at low duty cycles. Notably, the running torque reduces quickly to the unpowered holding torque when the duty cycle is reduced. The space industry's accepted methodology for calculating the Motorization Ratio (or Torque Margin) is more applicable to systems where power is continuously applied to the motor coils, such as brushless DC motors, where the cogging torques are low enough not to affect the linear performance of the motors as a function of applied current. This paper summarizes the theoretical and experimental studies performed on a number of space-qualified motors under different pulse rates and duty cycles. It is the intention of this paper to introduce a new approach to calculating the Motorization Ratio for discrete permanent magnet steppers under all full and partial duty cycle regimes. The recommended approach defines two distinct relationships for the Motorization Ratio, one for 100 percent duty cycle and one for partial duty cycle, where the motor detent (unpowered holding torque) is the main contributor to holding position. These two computations accurately reflect the stepper motor's physical behavior as a function of the command phase (ON versus OFF times of the pulses), pointing out how the torque contributors combine. Important points highlighted in this study are the torque margin computations, in particular for well characterized
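The two-regime idea can be illustrated schematically. The exact torque bookkeeping is the paper's contribution and is not reproduced here; the function below is a generic sketch in which every term and number is an assumption:

```python
def motorization_ratio(t_powered, t_detent, t_resist, duty_cycle):
    """Toy two-regime margin: at 100% duty the powered running torque
    carries the load; at partial duty the unpowered detent torque must
    hold position during the OFF fraction of each pulse, so the margin
    is set by the weaker of the two phases (torques in mN*m)."""
    if duty_cycle >= 1.0:
        return t_powered / t_resist
    return min(t_powered, t_detent) / t_resist

# Hypothetical motor: 60 mN*m powered, 25 mN*m detent, 10 mN*m resisting torque.
print(motorization_ratio(60.0, 25.0, 10.0, duty_cycle=1.0))   # continuous drive
print(motorization_ratio(60.0, 25.0, 10.0, duty_cycle=0.3))   # detent-limited
```

The drop from the first ratio to the second mirrors the observation in the abstract that the running torque collapses toward the unpowered holding torque as the duty cycle is reduced.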
Guan, Jiwen; Hu, Yongjun; Zou, Hao; Cao, Lanlan; Liu, Fuyi; Shan, Xiaobin; Sheng, Liusi
2012-09-01
In the present study, photoionization and dissociation of acetic acid dimers were studied with synchrotron vacuum-ultraviolet photoionization mass spectrometry and theoretical calculations. Besides the intense signal corresponding to the protonated cluster ions (CH3COOH)n·H+, a feature related to the fragment ion (CH3COOH)H+·COO (105 amu), formed via β carbon-carbon bond cleavage, is observed. By scanning photoionization efficiency spectra, the appearance energies of the fragments (CH3COOH)·H+ and (CH3COOH)H+·COO are obtained. With the aid of theoretical calculations, seven fragmentation channels of the acetic acid dimer cation are discussed, involving five cation isomers of the dimer. While four of them are found to generate the protonated species, only one can dissociate into the C-C bond cleavage product (CH3COOH)H+·COO. After surmounting the methyl hydrogen-transfer barrier at 10.84 ± 0.05 eV, the dissociative channel producing (CH3COOH)+ ions becomes the most competitive path. When the photon energy increases to 12.4 eV, dimer cations are also found to fragment into new cations (CH3COOH)·CH3CO+. Kinetic, thermodynamic, and entropy factors for these competitive dissociation pathways are discussed. The present report provides a clear picture of the photoionization and dissociation processes of the acetic acid dimer in the photon energy range 9-15 eV.
Use of spatial symmetry in atomic-integral calculations: an efficient permutational approach
International Nuclear Information System (INIS)
Rouzo, H.L.
1979-01-01
The minimal number of independent nonzero atomic integrals that occur over arbitrarily oriented basis orbitals of the form R(r)·Y_lm(Ω) is theoretically derived. The corresponding method can easily be applied to any point group, including the molecular continuous groups C∞v and D∞h. On the basis of this theoretical lower bound, the efficiency of the permutational approach in generating sets of independent integrals is discussed. It is proved that lobe orbitals are always more efficient than the familiar Cartesian Gaussians, in the sense that GLOs provide the shortest integral lists. Moreover, it appears that the new axial GLOs often lead to a number of integrals equal to the theoretical lower bound previously defined. With AGLOs, the numbers of two-electron integrals to be computed, stored, and processed are divided by factors of 2.9 (NH3), 4.2 (C5H5), and 3.6 (C6H6) with reference to the corresponding CGTO calculations. Remembering that in the permutational approach atomic integrals are directly computed without any four-index transformation, its utilization in connection with AGLOs provides one of the most powerful tools for treating symmetrical species. 34 references
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
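The final step of any such exact procedure is a dominance filter over the enumerated candidates. The sketch below shows that filter on a toy discrete biobjective problem; the decision space and objectives are invented, and the k-best enumeration machinery that is the paper's actual contribution is not reproduced:

```python
from itertools import product

solutions = list(product(range(5), repeat=2))      # toy discrete decision space

def f1(s):                                          # two toy objectives to minimize
    return s[0] ** 2 + s[1]

def f2(s):
    return (4 - s[0]) ** 2 + (4 - s[1])

def pareto_front(solutions, objectives):
    """Keep exactly the non-dominated solutions (minimization)."""
    pts = [(tuple(f(s) for f in objectives), s) for s in solutions]
    front = []
    for val, s in pts:
        dominated = any(all(o <= v for o, v in zip(other, val)) and other != val
                        for other, _ in pts)
        if not dominated:
            front.append((val, s))
    return front

front = pareto_front(solutions, [f1, f2])
vals = sorted(v for v, _ in front)
print(vals)
```

Because the filter is applied to an exhaustively enumerated candidate set here, the output is the complete exact front, which is the guarantee the paper's two theoretical conditions establish for the k-best construction without full enumeration.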
Energy Technology Data Exchange (ETDEWEB)
Graf, Peter; Damiani, Rick R.; Dykes, Katherine; Jonkman, Jason M.
2017-01-09
A new adaptive stratified importance sampling (ASIS) method is proposed as an alternative approach for the calculation of the 50-year extreme load under operational conditions, as in design load case 1.1 of the International Electrotechnical Commission design standard. ASIS combines elements of the binning-and-extrapolation technique currently described by the standard and of the importance sampling (IS) method to estimate load probabilities of exceedance (POEs). Whereas a Monte Carlo (MC) approach would reach the sought level of POE only with a daunting number of simulations, IS-based techniques are promising because they target the sampling of the input parameters on the parts of the distributions that are most responsible for the extreme loads, thus reducing the number of runs required. We compared the various methods on select load channels output from FAST, an aero-hydro-servo-elastic tool for the design and analysis of wind turbines developed by the National Renewable Energy Laboratory (NREL). Our newly devised method, although still in its infancy in terms of tuning of the subparameters, is comparable to the others in terms of load estimation and its variance versus computational cost, and offers great promise going forward due to the incorporation of adaptivity into the already powerful importance sampling concept.
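The advantage of importance sampling for small exceedance probabilities can be shown on a one-dimensional toy. Here the "load" is simply a standard normal response and the proposal is shifted toward the tail; all choices are illustrative and have nothing to do with FAST or the ASIS subparameters:

```python
import math
import random

random.seed(0)
threshold, shift, n = 4.0, 4.0, 20000   # POE target: P(load > 4)

def phi(x, mu=0.0):
    """Standard-width normal pdf centred at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

est = 0.0
for _ in range(n):
    x = random.gauss(shift, 1.0)        # sample from the shifted proposal
    w = phi(x) / phi(x, shift)          # likelihood-ratio weight
    est += w * (x > threshold)
est /= n
# Analytic tail probability 1 - Phi(4) is about 3.17e-5.
print(est)
```

A plain Monte Carlo run of the same size would see only a handful of exceedances (expected count below 1), while the weighted estimator above already resolves the probability to a few percent, which is the "daunting number of simulations" gap the abstract refers to.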
International Nuclear Information System (INIS)
Kolevatov, R. S.; Boreskov, K. G.
2013-01-01
We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single difractive cross sections with account of all Pomeron loops are obtained.
International Nuclear Information System (INIS)
Cheng Lan; Huang Weizhi; Zhou Baosen
1996-01-01
Using the matrix elements of the M3Y force as equivalent G-matrix elements, the spectra of ²¹⁰Pb, ²⁰⁶Pb, ²⁰⁶Hg and ²¹⁰Po are calculated in the framework of the Folded Diagram Method. The results show that such equivalent matrix elements are suitable for microscopic calculations of nuclear structure in the heavy-mass region.
Capillary-HPLC with tandem mass spectrometry in analysis of alkaloid dyestuffs - a new approach.
Dąbrowski, Damian; Lech, Katarzyna; Jarosz, Maciej
2018-05-01
Development of an identification method for alkaloid compounds in Amur cork tree, as well as in the so-far unexamined Oregon grape and European barberry shrubs, is presented. A novel approach to the separation of alkaloids was applied using a capillary high-performance liquid chromatography (capillary-HPLC) system, which had never previously been reported for the analysis of alkaloid-based dyestuffs. Its optimization was conducted with three different stationary phases (unmodified octadecylsilane-bonded silica, octadecylsilane modified with polar groups, and silica-bonded pentafluorophenyl) as well as with different solvent buffers. Detection of the isolated compounds was carried out using a diode-array detector (DAD) and a tandem mass spectrometer with electrospray ionization (ESI MS/MS). The working parameters of ESI were optimized, whereas the multiple reaction monitoring (MRM) parameters of MS/MS detection were chosen based on the product-ion spectra of the quasi-molecular ions. The calibration curve of berberine was estimated (y = 1712091x + 4785.03, correlation coefficient 0.9999). The limits of detection and quantification were calculated to be 3.2 and 9.7 ng/mL, respectively. Numerous alkaloids (i.e., berberine, jatrorrhizine and magnoflorine, as well as phellodendrine, menisperine and berbamine), among them the dyestuffs' markers, were identified in the extracts from these alkaloid plants and from silk and wool fibers dyed with them.
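The reported figures are internally consistent with the usual LOD = 3.3σ/slope and LOQ = 10σ/slope relations, which can be checked directly from the abstract's calibration curve. The baseline noise σ below is inferred so that the LOD matches the reported 3.2 ng/mL; it is our assumption, not a value from the paper:

```python
slope, intercept = 1712091.0, 4785.03   # fitted curve from the abstract

def conc_from_signal(y):
    """Invert the linear calibration to get concentration (ng/mL)."""
    return (y - intercept) / slope

sigma = 3.2 * slope / 3.3   # implied baseline noise (assumed, see above)
lod = 3.3 * sigma / slope   # limit of detection (ng/mL)
loq = 10.0 * sigma / slope  # limit of quantification (ng/mL)
print(round(lod, 1), round(loq, 1))
```

The recovered LOQ of about 9.7 ng/mL matches the abstract, confirming the two limits were derived from the same noise estimate.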
Energy Technology Data Exchange (ETDEWEB)
Born, Sabine [Degussa AG, Hanau (Germany); Matsunami, Noriaki [Nagoya Univ. (Japan). Faculty of Engineering; Tawara, Hiroyuki [National Inst. for Fusion Science, Toki, Gifu (Japan)
2000-01-01
Direct current glow discharge mass spectrometry (dc-GDMS) has been applied to detect impurities in metals. The aim of this study is to understand quantitatively the processes taking place in GDMS and establish a model to calculate the relative ion yield (RIY), which is inversely proportional to the relative sensitivity factor (RSF), in order to achieve better agreement between the calculated and the experimental RIYs. A comparison is made between the calculated RIY of the present model and the experimental RIY, and also with other models. (author)
Meštrović, Zoran; Roje, Damir; Vulić, Marko; Zec, Mirela
2017-01-01
Optimal gestational weight gain has not yet been clearly defined and remains one of the most controversial issues in modern perinatology. The role of optimal weight gain during pregnancy is critical, as it has a strong effect on perinatal outcomes. In this study, gestational body mass index (BMI) change, accounting for maternal height, was investigated as a new criterion for determining gestational weight gain, in the context of fetal growth assessment. We focused on underweight women only, and aimed to assess whether the Institute of Medicine (IOM) guidelines can be considered acceptable for this subgroup of women or whether additional corrections are required. The study included 1205 pre-pregnancy underweight mothers and their neonates. Only mothers with singleton term pregnancies (37th-42nd week of gestation) and an underweight pre-gestational BMI were included; the proportion of small-for-gestational-age (SGA) infants in the study population was 16.2%. Our results showed the minimal recommended gestational weight gain of 12-14 kg, and BMI change of 4-5 kg/m², to be associated with a lower prevalence of SGA newborns. Based on our results, the recommended upper limit of gestational mass change could be substantially higher. Optimal weight gain in underweight women could be estimated at the very beginning of pregnancy as a recommended BMI change, but recalculated into kilograms according to body height, which modulates the numerical calculation of BMI. Our proposal presents a further step towards an individualized approach for each pregnant woman.
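The conversion the proposal rests on is elementary: since BMI = weight / height², a target BMI change maps back to kilograms through the woman's own height. The heights and the 4 kg/m² target below are examples, not clinical advice:

```python
def gain_kg(bmi_change, height_m):
    """Weight change (kg) equivalent to a given BMI change (kg/m^2)."""
    return bmi_change * height_m ** 2

# The same 4 kg/m^2 BMI change corresponds to different gains by height:
for h in (1.55, 1.65, 1.75):
    print(f"{h:.2f} m -> {gain_kg(4.0, h):.1f} kg")
```

A fixed kilogram recommendation treats a 1.55 m and a 1.75 m woman identically, whereas the BMI-change criterion spreads the target by almost 3 kg between them, which is the individualization the abstract argues for.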
BelBruno, J. J.; Sanville, E.; Burnin, A.; Muhangi, A. K.; Malyutin, A.
2009-08-01
GamSn clusters were generated by laser ablation of a solid sample of Ga2S3. The resulting molecules were analyzed in a time-of-flight mass spectrometer. In addition to atomic species, the spectra exhibited evidence for the existence of GaS3+, GaS4+, GaS5+, and GaS6+ clusters. The potential neutral and cationic structures of the observed GamSn clusters were computationally investigated using a density-functional approach. Reference is made to the kinetic pathways required for production of clusters from the starting point of the stoichiometric molecule or molecular ion. Cluster atomization enthalpies are compared with bulk values from the literature.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination.
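The recipe in the abstract can be sketched numerically: take delta = slope × 2 × SD(covariate), then apply a standard two-sample normal approximation with the overall expected number of events held fixed. The power formula below is the textbook approximation, not the paper's exact derivation, and the inputs are illustrative:

```python
import math

def power_two_sample(delta, events, alpha_z=1.959963985):
    """Approximate power for detecting a log-hazard/log-odds difference
    delta between two equal groups sharing `events` total events
    (SE of the difference taken as 2/sqrt(events), two-sided 5% test)."""
    se = 2.0 / math.sqrt(events)
    z = abs(delta) / se - alpha_z
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Regression problem: slope 0.25 per unit of a covariate with SD 1.0,
# 200 expected events overall (hypothetical inputs).
beta, sd_x = 0.25, 1.0
delta = beta * 2.0 * sd_x
p = power_two_sample(delta, events=200)
print(f"power ~ {p:.2f}")
```

This gives the power for the equivalent two-sample problem directly, which, per the paper, approximates the power of the original logistic or Cox regression test.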
International Nuclear Information System (INIS)
Shukla, Anil; Bogdanov, Bogdan
2015-01-01
Small cationic and anionic clusters of lithium formate were generated by electrospray ionization, and their fragmentations were studied by tandem mass spectrometry (collision-induced dissociation with N2). Singly as well as multiply charged clusters were formed in both positive and negative ion modes, with the general formulae (HCOOLi)nLi+, (HCOOLi)nLim^m+, (HCOOLi)nHCOO−, and (HCOOLi)n(HCOO)m^m−. Several magic number cluster (MNC) ions were observed in both ion modes, though more predominantly in the positive mode, with (HCOOLi)3Li+ being the most abundant and stable cluster ion. Fragmentation of singly charged positive clusters proceeds first by the loss of a dimer unit ((HCOOLi)2), followed by the loss of monomer units (HCOOLi), although the former remains the dominant dissociation process. For positive cluster ions, all fragmentations lead to the magic cluster (HCOOLi)3Li+ as the most abundant fragment ion at higher collision energies, which then fragments further into dimer and monomer ions at lower abundances. In the negative ion mode, however, singly charged clusters dissociated via sequential loss of monomer units. Multiply charged clusters in both modes dissociated mainly via Coulomb repulsion. Quantum chemical calculations performed for the smaller cluster ions showed that the trimer ion has a closed ring structure similar to the phenalenylium structure, with three closed rings connected to the central lithium ion. Further additions of monomer units give similar symmetric structures for the hexamer and nonamer cluster ions. Thermochemical calculations show that the trimer cluster ion is relatively more stable than neighboring cluster ions, supporting the experimental observation of a magic number cluster with enhanced stability.
Dryahina, K.; Spanel, P.
2005-07-01
A method to calculate diffusion coefficients of ions important for selected ion flow tube mass spectrometry, SIFT-MS, is presented. The ions on which the method is demonstrated include the SIFT-MS precursors H3O+(H2O)0,1,2,3, NO+(H2O)0,1,2 and O2+, and the product ions relevant to the analysis of the breath trace metabolites ammonia (NH3+(H2O)0,1,2, NH4+(H2O)0,1,2), acetaldehyde (C2H4OH+(H2O)0,1,2), acetone (CH3CO+, (CH3)2CO+, (CH3)2COH+(H2O)0,1, (CH3)2CO·NO+), ethanol (C2H5OHH+(H2O)0,1,2) and isoprene (C5H7+, C5H8+, C5H9+). A theoretical model of the (12, 4) potential for the interaction between the ions and helium atoms is used, with the repulsive part approximated by the mean hard-sphere cross section and the attractive part describing ion-induced dipole interactions. The reduced zero-field mobilities at 300 K are calculated using the Viehland and Mason theory [L.A. Viehland, S.L. Lin, E.A. Mason, At. Data Nucl. Data Tables, 60 (1995) 37-95], parameterised by a simple formula as a function of the mean hard-sphere cross section, and converted to diffusion coefficients using the Einstein relation. The method is tested on a set of experimental data for simple ions and cluster ions.
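The final conversion step, mobility to diffusion coefficient via the Einstein relation D = μ k_B T / q, is simple enough to sketch. The reduced mobility value below is a made-up example, not a number from the paper:

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
Q = 1.602176634e-19  # elementary charge, C
N0 = 2.6867811e25    # Loschmidt constant (standard number density), m^-3

def diffusion_from_k0(k0_cm2, t_kelvin, n_gas=N0):
    """Convert a reduced zero-field mobility K0 (cm^2 V^-1 s^-1, defined at
    standard gas number density) to a diffusion coefficient (m^2 s^-1):
    rescale to the actual density, then apply the Einstein relation."""
    mu = k0_cm2 * 1e-4 * (N0 / n_gas)   # mobility in m^2 V^-1 s^-1
    return mu * K_B * t_kelvin / Q

# Hypothetical K0 of 20 cm^2/V/s for a singly charged ion in helium at 300 K.
d = diffusion_from_k0(20.0, 300.0)
print(f"D = {d:.3e} m^2/s")
```

At an actual flow-tube pressure the same call with the real helium number density in `n_gas` rescales the mobility before the Einstein conversion.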
Heat and mass transfer intensification and shape optimization a multi-scale approach
2013-01-01
Is heat and mass transfer intensification a new paradigm of process engineering, or just an old and common idea renamed to suit current tastes? Where might intensification occur? How can intensification be achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification and of heat and mass transfer enhancement in multi-scale structures is provided, covering porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieving intensification and shape optimization is developed and clearly expla...
Multi-reference approach to the calculation of photoelectron spectra including spin-orbit coupling
Energy Technology Data Exchange (ETDEWEB)
Grell, Gilbert; Bokarev, Sergey I., E-mail: sergey.bokarev@uni-rostock.de; Kühn, Oliver [Institut für Physik, Universität Rostock, D-18051 Rostock (Germany); Winter, Bernd; Seidel, Robert [Helmholtz-Zentrum Berlin für Materialien und Energie, Methods for Material Development, Albert-Einstein-Strasse 15, D-12489 Berlin (Germany); Aziz, Emad F. [Helmholtz-Zentrum Berlin für Materialien und Energie, Methods for Material Development, Albert-Einstein-Strasse 15, D-12489 Berlin (Germany); Department of Physics, Freie Universität Berlin, Arnimalle 14, D-14159 Berlin (Germany); Aziz, Saadullah G. [Chemistry Department, Faculty of Science, King Abdulaziz University, 21589 Jeddah (Saudi Arabia)
2015-08-21
X-ray photoelectron spectra provide a wealth of information on the electronic structure. The extraction of molecular details requires adequate theoretical methods, which in the case of transition metal complexes have to account for effects due to the multi-configurational and spin-mixed nature of the many-electron wave function. Here, the restricted active space self-consistent field method including spin-orbit coupling is used to cope with this challenge and to calculate valence- and core-level photoelectron spectra. The intensities are estimated within the frameworks of the Dyson orbital formalism and the sudden approximation. Thereby, we utilize an efficient computational algorithm that is based on a biorthonormal basis transformation. The approach is applied to the valence photoionization of the gas-phase water molecule and to the core ionization spectrum of the [Fe(H₂O)₆]²⁺ complex. The results show good agreement with the experimental data obtained in this work, whereas the sudden approximation demonstrates distinct deviations from experiment.
New approach to calculate the true-coincidence effect of HpGe detector
Energy Technology Data Exchange (ETDEWEB)
Alnour, I. A., E-mail: aaibrahim3@live.utm.my, E-mail: ibrahim.elnour@yahoo.com [Department of Physics, Faculty of Pure and Applied Science, International University of Africa, 12223 Khartoum (Sudan); Wagiran, H. [Department of Physics, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Skudai,Johor (Malaysia); Ibrahim, N. [Faculty of Defence Science and Technology, National Defence University of Malaysia, Kem Sungai Besi, 57000 Kuala Lumpur (Malaysia); Hamzah, S.; Elias, M. S. [Malaysia Nuclear Agency (MNA), Bangi, 43000 Kajang, Selangor D.E. (Malaysia); Siong, W. B. [Chemistry Department, Faculty of Resource Science & Technology, Universiti Malaysia Sarawak, 94300 Kota Samarahan, Sarawak (Malaysia)
2016-01-22
The corrections for true-coincidence effects in HpGe detectors are important, especially at low source-to-detector distances. This work established an approach to determine the true-coincidence effects experimentally for the HpGe detectors of type Canberra GC3018 and Ortec GEM25-76-XLB-C, which are in operation at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (MNA). The correction for true-coincidence effects was performed close to the detector, at distances of 2 and 5 cm, using ⁵⁷Co, ⁶⁰Co, ¹³³Ba and ¹³⁷Cs standard point sources. The correction factors ranged from 0.93 to 1.10 at 2 cm and from 0.97 to 1.00 at 5 cm for the Canberra HpGe detector, whereas for the Ortec HpGe detector they ranged from 0.92 to 1.13 and from 0.95 to 1.00 at 2 and 5 cm, respectively. The change in the efficiency calibration curve of the detector at 2 and 5 cm after correction was found to be less than 1%. Moreover, polynomial functions were fitted to the experimental data points using MATLAB in order to obtain an accurate fit.
A first approach to calculate BIOCLIM variables and climate zones for Antarctica
Wagner, Monika; Trutschnig, Wolfgang; Bathke, Arne C.; Ruprecht, Ulrike
2018-02-01
For testing the hypothesis that macroclimatological factors determine the occurrence, biodiversity, and species specificity of both symbiotic partners of Antarctic lecideoid lichens, we present a first approach for the computation of the full set of 19 BIOCLIM variables, as available at http://www.worldclim.org/ for all regions of the world with the exception of Antarctica. Annual mean temperature (Bio 1) and annual precipitation (Bio 12) were chosen to define climate zones of the Antarctic continent and adjacent islands as required for ecological niche modeling (ENM). The zones are based on data for the years 2009-2015, which were obtained from the Antarctic Mesoscale Prediction System (AMPS) database of the Ohio State University. For both temperature and precipitation, two separate zonings were specified; temperature values were divided into 12 zones (named 1 to 12) and precipitation values into five (named A to E). By combining these two partitions, we defined climate zonings in which each geographical point can be uniquely assigned to exactly one zone, which allows an immediate explicit interpretation. The soundness of the newly calculated climate zones was tested by comparison with already published data, which used only three zones defined on climate information from the literature. The newly defined climate zones result in a more precise assignment of species distribution to the single habitats. This study provides the basis for a more detailed continent-wide ENM using a comprehensive dataset of lichen specimens located within 21 different climate regions.
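The combination of the two partitions into a single, uniquely assigned zone label can be sketched as below. The zone boundaries here are placeholders (the paper's actual thresholds are not reproduced in the abstract); only the 12 x 5 labeling scheme follows the text:

```python
import bisect

# Hypothetical zone boundaries -- illustrative only, not the paper's values.
TEMP_EDGES = [-40, -35, -30, -25, -20, -15, -10, -5, 0, 5, 10]  # 12 zones "1".."12"
PRECIP_EDGES = [100, 300, 600, 1200]                            # 5 zones "A".."E"

def climate_zone(mean_temp_c, annual_precip_mm):
    """Assign a point to exactly one combined zone, e.g. '3B'."""
    t = bisect.bisect_right(TEMP_EDGES, mean_temp_c) + 1
    p = "ABCDE"[bisect.bisect_right(PRECIP_EDGES, annual_precip_mm)]
    return f"{t}{p}"
```

Because each input falls into exactly one temperature bin and one precipitation bin, every geographical point gets a unique, immediately interpretable label.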
Depletion benchmarks calculation of random media using explicit modeling approach of RMC
International Nuclear Information System (INIS)
Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan
2016-01-01
Highlights: • Explicit modeling of RMC is applied to a depletion benchmark for an HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of combining adjacent burnup regions is proposed for full-core problems. • The combination method reduces the memory footprint while keeping the computing accuracy. - Abstract: The Monte Carlo method plays an important role in the accurate simulation of random media, owing to its advantages of flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods, including the Random Lattice Method, Chord Length Sampling and an explicit modeling approach with a mesh acceleration technique, have been implemented in RMC to simulate particle transport in dispersed fuels, among which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for the HTGR fuel element, and the method of combining adjacent burnup regions has been proposed and investigated. The results show that explicit modeling can provide the detailed burnup distribution of individual TRISO particles, and this work would serve as a supplement for HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.
submitter Biologically optimized helium ion plans: calculation approach and its in vitro validation
Mairani, A; Magro, G; Tessonnier, T; Kamp, F; Carlson, D J; Ciocca, M; Cerutti, F; Sala, P R; Ferrari, A; Böhlen, T T; Jäkel, O; Parodi, K; Debus, J; Abdollahi, A; Haberer, T
2016-01-01
Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary $^4$He ion beams, of the secondary $^3$He and $^4$He (Z = 2) fragments and of the produced protons, deuterons and tritons (Z = 1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically-relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by ${{(\\alpha /\\beta )}_{\\text{ph}}}=5.4$ Gy, have been successfully compared against measured clonogenic survival data. The mean ...
A Lagrangian Approach for Calculating Microsphere Deposition in a One-Dimensional Lung-Airway Model.
Vaish, Mayank; Kleinstreuer, Clement
2015-09-01
Using the open-source software openfoam as the solver, a novel approach to calculate microsphere transport and deposition in a 1D human lung-equivalent trumpet model (TM) is presented. Specifically, for particle deposition in a nonlinear trumpetlike configuration a new radial force has been developed which, along with the regular drag force, generates particle trajectories toward the wall. The new semi-empirical force is a function of any given inlet volumetric flow rate, micron-particle diameter, and lung volume. Particle-deposition fractions (DFs) in the size range from 2 μm to 10 μm are in agreement with experimental datasets for different laminar and turbulent inhalation flow rates as well as total volumes. Typical run times on a single processor workstation to obtain actual total deposition results at comparable accuracy are 200 times less than that for an idealized whole-lung geometry (i.e., a 3D-1D model with airways up to 23rd generation in single-path only).
Salemi, Jason L; Comins, Meg M; Chandler, Kristen; Mogos, Mulubrhan F; Salihu, Hamisu M
2013-08-01
Comparative effectiveness research (CER) and cost-effectiveness analysis are valuable tools for informing health policy and clinical care decisions. Despite the increased availability of rich observational databases with economic measures, few researchers have the skills needed to conduct valid and reliable cost analyses for CER. The objectives of this paper are to (i) describe a practical approach for calculating cost estimates from hospital charges in discharge data using publicly available hospital cost reports, and (ii) assess the impact of using different methods for cost estimation in maternal and child health (MCH) studies by conducting economic analyses on gestational diabetes (GDM) and pre-pregnancy overweight/obesity. In Florida, we have constructed a clinically enhanced, longitudinal, encounter-level MCH database covering over 2.3 million infants (and their mothers) born alive from 1998 to 2009. Using this as a template, we describe a detailed methodology to use publicly available data to calculate hospital-wide and department-specific cost-to-charge ratios (CCRs), link them to the master database, and convert reported hospital charges to refined cost estimates. We then conduct an economic analysis as a case study on women by GDM and pre-pregnancy body mass index (BMI) status to compare the impact of using different methods on cost estimation. Over 60 % of inpatient charges for birth hospitalizations came from the nursery/labor/delivery units, which have very different cost-to-charge markups (CCR = 0.70) than the commonly substituted hospital average (CCR = 0.29). Using estimated mean, per-person maternal hospitalization costs for women with GDM as an example, unadjusted charges ($US14,696) grossly overestimated actual cost, compared with hospital-wide ($US3,498) and department-level ($US4,986) CCR adjustments. However, the refined cost estimation method, although more accurate, did not alter our conclusions that infant/maternal hospitalization costs
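The core of the described cost-estimation method is multiplying reported charges by department-specific cost-to-charge ratios (CCRs), falling back to the hospital-wide ratio where no department CCR is available. A minimal sketch (function and argument names are ours; the 0.70 nursery vs. 0.29 hospital-wide values echo the illustrative figures in the abstract):

```python
def estimate_cost(department_charges, department_ccrs, hospital_ccr):
    """Convert billed hospital charges to cost estimates.

    department_charges: {department: billed charge in dollars}
    department_ccrs:    {department: cost-to-charge ratio}
    hospital_ccr:       hospital-wide ratio used when no department CCR exists
    """
    return sum(charge * department_ccrs.get(dept, hospital_ccr)
               for dept, charge in department_charges.items())
```

Using the department-level ratio where available is what separates the refined estimate from the cruder hospital-wide adjustment the abstract compares against.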
GIS-based Approaches to Catchment Area Analyses of Mass Transit
DEFF Research Database (Denmark)
Andersen, Jonas Lohmann Elkjær; Landex, Alex
2009-01-01
Catchment area analyses of stops or stations are used to investigate the potential number of travelers to public transportation. These analyses are considered a strong decision tool in the planning process of mass transit, especially railroads. Catchment area analyses are GIS-based buffer and overlay … analyses with different approaches depending on the desired level of detail. A simple but straightforward approach to implement is the Circular Buffer Approach, where catchment areas are circular. A more detailed approach is the Service Area Approach, where catchment areas are determined by a street network … search to simulate the actual walking distances. A refinement of the Service Area Approach is to implement additional time resistance in the network search to simulate obstacles in the walking environment. This paper reviews and compares the different GIS-based catchment area approaches, their level …
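The simplest of the reviewed methods, the Circular Buffer Approach, reduces to a radius test around each stop. A minimal planar sketch (assuming projected coordinates in metres; a real GIS analysis would run buffer and overlay operations on actual geometries):

```python
import math

def circular_buffer_catchment(points, stop, radius_m):
    """Circular Buffer Approach: keep the points whose straight-line
    distance to the stop is within the catchment radius.

    points: iterable of (x, y) tuples in projected metre coordinates.
    """
    sx, sy = stop
    return [p for p in points
            if math.hypot(p[0] - sx, p[1] - sy) <= radius_m]
```

The Service Area Approach would replace the straight-line distance with a shortest-path search over the street network, which is why it yields smaller, more realistic catchments.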
International Nuclear Information System (INIS)
Mikhaylov, A.V.; Moryakov, A.V.; Nikitin, A.V.
2012-09-01
. This mode makes it possible to simulate partial core refueling and reactor operation during several campaigns. With this done, the code will calculate the pHt and the solubility of spinel in the form of Fe₃₋ₓNiₓO₄. The next step is to solve the system of differential equations describing the mass transfer of non-irradiated and radioactive corrosion products. The total number of differential equations in the system can come to about a million, which leads to tougher requirements imposed on the hardware and more complicated input data setting. These difficulties are overcome with the help of the user-friendly interface and by the availability of two code versions - one designed for use on IBM PC-compatible computers and the other for multiprocessor computer applications. While running, the code checks the correctness of the numerical computation and visualises the results as: 3-D graphs in the axes of 'computation area - time - functional value'; 2-D graphs (with the axes set by the user at will); and tables. (authors)
Global nuclear-structure calculations
International Nuclear Information System (INIS)
Moeller, P.; Nix, J.R.
1990-01-01
The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980s was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom, in addition to the ε₂ and ε₄ used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps and β-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential
Calculation of the Cost of an Adequate Education in Kentucky: A Professional Judgment Approach
Directory of Open Access Journals (Sweden)
Deborah A. Verstegen
2004-02-01
What is an adequate education and how much does it cost? In 1989, Kentucky's State Supreme Court found the entire system of education unconstitutional, "all of its parts and parcels". The Court called for all children to have access to an adequate education, one that is uniform and has as its goal the development of seven capacities, including: (i) "sufficient oral and written communication skills to enable students to function in a complex and rapidly changing civilization . . ." and (vii) "sufficient levels of academic or vocational skills to enable public school students to compete favorably with their counterparts in surrounding states, in academics or in the job market". Now, over a decade later, key questions remain regarding whether these objectives have been fulfilled. This research is designed to calculate the cost of an adequate education by aligning resources to State standards, laws and objectives, using a professional judgment approach. Seven focus groups were convened for this purpose and the scholarly literature was reviewed to provide multiple inputs into study findings. The study produced a per pupil base cost for each of three prototype school districts and a total statewide cost, with the funding gap between existing revenue and the revenue needed for current operations of $1.097 billion per year (2001-02). Additional key resource requirements needed to achieve an adequate education, identified by professional judgment panels, include: (1) extending the school year for students and teachers, (2) adding voluntary half-day preschool for three- and four-year-olds, and (3) raising teacher salaries. This increases the funding gap to $1.23 billion and suggests that significant new funding is required over time if the Commonwealth of Kentucky is to provide an adequate and equitable education of high quality for all children and youth as directed by the State Supreme Court.
A clustering approach to segmenting users of internet-based risk calculators.
Harle, C A; Downs, J S; Padman, R
2011-01-01
Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions much. These findings help to quantify variation among online health consumers and may inform the targeted marketing of and improvements to risk communication tools on the Internet.
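The segmentation step, k-means on a small set of risk variables, can be sketched with a self-contained implementation; for real work one would use an optimized library, and the point values in the test below are invented for illustration:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples of floats; returns (centroids, clusters)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties out
                centroids[i] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, clusters
```

In the study's setting each point would be a consumer described by, e.g., (perceived risk, objective risk), and the resulting clusters are the segments whose risk-perception improvement is then compared.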
Mass dispersions in a time-dependent mean-field approach
International Nuclear Information System (INIS)
Balian, R.; Bonche, P.; Flocard, H.; Veneroni, M.
1984-05-01
Characteristic functions for single-particle (s.p.) observables are evaluated by means of a time-dependent variational principle, which involves a state and an observable as conjugate variables. This provides a mean-field expression for fluctuations of s.p. observables, such as mass dispersions. The result differs from TDHF; it requires only the use of existing codes, and it presents attractive theoretical features. First numerical tests are encouraging. In particular, a calculation for ¹⁶O + ¹⁶O provides a significant increase of the predicted mass dispersion
Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap
International Nuclear Information System (INIS)
Cichy, Krzysztof; Poznan Univ.; Kujawa-Cichy, Agnieszka; Szyniszewski, Marcin; Manchester Univ.
2012-12-01
We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows one to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10⁻⁶%.
Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap
Energy Technology Data Exchange (ETDEWEB)
Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Kujawa-Cichy, Agnieszka [Poznan Univ. (Poland). Faculty of Physics; Szyniszewski, Marcin [Poznan Univ. (Poland). Faculty of Physics; Manchester Univ. (United Kingdom). NOWNano DTC
2012-12-15
We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows one to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10⁻⁶%.
A perturbative approach to mass-generation - the non-linear sigma model
International Nuclear Information System (INIS)
Davis, A.C.; Nahm, W.
1985-01-01
A calculational scheme is presented to include non-perturbative effects into the perturbation expansion. As an example we use the O(N + 1) sigma model. The scheme uses a natural parametrisation such that the lagrangian can be written in a form normal-ordered with respect to the O(N + 1) symmetric vacuum plus vacuum expectation values, the latter calculated by symmetry alone. Including such expectation values automatically leads to the inclusion of a mass-gap in the perturbation series. (orig.)
Stur, J.; Bos, M.; van der Linden, W.E.
1984-01-01
The very fast calculation procedure described earlier is applied to calculate the titration curves of complicated redox systems. The theory is extended slightly to cover inhomogeneous redox systems. Titrations of iodine or 2,6-dichloroindophenol with ascorbic acid are described. It is shown that
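Each point on such a redox titration curve follows from the Nernst equation for the potential-determining couple; a minimal sketch (the formal potential and concentration ratio below are placeholders, not values from the paper):

```python
import math

def nernst_potential(E0, n, ratio_red_ox, T=298.15):
    """Electrode potential E = E0 - (RT/nF) * ln([red]/[ox]).

    E0: formal potential of the couple (V); n: electrons transferred;
    ratio_red_ox: [reduced]/[oxidized] activity ratio at this titration point.
    """
    R = 8.314462618   # gas constant, J mol^-1 K^-1
    F = 96485.33212   # Faraday constant, C mol^-1
    return E0 - (R * T / (n * F)) * math.log(ratio_red_ox)
```

Evaluating this along the titration (the ratio changes with each titrant increment) traces out the curve; a full redox-system calculation couples several such equations with the mass balances.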
Distorted wave approach to calculate Auger transition rates of ions in metals
Energy Technology Data Exchange (ETDEWEB)
Deutscher, Stefan A. E-mail: sad@utk.edu; Diez Muino, R.; Arnau, A.; Salin, A.; Zaremba, E
2001-08-01
We evaluate the role of target distortion in the determination of Auger transition rates for multicharged ions in metals. The required two electron matrix elements are calculated using numerical solutions of the Kohn-Sham equations for both the bound and continuum states. Comparisons with calculations performed using plane waves and hydrogenic orbitals are presented.
Sliding Mode Control for Mass Moment Aerospace Vehicles Using Dynamic Inversion Approach
Directory of Open Access Journals (Sweden)
Xiao-Yu Zhang
2013-01-01
The moving mass actuation technique offers significant advantages over conventional aerodynamic control surfaces and reaction control systems, because the actuators are contained entirely within the airframe's geometrical envelope. Modeling, control, and simulation of Mass Moment Aerospace Vehicles (MMAV) utilizing moving mass actuators are discussed. The dynamics of the MMAV are separated into two parts on the basis of two-time-scale separation theory: the dynamics of the fast states and the dynamics of the slow states. Then, in order to suppress system chattering and maintain the tracking performance of the system under aerodynamic parameter perturbations, the flight control system is designed for the two subsystems, respectively, utilizing a fuzzy sliding mode control approach. The simulation results demonstrate the effectiveness of the proposed autopilot design approach. Meanwhile, the chattering phenomenon that frequently appears in conventional variable structure systems is also eliminated without deteriorating the system robustness.
Refining mass formulas for astrophysical applications: A Bayesian neural network approach
Utama, R.; Piekarewicz, J.
2017-10-01
Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r -process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
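The two-pronged paradigm, a physics-based "bare" model plus a correction learned from its residuals, can be illustrated with a deliberately trivial stand-in for the Bayesian neural network (a constant offset fitted to the residuals; the paper of course learns a far richer, uncertainty-quantified correction):

```python
def refine_with_residuals(bare, data):
    """Refine a bare mass model with a learned residual correction.

    bare: callable (N, Z) -> predicted mass.
    data: list of ((N, Z), experimental_mass) pairs.
    Returns a refined model bare + delta, where delta is the mean residual
    (the simplest possible stand-in for a BNN correction term).
    """
    residuals = [m_exp - bare(n, z) for (n, z), m_exp in data]
    delta = sum(residuals) / len(residuals)
    return lambda n, z: bare(n, z) + delta
```

Even this crude correction can only lower the rms deviation on the training set; the BNN generalizes the idea by letting the correction vary smoothly with (N, Z) and by attaching a theoretical uncertainty to each prediction.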
International Nuclear Information System (INIS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn
2017-01-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses for 49 explosions from Sakurajima Volcano, Japan.
Davies, Stephen R; Alamgir, Mahiuddin; Chan, Benjamin K H; Dang, Thao; Jones, Kai; Krishnaswami, Maya; Luo, Yawen; Mitchell, Peter S R; Moawad, Michael; Swan, Hilton; Tarrant, Greg J
2015-10-01
The purity determination of organic calibration standards using the traditional mass balance approach is described. Demonstrated examples highlight the potential for bias in each measurement and the need to implement an approach that provides a cross-check for each result, affording fit for purpose purity values in a timely and cost-effective manner. Chromatographic techniques such as gas chromatography with flame ionisation detection (GC-FID) and high-performance liquid chromatography with UV detection (HPLC-UV), combined with mass and NMR spectroscopy, provide a detailed impurity profile allowing an efficient conversion of chromatographic peak areas into relative mass fractions, generally avoiding the need to calibrate each impurity present. For samples analysed by GC-FID, a conservative measurement uncertainty budget is described, including a component to cover potential variations in the response of each unidentified impurity. An alternative approach is also detailed in which extensive purification eliminates the detector response factor issue, facilitating the certification of a super-pure calibration standard which can be used to quantify the main component in less-pure candidate materials. This latter approach is particularly useful when applying HPLC analysis with UV detection. Key to the success of this approach is the application of both qualitative and quantitative ¹H NMR spectroscopy.
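The conversion of chromatographic peak areas into relative mass fractions amounts, in its simplest form, to area normalisation with optional relative response factors. A minimal sketch (function name and numbers are ours; a real uncertainty budget would add the components the abstract describes):

```python
def area_percent_purity(main_area, impurity_areas, response_factors=None):
    """Area-normalisation purity estimate from chromatographic peaks.

    Assumes equal detector response for every impurity unless relative
    response factors (impurity response / main-component response) are given.
    Returns the mass fraction of the main component as a value in [0, 1].
    """
    if response_factors is None:
        response_factors = [1.0] * len(impurity_areas)
    impurity_mass = sum(a * f for a, f in zip(impurity_areas, response_factors))
    return main_area / (main_area + impurity_mass)
```

The "super-pure standard" route mentioned above sidesteps exactly the response-factor assumption this sketch makes explicit.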
Probabilistic approach to external cloud dose calculations using onsite meteorological data
International Nuclear Information System (INIS)
Strenge, D.L.; Watson, E.C.; Bander, T.J.; Kennedy, W.E.
1976-01-01
A method is described for calculation of external total body and skin doses from accidental atmospheric releases of radionuclides based on hourly onsite meteorological data. The method involves calculation of dose values from a finite size cloud for each hourly observation for a given radionuclide inventory. These values are then used to determine the probability of occurrence of dose levels for specified release times ranging from one hour to 30 days
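Once a dose value has been computed for every hourly observation, the probability of occurrence of a given dose level is the empirical exceedance fraction over the record. A minimal sketch (the actual method additionally handles finite-cloud geometry and release durations up to 30 days):

```python
def dose_exceedance_probability(hourly_doses, level):
    """Fraction of hourly dose values at or above a given dose level.

    hourly_doses: one computed dose per hourly meteorological observation.
    """
    exceedances = sum(1 for d in hourly_doses if d >= level)
    return exceedances / len(hourly_doses)
```

Repeating this for a range of dose levels yields the probability-of-occurrence curve for a given release duration.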
A new approach to make collapsed cross section for burnup calculation of subcritical system
International Nuclear Information System (INIS)
Matsunaka, Masayuki; Kondo, Keitaro; Miyamaru, Hiroyuki; Murata, Isao
2008-01-01
A general-purpose transport and burnup code system for the precise analysis of subcritical reactors such as a fusion-fission (FF) hybrid reactor was developed and used for analyzing their performance. The FF hybrid reactor is a subcritical system based on the concept of a fusion reactor with a blanket region containing nuclear fuel, and it has been under discussion by the authors' group for years. The present burnup calculation system mainly consists of the general-purpose Monte Carlo code MCNP-4B and the point burnup code ORIGEN2. The JENDL-3.3 pointwise cross section library and JENDL Activation Cross Section File 96 were used as the base cross section libraries for generating group constants for the burnup calculation. A new method has been proposed to generate group constants for the burnup calculation as accurately as possible, directly using the output data of the neutron transport calculation by MCNP and the evaluated nuclear data libraries. This method is a rigorous and general procedure for generating one-group cross sections in Monte Carlo calculations, although it requires very long computation times. Some speed-up techniques were discussed for the present group-constant generation process so as to decrease the calculation time. The adoption of postprocessing to generate group constants improved the calculation accuracy because of the increased number of cross sections updated in each burnup cycle. The present calculation system is capable of performing neutronics analyses of subcritical reactors more precisely than our previous one. However, at the moment, it still takes a long computation time to generate group constants. Further speed-up techniques are now under investigation so as to apply the present system to neutronics design analysis for various subcritical systems. (author)
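The group-constant generation described above rests on flux-weighted collapsing of multigroup cross sections. In its one-group form this is a short calculation (a sketch with invented two-group numbers in the test, not the actual MCNP tally machinery):

```python
def collapse_one_group(sigma_g, flux_g):
    """Flux-weighted one-group cross section from multigroup data.

    sigma_g: multigroup cross sections (barns); flux_g: the corresponding
    group fluxes from the transport calculation. Returns
    sum(sigma * flux) / sum(flux).
    """
    assert len(sigma_g) == len(flux_g)
    weighted = sum(s * f for s, f in zip(sigma_g, flux_g))
    return weighted / sum(flux_g)
```

Repeating this per nuclide, reaction, and burnup region is what makes the postprocessing step expensive, and why the abstract emphasizes speed-up techniques.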
Implementation of upper limit calculation for a poisson variable by bayesian approach
International Nuclear Information System (INIS)
Zhu Yongsheng
2008-01-01
The calculation of Bayesian confidence upper limit for a Poisson variable including both signal and background with and without systematic uncertainties has been formulated. A Fortran 77 routine, BPULE, has been developed to implement the calculation. The routine can account for systematic uncertainties in the background expectation and signal efficiency. The systematic uncertainties may be separately parameterized by a Gaussian, Log-Gaussian or flat probability density function (pdf). Some technical details of BPULE have been discussed. (authors)
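The calculation BPULE implements can be sketched, for the simplest case of a known background and no systematic uncertainties, by numerically integrating the flat-prior posterior for the Poisson signal (a Python sketch of the same mathematics, not the Fortran routine itself):

```python
import math

def bayes_upper_limit(n_obs, b, cl=0.95, s_max=50.0, steps=20000):
    """Bayesian upper limit on a Poisson signal s with known background b.

    Flat prior on s >= 0; the limit is the point below which a fraction
    `cl` of the posterior mass lies, found by trapezoidal integration.
    """
    ds = s_max / steps

    def posterior(s):
        mu = s + b
        # unnormalised Poisson likelihood at the observed count
        # (the 1/n! factor cancels in the posterior ratio)
        return math.exp(-mu) * mu ** n_obs

    w = [posterior(i * ds) for i in range(steps + 1)]
    total = sum(0.5 * (w[i - 1] + w[i]) for i in range(1, steps + 1))
    acc = 0.0
    for i in range(1, steps + 1):
        acc += 0.5 * (w[i - 1] + w[i])
        if acc / total >= cl:
            return i * ds
    return s_max
```

For n = 0 with no background this reproduces the textbook 95% limit of about 3.0 events; extending the sketch toward BPULE's scope would mean marginalizing the posterior over Gaussian, log-Gaussian, or flat pdfs for the background and efficiency.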
An integrated approach to calculate life cycle costs of arms and military equipment
Directory of Open Access Journals (Sweden)
Vlada S. Sokolović
2013-12-01
Costs are one of the most dominant parameters in decision-making. Modern trends in this area comprehensively perceive all costs during the life cycle of assets. In general, the analysis of costs in the life cycle of AME distinguishes two sets of costs: visible and invisible (hidden) costs. The visible part of the costs is mainly present in decision-making and usually includes the cost of equipping units or the purchase of assets. The invisible part of the costs is far more significant. Although it is larger than the visible part and covers more groups of costs, decision-makers often do not take it into account. The hidden costs include distribution costs, operating costs, maintenance costs, training costs, inventory costs, information systems costs, the costs of disposal and write-offs, etc. The decision-making problem about investment in AME purchase and equipping is obviously of a multicriteria nature, whether an optimum combination of costs for one technical system (AME) is in question, or a choice of a system of AME among many offered. To illustrate an integrated approach to the analysis of the cost of assets over their life cycle, a model from the US Naval Postgraduate School was adjusted and applied to an example of a real asset. The model is applied to the case of two squadrons of identical aircraft based at different airports. With regard to the availability, confidentiality, and variability of costs and the reliability of the elements of AME, the calculations in the model are implemented on the basis of estimated or approximate parameters. Essentially, the goal is to demonstrate the interdependence, mutual relations, and influences of parameters and their ultimate impact on the overall cost of military assets. Applying the model to a particular example shows that, in the first years of asset life, the dominant cost is that of asset procurement (cost of acquisition, cost of assets
Directory of Open Access Journals (Sweden)
Tongkun Lan
2018-01-01
AC (alternating current) system backup protection setting calculation is an important basis for ensuring the safe operation of power grids. With the increasing integration of modular multilevel converter based high voltage direct current (MMC-HVDC) into power grids, it has been a big challenge for the AC system backup protection setting calculation, as the MMC-HVDC lacks the fault self-clearance capability under pole-to-pole faults. This paper focused on the pole-to-pole faults analysis for the AC system backup protection setting calculation. The principles of pole-to-pole faults analysis were discussed first according to the standard of the AC system protection setting calculation. Then, the influence of fault resistance on the fault process was investigated. A simplified analytic approach of pole-to-pole faults in MMC-HVDC for the AC system backup protection setting calculation was proposed. In the proposed approach, the derived expressions of fundamental frequency current are applicable under arbitrary fault resistance. The accuracy of the proposed approach was demonstrated by PSCAD/EMTDC (Power Systems Computer-Aided Design/Electromagnetic Transients including DC) simulations.
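Protection settings are based on the fundamental-frequency component of the fault current. As a minimal illustration of how that component is extracted from a sampled waveform — not the paper's analytic MMC-HVDC expressions — a full-cycle DFT phasor estimate can be sketched as follows:

```python
import cmath
import math

def fundamental_phasor(samples, samples_per_cycle):
    """Full-cycle DFT estimate of the fundamental-frequency phasor
    (peak amplitude and phase) from exactly one cycle of samples."""
    n = samples_per_cycle
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    return 2.0 * acc / n

# synthetic 50 Hz fault current (10 A peak, 0.3 rad phase) plus a 3rd harmonic
spc = 64                                    # samples per cycle
sig = [10.0 * math.cos(2.0 * math.pi * k / spc + 0.3)
       + 2.0 * math.cos(2.0 * math.pi * 3.0 * k / spc)
       for k in range(spc)]
phasor = fundamental_phasor(sig, spc)
amp, ang = abs(phasor), cmath.phase(phasor)
```

Because the window spans exactly one cycle, integer harmonics are rejected exactly: `amp` recovers 10 A and `ang` recovers 0.3 rad despite the 3rd-harmonic contamination.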
International Nuclear Information System (INIS)
Kupka, M.; Farkasovsky, P.C.
1992-01-01
Point-contact spectra have been calculated for a normal metal - heavy-fermion metal system (described by means of a simplified model Hamiltonian). Two approaches are used: the first assumes that the differential conductance reflects an energy-dependent quasi-particle density of states, while the second derives the differential conductance directly from the model Hamiltonian; the results of the two approaches are compared.
Darbandi, Masoud; Abrar, Bagher
2018-01-01
The spectral-line weighted-sum-of-gray-gases (SLW) model is considered a modern global model, which can be used in predicting the thermal radiation heat transfer within combustion fields. Past SLW model users have mostly employed the reference approach to calculate the local values of the gray gases' absorption coefficient. This classical reference approach assumes that the absorption spectra of gases at different thermodynamic conditions are scalable with the absorption spectrum of gas at a reference thermodynamic state in the domain. However, this assumption cannot be reasonable in combustion fields, where the gas temperature is very different from the reference temperature. Consequently, the results of the SLW model incorporated with the classical reference approach, termed here the classical SLW method, are highly sensitive to the reference temperature magnitude in non-isothermal combustion fields. To lessen this sensitivity, the current work combines the SLW model with a modified reference approach, which is a particular one among the eight possible reference approach forms reported recently by Solovjov et al. [DOI: 10.1016/j.jqsrt.2017.01.034, 2017]. The combination is called the "modified SLW method". This work shows that the modified reference approach can provide more accurate total emissivity calculations than the classical reference approach if it is coupled with the SLW method. This would be particularly helpful for more accurate calculation of radiation transfer in highly non-isothermal combustion fields. To verify this, we use both the classical and modified SLW methods and calculate the radiation transfer in such fields. It is shown that the modified SLW method can almost eliminate the sensitivity of the results to the chosen reference temperature in treating highly non-isothermal combustion fields.
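In both the classical and modified reference approaches, the total emissivity ultimately comes from a weighted sum over gray gases, eps = sum_j a_j (1 - exp(-kappa_j * pL)). A minimal sketch with illustrative coefficients (not tabulated SLW data) shows the structure of that calculation:

```python
import math

def total_emissivity(weights, kappas, pL):
    """Weighted-sum-of-gray-gases total emissivity:
        eps = sum_j a_j * (1 - exp(-kappa_j * pL))
    The weights must sum to at most 1; the remainder is the transparent
    ('clear') gas, which contributes nothing to emission."""
    if sum(weights) > 1.0 + 1e-9:
        raise ValueError("gray-gas weights must sum to at most 1")
    return sum(a * (1.0 - math.exp(-k * pL)) for a, k in zip(weights, kappas))

# illustrative (not tabulated) gray-gas parameters
a = [0.30, 0.25, 0.15]        # weights; 0.30 is left for the clear gas
kappa = [0.05, 0.5, 5.0]      # absorption coefficients, 1/(atm*m)
eps = total_emissivity(a, kappa, pL=1.0)   # pressure path length 1 atm*m
```

The reference-approach question discussed in the abstract is about how the weights and absorption coefficients are evaluated from the local thermodynamic state; the summation itself is the same in both methods.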
Teo, Boon K.; Li, Wai-Kee
2011-01-01
This article is divided into two parts. In the first part, the atomic unit (au) system is introduced and the scales of time, space (length), and speed, as well as those of mass and energy, in the atomic world are discussed. In the second part, the utility of atomic units in quantum mechanical and spectroscopic calculations is illustrated with…
Evaluation of a mass-balance approach to determine consumptive water use in northeastern Illinois
Mills, Patrick C.; Duncker, James J.; Over, Thomas M.; Domanski, Marian; Engel, Frank
2014-01-01
A principal component of evaluating and managing water use is consumptive use. This is the portion of water withdrawn for a particular use, such as residential, which is evaporated, transpired, incorporated into products or crops, consumed by humans or livestock, or otherwise removed from the immediate water environment. The amount of consumptive use may be estimated by a water (mass)-balance approach; however, because of the difficulty of obtaining necessary data, its application typically is restricted to the facility scale. The general governing mass-balance equation is: Consumptive use = Water supplied - Return flows.
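The governing equation is simple enough to express directly. The sketch below adds only a sanity check on the sign of the result; real applications must additionally reconcile metering errors and unmetered flows, which the abstract notes is the hard part at larger scales.

```python
def consumptive_use(water_supplied, return_flows):
    """Facility-scale water mass balance:
    consumptive use = water supplied - return flows."""
    cu = water_supplied - return_flows
    if cu < 0.0:
        raise ValueError("return flows exceed supply; check metering data")
    return cu

def consumptive_use_fraction(water_supplied, return_flows):
    """Fraction of the supplied water that is consumed."""
    return consumptive_use(water_supplied, return_flows) / water_supplied

# e.g. a facility supplied 100 Mgal/d that returns 85 Mgal/d consumes 15 Mgal/d
cu = consumptive_use(100.0, 85.0)
```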
International Nuclear Information System (INIS)
Okuno, Hiroshi
2002-01-01
Critical and subcritical masses were calculated for a sphere of five curium isotopes from ²⁴³Cm to ²⁴⁷Cm in metal and in metal-water mixtures considering three reflector conditions: bare, with a water reflector, or with a stainless steel reflector. The calculations were made mainly with a combination of a continuous energy Monte Carlo neutron transport calculation code, MCNP, and the Japanese Evaluated Nuclear Data Library, JENDL-3.2. Other evaluated nuclear data files, ENDF/B-VI and JEF-2.2, were also applied to find differences in the calculated neutron multiplication factor originating from the different nuclear data files. A large dependence on the evaluated nuclear data files was found in the calculation results: more than 10% Δk/k relative differences in the neutron multiplication factor for a homogeneous mixture of ²⁴³Cm metal and water when JENDL-3.2 was replaced with ENDF/B-VI and JEF-2.2, respectively; and a 44% reduction in the critical mass of ²⁴⁶Cm metal by changing from JENDL-3.2 to ENDF/B-VI. The present study supplied basic information to the ANSI/ANS-8.15 Working Group for revision of the standard for nuclear criticality control of special actinide elements. The new or revised values of the subcritical mass limits for curium isotopes accepted by the ANSI/ANS-8.15 Working Group were finally summarized. (author)
Energy Technology Data Exchange (ETDEWEB)
Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kawasaki, Hiromitsu [CRC Solutions Corporation, Hitachinaka, Ibaraki (Japan)
2002-10-01
Critical and subcritical masses were calculated for a sphere of five curium isotopes from {sup 243}Cm to {sup 247}Cm in metal and in metal-water mixtures considering three reflector conditions: bare, with a water reflector, or with a stainless steel reflector. The calculations were made mainly with a combination of a continuous energy Monte Carlo neutron transport calculation code, MCNP, and the Japanese Evaluated Nuclear Data Library, JENDL-3.2. Other evaluated nuclear data files, ENDF/B-VI and JEF-2.2, were also applied to find differences in the calculated neutron multiplication factor originating from the different nuclear data files. A large dependence on the evaluated nuclear data files was found in the calculation results: more than 10% {delta}k/k relative differences in the neutron multiplication factor for a homogeneous mixture of {sup 243}Cm metal and water when JENDL-3.2 was replaced with ENDF/B-VI and JEF-2.2, respectively; and a 44% reduction in the critical mass of {sup 246}Cm metal by changing from JENDL-3.2 to ENDF/B-VI. The present study supplied basic information to the ANSI/ANS-8.15 Working Group for revision of the standard for nuclear criticality control of special actinide elements. The new or revised values of the subcritical mass limits for curium isotopes accepted by the ANSI/ANS-8.15 Working Group were finally summarized. (author)
International Nuclear Information System (INIS)
LaPointe, P.R.; Cladouhos, T.T.
1999-02-01
The report outlines a possible approach to estimating rock movements due to earthquakes that may diminish canister safety. The method is based upon an approach developed for studying similar problems at three generic sites in Sweden. In the first part of the report, the problem of rock movements during earthquakes is described. The second section outlines the approach used to estimate rock movements in Sweden, and discusses how the approach could be adapted to evaluating movements at Finnish repositories. This section also discusses data needs and potential problems in applying the approach in Finland. The next section presents some simple earthquake calculations for the four Finnish sites. These simulations use the discrete fracture network model geometric parameters developed by VTT (Technical Research Centre of Finland) for use in hydrological calculations. The calculations are not meant for performance assessment purposes, for reasons discussed in the report, but are designed to show (1) the importance of fracture size, intensity and orientation on induced displacement magnitudes; (2) the need for additional studies with regard to fracture size and intensity; and (3) the need to resolve issues regarding the role of post-glacial faulting, glacial rebound and tectonic processes in present-day and future earthquakes. (orig.)
Nuclear structure approach to the calculation of the imaginary alpha-nucleus optical potential
International Nuclear Information System (INIS)
Dermawan, H.; Osterfeld, F.; Madsen, V.A.
1981-01-01
A microscopic calculation of the second-order imaginary optical potential for ⁴⁰Ca(α,α) is made for incident energies of 31 and 100 MeV using RPA transition densities for intermediate excited states. The projectile is treated as an elementary particle, and the alpha-nucleon interaction is normalized by fitting 3⁻ inelastic cross sections with a folded M3Y potential. The use of an optical Green's function for the intermediate propagator is found to be important. Equivalent local potentials are obtained and used to calculate elastic scattering cross sections. Agreement with low-angle experimental data is fair at 31 MeV, but at 100 MeV the calculated cross sections indicate much too little absorption. 9 figures, 1 table
International Nuclear Information System (INIS)
Pask, J.E.; Klein, B.M.; Fong, C.Y.; Sterne, P.A.
1999-01-01
We present an approach to solid-state electronic-structure calculations based on the finite-element method. In this method, the basis functions are strictly local, piecewise polynomials. Because the basis is composed of polynomials, the method is completely general and its convergence can be controlled systematically. Because the basis functions are strictly local in real space, the method allows for variable resolution in real space; produces sparse, structured matrices, enabling the effective use of iterative solution methods; and is well suited to parallel implementation. The method thus combines the significant advantages of both real-space-grid and basis-oriented approaches and so promises to be particularly well suited for large, accurate ab initio calculations. We develop the theory of our approach in detail, discuss advantages and disadvantages, and report initial results, including electronic band structures and details of the convergence of the method. copyright 1999 The American Physical Society
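As a toy illustration of the finite-element idea described above — a strictly local, piecewise-polynomial basis leading to a sparse generalized eigenproblem — the sketch below solves the 1-D model problem -u'' = Eu on (0,1) with homogeneous boundary conditions using linear elements. It is emphatically not the authors' 3-D electronic-structure code; it only shows the K c = E M c structure with the standard stiffness and consistent-mass matrices.

```python
import numpy as np

def fem_particle_in_box(n_elems):
    """Lowest eigenvalues of -u'' = E u on (0,1), u(0) = u(1) = 0,
    discretized with piecewise-linear finite elements on a uniform mesh.
    Solves the generalized eigenproblem K c = E M c."""
    h = 1.0 / n_elems
    n = n_elems - 1                      # interior nodes only
    K = np.zeros((n, n))                 # stiffness matrix
    M = np.zeros((n, n))                 # consistent mass matrix
    for i in range(n):
        K[i, i] = 2.0 / h
        M[i, i] = 4.0 * h / 6.0
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -1.0 / h
            M[i, i + 1] = M[i + 1, i] = h / 6.0
    # reduce K c = E M c to a standard symmetric problem via Cholesky of M
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T
    return np.sort(np.linalg.eigvalsh(A))

E = fem_particle_in_box(200)
# E[0] approaches pi^2 ~ 9.8696 as the mesh is refined (O(h^2) convergence)
```

The systematic convergence claimed in the abstract is visible here: halving h quarters the eigenvalue error. In the real method the matrices are sparse and banded, so iterative eigensolvers replace the dense factorization used in this sketch.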
A numerical approach to calculate the induced voltage in the case of conducted perturbations
Energy Technology Data Exchange (ETDEWEB)
Andretzko, J.P.; Hedjiedj, A.; Babouri, A.; Guendouz, L.; Nadi, M. [Nancy-1 Univ. Henri Poincare, Lab. d' Instrumentation Electronique de Nancy, Faculte des Sciences, 54 - Vandoeuvre les Nancy (France)
2006-07-01
This paper presents a numerical simulation method that makes it possible to calculate the voltage induced at the terminals of a cardiac pacemaker subjected to conducted disturbances. The physical model used for the simulation is an experimental test bed which makes it possible to study the behaviour of a pacemaker, in vitro, subjected to electromagnetic disturbances in the low-frequency range (50 Hz - 500 kHz). The test bed in which the pacemaker is implanted is described in this article. The calculation uses the admittance method adapted to the case of conducted disturbances. Results obtained by numerical simulation are close to experimental values. (authors)
International Nuclear Information System (INIS)
Li Qingzhong; Sun Chengwei; Zhao Feng; Gao Wen; Wen Shanggang; Liu Wenhan
1999-11-01
The generalized geometrical optics model for the detonation shock dynamics (DSD) has been incorporated into the two dimensional hydro-code WSU to form a combination code ADW for numerical simulation of explosive acceleration of metals. An analytical treatment of the coupling conditions at the nodes just behind the detonation front is proposed. The experiments on two kinds of explosive-flyer assemblies with different length/diameter ratio were carried out to verify the ADW calculations, where the tested explosive was HMX or TATB based. It is found that the combination of DSD and hydro-code can improve the calculation precision, and has advantages in larger meshes and less CPU time
A numerical approach to calculate the induced voltage in the case of conducted perturbations
International Nuclear Information System (INIS)
Andretzko, J.P.; Hedjiedj, A.; Babouri, A.; Guendouz, L.; Nadi, M.
2006-01-01
This paper presents a numerical simulation method that makes it possible to calculate the voltage induced at the terminals of a cardiac pacemaker subjected to conducted disturbances. The physical model used for the simulation is an experimental test bed which makes it possible to study the behaviour of a pacemaker, in vitro, subjected to electromagnetic disturbances in the low-frequency range (50 Hz - 500 kHz). The test bed in which the pacemaker is implanted is described in this article. The calculation uses the admittance method adapted to the case of conducted disturbances. Results obtained by numerical simulation are close to experimental values. (authors)
A Clifford algebra approach to chiral symmetry breaking and fermion mass hierarchies
Lu, Wei
2017-09-01
We propose a Clifford algebra approach to chiral symmetry breaking and fermion mass hierarchies in the context of composite Higgs bosons. Standard model fermions are represented by algebraic spinors of six-dimensional binary Clifford algebra, while ternary Clifford algebra-related flavor projection operators control allowable flavor-mixing interactions. There are three composite electroweak Higgs bosons resulting from top quark, tau neutrino, and tau lepton condensations. Each of the three condensations gives rise to masses of four different fermions. The fermion mass hierarchies within these three groups are determined by four-fermion condensations, which break two global chiral symmetries. The four-fermion condensations induce axion-like pseudo-Nambu-Goldstone bosons and can be dark matter candidates. In addition to the 125 GeV Higgs boson observed at the Large Hadron Collider, we anticipate detection of the tau neutrino composite Higgs boson via the charm quark decay channel.
A data base approach for prediction of deforestation-induced mass wasting events
Logan, T. L.
1981-01-01
A major topic of concern in timber management is determining the impact of clear-cutting on slope stability. Deforestation treatments on steep mountain slopes have often resulted in a high frequency of major mass wasting events. The Geographic Information System (GIS) is a potentially useful tool for predicting the location of mass wasting sites. With a raster-based GIS, digitally encoded maps of slide hazard parameters can be overlaid and modeled to produce new maps depicting high-probability slide areas. The objective of the present investigation is to examine the raster-based information system as a tool for predicting the location of the clear-cut mountain slopes which are most likely to experience shallow soil debris avalanches. A literature overview is conducted, taking into account vegetation, roads, precipitation, soil type, slope angle and aspect, and models predicting mass soil movements. Attention is given to a data base approach and aspects of slide prediction.
DEFF Research Database (Denmark)
Møller, Charlotte; Sprenger, Richard R.; Stürup, Stefan
2011-01-01
Using insulin as a model protein for binding of oxaliplatin to proteins, various mass spectrometric approaches and techniques were compared. Several different platinum adducts were observed, e.g. addition of one or two diaminocyclohexane platinum(II) (Pt(dach)) molecules. By top-down analysis...... and fragmentation of the intact insulin-oxaliplatin adduct using nano-electrospray ionisation quadrupole time-of-flight mass spectrometry (nESI-Q-ToF-MS), the major binding site was assigned to histidine5 on the insulin B chain. In order to simplify the interpretation of the mass spectrum, the disulphide bridges...... were reduced. This led to the additional identification of cysteine6 on the A chain as a binding site along with histidine5 on the B chain. Digestion of insulin-oxaliplatin with endoproteinase Glu-C (GluC) followed by reduction led to the formation of five peptides with Pt(dach) attached...
A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays
Guo, Y. J.; Lee, K. J.; Caballero, R. N.
2018-04-01
The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals, and construct a Bayesian data-analysis framework to detect the unknown mass in the Solar system and to measure the orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to the unmodelled objects following the Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suit used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and to measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets, and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments in detecting the unknown objects. We expect that the future pulsar-timing data can limit the unknown massive objects in the Solar system to be lighter than 10⁻¹¹-10⁻¹² M⊙, or measure the mass of the Jovian system to a fractional precision of 10⁻⁸-10⁻⁹.
Directory of Open Access Journals (Sweden)
Marco Gonzalez
The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages but the use of numerical methods has become very popular. Several schemes for numerical SIF calculations have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.
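Displacement-based SIF schemes exploit the known near-tip displacement field. A common variant — displacement correlation, shown here as a sketch rather than the paper's exact scheme — recovers the mode-I factor from the crack-opening displacement Δu_y measured at a distance r behind the tip, using the plane-strain relation Δu_y = (8 K_I / E')·sqrt(r / 2π) with E' = E/(1-ν²):

```python
import math

def sif_from_cod(delta_uy, r, E, nu, plane_strain=True):
    """Mode-I stress intensity factor from the crack-opening displacement
    delta_uy at distance r behind the crack tip:
        K_I = (E'/8) * delta_uy * sqrt(2*pi / r)
    with E' = E/(1 - nu^2) for plane strain (E' = E for plane stress)."""
    E_eff = E / (1.0 - nu ** 2) if plane_strain else E
    return E_eff * delta_uy / 8.0 * math.sqrt(2.0 * math.pi / r)

# round-trip check: generate the COD from a known K_I, then recover it
E, nu = 210e9, 0.3                  # steel, Pa
K_true, r = 30e6, 1e-3              # Pa*sqrt(m), m behind the tip
E_eff = E / (1.0 - nu ** 2)
cod = 8.0 * K_true / E_eff * math.sqrt(r / (2.0 * math.pi))
K_est = sif_from_cod(cod, r, E, nu)
```

In a BEM setting the displacement jump across the crack faces is a primary unknown, which is one reason displacement-based extraction pairs naturally with that method; in practice K is evaluated at several r values and extrapolated to r → 0.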
Calculational and experimental approaches to the equation of state of irradiated fuel
International Nuclear Information System (INIS)
Bober, M.; Breitung, W.; Karow, H.U.; Schumacher, G.
1977-07-01
The oxygen potential is an important parameter for the estimation of the vapor pressure of mixed oxide fuel and fission products. Dissolved fission products can have great influence on this potential in hypostoichiometric fuel. Therefore an attempt was made to calculate oxygen potentials of uranium-plutonium mixed oxides which contain fission products using models based on the equilibrium of oxygen defects. Vapor pressures have been calculated applying these data. The results of the calculation with various models differ especially at high temperatures above 4,000 K. Experimental work has been done to determine the vapor pressure of oxide fuel material at temperatures between 3,000 K and 5,000 K using laser beam heating. A measuring technique and a detailed evaluation model of laser evaporation measurements have been developed. The evaluation model describes the complex phenomena occurring during surface evaporation of liquid oxide fuel. Vapor pressure measurements with UO₂ have been carried out in the temperature region up to 4,500 K. With thermodynamic calculations the required equilibrium vapor pressures (EOS) can be derived from the vapor pressures measured. The caloric equation-of-state of the liquid-vapor equilibrium of the fuel up to temperatures of 5,000 K has been considered theoretically. (orig.)
A compressive sensing approach to the calculation of the inverse data space
Khan, Babar Hasan; Saragiotis, Christos; Alkhalifah, Tariq Ali
2012-01-01
Seismic processing in the Inverse Data Space (IDS) has its advantages like the task of removing the multiples simply becomes muting the zero offset and zero time data in the inverse domain. Calculation of the Inverse Data Space by sparse inversion
Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.
2012-12-01
Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
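At its core, the mass discharge across a discretized control plane is J = Σᵢ Cᵢ qᵢ Aᵢ, summing concentration times groundwater flux over the cell areas. The sketch below propagates a simple independent lognormal measurement-error model through that sum by Monte Carlo; the method in the paper additionally models the spatial correlation of both fields via Bayesian geostatistics, which is not reproduced here, so treat this purely as a structural illustration.

```python
import math
import random
import statistics

def mass_discharge_mc(conc, flux, cell_area, rel_sd=0.5, n_draws=5000, seed=1):
    """Monte Carlo estimate of mass discharge J = sum_i C_i * q_i * A_i
    across control-plane cells, with independent lognormal uncertainty
    (median at the measured value) on each concentration and Darcy flux.
    Returns the sample mean and 95th percentile of J. Spatial correlation
    between cells is deliberately ignored in this sketch."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        j = 0.0
        for c, q in zip(conc, flux):
            c_s = c * math.exp(rng.gauss(0.0, rel_sd))   # perturbed concentration
            q_s = q * math.exp(rng.gauss(0.0, rel_sd))   # perturbed flux
            j += c_s * q_s * cell_area
        draws.append(j)
    draws.sort()
    return statistics.mean(draws), draws[int(0.95 * n_draws)]

# two cells of 2 m2 each; with rel_sd=0 this collapses to the plain sum 0.6
j_det, _ = mass_discharge_mc([1.0, 2.0], [0.1, 0.1], 2.0, rel_sd=0.0, n_draws=100)
j_mean, j_p95 = mass_discharge_mc([1.0, 2.0], [0.1, 0.1], 2.0)
```

Note how the lognormal error model makes the uncertainty band asymmetric: the 95th percentile sits well above the mean, which is exactly why point estimates of mass discharge can mislead risk assessments.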
Energy Technology Data Exchange (ETDEWEB)
Campanini, Renato [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Dongiovanni, Danilo [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Iampieri, Emiro [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Lanconelli, Nico [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Masotti, Matteo [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Palermo, Giuseppe [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Riccardi, Alessandro [Department of Physics, University of Bologna, and INFN, Bologna (Italy); Roffilli, Matteo [Department of Computer Science, University of Bologna, Bologna (Italy)
2004-03-21
In this work, we present a novel approach to mass detection in digital mammograms. The great variability of the appearance of masses is the main obstacle to building a mass detection method. It is indeed demanding to characterize all the varieties of masses with a reduced set of features. Hence, in our approach we have chosen not to extract any feature, for the detection of the region of interest; in contrast, we exploit all the information available on the image. A multiresolution overcomplete wavelet representation is performed, in order to codify the image with redundancy of information. The vectors of the very-large space obtained are then provided to a first support vector machine (SVM) classifier. The detection task is considered here as a two-class pattern recognition problem: crops are classified as suspect or not, by using this SVM classifier. False candidates are eliminated with a second cascaded SVM. To further reduce the number of false positives, an ensemble of experts is applied: the final suspect regions are achieved by using a voting strategy. The sensitivity of the presented system is nearly 80% with a false-positive rate of 1.1 marks per image, estimated on images coming from the USF DDSM database.
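The cascade logic itself is easy to express: a candidate region is flagged only if every stage accepts it. The sketch below uses a tiny linear SVM trained with the Pegasos sub-gradient method on made-up two-feature "crops"; the paper's actual stages operate on overcomplete wavelet coefficients with kernel SVMs plus a voting ensemble, so every name and number here is illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny linear SVM trained with the Pegasos sub-gradient method.
    Labels must be in {-1, +1}; the bias is handled by feature augmentation."""
    Xa = np.hstack([X, np.ones((len(X), 1))])   # append constant-1 bias feature
    rng = np.random.default_rng(seed)
    w = np.zeros(Xa.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xa)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (Xa[i] @ w) < 1.0:        # margin violated: hinge update
                w = (1.0 - eta * lam) * w + eta * y[i] * Xa[i]
            else:                               # only the regularizer shrinks w
                w = (1.0 - eta * lam) * w
    return w

def svm_score(w, x):
    return np.append(x, 1.0) @ w

def cascade_predict(stages, x):
    """A candidate region is 'suspect' only if every cascaded stage accepts it."""
    return all(svm_score(w, x) > 0.0 for w in stages)

# two tiny synthetic 'crop' classes: feature = (mean intensity, contrast)
X = np.array([[2.0, 2.0], [2.5, 1.8], [-2.0, -2.2], [-1.8, -2.5]])
y = np.array([1, 1, -1, -1])
stage_w = train_linear_svm(X, y)
stages = [stage_w, stage_w]   # in the paper the second SVM prunes false positives
```

In the real system the second stage is trained on the first stage's false positives, which is what lets the cascade cut the false-positive rate without retraining the detector.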
Grégory, Dubourg; Chaudet, Hervé; Lagier, Jean-Christophe; Raoult, Didier
2018-03-01
Describing the human gut microbiota is one of the most exciting challenges of the 21st century. Currently, high-throughput sequencing methods are considered the gold standard for this purpose; however, they suffer from several drawbacks, including their inability to detect minority populations. The advent of mass-spectrometric (MS) approaches to identify cultured bacteria in clinical microbiology enabled the creation of the culturomics approach, which aims to establish a comprehensive repertoire of cultured prokaryotes from human specimens using extensive culture conditions. Areas covered: This review first underlines how mass spectrometric approaches have revolutionized clinical microbiology. It then highlights the contribution of MS-based methods to culturomics studies, paying particular attention to the extension of the human gut microbiota repertoire through the discovery of new bacterial species. Expert commentary: MS-based approaches have enabled cultivation methods to be resuscitated to study the human gut microbiota and thus to fill in the blanks left by high-throughput sequencing methods in terms of culturing minority populations. Continued efforts to recover new taxa using culture methods, combined with their rapid implementation in genomic databases, would allow for an exhaustive analysis of the gut microbiota through the use of a comprehensive approach.
International Nuclear Information System (INIS)
Song, Linze; Shi, Qiang
2015-01-01
We present a new non-perturbative method to calculate the charge carrier mobility using the imaginary time path integral approach, which is based on the Kubo formula for the conductivity, and a saddle point approximation to perform the analytic continuation. The new method is first tested using a benchmark calculation from the numerically exact hierarchical equations of motion method. Imaginary time path integral Monte Carlo simulations are then performed to explore the temperature dependence of charge carrier delocalization and mobility in organic molecular crystals (OMCs) within the Holstein and Holstein-Peierls models. The effects of nonlocal electron-phonon interaction on mobility in different charge transport regimes are also investigated
A survey of existing and proposed classical and quantum approaches to the photon mass
Spavieri, G.; Quintero, J.; Gillies, G. T.; Rodríguez, M.
2011-02-01
Over the past twenty years, there have been several careful experimental, observational and phenomenological investigations aimed at searching for and establishing ever tighter bounds on the possible mass of the photon. There are many fascinating and paradoxical physical implications that would arise from the presence of even a very small value for it, and thus such searches have always been well motivated in terms of the new physics that would result. We provide a brief overview of the theoretical background and classical motivations for this work and the early tests of the exactness of Coulomb's law that underlie it. We then go on to address the modern situation, in which quantum physics approaches come to attention. Among them we focus especially on the implications that the Aharonov-Bohm and Aharonov-Casher class of effects have on searches for a photon mass. These arise in several different ways and can lead to experiments that might involve the interaction of magnetic dipoles, electric dipoles, or charged particles with suitable potentials. Still other quantum-based approaches employ measurements of the g-factor of the electron. Plausible target sensitivities for limits on the photon mass as sought by the various quantum approaches are in the range of 10⁻⁵³ to 10⁻⁵⁴ g. Possible experimental arrangements for the associated experiments are discussed. We close with an assessment of the state of the art and a prognosis for future work.
An enhanced nonlinear damping approach accounting for system constraints in active mass dampers
Venanzi, Ilaria; Ierimonti, Laura; Ubertini, Filippo
2015-11-01
Active mass dampers are a viable solution for mitigating wind-induced vibrations in high-rise buildings and improving occupants' comfort. Such devices are particularly challenged by force saturation of the actuators and by the maximum extension of their stroke, which may occur under severe loading conditions (e.g. wind gusts and earthquakes). Exceeding the actuators' physical limits can impair the control performance of the system or even lead to device damage, with consequent need for repair or substitution of part of the control system. Controllers for active mass dampers should therefore account for these technological limits. Prior work of the authors was devoted to stroke issues and led to the definition of a nonlinear damping approach that is very easy to implement in practice. It consisted of a modified skyhook algorithm complemented with a nonlinear braking force to reverse the direction of the mass before it reaches the stroke limit. This paper presents an enhanced version of this approach that also accounts for force saturation of the actuator while keeping the simplicity of implementation. This is achieved by modulating the control force with a nonlinear smooth function depending on the ratio between the actuator's force and its saturation limit. Results of a numerical investigation show that the proposed approach provides results similar to the method of the State Dependent Riccati Equation, a well-established technique for designing optimal controllers for constrained systems, yet one that is very difficult to apply in practice.
Гуторов, Владимир Александрович
2013-01-01
The author analyses the evolution of western political science's approaches to mass media and their role in the political process in liberal democracies, focusing on the theories of "Minimal Effects," "Mediocracy," "Effects Research," "Text Analysis," and the "Uses and Gratifications Approach." The author discusses A. Giddens's structuration theory as the most elaborate approach to interpreting the role of mass media in western political and social science. Key words: mass and political communic...
International Nuclear Information System (INIS)
Papp, Z.; Plessas, W.
1996-01-01
We demonstrate the feasibility and efficiency of the Coulomb-Sturmian separable expansion method for generating accurate solutions of the Faddeev equations. Results obtained with this method are reported for several benchmark cases of bosonic and fermionic three-body systems. Correct bound-state results in agreement with the ones established in the literature are achieved for short-range interactions. We outline the formalism for the treatment of three-body Coulomb systems and present a bound-state calculation for a three-boson system interacting via Coulomb plus short-range forces. The corresponding result is in good agreement with the answer from a recent stochastic-variational-method calculation. copyright 1996 The American Physical Society
Uranium and radium in Finnsjoen - an experimental approach for calculation of transfer factors
International Nuclear Information System (INIS)
Evans, S.; Bergman, R.
1981-01-01
The radiological safety studies for underground disposal of HLW show that future individual and collective doses may, to an important extent, originate from groundwater-borne radium and uranium reaching the biosphere. Indications that the dispersion rates presently in use lead to overestimates of the calculated doses justified an investigation into more realistic turnover rates for radium and uranium. Within one of the sites selected for testing, the area around lake Finnsjoen, a small number of environmental samples were collected and analyzed for radium and uranium, and new transfer coefficients between soil and lake water were derived. The dose rates obtained with the new transfer factors show close agreement for radium and a slight increase for uranium compared with earlier calculations. (Auth.)
Calculation of the level density parameter using semi-classical approach
International Nuclear Information System (INIS)
Canbula, B.; Babacan, H.
2011-01-01
The level density parameters (level density parameter a and energy shift δ) of the back-shifted Fermi gas model have been determined for 1136 nuclei for which a complete level scheme is available. The level density parameter is calculated using the semi-classical single-particle level density, which can be obtained analytically for a spherical harmonic oscillator potential. This method also enables us to analyze the effect of the Coulomb potential on the level density parameter. The energy dependence of this parameter has also been investigated. The other parameter, δ, is determined by fitting the experimental level scheme and the average resonance spacings for 289 nuclei; only the level scheme is used in the optimization procedure for the remaining 847 nuclei. Level densities for some nuclei have been calculated using these parameter values, and the results have been compared with the experimental level schemes and resonance spacing data.
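The back-shifted Fermi gas level density the abstract refers to can be sketched numerically. The standard textbook expression ρ(E) = exp(2√(aU)) / (12√2 σ a^(1/4) U^(5/4)), with U = E − δ, is assumed here; the parameter values below are illustrative, not fitted to any nucleus.

```python
import math

def bfm_level_density(E, a, delta, sigma=1.0):
    """Back-shifted Fermi gas level density (standard textbook form).

    E     : excitation energy (MeV)
    a     : level density parameter (1/MeV)
    delta : energy shift (MeV)
    sigma : spin cutoff factor (set to 1 here for simplicity)
    """
    U = E - delta
    if U <= 0.0:
        return 0.0  # below the shifted ground state
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

# Illustrative values only:
rho = bfm_level_density(E=8.0, a=10.0, delta=0.5)
```

The density grows roughly exponentially with excitation energy, which is why a and δ are usually constrained jointly by low-lying levels and resonance spacings, as in the abstract.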
Calculation of far-field scattering from nonspherical particles using a geometrical optics approach
Hovenac, Edward A.
1991-01-01
A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.
An approach to the calculation of many-loop massless Feynman integrals
International Nuclear Information System (INIS)
Gorishnii, S.G.; Isaev, A.P.
1985-01-01
A generalization of an identity of dimensional regularization is proposed. The generalization is used to divide the complete set of dimensionally (and analytically) regularized Feynman integrals with one external momentum into classes of equal integrals, and also to calculate some of them. A nontrivial symmetry of the propagator integrals is revealed, on the basis of which a complete system of functional equations for determining two-loop integrals is derived. Possible generalizations of these equations are discussed.
Calculation of Viscous Effects on Ship Wave Resistance Using Axisymmetric Boundary Layer Approaches
1985-05-13
"…Layers in Pressure Gradients," NSRDC Report 3308, April 1970. 38. Garcia, J.M. and Zazurca, J.A.A., "Calculo de la Resistencia Viscosa de un Buque a…" (Report documentation page: UNCLASSIFIED; responsible individual: Henry T. Wang.) …theory. Since then, calculation of the resistance due to the waves generated by a surface ship advancing at constant forward speed has been an area of
Energy Technology Data Exchange (ETDEWEB)
Manfrini, Rozangela Magalhaes; Teixeira, Flavia Rodrigues; Pilo-Veloso, Dorila; Alcantara, Antonio Flavio de Carvalho, E-mail: aalcantara@zeus.qui.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Inst. de Ciencias Exatas. Dept. de Quimica; Nelson, David Lee [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Fac. de Farmacia. Dept. de Quimica; Siqueira, Ezequias Pessoa de [Centro de Pesquisas Rene Rachou (FIOCRUZ), Belo Horizonte, MG (Brazil)
2012-07-01
The stability of N-propylbutanimine (1) was investigated under different experimental conditions. The acid-catalyzed self-condensation that produced the E-enimine (4) and the Z-enimine (5) was studied by experimental analyses and theoretical calculations. Since the calculations indicated that 5 had a lower energy than 4, yet 4 was the principal product, the self-condensation of 1 must be kinetically controlled. (author)
The length of the world's glaciers - a new approach for the global calculation of center lines
DEFF Research Database (Denmark)
Machguth, Horst; Huss, M.
2014-01-01
…length using an automated method that relies on glacier surface slope, distance to the glacier margins and a set of trade-off functions. The method is developed for East Greenland, evaluated for East Greenland as well as for Alaska, and eventually applied to all ~200,000 glaciers around … appear to be related to characteristics of topography and glacier mass balance. The present study adds glacier length as a key parameter to global glacier inventories. Global and regional scaling laws might prove beneficial in conceptual glacier models.
International Nuclear Information System (INIS)
Ladeira, L.C.D.; Rezende, H.C.
1993-01-01
A simple generalized model, developed by K.H. Sun and R.B. Duffey, is applied in this work to calculate the ratio of mass effluents during bottom reflooding of a single tube, in experiments carried out at the CDTN/CNEN. The results of the benchmark experiments indicate that the mass effluent ratio can be predicted to within 15% using the Sun-Duffey model. The reasonable agreement obtained between experimental data and model predictions suggests that it could be used for the analysis of single-tube reflood tests under similar conditions. (author)
Hummel, Sebastian; Bogner, Martin; Haub, Michael; Saegebarth, Joachim; Sandmaier, Hermann
2017-11-01
This paper presents a new, simple analytical method to estimate the properties of falling droplets without solving complex differential equations. The derivation starts from the balance of forces and uses Newton's second law and the equations of motion to calculate the volume of growing and detaching droplets and the time between two successive droplets falling out of a thin cylindrical capillary of borosilicate glass. In this specific case the reservoir is located above the capillary, and the hydrostatic pressure of the fluid level leads to drop formation times of about one second. In the second part of the paper, experimental results are presented to validate the introduced calculation method; the new approach is shown to describe the measurement results within a deviation of ±6.2%. The third part sums up the advantages of the new approach and gives an outlook on how research on this topic will be continued.
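The force balance at detachment has a classical reference point in Tate's law (drop weight balances the surface tension force around the capillary rim), which can be sketched in a few lines. This is a textbook approximation used for orientation only, not the paper's derivation, and the radius and surface tension values are illustrative.

```python
import math

def tate_drop_volume(r, gamma, rho=1000.0, g=9.81):
    """Detached drop volume (m^3) from Tate's law: rho*g*V = 2*pi*r*gamma.

    r     : capillary tip radius (m)
    gamma : surface tension (N/m)
    rho   : liquid density (kg/m^3)
    g     : gravitational acceleration (m/s^2)
    """
    return 2.0 * math.pi * r * gamma / (rho * g)

# Water (gamma ~ 0.072 N/m) from a 0.5 mm radius capillary tip:
V = tate_drop_volume(r=0.5e-3, gamma=0.072)
```

Real drops detach with only a fraction of this volume (the Harkins-Brown correction), which is one reason a dynamic force-balance treatment like the paper's is needed.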
Development of a Seismic Setpoint Calculation Methodology Using a Safety System Approach
International Nuclear Information System (INIS)
Lee, Chang Jae; Baik, Kwang Il; Lee, Sang Jeong
2013-01-01
The Automatic Seismic Trip System (ASTS) automatically actuates a reactor trip when it detects seismic activity whose magnitude is comparable to the Safe Shutdown Earthquake (SSE), the maximum hypothetical earthquake at the nuclear power plant site. To ensure that the reactor is tripped before the magnitude of an earthquake exceeds the SSE, it is crucial to determine the seismic setpoint reasonably. The trip setpoint and allowable value of the ASTS for Advanced Power Reactor (APR) 1400 Nuclear Power Plants (NPPs) were determined by the methodology presented in this paper. The ASTS, which trips the reactor when a large earthquake occurs, is categorized as a non-safety system because it is not required by design basis event criteria; it therefore has neither a specific analytical limit nor a dedicated setpoint calculation methodology. We consequently developed an ASTS setpoint calculation methodology by conservatively adapting that of the PPS. By applying the developed methodology to the ASTS for APR1400, a more conservative trip setpoint and allowable value were determined. In addition, the ZPA from the Operating Basis Earthquake (OBE) FRS of the floor where the sensor module is located is 0.1 g, so the margin of 0.17 g between the OBE of 0.1 g and the ASTS trip setpoint of 0.27 g is sufficient to prevent a reactor trip before the magnitude of the earthquake exceeds the OBE. As a result, the developed ASTS setpoint calculation methodology is judged reasonable with respect to both the safety and the performance of the NPPs, and will be used to determine the ASTS trip setpoint and allowable value for newly constructed plants.
Crittenden, Barry D.
1991-01-01
A simple liquid-liquid equilibrium (LLE) system involving a constant partition coefficient based on solute ratios is used to develop an algebraic understanding of multistage contacting in a first-year separation processes course. This algebraic approach to the LLE system is shown to be suitable for the introduction of graphical techniques…
Approach to Improve Speed of Sound Calculation within PC-SAFT Framework
DEFF Research Database (Denmark)
Liang, Xiaodong; Maribo-Mogensen, Bjørn; Thomsen, Kaj
2012-01-01
An extensive comparison of SRK, CPA and PC-SAFT for speed of sound in normal alkanes has been performed. The results reveal that PC-SAFT captures the curvature of speed of sound better than cubic EoS but the accuracy is not satisfactory. Two approaches have been proposed to improve PC-SAFT's accuracy for speed of sound: (i) putting speed of sound data into parameter estimation; (ii) putting speed of sound data into both universal constants regression and parameter estimation. The results have shown that the second approach can significantly improve the speed of sound (3.2%) prediction while keeping acceptable accuracy for the primary properties, i.e. vapor pressure (2.1%) and liquid density (1.5%). The two approaches have also been applied to methanol, and both give very good results.
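The quantity being compared above comes from the thermodynamic definition c² = (∂P/∂ρ) at constant entropy, which an EoS such as SRK, CPA or PC-SAFT supplies numerically. As a minimal sketch of the definition only (not of PC-SAFT), the ideal-gas limit reduces to a closed form:

```python
import math

def ideal_gas_speed_of_sound(T, M, gamma=1.4, R=8.314462618):
    """Ideal-gas limit of c^2 = (dP/drho)_S, i.e. c = sqrt(gamma*R*T/M).

    T     : temperature (K)
    M     : molar mass (kg/mol)
    gamma : heat capacity ratio cp/cv
    R     : universal gas constant (J/(mol*K))
    """
    return math.sqrt(gamma * R * T / M)

# Air at 20 degrees C: expect roughly 343 m/s
c_air = ideal_gas_speed_of_sound(T=293.15, M=0.028964)
```

A real EoS replaces the analytic derivative with density and heat-capacity derivatives of the residual Helmholtz energy, which is where the PC-SAFT accuracy issue discussed in the abstract arises.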
International Nuclear Information System (INIS)
Alizadeh Pahlavani, M.R.; Shoulaie, A.
2010-01-01
In this paper, formulas are proposed for the self and mutual inductance calculations of the helical toroidal coil (HTC) by direct and indirect methods under superconductivity conditions. The direct method is based on Neumann's equation, and the indirect approach is based on the toroidal and poloidal components of the magnetic flux density. Numerical calculations show that the direct method is more accurate than the indirect approach, at the expense of a longer computational time. Implementation of some engineering assumptions in the indirect method is shown to reduce the computational time without loss of accuracy. Comparison between experimental measurements and simulated results for inductance, using the direct and indirect methods, indicates that the proposed formulas are highly reliable. It is also shown that the self inductance and the mutual inductance can be calculated in the same way, provided that the radius of curvature is greater than 0.4 times the minor radius and that the definition of the geometric mean radius under superconductivity conditions is used. Contour plots of the magnetic flux density and the inductance show that the inductance formulas of the helical toroidal coil could serve as the basis for optimal coil design. Optimization target functions such as maximization of the ratio of stored magnetic energy to the volume of the toroid or the conductor's mass, the elimination or balancing of stress in some coordinate directions, and the attenuation of leakage flux could be considered. The finite element (FE) approach is employed to present an algorithm for studying the three-dimensional leakage flux distribution pattern of the coil and for drawing the magnetic flux density lines of the HTC. The presented algorithm, owing to its simplicity of analysis and ease of handling non-symmetrical, three-dimensional objects, is advantageous compared with commercial software such as ANSYS, MAXWELL, and FLUX. Finally, using the
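Neumann's equation, on which the direct method above is based, is the double line integral M = μ0/(4π) ∮∮ (dl1 · dl2)/|r12|. A minimal discretized sketch for two coaxial circular loops (not the helical toroidal geometry of the paper; radii, spacing and segment count are illustrative) looks like this:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mutual_inductance(R1, R2, d, n=400):
    """Neumann double line integral for two coaxial circular loops.

    M = mu0/(4*pi) * sum_i sum_j (dl_i . dl_j) / |r_ij|,
    discretized with n straight segments per loop.
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Segment midpoints of each loop (loop 2 offset by d along z)
    p1 = np.stack([R1 * np.cos(phi), R1 * np.sin(phi), np.zeros(n)], axis=1)
    p2 = np.stack([R2 * np.cos(phi), R2 * np.sin(phi), np.full(n, d)], axis=1)
    # Tangent vectors scaled by segment length
    dl1 = np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1) * (2 * np.pi * R1 / n)
    dl2 = np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1) * (2 * np.pi * R2 / n)
    r = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
    return MU0 / (4.0 * np.pi) * np.sum((dl1 @ dl2.T) / r)

M = mutual_inductance(0.1, 0.1, 0.05)
```

For the HTC the same kernel is integrated over helical filaments instead of plane circles, which is what drives up the computational time of the direct method mentioned in the abstract.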
Automated-biasing approach to Monte Carlo shipping-cask calculations
International Nuclear Information System (INIS)
Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.
1982-01-01
Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system
A compressive sensing approach to the calculation of the inverse data space
Khan, Babar Hasan
2012-01-01
Seismic processing in the Inverse Data Space (IDS) has its advantages; for example, the task of removing multiples simply becomes muting the zero-offset, zero-time data in the inverse domain. Calculation of the inverse data space by sparse inversion techniques has been shown to mitigate some artifacts. We reformulate the problem by taking advantage of developments from the field of Compressive Sensing: the seismic data are compressed at the sensor level by recording projections of the traces, and this compressed data is then processed directly to estimate the inverse data space. Due to the smaller data set, we also gain in terms of computational complexity.
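The compressive-sensing idea described above (record random projections of a sparse signal, then recover it by sparse inversion) can be sketched generically. The sketch below uses a synthetic sparse "trace" and ISTA (iterative soft-thresholding) as the recovery solver; signal sizes and the regularization weight are illustrative, and this is the general CS recipe, not the paper's specific pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 48                                # signal length, number of projections
x = np.zeros(n)
x[[10, 40, 90]] = [1.0, -0.8, 0.5]            # sparse synthetic "trace"
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random projection matrix
y = A @ x                                     # compressed measurements

def ista(A, y, lam=0.01, steps=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        g = z - (A.T @ (A @ z - y)) / L       # gradient step
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return z

x_hat = ista(A, y)
```

With only m = 48 of 128 samples, the sparse solver still locates the three nonzero entries, which is the property the abstract exploits to work directly on compressed data.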
General approach for solving the density gradient theory in the interfacial tension calculations
DEFF Research Database (Denmark)
Liang, Xiaodong; Michelsen, Michael Locht
2017-01-01
Within the framework of the density gradient theory, the interfacial tension can be calculated by finding the density profiles that minimize an integral of two terms over the system of infinite width. It is found that the two integrands exhibit a constant difference along the interface for a finite … property evaluations compared to other methods. The performance of the algorithm with recommended parameters is analyzed for various systems, and the efficiency is further compared with the geometric-mean density gradient theory, which only needs to solve nonlinear algebraic equations. The results show that the algorithm is only 5-10 times less efficient than solving the geometric-mean density gradient theory.
Chain Rule Approach for Calculating the Time-Derivative of Flux
Energy Technology Data Exchange (ETDEWEB)
Langenbrunner, James R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Booker, Jane M. [Booker Scientific, Fredericksburg, TX (United States)
2017-10-03
The reaction history (gamma-flux observable) is studied mathematically by using the chain rule to take the total time derivative: the total time derivative of the flux is written as the product of the derivative of ion temperature with respect to time and the derivative of the flux with respect to ion temperature. Some equations are derived using the further simplification that the fusion reactivity is a parametrized function of ion temperature, T. Deuterium-tritium (D-T) fusion is used as the application, with reactivity calculations from three established reactivity parametrizations.
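The chain-rule factorization described above, dφ/dt = (dφ/dT)(dT/dt), can be sketched directly. The parametrizations below are deliberate placeholders (φ(T) = T⁴ and T(t) = t²), not any of the established D-T reactivity fits the abstract refers to; they only make the factorization concrete and checkable.

```python
def flux_of_T(T):
    """Placeholder flux parametrized by ion temperature T."""
    return T ** 4

def dflux_dT(T):
    """Analytic derivative of the placeholder flux with respect to T."""
    return 4.0 * T ** 3

def dT_dt(t):
    """Placeholder temperature history: T(t) = t**2, so dT/dt = 2t."""
    return 2.0 * t

def dflux_dt(t):
    """Chain rule: d(flux)/dt = d(flux)/dT * dT/dt."""
    T = t ** 2
    return dflux_dT(T) * dT_dt(t)
```

The factorization is useful precisely because dφ/dT is available analytically from the reactivity fit while dT/dt comes from the measured or simulated temperature history.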
Direct calculation of resonance energies and widths using an R-matrix approach
International Nuclear Information System (INIS)
Schneider, B.I.
1981-01-01
A modified R-matrix technique is presented which determines the eigenvalues and widths of resonant states by direct diagonalization of a complex, non-Hermitian matrix. The method utilizes only real basis sets and requires a minimum of complex arithmetic. It is applied to two problems: a set of coupled square wells, and the Π_g resonance of N₂ in the static-exchange approximation. The results of the calculation are in good agreement with other methods and converge very quickly with basis-set size.
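In such methods a resonance shows up as a complex eigenvalue λ = E_res − iΓ/2 of the non-Hermitian matrix, so the energy and width are read off from the real and imaginary parts. The 2×2 matrix below is a toy illustration of that readout only (not the N₂ calculation); its entries are arbitrary.

```python
import numpy as np

# Toy complex-symmetric, non-Hermitian matrix: resonance energies on the
# diagonal (with negative imaginary parts, i.e. decaying states) plus a
# real coupling. Values are illustrative only.
H = np.array([[2.0 - 0.10j, 0.5],
              [0.5,         4.0 - 0.05j]])

eigvals = np.linalg.eigvals(H)
# Each eigenvalue lambda = E - i*Gamma/2  ->  (E, Gamma):
resonances = [(lam.real, -2.0 * lam.imag) for lam in eigvals]
```

Direct diagonalization gives all resonance parameters at once, which is the practical advantage over locating poles of the scattering matrix one by one.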
Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda
2018-01-01
Migration studies of chemicals from contact materials have been widely conducted because of their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials, so migration experiments are theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and determine the mass transfer partition and diffusion coefficients governing the migration of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50°C. Scaled sensitivity coefficients were calculated to assess the simultaneous estimation of a number of mass transfer parameters. An optimal experimental design approach was performed to show the importance of properly designing a migration experiment. Additional parameters also provide better insights into the migration of the antioxidants; for example, the partition coefficients could be better estimated using data from the early part of the experiment rather than the end, so experiments could be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2×10^-14 and 19×10^-14 m²/s at ~40°C. The parameter estimation approach provided additional and useful insights into the migration of antioxidants from PLA films.
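The core of such a parameter-estimation exercise (fit a migration model to release data to extract D) can be sketched with the short-time Crank solution for a film, Mt/M∞ ≈ (2/L)√(Dt/π), which is linear in √t. The data below are synthetic and the film thickness, sampling times and D value are illustrative only, not the paper's values or its full nonlinear model.

```python
import numpy as np

L = 50e-6                                  # film thickness scale (m), illustrative
D_true = 2e-15                             # "true" diffusion coefficient (m^2/s)
t = np.linspace(3600, 36 * 3600, 20)       # sampling times (s)

# Short-time Crank solution: fraction released is linear in sqrt(t)
frac = (2.0 / L) * np.sqrt(D_true * t / np.pi)

# Least-squares estimate of the slope, then invert for D:
# slope = (2/L)*sqrt(D/pi)  ->  D = pi*(slope*L/2)^2
slope = np.linalg.lstsq(np.sqrt(t)[:, None], frac, rcond=None)[0][0]
D_est = np.pi * (slope * L / 2.0) ** 2
```

With noiseless synthetic data the fit recovers D exactly; with real data the scaled sensitivity coefficients mentioned in the abstract tell you whether D and the partition coefficient can be estimated simultaneously from the same curve.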
International Nuclear Information System (INIS)
Aufiero, Manuele; Brovchenko, Mariya; Cammi, Antonio; Clifford, Ivor; Geoffroy, Olivier; Heuer, Daniel; Laureau, Axel; Losa, Mario; Luzzi, Lelio; Merle-Lucotte, Elsa; Ricotti, Marco E.; Rouch, Hervé
2014-01-01
Highlights: • Calculation of the effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for β_eff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test-case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (β_eff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor is adopted as the test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursor drift, according to the fuel velocity field. The forward and adjoint multi-group diffusion eigenvalue problems are implemented and solved with the multi-physics tool-kit OpenFOAM, taking into account the convective and turbulent diffusive terms in the precursor balance. These two approaches show good agreement over the whole range of MSFR operating conditions. An analytical formula for the circulating-to-static-conditions β_eff correction factor is also derived under simple hypotheses, which explicitly takes into account the spatial dependence of the neutron importance. Its accuracy is assessed against Monte Carlo and deterministic results. The effects of the in-core recirculation vortex and of turbulent diffusion are finally analysed and discussed.
International Nuclear Information System (INIS)
Schmidt, T.
1988-01-01
The numerical reliability calculation of cracked structural components under cyclic fatigue stress can be done with the help of models from probabilistic fracture mechanics. An alternative to the Monte Carlo simulation method is examined, based on describing the failure process as a Markov process. The Markov method is traced back directly to the stochastic parameters of a two-dimensional fracture mechanics model, with the effects of inspections and repairs also considered. The probability of failure and the expected failure frequency can be determined as functions of time from the transition and conditional probabilities of the original or derived Markov process. For concrete calculation, an approximating Markov chain is designed which, under certain conditions, gives a sufficient approximation of the original Markov process and of the reliability characteristics determined by it. Application of the MARKOV program code, which implements this algorithm, shows sufficient agreement with the Monte Carlo reference results. The starting point of the investigation was the 'Deutsche Risikostudie B (DWR)' ('German Risk Study B (DWR)'), specifically the reliability of the main coolant line. (orig./HP) [de
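The approximating-Markov-chain idea can be sketched with a minimal discrete-time chain: crack-size classes as transient states and "failed" as an absorbing state, with the cumulative failure probability read off the absorbing state's occupancy over load cycles. The transition probabilities below are illustrative placeholders, not values derived from a fracture mechanics model.

```python
import numpy as np

# Row-stochastic transition matrix per load cycle:
# states = [small crack, medium crack, large crack, failed (absorbing)]
P = np.array([
    [0.98, 0.02, 0.00, 0.00],
    [0.00, 0.95, 0.04, 0.01],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])    # start with a small crack
failure_prob = []
for cycle in range(200):
    state = state @ P                      # advance one cycle
    failure_prob.append(state[-1])         # cumulative failure probability
```

Inspections and repairs enter the same framework as extra transitions back toward smaller-crack states, which is why the chain formulation handles them naturally, as the abstract notes.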
Directory of Open Access Journals (Sweden)
Michael J. Leamy
2011-12-01
Dispersion calculations are presented for cylindrical carbon nanotubes using a manifold-based continuum-atomistic finite element formulation combined with Bloch analysis. The formulated finite elements allow any (n,m) chiral nanotube, or mixed tubes formed by periodically-repeating heterojunctions, to be examined quickly and accurately using only three input parameters (radius, chiral angle, and unit cell length) and a trivial structured mesh, thus avoiding the tedious geometry generation and energy minimization tasks associated with ab initio and lattice dynamics-based techniques. A critical assessment of the technique is pursued to determine the validity range of the resulting dispersion calculations, and to identify any dispersion anomalies. Two small anomalies in the dispersion curves are documented, which can be easily identified and therefore rectified: difficulty in achieving a zero-energy point for the acoustic twisting phonon, and branch veering in nanotubes with nonzero chiral angle. The twisting mode quickly restores its correct group velocity as the wavenumber increases, while the branch veering is associated with a rapid exchange of eigenvectors at the veering point, which also lessens its impact. By taking the two noted anomalies into account, accurate predictions of the acoustic and low-frequency optical branches can be achieved out to the midpoint of the first Brillouin zone.
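Bloch analysis reduces the periodic structure to one unit cell with a wavenumber-dependent dynamical matrix whose eigenvalues give ω(q). The simplest possible instance, used here only to illustrate the machinery (a 1D monatomic mass-spring chain, not a nanotube; for the nanotubes the dynamical matrix comes from the finite elements), has a scalar dynamical matrix:

```python
import numpy as np

def dispersion(q, k=1.0, m=1.0, a=1.0):
    """omega(q) for a 1D monatomic chain: unit cell = one mass m,
    nearest-neighbor spring k, lattice spacing a.

    Bloch reduction gives the scalar dynamical matrix
    D(q) = (k/m) * (2 - 2*cos(q*a)), and omega = sqrt(D).
    """
    D = (k / m) * (2.0 - 2.0 * np.cos(q * a))
    return np.sqrt(D)

qs = np.linspace(0.0, np.pi, 50)   # half of the first Brillouin zone
omegas = dispersion(qs)
```

The acoustic branch correctly starts at ω = 0 for q = 0; the "zero-energy point" anomaly discussed in the abstract is the failure of a discretized model to reproduce exactly this property for the twisting mode.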
New approach to multishell calculations in multiple angular momentum coupling schemes
International Nuclear Information System (INIS)
Chen, J.; Novoselsky, A.; Vallieres, M.; Gilmore, R.
1989-01-01
The procedure developed recently to calculate single-shell wave functions and matrix elements for multiple angular momentum shell-model calculations is extended to the multishell case. This was based on a factorization procedure introduced by Jahn. As a consequence of the factorization, coefficients of fractional parentage between states of arbitrary symmetry must be constructed to build up single-shell N-particle states from single-shell N-1-particle states. Multishell N-particle states are built up recursively from multishell N-1-particle states by using outer-product isoscalar factors. Symmetrized multishell states in one angular momentum subspace are combined with states of conjugate symmetry in a second angular momentum subspace to construct fermion wave functions. This is done using inner-product isoscalar factors. The coefficients of fractional parentage, outer-product isoscalar factors, and inner-product isoscalar factors are computed recursively using a matrix diagonalization algorithm. Shell-model matrix elements are constructed from these factors by using a new sum over path overlaps method. This computational procedure involving factorization is substantially more efficient than computational procedures which do not exploit factorization
An approach to implementing sensitivity and uncertainty analysis methods in a safety calculation
International Nuclear Information System (INIS)
Pepin, G.; Sallaberry, C.
2003-01-01
Simulation of migration in deep geological formations leads to solving convection-diffusion equations in porous media, coupled with the computation of hydrogeologic flow. Different time scales (simulation over one million years), spatial scales, and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through uncertainty and sensitivity analysis. ANDRA (French national agency for the management of radioactive wastes) carries out studies on the treatment of input data uncertainties and their propagation through the safety models, in order to quantify the influence of the input data uncertainties on the various safety indicators selected. The approach taken by ANDRA initially consists of two studies undertaken in parallel: the first is an international review of the choices made by ANDRA's foreign counterparts in carrying out their uncertainty and sensitivity analyses; the second is a review of the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case which gathers all the specific constraints (physical, numerical and data-processing) of the problem studied by ANDRA.
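One of the standard techniques such a review covers is plain Monte Carlo uncertainty propagation with a correlation-based sensitivity ranking. The sketch below is generic: the input distributions and the model (a simple ratio standing in for a transport indicator) are placeholders, not ANDRA's safety model or data.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000

# Uncertain inputs (illustrative distributions):
permeability = rng.lognormal(mean=-30.0, sigma=1.0, size=N)  # m^2
porosity = rng.uniform(0.05, 0.25, size=N)                   # dimensionless

def model(k, phi):
    """Placeholder indicator: larger permeability / smaller porosity
    means faster transport."""
    return k / phi

out = model(permeability, porosity)

# Simple sensitivity measure: |correlation| of each (log) input with the
# (log) output. More refined choices (SRC, Sobol indices) follow the
# same sample-then-rank pattern.
sens = {
    "permeability": abs(np.corrcoef(np.log(permeability), np.log(out))[0, 1]),
    "porosity": abs(np.corrcoef(np.log(porosity), np.log(out))[0, 1]),
}
```

The ranking identifies which input uncertainties dominate the spread of the safety indicator, which is exactly the question the comparison of methods in the abstract is meant to answer robustly.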
Mass Spectrometry Imaging of Biological Tissue: An Approach for Multicenter Studies
Energy Technology Data Exchange (ETDEWEB)
Rompp, Andreas; Both, Jean-Pierre; Brunelle, Alain; Heeren, Ronald M.; Laprevote, Olivier; Prideaux, Brendan; Seyer, Alexandre; Spengler, Bernhard; Stoeckli, Markus; Smith, Donald F.
2015-03-01
Mass spectrometry imaging has become a popular tool for probing the chemical complexity of biological surfaces. This led to the development of a wide range of instrumentation and preparation protocols. It is thus desirable to evaluate and compare the data output from different methodologies and mass spectrometers. Here, we present an approach for the comparison of mass spectrometry imaging data from different laboratories (often referred to as multicenter studies). This is exemplified by the analysis of mouse brain sections in five laboratories in Europe and the USA. The instrumentation includes matrix-assisted laser desorption/ionization (MALDI)-time-of-flight (TOF), MALDI-QTOF, MALDI-Fourier transform ion cyclotron resonance (FTICR), atmospheric-pressure (AP)-MALDI-Orbitrap, and cluster TOF-secondary ion mass spectrometry (SIMS). Experimental parameters such as measurement speed, imaging bin width, and mass spectrometric parameters are discussed. All datasets were converted to the standard data format imzML and displayed in a common open-source software with identical parameters for visualization, which facilitates direct comparison of MS images. The imzML conversion also allowed exchange of fully functional MS imaging datasets between the different laboratories. The experiments ranged from overview measurements of the full mouse brain to detailed analysis of smaller features (depending on spatial resolution settings), but common histological features such as the corpus callosum were visible in all measurements. High spatial resolution measurements of AP-MALDI-Orbitrap and TOF-SIMS showed comparable structures in the low-micrometer range. We discuss general considerations for planning and performing multicenter studies in mass spectrometry imaging. This includes details on the selection, distribution, and preparation of tissue samples as well as on data handling. Such multicenter studies in combination with ongoing activities for reporting guidelines, a common
DEFF Research Database (Denmark)
Vegge, Tejs; Sethna, J.P.; Cheong, S.-A.
2001-01-01
, and quantum tunneling rates for dislocation kinks and jogs in copper screw dislocations. We find that jogs are unlikely to tunnel, but the kinks should have large quantum fluctuations. The kink motion involves hundreds of atoms each shifting a tiny amount, leading to a small effective mass and tunneling...
Soil structure interaction calculations: a comparison of methods
International Nuclear Information System (INIS)
Wight, L.; Zaslawsky, M.
1976-01-01
Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes
Energy Technology Data Exchange (ETDEWEB)
Penfold, S; Miller, A [University of Adelaide, Adelaide, SA (Australia)
2015-06-15
Purpose: Stoichiometric calibration of Hounsfield Units (HUs) for conversion to proton relative stopping powers (RStPs) is vital for accurate dose calculation in proton therapy. However, proton dose distributions depend not only on the RStP but also on the relative scattering power (RScP) of patient tissues. RScP is approximated from material density, but a stoichiometric calibration of HU-density tables is commonly neglected. The purpose of this work was to quantify the difference in calculated dose of a commercial TPS when using HU-density tables based on tissue substitute materials versus stoichiometrically calibrated ICRU tissues. Methods: Two HU-density calibration tables were generated based on scans of the CIRS electron density phantom. The first table was based directly on the measured HU and manufacturer-quoted density of the tissue substitute materials. The second was based on the same CT scan of the CIRS phantom followed by a stoichiometric calibration to ICRU44 tissue materials. The research version of Pinnacle³ proton therapy was used to compute dose in a patient CT data set utilizing both HU-density tables. Results: The two HU-density tables showed significant differences for bone tissues, the difference increasing with increasing HU. Differences in the density calibration tables translated to differences in calculated RScP of −2.5% for ICRU skeletal muscle and 9.2% for ICRU femur. Dose-volume histogram analysis of a parallel-opposed proton therapy prostate plan showed that the difference in calculated dose was negligible when using the two different HU-density calibration tables. Conclusion: The impact of the HU-density calibration technique on proton therapy dose calculation was assessed. While differences were found in the calculated RScP of bony tissues, the difference in dose distribution for realistic treatment scenarios was found to be insignificant.
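The HU-to-density conversion underlying both calibration tables is a piecewise-linear table lookup. A minimal sketch with made-up calibration points (a real table comes from the phantom scan or the stoichiometric fit, not from these values):

```python
from bisect import bisect_right

# Hypothetical HU -> mass density (g/cm^3) calibration points, for
# illustration only.
HU = [-1000, -100, 0, 100, 500, 1500]
RHO = [0.001, 0.93, 1.00, 1.06, 1.35, 1.92]

def hu_to_density(hu):
    # Piecewise-linear lookup, clamped at the ends of the table.
    if hu <= HU[0]:
        return RHO[0]
    if hu >= HU[-1]:
        return RHO[-1]
    i = bisect_right(HU, hu) - 1
    f = (hu - HU[i]) / (HU[i + 1] - HU[i])
    return RHO[i] + f * (RHO[i + 1] - RHO[i])
```

A stoichiometric calibration would replace the table entries with densities fitted to ICRU tissue compositions; the lookup itself is unchanged.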
Bonacci, Ognjen; Željković, Ivana
2018-01-01
Different countries use different methods for calculating the daily mean temperature. None of them precisely captures the true daily mean, defined as the integral of continuous temperature measurements over a day. It is of both scientific and practical importance to determine how temperatures calculated by different methods and approaches deviate from the true daily mean. Five daily mean temperatures (T0, T1, T2, T3, T4) were calculated using five different equations. The mean of the 24 hourly temperature observations during the calendar day is taken to represent the true daily mean, T0. The differences Δi between T0 and the four other daily means T1, T2, T3, and T4 were calculated and analysed. The analyses use hourly data measured from 1 January 1999 to 31 December 2014 (149,016 h, 192 months, 16 years) at three Croatian meteorological stations situated in distinct climatological areas: Zagreb Grič in a mild climate, Zavižan in the cold mountain region, and Dubrovnik in the hot Mediterranean. The influence of fog on temperature is analysed, and special attention is given to the extreme (maximum and minimum) daily differences that occurred at the three stations. The choice of the fixed local hours used for calculating the daily mean temperature plays a crucial role in reducing the bias from the true daily mean.
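The comparison can be illustrated on one synthetic day of hourly data. T0 is the 24-value mean; the fixed-hour formula used for T1 below, (T07 + T14 + 2·T21)/4, is a classical climatological scheme and not necessarily one of the paper's four equations; the afternoon anomaly term is invented so that the estimators do not agree trivially:

```python
import math

# One synthetic day: sinusoid (minimum near 05 h, maximum near 17 h) plus a
# small afternoon anomaly.
hourly = [15.0 - 8.0 * math.cos(2 * math.pi * (h - 5) / 24)
          + 1.5 * math.exp(-((h - 14) ** 2) / 8.0)
          for h in range(24)]

t0 = sum(hourly) / 24.0                # "true" daily mean (reference)
t_max, t_min = max(hourly), min(hourly)
t2 = (t_max + t_min) / 2.0             # extreme-value estimator
t1 = (hourly[7] + hourly[14] + 2.0 * hourly[21]) / 4.0  # fixed-hour estimator

d1, d2 = t0 - t1, t0 - t2              # deviations from the true daily mean
```

Applied to 16 years of real hourly data, the distributions of such deviations (and their extremes) are exactly what the study tabulates station by station.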
Yurchenko, Sergei N; Yachmenev, Andrey; Ovsyannikov, Roman I
2017-09-12
We present a general, numerically motivated approach to the construction of symmetry-adapted basis functions for solving ro-vibrational Schrödinger equations. The approach is based on the property of the Hamiltonian operator to commute with the complete set of symmetry operators and, hence, to reflect the symmetry of the system. The symmetry-adapted ro-vibrational basis set is constructed numerically by solving a set of reduced vibrational eigenvalue problems. In order to assign the irreducible representations associated with these eigenfunctions, their symmetry properties are probed on a grid of molecular geometries with the corresponding symmetry operations. The transformation matrices are reconstructed by solving overdetermined systems of linear equations related to the transformation properties of the corresponding wave functions on the grid. Our method is implemented in the variational approach TROVE and has been successfully applied to many problems covering the most important molecular symmetry groups. Several examples are used to illustrate the procedure, which can be easily applied to different types of coordinates, basis sets, and molecular systems.
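Reconstructing the transformation matrices from wave-function values on a grid reduces to solving small overdetermined linear systems, as in this toy sketch: two model functions that swap under the reflection x → −x, with the functions and grid invented for illustration rather than taken from TROVE:

```python
import math

# Two model "eigenfunctions" that swap under the reflection x -> -x:
# g1(-x) = g2(x) and g2(-x) = g1(x).
def g1(x): return math.cos(x) + math.sin(x)
def g2(x): return math.cos(x) - math.sin(x)

grid = [0.1 * k for k in range(1, 40)]    # grid of sampling geometries

def lstsq2(A, b):
    # Least-squares solution of an overdetermined system with two unknowns
    # via the normal equations (adequate for this well-conditioned toy case).
    a11 = sum(r[0] * r[0] for r in A)
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * y for r, y in zip(A, b))
    b2 = sum(r[1] * y for r, y in zip(A, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Transformation matrix T with g_i(-x) = sum_j T[i][j] g_j(x), row by row.
A = [(g1(x), g2(x)) for x in grid]
T = [lstsq2(A, [g1(-x) for x in grid]),
     lstsq2(A, [g2(-x) for x in grid])]
```

Here the least-squares solve recovers the permutation matrix [[0, 1], [1, 0]] expected for this pair; in the actual method the same idea is applied to many basis functions and all symmetry operations of the group.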
International Nuclear Information System (INIS)
Dekens, O.; Marguet, S.; Risch, P.
1997-01-01
In order to optimize nuclear fuel utilization with respect to both irradiation and storage, the Research and Development Division of Electricité de France (EDF) developed STRAPONTIN, a fast and accurate software tool that simulates a fuel assembly's life from its stay inside the reactor to the final repository. The discrepancies between reference calculations and STRAPONTIN are generally smaller than 5%. Moreover, the low calculation time makes it possible to couple STRAPONTIN to any large code in order to widen its scope without impairing its CPU time. (authors)
Karthikeyan, N.; Joseph Prince, J.; Ramalingam, S.; Periandy, S.
2015-03-01
In this research work, the vibrational IR, polarization Raman, NMR and mass spectra of terephthalic acid (TA) were recorded. The observed fundamental peaks (IR, Raman) were assigned according to their characteristic regions. Hybrid computational calculations of the geometrical and vibrational parameters were carried out by DFT (B3LYP and B3PW91) methods with the 6-31++G(d,p) and 6-311++G(d,p) basis sets, and the corresponding results were tabulated. The molecular mass spectral data related to the base molecule and the substitutional group of the compound were analyzed. The modification of the chemical properties by the reaction mechanism of the injection of the dicarboxylic group into the base molecule was investigated. The 13C and 1H NMR spectra were simulated using the gauge-independent atomic orbital (GIAO) method, and the absolute chemical shifts relative to TMS were compared with the experimental spectra. The electronic and optical properties (absorption wavelengths, excitation energies, dipole moment and frontier molecular orbital energies) were studied by hybrid Gaussian calculation methods. The orbital energies of different levels of HOMO and LUMO were calculated, and the molecular orbital lobe overlapping showed the intercharge transfer between the base molecule and the ligand group. From the frontier molecular orbitals (FMO), the possibility of electrophilic and nucleophilic attack was also analyzed. The NLO activity of the title compound, related to its polarizability and hyperpolarizability, was also discussed. The molecule was fragmented with respect to atomic mass, and the dependence of the mass variation on the substitutions was also studied.
How to integrate divergent integrals: a pure numerical approach to complex loop calculations
International Nuclear Information System (INIS)
Caravaglios, F.
2000-01-01
Loop calculations involve the evaluation of divergent integrals. Usually [G. 't Hooft, M. Veltman, Nucl. Phys. B 44 (1972) 189] one computes them in a number of dimensions different from four, where the integral is convergent, then performs the analytical continuation and considers the Laurent expansion in powers of ε=n-4. In this paper we discuss a method to extract directly all coefficients of this expansion by means of concrete and well-defined integrals in a five-dimensional space. We bypass the formal and symbolic procedure of analytic continuation; instead, we can numerically compute the integrals to extract directly both the coefficient of the pole 1/ε and the finite part
Energy Technology Data Exchange (ETDEWEB)
Ortenzi, Luciano
2013-10-17
In this thesis I study the interplay between magnetism and superconductivity in itinerant magnets and superconductors. I do this by applying a semiphenomenological method to four representative compounds. In particular I use the discrepancies (whenever present) between density functional theory (DFT) calculations and the experiments in order to construct phenomenological models which explain the magnetic, superconducting and optical properties of four representative systems. I focus my attention on the superconducting and normal state properties of the recently discovered APt3P superconductors, on superconducting hole-doped CuBiSO, on the optical properties of LaFePO and finally on the ferromagnetic-paramagnetic transition of Ni3Al under pressure. At the end I present a new method which aims to describe the effect of spin fluctuations in itinerant magnets and superconductors that can be used to monitor the evolution of the electronic structure from non magnetic to magnetic in systems close to a quantum critical point.
Continuum Electrostatics Approaches to Calculating pKas and Ems in Proteins.
Gunner, M R; Baker, N A
2016-01-01
Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energy of these reactions is dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding the underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as outline directions for future theoretical and computational research. © 2016 Elsevier Inc. All rights reserved.
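The link between the computed electrostatic free energies and the predicted charge states is the standard thermodynamic relation ΔpKa = ΔΔG / (RT ln 10). A minimal sketch (the model-compound bookkeeping is deliberately simplified relative to a full continuum-electrostatics calculation):

```python
import math

R_KCAL = 1.987e-3   # gas constant, kcal/(mol*K)

def pka_shift(ddG_kcal, T=298.15):
    # pKa shift from a change in deprotonation free energy (kcal/mol):
    # delta-pKa = ddG / (R T ln 10); RT ln 10 is ~1.36 kcal/mol at 298 K.
    return ddG_kcal / (R_KCAL * T * math.log(10))

def protein_pka(pka_model, ddG_kcal):
    # Shift a model-compound pKa by the protein-induced free-energy change
    # (desolvation penalty plus interactions with protein charges).
    return pka_model + pka_shift(ddG_kcal)
```

The hard part, which the chapter's methods address, is computing ΔΔG itself (Poisson-Boltzmann solvation terms, interactions, and conformational relaxation); the conversion to pKa or Em units is the easy final step shown here.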
Neural network approach for the calculation of potential coefficients in quantum mechanics
Ossandón, Sebastián; Reyes, Camilo; Cumsille, Patricio; Reyes, Carlos M.
2017-05-01
A numerical method based on artificial neural networks is used to solve the inverse Schrödinger equation for a multi-parameter class of potentials. First, the finite element method is used to solve the direct problem repeatedly for different parametrizations of the chosen potential function. Then, using the computed eigenvalues as a training set, a direct radial basis neural network is trained to map potential parameters to eigenvalues. This relationship is then inverted and refined by training an inverse radial basis neural network, allowing the unknown parameters to be calculated and hence the potential function to be estimated. Three numerical examples are presented in order to prove the effectiveness of the method. The results show that the proposed method has the advantage of using fewer computational resources without a significant loss of accuracy.
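The direct-then-inverse map idea can be sketched on a one-parameter toy problem: a stand-in "solver" (the harmonic-oscillator ground state E0 = ω/2 replacing the finite element solver) and a Gaussian radial-basis interpolant in place of the trained inverse network. All numbers below are invented for illustration:

```python
import math

# Stand-in for the finite-element eigensolver: harmonic-oscillator ground
# state, E0 = omega / 2 (atomic units).
def forward(omega):
    return 0.5 * omega

params = [0.5 + 0.25 * i for i in range(9)]   # sampled potential parameters
eigs = [forward(w) for w in params]           # training eigenvalues

def phi(r, eps=4.0):
    # Gaussian radial basis function.
    return math.exp(-(eps * r) ** 2)

def gauss_solve(M, b):
    # Dense Gaussian elimination with partial pivoting (small systems only).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Fit the *inverse* map: omega(E) = sum_j c_j * phi(E - E_j).
A = [[phi(ei - ej) for ej in eigs] for ei in eigs]
coeffs = gauss_solve(A, params)

def inverse(E):
    # Predict the potential parameter from an observed eigenvalue.
    return sum(c * phi(E - ej) for c, ej in zip(coeffs, eigs))
```

In the paper's setting the forward solves are expensive finite-element eigenvalue problems and both maps are multi-dimensional, but the structure (sample, fit forward, invert and refine) is the same.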
An approach to first principles electronic structure calculation by symbolic-numeric computation
Directory of Open Access Journals (Sweden)
Akihito Kikuchi
2013-04-01
There is a wide variety of electronic structure calculations that cooperate with symbolic computation, whose main purpose is to play an auxiliary (though not unimportant) role for the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely symbolic computation [10-16]. Examples can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. For the former, when one uses a special atomic basis for a specific purpose, expressing the integrals as combinations of already known analytic functions may sometimes be very difficult. For the latter, one must rearrange a number of creation and annihilation operators into a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for the introduction of symbolic computation as a forerunner of the numerical one, and their collaboration has won considerable successes. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation: the present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.
Ramshaw, B J; Sebastian, S E; McDonald, R D; Day, James; Tan, B S; Zhu, Z; Betts, J B; Liang, Ruixing; Bonn, D A; Hardy, W N; Harrison, N
2015-04-17
In the quest for superconductors with higher transition temperatures (Tc), one emerging motif is that electronic interactions favorable for superconductivity can be enhanced by fluctuations of a broken-symmetry phase. Recent experiments have suggested the existence of the requisite broken-symmetry phase in the high-Tc cuprates, but the impact of such a phase on the ground-state electronic interactions has remained unclear. We used magnetic fields exceeding 90 tesla to access the underlying metallic state of the cuprate YBa2Cu3O6+δ over a wide range of doping, and observed magnetic quantum oscillations that reveal a strong enhancement of the quasiparticle effective mass toward optimal doping. This mass enhancement results from increasing electronic interactions approaching optimal doping, and suggests a quantum critical point at a hole doping of pcrit ≈ 0.18. Copyright © 2015, American Association for the Advancement of Science.
Calculating orthologs in bacteria and Archaea: a divide and conquer approach.
Directory of Open Access Journals (Sweden)
Mihail R Halachev
Among proteins, orthologs are defined as those that are derived by vertical descent from a single progenitor in the last common ancestor of their host organisms. Our goal is to compute a complete set of protein orthologs derived from all currently available complete bacterial and archaeal genomes. Traditional approaches typically rely on all-against-all BLAST searching, which is prohibitively expensive in terms of hardware requirements or computational time (requiring an estimated 18 months or more on a typical server). Here, we present xBASE-Orth, a system for ongoing ortholog annotation, which applies a "divide and conquer" approach and adopts a pragmatic scheme that trades accuracy for speed. Starting at species level, xBASE-Orth carefully constructs and uses pan-genomes as proxies for the full collections of coding sequences at each level as it progressively climbs the taxonomic tree using the previously computed data. This leads to a significant decrease in the number of alignments that need to be performed, which translates into faster computation, making ortholog computation possible on a global scale. Using xBASE-Orth, we analyzed an NCBI collection of 1,288 bacterial and 94 archaeal complete genomes with more than 4 million coding sequences in 5 weeks and predicted more than 700 million ortholog pairs, clustered in 175,531 orthologous groups. We have also identified sets of highly conserved bacterial and archaeal orthologs and in so doing have highlighted anomalies in genome annotation and in the proposed composition of the minimal bacterial genome. In summary, our approach allows for scalable and efficient computation of the bacterial and archaeal ortholog annotations. In addition, due to its hierarchical nature, it is suitable for incorporating novel complete genomes and alternative genome annotations. The computed ortholog data and a continuously evolving set of applications based on it are integrated in the xBASE database, available
Calculation of vibrational frequencies through a variational reduced-coupling approach.
Scribano, Yohann; Benoit, David M
2007-10-28
In this study, we present a new method to perform accurate and efficient vibrational configuration interaction computations for large molecular systems. We use the vibrational self-consistent field (VSCF) method to compute an initial description of the vibrational wave function of the system, combined with the single-to-all approach to compute a sparse potential energy surface at the chosen ab initio level of theory. A Davidson scheme is then used to diagonalize the Hamiltonian matrix built on the VSCF virtual basis. Our method is applied to the computation of the OH-stretch frequency of formic acid and benzoic acid to demonstrate the efficiency and accuracy of this new technique.
Chen, Xin; Qin, Shanshan; Chen, Shuai; Li, Jinlong; Li, Lixin; Wang, Zhongling; Wang, Quan; Lin, Jianping; Yang, Cheng; Shui, Wenqing
2015-01-01
In fragment-based lead discovery (FBLD), a cascade combining multiple orthogonal technologies is required for reliable detection and characterization of fragment binding to the target. Given the limitations of the mainstream screening techniques, we presented a ligand-observed mass spectrometry approach to expand the toolkits and increase the flexibility of building a FBLD pipeline especially for tough targets. In this study, this approach was integrated into a FBLD program targeting the HCV RNA polymerase NS5B. Our ligand-observed mass spectrometry analysis resulted in the discovery of 10 hits from a 384-member fragment library through two independent screens of complex cocktails and a follow-up validation assay. Moreover, this MS-based approach enabled quantitative measurement of weak binding affinities of fragments which was in general consistent with SPR analysis. Five out of the ten hits were then successfully translated to X-ray structures of fragment-bound complexes to lay a foundation for structure-based inhibitor design. With distinctive strengths in terms of high capacity and speed, minimal method development, easy sample preparation, low material consumption and quantitative capability, this MS-based assay is anticipated to be a valuable addition to the repertoire of current fragment screening techniques. PMID:25666181
A DYNAMIC APPROACH TO CALCULATE SHADOW PRICES OF WATER RESOURCES FOR NINE MAJOR RIVERS IN CHINA
Institute of Scientific and Technical Information of China (English)
Jing HE; Xikang CHEN; Yong SHI
2006-01-01
China is experiencing serious water issues. There are many differences among the Nine Major River basins of China in the construction of dikes, reservoirs, floodgates, flood discharge projects, flood diversion projects, water ecological construction, water conservancy management, etc. The shadow prices of water resources for the Nine Major Rivers can provide suggestions to the Chinese government. This article develops a dynamic shadow-price approach based on a multiperiod input-output optimizing model. Unlike previous approaches, the new model is based on a dynamic computable general equilibrium (DCGE) model to solve the problem of marginal long-term prices of water resources. First, definitions and algorithms of the DCGE model are elaborated. Second, the shadow prices of water resources for the Nine Major Rivers over 1949-2050 in China, computed using the 1999 National Water Conservancy input-holding-output table for the Nine Major Rivers, are listed. A conclusion of this article is that the shadow prices of water resources for the Nine Major Rivers largely reflect the extent of scarcity. Selling prices of water resources should be revised via the usage of parameters representing shadow prices.
Omori, S.
1973-01-01
The turbulent kinetic energy equation is coupled with boundary layer equations to solve the characteristics of compressible turbulent boundary layers with mass injection and combustion. The Reynolds stress is related to the turbulent kinetic energy using the Prandtl-Wieghardt formulation. When a lean mixture of hydrogen and nitrogen is injected through a porous plate into the subsonic turbulent boundary layer of air flow and ignited by external means, the turbulent kinetic energy increases twice as much as that of noncombusting flow with the same mass injection rate of nitrogen. The magnitudes of eddy viscosity between combusting and noncombusting flows with injection, however, are almost the same due to temperature effects, while the distributions are different. The velocity profiles are significantly affected by combustion; that is, combustion alters the velocity profile as if the mass injection rate is increased, reducing the skin-friction as a result of a smaller velocity gradient at the wall. If pure hydrogen as a transpiration coolant is injected into a rocket nozzle boundary layer flow of combustion products, the temperature drops significantly across the boundary layer due to the high heat capacity of hydrogen. At a certain distance from the wall, hydrogen reacts with the combustion products, liberating an extensive amount of heat. The resulting large increase in temperature reduces the eddy viscosity in this region.
International Nuclear Information System (INIS)
Winzer, A.
1978-01-01
It is shown that a direct proportionality exists between the activation energy for mass transfer at the respective crystal faces of ionic crystals and the frequency of the (longitudinal-optical) phonons, with Planck's constant found once more as the proportionality constant. Thus it could be demonstrated that the different activation energies measured at different time intervals for mass transfer processes at phase boundaries of ionic crystals can be attributed to the specific growth of the crystal faces. Thus, NaCl crystal fractions which were mechanically stressed (pulverized and sifted) and consequently contained a large proportion of [111] and [110] faces, respectively, experimentally yielded an activation energy which agrees with the values determined by quantum theory when the propagation frequency of the phonons is inserted into a derived equation. This relation was also confirmed by NaCl crystal fractions predominantly containing cubic faces. This also indicates that quantum mechanical laws are important in mass transfer processes at phase boundaries of ionic crystals. (author)
International Nuclear Information System (INIS)
Aya, I.
1975-11-01
The proposed model was developed at ORNL to calculate the mass flow rate and other quantities of two-phase flow in a pipe when the flow is dispersed with slip between the phases. The calculational model is based on assumptions concerning the characteristics of a turbine meter and a drag disk, and should be validated with experimental data before being used in blowdown analysis. In order to compare dispersed flow and homogeneous flow, the ratio of readings from each flow regime for each device discussed is calculated for a given mass flow rate and steam quality. The sensitivity analysis shows that the calculated flow rate of a steam-water mixture (based on the measurements of a drag disk and a gamma densitometer, in which the flow is assumed to be homogeneous even if there is some slip between phases) is very close to the real flow rate in the case of dispersed flow at low quality. As the steam quality increases at a constant slip ratio, all models tend to overestimate. At 20 percent quality the overestimates reach 8 percent in the proposed model, 15 percent in Rouhani's model, 38 percent in the homogeneous model, and 75 percent in Popper's model
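The homogeneous-model combination of instrument readings referred to above can be sketched as follows: a drag disk responds roughly to the momentum flux ρv², a gamma densitometer gives ρ, so the homogeneous mass flux is G = √(ρ · ρv²) = ρv. The numbers are illustrative, not taken from the report:

```python
import math

def mass_flux_homogeneous(drag_signal, density):
    # Homogeneous-model mass flux G = rho * v, inferred from a drag-disk
    # momentum-flux reading (~ rho * v**2) and a densitometer density:
    # G = sqrt(rho * (rho * v**2)).
    return math.sqrt(density * drag_signal)

# Illustrative instrument readings for a dispersed steam-water flow.
rho = 300.0                 # mixture density from gamma densitometer, kg/m^3
true_v = 4.0                # mixture velocity, m/s
drag = rho * true_v ** 2    # idealized drag-disk reading, kg/(m*s^2)

G = mass_flux_homogeneous(drag, rho)    # kg/(m^2*s); equals rho * v here
```

The slip-dependent overestimates quoted in the abstract arise because, in a real dispersed flow, the phases contribute to the drag and density signals with different velocities, so the simple product above no longer equals the true mass flux.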
An Approach for Calculating Land Valuation by Using Inspire Data Models
Aydinoglu, A. C.; Bovkir, R.
2017-11-01
Land valuation is a highly important concept for societies, and governments have always placed emphasis on the process, especially for taxation, expropriation, market capitalization and economic activity purposes. To achieve an interoperable and standardised land valuation, INSPIRE data models can be very practical and effective. If the data used in the land valuation process are produced in compliance with INSPIRE specifications, a reliable and effective land valuation process can be performed. In this study, the possibility of performing the land valuation process using the INSPIRE data models was analysed, and with the help of Geographic Information Systems (GIS) a case study in Pendik was implemented. For this purpose, data analysis and gathering were performed first. Afterwards, different data structures were transformed according to the INSPIRE data model requirements. For each data set the necessary ETL (Extract-Transform-Load) tools were produced and all data were transformed according to the target data requirements. With the availability and practicability of the spatial analysis tools of GIS software, land valuation calculations were performed for the study area.
Approach synthesis of superheavy nuclei from some aspects of cross section calculations
International Nuclear Information System (INIS)
Liu Zuhua
2003-01-01
Several important aspects of the cross-section calculations for the synthesis of superheavy nuclei are examined: the effects of channel coupling, the damping of the shell correction energy, the collective enhancements in the level density, and the spin distributions of evaporation residues. The coupling of the relative motion with internal degrees of freedom significantly enhances the capture cross section at sub-barrier energies. However, recent measurements of spin distributions for the surviving compound nucleus show that only low partial waves contribute to the evaporation residues, which should at least partially cancel out the enhancement due to channel coupling. The fission barriers are determined mainly by the shell correction energy in the case of superheavy nuclei. Therefore, it is especially important to determine as accurately as possible the damping parameter which describes the decrease of the shell-correction influence. In addition, the collective enhancement factor in the level density also plays a very important role in the synthesis of heavy spherical nuclei.
A power spectrum approach to tally convergence in Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Ueki, Taro
2017-01-01
In Monte Carlo criticality calculation, confidence interval estimation is based on the central limit theorem (CLT) for a series of tallies from generations in equilibrium. A fundamental assertion resulting from CLT is the convergence in distribution (CID) of the interpolated standardized time series (ISTS) of tallies. In this work, the spectral analysis of ISTS has been conducted in order to assess the convergence of tallies in terms of CID. Numerical results obtained indicate that the power spectrum of ISTS is equal to the theoretically predicted power spectrum of Brownian motion for tallies of effective neutron multiplication factor; on the other hand, the power spectrum of ISTS of a strongly correlated series of tallies from local powers fluctuates wildly while maintaining the spectral form of fractional Brownian motion. The latter result is the evidence of a case where a series of tallies are away from CID, while the spectral form supports normality assumption on the sample mean. It is also demonstrated that one can make the unbiased estimation of the standard deviation of sample mean well before CID occurs. (author)
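The Brownian-motion benchmark invoked in the abstract can be checked numerically: the periodogram of a discrete random walk falls off as 1/f², i.e. with a log-log slope near -2. A small illustration (not the author's code) using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14
walk = np.cumsum(rng.standard_normal(n))   # discrete Brownian motion

# One-sided periodogram (drop the zero-frequency bin).
spec = np.abs(np.fft.rfft(walk)) ** 2 / n
freqs = np.fft.rfftfreq(n)
spec, freqs = spec[1:], freqs[1:]

# Fit the log-log slope at low frequencies, where the 1/f^2
# behaviour dominates; for Brownian motion it should be close to -2.
mask = freqs < 0.05
slope = np.polyfit(np.log(freqs[mask]), np.log(spec[mask]), 1)[0]
print(round(slope, 2))
```

A strongly correlated tally series with fractional-Brownian-motion character would instead show a slope between -1 and -3 that differs from -2, which is the kind of spectral diagnostic the paper exploits.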
An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.
Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard
2016-09-01
Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially, that is, time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUClogd and AUCord, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUClogd and AUCord. Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting. © 2016 Society for the Experimental Analysis of Behavior.
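The trapezoidal AUC and the two proposed rescalings can be sketched in a few lines. The function below is an illustrative reading of the method; the names and the log10(d+1) handling of a zero delay are our assumptions, not the paper's:

```python
import math

def discounting_auc(delays, indifference, amount, scale="linear"):
    """Normalized area under the discounting curve.

    scale: 'linear' (conventional AUC), 'log' (base-10 log of delay,
    AUClogd), or 'ordinal' (rank of delay, AUCord).
    The +1 inside the log is our assumption to handle a zero delay.
    """
    if scale == "linear":
        x = list(delays)
    elif scale == "log":
        x = [math.log10(d + 1) for d in delays]
    elif scale == "ordinal":
        x = list(range(len(delays)))
    else:
        raise ValueError(scale)
    # Normalize delays to [0, 1] and indifference points by the amount.
    span = x[-1] - x[0]
    xn = [(xi - x[0]) / span for xi in x]
    y = [v / amount for v in indifference]
    # Sum of trapezoid areas.
    return sum((xn[i + 1] - xn[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(xn) - 1))

delays = [0, 7, 30, 180, 365]          # days (invented example)
indiff = [100, 80, 60, 40, 25]         # indifference points for $100
for scale in ("linear", "log", "ordinal"):
    print(scale, round(discounting_auc(delays, indiff, 100, scale), 3))
```

Because the log and ordinal rescalings spread the short delays out, the long-delay indifference points no longer dominate the area, which is the imbalance the abstract describes.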
Temperature issues with white laser diodes, calculation and approach for new packages
Lachmayer, Roland; Kloppenburg, Gerolf; Stephan, Serge
2015-01-01
Bright white light sources are of significant importance for automotive front lighting systems. Today's upper-class systems mainly use HID or LED light sources. As a further step, laser diode based systems offer high luminance and efficiency and allow the realization of new dynamic and adaptive light functions and styling concepts. The use of white laser diode systems in automotive applications is still limited to laboratories and prototypes, even though announcements of laser based front lighting systems have been made. But the environmental conditions for vehicles and other industry sectors differ from laboratory conditions. Therefore a model of the system's thermal behavior is set up. The power loss of a laser diode is transported as thermal flux from the junction layer to the diode's case and on to the environment. Its optical power is therefore limited by the maximum junction temperature (typically 125 - 150 °C for blue diodes), the environment temperature, and the diode's packaging with its thermal resistances. In a car's headlamp the environment temperature can reach up to 80 °C. As the difference between the allowed case temperature and the environment temperature becomes small or negative, the permissible heat flux also becomes small or negative. In early stages of LED development similar challenges had to be solved. Adapting LED packages to the conditions in a vehicle environment led to today's efficient and bright headlights. In this paper the need to transfer these results to laser diodes is shown by calculating the diodes' lifetimes based on the presented model.
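The junction-to-ambient heat path described above can be sketched as a series thermal-resistance chain; the resistance and power values below are illustrative placeholders, not values from the paper:

```python
def junction_temperature(t_ambient, power_loss, r_junction_case, r_case_ambient):
    """Steady-state junction temperature for a series thermal path [degC].

    T_j = T_amb + P_loss * (R_jc + R_ca)
    """
    return t_ambient + power_loss * (r_junction_case + r_case_ambient)

def max_dissipation(t_ambient, t_junction_max, r_junction_case, r_case_ambient):
    """Largest power loss that keeps the junction below its limit [W]."""
    return (t_junction_max - t_ambient) / (r_junction_case + r_case_ambient)

# Illustrative: 80 degC headlamp environment, 10 K/W junction-to-case,
# 15 K/W case-to-ambient, 150 degC junction limit.
print(junction_temperature(80.0, 2.0, 10.0, 15.0))   # 130.0 degC at 2 W loss
print(max_dissipation(80.0, 150.0, 10.0, 15.0))      # 2.8 W allowed
```

The sketch makes the abstract's point concrete: at 80 °C ambient, the temperature headroom (and hence the permissible dissipation) shrinks rapidly unless the package thermal resistances are reduced.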
Energy Technology Data Exchange (ETDEWEB)
Manning, Karessa L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dolislager, Fredrick G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bellamy, Michael B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-11-01
The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening level tools that set forth the Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify isotopes contributing the highest risk and dose as well as establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil to plant interactions require that updates be implemented into the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil to tissue transfer factors (TFs) for animals and soil to plant transfer factors (BVs). New biota have been added in both the produce and animal products categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John
and the hydraulic gradient across the control plane and are consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. Tests show that the decoupled approach is both efficient and able to provide accurate uncertainty...
A Modified Approach in Modeling and Calculation of Contact Characteristics of Rough Surfaces
Directory of Open Access Journals (Sweden)
J.A. Abdo
2005-12-01
Full Text Available A mathematical formulation for the contact of rough surfaces is presented. The derivation of the contact model is facilitated through the definition of plastic asperities that are assumed to be embedded at a critical depth within the actual surface asperities. The surface asperities are assumed to deform elastically whereas the plastic asperities experience only plastic deformation. The deformation of plastic asperities is made to obey the law of conservation of volume. It is believed that the proposed model is advantageous since (a) it provides a more accurate account of the elastic-plastic behavior of surfaces in contact and (b) it is applicable to model formulations that involve asperity shoulder-to-shoulder contact. Comparison of numerical results for estimating true contact area and contact force using the proposed model and the earlier methods suggests that the proposed approach provides a more realistic prediction of elastic-plastic contact behavior.
Regional Data Assimilation Using a Stretched-Grid Approach and Ensemble Calculations
Fox-Rabinovitz, M. S.; Takacs, L. L.; Govindaraju, R. C.; Atlas, Robert (Technical Monitor)
2002-01-01
The global variable resolution stretched grid (SG) version of the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS) incorporating the GEOS SG-GCM (Fox-Rabinovitz 2000, Fox-Rabinovitz et al. 2001a,b), has been developed and tested as an efficient tool for producing regional analyses and diagnostics with enhanced mesoscale resolution. The major area of interest with enhanced regional resolution used in different SG-DAS experiments includes a rectangle over the U.S. with 50 or 60 km horizontal resolution. The analyses and diagnostics are produced for all mandatory levels from the surface to 0.2 hPa. The assimilated regional mesoscale products are consistent with global scale circulation characteristics due to using the SG-approach. Both the stretched grid and basic uniform grid DASs use the same amount of global grid-points and are compared in terms of regional product quality.
International Nuclear Information System (INIS)
Bernholc, J.
1998-01-01
The field of computational materials physics has grown very quickly in the past decade, and it is now possible to simulate properties of complex materials completely from first principles. The presentation mostly focused on first-principles dynamic simulations. Such simulations were pioneered by Car and Parrinello, who introduced a method for performing realistic simulations within the context of density functional theory. The Car-Parrinello method and related plane-wave approaches are reviewed in depth and illustrated with several applications: the dynamics of the C60 solid, diffusion across Si steps, and the computation of free energy differences. Alternative ab initio simulation schemes, which use preconditioned conjugate gradient techniques for energy minimization and dynamics, were also discussed.
Energy Technology Data Exchange (ETDEWEB)
Kirby, B.; King, J.; Milligan, M.
2012-06-01
The anticipated increase in variable generation in the Western Interconnection over the next several years has raised concerns about how to maintain system balance, especially in smaller Balancing Authority Areas (BAAs). Given renewable portfolio standards in the West, it is possible that more than 50 gigawatts of wind capacity will be installed by 2020. Significant quantities of solar generation are likely to be added as well. The consequent increase in variability and uncertainty that must be managed by the conventional generation fleet and responsive loads has resulted in a proposal for an Energy Imbalance Market (EIM). This paper extends prior work to estimate the reserve requirements for regulation, spinning, and non-spinning reserves with and without the EIM. We also discuss alternative approaches to allocating reserve requirements and show that some apparently attractive allocation methods have undesired consequences.
Directory of Open Access Journals (Sweden)
Brent Stephens
2018-02-01
Full Text Available High efficiency particle air filters are increasingly being recommended for use in heating, ventilating, and air-conditioning (HVAC) systems to improve indoor air quality (IAQ). ISO Standard 16890-2016 provides a methodology for approximating mass-based particle removal efficiencies for PM1, PM2.5, and PM10 using size-resolved removal efficiency measurements for 0.3 µm to 10 µm particles. Two historical volume distribution functions for ambient aerosol distributions are assumed to represent ambient air in urban and rural areas globally. The goals of this work are to: (i) review the ambient aerosol distributions used in ISO 16890, (ii) evaluate the sensitivity of the mass-based removal efficiency calculation procedures described in ISO 16890 to various assumptions that are related to indoor and outdoor aerosol distributions, and (iii) recommend several modifications to the standard that can yield more realistic estimates of mass-based removal efficiencies for HVAC filters, and thus provide a more realistic representation of a greater number of building scenarios. The results demonstrate that knowing the PM mass removal efficiency estimated using ISO 16890 is not sufficient to predict the PM mass removal efficiency in all of the environments in which the filter might be used. The main reason for this insufficiency is that the assumptions for aerosol number and volume distributions can substantially impact the results, albeit with some exceptions.
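The mass-based efficiency in the standard is essentially a mass-weighted average of the size-resolved efficiencies over an assumed ambient volume distribution. A simplified sketch of that weighting; the bin values are invented for illustration and are not the ISO 16890 distributions:

```python
def mass_based_efficiency(size_efficiency, mass_fraction):
    """Mass-weighted removal efficiency over particle-size bins.

    size_efficiency -- fractional removal efficiency per size bin
    mass_fraction   -- mass (or volume) in each bin for the assumed
                       ambient aerosol distribution
    """
    total = sum(mass_fraction)
    return sum(e * m for e, m in zip(size_efficiency, mass_fraction)) / total

# Invented example: four bins spanning 0.3-2.5 um for a PM2.5 estimate.
eff = [0.35, 0.50, 0.70, 0.90]       # efficiency rises with particle size
mass = [0.10, 0.25, 0.40, 0.25]      # assumed ambient mass distribution
print(round(mass_based_efficiency(eff, mass), 3))  # 0.665
```

The sensitivity the paper investigates is visible directly in this form: shifting `mass_fraction` toward the small bins, where efficiency is lowest, lowers the PM efficiency even though the filter itself is unchanged.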
Evaluation of a rapid LMP-based approach for calculating marginal unit emissions
International Nuclear Information System (INIS)
Rogers, Michelle M.; Wang, Yang; Wang, Caisheng; McElmurry, Shawn P.; Miller, Carol J.
2013-01-01
Highlights: • Pollutant emissions estimated based on locational marginal price and eGRID data. • Stochastic model using the IEEE RTS-96 system used to evaluate the LMP approach. • Incorporating a membership function enhanced the reliability of the pollutant estimates. • Estimation errors quantified for NO x and SO 2 . - Abstract: To evaluate the sustainability of systems that draw power from electrical grids, there is a need to rapidly and accurately quantify pollutant emissions associated with power generation. Air emissions resulting from electricity generation vary widely among power plants based on the types of fuel consumed, the efficiency of the plant, and the type of pollution control systems in service. To address this need, methods for estimating real-time air emissions from power generation based on locational marginal prices (LMPs) have been developed. Based on LMPs, the type of the marginal generating unit can be identified and pollutant emissions estimated. While conceptually demonstrated, this LMP approach has not been rigorously tested. The purpose of this paper is to (1) improve the LMP method for predicting pollutant emissions and (2) evaluate the reliability of this technique through power system simulations. Previous LMP methods were expanded to include marginal emissions estimates using an LMP Emissions Estimation Method (LEEM). The accuracy of emission estimates was further improved by incorporating a probability distribution function that characterizes generator fuel costs and a membership function (MF) capable of accounting for multiple marginal generation units. Emission estimates were compared to those predicted from power flow simulations. The improved LEEM was found to predict the marginal generation type approximately 70% of the time under typical system conditions (e.g. loads and fuel costs) without the use of a MF. With the addition of a MF, the LEEM was found to provide emission estimates with
Yan, Qing; Gao, Xu; Huang, Lei; Gan, Xiu-Mei; Zhang, Yi-Xin; Chen, You-Peng; Peng, Xu-Ya; Guo, Jin-Song
2014-03-01
The occurrence and fate of twenty-one pharmaceutically active compounds (PhACs) were investigated in different steps of the largest wastewater treatment plant (WWTP) in Southwest China. Concentrations of these PhACs were determined in both the wastewater and sludge phases by high-performance liquid chromatography coupled with electrospray ionization tandem mass spectrometry. Results showed that 21 target PhACs were present in wastewater and 18 in sludge. The calculated total mass loads of PhACs per capita to the influent, the receiving water and the sludge were 4.95 mg d(-1) person(-1), 889.94 μg d(-1) person(-1) and 78.57 μg d(-1) person(-1), respectively. The overall removal efficiency of the individual PhACs ranged from "negative removal" to almost complete removal. Mass balance analysis revealed that biodegradation is believed to be the predominant removal mechanism, and sorption onto sludge was a relevant removal pathway for quinolone antibiotics, azithromycin and simvastatin, accounting for 9.35-26.96% of the initial loadings. However, the sorption of the other selected PhACs was negligible. The overall pharmaceutical consumption in Chongqing, China, was back-calculated from influent concentrations by considering the pharmacokinetics of PhACs in humans. The back-estimated usage was in good agreement with the known usage of ofloxacin (agreement ratio: 72.5%). However, the back-estimated usage of PhACs requires further verification. Generally, the average influent mass loads and back-calculated annual per capita consumption of the selected antibiotics were comparable to or higher than those reported in developed countries, while the opposite was true for the other target PhACs. Copyright © 2013 Elsevier Ltd. All rights reserved.
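The per-capita mass loads above follow from influent concentration, plant flow, and the population served; a sketch with invented numbers (not the study's data):

```python
def per_capita_load_mg(conc_ng_per_L, flow_m3_per_day, population):
    """Per-capita mass load to the plant [mg d^-1 person^-1].

    conc_ng_per_L   -- influent concentration [ng/L]
    flow_m3_per_day -- plant influent flow [m^3/d] (1 m^3 = 1000 L)
    """
    load_ng_per_day = conc_ng_per_L * flow_m3_per_day * 1000.0
    return load_ng_per_day / population / 1e6   # ng -> mg

# Invented example: 1000 ng/L influent, 100,000 m^3/d, 1,000,000 people.
print(per_capita_load_mg(1000.0, 100000.0, 1000000.0))  # 0.1 mg/d/person
```

The back-calculation of consumption mentioned in the abstract inverts this chain, dividing the load by the excreted (non-metabolized) fraction from pharmacokinetic data.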
International Nuclear Information System (INIS)
Preumont, A.; Shilab, S.; Cornaggia, L.; Reale, M.; Labbe, P.; Noe, H.
1992-01-01
This benchmark exercise is the continuation of the state-of-the-art review (EUR 11369 EN), which concluded that the random vibration approach could be an effective tool in seismic analysis of nuclear power plants, with potential advantages over time history and response spectrum techniques. As compared to the latter, the random vibration method provides an accurate treatment of multisupport excitations, non-classical damping as well as the combination of high-frequency modal components. With respect to the former, the random vibration method offers direct information on statistical variability (probability distribution) and cheaper computations. The disadvantages of the random vibration method are that it is based on stationary results, and requires a power spectral density input instead of a response spectrum. A benchmark exercise to compare the three methods in the various aspects mentioned above, on one or several simple structures, was therefore carried out. The following aspects have been covered with the simplest possible models: (i) statistical variability, (ii) multisupport excitation, (iii) non-classical damping. The random vibration method is therefore concluded to be a reliable method of analysis. Its use is recommended, particularly for preliminary design, owing to its computational advantage over multiple time history analyses.
Goecke, Tobias; Fischer, Christian S.; Williams, Richard
2011-10-01
We present a calculation of the hadronic vacuum polarisation (HVP) tensor within the framework of Dyson-Schwinger equations. To this end we use a well-established phenomenological model for the quark-gluon interaction with parameters fixed to reproduce hadronic observables. From the HVP tensor we compute both the Adler function and the HVP contribution to the anomalous magnetic moment of the muon, a_μ. We find a_μ^HVP = 6760 × 10^-11, which deviates by about two percent from the value extracted from experiment. Additionally, we make comparison with a recent lattice determination of a_μ^HVP and find good agreement within our approach. We also discuss the implications of our result for a corresponding calculation of the hadronic light-by-light scattering contribution to a_μ.
International Nuclear Information System (INIS)
Ribeiro, Franciane; Mazer, Amanda Cristina; Hormaza, Joel Mesa
2016-01-01
Many models for calculating the stopping power of protons are very successful at high energies and are extrapolated to low-energy regions. From the point of view of proton-beam application in cancer treatment, it is precisely this low-energy region that is most relevant, owing to the depth dose deposition profile of protons. In this work, we present a calculation of the stopping power of protons in a water target using the dielectric formalism in the MELF-GOS approach. The results, when compared to other models, show good agreement for energies above 100 keV and lower values below this energy. This result should affect the calculated proton ranges and the Bragg peak position. (author)
International Nuclear Information System (INIS)
Khan, S.H.; Ivanov, A.A.
1993-01-01
This paper describes an approximate method for calculating the static characteristics of linear step motors (LSM), being developed for control rod drives (CRD) in large nuclear reactors. The static characteristic of such an LSM, which is given by the variation of electromagnetic force with armature displacement, determines the motor performance in its standing and dynamic modes. The approximate method of calculation of these characteristics is based on the permeance analysis method applied to the phase magnetic circuit of the LSM. This is a simple, fast and efficient analytical approach which gives satisfactory results for small stator currents and weak iron saturation, typical of the standing mode of operation of the LSM. The method is validated by comparing theoretical results with experimental ones. (Author)
Directory of Open Access Journals (Sweden)
Bo Dong
2015-01-01
Full Text Available During geomagnetic disturbances, the telluric currents which are driven by the induced electric fields will flow in conductive Earth. An approach to model the Earth conductivity structures with lateral conductivity changes for calculating geoelectric fields is presented in this paper. Numerical results, which are obtained by the Finite Element Method (FEM with a planar grid in two-dimensional modelling and a solid grid in three-dimensional modelling, are compared, and the flow of induced telluric currents in different conductivity regions is demonstrated. Then a three-dimensional conductivity structure is modelled and the induced currents in different depths and the geoelectric field at the Earth’s surface are shown. The geovoltages by integrating the geoelectric field along specific paths can be obtained, which are very important regarding calculations of geomagnetically induced currents (GIC in ground-based technical networks, such as power systems.
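The geovoltage mentioned above is the line integral of the horizontal geoelectric field along a conductor path. A minimal numerical sketch (uniform illustrative field, trapezoidal integration; not the paper's FEM code):

```python
import numpy as np

def geovoltage(e_field, path):
    """Line integral of E along a piecewise-linear path [V].

    e_field -- (n, 2) E vectors [V/m] sampled at the path vertices
    path    -- (n, 2) x, y coordinates of the vertices [m]
    """
    segments = np.diff(path, axis=0)              # dl for each segment
    e_mid = 0.5 * (e_field[:-1] + e_field[1:])    # trapezoidal E average
    return float(np.sum(np.einsum("ij,ij->i", e_mid, segments)))

# Illustrative check: a uniform 1 mV/m northward field along a
# 100 km northward line should give 100 V.
path = np.column_stack([np.zeros(101), np.linspace(0.0, 100e3, 101)])
e = np.tile([0.0, 1e-3], (101, 1))
print(geovoltage(e, path))  # 100.0
```

In a laterally varying conductivity structure the field is no longer uniform, so the voltage becomes path-dependent in magnitude, which is why the paper evaluates it along the specific routes of the technical network.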
In silico approaches to study mass and energy flows in microbial consortia: a syntrophic case study
Directory of Open Access Journals (Sweden)
Mallette Natasha
2009-12-01
Full Text Available Abstract. Background: Three methods were developed for the application of stoichiometry-based network analysis approaches, including elementary mode analysis, to the study of mass and energy flows in microbial communities. Each has distinct advantages and disadvantages suitable for analyzing systems with different degrees of complexity and a priori knowledge. These approaches were tested and compared using data from the thermophilic, phototrophic mat communities from Octopus and Mushroom Springs in Yellowstone National Park (USA). The models were based on three distinct microbial guilds: oxygenic phototrophs, filamentous anoxygenic phototrophs, and sulfate-reducing bacteria. Two phases, day and night, were modeled to account for differences in the sources of mass and energy and the routes available for their exchange. Results: The in silico models were used to explore fundamental questions in ecology including the prediction of and explanation for measured relative abundances of primary producers in the mat, theoretical tradeoffs between overall productivity and the generation of toxic by-products, and the relative robustness of various guild interactions. Conclusion: The three modeling approaches represent a flexible toolbox for creating cellular metabolic networks to study microbial communities on scales ranging from cells to ecosystems. A comparison of the three methods highlights considerations for selecting the one most appropriate for a given microbial system. For instance, communities represented only by metagenomic data can be modeled using the pooled method which analyzes a community's total metabolic potential without attempting to partition enzymes to different organisms. Systems with extensive a priori information on microbial guilds can be represented using the compartmentalized technique, employing distinct control volumes to separate guild-appropriate enzymes and metabolites. If the complexity of a compartmentalized network creates an
An approach to 3D magnetic field calculation using numerical and differential algebra methods
International Nuclear Information System (INIS)
Caspi, S.; Helm, M.; Laslett, L.J.; Brady, V.O.
1992-01-01
Motivated by the need for new means for the specification and determination of 3D fields that are produced by electromagnetic lens elements in the region interior to coil windings, and seeking to obtain techniques that will be convenient for accurate conductor placement and dynamical study of particle motion, we have generalized the representation of a 2D magnetic field to 3D. We have shown that the three-dimensional magnetic field components of a multipole magnet in the curl-free, divergence-free region near the axis r=0 can be derived from one-dimensional functions A_n(z) and their derivatives (part 1). In the region interior to the coil windings of accelerator magnets the three spatial components of the magnetic field can be expressed in terms of ''harmonic components'' proportional to functions sin(nθ) or cos(nθ) of the azimuthal angle. The r,z dependence of any such component can then be expressed in terms of powers of r times functions A_n(z) and their derivatives. For two-dimensional configurations B_z of course is identically zero, the derivatives of A_n(z) vanish, and the harmonic components of the transverse field then acquire a simple proportionality B_{r,n} ∝ r^{n-1} sin(nθ), B_{θ,n} ∝ r^{n-1} cos(nθ), whereas in a 3D configuration the more complex nature of the field gives rise to additional so-called ''pseudomultipole'' components, as judged by the additional powers of r required in the development of the field. The 3D magnetic field arising at a sequence of field points, as a direct result of a specified current configuration or coil geometry, can be calculated explicitly through use of the Biot-Savart law, and from such data the coefficients can then be derived for a general development of the type indicated above. We indicate, discuss, and illustrate two means by which this development may be performed.
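As a minimal illustration of the Biot-Savart evaluation step (not the authors' code), the on-axis field of a circular current loop computed by summing directed current segments can be checked against the analytic result B_z = μ₀ I R² / (2 (R² + z²)^(3/2)):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def loop_field(point, radius, current, n_seg=2000):
    """B at `point` from a circular loop in the z=0 plane (Biot-Savart sum)."""
    # Segment midpoints and directed lengths dl along the loop.
    phi = (np.arange(n_seg) + 0.5) * (2.0 * np.pi / n_seg)
    mid = np.column_stack([radius * np.cos(phi), radius * np.sin(phi),
                           np.zeros(n_seg)])
    dl = np.column_stack([-np.sin(phi), np.cos(phi),
                          np.zeros(n_seg)]) * (2 * np.pi * radius / n_seg)
    r = point - mid
    dist = np.linalg.norm(r, axis=1)
    dB = MU0 * current / (4 * np.pi) * np.cross(dl, r) / dist[:, None] ** 3
    return dB.sum(axis=0)

# Illustrative geometry: 5 cm loop, 100 A, field 2 cm above the center.
R, I, z = 0.05, 100.0, 0.02
bz_numeric = loop_field(np.array([0.0, 0.0, z]), R, I)[2]
bz_exact = MU0 * I * R ** 2 / (2 * (R ** 2 + z ** 2) ** 1.5)
print(bz_numeric, bz_exact)  # the two values should agree to several digits
```

Repeating such evaluations on a grid of field points off axis supplies the data from which the harmonic coefficients of the expansion described above can be fitted.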
A New approach for evaluate a sandy soil infiltration to calculate the permeability
Mechergui, M. Mohamed; Latifa Dhaouadi, Ms
2016-04-01
Ten sites were chosen in the four-hectare field of the Research Regional Center of Oasis Agriculture in Deguache (Tozeur). The soil is homogeneous to a depth of 120 cm, with a sandy texture (60% coarse sand, 20% fine sand, 13% silt and 7% clay), a mean bulk density of 1.43 g/cm3, and field capacity and wilting point equal to 11.9% and 6%, respectively. The duration of each infiltration test was between 352 and 554 minutes. The number of observation points for each infiltration curve varies between 31 and 40. The shape of the infiltration curves observed at all sites is in part similar to what is observed in the literature (a strong increase of cumulative infiltration with time at short times, then a linear increase up to a time varying between 122 and 197 minutes depending on the site), followed by something unusual: a slowdown in the cumulative infiltration toward the end of the test. The plotted curves of F(t)/t^(1/2) versus t^(1/2) showed two distinct parts: a linear relation up to a time varying between 122 and 197 minutes, confirming the validity of Philip's model, and a second part showing a slowdown in the slope up to a time varying between 231 and 347 minutes depending on the site, after which the curve drops to the end of the test. This may be due to the rearrangement of particles after a long period of infiltration, which led to a decrease in hydraulic conductivity. To improve the calculation of the saturated hydraulic conductivity, we chose only the part that is validated by Philip's model, the linear part. The number of omitted points in the cumulative infiltration varies between 11 and 22. By this method, the saturated hydraulic conductivity varies between 1 and 3.72 m/day with a mean of 2.35, whereas the previous technique gave a mean value of 2.07. The new method is accurate and gives better results for K and sorptivity.
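The linear part of F(t)/√t versus √t gives the Philip-model parameters directly: F(t) = S√t + Kt implies F/√t = S + K√t, so a straight-line fit yields the sorptivity S (intercept) and the hydraulic conductivity K (slope). A sketch with synthetic data; the values are invented, not the field measurements:

```python
import numpy as np

def fit_philip(t, F):
    """Fit F(t) = S*sqrt(t) + K*t; returns (S, K).

    Regress F/sqrt(t) on sqrt(t): intercept = S, slope = K.
    """
    rt = np.sqrt(t)
    K, S = np.polyfit(rt, F / rt, 1)
    return S, K

# Synthetic cumulative infiltration with S = 2 (cm/min^0.5) and
# K = 0.05 (cm/min), plus small measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(1.0, 150.0, 40)
F = 2.0 * np.sqrt(t) + 0.05 * t + rng.normal(0.0, 0.05, t.size)
S, K = fit_philip(t, F)
print(round(S, 2), round(K, 3))
```

Restricting the fit to the early, linear portion of F/√t versus √t, as the abstract describes, avoids biasing K with the late-time slowdown attributed to particle rearrangement.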
International Nuclear Information System (INIS)
Vladimir Ya Kumaev; Andrei A Lebezov; Victor V Alexeev
2005-01-01
Full text of publication follows: The report is devoted to the development and application of the two-dimensional MASKA-LM computer code intended for numerical calculations of lead coolant flows, temperatures and transport of impurities in BREST-type reactors of the integral design. The description of heat and mass transfer in liquid metal systems, proceeding in the coolant and at the interface between the coolant and structural materials, is a complex problem involving the joint simulation of thermal-hydraulic, physical and chemical processes in view of the real configuration of the reactor circuit. The report presents the state of the art in the development of the two-dimensional code MASKA-LM and the results of trial calculations of heat and mass transfer in the primary circuit of the lead-cooled reactor. The set of governing equations to be solved is based on the porous body model and describes the thermal-hydraulic processes in the reactor as a whole. The numerical method for solution of the governing equations is discussed. To check the code's workability and to study the technique by solving a particular task, calculations were performed for the chosen version of the lead-cooled BREST reactor under design. The examined domain of the reactor was simulated by a porous body with parameters corresponding to those of the real reactor medium in terms of heat generation, resistance and the geometry of the hydraulic path of the coolant. Analysis of the calculated two-dimensional fields of velocities, pressure and temperatures shows the existence of a complex coolant flow with stagnant and vortex zones. A nonuniform distribution of the coolant flow rate along the core radius was obtained. The results of calculations of the impurity transport of iron, oxygen and magnetite in the primary reactor circuit are discussed as well. The developed code MASKA-LM allows one to evaluate the release of structural material components into the coolant as impurities, their
International Nuclear Information System (INIS)
El-Nabulsi, Ahmad Rami
2009-01-01
A multidimensional fractional actionlike variational problem with time-dependent dynamical fractional exponents is constructed. Fractional Euler-Lagrange equations are derived and discussed in some detail. The results obtained are used to explore some novel aspects of fractional quantum field theory, where many interesting consequences are revealed, in particular the complexification of quantum field theory, including Dirac operators and the novel notion of 'mass without mass'.
Kou, Qiang; Wu, Si; Tolic, Nikola; Paša-Tolic, Ljiljana; Liu, Yunlong; Liu, Xiaowen
2017-05-01
Although proteomics has rapidly developed in the past decade, researchers are still in the early stage of exploring the world of complex proteoforms, which are protein products with various primary structure alterations resulting from gene mutations, alternative splicing, post-translational modifications, and other biological processes. Proteoform identification is essential to mapping proteoforms to their biological functions as well as discovering novel proteoforms and new protein functions. Top-down mass spectrometry is the method of choice for identifying complex proteoforms because it provides a 'bird's eye view' of intact proteoforms. The combinatorial explosion of various alterations on a protein may result in billions of possible proteoforms, making proteoform identification a challenging computational problem. We propose a new data structure, called the mass graph, for efficient representation of proteoforms and design mass graph alignment algorithms. We developed TopMG, a mass graph-based software tool for proteoform identification by top-down mass spectrometry. Experiments on top-down mass spectrometry datasets showed that TopMG outperformed existing methods in identifying complex proteoforms. http://proteomics.informatics.iupui.edu/software/topmg/. xwliu@iupui.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
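The mass graph idea can be illustrated with a toy sketch (a hypothetical miniature, not TopMG's actual data structure or API): nodes mark positions along a protein, edges carry residue masses, and parallel edges at a position encode a variable modification, so every source-to-sink path spells out one candidate proteoform mass. The residue and modification masses below are standard monoisotopic values used purely for illustration.

```python
from itertools import product

# Toy 3-residue "protein": parallel edges model an optional modification.
# Each position lists (label, monoisotopic mass in Da); +79.966 ~ phospho.
positions = [
    [("S", 87.032), ("S+p", 167.998)],   # Ser, optionally phosphorylated
    [("G", 57.021)],                      # Gly, unmodified only
    [("T", 101.048), ("T+p", 181.014)],  # Thr, optionally phosphorylated
]

# Every path through the graph corresponds to one proteoform; collect
# each path's label and total mass.
proteoforms = {}
for path in product(*positions):
    label = "-".join(res for res, _ in path)
    proteoforms[label] = round(sum(m for _, m in path), 3)
```

Even this 3-residue toy yields 2 x 1 x 2 = 4 proteoforms from one compact graph, which is the combinatorial explosion the abstract describes; the graph stores the alternatives once instead of enumerating every proteoform explicitly.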
Energy Technology Data Exchange (ETDEWEB)
Chen, Hanning; McMahon, J. M.; Ratner, Mark A.; Schatz, George C.
2010-09-02
A new multiscale computational methodology was developed to effectively incorporate the scattered electric field of a plasmonic nanoparticle into a quantum mechanical (QM) optical property calculation for a nearby dye molecule. For a given location of the dye molecule with respect to the nanoparticle, a frequency-dependent scattering response function was first determined by the classical electrodynamics (ED) finite-difference time-domain (FDTD) approach. Subsequently, the time-dependent scattered electric field at the dye molecule was calculated using the FDTD scattering response function through a multidimensional Fourier transform to reflect the effect of polarization of the nanoparticle on the local field at the molecule. Finally, a real-time time-dependent density functional theory (RT-TDDFT) approach was employed to obtain a desired optical property (such as the absorption cross section) of the dye molecule in the presence of the nanoparticle's scattered electric field. Our hybrid QM/ED methodology was demonstrated by investigating the absorption spectrum of the N3 dye molecule and the Raman spectrum of pyridine, both of which were shown to be significantly enhanced by a 20 nm diameter silver sphere. In contrast to traditional quantum mechanical optical calculations, in which the field at the molecule is entirely determined by the intensity and polarization direction of the incident light, in this work we show that the light propagation direction, as well as polarization and intensity, is important to the response of a nanoparticle-bound dye molecule. At no additional computational cost compared to conventional ED and QM calculations, this method provides a reliable way to couple the response of the dye molecule's individual electrons to the collective dielectric response of the nanoparticle.
Sigmund, Gerd; Koch, Anja; Orlovius, Anne-Katrin; Guddat, Sven; Thomas, Andreas; Schänzer, Wilhelm; Thevis, Mario
2014-01-01
Since January 2014, the anti-anginal drug trimetazidine [1-(2,3,4-trimethoxybenzyl)-piperazine] has been classified as prohibited substance by the World Anti-Doping Agency (WADA), necessitating specific and robust detection methods in sports drug testing laboratories. In the present study, the implementation of the intact therapeutic agent into two different initial testing procedures based on gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) is reported, along with the characterization of urinary metabolites by electrospray ionization-high resolution/high accuracy (tandem) mass spectrometry. For GC-MS analyses, urine samples were subjected to liquid-liquid extraction sample preparation, while LC-MS/MS analyses were conducted by established 'dilute-and-inject' approaches. Both screening methods were validated for trimetazidine concerning specificity, limits of detection (0.5-50 ng/mL), intra-day and inter-day imprecision (doping control samples were used to complement the LC-MS/MS-based assay, although intact trimetazidine was found at highest abundance of the relevant trimetazidine-related analytes in all tested sports drug testing samples. Retrospective data mining regarding doping control analyses conducted between 1999 and 2013 at the Cologne Doping Control Laboratory concerning trimetazidine revealed a considerable prevalence of the drug particularly in endurance and strength sports accounting for up to 39 findings per year. Copyright © 2014 John Wiley & Sons, Ltd.
Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.
2015-01-01
A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes are instantaneously detected. The result is identification and species level classification based on the entire DART-MS spectrum. Here, we illustrate how the method can be used to: (1) distinguish between endangered woods regulated by the Convention for the International Trade of Endangered Flora and Fauna (CITES) treaty; (2) assess the origin and by extension the properties of biodiesel feedstocks; (3) determine insect species from analysis of puparial casings; (4) distinguish between psychoactive plant products; and (5) differentiate between Eucalyptus species. An advantage of the hierarchical clustering approach to processing of the DART-MS derived fingerprint is that it shows both similarities and differences between species based on their chemotypes. Furthermore, full knowledge of the identities of the constituents contained within the small molecule profile of analyzed samples is not required. PMID:26156000
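The clustering step can be sketched as follows (the spectra, grid size, linkage method, and noise level are all illustrative assumptions, not the authors' pipeline): each fingerprint is treated as an intensity vector over a shared m/z grid and fed to unsupervised hierarchical clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic DART-MS-like fingerprints: 6 spectra on a 100-bin m/z grid,
# 3 replicates per "species", each species with its own peak pattern.
base_a = np.zeros(100); base_a[[10, 40, 70]] = [1.0, 0.6, 0.3]
base_b = np.zeros(100); base_b[[20, 55, 90]] = [0.9, 0.7, 0.4]
spectra = np.array([b + rng.normal(0.0, 0.02, 100)
                    for b in [base_a] * 3 + [base_b] * 3])

# Unsupervised hierarchical clustering on the whole fingerprint vectors.
Z = linkage(spectra, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
```

As in the abstract, no peak identities are needed: the dendrogram (here cut into two clusters) groups replicates purely by overall fingerprint similarity.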
Geochemical modelling of Na-SO4 type groundwater at Palmottu using a mass balance approach
International Nuclear Information System (INIS)
Pitkaenen, P.
1993-01-01
The mass balance chemical modelling technique has been applied to the groundwaters at the Palmottu analogue study site (in southwestern Finland) for radioactive waste disposal. The geochemical modelling concentrates on the evolution of Na-SO4 type groundwater, which is spatially connected to the uranium mineralization. The results calculated along an assumed flow path are consistent with available field data and thermodynamic constraints. The results show that essential production of sulphides is unrealistic in the prevailing conditions. The increasing concentrations of Na, SO4 and Cl along the evolution trend seem to have the same source and they could originate mainly from the leakage of fluid inclusions. Some mixing of relict sea water is also possible.
Phuc, Huynh V.; Hieu, Nguyen N.; Hoi, Bui D.; Hieu, Nguyen V.; Thu, Tran V.; Hung, Nguyen M.; Ilyasov, Victor V.; Poklonski, Nikolai A.; Nguyen, Chuong V.
2018-01-01
In this paper, we studied the electronic properties, effective masses, and carrier mobility of monolayer MoS2 using density functional theory calculations. The carrier mobility was obtained by means of ab initio calculations using the Boltzmann transport equation coupled with deformation potential theory. The effects of mechanical biaxial strain on the electronic properties, effective mass, and carrier mobility of monolayer MoS2 were also investigated. It is demonstrated that the electronic properties, such as the band structure and density of states, of monolayer MoS2 are very sensitive to biaxial strain, leading to a direct-indirect transition in the semiconducting monolayer MoS2. Moreover, we found that the carrier mobility and effective mass can be enhanced significantly by biaxial strain and by lowering the temperature. The electron mobility increases more than 12-fold at a biaxial strain of 10%, while the carrier mobility gradually decreases with increasing temperature. These results are useful for future nanotechnology and make monolayer MoS2 a promising candidate for applications in nanoelectronic and optoelectronic devices.
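The temperature trend follows directly from the standard 2D deformation-potential (Bardeen-Shockley type) expression, mu = e*hbar^3*C2D / (kB*T*m*_e*m_d*E1^2), commonly used with Boltzmann transport for monolayers. The sketch below evaluates it with illustrative MoS2-like input values (the elastic modulus, effective mass, and deformation potential are assumptions for demonstration, not the paper's computed parameters):

```python
# 2D deformation-potential mobility: mu = e*hbar^3*C2D / (kB*T*m_eff*m_d*E1^2)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
kB = 1.380649e-23        # Boltzmann constant, J/K
m_e = 9.1093837015e-31   # electron mass, kg

def mobility_2d(C2D, m_eff, m_d, E1_eV, T):
    """Carrier mobility in m^2/(V*s) from 2D deformation potential theory."""
    E1 = E1_eV * e  # deformation potential constant, J
    return e * hbar**3 * C2D / (kB * T * m_eff * m_d * E1**2)

# Illustrative MoS2-like inputs (assumed, order-of-magnitude only):
C2D, m_eff, E1 = 130.0, 0.48 * m_e, 7.0   # N/m, kg, eV
mu_300 = mobility_2d(C2D, m_eff, m_eff, E1, 300.0)
mu_600 = mobility_2d(C2D, m_eff, m_eff, E1, 600.0)
```

The explicit 1/T factor reproduces the trend reported above: mobility falls monotonically as temperature rises, and it grows quadratically as strain reduces the effective mass.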
Han, Young-Soo; Tokunaga, Tetsu K
2014-12-01
Renewed interest in managing C balance in soils is motivated by increasing atmospheric concentrations of CO2 and consequent climate change. Here, experiments were conducted in soil columns to determine C mass balances with and without addition of CaSO4 minerals (anhydrite and gypsum), which were hypothesized to promote soil organic carbon (SOC) retention and soil inorganic carbon (SIC) precipitation as calcite under slightly alkaline conditions. Changes in C contents in three phases (gas, liquid and solid) were measured in unsaturated soil columns tested for one year and comprehensive C mass balances were determined. The tested soil columns had no C inputs, and only C utilization by microbial activity and C transformations were assumed in the C chemistry. The measurements showed that changes in C inventories occurred through two processes, SOC loss and SIC gain. However, the measured SOC losses in the treated columns were lower than in their corresponding control columns, indicating that the amendments promoted SOC retention. The SOC losses resulted mostly from microbial respiration and loss of CO2 to the atmosphere rather than from chemical leaching. Microbial oxidation of SOC appears to have been suppressed by increased Ca2+ and SO42- from dissolution of CaSO4 minerals. For the conditions tested, SIC accumulation per m2 of soil area under CaSO4 treatment ranged from 130 to 260 g C per m of infiltrated water (20-120 g C per m of infiltrated water as net C benefit). These results demonstrate the potential for increasing C sequestration in slightly alkaline soils via CaSO4 treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Uehara, Yasushi; Uchida, Shunsuke; Naitoh, Masanori; Okada, Hidetoshi; Koshizuka, Seiichi
2009-01-01
In order to predict and mitigate flow accelerated corrosion (FAC) of carbon steel piping in PWR and BWR secondary systems, computer program packages for evaluating FAC have been developed by coupling one- through three-dimensional (1-3D) computational fluid dynamics (CFD) models with corrosion models. To evaluate corrosive conditions, e.g., oxygen concentration and electrochemical corrosion potential (ECP) along the flow path, the flow pattern and temperature in each elemental volume were obtained with 1D CFD codes. Precise flow turbulence and mass transfer coefficients at the structure surface were calculated with 3D CFD codes to determine wall thinning rates. One engineering option is to apply a k-ε model as the 3D CFD code, but this has limitations for detailed evaluation of the flow distribution at the very surface of large-scale piping. A combination of the k-ε calculation and a wall function was therefore proposed to evaluate the precise distribution of mass transfer coefficients with reasonable CPU load and computing time and, at the same time, reasonable accuracy. (author)
Fission neutron multiplicity calculations
International Nuclear Information System (INIS)
Maerten, H.; Ruben, A.; Seeliger, D.
1991-01-01
A model for calculating neutron multiplicities in nuclear fission is presented. It is based on the solution of the energy partition problem as function of mass asymmetry within a phenomenological approach including temperature-dependent microscopic energies. Nuclear structure effects on fragment de-excitation, which influence neutron multiplicities, are discussed. Temperature effects on microscopic energy play an important role in induced fission reactions. Calculated results are presented for various fission reactions induced by neutrons. Data cover the incident energy range 0-20 MeV, i.e. multiple chance fission is considered. (author). 28 refs, 13 figs
Mass balance modelling of contaminants in river basins: a flexible matrix approach.
Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay
2005-12-01
A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
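The matrix formulation can be sketched as a minimal steady-state illustration in the spirit of the simpler QMX-R model (the flows, volumes, emissions, and the single first-order loss rate below are assumed values, and this is not the model's actual implementation): reach connectivity enters only through the entries of an n x n matrix, so re-segmenting the basin means editing matrix entries, not rewriting the solver.

```python
import numpy as np

# Three reaches in a chain: 1 -> 2 -> 3. A[i, j] = 1 if reach j flows into i.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

Q = np.array([2.0, 3.0, 5.0])   # outflow of each reach, m3/s (assumed)
V = np.array([1e5, 2e5, 3e5])   # reach volumes, m3 (assumed)
k = 1e-5                        # overall first-order loss rate, 1/s (assumed)
E = np.array([1.0, 0.5, 0.0])   # direct emissions to each reach, g/s (assumed)

# Steady state per reach i: (Q_i + k*V_i) * C_i = E_i + sum_j A[i,j] * Q_j * C_j
M = np.diag(Q + k * V) - A @ np.diag(Q)
C = np.linalg.solve(M, E)       # concentrations in each reach, g/m3
```

The "overall loss" rate constant `k` here lumps volatilization, net transfer to sediment, and degradation, as the QMX-R description does; the fugacity-based QMX-F variant would replace it with explicit per-process terms.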
Directory of Open Access Journals (Sweden)
J. Czerny
2013-05-01
Full Text Available Recent studies on the impacts of ocean acidification on pelagic communities have identified changes in carbon to nutrient dynamics with related shifts in elemental stoichiometry. In principle, mesocosm experiments provide the opportunity of determining temporal dynamics of all relevant carbon and nutrient pools and, thus, calculating elemental budgets. In practice, attempts to budget mesocosm enclosures are often hampered by uncertainties in some of the measured pools and fluxes, in particular due to uncertainties in constraining air–sea gas exchange, particle sinking, and wall growth. In an Arctic mesocosm study on ocean acidification applying KOSMOS (Kiel Off-Shore Mesocosms for future Ocean Simulation), all relevant element pools and fluxes of carbon, nitrogen and phosphorus were measured, using an improved experimental design intended to narrow down the mentioned uncertainties. Water-column concentrations of particulate and dissolved organic and inorganic matter were determined daily. New approaches for quantitative estimates of material sinking to the bottom of the mesocosms and gas exchange at 48 h temporal resolution, as well as estimates of wall growth, were developed to close the gaps in element budgets. However, losses of elements from the budgets into a sum of insufficiently determined pools were detected, and these are in principle unavoidable in mesocosm investigations. The comparison of variability patterns of all single measured datasets revealed analytical precision to be the main issue in the determination of budgets. Uncertainties in dissolved organic carbon (DOC), nitrogen (DON) and particulate organic phosphorus (POP) were much higher than the summed error in determination of the same elements in all other pools. With estimates provided for all other major elemental pools, mass balance calculations could be used to infer the temporal development of DOC, DON and POP pools. Future elevated pCO2 was found to enhance net autotrophic community carbon
Atwell, William; Koontz, Steve; Reddell, Brandon; Rojdev, Kristina; Franklin, Jennifer
2010-01-01
Both crew and radio-sensitive systems, especially electronics, must be protected from the effects of the space radiation environment. One method of mitigating this radiation exposure is to use passive shielding materials. In previous vehicle designs such as the International Space Station (ISS), materials such as aluminum and polyethylene have been used as parasitic shielding to protect crew and electronics from exposure, but these designs add mass and decrease the amount of usable volume inside the vehicle. Thus, it is of interest to understand whether structural materials can also be designed to provide the radiation shielding capability needed for crew and electronics, while still providing weight savings and increased usable volume when compared against previous vehicle shielding designs. In this paper, we present calculations and analysis using the HZETRN (deterministic) and FLUKA (Monte Carlo) codes to investigate the radiation mitigation properties of these structural shielding materials, which include graded-Z and composite materials. This work is also a follow-on to an earlier paper that compared computational results for three radiation transport codes, HZETRN, HETC, and FLUKA, using the Feb. 1956 solar particle event (SPE) spectrum. In the following analysis, we consider the October 1989 Ground Level Enhancement (GLE) SPE as the input source term, based on the Band function fitting method. Using HZETRN and FLUKA, parametric absorbed doses at the center of a hemispherical structure on the lunar surface are calculated for various thicknesses of graded-Z layups and an all-aluminum structure. The HZETRN and FLUKA calculations are compared and are in reasonable (18% to 27%) agreement. Both codes agree with respect to the predicted shielding material performance trends. The results from both HZETRN and FLUKA are analyzed, and the radiation protection properties and potential weight savings of various materials and material lay-ups are compared.
Esque, Jeremy; Cecchini, Marco
2015-04-23
The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent.
Energy Technology Data Exchange (ETDEWEB)
Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)
2003-07-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shut-down dose rates. This requires a suitable system of codes which is capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shut-down in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous 2-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron-induced production of a radio-isotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply the designers with the most reliable tools and data to calculate the dose rate on fusion machines. Results showed that there is a good agreement: the differences range between 5-35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
International Nuclear Information System (INIS)
Petrizzi, L.; Batistoni, P.; Migliori, S.; Chen, Y.; Fischer, U.; Pereslavtsev, P.; Loughlin, M.; Secco, A.
2003-01-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shut-down dose rates. This requires a suitable system of codes which is capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shut-down in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous 2-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron-induced production of a radio-isotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply the designers with the most reliable tools and data to calculate the dose rate on fusion machines. Results showed that there is a good agreement: the differences range between 5-35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
da Silva, Maria Cristina Rodrigues
1998-11-01
Molecular orbital methods and laser ionization mass spectrometry measurements are used to investigate the spectroscopy and relaxation dynamics of benzaldehyde following excitation to its S2(ππ*) state. Energies, equilibrium geometries and vibrational frequencies of ground and low-lying excited states of the benzaldehyde neutral and cation determined by ab initio calculations provide a theoretical description of the electronic spectroscopy of benzaldehyde and of the changes occurring on excitation and ionization. The S2(ππ*)←S0 excitation spectrum of jet-cooled benzaldehyde acquired using two-color laser ionization mass spectrometry techniques is interpreted with the aid of these calculations. The spectrum is dominated by the origin band and by transitions involving some of the ring modes, consistent with the results of the molecular orbital calculations, which indicate that the major geometric changes on excitation to S2 are located in the aromatic ring. Ten fundamental vibrations of the S2(ππ*) state are assigned. The dissociation dynamics of benzaldehyde into benzene and carbon monoxide following excitation to its S2(ππ*) state are investigated under jet-cooled conditions by two-color laser ionization mass spectrometry using a pump-probe technique. This experimental arrangement allows monitoring of the benzaldehyde reactant and benzene product ion signals as a function of the time delay between the excitation and ionization steps. A kinetic model is proposed to explain the observed biexponential decay of the benzaldehyde signal and the single exponential growth of the benzene product signal in terms of a sequential decay of two excited states of benzaldehyde, one of which leads to formation of benzene molecules in its lowest triplet state. Reactant disappearance and product appearance rates are determined for a number of vibronic transitions of the S2 state. They are found to increase with excitation energy without any indication
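The proposed kinetic scheme can be sketched numerically (the rate constants here are arbitrary placeholders, not the measured values): a sequential first-order decay A -> B -> C, where A and B stand for the two excited states of benzaldehyde and C for the benzene product, makes the reactant signal (proportional to A + B) biexponential while the product population rises with a delay.

```python
import math

def sequential(t, A0=1.0, k1=5.0, k2=1.0):
    """Closed-form populations for A -k1-> B -k2-> C (sequential first-order
    steps); standard textbook solutions, valid for k1 != k2."""
    A = A0 * math.exp(-k1 * t)
    B = A0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    C = A0 - A - B
    return A, B, C

# Reactant ion signal ~ A + B (biexponential decay);
# product ion signal ~ C (delayed, asymptotically single-exponential rise).
A, B, C = sequential(2.0)
```

Varying the pump-probe delay `t` and recording `A + B` versus `C` mirrors the experiment's reactant-disappearance and product-appearance traces.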
Energy Technology Data Exchange (ETDEWEB)
Oyama, Y [Hitachi Car Engineering, Ltd., Tokyo (Japan); Nishimura, Y; Osuga, M; Yamauchi, T [Hitachi, Ltd., Tokyo (Japan)
1997-10-01
Air flow characteristics of hot-wire air flow meters for gasoline fuel-injection systems with supercharging and exhaust gas recirculation during transient conditions were investigated in order to develop a simple method for calculating the air mass in the cylinder. It was clarified that the air mass in the cylinder can be calculated by compensating for the change of air mass in the intake system using aerodynamic models of the intake system. 3 refs., 6 figs., 1 tab.
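The compensation idea can be sketched with a generic ideal-gas manifold-filling model (an illustrative speed-density style calculation with assumed values, not the authors' aerodynamic model): the air mass flow into the cylinder equals the metered flow minus the rate at which air mass is being stored in the intake manifold.

```python
R_AIR = 287.0  # specific gas constant of air, J/(kg*K)

def cylinder_flow(m_dot_meter, p_man, p_man_prev, T_man, V_man, dt):
    """Cylinder air mass flow = metered flow - manifold storage rate.

    Manifold air mass from the ideal gas law: m = p*V/(R*T).
    Inputs in SI units: pressures in Pa, T_man in K, V_man in m^3, dt in s.
    """
    m_now = p_man * V_man / (R_AIR * T_man)
    m_prev = p_man_prev * V_man / (R_AIR * T_man)
    return m_dot_meter - (m_now - m_prev) / dt

# During a throttle tip-in the manifold pressure rises, so part of the
# metered air fills the manifold instead of entering the cylinder.
m_dot_cyl = cylinder_flow(0.020, 60000.0, 55000.0, 320.0, 0.003, 0.01)
```

In steady state the storage term vanishes and the meter reading equals the cylinder flow; it is during transients (supercharging, EGR steps) that the correction matters.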
Dabić, Dario; Brkljačić, Lidija; Tandarić, Tana; Žinić, Mladen; Vianello, Robert; Frkanec, Leo; Kobetić, Renata
2018-01-01
Gels formed by self-assembly of small organic molecules are of wide interest as dynamic soft materials with numerous possible applications, especially in nanotechnology for functional and responsive biomaterials, biosensors, and nanowires. Four bis-oxalamides were chosen to show whether electrospray ionization mass spectrometry (ESI-MS) could be used to predict a good gelator and also to shed light on the gelation processes. By inspecting gelation in several solvents, we showed that bis(amino acid)oxalamide 1 proved to be the most efficient, also being capable of forming the largest observable assemblies in the gas phase. The formation of singly charged assemblies holding from one up to six monomer units is the outcome of the strong intermolecular H-bonds, particularly among terminal carboxyl groups. The variation of solvents from polar aprotic towards polar protic did not have any significant effect on the size of the assemblies. The addition of a salt such as NaOAc or Mg(OAc)2, depending on the concentration, altered the assembly. Computational analysis at the DFT level aided the interpretation of the observed trends and revealed that individual gelator molecules spontaneously assemble into higher aggregates, but the presence of the Na+ cation disrupts any gelator organization, since it becomes significantly more favorable for gelator molecules to bind Na+ cations up to the 3:1 ratio than to self-assemble, fully in line with the experimental observations reported here. [Figure not available: see fulltext.]
International Nuclear Information System (INIS)
Sohn, Kyung Myung; Lee, Sung Yong; Lee, Keun Ho
2003-01-01
To determine the usefulness of the percentage enhancement washout value calculated from unenhanced, enhanced and delayed enhanced CT scans for the characterization of adrenal masses. Forty adrenal masses less than 5 cm in size were assessed using a protocol consisting of unenhanced CT, enhanced CT 60 seconds after intravenous administration of contrast material, and delayed enhanced CT at 10 minutes. The CT attenuation value of the adrenal tumors was estimated on each scan, and the percentage enhancement washout value was calculated as follows: [(attenuation value at enhanced CT - attenuation value at delayed CT) / (attenuation value at enhanced CT - attenuation value at unenhanced CT)] x 100. An adrenal mass was considered benign if its percentage enhancement washout value was at or above the threshold value, set at 60% or 50%. The accuracy of the procedure was determined by comparing its findings with the final clinical diagnosis. Twenty-nine masses were benign and 11 were malignant. The mean percentage enhancement washout value of the former was significantly higher than that of the latter (66.7% vs. 21.8%; p<0.01). All adenomas except one had a washout value of more than 50%. With a percentage washout threshold of 60%, 35 of 40 lesions were correctly characterized as benign or malignant [sensitivity 82.7% (24/29), specificity 100% (11/11), accuracy 87.5% (35/40)]; with a threshold of 50%, 39 of 40 lesions were correctly characterized [sensitivity 96.5% (28/29), specificity 100% (11/11), accuracy 97.5% (39/40)]. Percentage enhancement washout values are useful for characterizing an adrenal mass as benign or malignant. For characterization, a threshold value of 50% was more accurate than one of 60%.
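The washout criterion reduces to a two-line calculation. The sketch below uses the formula and the 50% threshold stated above; the example attenuation values and the function and variable names are ours, for illustration only:

```python
def washout_percent(unenhanced_hu, enhanced_hu, delayed_hu):
    """Percentage enhancement washout from the three CT attenuation values."""
    return (enhanced_hu - delayed_hu) / (enhanced_hu - unenhanced_hu) * 100.0

def classify(unenhanced_hu, enhanced_hu, delayed_hu, threshold=50.0):
    """'benign' if the washout meets the threshold, else 'indeterminate/malignant'."""
    w = washout_percent(unenhanced_hu, enhanced_hu, delayed_hu)
    return "benign" if w >= threshold else "indeterminate/malignant"

# Example: unenhanced 10 HU, enhanced 80 HU, delayed 38 HU
w = washout_percent(10.0, 80.0, 38.0)       # (80-38)/(80-10)*100 = 60%
label = classify(10.0, 80.0, 38.0)
```

With these values the mass washes out 60% of its enhancement and would be called benign at either threshold; a delayed value of 70 HU would give only 14% washout and fall below both cut-offs.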
Neto, Brenno A D; Viana, Barbara F L; Rodrigues, Thyago S; Lalli, Priscila M; Eberlin, Marcos N; da Silva, Wender A; de Oliveira, Heibbe C B; Gatto, Claudia C
2013-08-28
We describe the synthesis of novel mononuclear and dinuclear copper complexes and an investigation of their behaviour in solution using mass spectrometry (ESI-MS and ESI-MS/MS) and in the solid state using X-ray crystallography. The complexes were synthesized from two widely used diacetylpyridine (dap) ligands, i.e. 2,6-diacetylpyridinebis(benzoic acid hydrazone) and 2,6-diacetylpyridinebis(2-aminobenzoic acid hydrazone). Theoretical calculations (DFT) were used to predict the complex geometries of these new structures, their equilibrium in solution and energies associated with the transformations.
International Nuclear Information System (INIS)
Timmer, G.A.; Meurders, F.; Brussaard, P.J.; Hienen, J.F.A. van
1978-01-01
Electromagnetic transition rates and log ft values were calculated for transitions between positive-parity states in the A=24-28 mass region. The wave functions used were taken from a previous paper. In general we found satisfactory agreement with experiment. In order to have a measure of the stability of the results against changes in the Hamiltonian, a method was developed for assigning errors to calculated transition properties. The renormalized single-particle matrix elements of the E2 and M1 transition operators were determined in a phenomenological way. To this end use was made of the errors just mentioned. It was found that good agreement was obtained with bare-nucleon M1 single-particle matrix elements and a state-independent effective isoscalar charge for the E2 operator. Predictions for static moments are given. (orig.)
Giussi, Juan M; Gastaca, Belen; Albesa, Alberto; Cortizo, M Susana; Allegretti, Patricia E
2011-02-01
The study of tautomeric equilibria is important because the reactivity of each compound with tautomeric capacity can be determined from the proportion of each tautomer. In the present work the tautomeric equilibria in some γ,δ-unsaturated β-hydroxynitriles and γ,δ-unsaturated β-ketonitriles were studied. The first family of compounds presents two possible theoretical tautomers, nitrile and ketenimine, while the second presents four possible theoretical tautomers, keto-nitrile, enol (E and Z)-nitrile and keto-ketenimine. The equilibrium in the gas phase was studied by gas chromatography-mass spectrometry (GC-MS). Tautomerization enthalpies were calculated by this methodology, and the results were compared with those obtained by density functional theory (DFT) calculations, showing good agreement between them. Nitrile tautomers were favored within the first family of compounds, while keto-nitrile tautomers were favored in the second family. Copyright © 2010 Elsevier B.V. All rights reserved.
Beckers, J.; Frind, E. O.
2001-03-01
A steady-state groundwater model of the Oro Moraine aquifer system in Central Ontario, Canada, is developed. The model is used to identify the role of baseflow in the water balance of the Minesing Swamp, a 70 km² wetland of international significance. Lithologic descriptions are used to develop a hydrostratigraphic conceptual model of the aquifer system. The numerical model uses long-term averages to represent temporal variations of the flow regime and includes a mechanism to redistribute recharge in response to near-surface geologic heterogeneity. The model is calibrated to water level and streamflow measurements through inverse modeling. Observed baseflow and runoff quantities validate the water mass balance of the numerical model and provide information on the fraction of the water surplus that contributes to groundwater flow. The inverse algorithm is used to compare alternative model zonation scenarios, illustrating the power of non-linear regression in calibrating complex aquifer systems. The adjoint method is used to identify sensitive recharge areas for groundwater discharge to the Minesing Swamp. Model results suggest that nearby urban development will have a significant impact on baseflow to the swamp. Although the direct baseflow contribution makes up only a small fraction of the total inflow to the swamp, it provides an important steady influx of water over relatively large portions of the wetland. Urban development will also impact baseflow to the headwaters of local streams. The model provides valuable insight into crucial characteristics of the aquifer system although definite conclusions regarding details of its water budget are difficult to draw given current data limitations. The model therefore also serves to guide future data collection and studies of sub-areas within the basin.
Li, Amanda; Muddana, Hari S; Gilson, Michael K
2014-04-08
Quantum mechanical (QM) calculations of noncovalent interactions are uniquely useful as tools to test and improve molecular mechanics force fields and to model the forces involved in biomolecular binding and folding. Because the more computationally tractable QM methods necessarily include approximations, which risk degrading accuracy, it is essential to evaluate such methods by comparison with high-level reference calculations. Here, we use the extensive Benchmark Energy and Geometry Database (BEGDB) of CCSD(T)/CBS reference results to evaluate the accuracy and speed of widely used QM methods for over 1200 chemically varied gas-phase dimers. In particular, we study the semiempirical PM6 and PM7 methods; density functional theory (DFT) approaches B3LYP, B97-D, M062X, and ωB97X-D; and the symmetry-adapted perturbation theory (SAPT) approach. For the PM6 and DFT methods, we also examine the effects of post hoc corrections for hydrogen bonding (PM6-DH+, PM6-DH2), halogen atoms (PM6-DH2X), and dispersion (DFT-D3 with zero and Becke-Johnson damping). Several orders of the SAPT expansion are also compared, ranging from SAPT0 up to SAPT2+3, where computationally feasible. We find that all DFT methods with dispersion corrections, as well as SAPT at orders above SAPT2, consistently provide dimer interaction energies within 1.0 kcal/mol RMSE across all systems. We also show that a linear scaling of the perturbative energy terms provided by the fast SAPT0 method yields similar high accuracy, at particularly low computational cost. The energies of all the dimer systems from the various QM approaches are included in the Supporting Information, as is the full SAPT2+(3) energy decomposition for a subset of over 1000 systems. The latter can be used to guide the parametrization of molecular mechanics force fields on a term-by-term basis.
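The benchmarking described above reduces to computing an RMSE of each method's dimer energies against the CCSD(T)/CBS reference and, for SAPT0, fitting a least-squares scale factor to the perturbative terms. A minimal sketch; function names and any numbers are illustrative, not from the paper:

```python
import math

def rmse(predicted, reference):
    """Root-mean-square error of method energies vs. reference energies (kcal/mol)."""
    n = len(reference)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

def least_squares_scale(predicted, reference):
    """Scale factor c minimizing sum((c*p - r)^2), as in a linear scaling
    of the perturbative energy terms of a fast method like SAPT0."""
    return sum(p * r for p, r in zip(predicted, reference)) / sum(p * p for p in predicted)
```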
Punjabi, Alkesh; Ali, Halima
2008-12-01
A new approach to integration of magnetic field lines in divertor tokamaks is proposed. In this approach, an analytic equilibrium generating function (EGF) is constructed in natural canonical coordinates (ψ,θ) from experimental data from a Grad-Shafranov equilibrium solver for a tokamak. ψ is the toroidal magnetic flux and θ is the poloidal angle. Natural canonical coordinates (ψ,θ,φ) can be transformed to physical position (R,Z,φ) using a canonical transformation. (R,Z,φ) are cylindrical coordinates. Another canonical transformation is used to construct a symplectic map for integration of magnetic field lines. Trajectories of field lines calculated from this symplectic map in natural canonical coordinates can be transformed to trajectories in real physical space. Unlike in magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)], the symplectic map in natural canonical coordinates can integrate trajectories across the separatrix surface, and at the same time, give trajectories in physical space. Unlike symplectic maps in physical coordinates (x,y) or (R,Z), the continuous analog of a symplectic map in natural canonical coordinates does not distort trajectories in toroidal planes intervening the discrete map. This approach is applied to the DIII-D tokamak [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The EGF for the DIII-D gives quite an accurate representation of equilibrium magnetic surfaces close to the separatrix surface. This new approach is applied to demonstrate the sensitivity of stochastic broadening using a set of perturbations that generically approximate the size of the field errors and statistical topological noise expected in a poloidally diverted tokamak. Plans for future application of this approach are discussed.
Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach
Energy Technology Data Exchange (ETDEWEB)
Hehn, L. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Armengaud, E.; Boissiere, T. de; Gros, M.; Navick, X.F.; Nones, C.; Paul, B. [CEA Saclay, DSM/IRFU, Gif-sur-Yvette Cedex (France); Arnaud, Q. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Queen' s University, Kingston (Canada); Augier, C.; Billard, J.; Cazes, A.; Charlieux, F.; Jesus, M. de; Gascon, J.; Juillard, A.; Queguiner, E.; Sanglard, V.; Vagneron, L. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Benoit, A.; Camus, P. [Institut Neel, CNRS/UJF, Grenoble (France); Berge, L.; Chapellier, M.; Dumoulin, L.; Giuliani, A.; Le-Sueur, H.; Marnieros, S.; Olivieri, E.; Poda, D. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Bluemer, J. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Broniatowski, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Eitel, K.; Kozlov, V.; Siebenborn, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Foerster, N.; Heuermann, G.; Scorza, S. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Jin, Y. [Laboratoire de Photonique et de Nanostructures, CNRS, Route de Nozay, Marcoussis (France); Kefelian, C. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Kleifges, M.; Tcherniakhovski, D.; Weber, M. 
[Karlsruher Institut fuer Technologie, Institut fuer Prozessdatenverarbeitung und Elektronik, Karlsruhe (Germany); Kraus, H. [University of Oxford, Department of Physics, Oxford (United Kingdom); Kudryavtsev, V.A. [University of Sheffield, Department of Physics and Astronomy, Sheffield (United Kingdom); Pari, P. [CEA Saclay, DSM/IRAMIS, Gif-sur-Yvette (France); Piro, M.C. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Rensselaer Polytechnic Institute, Troy, NY (United States); Rozov, S.; Yakushev, E. [JINR, Laboratory of Nuclear Problems, Dubna, Moscow Region (Russian Federation); Schmidt, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2016-10-15
We report on a dark matter search for a Weakly Interacting Massive Particle (WIMP) in the mass range m_χ ∈ [4, 30] GeV/c² with the EDELWEISS-III experiment. A 2D profile likelihood analysis is performed on data from eight selected detectors with the lowest energy thresholds, leading to a combined fiducial exposure of 496 kg-days. External backgrounds from γ- and β-radiation, recoils from ²⁰⁶Pb and neutrons, as well as detector-intrinsic backgrounds were modelled from data outside the region of interest and constrained in the analysis. The basic data selection and most of the background models are the same as those used in a previously published analysis based on boosted decision trees (BDT) [1]. For the likelihood approach applied in the analysis presented here, a larger signal efficiency and a subtraction of the expected background lead to a higher sensitivity, especially for the lowest WIMP masses probed. No statistically significant signal was found, and upper limits on the spin-independent WIMP-nucleon scattering cross section can be set with a hypothesis test based on the profile likelihood test statistic. The 90% C.L. exclusion limit set for WIMPs with m_χ = 4 GeV/c² is 1.6 × 10⁻³⁹ cm², an improvement of a factor of seven with respect to the BDT-based analysis. For WIMP masses above 15 GeV/c² the exclusion limits found with both analyses are in good agreement. (orig.)
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
Couture, A.; Casten, R. F.; Cakirli, R. B.
2017-12-01
Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40 % , and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections are compactly correlated in medium and heavy mass nuclei with the two-neutron separation energy. These correlations are easily amenable to predict unknown cross sections, often converting the usual extrapolations to more reliable interpolations. It almost always reproduces existing data to within 25 % and estimated uncertainties are below about 40 % up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of
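The correlation described above, regional capture cross sections against the two-neutron separation energy S_2n, amounts to a simple regression that turns the usual extrapolations into interpolations. A schematic sketch; the functional form (linear in log10 σ) and the data points are illustrative assumptions, not taken from the paper:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def predict_log_cross_section(s2n, a, b):
    """Predict log10(sigma) for a nucleus from its S_2n (MeV) using a
    regional correlation fitted to nuclei of similar structure."""
    return a + b * s2n

# Known nuclei in a structurally similar region (illustrative values):
# S_2n = 12, 13, 14 MeV with log10(sigma/mb) = 1.0, 1.5, 2.0
a, b = fit_line([12.0, 13.0, 14.0], [1.0, 1.5, 2.0])
```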
Altuntaş, Esra; Schubert, Ulrich S
2014-01-15
Mass spectrometry (MS) is the most versatile and comprehensive method in "OMICS" sciences (i.e. in proteomics, genomics, metabolomics and lipidomics). The applications of MS and tandem MS (MS/MS or MS(n)) provide sequence information on the full complement of biological samples in order to understand the importance of the sequences for their precise and specific functions. Nowadays, the control of polymer sequences and their accurate characterization is one of the significant challenges of current polymer science. Therefore, a similar approach can be very beneficial for characterizing and understanding the complex structures of synthetic macromolecules. MS-based strategies allow a relatively precise examination of polymeric structures (e.g. their molar mass distributions, monomer units, side chain substituents, end-group functionalities, and copolymer compositions). Moreover, tandem MS offers accurate structural information on intricate macromolecular structures; however, it produces vast amounts of data to interpret. In "OMICS" sciences, software for interpreting the obtained data has developed satisfactorily (e.g. in proteomics), because it is not possible to manually handle the amount of data acquired via (tandem) MS studies of biological samples. It can be expected that special software tools will likewise improve the interpretation of (tandem) MS output from investigations of synthetic polymers. Eventually, the MS/MS field will also open up for polymer scientists who are not MS specialists. In this review, we dissect the overall framework of the MS and MS/MS analysis of synthetic polymers into its key components. We discuss the fundamentals of polymer analyses as well as recent advances in the areas of tandem mass spectrometry, software developments, and the overall future perspectives on the way to polymer sequencing, one of the last Holy Grails of polymer science. Copyright © 2013 Elsevier B.V. All rights reserved.
Mass Spectrometry-based Approaches to Understand the Molecular Basis of Memory
Directory of Open Access Journals (Sweden)
Arthur Henriques Pontes
2016-10-01
The central nervous system is responsible for an array of cognitive functions such as memory, learning, language and attention. These processes tend to take place in distinct brain regions; yet, they need to be integrated to give rise to adaptive or meaningful behavior. Since cognitive processes result from underlying cellular and molecular changes, genomics and transcriptomics assays have been applied to human and animal models to understand such events. Nevertheless, genes and RNAs are not the end products of most biological functions. In order to gain further insights toward the understanding of brain processes, the field of proteomics has been of increasing importance in the past years. Advancements in liquid chromatography-tandem mass spectrometry (LC-MS/MS) have enabled the identification and quantification of thousands of proteins with high accuracy and sensitivity, fostering a revolution in the neurosciences. Herein, we review the molecular bases of explicit memory in the hippocampus. We outline the principles of mass spectrometry (MS)-based proteomics, highlighting the use of this analytical tool to study memory formation. In addition, we discuss MS-based targeted approaches as the future of protein analysis.
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the
• number of samples required to achieve a specified confidence in characterization and clearance decisions
• confidence in making characterization and clearance decisions for a specified number of samples
for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
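For the FNR = 0 case described above, the textbook relation between the number of random samples and detection confidence is n ≥ ln(1 − C)/ln(1 − p), where p is the fraction of the decision area that is contaminated. This is the standard acceptance-sampling result, offered only as a sketch of the kind of formula the report derives, not as its exact method:

```python
import math

def samples_for_confidence(p_contaminated: float, confidence: float) -> int:
    """Number of random samples needed so that, with probability `confidence`,
    at least one sample lands on a contaminated fraction `p_contaminated`.
    Assumes FNR = 0: every sample of a contaminated spot tests positive."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_contaminated))

def confidence_for_samples(p_contaminated: float, n: int) -> float:
    """Confidence that n all-negative samples imply the area is clean."""
    return 1.0 - (1.0 - p_contaminated) ** n

# e.g. 95% confidence of detecting contamination covering 5% of the area
n = samples_for_confidence(0.05, 0.95)  # -> 59 samples
```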
Bowden, Peter; Beavis, Ron; Marshall, John
2009-11-02
A goodness of fit test may be used to assign tandem mass spectra of peptides to amino acid sequences and to directly calculate the expected probability of mis-identification. The product of the peptide expectation values directly yields the probability that the parent protein has been mis-identified. A relational database could capture the mass spectral data, the best fit results, and permit subsequent calculations by a general statistical analysis system. The many files of the HUPO blood protein data correlated by X!TANDEM against the proteins of ENSEMBL were collected into a relational database. A redundant set of 247,077 proteins and peptides was correlated by X!TANDEM, and that set was collapsed to 34,956 peptides from 13,379 distinct proteins. About 6875 distinct proteins were represented by only a single distinct peptide, 2866 proteins showed 2 distinct peptides, and 3454 proteins showed at least three distinct peptides by X!TANDEM. More than 99% of the peptides were associated with proteins that had cumulative expectation values, i.e. probability of false positive identification, of one in one hundred or less. The distribution of peptides per protein from X!TANDEM was significantly different from that expected from random assignment of peptides.
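The protein-level statistic described, the product of the peptide expectation values, is safest to accumulate in log space to avoid floating-point underflow when many peptides match a protein. A minimal sketch; the function names are ours, not X!TANDEM's API:

```python
import math

def protein_log10_expectation(peptide_expectations):
    """log10 of the product of the peptide expectation values; the product
    itself is the probability that the parent protein was mis-identified."""
    return sum(math.log10(e) for e in peptide_expectations)

def protein_misid_probability(peptide_expectations):
    """Cumulative expectation value, i.e. probability of false positive
    protein identification."""
    return 10.0 ** protein_log10_expectation(peptide_expectations)

# Two peptides with expectation values 0.01 and 0.1 give a protein-level
# mis-identification probability of 0.001 (one in one thousand).
```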
Energy Technology Data Exchange (ETDEWEB)
Guadalupe Maldonado, S.
2014-07-01
Pressurized water reactors (PWR) used for power generation are operated at elevated temperatures (280-300 °C) and under high pressure (120-150 bar). In addition to these harsh environmental conditions, some components of the PWR assemblies are subject to mechanical loading (sliding, vibration and impacts) leading to undesirable and hardly controllable material degradation phenomena. In such situations wear is determined by the complex interplay (tribocorrosion) between mechanical, material and physical-chemical phenomena. Tribocorrosion in PWR conditions is at present little understood and models need to be developed in order to predict component lifetime over several decades. The goal of this project, carried out in collaboration with the French company AREVA NP, is to develop a predictive model based on the mechanistic understanding of tribocorrosion of specific PWR components (stainless steel control assemblies, stellite grippers). The approach taken here is to describe degradation in terms of electrochemical and mechanical material flows (the third-body concept of tribology) from the metal into the friction film (i.e. the oxidized film forming during rubbing on the metal surface) and from the friction film into the environment, instead of simple mass loss considerations. The project involves the establishment of mechanistic models describing the single flows, based on ad-hoc tribocorrosion measurements operating at low temperature. The overall behaviour at high temperature and pressure is investigated using a dedicated tribometer (Aurore) including electrochemical control of the contact during rubbing. Physical laws describing the individual flows according to defined mechanisms and as a function of defined physical parameters were identified based on the obtained experimental results and on literature data. The physical laws were converted into mass flow rates and solved as a differential equation system by considering the mass balance in compartments
Evolution of a Lowland Karst Landscape; A Mass-Balance Approach
Chamberlin, C.; Heffernan, J. B.; Cohen, M. J.; Quintero, C.; Pain, A.
2016-12-01
Karst landscapes are highly soluble, and are vulnerable to biological acid production as a major driving factor in their evolution. Big Cypress National Park (BICY) is a low-lying karst landscape in southern Florida displaying a distinctive morphology of isolated depressions likely influenced by biology. The goal of this study is to constrain timescales of landform development in BICY. This question was addressed through the construction of landscape-scale elemental budgets for both calcium and phosphorus. Precipitation and export fluxes were calculated using available chemistry and hydrology data, and stocks were calculated from a combination of existing data, field measurements, and laboratory chemical analysis. Estimates of expected mass export given no biological acid production, and given acid production equivalent to 100% of GPP, were compared with observed rates. Current standing stocks of phosphorus are dominated by a large soil pool, and contain 500 Gg P. Inputs are largely dominated by precipitation, and 8000 years are necessary to accumulate the standing stocks of phosphorus given modern fluxes. Calcium flux is vastly dominated by dissolution of the limestone bedrock, and though some calcium is retained in the soil, most is exported. Using LiDAR-generated estimates of volume loss across the landscape and current export rates, an estimated 15,000 years would be necessary to create the modern landscape. Both of these estimates indicate that the BICY landscape is geologically very young. The different behaviors of these elements (calcium is largely exported, while phosphorus is largely retained) lend additional confidence to estimates of denudation rates of the landscape. These estimates can be reconciled even more closely if calcium redistribution across the landscape is taken into account. This estimate is compared to the two bounding conditions for biological weathering to indicate a likely level of biological importance to landscape development in this system.
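The residence-time argument above is a simple stock-over-flux calculation. A sketch using the abstract's rounded phosphorus numbers; the input flux is back-calculated here for illustration and is not a reported value:

```python
def accumulation_time(standing_stock: float, net_input_rate: float) -> float:
    """Years needed to accumulate a standing stock at a constant net input
    rate (stock and rate in consistent units, e.g. Gg and Gg/yr)."""
    return standing_stock / net_input_rate

# Phosphorus: ~500 Gg standing stock; a net input of 0.0625 Gg/yr is an
# illustrative value chosen to match the ~8000 yr figure in the abstract.
years_p = accumulation_time(500.0, 0.0625)  # -> 8000.0 years
```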
Kiran Yildirim, Demet; Abdelnasser, Amr; Doner, Zeynep; Kumral, Mustafa
2016-04-01
The Halilar Cu-Pb (-Zn) mineralization, formed in the volcanogenic metasediments of the Bagcagiz Formation at Balikesir province, NW Turkey, represents a local vein-type deposit restricted to a NE-SW-trending fault gouge zone along the lower boundary of the Bagcagiz Formation and the Duztarla granitic intrusion in the study area. This granite is traversed by numerous mineralized sheeted vein systems, which locally transgress into the surrounding metasediments. The mineralization is therefore closely associated with intense hydrothermal alteration within brecciation and quartz stockwork veining. The ore mineral assemblage includes chalcopyrite, galena, and some sphalerite with covellite and goethite, formed during three phases of mineralization (pre-ore, main ore, and supergene) within an abundant gangue of quartz and calcite. The geologic and field relationships and petrographic and mineralogical studies reveal that two alteration zones occur with the Cu-Pb (-Zn) mineralization along the contact between the Bagcagiz Formation and the Duztarla granite: pervasive phyllic alteration (quartz, sericite, and pyrite), and selective propylitic alteration (albite, calcite, epidote, sericite and/or chlorite). This work, using mass balance calculations, reports the mass/volume changes (gains and losses) of the chemical components of the hydrothermal alteration zones associated with the Halilar Cu-Pb (-Zn) mineralization at Balikesir (Turkey). It revealed that the phyllic alteration shows enrichment of Si, Fe, K, Ba, and LOI with depletion of Mg, Ca, and Na, reflecting sericitization of alkali feldspar and destruction of ferromagnesian minerals. This zone, with high Cu and Pb (with Zn) contents, represents the main mineralized zone. On the other hand, the propylitic zone is characterized by addition of Ca, Na, K, Ti, P, and Ba with LOI and Cu (lower content), referring to the replacement of plagioclase and ferromagnesian minerals by albite, calcite, epidote, and sericite
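Gain/loss mass-balance calculations of this kind are commonly done with Grant's isocon method: altered-rock concentrations are rescaled by the ratio of a component assumed immobile (often Ti or Al), and the difference from the fresh rock gives the gain or loss. The sketch below shows that standard approach with purely illustrative concentrations; the paper's actual reference element and data are not reproduced here:

```python
def isocon_mass_change(fresh: dict, altered: dict, immobile: str) -> dict:
    """Per-component mass change (same units as input, e.g. wt.%) relative
    to the fresh rock, assuming the `immobile` component was neither gained
    nor lost during alteration."""
    scale = fresh[immobile] / altered[immobile]
    return {el: scale * altered[el] - fresh[el] for el in fresh}

fresh = {"TiO2": 1.0, "K2O": 2.0, "Na2O": 3.0}    # illustrative wt.%
altered = {"TiO2": 1.0, "K2O": 3.0, "Na2O": 1.5}
changes = isocon_mass_change(fresh, altered, "TiO2")
# K2O gained and Na2O lost, the pattern expected for sericitization of feldspar
```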
International Nuclear Information System (INIS)
Poquillon, D.
1997-10-01
Usually, for the integrity assessment of defective components, well-established rules are used: the global approach to fracture. A more fundamental way to deal with these problems is based on the local approach to fracture. In this study, we chose this way and performed numerical simulations of intergranular crack initiation and intergranular crack propagation. This type of damage can be found in components of fast breeder reactors made of 316L austenitic stainless steel which operate at high temperatures. This study deals with methods partially coupling behaviour and damage for crack growth in specimens submitted to various thermomechanical loadings. A new numerical method based on finite element computations and a damage model relying on quantitative observations of grain boundary damage is proposed. Numerical results of crack initiation and growth are compared with a number of experimental data obtained in previous studies. Creep and creep-fatigue crack growth are studied. Various specimen geometries are considered: Compact Tension specimens and axisymmetric notched bars tested under isothermal (600 °C) conditions, and tubular structures containing a circumferential notch tested under thermal shock. Adaptive re-meshing and/or node-release techniques are used and compared. In order to broaden our knowledge of stress triaxiality effects on creep intergranular damage, new experiments were defined and conducted on sharply notched tubular specimens in torsion. These isothermal (600 °C) Mode II creep tests reveal severe intergranular damage and creep crack initiation. Calculated damage fields at the crack tip are compared with the experimental observations. The good agreement between calculations and experimental data shows that the damage criterion used can improve the accuracy of life prediction of components submitted to intergranular creep damage. (author)
Energy Technology Data Exchange (ETDEWEB)
Ali, Ahmed [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Parkhomenko, Alexander Ya.; Rusov, Aleksey V. [P.G. Demidov Yaroslavl State Univ. (Russian Federation). Dept. of Theoretical Physics
2013-12-15
We present a precise calculation of the dilepton invariant-mass spectrum and the decay rate for B± → π± ℓ+ℓ− (ℓ± = e±, μ±) in the Standard Model (SM) based on the effective Hamiltonian approach for the b → d ℓ+ℓ− transitions. With the Wilson coefficients already known to next-to-next-to-leading logarithmic (NNLL) accuracy, the remaining theoretical uncertainty in the short-distance contribution resides in the form factors f+(q²), f0(q²) and fT(q²). Of these, f+(q²) is well measured in the charged-current semileptonic decays B → π ℓ ν and we use the B-factory data to parametrize it. The corresponding form factors for the B → K transitions have been calculated in the lattice-QCD approach for large q² and extrapolated to the entire q² region using the so-called z-expansion. Using an SU(3)F-breaking ansatz, we calculate the B → π tensor form factor, which is consistent with the recently reported lattice B → π analysis obtained at large q². The prediction for the total branching fraction B(B± → π± μ+μ−) = (1.88 +0.32/−0.21) × 10⁻⁸ is in good agreement with the experimental value obtained by the LHCb collaboration. In the low-q² region, the Heavy-Quark Symmetry (HQS) relates the three form factors with each other. Accounting for the leading-order symmetry-breaking effects, and using data from the charged-current process B → π ℓ ν to determine f+(q²), we calculate the dilepton invariant-mass distribution in the low-q² region in the B± → π± ℓ+ℓ− decay. This provides a model-independent and precise calculation of the partial branching ratio for this decay.
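For reference, the z-expansion invoked above maps the semileptonic q² range onto a small expansion variable. A common variant is the BCL form, with t± = (m_B ± m_π)² and a free matching point t0; the truncation order K and coefficients b_k below are generic placeholders, not the values used in this analysis:

```latex
z(q^2, t_0) = \frac{\sqrt{t_+ - q^2} - \sqrt{t_+ - t_0}}{\sqrt{t_+ - q^2} + \sqrt{t_+ - t_0}},
\qquad
f_+(q^2) = \frac{1}{1 - q^2/m_{B^*}^2}\,\sum_{k=0}^{K-1} b_k \left[ z^k - (-1)^{k-K}\,\frac{k}{K}\, z^K \right].
```

Because |z| remains small over the physical range, a few terms suffice, which is what makes the extrapolation from large-q² lattice points to the full q² region controlled.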
Petruzziello, Filomena; Fouillen, Laetitia; Wadensten, Henrik; Kretz, Robert; Andren, Per E; Rainer, Gregor; Zhang, Xiaozhe
2012-02-03
Neuropeptidomics is used to characterize endogenous peptides in the brain of tree shrews (Tupaia belangeri). Tree shrews are small animals similar to rodents in size but close relatives of primates, and are excellent models for brain research. Currently, tree shrews have no complete proteome information available against which a direct database search for neuropeptide identification could be performed. To increase the capability of identifying neuropeptides in tree shrews, we developed an integrated mass spectrometry (MS)-based approach that combines data-dependent, directed, and targeted liquid chromatography (LC)-Fourier transform (FT)-tandem MS (MS/MS) analysis, database construction, de novo sequencing, precursor protein search, and homology analysis. Using this integrated approach, we identified 107 endogenous peptides that have sequences identical or similar to those from other mammalian species. High-accuracy MS and tandem MS information, together with BLAST analysis and chromatographic characteristics, were used to confirm the sequences of all the identified peptides. Interestingly, further sequence homology analysis demonstrated that tree shrew peptides have a significantly higher degree of homology to equivalent sequences in humans than to those in mice or rats, consistent with the close phylogenetic relationship between tree shrews and primates. Our results provide the first extensive characterization of the peptidome in tree shrews, which now permits characterization of their function in the nervous and endocrine systems. As the approach makes full use of the evolutionary conservation of neuropeptides and the advantages of high-accuracy MS, it can be ported to the identification of neuropeptides in other species for which fully sequenced genomes or proteomes are not available.
Energy Technology Data Exchange (ETDEWEB)
Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Hogg, David W.; Foreman-Mackey, Daniel T. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Rix, Hans-Walter; Gouliermis, Dimitrios [Max Planck Institute for Astronomy, Koenigstuhl 17, D-69117 Heidelberg (Germany); Dolphin, Andrew E. [Raytheon Company, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Lang, Dustin [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Bell, Eric F. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Gordon, Karl D.; Kalirai, Jason S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Skillman, Evan D., E-mail: dweisz@astro.washington.edu [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)
2013-01-10
We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M☉). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass.
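The scaling of Δα with the number of observed stars can be illustrated without the full MCMC machinery. The sketch below is not the authors' method: it uses the standard unbinned maximum-likelihood estimator for a power law truncated only from below, whose analytic 1σ error (α̂ − 1)/√N shows the same N-dependence the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_powerlaw(alpha, m_min, n, rng):
    """Draw n masses from p(m) ~ m^-alpha on [m_min, inf) by inverse transform."""
    u = rng.random(n)
    return m_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def fit_slope(masses, m_min):
    """Unbinned maximum-likelihood slope and its analytic 1-sigma uncertainty."""
    n = len(masses)
    alpha_hat = 1.0 + n / np.sum(np.log(masses / m_min))
    return alpha_hat, (alpha_hat - 1.0) / np.sqrt(n)

masses = sample_powerlaw(2.35, 1.0, 5000, rng)
a, da = fit_slope(masses, 1.0)
print(f"alpha = {a:.2f} +/- {da:.2f}")  # recovers the input slope ~2.35
```

Halving the number of stars inflates the error by √2, which is why quoted literature uncertainties below this floor are suspect.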
Directory of Open Access Journals (Sweden)
Shane Stimpson
2017-09-01
An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code MPACT currently uses the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. By applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Given these performance benefits, these approaches have been adopted as the default in MPACT.
International Nuclear Information System (INIS)
Hoshyargar, Vahid; Fadaei, Farzad; Ashrafizadeh, Seyed Nezameddin
2015-01-01
A comprehensive mathematical model is developed for simulation of ion transport through nanofiltration membranes. The model is based on the Maxwell-Stefan approach and takes into account steric, Donnan, and dielectric effects in the transport of mono- and divalent ions. Theoretical ion rejection for multi-electrolyte mixtures was obtained by numerically solving the 'hindered transport' equations based on the generalized Maxwell-Stefan equation for the flux of ions. A computer simulation has been developed to predict transport in the nanofiltration range; a numerical procedure was developed to linearize and discretize the governing equations, and the finite volume method was employed for their numerical solution. The developed numerical method is capable of solving the equations for multicomponent systems of n species regardless of how stiff the system is. The model findings were compared and verified with experimental data from the literature for the two systems Na2SO4+NaCl and MgCl2+NaCl. The comparison showed good agreement at different concentrations. As such, the model is capable of predicting the rejection of different ions at various concentrations. The advantage of such a model is saving costs by minimizing the number of required experiments, while it is closer to a realistic situation since the adsorption of ions has been taken into account. Using this model, the permeate flux and rejections of multi-component liquid feeds can be calculated as a function of membrane properties. This simulation tool attempts to fill the gap in methods used for predicting nanofiltration and optimizing the performance of charged nanofilters through the generalized Maxwell-Stefan (GMS) approach. The application of the current model may narrow this gap, which has arisen from the complexity of the fundamentals of ion transport processes via this approach, and may further facilitate the industrial development of
Depression, body mass index, and chronic obstructive pulmonary disease – a holistic approach
Directory of Open Access Journals (Sweden)
Catalfo G
2016-02-01
Giuseppe Catalfo,1 Luciana Crea,1 Tiziana Lo Castro,1 Francesca Magnano San Lio,1 Giuseppe Minutolo,1 Gherardo Siscaro,2 Noemi Vaccino,1 Nunzio Crimi,3 Eugenio Aguglia1 1Department of Psychiatry, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy; 2Operative Unit Neurorehabilitation, IRCCS Fondazione Salvatore Maugeri, Sciacca, Italy; 3Department of Pneumology, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy Background: Several clinical studies suggest common underlying pathogenetic mechanisms of COPD and depressive/anxiety disorders. We aim to evaluate the psychopathological and physical effects of aerobic exercise, proposed in the context of pulmonary rehabilitation, in a sample of COPD patients, through the correlation of some psychopathological variables and physical/pneumological parameters. Methods: Fifty-two consecutive subjects were enrolled. At baseline, the sample was divided into two subgroups consisting of 38 depression-positive and 14 depression-negative subjects according to the Hamilton Depression Rating Scale (HAM-D). After the rehabilitation treatment, we compared psychometric and physical examinations between the two groups. Results: The differences after the rehabilitation program in all assessed parameters demonstrated a significant improvement in psychiatric and pneumological conditions. The reduction of BMI was significantly correlated with fat mass, but only in the depression-positive patients. Conclusion: Our results suggest that pulmonary rehabilitation improves depressive and anxiety symptoms in COPD. This improvement is significantly related to the reduction of fat mass and BMI only in depressed COPD patients, in whom these parameters were related at baseline. These findings suggest that depressed COPD patients could benefit from a rehabilitation program in the context of a multidisciplinary approach. Keywords: COPD, depression, aerobic exercise
Lössl, P.
2017-01-01
This thesis illustrates the current standing of mass spectrometry (MS) in molecular and structural biology. The primary aim of the research described herein is to facilitate protein characterization by combining mass spectrometric methods with each other and with complementary analytical
International Nuclear Information System (INIS)
Ōnuki, Yoshichika; Hedo, Masato; Nakama, Takao; Nakamura, Ai; Aoki, Dai; Boukahil, Mounir; Haga, Yoshinori; Takeuchi, Tetsuya; Harima, Hisatomo
2015-01-01
We succeeded in growing single crystals of EuCo2Si2 by the Bridgman method, and carried out de Haas-van Alphen (dHvA) experiments. EuCo2Si2 was previously studied from the viewpoint of the trivalent electronic state on the basis of magnetic susceptibility and X-ray absorption experiments, whereas most other Eu compounds order magnetically, with the divalent electronic state. The dHvA branches detected in the present experiments are found to be explained by the results of full-potential linearized augmented plane wave energy band calculations on the basis of the local density approximation (LDA) for YCo2Si2 (LDA) and EuCo2Si2 (LDA + U), revealing the trivalent electronic state. The detected cyclotron effective masses are moderately large, ranging from 1.2 to 2.9 m0.
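Cyclotron masses like those quoted above are conventionally extracted from the temperature damping of dHvA oscillation amplitudes via the Lifshitz-Kosevich factor X/sinh(X), with X = 2π²k_B m* T/(ħ e B) ≈ 14.69 (T/K) · m*T/B for m* in units of the free-electron mass. The sketch below fits synthetic amplitudes by a simple grid search; the field and amplitude values are illustrative, not data from this experiment.

```python
import numpy as np

LK_CONST = 14.69  # T/K: 2*pi^2*k_B*m_e/(e*hbar), for m* in units of m_e

def lk_amplitude(T, m_star, B, A0=1.0):
    """Lifshitz-Kosevich temperature damping of a dHvA amplitude."""
    x = LK_CONST * m_star * T / B
    return A0 * x / np.sinh(x)

# Synthetic "measured" amplitudes for a m* = 2.0 m_e orbit at B = 10 T
B = 10.0
T = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
A_meas = lk_amplitude(T, 2.0, B)

# Grid search for the mass that best reproduces the damping curve
grid = np.linspace(0.5, 5.0, 4501)
errs = [np.sum((lk_amplitude(T, m, B) - A_meas) ** 2) for m in grid]
m_fit = grid[int(np.argmin(errs))]
print(f"fitted cyclotron mass: {m_fit:.2f} m_e")
```

In practice one fits A(T)/A(T0) ratios so the unknown prefactor A0 drops out; the grid search here stands in for a least-squares fit.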
Bayani, Amir Hossein; Voves, Jan; Dideban, Daryoosh
2018-01-01
Here, we compare the output characteristics of a gate-all-around germanium nanowire field-effect transistor (GAA-GeNW-FET) with a 2.36 nm² square cross-section using the tight-binding (TB) sp3d5s∗ model (full atomistic model, FAM) and the effective mass approximation (EMA). Synopsys/QuantumWise Atomistix ToolKit (ATK) and Silvaco Atlas3D are used for the TB model and the EMA, respectively. Results show that the EMA predicts only one quantum state (QS) for quantum transport, whereas the FAM predicts three QSs. A cosine-function behavior is obtained by both methods for the first quantum state. The bandgap calculated with the EMA is almost half that of the FAM. Also, a fluctuating current is predicted by both methods, but with different oscillation values.
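The spirit of the EMA comparison can be sketched with the textbook infinite-well estimate for a square wire: E(n,m) = ħ²π²(n² + m²)/(2m*L²). The side length below matches the 2.36 nm² cross-section; the confinement mass 0.12 m_e is an assumed illustrative value for Ge, not a parameter from the paper.

```python
import numpy as np

HBAR = 1.054571817e-34  # J*s
M_E = 9.1093837015e-31  # kg
EV = 1.602176634e-19    # J per eV

def square_wire_levels(L_m, m_star_rel, nmax=3):
    """Infinite-well EMA subband energies E(n, m) in eV for a square wire of side L."""
    pref = HBAR**2 * np.pi**2 / (2.0 * m_star_rel * M_E * L_m**2)
    return sorted(pref * (n**2 + m**2) / EV
                  for n in range(1, nmax + 1) for m in range(1, nmax + 1))

# Side of a 2.36 nm^2 square cross-section; m* = 0.12 m_e is an assumption
L = np.sqrt(2.36e-18)
levels = square_wire_levels(L, 0.12)
for E in levels[:3]:
    print(f"{E:.3f} eV")
```

The strong subband splitting at this cross-section is why only the lowest state carries transport in the EMA picture, while atomistic models can place additional (e.g. valley-split) states in the transport window.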
Reyes, Jeanette M; Hubbard, Heidi F; Stiegel, Matthew A; Pleil, Joachim D; Serre, Marc L
2018-01-09
Currently, there are no regulatory standards in the United States for ambient concentrations of polycyclic aromatic hydrocarbons (PAHs), a class of organic compounds with known carcinogenic species. As such, monitoring data are not routinely collected, resulting in limited exposure mapping and epidemiologic studies. This work develops the log-mass fraction (LMF) Bayesian maximum entropy (BME) geostatistical prediction method used to predict the concentration of nine particle-bound PAHs across the US state of North Carolina. The LMF method develops a relationship between a relatively small number of collocated PAH and fine particulate matter (PM2.5) samples collected in 2005 and applies that relationship to a larger number of locations where PM2.5 is routinely monitored to more broadly estimate PAH concentrations across the state. Cross-validation and mapping results indicate that by incorporating both PAH and PM2.5 data, the LMF BME method reduces mean squared error by 28.4% and produces more realistic spatial gradients compared to the traditional kriging approach based solely on observed PAH data. The LMF BME method efficiently creates PAH predictions in a PAH-data-sparse and PM2.5-data-rich setting, opening the door for more expansive epidemiologic exposure assessments of ambient PAH.
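A toy version of the log-mass-fraction idea (deliberately not the full BME machinery): compute ln(PAH/PM2.5) at the collocated sites, interpolate that ratio spatially (inverse-distance weighting stands in for the geostatistical step), and rescale the dense PM2.5 field. All numbers below are synthetic.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation (stand-in for the BME step)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Collocated sites: coordinates, PAH (ng/m^3) and PM2.5 (ug/m^3) -- synthetic
xy_c = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pah = np.array([0.8, 1.1, 0.6, 0.9])
pm_c = np.array([10.0, 12.0, 8.0, 11.0])
lmf = np.log(pah / pm_c)  # log-mass fraction at collocated sites

# PM2.5-only sites: interpolate the LMF there, then predict PAH
xy_q = np.array([[0.5, 0.5], [0.2, 0.8]])
pm_q = np.array([9.0, 8.5])
pah_pred = pm_q * np.exp(idw(xy_c, lmf, xy_q))
print(pah_pred)
```

Working in log space keeps predictions positive and lets the dense PM2.5 network carry the spatial gradient, which is the core of the data-sparse/data-rich leverage described above.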
Schwientek, Marc; Selle, Benny
2016-02-01
As field data on in-stream nitrate retention are scarce at catchment scales, this study aimed at quantifying net retention of nitrate within the entire river network of a fourth-order stream. For this purpose, a practical mass balance approach combined with a Lagrangian sampling scheme was applied and seasonally repeated to estimate daily in-stream net retention of nitrate for a 17.4 km long, agriculturally influenced, segment of the Steinlach River in southwestern Germany. This river segment represents approximately 70% of the length of the main stem and about 32% of the streambed area of the entire river network. Sampling days in spring and summer were biogeochemically more active than in autumn and winter. Results obtained for the main stem of the Steinlach River were subsequently extrapolated to the stream network in the catchment. It was demonstrated that, for baseflow conditions in spring and summer, in-stream nitrate retention could sum up to a relevant term of the catchment's nitrogen balance if the entire stream network was considered.
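The Lagrangian mass-balance idea reduces to comparing the nitrate load entering a segment (upstream plus lateral inputs) with the load leaving it. A minimal sketch with illustrative numbers (not measurements from the Steinlach study):

```python
def net_retention(q_up, c_up, lateral_loads, q_down, c_down):
    """Daily in-stream net nitrate retention (kg/d) for one river segment.

    Discharges q in m^3/s, concentrations c in mg/L (= g/m^3),
    lateral (tributary/groundwater) loads already in kg/d.
    """
    sec_per_day = 86400.0
    load_up = q_up * c_up * sec_per_day / 1000.0    # kg/d entering upstream
    load_down = q_down * c_down * sec_per_day / 1000.0  # kg/d leaving downstream
    return load_up + sum(lateral_loads) - load_down

# Illustrative baseflow numbers
ret = net_retention(q_up=0.8, c_up=4.0, lateral_loads=[30.0, 12.0],
                    q_down=1.0, c_down=3.5)
print(f"net retention: {ret:.1f} kg/d")  # positive => the segment removes nitrate
```

The Lagrangian twist is that the downstream sample is timed to follow the same water parcel, so the two loads are comparable despite travel time through the segment.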
Energy Technology Data Exchange (ETDEWEB)
Drechsler, S.L.; Efremov, D.; Grinenko, V. [IFW-Dresden (Germany); Johnston, S. [Inst. of Quantum Matter, University of British Coulumbia, Vancouver (Canada); Rosner, H. [MPI-cPfS, Dresden, (Germany); Kikoin, K. [Tel Aviv University (Israel)
2015-07-01
Combining DFT calculations of the density of states and plasma frequencies with experimental thermodynamic, optical, ARPES, and dHvA data taken from the literature, we estimate both the high-energy (Coulomb, Hund's rule coupling) and the low-energy (el-boson coupling) electronic mass renormalization [H(L)EMR] for typical Fe-pnictides with Tc < 40 K, focusing on (K,Rb,Cs)Fe2As2, (Ca,Na)122, (Ba,K)122, LiFeAs, and LaFeO1-xFxAs with and without As vacancies. Using Eliashberg theory we show that these systems cannot be described by a very strong el-boson coupling constant λ ≳ 2, which would conflict with the HEMR as seen by DMFT, ARPES and optics. Instead, an intermediate s± coupling regime is realized, mainly based on interband spin fluctuations from one predominant pair of bands. For (Ca,Na)122, there is also a non-negligible intraband el-phonon/orbital-fluctuation contribution. The coexistence of magnetic As vacancies and a high Tc = 28 K for LaFeO1-xFxAs1-δ excludes an orbital-fluctuation-dominated s++ scenario at least for that system. In contrast, the line-nodal BaFe2(As,P)2 near the quantum critical point is found to be a very strongly coupled system. The role of a pseudogap is briefly discussed for some of these systems.
Aggarwal, Pravin
2007-01-01
electrical power functions to other Elements of the CLV, is included as secondary structure. MSFC has overall responsibility for the integrated US element as well as structural design and thermal control of the fuel tanks, intertank, interstage, avionics, main propulsion system, and Reaction Control System (RCS) for both the Upper Stage and the First Stage. MSFC's Spacecraft and Vehicle Department, Structural and Analysis Design Division, is developing a set of predicted masses of these elements. This paper details the methodology, criteria and tools used for the preliminary mass predictions of the upper stage structural assembly components. In general, the weight of the cylindrical barrel sections is estimated using the commercial code HyperSizer, whereas the weight of the domes is developed using classical solutions. HyperSizer is software that performs automated structural analysis and sizing optimization based on aerospace methods for strength, stability, and stiffness. Analysis methods range from closed-form, traditional hand calculations repeated every day in industry to more advanced panel buckling algorithms. Margin-of-safety reporting for every potential failure provides the engineer with powerful insight into the structural problem. Optimization capabilities include finding minimum-weight panel or beam concepts, material selections, cross-sectional dimensions, thicknesses, and lay-ups from a library of 40 different stiffened and sandwich designs and a database of composite, metallic, honeycomb, and foam materials. Multiple concepts (orthogrid, isogrid, and skin-stiffener) were run for multiple loading combinations of ascent design load with and without tank pressure, as well as the proof pressure condition. Subsequently, the selected optimized concept obtained from HyperSizer runs was translated into a computer-aided design (CAD) model to account for wall thickness tolerances, weld lands, etc., for developing the most probable weight of the components.
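The "classical solutions" used for the domes amount to thin-wall surface-area-times-thickness mass estimates. A hypothetical sketch for a barrel section and a hemispherical dome; all dimensions and the material density are illustrative, not CLV values:

```python
import math

def cylinder_mass(radius_m, length_m, thickness_m, density_kg_m3):
    """Thin-wall barrel mass estimate: shell surface area x thickness x density."""
    return 2.0 * math.pi * radius_m * length_m * thickness_m * density_kg_m3

def hemi_dome_mass(radius_m, thickness_m, density_kg_m3):
    """Thin-wall hemispherical dome: half-sphere area (2*pi*r^2) x t x rho."""
    return 2.0 * math.pi * radius_m**2 * thickness_m * density_kg_m3

RHO_AL = 2700.0  # kg/m^3, generic aluminum-alloy density (assumed)
barrel = cylinder_mass(2.5, 6.0, 0.012, RHO_AL)
dome = hemi_dome_mass(2.5, 0.008, RHO_AL)
print(f"barrel ~{barrel:.0f} kg, dome ~{dome:.0f} kg")
```

Sizing tools refine such baselines with stiffener smearing, weld lands, and tolerances, which is why the CAD pass described above still follows the optimization.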
Koliopanos, F.; Ciambur, B.; Graham, A.; Webb, N.; Coriat, M.; Mutlu-Pakdil, B.; Davis, B.; Godet, O.; Barret, D.; Seigar, M.
2017-10-01
Intermediate mass black holes (IMBHs) are predicted by a variety of models and are the likely seeds for supermassive BHs (SMBHs). However, we have yet to establish their existence. One method by which we can discover IMBHs is by measuring the mass of an accreting BH using X-ray and radio observations, drawing on the correlation between radio luminosity, X-ray luminosity and BH mass known as the fundamental plane of BH activity (FP-BH). Furthermore, the mass of BHs in the centers of galaxies can be estimated using scaling relations between BH mass and galactic properties. We are initiating a campaign to search for IMBH candidates in dwarf galaxies with low-luminosity AGN, using - for the first time - three different scaling relations and the FP-BH simultaneously. In this first stage of our campaign, we measure the mass of seven LLAGN that have been previously suggested to host central IMBHs, investigate the consistency between the predictions of the BH scaling relations and the FP-BH in the low-mass regime, and demonstrate that this multiple-method approach provides a robust average mass prediction. In my talk, I will discuss our methodology, results and the next steps of this campaign.
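As a sketch of the FP-BH step: one widely used fit of the fundamental plane has the form log L_R = ξ_X log L_X + ξ_M log M_BH + b, which can be inverted for the mass. The coefficients below follow the Merloni et al. (2003) fit and are assumptions of this example, not necessarily the calibration adopted by the authors; the luminosities are hypothetical.

```python
def logM_from_fundamental_plane(logLR, logLX, xi_x=0.60, xi_m=0.78, b=7.33):
    """Invert log L_R = xi_x*log L_X + xi_m*log M_BH + b (luminosities in erg/s).

    Default coefficients follow the Merloni et al. (2003) fit; treat them as
    illustrative, since published FP calibrations differ at the ~10% level.
    """
    return (logLR - xi_x * logLX - b) / xi_m

# Hypothetical LLAGN: L_R = 1e35 erg/s (5 GHz core), L_X = 1e40 erg/s (2-10 keV)
logM = logM_from_fundamental_plane(35.0, 40.0)
print(f"log10(M_BH / M_sun) ~ {logM:.1f}")
```

For these input luminosities the estimate lands around 10^4-10^5 solar masses, i.e. in the IMBH regime; in practice the FP scatter (several tenths of a dex) dominates the error budget, which is why cross-checking against independent scaling relations, as described above, matters.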
Elizaga Navascués, Beatriz; Martín de Blas, Daniel; Mena Marugán, Guillermo A.
2018-02-01
Loop quantum cosmology has recently been applied in order to extend the analysis of primordial perturbations to the Planck era and discuss the possible effects of quantum geometry on the cosmic microwave background. Two approaches to loop quantum cosmology with admissible ultraviolet behavior leading to predictions that are compatible with observations are the so-called hybrid and dressed metric approaches. In spite of their similarities and relations, we show in this work that the effective equations that they provide for the evolution of the tensor and scalar perturbations are somewhat different. When backreaction is neglected, the discrepancy appears only in the time-dependent mass term of the corresponding field equations. We explain the origin of this difference, arising from the distinct quantization procedures. Besides, given the privileged role that the big bounce plays in loop quantum cosmology, e.g. as a natural instant of time to set initial conditions for the perturbations, we also analyze the positivity of the time-dependent mass when this bounce occurs. We prove that the mass of the tensor perturbations is positive in the hybrid approach when the kinetic contribution to the energy density of the inflaton dominates over its potential, as well as for a considerably large sector of backgrounds around that situation, while this mass is always nonpositive in the dressed metric approach. Similar results are demonstrated for the scalar perturbations in a sector of background solutions that includes the kinetically dominated ones; namely, the mass then is positive for the hybrid approach, whereas it typically becomes negative in the dressed metric case. More precisely, this last statement is strictly valid when the potential is quadratic for values of the inflaton mass that are phenomenologically favored.
Ackermann, O; Heigel, U; Lazic, D; Vogel, T; Schofer, M D; Rülander, C
2012-04-01
For the clinical planning of mass events, emergency departments are of critical importance, but no data are yet available on the workload in these cases. As this is essential for effective medical preparation, we calculated the workload based on the ICD codes of the victims of the Loveparade 2010 in Duisburg. Based on the patient data of the Loveparade 2010, we used filter diagnoses to estimate the number of shock room patients, regular admissions, surgical wound treatments, applications of casts or splints, and diagnoses of drug abuse. In addition, every patient was assigned a Manchester Triage System (MTS) category. This resulted in a chronological and quantitative workload profile of the emergency department, which was evaluated against the clinical experience of the departmental medical staff. The workload profile as a whole displayed a realistic image of the actual situation on July 24, 2010. While the number, diagnoses and chronology of medical and surgical patients were realistic, the MTS classification was not. The emergency department had a maximum of 6 emergency room admissions, 6 regular admissions, 4-5 surgical wound treatments, 3 casts and 2 drug abuse patients per hour. The calculation of workload from ICD data is a reasonable tool for retrospective estimation of the workload of an emergency department, and the data can be used for future planning. The retrospective MTS grouping is at present not suitable for a realistic calculation, and retrospective measures in the MTS groups are not sufficiently reliable for valid data publication. © Georg Thieme Verlag KG Stuttgart · New York.
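The chronological workload profile described above is, computationally, a count of coded presentations per hour and category. A minimal sketch; the records and category names are invented for illustration, mirroring the paper's filter diagnoses:

```python
from collections import Counter
from datetime import datetime

# Hypothetical (timestamp, filter-diagnosis category) records
records = [
    ("2010-07-24 16:10", "shock_room"),
    ("2010-07-24 16:40", "wound_treatment"),
    ("2010-07-24 17:05", "shock_room"),
    ("2010-07-24 17:20", "admission"),
    ("2010-07-24 17:50", "drug_abuse"),
]

def hourly_profile(records):
    """Count presentations per (hour, category): a chronological workload profile."""
    counts = Counter()
    for ts, cat in records:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%H:00")
        counts[(hour, cat)] += 1
    return counts

for (hour, cat), n in sorted(hourly_profile(records).items()):
    print(hour, cat, n)
```

Peak per-hour counts from such a profile (e.g. the 6 shock-room patients per hour reported above) are exactly the planning figures a department needs for staffing a future mass event.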
International Nuclear Information System (INIS)
Yu, Hua-Gen
2015-01-01
We report a rigorous full dimensional quantum dynamics algorithm, the multi-layer Lanczos method, for computing vibrational energies and dipole transition intensities of polyatomic molecules without any dynamics approximation. The multi-layer Lanczos method is developed by using a few advanced techniques including the guided spectral transform Lanczos method, multi-layer Lanczos iteration approach, recursive residue generation method, and dipole-wavefunction contraction. The quantum molecular Hamiltonian at the total angular momentum J = 0 is represented in a set of orthogonal polyspherical coordinates so that the large amplitude motions of vibrations are naturally described. In particular, the algorithm is general and problem-independent. An application is illustrated by calculating the infrared vibrational dipole transition spectrum of CH based on the ab initio T8 potential energy surface of Schwenke and Partridge and the low-order truncated ab initio dipole moment surfaces of Yurchenko and co-workers. A comparison with experiments is made. The algorithm is also applicable to Raman polarizability active spectra.
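As a minimal illustration of the iteration underlying this family of methods, the sketch below runs a plain single-layer Lanczos recursion (with full reorthogonalization) on a toy diagonal matrix. It is not the authors' multi-layer algorithm, only the base Krylov-space method it builds on:

```python
import numpy as np

def lanczos_lowest(A, m=80, seed=0):
    """Single-layer Lanczos: build an m-step Krylov tridiagonalization of
    symmetric A and return the lowest Ritz value (approximate ground state).
    Full reorthogonalization keeps the basis numerically orthogonal."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = [q]
    alpha, beta = [], []
    for _ in range(m):
        w = A @ Q[-1]
        alpha.append(Q[-1] @ w)
        # reorthogonalize against all previous Lanczos vectors
        for v in Q:
            w -= (v @ w) * v
        b = np.linalg.norm(w)
        if b < 1e-12:           # Krylov space exhausted
            break
        beta.append(b)
        Q.append(w / b)
    k = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

H = np.diag(np.arange(1.0, 101.0))  # toy diagonal "Hamiltonian", levels 1..100
print(lanczos_lowest(H))            # converges to the lowest level, 1.0
```

The multi-layer and spectral-transform refinements in the paper target exactly the cost of the `A @ q` products for high-dimensional vibrational Hamiltonians.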
International Nuclear Information System (INIS)
Sakko, Arto; Rossi, Tuomas P; Nieminen, Risto M
2014-01-01
The presence of plasmonic material influences the optical properties of nearby molecules in nontrivial ways due to the dynamical plasmon-molecule coupling. We combine quantum and classical calculation schemes to study this phenomenon in a hybrid system that consists of a Na2 molecule located in the gap between two Au/Ag nanoparticles. The molecule is treated quantum-mechanically with time-dependent density-functional theory, and the nanoparticles with quasistatic classical electrodynamics. The nanoparticle dimer has a plasmon resonance in the visible part of the electromagnetic spectrum, and the Na2 molecule has an electron-hole excitation in the same energy range. Due to the dynamical interaction of the two subsystems, the plasmon and the molecular excitations couple, creating a hybridized molecular-plasmon excited state. This state has unique properties that yield, e.g., enhanced photoabsorption compared to the freestanding Na2 molecule. The computational approach used enables decoupling of the mutual plasmon-molecule interaction, and our analysis verifies that it is not legitimate to neglect the backcoupling effect when describing the dynamical interaction between plasmonic material and nearby molecules. Time-resolved analysis shows nearly instantaneous formation of the coupled state, and provides an intuitive picture of the underlying physics. (paper)
International Nuclear Information System (INIS)
Tanaka, Midori; Tanimura, Yoshitaka
2010-01-01
Multiple displaced oscillators coupled to an Ohmic heat bath are used to describe electron transfer (ET) in a dissipative environment. By performing a canonical transformation, the model is reduced to a multilevel system coupled to a heat bath with the Brownian spectral distribution. A reduced hierarchy equations of motion approach is introduced for numerically rigorous simulation of the dynamics of the three-level system with various oscillator configurations, for different nonadiabatic coupling strengths and damping rates, and at different temperatures. The time evolution of the reduced density matrix elements illustrates the interplay of coherences between the electronic and vibrational states. The ET reaction rates, defined as a flux-flux correlation function, are calculated using the linear response of the system to an external perturbation as a function of activation energy. The results exhibit an asymmetric inverted parabolic profile in the small activation energy regime, due to the presence of the intermediate state between the reactant and product states, and a slowly decaying profile in the large activation energy regime, which arises from the quantum coherent transitions.
Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.
2015-01-01
A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes a...
Troise, A.D.; Ferracane, R.; Palermo, M.; Fogliano, V.
2014-01-01
In this paper a new targeted metabolic profiling approach using Orbitrap high-resolution mass spectrometry is described. For each food matrix, various classes of bioactive compounds and some specific metabolites of interest were selected on the basis of existing knowledge, creating an easy-to-read
Heidema, A.G.; Nagelkerke, N.
2008-01-01
To discriminate between breast cancer patients and controls, we used a three-step approach to obtain our decision rule. First, we ranked the mass/charge values using random forests, because it generates importance indices that take possible interactions into account. We observed that the top ranked
Ran, J.; Ditmar, P.G.; Klees, R.; Farahani, H.
2017-01-01
We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted
A deep learning approach for the analysis of masses in mammograms with minimal user intervention.
Dhungel, Neeraj; Carneiro, Gustavo; Bradley, Andrew P
2017-04-01
We present an integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention. This is a long standing problem due to low signal-to-noise ratio in the visualisation of breast masses, combined with their large variability in terms of shape, size, appearance and location. We break the problem down into three stages: mass detection, mass segmentation, and mass classification. For the detection, we propose a cascade of deep learning methods to select hypotheses that are refined based on Bayesian optimisation. For the segmentation, we propose the use of deep structured output learning that is subsequently refined by a level set method. Finally, for the classification, we propose the use of a deep learning classifier, which is pre-trained with a regression to hand-crafted feature values and fine-tuned based on the annotations of the breast mass classification dataset. We test our proposed system on the publicly available INbreast dataset and compare the results with the current state-of-the-art methodologies. This evaluation shows that our system detects 90% of masses at 1 false positive per image, has a segmentation accuracy of around 0.85 (Dice index) on the correctly detected masses, and overall classifies masses as malignant or benign with sensitivity (Se) of 0.98 and specificity (Sp) of 0.7. Copyright © 2017 Elsevier B.V. All rights reserved.
A population balance approach considering heat and mass transfer-Experiments and CFD simulations
International Nuclear Information System (INIS)
Krepper, Eckhard; Beyer, Matthias; Lucas, Dirk; Schmidtke, Martin
2011-01-01
Highlights: → The MUSIG approach was extended by mass transfer between the size groups to describe condensation or re-evaporation. → Experiments on steam bubble condensation in vertical co-current steam/water flows have been carried out. The cross-sectional gas fraction distribution, the bubble size distribution and the gas velocity profiles were measured. → The following phenomena could be reproduced in good agreement with the experiments: (a) dependence of the condensation rate on the initial bubble size distribution and (b) re-evaporation over the height in tests with low inlet temperature subcooling. - Abstract: Bubble condensation in sub-cooled water is a complex process to which various phenomena contribute. Since the condensation rate depends on the interfacial area density, bubble size distribution changes caused by breakup and coalescence play a crucial role. Experiments on steam bubble condensation in vertical co-current steam/water flows have been carried out in an 8 m long vertical DN200 pipe. Steam is injected into the pipe and the development of the bubbly flow is measured at different distances from the injection using a pair of wire-mesh sensors. By varying the steam nozzle diameter the initial bubble size can be influenced. Larger bubbles have a lower interfacial area density and therefore condense more slowly. Steam pressures between 1 and 6.5 MPa and sub-cooling temperatures from 2 to 12 K were applied. Due to the pressure drop along the pipe, the saturation temperature falls towards the upper pipe end. This affects the sub-cooling temperature and can even cause re-evaporation in the upper part of the test section. The experimental configurations are simulated with the CFD code CFX using an extended MUSIG approach, which includes bubble shrinking or growth due to condensation or re-evaporation. The development of the vapour phase along the pipe with respect to vapour void fractions and bubble sizes is qualitatively well reproduced.
Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G
2017-08-29
A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds in each mass channel are estimated. The statistical dependency between m/z channels is accounted for by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra and their associated retention times are estimated. Finally, a probabilistic library search is proposed, computing the spectral match against a high-resolution library. More specifically, a correlative measure is used that includes the uncertainties of the least-squares fitting as well as the probabilities of the different proposals for the number of compounds in the mixture. The method was tested on simulated high-resolution data as well as on a set of pesticides injected into a GC-Orbitrap with high coelution. The proposed pipeline was able to accurately detect the retention times and spectra of the peaks. In our case, an extreme coelution situation, 5 of the 7 compounds present in the selected region of interest were correctly assessed. Finally, comparison with classical deconvolution methods (i.e., MCR and AMDIS) indicates better performance of the proposed algorithm in terms of the number of correctly resolved compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
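The BIC-as-Occam's-razor step can be illustrated in isolation. The sketch below fits 1 to 3 Gaussian peaks to a noisy two-peak chromatogram (centres taken from a coarse retention-time grid, amplitudes by linear least squares) and picks the component count minimizing the BIC. The fixed peak width, grid and noise level are illustrative simplifications of the paper's probabilistic pipeline:

```python
import numpy as np
from itertools import combinations

def bic_peak_count(t, y, sigma=0.5, max_k=3):
    """Select the number of coeluting peaks with BIC as Occam's razor.
    For each candidate K, peak centres are chosen from a coarse grid and
    amplitudes by linear least squares; BIC = n*ln(RSS/n) + p*ln(n),
    with p = 2 parameters (centre + amplitude) per peak."""
    n = len(y)
    grid = np.arange(0.0, 10.0, 0.8)  # coarse candidate retention times
    bic = {}
    for k in range(1, max_k + 1):
        best_rss = np.inf
        for centers in combinations(grid, k):
            X = np.exp(-0.5 * ((t[:, None] - np.array(centers)) / sigma) ** 2)
            amps, *_ = np.linalg.lstsq(X, y, rcond=None)
            best_rss = min(best_rss, float(np.sum((y - X @ amps) ** 2)))
        bic[k] = n * np.log(best_rss / n) + 2 * k * np.log(n)
    return min(bic, key=bic.get)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
truth = np.exp(-0.5 * ((t - 4.0) / 0.5) ** 2) + 0.7 * np.exp(-0.5 * ((t - 5.6) / 0.5) ** 2)
y = truth + 0.05 * rng.standard_normal(t.size)
print(bic_peak_count(t, y))
```

For this synthetic signal the extra penalty term keeps a spurious third component from being selected, which is exactly the role BIC plays in the paper's assessment of the number of components.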
Nuclear structure effects on calculated fast neutron reaction cross sections
International Nuclear Information System (INIS)
Avrigeanu, V.
1992-01-01
The importance of accurate low-lying level schemes for reaction cross-section calculations and the need for microscopically calculated levels are demonstrated with reference to fast neutron induced reactions in the A = 50 atomic mass range. The uses of the discrete levels, both for normalization of phenomenological level density approaches and within Hauser-Feshbach calculations, are discussed in this respect. (Author)
Directory of Open Access Journals (Sweden)
Ait Brahim L.
2018-01-01
Full Text Available The Tleta of Beni Ider region, located SW of Tetouan (Rif Septentrional), experiences many mass instabilities. Mass movements were inventoried, mapped and characterized using satellite imagery, aerial photography and field data, coupled with existing geological and geomorphological documents. Understanding both their spatial distribution and the mechanisms generating them is very complex because of the large number of natural factors involved (geological, geomorphological, hydrological) in a relatively mountainous landscape with deep valleys, steep slopes and significant elevation changes. Thus, a multidisciplinary approach was adopted to elaborate the landslide susceptibility map of the region, taking into account interactions and causal relationships between the various natural parameters that tend to accentuate and aggravate the onset of landslides. The multidisciplinary database allowed us to evaluate susceptibility with a bivariate probabilistic model (weight of evidence). The resulting landslide susceptibility map is a major contribution to the development of urban planning schemes in the region.
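The bivariate weight-of-evidence model computes, for each class of a predisposing factor, positive and negative weights from the overlap between that class and the mapped landslides. The counts below are illustrative, not the study's data:

```python
import math

def weights_of_evidence(n_class_slide, n_class, n_slide, n_total):
    """Weight-of-evidence weights for one factor class (e.g. a slope band).
    n_class_slide: landslide cells inside the class; n_class: cells in class;
    n_slide: landslide cells in the study area; n_total: all cells."""
    # conditional probabilities of lying in the class, given slide / no slide
    p_b_s = n_class_slide / n_slide
    p_b_ns = (n_class - n_class_slide) / (n_total - n_slide)
    w_plus = math.log(p_b_s / p_b_ns)            # class present
    w_minus = math.log((1 - p_b_s) / (1 - p_b_ns))  # class absent
    return w_plus, w_minus, w_plus - w_minus     # contrast = class weight

# illustrative counts: a steep-slope class covering 2000 of 10000 cells
# and containing 150 of the 300 mapped landslide cells
wp, wm, c = weights_of_evidence(150, 2000, 300, 10000)
print(round(c, 3))  # positive contrast: the class favours landsliding
```

Summing such contrasts over all factor maps, cell by cell, yields the susceptibility index that is then classed into the final map.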
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (the cost calculator), together with a pilot test of its utility in the United States, where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes derived in England were perceived as very similar by child welfare staff in California county systems, with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two, with some possible advantages to the cost calculator approach, especially for estimating child welfare costs when evidence-based interventions are incorporated into routine practice.
Assembly of a Vacuum Chamber: A Hands-On Approach to Introduce Mass Spectrometry
Bussie`re, Guillaume; Stoodley, Robin; Yajima, Kano; Bagai, Abhimanyu; Popowich, Aleksandra K.; Matthews, Nicholas E.
2014-01-01
Although vacuum technology is essential to many aspects of modern physical and analytical chemistry, vacuum experiments are rarely the focus of undergraduate laboratories. We describe an experiment that introduces students to vacuum science and mass spectrometry. The students first assemble a vacuum system, including a mass spectrometer. While…
International Nuclear Information System (INIS)
Lougovski, A; Hofheinz, F; Maus, J; Schramm, G; Will, E; Hoff, J van den
2014-01-01
The aim of this study is the evaluation of on-the-fly volume-of-intersection computation for system geometry modelling in 3D PET image reconstruction. For this purpose we propose a simple geometrical model in which the cubic image voxels on the given Cartesian grid are approximated with spheres and the rectangular tubes of response (ToRs) are approximated with cylinders. The model was integrated into a fully 3D list-mode PET reconstruction for performance evaluation. In our model the volume of intersection between a voxel and the ToR is only a function of the impact parameter (the distance from the voxel centre to the ToR axis) but is independent of the relative orientation of voxel and ToR. This substantially reduces the computational complexity of the system matrix calculation. Based on phantom measurements it was determined that adjusting the diameters of the spherical voxel and the ToR in such a way that the actual voxel and ToR volumes are conserved leads to the best compromise between high spatial resolution, low noise, and suppression of Gibbs artefacts in the reconstructed images. Phantom as well as clinical datasets from two different PET systems (Siemens ECAT HR+ and Philips Ingenuity-TF PET/MR) were processed using the developed and the respective vendor-provided (line-of-intersection related) reconstruction algorithms. A comparison of the reconstructed images demonstrated very good performance of the new approach. The evaluation showed the respective vendor-provided reconstruction algorithms to have 34-41% lower resolution than the developed one while exhibiting comparable noise levels. Contrary to explicit point spread function modelling, our model has a simple, straightforward implementation and should be easy to integrate into existing reconstruction software, making it competitive with other existing resolution recovery techniques. (paper)
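The geometric simplification can be checked numerically: the sphere-cylinder intersection volume indeed depends only on the impact parameter. A Monte Carlo sketch (not the reconstruction code, which would evaluate this function analytically or from a table):

```python
import numpy as np

def intersection_volume(r_vox, r_tor, impact, n=200_000, seed=0):
    """Monte Carlo volume of intersection between a spherical voxel
    (radius r_vox, centred at the origin) and a cylindrical tube of response
    (radius r_tor, axis parallel to z at distance `impact` from the centre).
    By symmetry the result depends only on the impact parameter."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-r_vox, r_vox, size=(n, 3))       # cube enclosing sphere
    inside_sphere = (pts ** 2).sum(axis=1) <= r_vox ** 2
    # squared distance of each point from the ToR axis (x = impact, y = 0)
    inside_cyl = (pts[:, 0] - impact) ** 2 + pts[:, 1] ** 2 <= r_tor ** 2
    frac = np.mean(inside_sphere & inside_cyl)
    return frac * (2 * r_vox) ** 3  # fraction of the sampling cube's volume

# voxel fully inside the tube: volume -> (4/3)*pi*r^3 ≈ 4.19
print(intersection_volume(1.0, 3.0, 0.0))
# no overlap once impact > r_vox + r_tor
print(intersection_volume(1.0, 1.0, 2.5))  # 0.0
```

Because the result is a one-dimensional function of the impact parameter, the system matrix elements collapse to a single lookup, which is the source of the speed-up claimed in the abstract.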
Lefebvre, Corentin; Khartabil, Hassan; Boisson, Jean-Charles; Contreras-García, Julia; Piquemal, Jean-Philip; Hénon, Eric
2018-03-19
Extraction of the chemical interaction signature from local descriptors based on the electron density (ED) is still a fruitful field of development in chemical interpretation. In a previous work that used the promolecular ED (frozen ED), a new descriptor, δg, was defined. It represents the difference between a virtual upper limit of the ED gradient (∇ρIGM, IGM = independent gradient model), which represents a noninteracting system, and the true ED gradient (∇ρ). It can be seen as a measure of electron sharing brought about by ED contragradience. A compelling feature of this model is that it provides an automatic workflow that extracts the signature of interactions between selected groups of atoms. As with the noncovalent interaction (NCI) approach, it provides chemists with a visual understanding of the interactions present in chemical systems. ∇ρIGM is obtained simply by taking absolute values when summing the individual gradient contributions that make up the total ED gradient. Here, we extend this model to the relaxed ED calculated from a wave function. To this end, we formulated a gradient-based partitioning (GBP) to assess the contribution of each orbital to the total ED gradient. We highlight these new possibilities with two prototypical examples of organic chemistry: the unconventional hexamethylbenzene dication, with a hexacoordinated carbon atom, and β-thioaminoacrolein. It is shown how a bond-by-bond picture can be obtained from a wave function, which opens the way to monitoring specific interactions along reaction paths. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
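At each grid point the δg construction reduces to comparing the norm of the component-wise absolute sum of the gradient contributions with the norm of their true sum. A toy sketch with made-up gradient vectors:

```python
import numpy as np

def delta_g(gradient_contributions):
    """IGM descriptor at one grid point.
    gradient_contributions: array (n_sources, 3) of per-atom (or, in the GBP
    extension, per-orbital) ED gradient vectors. |grad rho_IGM| sums absolute
    values component-wise (the noninteracting upper limit); |grad rho| is the
    norm of the true summed gradient."""
    g = np.asarray(gradient_contributions, dtype=float)
    grad_igm = np.sum(np.abs(g), axis=0)   # contragradience removed
    grad_true = np.sum(g, axis=0)
    return np.linalg.norm(grad_igm) - np.linalg.norm(grad_true)

# opposed gradients (electron sharing between sources): delta_g > 0
print(delta_g([[0.3, 0.0, 0.0], [-0.2, 0.0, 0.0]]))  # 0.5 - 0.1 = 0.4
# aligned gradients (no contragradience): delta_g = 0
print(delta_g([[0.3, 0.0, 0.0], [0.2, 0.0, 0.0]]))   # 0.0
```

δg is therefore zero wherever the contributions never oppose each other, and positive exactly where contragradience (interaction) occurs, which is what makes it usable as a visual interaction signature.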
Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N
2009-12-15
The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks with dozens of molecules matching the same nominal mass. Such a complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for automatic treatment of FTICR mass spectra of NOM and humic substances based on total mass difference statistics (TMDS) has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. This algorithm was implemented in designing FIRAN-software for fully automated analysis of mass data with high peak density. The specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using "virtual elements" approach. To verify the approach, it was used for processing mass spectra of sodium polystyrene sulfonate (PSS, M(w) = 2200 Da) and polymethacrylate (PMA, M(w) = 3290 Da) which produce heavy multielement and multiply-charged ions. Application of TMDS identified unambiguously monomers present in the polymers consistent with their structure: C(8)H(7)SO(3)Na for PSS and C(4)H(6)O(2) for PMA. It also allowed unambiguous formula assignment to all multiply-charged peaks including the heaviest peak in PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS-algorithm to processing data on the Suwannee River FA has proven its unique capacities in analysis of spectra with high peak density: it has not only identified the known small building blocks in the structure of FA such as CH(2), H(2), C(2)H(2)O, O but the heavier unit at 154.027 amu. The latter was
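The core of a TMDS-style blind search can be sketched as a histogram of all pairwise peak-mass differences; repeating differences reveal building blocks without assuming them a priori. A synthetic CH2 homologous series stands in here for a real NOM spectrum with thousands of peaks:

```python
import numpy as np
from itertools import combinations

def top_mass_differences(masses, bin_width=0.001, top=3):
    """Total mass difference statistics (TMDS) sketch: bin all pairwise
    peak-mass differences and return the most frequent ones, which expose
    repetitive building blocks in the spectrum."""
    diffs = [abs(a - b) for a, b in combinations(masses, 2)]
    diffs = [d for d in diffs if d > 0.5]          # ignore fine-structure splits
    bins = np.round(np.array(diffs) / bin_width) * bin_width
    values, counts = np.unique(bins, return_counts=True)
    order = np.argsort(counts)[::-1]
    return [(round(float(values[i]), 4), int(counts[i])) for i in order[:top]]

CH2 = 14.01565                                     # exact mass of a CH2 unit
masses = [200.0836 + k * CH2 for k in range(6)]    # synthetic homologous series
print(top_mass_differences(masses)[0])             # (14.016, 5)
```

On real data the recovered differences (CH2, H2, C2H2O, O, ...) then serve as the "virtual elements" used for formula assignment.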
A dried blood spot mass spectrometry metabolomic approach for rapid breast cancer detection
Directory of Open Access Journals (Sweden)
Wang Q
2016-03-01
Full Text Available Qingjun Wang,1,2,* Tao Sun,3,* Yunfeng Cao,1,2,4,5 Peng Gao,2,4,6 Jun Dong,2,4 Yanhua Fang,2 Zhongze Fang,2 Xiaoyu Sun,2 Zhitu Zhu1,2 1Oncology Department 2, The First Affiliated Hospital of Liaoning Medical University, 2Personalized Treatment and Diagnosis Research Center, The First Affiliated Hospital of Liaoning Medical University and Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Jinzhou, 3Department of Internal Medicine 1, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, 4CAS Key Laboratory of Separation Science for Analytical Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian, 5Key Laboratory of Contraceptives and Devices Research (NPFPC), Shanghai Engineer and Technology Research Center of Reproductive Health Drug and Devices, Shanghai Institute of Planned Parenthood Research, Shanghai, 6Clinical Laboratory, Dalian Sixth People's Hospital, Dalian, People's Republic of China *These authors contributed equally to this work Objective: Breast cancer (BC) is still a lethal threat to women worldwide. An accurate screening and diagnosis strategy performed in an easy-to-operate manner is highly warranted from a clinical perspective. Besides the routinely focused protein markers, blood is full of small molecular metabolites with diverse structures and properties. This study aimed to screen metabolite markers with BC diagnostic potential. Methods: A dried blood spot-based direct infusion mass spectrometry (MS) metabolomic analysis was conducted for BC and non-BC differentiation. The targeted analytes included 23 amino acids and 26 acylcarnitines. Results: Multivariate analysis screened out 21 BC-related metabolites in the blood. Regression analysis generated a diagnosis model consisting of parameters Pip, Asn, Pro, C14:1/C16, Phe/Tyr, and Gly/Ala. Tested with another set of BC and non-BC samples, this model showed a sensitivity of 92.2% and a specificity
La Colla, Luca; Albertin, Andrea; La Colla, Giorgio; Porta, Andrea; Aldegheri, Giorgio; Di Candia, Domenico; Gigli, Fausto
2010-01-01
In a previous article, we showed that the pharmacokinetic set of remifentanil used for target-controlled infusion (TCI) might be biased in obese patients because it incorporates flawed equations for the calculation of lean body mass (LBM), which is a covariate of several pharmacokinetic parameters in this set. The objectives of this study were to determine the predictive performance of the original pharmacokinetic set, which incorporates the James equation for LBM calculation, and to determine the predictive performance of the pharmacokinetic set when a new method to calculate LBM was used (the Janmahasatian equations). This was an observational study with intraoperative observations and no follow-up. Fifteen morbidly obese inpatients scheduled for bariatric surgery were included in the study. The intervention included manually controlled continuous infusion of remifentanil during the surgery and analysis of arterial blood samples to determine the arterial remifentanil concentration, to be compared with concentrations predicted by either the unadjusted or the adjusted pharmacokinetic set. The statistical analysis included parametric and non-parametric tests on continuous variables and determination of the median performance error (MDPE), median absolute performance error (MDAPE), divergence and wobble. The median values (interquartile ranges) of the MDPE, MDAPE, divergence and wobble for the James equations during maintenance were -53.4% (-58.7% to -49.2%), 53.4% (49.0-58.7%), 3.3% (2.9-4.7%) and 1.4% h(-1) (1.1-2.5% h(-1)), respectively. The respective values for the Janmahasatian equations were -18.9% (-24.2% to -10.4%), 20.5% (13.3-24.8%), 2.6% (-0.7% to 4.5%) and 1.9% h(-1) (1.4-3.0% h(-1)). The performance (in terms of the MDPE and MDAPE) of the corrected pharmacokinetic set was better than that of the uncorrected one. The predictive performance of the original pharmacokinetic set is not clinically acceptable. Use of a corrected LBM value in morbidly obese
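The bias and inaccuracy metrics reported here (MDPE and MDAPE) are medians of percentage performance errors across samples. The sketch below uses invented remifentanil concentrations; a predicted value consistently above the measured one gives a negative MDPE, as reported for the James-equation pharmacokinetic set:

```python
import statistics

def performance_errors(measured, predicted):
    """TCI performance metrics: performance error PE (%) per sample, then
    median PE (MDPE, signed bias) and median absolute PE (MDAPE, inaccuracy)."""
    pe = [100.0 * (m - p) / p for m, p in zip(measured, predicted)]
    mdpe = statistics.median(pe)
    mdape = statistics.median(abs(e) for e in pe)
    return mdpe, mdape

# illustrative arterial concentrations (ng/mL), not the study's data
measured = [4.1, 3.8, 4.4, 3.5]
predicted = [8.0, 8.2, 8.5, 7.9]
mdpe, mdape = performance_errors(measured, predicted)
print(round(mdpe, 1), round(mdape, 1))  # -51.2 51.2
```

Divergence and wobble, also reported in the abstract, extend the same PE series with a time trend (slope of |PE| over time) and intra-individual variability (median |PE - MDPE|).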
Cajka, Tomas; Garay, Luis A; Sitepu, Irnayuli R; Boundy-Mills, Kyria L; Fiehn, Oliver
2016-10-28
A multiplatform mass spectrometry-based approach was used for elucidating extracellular lipids with biosurfactant properties produced by the oleaginous yeast Rhodotorula babjevae UCDFST 04-877. This strain secreted 8.6 ± 0.1 g/L extracellular lipids when grown in a benchtop bioreactor fed with 100 g/L glucose in medium without addition of a hydrophobic substrate, such as oleic acid. Untargeted reversed-phase liquid chromatography-quadrupole/time-of-flight mass spectrometry (QTOFMS) detected native glycolipid molecules with masses of 574-716 Da. After hydrolysis into the fatty acid and sugar components and hydrophilic interaction chromatography-QTOFMS analysis, the extracellular lipids were found to consist of hydroxy fatty acids and sugar alcohols. Derivatization and chiral separation gas chromatography-mass spectrometry (GC-MS) identified these components as d-arabitol, d-mannitol, (R)-3-hydroxymyristate, (R)-3-hydroxypalmitate, and (R)-3-hydroxystearate. In order to assemble these substructures back into the intact glycolipids that were detected in the initial screen, potential structures were acetylated in silico to match the observed molar masses and subsequently characterized by matching predicted and observed MS/MS fragmentation using the Mass Frontier software program. Eleven species of acetylated sugar alcohol esters of hydroxy fatty acids were characterized for this yeast strain.
Chatterjee, D.; Gulminelli, F.; Raduta, Ad. R.; Margueron, J.
2017-12-01
The question of correlations among empirical equation of state (EoS) parameters constrained by nuclear observables is addressed in a Thomas-Fermi meta-modeling approach. A recently proposed meta-modeling for the nuclear EoS in nuclear matter is augmented with a single finite size term to produce a minimal unified EoS functional able to describe the smooth part of the nuclear ground state properties. This meta-model can reproduce the predictions of a large variety of models, and interpolate continuously between them. An analytical approximation to the full Thomas-Fermi integrals is further proposed giving a fully analytical meta-model for nuclear masses. The parameter space is sampled and filtered through the constraint of nuclear mass reproduction with Bayesian statistical tools. We show that this simple analytical meta-modeling has a predictive power on masses, radii, and skins comparable to full Hartree-Fock or extended Thomas-Fermi calculations with realistic energy functionals. The covariance analysis on the posterior distribution shows that no physical correlation is present between the different EoS parameters. Concerning nuclear observables, a strong correlation between the slope of the symmetry energy and the neutron skin is observed, in agreement with previous studies.
A hybrid approach to protein differential expression in mass spectrometry-based proteomics
Wang, X.; Anderson, G. A.; Smith, R. D.; Dabney, A. R.
2012-01-01
MOTIVATION: Quantitative mass spectrometry-based proteomics involves statistical inference on protein abundance, based on the intensities of each protein's associated spectral peaks. However, typical MS-based proteomics datasets have substantial
DEFF Research Database (Denmark)
Nielsen, Peter; Brunø, Thomas Ditlev; Nielsen, Kjeld
2014-01-01
This paper presents a data mining method for analyzing historical configuration data providing a number of opportunities for improving mass customization capabilities. The overall objective of this paper is to investigate how specific quantitative analyses, more specifically the association rule...
DEFF Research Database (Denmark)
Erba, Elisabetta Boeri; Matthiesen, Rune; Bunkenborg, Jakob
2007-01-01
Using stable isotope labeling and mass spectrometry, we performed a sensitive, quantitative analysis of multiple phosphorylation sites of the epidermal growth factor (EGF) receptor. Phosphopeptide detection efficiency was significantly improved by using the tyrosine phosphatase inhibitor sodium p...
International Nuclear Information System (INIS)
Yue, Ning J.
2008-01-01
As different types of radionuclides (e.g., the 131Cs source) are introduced for clinical use in brachytherapy, the question arises whether a relatively simple method exists for deriving values of the half-value layer (HVL) or the tenth-value layer (TVL). For radionuclides that have been used clinically for years, such as 125I and 103Pd, sources have been manufactured and marketed by several vendors with different designs and structures. Because these radionuclides emit low-energy photons, the energy spectra of the sources depend strongly on their individual designs. Though HVL and TVL values in commonly used shielding materials are relatively small for these low-energy photon-emitting sources, the question remains how the variations in energy spectra affect the HVL (or TVL) values and whether these values can be calculated with a relatively simple method. A more fundamental question is whether a method can be established to derive the HVL (TVL) values for any brachytherapy source and for different materials in a relatively straightforward fashion. This study was undertaken to answer these questions. Based on energy spectra, a well-established semiempirical mass attenuation coefficient computing scheme was utilized to derive the HVL (TVL) values of different materials for different types of brachytherapy sources. The method presented in this study may be useful for estimating HVL (TVL) values of different materials for brachytherapy sources of different designs and containing different radionuclides.
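The spectrum dependence means the HVL/TVL must be found by solving for the thickness at which the total transmitted intensity drops to 1/2 or 1/10. A sketch under narrow-beam assumptions, with a made-up two-line spectrum and placeholder attenuation coefficients (the study's semiempirical mass attenuation scheme is not reproduced here):

```python
import math

def value_layer(spectrum, fraction):
    """Thickness (cm) transmitting `fraction` of the emitted photons for a
    discrete energy spectrum: solve sum_i w_i * exp(-mu_i * x) = fraction
    by bisection (narrow-beam exponential attenuation).
    spectrum: list of (energy_keV, weight, mu_per_cm); energy is a label."""
    def transmission(x):
        return sum(w * math.exp(-mu * x) for _, w, mu in spectrum)
    target = fraction * transmission(0.0)
    lo, hi = 0.0, 1.0
    while transmission(hi) > target:   # bracket the root
        hi *= 2
    for _ in range(60):                # bisect to machine-level precision
        mid = 0.5 * (lo + hi)
        if transmission(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# placeholder two-line spectrum with linear attenuation coefficients (1/cm)
# in the shield; these numbers are illustrative, not data for any real seed
spectrum = [(27.4, 0.7, 22.0), (31.0, 0.3, 18.0)]
hvl = value_layer(spectrum, 0.5)
tvl = value_layer(spectrum, 0.1)
print(round(hvl * 10, 3), "mm;", round(tvl * 10, 3), "mm")
```

Note that for a multi-line spectrum TVL/HVL exceeds log(10)/log(2), since the harder line survives preferentially (beam hardening), which is exactly why the spectral variations discussed above matter.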
Saidi, F.; Sebaa, N.; Mahmoudi, A.; Aourag, H.; Merad, G.; Dergal, M.
2018-06-01
We performed first-principles calculations to investigate the structural, phase stability, electronic and mechanical properties of the Laves phases YM2 (M = Mn, Fe, Co) in the C15, C14 and C36 structures. We used density functional theory with pseudopotentials and a plane-wave basis as implemented in VASP (Vienna Ab initio Simulation Package). The calculated equilibrium structural parameters are in accordance with available theoretical values. Mechanical properties were calculated, discussed, and analyzed with a data-mining approach in terms of structure stability. The results reveal that YCo2 is harder than YFe2 and YMn2.
International Nuclear Information System (INIS)
Borodkin, P.G.; Khrennikov, N.N.; Ryabinin, Yu.A.; Adeev, V.A.
2015-01-01
A description is given of a universal procedure for calculating the fast neutron fluence (FNF) on WWER vessels. The procedure was validated by comparing its results with measurements on the outer surface of WWER-440 and WWER-1000 vessels. In addition, the uncertainty of the calculation procedure was estimated in accordance with the requirements of regulatory documents. The developed procedure is applied at the Kola NPP for independent fast neutron fluence estimates on WWER-440 reactor vessels when planning core loads that take the introduction of new fuels into account. The results of the pilot operation of the procedure at the Kola NPP were taken into account when improving it and extending its application to FNF calculations on WWER-1000 vessels.
Directory of Open Access Journals (Sweden)
Nasser Bahonar
2008-10-01
Religious mass-media programs produced exclusively for children have grown significantly in recent years. Artistic renderings of stories from the lives of the great prophets and from the history of Islam, as well as the use of theatrical literature on religious occasions, can herald successes in this neglected field. What remains questionable in national religious policy, however, is why those involved in the religious education of children, whether in traditional media (family, mosques, religious communities, etc.) or in modern media (textbooks, press, radio and television), do not follow an integrated and coherent policy based on a proven theoretical view of religious communication. In fact, this question stems from the old opposition between audience-oriented and media-oriented approaches in communication studies, as well as the opposition between cognitivism and other approaches in psychology. The findings of the field study conducted by the author, together with the achievements of cognitive psychology in human communication and culturally audience-oriented approaches, especially reception theory in mass communication, can resolve some of the existing difficulties in the formulation of religious messages. Drawing upon the above-mentioned theoretical schools, this article introduces a useful approach to producing religious programs for children and describes the main tasks of mass media in this field accordingly.
International Nuclear Information System (INIS)
Sadek, M.A.
2006-01-01
The major elements of the El-Burulus lake water system are rainfall, agricultural drainage discharge, groundwater, human activities, evaporation, and water exchange between the lake and the Mediterranean Sea. The principal input sources are agricultural drainage (8 drains at the southern border of the lake) and sea water, with some contribution from precipitation, groundwater and human activities. Water is lost from the lake through evaporation and surface outflow. The present study was conducted using an isotopic mass balance approach to investigate the water balance of El-Burulus lake and to quantify the relative contributions of the different input/output components which affect the environmental and hydrological terms of the system. An isotopic evaporation pan experiment was performed to estimate the parameters relevant to the water balance (the isotopic composition of free air moisture and of the evaporating flux) and to simulate the isotopic enrichment of evaporation under atmospheric and hydraulic control. The isotopic mass balance approach employed herein facilitated the estimation of groundwater inflow to the lake, the evaporated fraction of total lake inflow (E/I) and of outflow (E/O), the ratio of surface inflow to surface outflow (I/O), and the residence time of lake water. The approach was validated by comparing the estimated parameters with previous hydrological investigations; a good match was found. Its relevance lies in its integrative scale and its comparatively simple implementation.
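The core of such an estimate can be sketched with the steady-state isotope mass balance of a well-mixed lake; the delta values below are illustrative placeholders, not the study's data.

```python
def evaporated_fraction(delta_in, delta_evap, delta_lake):
    """Steady-state isotope mass balance of a well-mixed lake.

    Water balance:   I = E + O
    Isotope balance: I*d_in = E*d_evap + O*d_lake
    Eliminating O gives E/I = (d_in - d_lake) / (d_evap - d_lake).
    """
    return (delta_in - delta_lake) / (delta_evap - delta_lake)

# Illustrative delta-18O values (permil): inflow -2.0, lake water +2.5,
# evaporating moisture -12.0 (the latter is what a pan experiment constrains).
e_over_i = evaporated_fraction(-2.0, -12.0, 2.5)   # ~0.31
```

The strongly depleted evaporate value is the hard-to-measure parameter; this is why the pan experiment in the study matters.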
Bergman, Nina; Shevchenko, Denys; Bergquist, Jonas
2014-01-01
This review summarizes various approaches for the analysis of low molecular weight (LMW) compounds by different laser desorption/ionization mass spectrometry techniques (LDI-MS). It is common to use an agent to assist the ionization, and small molecules are normally difficult to analyze by, e.g., matrix assisted laser desorption/ionization mass spectrometry (MALDI-MS) using the common matrices available today, because the latter are generally small organic compounds themselves. This often results in severe suppression of analyte peaks, or interference of the matrix and analyte signals in the low mass region. However, intrinsic properties of several LDI techniques such as high sensitivity, low sample consumption, high tolerance towards salts and solid particles, and rapid analysis have stimulated scientists to develop methods to circumvent matrix-related issues in the analysis of LMW molecules. Recent developments within this field as well as historical considerations and future prospects are presented in this review.
Slovenian National Landslide DataBase – A promising approach to slope mass movement prevention plan
Directory of Open Access Journals (Sweden)
Mihael Ribičič
2007-12-01
The Slovenian territory is, geologically speaking, very diverse and mainly composed of sediments or sedimentary rocks. Slope mass movements occur in almost all parts of the country. In the Alpine carbonate areas of northern Slovenia, rock falls, rock slides and even debris flows can be triggered. In the mountainous regions of central Slovenia, composed of various clastic rocks, large soil landslides are quite common, and the young soil sediments of eastern Slovenia host a large density of small soil landslides. The damage caused by slope mass movements is high, yet no common strategy or regulations to tackle this unwanted phenomenon, especially from the aspect of prevention, have been developed. One of the first steps towards an effective strategy against landslides and other slope mass movements is a central landslide database, in which (ideally) all known landslide occurrences would be reported and described in as much detail as possible. At the end of the National Landslide Database construction project, which concluded in May 2005, more than 6600 landslides were registered, of which almost half occurred at a known location and were accompanied by descriptions of their main characteristics. The established database is a chance for Slovenia to once and for all start a solid slope mass movement prevention plan. The only missing part, and the most important one, is the adoption of a legal act making the reporting of slope mass movement events to the database obligatory.
New approach to 3-D, high sensitivity, high mass resolution space plasma composition measurements
International Nuclear Information System (INIS)
McComas, D.J.; Nordholt, J.E.
1990-01-01
This paper describes a new type of 3-D space plasma composition analyzer. The design combines high sensitivity, high mass resolution measurements with somewhat lower mass resolution but even higher sensitivity measurements in a single compact and robust design. While the lower resolution plasma measurements are achieved using conventional straight-through time-of-flight mass spectrometry, the high mass resolution measurements are made by timing ions reflected in a linear electric field (LEF), where the restoring force that an ion experiences is proportional to the depth it travels into the LEF region. Consequently, the ion's equation of motion in that dimension is that of a simple harmonic oscillator and its travel time is simply proportional to the square root of the ion's mass/charge (m/q). While in an ideal LEF, the m/q resolution can be arbitrarily high, in a real device the resolution is limited by the field linearity which can be achieved. In this paper we describe how a nearly linear field can be produced and discuss how the design can be optimized for various different plasma regimes and spacecraft configurations
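The timing claim above can be sketched directly: for a restoring force proportional to depth, a reflected ion executes half a period of simple harmonic motion, so its time in the LEF region is pi*sqrt(m/(q*a)), independent of entry velocity. The field-gradient constant `a` below is an assumed instrument parameter, not a value from the paper.

```python
import math

AMU = 1.66053906660e-27     # kg, atomic mass unit
E_CHARGE = 1.602176634e-19  # C, elementary charge

def lef_reflection_time(m_amu, q_e, a=2.0e9):
    """Time an ion spends in an ideal linear-electric-field (LEF) reflectron.

    Equation of motion: m z'' = -q a z (restoring force proportional to depth),
    so the ion is turned around after half an SHO period: t = pi*sqrt(m/(q a)).
    `a` (V/m^2) is an assumed field-gradient constant.
    """
    omega = math.sqrt(q_e * E_CHARGE * a / (m_amu * AMU))
    return math.pi / omega

# Flight time scales with sqrt(m/q): quadrupling the mass doubles the time.
t_mass4 = lef_reflection_time(4.0, 1.0)
t_mass1 = lef_reflection_time(1.0, 1.0)
```

Because the turnaround time does not depend on the entry speed, an ideal LEF removes the energy spread that limits resolution in a field-free drift region.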
International Nuclear Information System (INIS)
Traino, A. Claudio; Di Martino, Fabio; Lazzeri, Mauro
2004-01-01
The traditional algorithms (Marinelli-Quimby and MIRD) used for absorbed dose calculation in radionuclide therapy generally assume that the mass of the target organ does not change with time. In radioiodine therapy for Graves' disease this approximation may not be valid. In this paper a mathematical model of thyroid mass reduction during the clearance phase (30-35 days) after 131I administration to patients with Graves' disease is presented. A new algorithm for the absorbed dose calculation is derived, taking into account the reduction of the mass of the gland resulting from the 131I therapy. It is demonstrated that thyroid mass reduction has a considerable effect on the calculated radiation dose. Both the model of thyroid mass reduction and the new equation for the absorbed dose calculation depend on a parameter k specific to each patient. This parameter can be calculated after the administration of a diagnostic amount of radioiodine activity (0.37-1.85 MBq). Thus, thyroid absorbed dose and thyroid mass reduction during the first month after therapy can be predicted before therapy administration. The absorbed dose values calculated by the new algorithm are compared to those calculated by the traditional Marinelli-Quimby and MIRD algorithms.
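The qualitative effect can be illustrated without the paper's actual model: if activity clears exponentially while the gland shrinks, the time integral of activity per unit mass (to which absorbed dose is proportional) exceeds the constant-mass value. The half-life, shrinkage law and rate below are illustrative assumptions, not the paper's fitted patient model.

```python
import numpy as np

lam = np.log(2) / 7.0   # 1/day, assumed 7-day effective half-life of 131I in thyroid
k = 0.03                 # 1/day, hypothetical exponential mass-reduction rate
m0 = 40.0                # g, illustrative initial thyroid mass

t = np.linspace(0.0, 35.0, 20001)   # clearance phase, days
A = np.exp(-lam * t)                # relative activity

def trap(y):
    """Trapezoidal integral of y over t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

dose_const = trap(A / m0)                      # constant-mass assumption
dose_shrink = trap(A / (m0 * np.exp(-k * t)))  # shrinking-gland assumption
ratio = dose_shrink / dose_const               # >1: ignoring shrinkage underestimates dose
```

Under these assumptions the shrinking-mass integral is roughly a third larger, which is the sense in which a constant-mass algorithm misstates the dose.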
Directory of Open Access Journals (Sweden)
Ni Made Sintha Pratiwi
2018-05-01
Background: A person suffering from a mental disorder is burdened not only by the condition itself but also by stigma. The impact of stigma on society is so strong that it is considered an obstacle to the treatment of mental disorders. Stigma, as society's adverse view of severe mental disorders, is related to cultural aspects. The sunrise model, a nursing model developed by Madeleine Leininger, connects each of its components with the wider societal view of severe mental disorders. Objective: The aim of this study was to analyze the factors related to public stigma about severe mental illness, and to identify the dominant ones, through a sunrise model approach in Sukonolo Village, Malang Regency. Methods: This study used an observational analytical design with a cross-sectional approach. A total of 150 respondents, selected by purposive sampling, participated in the study. Results: The results showed a significant relationship between mass media exposure, spiritual well-being, interpersonal contact, attitude, and knowledge on the one hand and public stigma about mental illness on the other. Multiple logistic regression showed that low mass media exposure had the highest odds ratio (OR = 26.744). Conclusion: There were significant correlations between mass media exposure, spiritual well-being, interpersonal contact, attitude, and knowledge and public stigma toward mental illness, with mass media exposure as the dominant factor.
DEFF Research Database (Denmark)
Manning, Alisa K; Hivert, Marie-France; Scott, Robert A
2012-01-01
pathways might be uncovered by accounting for differences in body mass index (BMI) and potential interactions between BMI and genetic variants. We applied a joint meta-analysis approach to test associations with fasting insulin and glucose on a genome-wide scale. We present six previously unknown loci...... associated with fasting insulin at P triglyceride and lower high-density lipoprotein (HDL) cholesterol levels, suggesting a role for these loci...
International Nuclear Information System (INIS)
Adamson, S; Astapenko, V; Chernysheva, I; Chorkov, V; Deminsky, M; Demchenko, G; Demura, A; Demyanov, A; Dyatko, N; Eletzkii, A; Knizhnik, A; Kochetov, I; Napartovich, A; Rykova, E; Sukhanov, L; Umanskii, S; Vetchinkin, A; Zaitsevskii, A; Potapkin, B
2007-01-01
Present-day computational techniques make it possible to evaluate properties of macrosystems using ab initio quantum chemistry and theories of elementary processes. Physical and chemical phenomena on very different timescales have to be taken into account (excitation, emission, chemical reactions, diffusion) at different levels of refinement. This refinement covers a very wide region of parameters, from the structure of species up to the macroscopic chemical mechanism of their conversion. This multilevel approach is described in detail in the paper and includes interaction and data transfer between the different levels of description. In the framework of the approach, unknown properties of molecules, ions and atoms (structure, potential energy curves, transition dipole moments) are calculated with quantum-chemical methods. The calculation results are used to evaluate rate characteristics of physical and chemical processes. The developed state-to-state kinetic scheme is then used to calculate the macroscopic properties of the system under investigation. As an example of the multilevel approach, the emission properties of an Ar-GaI3 positive column discharge plasma were calculated using the Chemical Work Bench computational environment. The calculations yield the electron energy balance and emission efficiency as functions of the plasma parameters.
International Nuclear Information System (INIS)
Noor, Fatimah A.; Abdullah, Mikrajuddin; Sukirno; Khairurrijal
2010-01-01
Analytical expressions for the electron transmittance and tunneling current in an anisotropic TiNx/HfO2/SiO2/p-Si(100) metal-oxide-semiconductor (MOS) capacitor were derived by considering the coupling of the transverse and longitudinal energies of an electron. Exponential and Airy wavefunctions were utilized to obtain the electron transmittance and the electron tunneling current. A transfer matrix method (TMM), as a numerical approach, was used as a benchmark to assess the analytical approaches. It was found that the transmittances calculated with the exponential- and Airy-wavefunction approaches agree with the TMM at low electron energies. At high energies, however, only the transmittance calculated with the Airy-wavefunction approach matches that evaluated by the TMM. It was also found that only the tunneling currents calculated with the Airy-wavefunction approach agree with those obtained by the TMM over the whole range of oxide voltages. Therefore, a better analytical description of the tunneling phenomenon in the MOS capacitor is given by the Airy-wavefunction approach. Moreover, the tunneling current density decreases as the titanium concentration of the TiNx metal gate increases, because the electron effective mass of TiNx decreases with increasing nitrogen concentration. In addition, the mass anisotropy cannot be neglected, because the tunneling currents obtained with isotropic and anisotropic masses are very different.
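A minimal version of the benchmark TMM can be sketched for piecewise-constant barriers with a single isotropic effective mass; the paper's anisotropic, coupled-energy treatment and the actual TiNx/HfO2/SiO2 stack parameters are not reproduced here, and the barrier height and width below are placeholders.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg, free-electron mass
EV = 1.602176634e-19     # J per eV

def transmittance(E_eV, barriers):
    """Transfer-matrix transmittance through piecewise-constant potentials.

    `barriers` is a list of (height_eV, width_m) regions between V = 0 leads.
    Single isotropic mass only: a sketch of the benchmark method, not the
    paper's anisotropic MOS-stack model.
    """
    E = E_eV * EV
    Vs = [0.0] + [b[0] * EV for b in barriers] + [0.0]           # region potentials
    xs = np.cumsum([0.0] + [b[1] for b in barriers])             # interface positions
    ks = [np.sqrt(2.0 * M_E * (E - V) + 0j) / HBAR for V in Vs]  # complex wavenumbers

    def D(k, x):  # matches psi and psi' at x for plane-wave coefficients (A, B)
        return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                         [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

    M = np.eye(2, dtype=complex)
    for j, x in enumerate(xs):           # interface between regions j and j+1
        M = np.linalg.solve(D(ks[j + 1], x), D(ks[j], x)) @ M

    t = np.linalg.det(M) / M[1, 1]       # transmitted amplitude (no wave from the right)
    return float((ks[-1].real / ks[0].real) * abs(t) ** 2)

# Tunneling through a single 1 eV, 1 nm barrier grows rapidly with energy.
t_low = transmittance(0.3, [(1.0, 1e-9)])
t_high = transmittance(0.9, [(1.0, 1e-9)])
```

Because each region is handled by one 2x2 matrix, the same routine extends to multi-layer stacks simply by listing more (height, width) pairs.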
Reconnaissance Estimates of Recharge Based on an Elevation-dependent Chloride Mass-balance Approach
Energy Technology Data Exchange (ETDEWEB)
Charles E. Russell; Tim Minor
2002-08-31
Significant uncertainty is associated with efforts to quantify recharge in arid regions such as southern Nevada. However, accurate estimates of groundwater recharge are necessary for understanding the long-term sustainability of groundwater resources and for predicting groundwater flow rates and directions. Currently, the most widely accepted method for estimating recharge in southern Nevada is the Maxey-Eakin method. This method has been applied to most basins within Nevada and has been independently verified as a reconnaissance-level estimate of recharge through several studies. Recharge estimates derived from the Maxey-Eakin and other recharge methodologies ultimately based upon measures or estimates of groundwater discharge (outflow methods) should be augmented by a tracer-based aquifer-response method. The objective of this study was to improve an existing aquifer-response method based on the chloride mass-balance approach. The improvements were designed to incorporate spatial variability within recharge areas (rather than treating recharge as a lumped parameter), to develop a more defensible lower limit of recharge, and to differentiate local recharge from recharge emanating as interbasin flux. Seventeen springs, located in the Sheep Range, the Spring Mountains, and on the Nevada Test Site, were sampled during the course of this study and their discharge was measured. The chloride and bromide concentrations of the springs were determined. Discharge and chloride concentrations from these springs were compared to estimates in previously published reports. A literature search yielded previously published estimates of chloride flux to the land surface. 36Cl/Cl ratios and discharge rates of the three largest springs in the Amargosa Springs discharge area were compiled from various sources. This information was used to determine an effective chloride concentration for recharging precipitation and its associated uncertainty via Monte Carlo simulations.
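The basic chloride mass balance, with a Monte Carlo treatment of the input uncertainties, can be sketched as follows. In its simplest steady-state form, recharge R = P * Cl_p / Cl_gw; all distributions below are illustrative placeholders, not the report's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative input distributions (not the report's values):
P = rng.normal(250.0, 30.0, n)                        # precipitation, mm/yr
cl_p = np.clip(rng.normal(0.6, 0.15, n), 1e-3, None)  # effective Cl in precip, mg/L
cl_gw = rng.normal(15.0, 3.0, n)                      # Cl in spring water, mg/L

# Chloride mass balance: all chloride deposited with precipitation ends up in
# recharge, so R = P * Cl_p / Cl_gw (steady state, no runoff export of Cl).
R = P * cl_p / cl_gw
lo, med, hi = np.percentile(R, [2.5, 50.0, 97.5])     # recharge, mm/yr
```

Propagating the inputs as distributions rather than point values is what yields the "associated uncertainty" the abstract refers to.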
International Nuclear Information System (INIS)
Dermisek, Radovan
2004-01-01
We show that both small mixing in the quark sector and large mixing in the lepton sector can be obtained from a simple assumption of universality of Yukawa couplings and the right-handed neutrino Majorana mass matrix in leading order. We discuss conditions under which bilarge mixing in the lepton sector is achieved with a minimal amount of fine-tuning requirements for possible models. From knowledge of the solar and atmospheric mixing angles we determine the allowed values of sin θ13. If embedded into grand unified theories, third-generation Yukawa coupling unification is a generic feature, while the masses of the first two generations of charged fermions depend on small perturbations. In the neutrino sector, the heavier two neutrinos are model dependent, while the mass of the lightest neutrino in this approach does not depend on perturbations in the leading order. The right-handed neutrino mass scale can be identified with the GUT scale, in which case the mass of the lightest neutrino is given as (m_top^2/M_GUT) sin^2 θ23 sin^2 θ12 in the limit sin θ13 ≈ 0. Discussing symmetries, we make a connection with hierarchical models and show that the basis-independent characteristic of this scenario is a strong dominance of the third-generation right-handed neutrino, M1, M2 ≲ 10^-4 M3, with M3 = M_GUT.
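Plugging representative values into the quoted expression gives a feel for the scale; all inputs below are standard ballpark values, not numbers taken from the paper.

```python
# Lightest neutrino mass from m1 = (m_top^2 / M_GUT) * sin^2(th23) * sin^2(th12),
# evaluated with illustrative inputs (assumed, not from the paper).
m_top = 173.0      # GeV, top quark mass
M_GUT = 2.0e16     # GeV, assumed GUT scale
sin2_th23 = 0.5    # near-maximal atmospheric mixing
sin2_th12 = 0.31   # solar mixing

m1_GeV = (m_top ** 2 / M_GUT) * sin2_th23 * sin2_th12
m1_eV = m1_GeV * 1.0e9     # of order 1e-4 eV
```

The result sits a couple of orders of magnitude below the atmospheric mass scale, consistent with the abstract's statement that the lightest neutrino is insensitive to the perturbations.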
Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.
2018-03-01
We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
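The role of statistically optimal weighting can be sketched with a toy least-squares inversion: given the full error covariance C of the observations, the best linear unbiased estimate is x = (A^T C^-1 A)^-1 A^T C^-1 y. The design matrix, mascon count and covariance below are synthetic placeholders, not GRACE data or the paper's functional model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_mascons = 200, 5    # synthetic sizes, for illustration only

# Toy design matrix mapping mascon mass anomalies to gravity anomalies at altitude.
A = rng.normal(size=(n_obs, n_mascons))
x_true = np.array([10.0, -5.0, 3.0, 0.0, -8.0])   # Gt, illustrative anomalies

# Full (correlated) error covariance of the synthetic observations.
L = 0.05 * rng.normal(size=(n_obs, n_obs))
C = L @ L.T + 0.5 * np.eye(n_obs)
y = A @ x_true + rng.multivariate_normal(np.zeros(n_obs), C)

# Statistically optimal weighting: W = C^-1, then x = (A^T W A)^-1 A^T W y.
W = np.linalg.inv(C)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

Using the full covariance (rather than a diagonal approximation) is the toy analogue of the paper's propagation of the monthly GRACE error covariance matrices.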
The mass transfer approach to multivariate discrete first order stochastic dominance
DEFF Research Database (Denmark)
Østerdal, Lars Peter Raahave
2010-01-01
A fundamental result in the theory of stochastic dominance tells that first order dominance between two finite multivariate distributions is equivalent to the property that the one can be obtained from the other by shifting probability mass from one outcome to another that is worse a finite numbe...
Czech Academy of Sciences Publication Activity Database
Zemenová, Jana; Sýkora, D.; Adámková, H.; Maletínská, Lenka; Elbert, Tomáš; Marek, Aleš; Blechová, Miroslava
2017-01-01
Roč. 40, č. 5 (2017), s. 1032-1039 ISSN 1615-9306 Institutional support: RVO:61388963 Keywords : enzyme-linked immunosorbent assay * ghrelin * lipopeptides * liquid chromatography mass spectrometry * monolithic columns Subject RIV: CB - Analytical Chemistry, Separation OBOR OECD: Analytical chemistry Impact factor: 2.557, year: 2016
Integrated genomic approaches implicate osteoglycin (Ogn) in the regulation of left ventricular mass
Petretto, Enrico; Sarwar, Rizwan; Grieve, Ian; Lu, Han; Kumaran, Mande K.; Muckett, Phillip J.; Mangion, Jonathan; Schroen, Blanche; Benson, Matthew; Punjabi, Prakash P.; Prasad, Sanjay K.; Pennell, Dudley J.; Kiesewetter, Chris; Tasheva, Elena S.; Corpuz, Lolita M.; Webb, Megan D.; Conrad, Gary W.; Kurtz, Theodore W.; Kren, Vladimir; Fischer, Judith; Hubner, Norbert; Pinto, Yigal M.; Pravenec, Michal; Aitman, Timothy J.; Cook, Stuart A.
2008-01-01
Left ventricular mass (LVM) and cardiac gene expression are complex traits regulated by factors both intrinsic and extrinsic to the heart. To dissect the major determinants of LVM, we combined expression quantitative trait locus (eQTL) and quantitative trait transcript (QTT) analyses of the cardiac
Constraint specification in architecture : a user-oriented approach for mass customization
Niemeijer, R.A.
2011-01-01
The last several decades, particularly those after the end of World War II, have seen an increasing industrialization of the housing industry. This was partially driven by the large demand for new houses that resulted from the baby boom and the destruction caused by the war. By adopting mass
Gao, Wu; Xu, Wenjie; Bian, Xuecheng; Chen, Yunmin
2017-11-01
The settlement of any position in the municipal solid waste (MSW) body during the landfilling process and after closure affects the integrity of the internal structure and the storage capacity of the landfill. This paper proposes a practical approach for calculating the settlement and storage capacity of landfills based on a space and time discretization of the landfilling process. The MSW body in the landfill is divided into independent column units, and the filling process of each column unit is determined by a simplified complete landfilling process. The settlement at a position in the landfill is calculated from the compression of each MSW layer in every column unit. The simultaneous settlement of all the column units is then integrated to obtain the settlement of the landfill and the storage capacity of all the column units; this yields the storage capacity of the landfill based on the layer-wise summation method. When the compression of each MSW layer is calculated, the effects of the fluctuation of the main leachate level and of the variation in the unit weight of the MSW on the overburden effective stress are taken into consideration by introducing the main leachate level's proportion and the unit weight versus buried depth curve. This approach is especially significant for MSW with a high kitchen waste content and for landfills in developing countries. The stress-biodegradation compression model is used to calculate the compression of each MSW layer. A software program, Settlement and Storage Capacity Calculation System for Landfills, was developed by integrating the space and time discretization of the landfilling process with the settlement and storage capacity algorithms. The landfilling process of phase IV of the Shanghai Laogang Landfill was simulated using this software. The maximum error in the geometric volume of the landfill between the calculated and measured values is only 2.02%, and the accumulated filling weight error between the
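The layer-wise summation idea can be sketched as follows: each column unit is a stack of MSW layers filled at different times, and total settlement is the sum of the layer compressions. The compression law and constants below are a simple illustrative stress-plus-degradation form, not the paper's calibrated stress-biodegradation model.

```python
import math

def layer_compression(h0_m, sigma_kPa, age_years,
                      Cc=0.2, sigma_ref=10.0, c_deg=0.10, t_half=8.0):
    """Compression of one MSW layer: a primary (stress) term plus a
    biodegradation term. Illustrative law and constants, not the paper's model."""
    primary = Cc * math.log10(max(sigma_kPa, sigma_ref) / sigma_ref) * h0_m
    degradation = c_deg * h0_m * (1.0 - math.exp(-math.log(2.0) * age_years / t_half))
    return primary + degradation

def column_settlement(layers):
    """Layer-wise summation for one column unit.

    layers: iterable of (initial thickness m, overburden stress kPa, age years),
    each layer carrying the stress and age it has at the evaluation time.
    """
    return sum(layer_compression(h, s, t) for h, s, t in layers)

# A deeper, older column settles more than a shallow, young one.
s_deep = column_settlement([(5.0, 150.0, 12.0), (5.0, 80.0, 8.0)])
s_shallow = column_settlement([(5.0, 40.0, 2.0)])
```

Summing the columns over the whole fill plan is then what turns per-layer compression into landfill-scale settlement and remaining storage capacity.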
Wu, X.; Jiang, Y.; Simonsen, S.; van den Broeke, M. R.; Ligtenberg, S.; Kuipers Munneke, P.; van der Wal, W.; Vermeersen, B. L. A.
2017-12-01
Determining present-day mass transport (PDMT) is complicated by the fact that most observations contain signals from both present day ice melting and Glacial Isostatic Adjustment (GIA). Despite decades of progress in geodynamic modeling and new observations, significant uncertainties remain in both. The key to separate present-day ice mass change and signals from GIA is to include data of different physical characteristics. We designed an approach to separate PDMT and GIA signatures by estimating them simultaneously using globally distributed interdisciplinary data with distinct physical information and a dynamically constructed a priori GIA model. We conducted a high-resolution global reappraisal of present-day ice mass balance with focus on Earth's polar regions and its contribution to global sea-level rise using a combination of ICESat, GRACE gravity, surface geodetic velocity data, and an ocean bottom pressure model. Adding ice altimetry supplies critically needed dual data types over the interiors of ice covered regions to enhance separation of PDMT and GIA signatures, and achieve half an order of magnitude expected higher accuracies for GIA and consequently ice mass balance estimates. The global data based approach can adequately address issues of PDMT and GIA induced geocenter motion and long-wavelength signatures important for large areas such as Antarctica and global mean sea level. In conjunction with the dense altimetry data, we solved for PDMT coefficients up to degree and order 180 by using a higher-resolution GRACE data set, and a high-resolution a priori PDMT model that includes detailed geographic boundaries. The high-resolution approach solves the problem of multiple resolutions in various data types, greatly reduces aliased errors from a low-degree truncation, and at the same time, enhances separation of signatures from adjacent regions such as Greenland and Canadian Arctic territories.
International Nuclear Information System (INIS)
Ishikawa, Fumitaka; Sumi, Mika; Chiba, Masahiko; Suzuki, Toru; Abe, Tomoyuki; Kuno, Yusuke
2008-01-01
The accountancy analysis of nuclear fuel material at the Plutonium Fuel Development Center of JAEA is performed by isotope dilution mass spectrometry (IDMS). IDMS requires a standard material called the LSD spike (Large Size Dried spike), which is indispensable for accountancy in facilities where nuclear fuel materials are handled. Although the LSD spike and the Pu source material have been supplied from foreign countries, the transportation of such materials has recently become more difficult. This difficulty may affect the operation of nuclear facilities in the future. Therefore, research and development of a domestic LSD spike and base material has been performed at JAEA. Certification of such standard nuclear materials, including spikes produced in Japan, is being studied. This report presents the current status and the future plan for this technological development. (author)
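The basic isotope-dilution relation underlying IDMS can be sketched as follows; the ratio convention (spike isotope over reference isotope) and the numbers are illustrative, and real accountancy measurements include weighing, blank, and mass-fractionation corrections not shown here.

```python
def idms_sample_amount(n_spike, R_spike, R_sample, R_blend):
    """Moles of the reference isotope contributed by the sample, from the
    classic isotope-dilution relation. R is the ratio of the spike isotope
    to the reference isotope in each material."""
    return n_spike * (R_spike - R_blend) / (R_blend - R_sample)

# Synthetic self-consistency check: build a blend, then invert it.
n_spike, R_spike = 1.0, 100.0    # moles of reference isotope in the spike, its ratio
n_sample, R_sample = 2.0, 0.01   # the "unknown" we want to recover
R_blend = (n_spike * R_spike + n_sample * R_sample) / (n_spike + n_sample)
recovered = idms_sample_amount(n_spike, R_spike, R_sample, R_blend)   # = 2.0 exactly
```

The inversion is exact by construction, which is why IDMS accuracy in practice hinges on how well the spike (the LSD spike here) is characterized.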
Real estate market and building energy performance: Data for a mass appraisal approach.
Bonifaci, Pietro; Copiello, Sergio
2015-12-01
Mass appraisal is widely considered an advanced frontier in the real estate valuation field. Performing mass appraisal entails getting access to the base information conveyed by a large number of transactions, such as prices and property features. Due to the lack of transparency of many Italian real estate market segments, our survey was designed to gather data from residential property advertisements. The dataset focuses specifically on property offer prices and dwelling energy efficiency, the latter referring to the label expressed and exhibited by the energy performance certificate. Moreover, the data are georeferenced with the highest possible accuracy: at the neighborhood level for 76.8% of cases, and at the street or building-number level for the remaining 23.2%. The data relate to the analysis performed in Bonifaci and Copiello [1] of the relationship between house prices and building energy performance, that is to say, the willingness to pay to benefit from more efficient dwellings.
Westgate Shootings: An Emergency Department Approach to a Mass-casualty Incident.
Wachira, Benjamin W; Abdalla, Ramadhani O; Wallis, Lee A
2014-10-01
At approximately 12:30 pm on Saturday September 21, 2013, armed assailants attacked the upscale Westgate shopping mall in the Westlands area of Nairobi, Kenya. Using the seven key Major Incident Medical Management and Support (MIMMS) principles, command, safety, communication, assessment, triage, treatment, and transport, the Aga Khan University Hospital, Nairobi (AKUH,N) emergency department (ED) successfully coordinated the reception and care of all the casualties brought to the hospital. This report describes the AKUH,N ED response to the first civilian mass-casualty shooting incident in Kenya, with the hope of informing the development and implementation of mass-casualty emergency preparedness plans by other EDs and hospitals in Kenya, appropriate for the local health care system.
Directory of Open Access Journals (Sweden)
Jing Niu
2013-01-01
reproducing kernel on infinite interval is obtained concisely in polynomial form for the first time. Furthermore, as a particular effective application of this method, we give an explicit representation formula for calculation of reproducing kernel in reproducing kernel space with boundary value conditions.
International Nuclear Information System (INIS)
Mo, O.; Riera, A.; Yáñez, M.
1985-01-01
We present an extension of the analytical method of Macias and Riera to calculate radial couplings, to include model potentials or (local and nonlocal) pseudopotentials, in the Hamiltonian. As an illustration, energies, couplings, and momentum matrix elements are presented and discussed for the two-effective-electron NaH quasimolecule, as a stringent test case
Stur, J.; Bos, M.; van der Linden, W.E.
1984-01-01
Fast and accurate calculation procedures for pH and redox potentials are required for optimum control of automatic titrations. The procedure suggested is based on a three-dimensional titration curve V = f(pH, redox potential). All possible interactions between species in the solution, e.g., changes
International Nuclear Information System (INIS)
Fukumoto, Hideshi
1986-01-01
A new method and an accompanying computer code, CINAC, have been developed for the calculation of neutron-induced transmutation, radioactivity and afterheat. In the method, the generation and depletion of nuclides during and after reactor operation are described in a matrix form in which the arrangement of nuclides is determined systematically. The solutions are obtained by an eigenvalue analysis of the matrix, without any time steps or iterative schemes that would increase the computational time. The method can treat any type of activation chain equally, and it gives analytical solutions for linear chains. The CINAC code, coupled with the radiation transport codes ANISN and DOT3.5, can also calculate the dose distribution at any time after shutdown in a one- or two-dimensional geometry of fusion reactors. Two calculations were carried out using CINAC to confirm its validity. The results were compared to those calculated by the THIDA code system, which is based on a matrix exponential method. The new method was 50 times faster than the latter, while the discrepancy between them was at most 4%. (author)
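The eigenvalue idea can be sketched for a simple linear decay chain: writing dN/dt = A N and diagonalizing A gives N(t) = V exp(Lt) V^-1 N(0) in one shot, with no time stepping, and the result can be checked against the Bateman solution. The decay constants below are illustrative, not from any CINAC case.

```python
import numpy as np

# Two-member decay chain: nuclide 1 -> nuclide 2 -> (removed).
lam1, lam2 = 0.693, 0.0693          # illustrative decay constants, 1/day
A = np.array([[-lam1,   0.0],
              [ lam1, -lam2]])

# Eigen-decomposition gives the analytic solution N(t) = V exp(Lt) V^-1 N0.
w, V = np.linalg.eig(A)
V_inv = np.linalg.inv(V)

def nuclides(t, N0):
    """Nuclide inventory at time t, with no time stepping."""
    return (V @ np.diag(np.exp(w * t)) @ V_inv @ N0).real

N0 = np.array([1.0, 0.0])
N = nuclides(10.0, N0)

# Cross-check against the classic Bateman solution for the daughter nuclide.
bateman2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * 10.0) - np.exp(-lam2 * 10.0))
```

Because the solution is closed-form in t, evaluating the inventory at an arbitrary time after shutdown costs the same as at any other time, which is the speed advantage the abstract describes over stepwise matrix-exponential schemes.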