A Relativistic Calculation of Baryon Masses
Giammarco, Joseph Michael
1990-01-01
We calculate ground state baryon masses using a saddle-point variational (SPV) method, which permits the use of fully relativistic 4-component Dirac spinors without the need for positive-energy projection operators. This variational approach has been shown to work in the relativistic domain for one particle in an external potential (Dirac equation). We have extended its use to the relativistic 3-body Breit equation. Our procedure is as follows: we pick a trial wave function having the appropriate spin, flavor and color dependence. This can be accomplished with a non-symmetric relativistic spatial wave function having two different size parameters if the first two quarks are always chosen to be identical. We then calculate an energy eigenvalue for the particle state and vary the parameters in our wave function to search for a "saddle point". We minimize the energy with respect to the two size parameters and maximize with respect to two parameters that measure the contribution from the negative-energy states. This gives the baryon's mass as a function of four input parameters: the masses of the up, down and strange quarks (m_{u=d}, m_{s}) and the strengths of the coupling constants for the potentials (alpha_{s}, mu). We do this for the eight baryon ground states and fit these to experimental data. This fit gives the values of the input parameters. For the potentials we use a Coulombic term to represent one-gluon exchange and a linear term for confinement. For both terms we include a retardation term required by relativity. We also add delta-function and spin-spin terms to account for the large contribution of the Coulomb interaction at the origin. The results we obtain from our SPV method are in good agreement with experimental data. The actual search for the saddle-point parameters and the fitting of the quark masses and the values of the coupling strengths was done on a CDC Cyber 860.
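The minimax structure described above (minimize over the size parameters, maximize over the negative-energy parameters) can be sketched with a toy energy surface. The function, starting points and step sizes below are illustrative assumptions, not the authors' actual Breit-equation calculation.

```python
# Illustrative sketch (not the authors' code) of a saddle-point variational
# search: the energy is minimized over "size" parameters and maximized over
# parameters controlling the negative-energy admixture.  The toy energy
# surface below stands in for the true Breit-equation expectation value.

def toy_energy(size1, size2, neg1, neg2):
    """Hypothetical energy: convex in the size parameters, concave in the
    negative-energy mixing parameters, with a saddle at (1, 2, 0.3, 0.6)."""
    return (size1 - 1.0)**2 + (size2 - 2.0)**2 \
         - (neg1 - 0.3)**2 - (neg2 - 0.6)**2

def saddle_search(energy, x_min, x_max, step=0.5, tol=1e-6):
    """Coordinate search: descend in the x_min coordinates, ascend in x_max."""
    x_min, x_max = list(x_min), list(x_max)
    while step > tol:
        moved = False
        for i in range(len(x_min)):            # minimization sweep
            for d in (-step, step):
                trial = x_min[:]; trial[i] += d
                if energy(*trial, *x_max) < energy(*x_min, *x_max):
                    x_min = trial; moved = True
        for i in range(len(x_max)):            # maximization sweep
            for d in (-step, step):
                trial = x_max[:]; trial[i] += d
                if energy(*x_min, *trial) > energy(*x_min, *x_max):
                    x_max = trial; moved = True
        if not moved:
            step /= 2                           # refine the grid
    return x_min, x_max

sizes, negs = saddle_search(toy_energy, [0.0, 0.0], [0.0, 0.0])
```

For the separable toy surface the sweeps converge to the saddle; the real calculation replaces `toy_energy` with the Breit-equation energy functional.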
Bubin, Sergiy; Komasa, Jacek; Stanke, Monika; Adamowicz, Ludwik
2010-03-01
We present very accurate quantum mechanical calculations of the three lowest S states [1s^2 2s^2 (^1S_0), 1s^2 2p^2 (^1S_0), and 1s^2 2s3s (^1S_0)] of the two stable isotopes of the boron ion, ^10B^+ and ^11B^+. At the nonrelativistic level the calculations have been performed with a Hamiltonian that explicitly includes the finite mass of the nucleus, as obtained by a rigorous separation of the center-of-mass motion from the laboratory-frame Hamiltonian. The spatial part of the nonrelativistic wave function for each state was expanded in terms of 10 000 all-electron explicitly correlated Gaussian functions. The nonlinear parameters of the Gaussians were variationally optimized using a procedure involving the analytical energy gradient determined with respect to those parameters. The nonrelativistic wave functions of the three states were subsequently used to calculate the leading α^2 relativistic corrections (α is the fine-structure constant; α = 1/c in atomic units, where c is the speed of light) and the α^3 quantum electrodynamics (QED) correction. We also estimated the α^4 QED correction by calculating its dominant component. A comparison of the experimental transition frequencies with the frequencies obtained from the energies calculated in this work shows excellent agreement: the discrepancy is smaller than 0.4 cm^-1.
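The variational optimization of nonlinear Gaussian parameters with an analytic gradient can be illustrated on the smallest possible example: a single s-type Gaussian trial function for the hydrogen atom (a textbook case, not the 10 000-term correlated basis of the paper).

```python
import math

# Minimal sketch of optimizing the nonlinear parameter of a Gaussian basis
# function with an analytic energy gradient.  For a single s-type Gaussian
# exp(-a r^2) on the hydrogen atom (atomic units), the variational energy
# is E(a) = 3a/2 - 2*sqrt(2a/pi).

def energy(a):
    return 1.5 * a - 2.0 * math.sqrt(2.0 * a / math.pi)

def gradient(a):
    # analytic dE/da = 3/2 - sqrt(2/(pi*a))
    return 1.5 - math.sqrt(2.0 / (math.pi * a))

a = 1.0
for _ in range(500):          # gradient descent on the exponent
    a -= 0.05 * gradient(a)

# Optimum: a = 8/(9*pi) ≈ 0.2829, E ≈ -0.4244 hartree (exact H atom: -0.5)
```

The paper does the same thing for thousands of correlated Gaussians at once, which is why the analytic gradient (rather than finite differences) is essential.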
Gaseous Nitrogen Orifice Mass Flow Calculator
Ritrivi, Charles
2013-01-01
The Gaseous Nitrogen (GN2) Orifice Mass Flow Calculator was used to determine Space Shuttle Orbiter Water Spray Boiler (WSB) GN2 high-pressure tank source depletion rates for various leak scenarios, and the ability of the GN2 consumables to support cooling of Auxiliary Power Unit (APU) lubrication during entry. The data were used to support flight rationale concerning loss of an orbiter APU/hydraulic system and mission work-arounds. The GN2 mass flow-rate calculator standardizes a method for rapid assessment of GN2 mass flow through various orifice sizes for various discharge coefficients, delta pressures, and temperatures. The calculator utilizes a 0.9-lb (0.4 kg) GN2 source regulated to 40 psia (276 kPa). These parameters correspond to the Space Shuttle WSB GN2 Source and Water Tank Bellows, but can be changed in the spreadsheet to accommodate any system parameters. The calculator can be used to analyze a leak source, leak rate, gas consumables depletion time, and the puncture diameter that simulates the measured GN2 system pressure drop.
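The physics inside such a calculator is the standard choked-orifice relation for an ideal gas. The sketch below uses that textbook formula with illustrative numbers; the orifice size, discharge coefficient and temperature are assumptions, not the spreadsheet's values.

```python
import math

# Hedged sketch of the standard choked (sonic) orifice mass-flow relation:
#   mdot = Cd * A * P0 * sqrt(gamma/(R*T0)) *
#          (2/(gamma+1))**((gamma+1)/(2*(gamma-1)))
# gamma = 1.4 and R = 296.8 J/(kg K) for nitrogen.

def choked_mass_flow(cd, d_m, p0_pa, t0_k, gamma=1.4, r=296.8):
    """Mass flow [kg/s] of GN2 through an orifice of diameter d_m [m]
    at upstream pressure p0_pa [Pa] and temperature t0_k [K]."""
    area = math.pi * (d_m / 2.0) ** 2
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0_pa * math.sqrt(gamma / (r * t0_k)) * crit

# Example (hypothetical leak): 1 mm orifice, Cd = 0.9, 276 kPa, 300 K
mdot = choked_mass_flow(0.9, 1.0e-3, 276e3, 300.0)
# depletion time of the 0.4 kg source at this leak rate
t_deplete = 0.4 / mdot
```

Varying `d_m` and `cd` reproduces the kind of leak-scenario sweep the abstract describes.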
A New Approach for Calculating Vacuum Susceptibility
Institute of Scientific and Technical Information of China (English)
宗红石; 平加伦; 顾建中
2004-01-01
Based on the Dyson-Schwinger approach, we propose a new method for calculating vacuum susceptibilities. As an example, the vector vacuum susceptibility is calculated. A comparison with the results of the previous approaches is presented.
Improving the accuracy of dynamic mass calculation
Directory of Open Access Journals (Sweden)
Oleksandr F. Dashchenko
2015-06-01
With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determination of the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
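As a rough illustration of the idea of pre-filtering a strain-gauge signal before averaging, the following toy simulation (all signal parameters are hypothetical, not the paper's data) recovers a static mass from an oscillating, noisy record.

```python
import math, random

# Sketch (illustrative, not the paper's exact pipeline): estimate the static
# mass of a moving wagon from a noisy, oscillating strain-gauge signal by
# low-pass pre-filtering followed by averaging over the plateau.

def moving_average(signal, window):
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i - lo + 1))
    return out

random.seed(1)
true_mass = 60_000.0                     # kg, hypothetical wagon
# simulated sensor: static load + bogie oscillation + measurement noise
raw = [true_mass + 800 * math.sin(0.3 * i) + random.gauss(0, 200)
       for i in range(400)]
smooth = moving_average(raw, 50)
estimate = sum(smooth[100:]) / len(smooth[100:])   # discard filter warm-up
```

The filter window is chosen to span a few oscillation periods, so the dynamic component averages out before the final mean is taken.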
Source apportionment using reconstructed mass calculations.
Siddique, Naila; Waheed, Shahida
2014-01-01
A long-term study was undertaken to investigate the air quality of the Islamabad/Rawalpindi area. In this regard, fine and coarse particulate matter were collected from four sites in the Islamabad/Rawalpindi region from 1998 to 2010 using Gent samplers and polycarbonate filters, and analyzed for their elemental composition using the techniques of Neutron Activation Analysis (NAA), Proton Induced X-ray Emission/Proton Induced Gamma-ray Emission (PIXE/PIGE) and X-ray Fluorescence (XRF) spectroscopy. The elemental data, along with the gravimetric measurements and black carbon (BC) results obtained by reflectance measurement, were used to approximate or reconstruct the particulate mass (RCM) by estimating pseudo-sources such as soil, smoke, sea salt, sulfate and black carbon or soot. This simple analysis shows that if the analytical technique used does not measure the important major elements, then the data will not be representative of the sample composition and cannot be further utilized for source apportionment studies or transboundary analysis. In this regard, PIXE/PIGE and XRF, which can provide compositional data for most of the environmentally important major elements, appear to be more useful than NAA. Therefore, %RCM calculations for such datasets can be used as a quality assurance (QA) measure to treat data prior to the application of chemometric tools such as factor analysis (FA) or cluster analysis (CA).
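A minimal sketch of a reconstructed-mass calculation, using widely quoted pseudo-source coefficients from the aerosol literature (the study's own coefficients may differ, and the concentrations below are hypothetical):

```python
# Sketch of a reconstructed-mass (RCM) estimate from elemental data using
# commonly quoted pseudo-source formulas.  Coefficients are standard
# literature values, not necessarily those of this study; concentrations
# are hypothetical, in ng/m^3.

def reconstructed_mass(el, bc):
    soil    = 2.20*el['Al'] + 2.49*el['Si'] + 1.63*el['Ca'] \
            + 2.42*el['Fe'] + 1.94*el['Ti']     # crustal elements as oxides
    seasalt = 2.54 * el['Na']                    # Na as sea-salt tracer
    sulfate = 4.125 * el['S']                    # S assumed as (NH4)2SO4
    smoke   = el['K'] - 0.6 * el['Fe']           # non-soil potassium
    return soil + seasalt + sulfate + smoke + bc

elements = {'Al': 500, 'Si': 1200, 'Ca': 300, 'Fe': 400,
            'Ti': 40, 'Na': 250, 'S': 900, 'K': 350}
gravimetric = 12_000.0                           # ng/m^3, hypothetical
rcm = reconstructed_mass(elements, bc=1500)
pct_rcm = 100.0 * rcm / gravimetric              # %RCM as a QA measure
```

A %RCM far below 100 would flag exactly the problem the abstract describes: a technique that misses major elements cannot account for the gravimetric mass.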
Body Mass Index: Calculator for Child and Teen
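For reference, the formula underlying any BMI calculator is weight divided by height squared; for children and teens the resulting value is interpreted against CDC age- and sex-specific percentile curves rather than the fixed adult cutoffs.

```python
# The BMI formula behind such calculators: weight/height^2 in metric units
# (kg, m).  For children and teens the value is then compared against
# age- and sex-specific CDC growth-chart percentiles, which are not
# reproduced here.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

adult = bmi(70.0, 1.75)   # falls in the adult "normal weight" range
```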
Milky Way Mass Models for Orbit Calculations
Irrgang, Andreas; Tucker, Evan; Schiefelbein, Lucas
2013-01-01
Studying the trajectories of objects like stars, globular clusters or satellite galaxies in the Milky Way allows one to trace the dark matter halo, but requires reliable models of its gravitational potential. Realistic, yet simple and fully analytical models have already been presented in the past. However, improved as well as new observational constraints have become available in the meantime, calling for a recalibration of the respective model parameters. Three widely used model potentials are revisited. By a simultaneous least-squares fit to the observed rotation curve, the in-plane proper motion of Sgr A*, the local mass/surface density and the velocity dispersion in Baade's window, the parameters of the potentials are brought up to date. The mass at large radii - and thus in particular that of the dark matter halo - is hereby constrained by imposing that the most extreme halo blue horizontal-branch star known has to be bound to the Milky Way. The Galactic mass models are tuned to yield a very good match to recent observat...
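The "most extreme star must be bound" constraint reduces to a simple energy criterion. The sketch below uses a bare point-mass potential purely for illustration; the paper's three model potentials are far more detailed, and the star's numbers here are hypothetical.

```python
import math

# Sketch of the boundedness criterion used to pin down the halo mass:
# a star is bound if its total specific energy v^2/2 + Phi(r) is negative.
# A point-mass potential is used here purely for illustration.

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def kepler_phi(r_kpc, m_sun):
    """Point-mass potential, (km/s)^2 (illustrative only)."""
    return -G * m_sun / r_kpc

def is_bound(v_kms, r_kpc, m_sun):
    return 0.5 * v_kms**2 + kepler_phi(r_kpc, m_sun) < 0

# a hypothetical halo star at 50 kpc: bound at 400 km/s for 1e12 Msun,
# unbound at 600 km/s, so a fast enough star forces the mass upward
bound_slow = is_bound(400.0, 50.0, 1.0e12)
bound_fast = is_bound(600.0, 50.0, 1.0e12)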
Institute of Scientific and Technical Information of China (English)
刘大庆; 吴济民; 陈莹
2002-01-01
We develop a new approach to constructing the lattice operators for the calculation of the glueball mass, based on the connection between the continuum limit of the chosen operator and the quantum numbers J^{PC} of the state. The spin of the state is then determined uniquely and directly in numerical simulation. Furthermore, the approach can be applied to the calculation of the mass of glueball states with any spin J. Under the quenched approximation, we present our preliminary results in SU(3) pure gauge theory for the masses of the 0++ and 2++ states, which are 1754(85)(86) MeV and 2417(56)(117) MeV, respectively.
Calculating the mass spectrum of primordial black holes
Young, Sam; Sasaki, Misao
2014-01-01
We reinspect the calculation of the mass fraction of primordial black holes (PBHs) which are formed from primordial perturbations, finding that performing the calculation using the comoving curvature perturbation $\mathcal{R}_{c}$ in the standard way vastly overestimates the number of PBHs, by many orders of magnitude. This is because PBHs form shortly after horizon entry, meaning modes significantly larger than the PBH are unobservable and should not affect whether a PBH forms or not - this important effect is not taken into account by smoothing the distribution in the standard fashion. We discuss alternative methods and argue that the density contrast, $\Delta$, should be used instead, as super-horizon modes are damped by a factor $k^{2}$. We make a comparison between using a Press-Schechter approach and peaks theory, finding that the two are in close agreement in the region of interest. We also investigate the effect of varying the spectral index, and the running of the spectral index, on the abundance of ...
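The Press-Schechter-style step of such calculations is a Gaussian tail integral above a collapse threshold; it can be sketched as follows (the threshold and the sigma values are illustrative literature-style numbers, not the paper's):

```python
import math

# Sketch of the Press-Schechter-style mass fraction used in such
# calculations: beta ≈ erfc(delta_c / (sqrt(2) * sigma)) for a Gaussian
# density-contrast distribution with collapse threshold delta_c.

def pbh_mass_fraction(sigma, delta_c=0.45):
    """Fraction of horizon patches collapsing to PBHs; delta_c = 0.45 is
    a typical literature threshold for the density contrast."""
    return math.erfc(delta_c / (math.sqrt(2.0) * sigma))

# tiny changes in sigma move beta by orders of magnitude, which is why
# using the wrong variable (curvature vs density contrast) matters so much
b1 = pbh_mass_fraction(0.05)
b2 = pbh_mass_fraction(0.06)
```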
A calculation of the physical mass of sigma meson
Indian Academy of Sciences (India)
J R Morones-Ibarra; Ayax Santos-Guevara
2007-06-01
We calculate the physical mass and the width of the sigma meson by considering that it couples in vacuum to two virtual pions. The mass is calculated by using the spectral function, and we find that it is about 600 MeV. In addition, we obtained 220 MeV as the value for the width of its spectral function. The value obtained for the mass is in good agreement with that reported in the Particle Data Book for the σ-meson, which is also named f_0(600). This result also shows that the σ-meson can be considered as a two-pion resonance.
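The spectral-function language used here can be illustrated with a generic relativistic Breit-Wigner form, plugging in the mass and width quoted in the abstract (the paper's in-medium propagator is of course more involved than this):

```python
# A relativistic Breit-Wigner spectral function of the kind used to read
# off a resonance mass and width.  Numbers follow the abstract (m ≈ 600
# MeV, Γ ≈ 220 MeV); the functional form is a generic illustration, not
# the paper's propagator.

def spectral(s, m=600.0, width=220.0):
    """rho(s) ∝ m*Γ / ((s - m^2)^2 + (m*Γ)^2), with s in MeV^2."""
    return m * width / ((s - m * m)**2 + (m * width)**2)

peak = spectral(600.0**2)   # the peak position defines the physical mass
```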
Unified approach to alpha decay calculations
Indian Academy of Sciences (India)
C S Shastry; S M Mahadevan; K Aditya
2014-05-01
With the discovery of a large number of superheavy nuclei undergoing decay through α emission, there has been a revival of interest in α decay in recent years. In the theoretical study of α decay the α-nucleus potential, which is the basic input in the study of α-nucleus systems, is also being studied using advanced theoretical methods. In the light of these, the Wentzel-Kramers-Brillouin (WKB) approximation method often used for the study of α decay is critically examined and its limitations are pointed out. At a given energy, the WKB expression uses the barrier penetration formula for the determination of the transmission coefficient. This approach utilizes the α-nucleus potential only at the barrier region and ignores it elsewhere. In the present era, when one has more precise experimental information on α-decay parameters and better understanding of the α-nucleus potential, it is desirable to use a more precise method for the calculation of decay parameters. We describe the analytic S-matrix (SM) method which gives a procedure for the calculation of decay energy and mean life in an integrated way by evaluating the resonance pole of the S-matrix in the complex momentum or energy plane. We make an illustrative comparative study of the WKB and S-matrix methods for the determination of decay parameters in a number of superheavy nuclei.
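The WKB transmission coefficient criticized in the text is a barrier-penetration integral; a minimal numerical version for a pure Coulomb barrier (a deliberate simplification with illustrative Po-212-like inputs, not a realistic α-nucleus potential) looks like this:

```python
import math

# Sketch of the WKB barrier-penetration factor discussed in the text:
#   T ≈ exp(-2 * ∫ sqrt(2*mu*(V(r)-E)) dr / ħ)
# across the barrier.  A pure Coulomb barrier is used here; a realistic
# alpha-nucleus potential adds nuclear and centrifugal terms.

HBARC = 197.327   # MeV fm
MU_C2 = 3727.0    # MeV, ~alpha-particle rest energy (crude reduced mass)

def wkb_transmission(q_mev, z_daughter, r_in_fm):
    """Penetrability of the Coulomb barrier V(r) = 2*Z*e^2/r from the
    inner turning point r_in_fm to the outer turning point."""
    e2 = 1.43996                              # MeV fm
    v = lambda r: 2.0 * z_daughter * e2 / r
    r_out = 2.0 * z_daughter * e2 / q_mev     # outer turning point, V = Q
    n = 2000
    integral = 0.0
    for i in range(n):                        # midpoint rule over barrier
        r = r_in_fm + (i + 0.5) * (r_out - r_in_fm) / n
        integral += math.sqrt(max(v(r) - q_mev, 0.0)) * (r_out - r_in_fm) / n
    return math.exp(-2.0 * math.sqrt(2.0 * MU_C2) * integral / HBARC)

# Po-212-like illustrative numbers: Q ≈ 8.95 MeV, daughter Z = 82, r_in ≈ 9 fm
t = wkb_transmission(8.95, 82, 9.0)
```

The S-matrix method advocated by the authors avoids exactly this barrier-only truncation by using the full potential to locate the resonance pole.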
Shell-model calculations of nuclei around mass 130
Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Odahara, A.
2015-09-01
Shell-model calculations are performed for even-even, odd-mass, and doubly-odd nuclei of Sn, Sb, Te, I, Xe, Cs, and Ba isotopes around mass 130 using the single-particle space made up of valence nucleons occupying the 0g_{7/2}, 1d_{5/2}, 2s_{1/2}, 0h_{11/2}, and 1d_{3/2} orbitals. The calculated energies and electromagnetic transitions are compared with the experimental data. In addition, several typical isomers in this region are investigated.
Large-Scale Self-Consistent Nuclear Mass Calculations
Stoitsov, M V; Dobaczewski, J; Nazarewicz, W
2006-01-01
The program of systematic large-scale self-consistent nuclear mass calculations that is based on the nuclear density functional theory represents a rich scientific agenda that is closely aligned with the main research directions in modern nuclear structure and astrophysics, especially the radioactive nuclear beam physics. The quest for the microscopic understanding of the phenomenon of nuclear binding represents, in fact, a number of fundamental and crucial questions of the quantum many-body problem, including the proper treatment of correlations and dynamics in the presence of symmetry breaking. Recent advances and open problems in the field of nuclear mass calculations are presented and discussed.
An Improved Calculation of the Non-Gaussian Halo Mass Function
D'Amico, Guido; Noreña, Jorge; Paranjape, Aseem
2010-01-01
The abundance of collapsed objects in the universe, or halo mass function, is an important theoretical tool in studying the effects of primordially generated non-Gaussianities on the large scale structure. The non-Gaussian mass function has been calculated by several authors in different ways, typically by exploiting the smallness of certain parameters which naturally appear in the calculation, to set up a perturbative expansion. We improve upon the existing results for the mass function by combining path integral methods and saddle point techniques (which have been separately applied in previous approaches). Additionally, we carefully account for the various scale dependent combinations of small parameters which appear. Some of these combinations in fact become of order unity for large mass scales and at high redshifts, and must therefore be treated non-perturbatively. Our approach allows us to do this, and to also account for multi-scale density correlations which appear in the calculation. We thus derive a...
Structural uncertainty in air mass factor calculation for NO2
Lorente Delgado, Alba; Folkert Boersma, K.; Yu, Huan; Dörner, Steffen; Hilboll, Andreas; Richter, Andreas; Liu, Mengyao; Lamsal, Lok N.; Barkley, Michael; Smedt, De Isabelle; Roozendael, Van Michel; Wang, Yang; Wagner, Thomas; Beirle, Steffen; Lin, Jin Tai; Krotkov, Nickolay; Stammes, Piet; Wang, Ping; Eskes, Henk J.; Krol, Maarten
2017-01-01
Air mass factor (AMF) calculation is the largest source of uncertainty in NO2 and HCHO satellite retrievals in situations with enhanced trace gas concentrations in the lower troposphere. Structural uncertainty arises when different retrieval methodologies are applied within the scientific community.
Calculable mass hierarchies and a light dilaton from gravity duals
Elander, Daniel; Piai, Maurizio
2017-09-01
In the context of gauge/gravity dualities, we calculate the scalar and tensor mass spectrum of the boundary theory defined by a special 8-scalar sigma-model in five dimensions, the background solutions of which include the 1-parameter family dual to the baryonic branch of the Klebanov-Strassler field theory. This provides an example of a strongly-coupled, multi-scale system that yields a parametrically light mass for one of the composite scalar particles: the dilaton. We briefly discuss the implications of these findings towards identifying a satisfactory solution to both the big and little hierarchy problems of the electro-weak theory.
Pfennig, Brian W.; Schaefer, Amy K.
2011-01-01
A general chemistry laboratory experiment is described that introduces students to instrumental analysis using gas chromatography-mass spectrometry (GC-MS), while simultaneously reinforcing the concepts of mass percent and the calculation of atomic mass. Working in small groups, students use the GC to separate and quantify the percent composition…
Calculating fermion masses in superstring derived standard-like models
Energy Technology Data Exchange (ETDEWEB)
Faraggi, A.E.
1996-04-01
One of the intriguing achievements of the superstring derived standard-like models in the free fermionic formulation is the possible explanation of the top quark mass hierarchy and the successful prediction of the top quark mass. An important property of the superstring derived standard-like models, which enhances their predictive power, is the existence of three and only three generations in the massless spectrum. Up to some motivated assumptions with regard to the light Higgs spectrum, it is then possible to calculate the fermion masses in terms of string tree level amplitudes and some VEVs that parameterize the string vacuum. I discuss the calculation of the heavy generation masses in the superstring derived standard-like models. The top quark Yukawa coupling is obtained from a cubic level mass term while the bottom quark and tau lepton mass terms are obtained from nonrenormalizable terms. The calculation of the heavy fermion Yukawa couplings is outlined in detail in a specific toy model. The dependence of the effective bottom quark and tau lepton Yukawa couplings on the flat directions at the string scale is examined. The gauge and Yukawa couplings are extrapolated from the string unification scale to low energies. Agreement with α_strong, sin^2 θ_W and α_em at M_Z is imposed, which necessitates the existence of intermediate matter thresholds. The needed intermediate matter thresholds exist in the specific toy model. The effect of the intermediate matter thresholds on the extrapolated Yukawa couplings is studied. It is observed that the intermediate matter thresholds help to maintain the correct b/τ mass relation. It is found that for a large portion of the parameter space, the LEP precision data for α_strong, sin^2 θ_W and α_em, as well as the top quark mass and the b/τ mass relation, can all simultaneously be consistent with the superstring derived standard-like models.
DOWNSCALE APPLICATION OF BOILER THERMAL CALCULATION APPROACH
Zelený, Zbynĕk; Hrdlička, Jan
2016-01-01
Commonly used thermal calculation methods are intended primarily for large-scale boilers. Hot-water small-scale boilers, which are commonly used for home heating, have many specifics that distinguish them from large-scale boilers, especially steam boilers. This paper is focused on the application of a thermal calculation procedure that is designed for large-scale boilers to a small-scale biomass-combustion boiler of load capacity 25 kW. A special issue solved here is the influence of the formation of dep...
Calculating Masses of Pentaquarks Composed of Baryons and Mesons
Directory of Open Access Journals (Sweden)
M. Monemzadeh
2016-01-01
We consider an exotic baryon (pentaquark) as a bound state of two-body systems composed of a baryon (nucleon) and a meson. We used a baryon-meson picture to reduce a complicated five-body problem to simple two-body problems. The homogeneous Lippmann-Schwinger integral equation is solved in configuration space by using a one-pion exchange potential. We calculate the masses of the pentaquarks θc(uuddc̄) and θb(uuddb̄).
A review of Higgs mass calculations in supersymmetric models
DEFF Research Database (Denmark)
Draper, P.; Rzehak, H.
2016-01-01
related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass ... in the Minimal Supersymmetric Standard Model, in particular the large radiative corrections required to lift mh to 125 GeV and their calculation via Feynman-diagrammatic and effective field theory techniques. This review is intended as an entry point for readers new to the field, and as a summary of the current...
The effect of dynamical quark mass on the calculation of a strange quark star's structure
Institute of Scientific and Technical Information of China (English)
Gholam Hossein Bordbar; Babak Ziaei
2012-01-01
We discuss the dynamical behavior of strange quark matter components, in particular the effects of a density-dependent quark mass on the equation of state of strange quark matter. The dynamical masses of quarks are computed within the Nambu-Jona-Lasinio model; we then perform strange quark matter calculations employing the MIT bag model with these dynamical masses. For the sake of comparing the dynamical mass interaction with the QCD quark-quark interaction, we consider the one-gluon-exchange term as the effective interaction between quarks for the MIT bag model. Our dynamical approach yields an improvement in the obtained equation of state values. We also investigate the structure of the strange quark star using the Tolman-Oppenheimer-Volkoff equations for all applied models. Our results show that the dynamical mass interaction leads to lower values for the gravitational mass.
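The MIT bag-model equation of state at the core of such calculations is simple to state. The sketch below shows only the ideal massless-quark bag relation; the bag constant is a typical literature-style value, and the paper's dynamical-mass corrections would modify this e(p) relation.

```python
# Sketch of the MIT bag-model equation of state such calculations build on:
#   p = (e - 4B)/3
# for free massless quarks, with B the bag constant.  B below is a
# commonly used order of magnitude, not the paper's fitted value.

B = 57.0   # MeV/fm^3, illustrative bag constant

def pressure(energy_density):
    """Bag-model pressure in MeV/fm^3 for energy density in MeV/fm^3."""
    return (energy_density - 4.0 * B) / 3.0

# pressure vanishes at e = 4B: the self-bound surface of a strange star
surface_e = 4.0 * B
```

Feeding an e(p) relation like this into the Tolman-Oppenheimer-Volkoff equations yields the mass-radius curve; a stiffer or softer e(p) shifts the maximum gravitational mass, which is the effect the abstract reports.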
First-principles calculations of mass transport in magnesium borohydride
Yu, Chao; Ozolins, Vidvuds
2013-03-01
Mg(BH4)2 is a hydrogen storage material which can decompose to release hydrogen in the following reaction: Mg(BH4)2(solid) → 1/6 MgB12H12(solid) + 5/6 MgH2(solid) + 13/6 H2(gas) → MgH2(solid) + 2B(solid) + 4H2(gas). However, experiments show that hydrogen release only occurs at temperatures above 300 °C, which severely limits applications in mobile storage. Using density-functional theory calculations, we systematically study bulk diffusion of defects in the reactant Mg(BH4)2 and the products MgB12H12 and MgH2 during the first step of the solid-state dehydrogenation reaction. The defect concentrations and concentration gradients are calculated for a variety of defects, including charged vacancies and interstitials. We find that neutral [BH3] vacancies have the highest bulk concentration and concentration gradient in Mg(BH4)2. The diffusion mechanism of the [BH3] vacancy in Mg(BH4)2 is studied using the nudged elastic band method. Our results show that the calculated diffusion barrier for [BH3] vacancies is ~ . 2 eV, suggesting that slow mass transport limits the kinetics of hydrogen desorption.
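The connection between formation energies, migration barriers and sluggish kinetics can be illustrated with simple Arrhenius estimates; the energies and attempt frequency below are hypothetical stand-ins, not the paper's DFT values.

```python
import math

# Arrhenius-style illustration of why defect-mediated transport turns on
# only at high temperature: the equilibrium defect site fraction goes as
# exp(-E_form/kT) and the hop rate as exp(-E_mig/kT).  All numbers here
# are hypothetical, for illustration only.

K_B = 8.617333e-5   # Boltzmann constant, eV/K

def site_fraction(e_form_ev, t_k):
    return math.exp(-e_form_ev / (K_B * t_k))

def hop_rate(e_mig_ev, t_k, attempt_hz=1e13):
    return attempt_hz * math.exp(-e_mig_ev / (K_B * t_k))

# a 1 eV formation energy: concentration rises by many orders of magnitude
# between room temperature and ~300 °C (573 K)
c_300 = site_fraction(1.0, 573.0)
c_25  = site_fraction(1.0, 298.0)
```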
Gamow's calculation of the neutron star's critical mass revised
Energy Technology Data Exchange (ETDEWEB)
Ludwig, Hendrik; Ruffini, Remo [Sapienza Universita di Roma, Rome (Italy); ICRANet, University of Nice-Sophia Antipolis, Nice Cedex (France)
2014-09-15
It has at times been indicated that Landau introduced neutron stars in his classic paper of 1932. This is clearly impossible because the discovery of the neutron by Chadwick was submitted more than one month after Landau's work. Therefore, and according to his calculations, what Landau really did was to study white dwarfs, and the critical mass he obtained clearly matched the value derived by Stoner and later by Chandrasekhar. The birth of the concept of a neutron star is still unclear today. Clearly, in 1934, the work of Baade and Zwicky pointed to neutron stars as originating from supernovae. Oppenheimer in 1939 is also well known to have introduced general relativity (GR) in the study of neutron stars. The aim of this note is to point out that the crucial idea for treating the neutron star was advanced in Newtonian theory by Gamow. However, this pioneering work was plagued by mistakes. The critical mass he should have obtained was 6.9 M⊙, not the one he declared, namely 1.5 M⊙. Probably, he was led to this result by the work of Landau on white dwarfs. We revise Gamow's calculation of the critical mass regarding calculational and conceptual aspects and discuss whether it is justified to consider it the first neutron-star critical mass. We compare Gamow's approach to other early and modern approaches to the problem.
Dibaryon Mass and Width Calculation with Tensor Interaction
Institute of Scientific and Technical Information of China (English)
PANG Hou-Rong; PING Jia-Lun; CHEN Ling-Zhi; WANG Fan
2004-01-01
The effect of tensor interaction due to gluon and Goldstone boson exchange on the dibaryon mass and decay width has been studied in the framework of the quark delocalization and colour screening model. The effective S-D wave transition interactions induced by gluon and Goldstone boson exchanges decrease quickly with increasing channel strangeness, and there is no six-quark state in the light flavour world that can become bound with the help of these tensor interactions, except for the deuteron. The K and η meson exchange effect has been shown to be negligible after a short-range truncation in this model approach. The partial D-wave decay widths, from the NΩ state to the Λ final states of spins 0 and 1, are 20.7 keV and 63.1 keV, respectively. This is a very narrow dibaryon resonance that might be detected in relativistic heavy ion reactions by the existing RHIC detectors through reconstruction of the Λ vertex mass, and by the future COMPAS detector at CERN and the FAIR project in Germany.
Numerical Approach to Calculation of Feynman Loop Integrals
Yuasa, F; Kurihara, Y; Fujimoto, J; Shimizu, Y; Hamaguchi, N; de Doncker, E; Kato, K
2011-01-01
In this paper, we describe a numerical approach to evaluate Feynman loop integrals. In this approach the key technique is a combination of a numerical integration method and a numerical extrapolation method. Since the computation is carried out in a fully numerical way, our approach is applicable to one-, two- and multi-loop diagrams. Without any analytic treatment it can compute diagrams with not only real masses but also complex masses for the internal particles. As concrete examples we present numerical results for a scalar one-loop box integral with complex masses and for two-loop planar and non-planar box integrals with masses. We discuss the quality of our numerical computation by comparison with other methods and also propose a self-consistency check.
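The extrapolation half of such an approach can be illustrated with standard Richardson (polynomial) extrapolation of a regulator-dependent quantity to the zero-regulator limit. The model sequence below is a toy stand-in for the regulated loop integral, not the paper's actual integrand.

```python
# Toy sketch of the numerical-extrapolation idea: evaluate a quantity at a
# sequence of regulator values eps_k and extrapolate eps -> 0 with a
# Neville-style polynomial (Richardson) tableau.

def richardson(eps, vals):
    """Polynomial extrapolation of vals(eps) to eps = 0."""
    n = len(vals)
    t = [list(vals)]
    for k in range(1, n):
        row = []
        for i in range(n - k):
            num = eps[i] * t[k - 1][i + 1] - eps[i + k] * t[k - 1][i]
            row.append(num / (eps[i] - eps[i + k]))
        t.append(row)
    return t[-1][0]

# model "regulated integral" with limit 2.0 as eps -> 0 (hypothetical)
eps = [2.0 ** -k for k in range(1, 7)]
vals = [2.0 + 0.7 * e + 0.3 * e * e for e in eps]
limit = richardson(eps, vals)
```

Because the model values are polynomial in eps, the tableau recovers the limit essentially exactly; for real loop integrals the same machinery accelerates a slowly converging sequence.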
Global Approach for Calculation of Minimum Miscibility Pressure
DEFF Research Database (Denmark)
Jessen, Kristian; Michelsen, Michael Locht; Stenby, Erling Halfdan
1998-01-01
An algorithm has been developed for calculation of the minimum miscibility pressure (MMP) for the displacement of oil by multicomponent gas injection. The algorithm is based on the key tie-line identification approach initially addressed by Wang and Orr [Y. Wang and F.M. Orr Jr., Analytical calculation of minimum miscibility pressure, Fluid Phase Equilibria, 139 (1997) 101-124]. In this work a new global approach is introduced. A number of deficiencies of the sequential approach have been eliminated, resulting in a robust and highly efficient algorithm. The time consumption for calculation of the MMP ... results from the key tie-line identification approach are shown to be in excellent agreement with slimtube data and with other multicell/slimtube simulators presented in the literature.
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M+H]+ or [M-H]-) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
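The chemical-formula-calculation step can be sketched in miniature: given an accurate neutral mass, search elemental compositions within a tolerance. RAMSI does this with mixed integer linear programming plus ion-relationship constraints; the exhaustive CHNO search below is a deliberately simplified stand-in.

```python
import itertools

# Simplified sketch of formula calculation from an accurate mass: brute-
# force search over small CHNO compositions within a mass tolerance.
# Monoisotopic masses in Da; atom limits and tolerance are illustrative.

MASSES = {'C': 12.0, 'H': 1.007825, 'N': 14.003074, 'O': 15.994915}

def formula_candidates(target_mass, tol=0.005, max_atoms=(10, 20, 5, 8)):
    hits = []
    for c, h, n, o in itertools.product(*(range(m + 1) for m in max_atoms)):
        m = (c * MASSES['C'] + h * MASSES['H']
             + n * MASSES['N'] + o * MASSES['O'])
        if abs(m - target_mass) <= tol and c + h + n + o > 0:
            hits.append((c, h, n, o))
    return hits

# example: neutral monoisotopic mass of glucose, C6H12O6 ≈ 180.0634 Da
cands = formula_candidates(180.0634)
```

An MILP formulation scales this idea to larger element sets and, crucially, couples it with the adduct/fragment assignment so that all related ions must share one consistent formula.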
Missing mass calculator as a technique to reconstruct the mass of resonances decaying into tau pairs
Energy Technology Data Exchange (ETDEWEB)
Blumenschein, Ulla; De Maria, Antonio; Quadt, Arnulf; Zinonos, Zinonas [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)]
2016-07-01
An accurate reconstruction of the mass of a resonance decaying into a pair of tau leptons is a difficult task because of the presence of multiple undetected neutrinos from the tau decays. The Missing Mass Calculator (MMC) is a sophisticated method to optimise the reconstruction of these events. It is based on the requirement that the mutual orientations of the neutrinos and the other decay products be consistent with the mass and decay kinematics of a tau lepton. This is achieved by minimizing a likelihood function defined in the kinematically allowed phase-space region. The MMC was one of the most powerful tools used in SM Higgs to tau tau searches in Run 1 at the LHC. Now, in Run 2, the LHC collides protons at a centre-of-mass energy of √(s) = 13 TeV and at higher luminosity. The analysis tools therefore need to be adapted to the new experimental conditions; amongst them, the MMC must be retuned in order to again play a key role in searches for the Higgs boson in di-tau final states. This talk outlines the main aspects of the MMC retuning and their impact on its performance.
A simple approach for maximum heat recovery calculations
Energy Technology Data Exchange (ETDEWEB)
Jezowski, J. (Wroclaw Technical Univ. (PL). Inst. of Chemical Engineering and Heating Equipment); Friedler, F. (Hungarian Academy of Sciences, Egyetem (HU). Research Inst. for Technical Chemistry)
1992-04-01
This paper addresses the problem of calculating the maximum heat energy recovery for a given set of process streams. Simple, straightforward calculation algorithms are presented that account for tasks with multiple utilities, forbidden matches and nonpoint utilities. A new way of applying the so-called dual-stream approach to reduce utility usage for tasks with forbidden matches is also given. The calculation methods require neither computer programs nor mathematical programming. They give the user proper insight into a problem, helping to understand heat integration and to recognize options and traps in heat exchanger network synthesis. (author).
Forensic analysis of explosions: Inverse calculation of the charge mass
Voort, M.M. van der; Wees, R.M.M. van; Brouwer, S.D.; Jagt-Deutekom, M.J. van der; Verreault, J.
2015-01-01
Forensic analysis of explosions consists of determining the point of origin, the explosive substance involved, and the charge mass. Within the EU FP7 project Hyperion, TNO developed the Inverse Explosion Analysis (TNO-IEA) tool to estimate the charge mass and point of origin based on observed damage.
Nakamura, Kazuki; Yamanokuchi, Tsutomu; Doi, Koichiro; Shibuya, Kazuo
2016-06-01
We quantify the mass budget of the Shirase drainage basin (SHI), Antarctica, by separately estimating snow accumulation (surface mass balance; SMB) and glacier ice mass discharge (IMD). We estimated the SMB in the SHI using a regional atmospheric climate model (RACMO2.1). The SMB of the mainstream A flow region was 12.1 ± 1.5 Gt a-1 for an area of 1.985 × 105 km2. An obvious overestimation of the model around the coast, ∼0.5 Gt a-1, was corrected for. For calculating the IMD, we employed a 15-m-resolution Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) to determine the heights at the grounding line (GL), after comparison with the interpolated Bamber DEM grid heights; the results of this are referred to as the measured heights. Ice thickness data at the GL were inferred by using a free-board relationship between the measured height and the ice thickness, and considering the measured firn depth correction (4.2 m with the reference ice density of 910 kg m-3) for the nearby blue-ice area. The total IMD was estimated to be 14.0 ± 1.8 Gt a-1. A semi-empirical firn densification model gives an estimate within 0.1-0.2 Gt a-1 of this value. The estimated net mass balance (NMB), -1.9 Gt a-1, has a two-σ uncertainty of ±3.3 Gt a-1, and probable melt water discharge strongly suggests a negative NMB, although the associated uncertainty is large.
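The mass-budget bookkeeping can be checked directly; the values below are taken from the abstract, and the quoted two-sigma uncertainty of ±3.3 Gt a-1 corresponds to a linear combination of the two error terms:

```python
# Mass-budget arithmetic for the Shirase drainage basin (values in Gt/a,
# as quoted in the abstract).
smb, smb_err = 12.1, 1.5   # surface mass balance
imd, imd_err = 14.0, 1.8   # ice mass discharge

nmb = smb - imd            # net mass balance
nmb_err = smb_err + imd_err
print(nmb, nmb_err)        # -1.9 +/- 3.3
```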
Fast Calculation of the Weak Lensing Aperture Mass Statistic
Leonard, Adrienne; Starck, Jean-Luc
2012-01-01
The aperture mass statistic is a common tool used in weak lensing studies. By convolving lensing maps with a filter function of a specific scale, chosen to be larger than the scale on which the noise is dominant, the lensing signal may be boosted with respect to the noise. This allows for detection of structures at increased fidelity. Furthermore, higher-order statistics of the aperture mass (such as its skewness or kurtosis), or counting of the peaks seen in the resulting aperture mass maps, provide a convenient and effective method to constrain the cosmological parameters. In this paper, we more fully explore the formalism underlying the aperture mass statistic. We demonstrate that the aperture mass statistic is formally identical to a wavelet transform at a specific scale. Further, we show that the filter functions most frequently used in aperture mass studies are not ideal, being non-local in both real and Fourier space. In contrast, the wavelet formalism offers a number of wavelet functions that are local...
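A minimal numerical illustration of the statistic: evaluating a convergence map against a compensated, zero-mean filter suppresses any uniform mass sheet while responding to a localized peak. The difference-of-Gaussians kernel below stands in for the aperture-mass filter and is an assumption of this sketch, not the wavelet function advocated in the paper.

```python
import numpy as np

def dog_kernel(size, s1, s2):
    """Difference-of-Gaussians kernel, forced to zero total weight
    (the 'compensated' property of aperture-mass filters)."""
    y, x = np.mgrid[-size:size+1, -size:size+1]
    r2 = x**2 + y**2
    def g(s):
        return np.exp(-r2 / (2*s**2)) / (2*np.pi*s**2)
    k = g(s1) - g(s2)
    return k - k.mean()

kernel = dog_kernel(16, 2.0, 4.0)
rng = np.random.default_rng(0)
kappa = rng.normal(0.0, 0.02, kernel.shape)   # noise-only convergence patch
m_noise = float((kernel * kappa).sum())       # aperture mass at patch centre
kappa[16, 16] += 1.0                          # inject a central mass peak
m_peak = float((kernel * kappa).sum())
print(m_noise, m_peak)
```

Sliding the same kernel across a full map (a convolution) produces the aperture-mass map whose peaks and higher-order moments the abstract refers to.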
Implications of improved Higgs mass calculations for supersymmetric models
Energy Technology Data Exchange (ETDEWEB)
Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Dolan, M.J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Theory Group; Ellis, J. [King's College, London (United Kingdom). Theoretical Particle Physics and Cosmology Group; and others
2014-03-15
We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M_h, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyze the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B_s → μ+μ-) and ATLAS searches for missing-E_T events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β
Efficient wave-function matching approach for quantum transport calculations
DEFF Research Database (Denmark)
Sørensen, Hans Henrik Brandenborg; Hansen, Per Christian; Petersen, Dan Erik;
2009-01-01
The wave-function matching (WFM) technique has recently been developed for the calculation of electronic transport in quantum two-probe systems. In terms of efficiency it is comparable to the widely used Green's function approach. The WFM formalism presented so far requires the evaluation of all ...
An indirect approach to the extensive calculation of relationship coefficients
Directory of Open Access Journals (Sweden)
Colleau Jean-Jacques
2002-07-01
Full Text Available A method was described for calculating population statistics on relationship coefficients without using the corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between the individuals under investigation and their ancestors. Computation times were observed on simulated populations and compared with those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. The relative performance of the indirect method was good except when many full sibs existed over many generations in the population.
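The indirect idea can be sketched in a few lines: the numerator relationship matrix A is dense, but its inverse is sparse and cheap to build directly from pedigree rules, so A v can be obtained by solving A^-1 x = v without ever forming A. The four-animal pedigree and Henderson's rules with non-inbred parents are illustrative assumptions of this sketch, not the article's simulated populations.

```python
import numpy as np

def a_inverse(pedigree, n):
    """Sparse inverse of the numerator relationship matrix A, built from
    Henderson's rules (parents assumed non-inbred in this toy example)."""
    ainv = np.zeros((n, n))
    for child, parents in pedigree.items():
        par = [p for p in parents if p is not None]
        alpha = {0: 1.0, 1: 4.0/3.0, 2: 2.0}[len(par)]
        ainv[child, child] += alpha
        for p in par:
            ainv[child, p] -= alpha / 2.0
            ainv[p, child] -= alpha / 2.0
            for q in par:
                ainv[p, q] += alpha / 4.0
    return ainv

# animals 0,1: founders; 2 = offspring of (0,1); 3 = offspring of (0,2)
pedigree = {0: (None, None), 1: (None, None), 2: (0, 1), 3: (0, 2)}
ainv = a_inverse(pedigree, 4)

v = np.array([1.0, 0.0, 0.0, 0.0])
av = np.linalg.solve(ainv, v)   # equals A @ v, with A never formed
print(av)                       # relationships of all animals to animal 0
```

For large pedigrees the solve is done with sparse factorizations, which is what makes the indirect route cheaper than building the dense A.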
Energy Technology Data Exchange (ETDEWEB)
Avancini, S.S.; Marinelli, J.R. [Universidade Federal de Santa Catarina Florianopolis, Depto de Fisica - CFM, Florianopolis (Brazil); Carlson, B.V. [Instituto Tecnologico de Aeronautica, Sao Jose dos Campos (Brazil)
2013-06-15
Relativistic models for finite nuclei contain spurious center-of-mass motion in most applications to the nuclear many-body problem, where the nuclear wave function is taken as a single Slater determinant within a space-fixed-frame description. We use the Peierls-Yoccoz projection method, previously developed for relativistic approaches, together with a reparametrization of the coupling constants that fits binding energies and charge radii, and apply our results to calculate elastic electron scattering monopole charge form factors for light nuclei. (orig.)
Calculation of wind turbine aeroelastic behaviour. The Garrad Hassan approach
Energy Technology Data Exchange (ETDEWEB)
Quarton, D.C. [Garrad Hassan and Partners Ltd., Bristol (United Kingdom)
1996-09-01
The Garrad Hassan approach to the prediction of wind turbine loading and response has been developed over the last decade. The goal of this development has been to produce calculation methods that contain realistic representation of the wind, include sensible aerodynamic and dynamic models of the turbine and can be used to predict fatigue and extreme loads for design purposes. The Garrad Hassan calculation method is based on a suite of four key computer programs: WIND3D for generation of the turbulent wind field; EIGEN for modal analysis of the rotor and support structure; BLADED for time domain calculation of the structural loads; and SIGNAL for post-processing of the BLADED predictions. The interaction of these computer programs is illustrated. A description of the main elements of the calculation method will be presented. (au)
Calculation of plantar pressure time integral, an alternative approach.
Melai, Tom; IJzerman, T Herman; Schaper, Nicolaas C; de Lange, Ton L H; Willems, Paul J B; Meijer, Kenneth; Lieverse, Aloysius G; Savelberg, Hans H C M
2011-07-01
In plantar pressure measurement, both peak pressure and pressure time integral are used as variables to assess plantar loading. However, the pressure time integral shows a high concordance with peak pressure. Many researchers and clinicians use Novel software (Novel GmbH Inc., Munich, Germany), which calculates this variable as the summation of the products of peak pressure and duration per time sample; this is not a genuine integral of pressure over time. Therefore, an alternative calculation method was introduced. The aim of this study was to explore the relevance of this alternative method in different populations. Plantar pressure variables were measured in 76 people with diabetic polyneuropathy, 33 diabetic controls without polyneuropathy and 19 healthy subjects. Peak pressure and pressure time integral were obtained using Novel software. The quotient of the genuine force-time integral over contact area was obtained as the alternative pressure time integral calculation. This new alternative method correlated less with peak pressure than the pressure time integral as calculated by Novel. The two methods differed significantly, and these differences varied between the foot sole areas and between groups. The largest differences were found under the metatarsal heads in the group with diabetic polyneuropathy. From a theoretical perspective, the alternative approach provides a more valid calculation of the pressure time integral. In addition, this study showed that the alternative calculation is of added value, alongside peak pressure, for interpreting adapted plantar pressure patterns, in particular in patients at risk of foot ulceration. Copyright © 2011 Elsevier B.V. All rights reserved.
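On synthetic stance-phase data the two definitions can be compared side by side: the Novel-style value sums peak pressure per time sample, whereas the alternative divides the genuine force-time integral by contact area and therefore tracks mean rather than peak pressure. The curves, contact area and units below are illustrative assumptions, not study data.

```python
import numpy as np

t = np.linspace(0.0, 0.6, 601)              # stance phase [s]
dt = t[1] - t[0]
peak_p = 30.0 * np.sin(np.pi * t / 0.6)     # peak pressure in a foot region
mean_p = 18.0 * np.sin(np.pi * t / 0.6)     # mean pressure in the same region
area = 10.0                                 # contact area [cm^2]
force = mean_p * area                       # total force on the region

pti_novel = float(np.sum(peak_p * dt))      # Novel-style: sum of peak pressure x dt
pti_alt = float(np.trapz(force, t)) / area  # genuine force-time integral / area
print(pti_novel, pti_alt)
```

Because the spatial mean is never larger than the spatial peak, the alternative value is systematically smaller, which is one reason it decouples from peak pressure.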
Computational approach for calculating bound states in quantum field theory
Lv, Q. Z.; Norris, S.; Brennan, R.; Stefanovich, E.; Su, Q.; Grobe, R.
2016-09-01
We propose a nonperturbative approach to calculate bound-state energies and wave functions for quantum field theoretical models. It is based on the direct diagonalization of the corresponding quantum field theoretical Hamiltonian in an effectively discretized and truncated Hilbert space. We illustrate this approach for a Yukawa-like interaction between fermions and bosons in one spatial dimension and show where it agrees with the traditional method based on the potential picture and where it deviates due to recoil and radiative corrections. This method permits us also to obtain some insight into the spatial characteristics of the distribution of the fermions in the ground state, such as the bremsstrahlung-induced widening.
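The discretize-truncate-diagonalize strategy is easiest to see in a one-particle analogue (an assumption of this sketch; the paper diagonalizes a quantum field theoretical Hamiltonian): represent H on a finite grid and hand the resulting matrix to a dense eigensolver.

```python
import numpy as np

# Direct diagonalisation of H = p^2/2 + x^2/2 (harmonic oscillator,
# hbar = m = omega = 1) in a discretised, truncated position space.
n, L = 400, 20.0
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

# second-order finite-difference kinetic term -(1/2) d^2/dx^2
kin = (np.diag(np.full(n, 1.0))
       - 0.5*np.diag(np.ones(n-1), 1)
       - 0.5*np.diag(np.ones(n-1), -1)) / dx**2
pot = np.diag(0.5 * x**2)

energies = np.linalg.eigh(kin + pot)[0]
print(energies[:3])   # close to the exact 0.5, 1.5, 2.5
```

The field-theoretical case replaces the grid by a truncated Fock basis, but the numerical step, building a finite Hamiltonian matrix and diagonalizing it, is the same.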
A new approach to calculating spatial impulse responses
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
1997-01-01
Using linear acoustics the emitted and scattered ultrasound field can be found by using spatial impulse responses as developed by Tupholme (1969) and Stepanishen (1971). The impulse response is calculated by the Rayleigh integral by summing the spherical waves emitted from all of the aperture...... of the emitting aperture. Summing the angles of the arcs within the aperture readily yields the spatial impulse response for a point in space. The approach makes it possible to make very general calculation routines for arbitrary, flat apertures in which the outline of the aperture is either analytically...... be used for finding analytic solutions to the spatial impulse response for new geometries of, for example, ellipsoidal shape. The approach also makes it easy to incorporate any apodization function and the effect of different transducer baffle mountings. Examples of spatial impulse responses...
Fatigue approach for addressing environmental effects in fatigue usage calculation
Energy Technology Data Exchange (ETDEWEB)
Wilhelm, Paul; Rudolph, Juergen [AREVA GmbH, Erlangen (Germany); Steinmann, Paul [Erlangen-Nuremberg Univ., erlangen (Germany). Chair of Applied Mechanics
2015-04-15
Laboratory tests consider simple trapezoidal, triangular, and sinusoidal signals. Actual plant components, however, are characterized by complex loading patterns and periods of hold. Fatigue tests in a water environment show that the damage from a realistic strain variation, or from the presence of hold times within cyclic loading, results in an environmental reduction factor (Fen) only half that of a simple waveform. This study proposes a new fatigue approach for addressing environmental effects in the fatigue usage calculation for class 1 boiler and pressure vessel reactor components. The currently accepted method of fatigue assessment is used as a base model; all cycles that are comparable with the realistic fatigue tests are excluded from the code-based fatigue calculation and evaluated directly against the test data. The results presented show that the engineering approach can be successfully integrated into the code-based fatigue assessment. The cumulative usage factor can be reduced considerably.
Road Transport Congestion Costs Calculations-Adaptation to Engineering Approach
Directory of Open Access Journals (Sweden)
Marjan Lep
2008-01-01
Full Text Available The article presents the so-called engineering approach for computing total road transport congestion costs. According to economic welfare theory, the total cost of transport congestion is defined as the deadweight loss (DWL) of infrastructure use. With a set of equations, DWL can be formulated mathematically. Because such an equation is not directly applicable to concrete road network calculations, it should be transformed into an engineering form, which comprises transport-engineering data such as classified road links, traffic volumes, passenger unit costs, etc. The equation is well applicable to the interurban road network; adaptations are needed for urban road network cost calculations, where time losses are less related to link travel time. The final equation was derived for the purposes of national road congestion cost calculation.
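The welfare-theoretic definition can be made concrete with a toy single-link example: users equate demand with the average (privately perceived) cost, the optimum equates demand with the marginal social cost, and the DWL is the area between the two curves over the excess flow. The linear demand and cost functions are illustrative assumptions, not the article's calibrated equations.

```python
import numpy as np

# Deadweight loss of congestion on one road link.
a, b = 10.0, 1.0           # inverse demand  P(q) = a - b q
c, d = 2.0, 1.0            # average user cost AC(q) = c + d q (rises with flow)

q_eq = (a - c) / (b + d)       # equilibrium: demand = average cost
q_opt = (a - c) / (b + 2*d)    # optimum: demand = marginal social cost

q = np.linspace(q_opt, q_eq, 1001)
msc = c + 2*d*q                # marginal social cost, d(q * AC(q))/dq
p = a - b*q
dwl = float(np.trapz(msc - p, q))   # the DWL triangle
print(q_eq, q_opt, dwl)
```

The engineering transformation in the article essentially re-expresses this triangle in terms of observable link data (traffic volumes, unit time costs) summed over a classified network.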
A general approach for calculating coupling impedances of small discontinuities
Kurennoy, S S; Stupakov, G V; Kurennoy, Sergey S; Gluckstern, Robert L; Stupakov, Gennady V
1995-01-01
A general theory of the beam interaction with small discontinuities of the vacuum chamber is developed taking into account the reaction of radiated waves back on the discontinuity. The reactive impedance calculated earlier is reproduced as the first order, and the resistive one as the second order of a perturbation theory based on this general approach. The theory also gives, in a very natural way, the results for the trapped modes due to small discontinuities obtained earlier by a different method.
Raznikova, M O; Raznikov, V V
2015-01-01
In this work, information on the charge states of biomolecule ions in solution, obtained using electrospray ionization mass spectrometry of different biopolymers, is analyzed. The data analyses have mainly been carried out by solving an inverse problem: calculating the probabilities of retention of protons and other charge carriers by ionogenic groups of biomolecules with known primary structures. To our knowledge, the approach is new and has no analogues. A program titled "Decomposition" was developed and used to analyze the charge distributions of ions in mass spectra of native and denatured cytochrome c. The possibility of splitting the charge-state distribution of albumin into normal components, which likely correspond to various conformational states of the biomolecule, has been demonstrated. The applicability criterion for using the previously described method of decomposition of multidimensional charge-state distributions with two charge carriers, e.g., a proton and a sodium ion, to characterize the spatial structure of biopolymers in solution has been formulated. In contrast to known mass-spectrometric approaches, this method does not require enzymatic hydrolysis or collision-induced dissociation of the biopolymers.
Bayesian inference in mass flow simulations - from back calculation to prediction
Kofler, Andreas; Fischer, Jan-Thomas; Hellweger, Valentin; Huber, Andreas; Mergili, Martin; Pudasaini, Shiva; Fellin, Wolfgang; Oberguggenberger, Michael
2017-04-01
Mass flow simulations are an integral part of hazard assessment. Determining the hazard potential requires a multidisciplinary approach, including different scientific fields such as geomorphology, meteorology, physics, civil engineering and mathematics. An important task in snow avalanche simulation is to predict process intensities (runout, flow velocity and depth, ...). The application of probabilistic methods allows one to develop a comprehensive simulation concept, ranging from back to forward calculation and finally to prediction of mass flow events. In this context optimized parameter sets for the used simulation model or intensities of the modeled mass flow process (e.g. runout distances) are represented by probability distributions. Existing deterministic flow models, in particular with respect to snow avalanche dynamics, contain several parameters (e.g. friction). Some of these parameters are more conceptual than physical and their direct measurement in the field is hardly possible. Hence, parameters have to be optimized by matching simulation results to field observations. This inverse problem can be solved by a Bayesian approach (Markov chain Monte Carlo). The optimization process yields parameter distributions, that can be utilized for probabilistic reconstruction and prediction of avalanche events. Arising challenges include the limited amount of observations, correlations appearing in model parameters or observed avalanche characteristics (e.g. velocity and runout) and the accurate handling of ensemble simulations, always taking into account the related uncertainties. Here we present an operational Bayesian simulation framework with r.avaflow, the open source GIS simulation model for granular avalanches and debris flows.
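The back-calculation step can be sketched with a textbook Metropolis sampler: assume a toy forward model relating a friction parameter to observed runout lengths, then sample its posterior. The forward model, prior and numbers below are illustrative assumptions, not r.avaflow's friction relations.

```python
import numpy as np

# Toy Bayesian back calculation: infer friction mu from observed runouts,
# with the illustrative forward model runout = h/mu (h a known drop height)
# and Gaussian observation noise.
rng = np.random.default_rng(1)
h, mu_true, sigma = 120.0, 0.4, 10.0
obs = h/mu_true + rng.normal(0.0, sigma, 5)     # 5 documented events

def log_post(mu):
    if not 0.05 < mu < 2.0:                     # flat prior on (0.05, 2)
        return -np.inf
    return -0.5 * np.sum((obs - h/mu)**2) / sigma**2

chain, mu = [], 1.0
lp = log_post(mu)
for _ in range(20000):                          # Metropolis random walk
    prop = mu + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    chain.append(mu)

post = np.array(chain[5000:])                   # discard burn-in
print(post.mean(), post.std())
```

The resulting parameter distribution, rather than a single optimized value, is what feeds the probabilistic reconstruction and prediction steps described above.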
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression, which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three glycine subunits. We selected glycine labelled in two 13C atoms (13C2-glycine) as the labelled amino acid, to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
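The regression idea can be sketched as follows: each candidate precursor enrichment p implies a binomial mass-isotopomer pattern for a peptide with three labellable glycines, the observed pattern is a linear mix of unlabelled and newly synthesized protein, and ordinary least squares recovers the mixing fraction; scanning p and keeping the best fit recovers both unknowns. All numbers below are illustrative, not the paper's measurements.

```python
import numpy as np
from math import comb

def binom_pattern(p, n=3):
    """Mass-isotopomer pattern when each of n glycines is labelled
    independently with probability p (one 13C2 label = one mass step)."""
    return np.array([comb(n, k) * p**k * (1-p)**(n-k) for k in range(n+1)])

def fit(observed, grid=np.linspace(0.01, 0.6, 60)):
    """Scan precursor enrichment p; solve the linear mix by least squares."""
    best = None
    unlabelled = binom_pattern(0.0)
    for p in grid:
        X = np.column_stack([unlabelled, binom_pattern(p)])
        coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
        r = float(np.sum((X @ coef - observed)**2))
        if best is None or r < best[0]:
            best = (r, p, coef[1] / coef.sum())   # fractional synthesis
    return best[1], best[2]

true_p, true_f = 0.20, 0.25
obs = (1 - true_f)*binom_pattern(0.0) + true_f*binom_pattern(true_p)
print(fit(obs))
```

In practice the observed pattern also carries natural-abundance isotopes, which the paper's regression accounts for; the sketch omits that for clarity.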
Hanford Apatite Treatability Test Report Errata: Apatite Mass Loading Calculation
Energy Technology Data Exchange (ETDEWEB)
Szecsody, James E.; Vermeul, Vincent R.; Williams, Mark D.; Truex, Michael J.
2014-05-19
The objective of this errata report is to document an error in the apatite loading (i.e., treatment capacity) estimate reported in previous apatite treatability test reports and provide additional calculation details for estimating apatite loading and barrier longevity. The apatite treatability test final report (PNNL-19572; Vermeul et al. 2010) documents the results of the first field-scale evaluation of the injectable apatite PRB technology. The apatite loading value in units of milligram-apatite per gram-sediment is incorrect in this and some other previous reports. The apatite loading in units of milligram phosphate per gram-sediment, however, is correct, and this is the unit used for comparison to field core sample measurements.
Precise Higgs mass calculations in (non-)minimal supersymmetry at both high and low scales
Athron, Peter; Steudtner, Tom; Stöckinger, Dominik; Voigt, Alexander
2016-01-01
We present FlexibleEFTHiggs, a method for calculating the SM-like Higgs pole mass in SUSY (and even non-SUSY) models, which combines an effective field theory approach with a diagrammatic calculation. It thus achieves an all order resummation of leading and subleading logarithms together with the inclusion of all non-logarithmic 1-loop contributions. We implement this method into FlexibleSUSY and study its properties in the MSSM, NMSSM, E6SSM and MRSSM. In the MSSM, it correctly interpolates between the known results of effective field theory calculations in the literature for a high SUSY scale and fixed-order calculations in the full theory for a sub-TeV SUSY scale. We compare our MSSM results to those from public codes and identify the origin of the most significant deviations between the DR-bar programs. We then perform a similar comparison in the remaining three non-minimal models. For all four models we estimate the theoretical uncertainty of FlexibleEFTHiggs and the fixed-order DR-bar programs thereby f...
Incidence angle modifiers. A general approach for energy calculations
Energy Technology Data Exchange (ETDEWEB)
Carvalho, Maria Joao; Horta, Pedro; Mendes, Joao Farinha [INETI - Inst. Nacional de Engenharia Tecnologia, Inovacao, IP, Lisboa (Portugal); Collares Pereira, Manuel; Carbajal, Wildor Maldonado [AO SOL, Energias Renovaveis, S.A., Samora Correia (Portugal)
2008-07-01
The calculation of the energy (power) delivered by a given solar collector requires special care in the consideration of the way it handles the incoming solar radiation. Some collectors, e.g. flat-plate types, are easy to characterize from an optical point of view, given their rotational symmetry with respect to the incidence angle on the entrance aperture. This is in contrast with collectors possessing a 2D (or cylindrical) symmetry, such as collectors using evacuated tubes or CPC collectors, which require the incident radiation to be decomposed and treated in two orthogonal planes. Analyses of the incidence angle modifier (IAM) along these lines were done in the past for parabolic trough, evacuated tube (ETC) or compound parabolic concentrator (CPC) collectors. The present paper addresses a general approach to IAM calculation, treating all collector types in a general, equivalent and systematic way. This approach allows the proper handling of the solar radiation available to each collector type, subdivided into its different components, folding that with the optical effects present in the solar collector and enabling more accurate comparisons between different collector types in terms of long-term performance calculation. (orig.)
On the Mass Neutrino Phase calculations along the geodesic line and the null line
Zhang, C. M.; Beesham, A.
2000-01-01
In mass neutrino phase calculations along both the particle geodesic line and the photon null line, there exists a double counting error (a factor of 2) when comparing the geodesic phase with the null phase. For mass neutrino propagation in flat spacetime, we study the neutrino interference phase calculation in the Minkowski diagram and find that the double counting effect originates from neglecting the velocity difference between the two mass neutrinos. Moreover, we compare the phase cal...
Institute of Scientific and Technical Information of China (English)
陈建彬; 吕小强
2011-01-01
Energy and mass exchange phenomena exist between the barrel and the gas-operated device of an automatic weapon. To describe its interior ballistics and the dynamic characteristics of the gas-operated device accurately, a new variable-mass thermodynamics model is built. It is used to calculate the automatic mechanism velocity of a certain automatic weapon; the calculated results agree well with the experimental results, validating the model. The influences of structure parameters on the gas-operated device's dynamic characteristics are discussed. The model is shown to be valuable for the design and accurate performance prediction of gas-operated automatic weapons.
Effective source approach to self-force calculations
Energy Technology Data Exchange (ETDEWEB)
Vega, Ian [Department of Physics, University of Guelph, Guelph, Ontario, N1G 2W1 (Canada)]; Wardell, Barry [Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, 14476 Potsdam (Germany)]; Diener, Peter, E-mail: ianvega@uoguelph.ca, E-mail: barry.wardell@aei.mpg.de, E-mail: diener@cct.lsu.edu [Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803 (United States)]
2011-07-07
Numerical evaluation of the self-force on a point particle is made difficult by the use of delta functions as sources. Recent methods for self-force calculations avoid delta functions altogether, using instead a finite and extended 'effective source' for a point particle. We provide a review of the general principles underlying this strategy, using the specific example of a scalar point charge moving in a black hole spacetime. We also report on two new developments: (i) the construction and evaluation of an effective source for a scalar charge moving along a generic orbit of an arbitrary spacetime, and (ii) the successful implementation of hyperboloidal slicing that significantly improves on previous treatments of boundary conditions used for effective-source-based self-force calculations. Finally, we identify some of the key issues related to the effective source approach that will need to be addressed by future work.
Bahl, Henning
2016-01-01
In the Minimal Supersymmetric Standard Model heavy superparticles introduce large logarithms in the calculation of the lightest $\\mathcal{CP}$-even Higgs boson mass. These logarithmic contributions can be resummed using effective field theory techniques. For light superparticles, however, fixed-order calculations are expected to be more accurate. To gain a precise prediction also for intermediate mass scales, both approaches have to be combined. Here, we report on an improvement of this method in various steps: the inclusion of electroweak contributions, of separate electroweakino and gluino thresholds, as well as resummation at the NNLL level. These improvements can lead to significant numerical effects. In most cases, the lightest $\\mathcal{CP}$-even Higgs boson mass is shifted downwards by about 1 GeV. This is mainly caused by higher order corrections to the $\\bar{\\text{MS}}$ top-quark mass. We also describe the implementation of the new contributions in the code {\\tt FeynHiggs}.
Bahl, Henning; Hollik, Wolfgang
2016-09-01
In the Minimal Supersymmetric Standard Model heavy superparticles introduce large logarithms in the calculation of the lightest {CP}-even Higgs-boson mass. These logarithmic contributions can be resummed using effective field theory techniques. For light superparticles, however, fixed-order calculations are expected to be more accurate. To gain a precise prediction also for intermediate mass scales, the two approaches have to be combined. Here, we report on an improvement of this method in various steps: the inclusion of electroweak contributions, of separate electroweakino and gluino thresholds, as well as resummation at the NNLL level. These improvements can lead to significant numerical effects. In most cases, the lightest {CP}-even Higgs-boson mass is shifted downwards by about 1 GeV. This is mainly caused by higher-order corrections to the {overline{ {MS}}} top-quark mass. We also describe the implementation of the new contributions in the code FeynHiggs.
Energy Technology Data Exchange (ETDEWEB)
Bahl, Henning; Hollik, Wolfgang [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Munich (Germany)]
2016-09-15
In the Minimal Supersymmetric Standard Model heavy superparticles introduce large logarithms in the calculation of the lightest CP-even Higgs-boson mass. These logarithmic contributions can be resummed using effective field theory techniques. For light superparticles, however, fixed-order calculations are expected to be more accurate. To gain a precise prediction also for intermediate mass scales, the two approaches have to be combined. Here, we report on an improvement of this method in various steps: the inclusion of electroweak contributions, of separate electroweakino and gluino thresholds, as well as resummation at the NNLL level. These improvements can lead to significant numerical effects. In most cases, the lightest CP-even Higgs-boson mass is shifted downwards by about 1 GeV. This is mainly caused by higher-order corrections to the MS-bar top-quark mass. We also describe the implementation of the new contributions in the code FeynHiggs. (orig.)
A MODULAR APPROACH TO SIMULATION WITH AUTOMATIC SENSITIVITY CALCULATION
Energy Technology Data Exchange (ETDEWEB)
K. HANSON; G. CUNNINGHAM
2001-02-01
When using simulation codes, one often has the task of minimizing a scalar objective function with respect to numerous parameters. This situation occurs when trying to fit (assimilate) data or trying to optimize an engineering design. For simulations in which the objective function to be minimized is reasonably well behaved, that is, is differentiable and does not contain too many multiple minima, gradient-based optimization methods can reduce the number of function evaluations required to determine the minimizing parameters. However, gradient-based methods are only advantageous if one can efficiently evaluate the gradients of the objective function. Adjoint differentiation efficiently provides these sensitivities. One way to obtain code for calculating adjoint sensitivities is to use special compilers to process the simulation code. However, this approach is not always so "automatic". We will describe a modular approach to constructing simulation codes, which permits adjoint differentiation to be incorporated with relative ease.
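The modular idea described in the abstract can be sketched as follows. This is our own minimal illustration, not the authors' code, and all names are hypothetical: each simulation module implements a `forward` pass and a matching `adjoint` pass, so chaining modules yields the gradient of a scalar objective without a source-to-source compiler.

```python
# Hypothetical sketch of modular adjoint differentiation: each module
# implements forward() and an adjoint() that back-propagates sensitivities.

class Square:
    def forward(self, x):
        self.x = x          # cache the input for the adjoint pass
        return x * x
    def adjoint(self, dy):  # dy = d(objective)/d(output)
        return 2.0 * self.x * dy

class Scale:
    def __init__(self, a):
        self.a = a
    def forward(self, x):
        return self.a * x
    def adjoint(self, dy):
        return self.a * dy

def objective_and_gradient(modules, x):
    """Run the forward chain, then the adjoint chain in reverse order."""
    y = x
    for m in modules:
        y = m.forward(y)
    g = 1.0                 # d(objective)/d(objective) = 1
    for m in reversed(modules):
        g = m.adjoint(g)
    return y, g

chain = [Square(), Scale(3.0)]            # objective f(x) = 3 x^2
val, grad = objective_and_gradient(chain, 2.0)
print(val, grad)                          # f(2) = 12, f'(2) = 6*2 = 12
```

The cost of one gradient is roughly one extra sweep through the modules, which is the efficiency advantage the abstract refers to.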
Numerical calculations of mass transfer flow in semi-detached binary systems [of stars]
Edwards, D. A.; Pringle, J. E.
1987-01-01
The details of the mass transfer flow near the inner Lagrangian point in a semidetached binary system are numerically calculated. A polytropic equation of state with n = 3/2 is used. The dependence of the mass transfer rate on the degree to which the star overfills its Roche lobe is calculated, and good agreement with previous analytic estimates is found. The variation of mass transfer rate which occurs if the binary system has a small eccentricity is calculated and is used to cast doubt on the model for superhumps in dwarf novae proposed by Papaloizou and Pringle (1979).
The calculation of the mass moment of inertia of a fluid in a rotating rectangular tank
1977-01-01
This analysis calculated the mass moment of inertia of a nonviscous fluid in a slowly rotating rectangular tank. Given the dimensions of the tank in the x, y, and z coordinates, the axis of rotation, the percentage of the tank occupied by the fluid, and the angle of rotation, an algorithm was written that could calculate the mass moment of inertia of the fluid. While not included in this paper, the change in the mass moment of inertia of the fluid could then be used to calculate the force exerted by the fluid on the container wall.
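The quantity described above can be illustrated with a small numerical sketch. This is our own hypothetical example, not the paper's algorithm: it integrates the fluid's mass moment of inertia about a vertical axis through the tank centre by a midpoint rule, with the fluid filling the bottom fraction of an a x b x h tank.

```python
# Hypothetical sketch: midpoint-rule integration of I = rho * integral of
# (x^2 + y^2) dV over the fluid region (bottom `fill` fraction of the tank),
# about a vertical axis through the tank centre.

def fluid_moment_of_inertia(a, b, h, fill, rho=1000.0, n=60):
    dx, dy = a / n, b / n
    dz = (fill * h) / n
    dm = rho * dx * dy * dz            # mass of one grid cell
    inertia = 0.0
    for i in range(n):
        x = -a / 2 + (i + 0.5) * dx    # cell-centre coordinates
        for j in range(n):
            y = -b / 2 + (j + 0.5) * dy
            inertia += n * dm * (x * x + y * y)  # all n z-layers at once
    return inertia

# Analytic check: a solid box about its central vertical axis has
# I = m (a^2 + b^2) / 12, independent of the fluid depth.
a, b, h, fill, rho = 2.0, 1.0, 1.0, 0.5, 1000.0
m = rho * a * b * h * fill
print(fluid_moment_of_inertia(a, b, h, fill, rho) / (m * (a * a + b * b) / 12))
```

The printed ratio is close to 1, confirming the quadrature against the closed-form box result.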
Structural uncertainty in air mass factor calculation for NO2 and HCHO satellite retrievals
Lorente, Alba; Folkert Boersma, K.; Yu, Huan; Dörner, Steffen; Hilboll, Andreas; Richter, Andreas; Liu, Mengyao; Lamsal, Lok N.; Barkley, Michael; De Smedt, Isabelle; Van Roozendael, Michel; Wang, Yang; Wagner, Thomas; Beirle, Steffen; Lin, Jin-Tai; Krotkov, Nickolay; Stammes, Piet; Wang, Ping; Eskes, Henk J.; Krol, Maarten
2017-03-01
Air mass factor (AMF) calculation is the largest source of uncertainty in NO2 and HCHO satellite retrievals in situations with enhanced trace gas concentrations in the lower troposphere. Structural uncertainty arises when different retrieval methodologies are applied within the scientific community to the same satellite observations. Here, we address the issue of AMF structural uncertainty via a detailed comparison of AMF calculation methods that are structurally different between seven retrieval groups for measurements from the Ozone Monitoring Instrument (OMI). We estimate the escalation of structural uncertainty in every sub-step of the AMF calculation process. This goes beyond the algorithm uncertainty estimates provided in state-of-the-art retrievals, which address the theoretical propagation of uncertainties for one particular retrieval algorithm only. We find that top-of-atmosphere reflectances simulated by four radiative transfer models (RTMs) (DAK, McArtim, SCIATRAN and VLIDORT) agree within 1.5 %. We find that different retrieval groups agree well in the calculations of altitude-resolved AMFs from different RTMs (to within 3 %), and in the tropospheric AMFs (to within 6 %), as long as identical ancillary data (surface albedo, terrain height, cloud parameters and trace gas profile) and cloud and aerosol correction procedures are used. Structural uncertainty increases sharply when retrieval groups use their own preferred ancillary data and cloud and aerosol corrections. On average, we estimate the AMF structural uncertainty to be 42 % over polluted regions and 31 % over unpolluted regions, mostly driven by substantial differences in the a priori trace gas profiles, surface albedo and cloud parameters. Sensitivity studies for one particular algorithm indicate that different cloud correction approaches result in substantial AMF differences in polluted conditions (5 to 40 % depending on cloud fraction and cloud pressure, and 11 % on average) even for low
TOF-Brho Mass Measurements of Very Exotic Nuclides for Astrophysical Calculations at the NSCL
Matos, M; Amthor, M; Aprahamian, A; Bazin, D; Becerril, A; Elliot, T; Galaviz, D; Gade, A; Gupta, S; Lorusso, G; Montes, F; Pereira, J; Portillo, M; Rogers, A M; Schatz, H; Shapira, D; Smith, E; Stolz, A; Wallace, M
2008-01-01
Atomic masses play a crucial role in many nuclear astrophysics calculations. The lack of experimental values for relevant exotic nuclides triggered a rapid development of new mass measurement devices around the world. The Time-of-Flight (TOF) mass measurements offer a complementary technique to the most precise one, Penning trap measurements, the latter being limited by the rate and half-lives of the ions of interest. The NSCL facility provides a well-suited infrastructure for TOF mass measurements of very exotic nuclei. At this facility, we have recently implemented a TOF-Brho technique and performed mass measurements of neutron-rich nuclides in the Fe region, important for r-process calculations and for calculations of processes occurring in the crust of accreting neutron stars.
TOF-Bρ mass measurements of very exotic nuclides for astrophysical calculations at the NSCL
Matoš, M.; Estrade, A.; Amthor, M.; Aprahamian, A.; Bazin, D.; Becerril, A.; Elliot, T.; Galaviz, D.; Gade, A.; Gupta, S.; Lorusso, G.; Montes, F.; Pereira, J.; Portillo, M.; Rogers, A. M.; Schatz, H.; Shapira, D.; Smith, E.; Stolz, A.; Wallace, M.
2008-01-01
Atomic masses play a crucial role in many nuclear astrophysics calculations. The lack of experimental values for relevant exotic nuclides triggered a rapid development of new mass measurement devices around the world. The time-of-flight (TOF) mass measurements offer a complementary technique to the most precise one, Penning trap measurements (Blaum 2006 Phys. Rep. 425 1), the latter being limited by the rate and half-lives of the ions of interest. The NSCL facility provides a well-suited infrastructure for the TOF mass measurements of very exotic nuclei. At this facility, we have recently implemented a TOF-Bρ technique and performed mass measurements of neutron-rich nuclides in the Fe region, important for r-process calculations and for calculations of processes occurring in the crust of accreting neutron stars.
Fernandez-de-Cossio, Jorge
2010-03-01
Fine isotopic structure patterns resolvable by ultrahigh-resolution mass spectrometers are diagnostic of the elemental composition of moderately large compounds. Despite the proven performance of Fourier transform algorithms for calculating accurate high-resolution isotopic distributions, their application at finer, ultrahigh resolving power shows limited performance. The fast Fourier transform algorithm requires sampling the relevant range at equally spaced mass values, but an ultrahigh-resolution mass spectrum displays highly localized complex patterns (peaks) separated by relatively large unstructured intervals. Computational effort consumed on those uninformative intervals is a waste of resources. A fast and memory-efficient procedure is introduced in this paper to calculate the isotopic distribution of a single, relatively high-mass molecule at ultrahigh resolution by Fourier transform approaches. The whole isotopic distribution is packed closer to the monoisotopic peak without distorting the actual scale of the peak fine structure. This packing procedure reduced the computational resources 8- to 32-fold compared to the same calculation performed without packing. The procedure can be readily implemented in existing software.
Extended Hansen approach: calculating partial solubility parameters of solid solutes.
Wu, P L; Beerbower, A; Martin, A
1982-11-01
A multiple linear regression method, known as the extended Hansen solubility approach, was used to estimate the partial solubility parameters, delta d, delta p, and delta h, for crystalline solutes. The method is useful since organic compounds may decompose near their melting points, and it is not possible to determine solubility parameters for these solid compounds by the methods used for liquid solvents. The method gives good partial and total solubility parameters for naphthalene; with related compounds, less satisfactory results were obtained. At least three conditions, pertaining to the regression equation and the solvent systems, must be met in order to obtain reasonable solute solubility parameters. In addition to providing partial solubility parameters, the regression equations afford a calculation of solute solubility in both polar and nonpolar solvents.
New Approaches for Calculating Moran's Index of Spatial Autocorrelation
Chen, Yanguang
2016-01-01
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be...
TOF Mass Measurements of Very Exotic Nuclides: an Input for Astrophysical Calculations
Matoš, M.; Estrade, A.; Amthor, M.; Bazin, D.; Becerril, A.; Elliot, T.; Galaviz, D.; Gade, A.; Lorusso, G.; Montes, F.; Pereira, J.; Portillo, M.; Rogers, A. M.; Schatz, H.; Stolz, A.; Aprahamian, A.; Shapira, D.; Smith, E.; Gupta, S.; Wallace, M.
2007-10-01
Atomic masses play a crucial role in many nuclear astrophysics calculations. Very exotic nuclei can be accessed by time-of-flight techniques at radioactive beam facilities. The NSCL facility provides a well-suited infrastructure for TOF mass measurements of very exotic nuclei. At this facility, we have recently implemented a TOF-Bρ technique and performed mass measurements of neutron-rich nuclides in the Fe region, important for calculations of the r-process and processes occurring in the crust of accreting neutron stars. A description of the TOF technique, results and future plans related to nuclear astrophysics will be presented.
African Journals Online (AJOL)
Enrique
The neck mass is often surrounded by mystique — in arriving at a diagnosis as ... and Neck Surgery at. Groote Schuur ... picious features include lymph nodes ..... Blood investigations can often exclude ... after treatment with antibiotics, referral.
Multiple diagnostic approaches to palpable breast mass
Energy Technology Data Exchange (ETDEWEB)
Chin, Soo Yil; Kim, Kie Hwan; Moon, Nan Mo; Kim, Yong Kyu; Jang, Ja June [Korea Cancer Center Hospital, Seoul (Korea, Republic of)
1985-12-15
The combination of various diagnostic methods for a palpable breast mass has improved diagnostic accuracy. From September 1983 to August 1985, 85 patients with pathologically proven palpable breast masses examined with X-ray mammography, ultrasonography, pneumomammography and aspiration cytology at Korea Cancer Center Hospital were analyzed. The diagnostic accuracies of the methods were 77.6% for mammography, 74.1% for ultrasonography, 90.5% for pneumomammography and 92.4% for aspiration cytology. Pneumomammography was accomplished without difficulty or complication and depicted a more clearly delineated mass with various pathognomonic findings: air-ductal pattern in fibroadenoma (90.4%) and cystosarcoma phyllodes (100%), air-halo in fibrocystic disease (14.2%), fibroadenoma (100%) and cystosarcoma phyllodes (100%), air-cystogram in the cystic type of fibrocystic disease (100%), and vacuolar pattern or irregular air collection without retained peripheral gas in carcinoma.
Ice flood velocity calculating approach based on single view metrology
Wu, X.; Xu, L.
2017-02-01
The Yellow River is the river on which ice floods occur most frequently in China; ice flood forecasting therefore has great significance for flood prevention work on the river. In the various ice flood forecast models, the flow velocity is one of the most important parameters. Despite its significance, its acquisition still relies heavily on manual observation or on empirical formulas. In recent years, with the rapid development of video surveillance technology and wireless transmission networks, the Yellow River Conservancy Commission has set up an ice situation monitoring system in which live videos are transmitted to the monitoring center through 3G mobile networks. In this paper, an approach to obtaining the ice velocity based on single view metrology and motion tracking, using monitoring videos as input data, is proposed. First, the river surface can be approximated as a plane; under this assumption, we analyze the geometric relationship between object space and image space and present the principle for measuring object-space lengths from the image. Second, we use pyramidal Lucas-Kanade optical flow to track the ice in motion. Combining the results of camera calibration with single view metrology, we propose a workflow to calculate the real velocity of the ice flood. Finally, we implement a prototype system and use it to test the reliability and rationality of the whole solution.
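The core geometric step can be sketched briefly. This is a hedged illustration with our own names and a toy calibration, not the paper's implementation: a planar homography H maps image pixels to metric coordinates on the (approximately planar) river surface, and tracking a floe across two frames gives its speed as plane distance divided by time.

```python
import numpy as np

# Hypothetical sketch of single-view metrology on a planar river surface:
# a 3x3 homography H maps pixel coordinates to metres on the plane.

def pixel_to_plane(H, uv):
    """Apply homography H to pixel (u, v); return plane coordinates in metres."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

def ice_speed(H, uv_t0, uv_t1, dt):
    """Speed (m/s) of a tracked point seen at two frames dt seconds apart."""
    d = pixel_to_plane(H, uv_t1) - pixel_to_plane(H, uv_t0)
    return float(np.linalg.norm(d)) / dt

# Toy calibration: 0.01 m per pixel, no perspective distortion.
H = np.diag([0.01, 0.01, 1.0])
print(ice_speed(H, (100, 200), (300, 200), 1.0))  # 200 px * 0.01 m/px = 2.0 m/s
```

In practice H would come from camera calibration with known ground control points, and the pixel positions from the optical-flow tracker.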
New approaches for calculating Moran's index of spatial autocorrelation.
Directory of Open Access Journals (Sweden)
Yanguang Chen
Full Text Available Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
New approaches for calculating Moran's index of spatial autocorrelation.
Chen, Yanguang
2013-01-01
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
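For readers unfamiliar with the index being reformulated above, here is the standard global Moran's I in its matrix form, I = (n / sum(W)) * (z' W z) / (z' z) with z the mean-centred values. This is a textbook illustration, not one of the paper's four new approaches.

```python
import numpy as np

# Textbook global Moran's I (illustrative; not the paper's reformulation).

def morans_i(x, W):
    x = np.asarray(x, dtype=float)
    W = np.asarray(W, dtype=float)
    z = x - x.mean()                      # mean-centred observations
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

# Four cells on a line with rook adjacency; a monotone trend gives
# positive spatial autocorrelation.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([1.0, 2.0, 3.0, 4.0], W))  # 1/3
```

Values near +1 indicate clustering of similar values, near 0 spatial randomness, and negative values a checkerboard-like pattern.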
Physical consequences of the alpha/beta rule which accurately calculates particle masses
Energy Technology Data Exchange (ETDEWEB)
Greulich, Karl Otto [Fritz Lipmann Institute, Beutenbergstr.11, D07745 Jena (Germany)
2015-07-01
Using the fine structure constant α (= 1/137.036), the proton-to-electron mass ratio β (= 1836.2) and the integers n and m, the α/β rule m_particle = α^(-n) · β^m · 27.2 eV/c^2 allows almost exact calculation of particle masses (K.O. Greulich, DPG Spring Meeting 2014, Mainz, T99.4). With n=2, m=0 the electron mass becomes 510.79 keV/c^2 (experimental: 511 keV/c^2). With n=2, m=1 the proton mass is 937.9 MeV/c^2 (literature: 938.3 MeV/c^2). For n=3 and m=1 a particle with 128.6 GeV/c^2, close to the reported Higgs mass, is expected. For n=14 and m=-1 the Planck mass results. The calculated masses for gauge bosons and for quarks have similar accuracy. All masses fit into the same scheme (the α/β rule), indicating that none of these particle masses plays an extraordinary role; in particular, the Higgs boson, often termed the "God particle", plays no extraordinary role in this sense. Moreover, particle masses are intimately correlated with the fine structure constant α: if particle masses have been constant over all times, α must have been constant over these times, and likewise the ionization energy of the hydrogen atom (13.6 eV) must have been constant if particle masses have been unchanged, or vice versa. In conclusion, the α/β rule needs to be taken into account when cosmological models are developed.
A Mass-Conserving 4D XCAT Phantom for Dose Calculation and Accumulation
Williams, Christopher L; Seco, Joao; James, Sara St; Mak, Raymond H; Berbeco, Ross I; Lewis, John H
2013-01-01
The XCAT phantom is a realistic 4D digital torso phantom that is widely used in imaging and therapy research. However, lung mass is not conserved between respiratory phases of the phantom, making detailed dosimetric simulations and dose accumulation unphysical. A framework is developed to correct this issue by enforcing local mass conservation in the XCAT lung. Dose calculations are performed to assess the implications of neglecting mass conservation, and to demonstrate an application of the phantom to calculate the accumulated delivered dose in an irregularly breathing patient. Monte Carlo methods are used to simulate conventional and SBRT treatment delivery. The spatial distribution of the lung dose was qualitatively changed by the use of mass conservation; however, the corresponding DVH did not change significantly. Comparison of the delivered dose with 4DCT-based predictions shows similar lung metric results; however, dose differences of 10% can be seen in some spatial regions. Using this tool to simulate p...
Status of the MILC calculation of electromagnetic contributions to pseudoscalar masses
Basak, S; Bernard, C; DeTar, C; Freeland, E; Freeman, W; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Laiho, J; Levkova, L; Oktay, M; Osborn, J; Sugar, R L; Torok, A; Toussaint, D; Van de Water, R S; Zhou, R
2012-01-01
We calculate pseudoscalar masses on gauge configurations containing the effects of 2+1 flavors of dynamical asqtad quarks and quenched electromagnetism. The lattice spacings vary from 0.12 to 0.06 fm. The masses are fit with staggered chiral perturbation theory including NLO electromagnetic terms. We attempt to extract the fit parameters for the electromagnetic contributions, while taking into account the finite volume effects, and extrapolate them to the physical limit.
Energy Technology Data Exchange (ETDEWEB)
Tafreshi, H. Vahedi; Ercan, E.; Pourdeyhimi, B. [North Carolina State University, Nonwovens Cooperative Research Center, Raleigh, NC (United States)
2006-07-15
In this note, the evaporation rate from a vertical wet fabric sheet is calculated using a free convection heat transfer correlation. The Chilton-Colburn analogy is used to derive a mass transfer correlation from the heat transfer correlation proposed by Churchill and Chu for free convection from a vertical isothermal plate. The mass transfer rate obtained from this expression has shown excellent agreement with experimental data. (orig.)
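The method can be sketched numerically. This is a hedged illustration of the approach, not the note's own calculation: the Churchill-Chu free-convection correlation for a vertical plate gives a Nusselt number, and the heat/mass-transfer analogy replaces Nusselt with Sherwood and Prandtl with Schmidt. The property values below are illustrative placeholders.

```python
# Churchill-Chu correlation for free convection on a vertical plate,
# with the mass-transfer analogue obtained by swapping Nu->Sh, Pr->Sc.
# (Illustrative sketch; property values are not the paper's data.)

def churchill_chu(Ra, Pr):
    """Average Nusselt number for free convection on a vertical plate."""
    return (0.825 + 0.387 * Ra ** (1 / 6)
            / (1 + (0.492 / Pr) ** (9 / 16)) ** (8 / 27)) ** 2

def sherwood(Ra_m, Sc):
    """Mass-transfer analogue: Sherwood number from the same correlation."""
    return churchill_chu(Ra_m, Sc)

# Mass-transfer coefficient k_m = Sh * D / L for an assumed vapour
# diffusivity D (m^2/s) and sheet height L (m).
D, L = 2.5e-5, 0.5
Sh = sherwood(Ra_m=1e9, Sc=0.6)
k_m = Sh * D / L
print(Sh, k_m)
```

The evaporation flux would then follow as k_m times the vapour concentration difference between the fabric surface and the ambient air.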
The mass of the Δ resonance in a finite volume: fourth-order calculations
Energy Technology Data Exchange (ETDEWEB)
Hoja, Dominik; Rusetsky, Akaki [Helmholtz-Institut fuer Strahlen- und Kernphysik (Theorie), Universitaet Bonn (Germany); Bethe Center for Theoretical Physics, Universitaet Bonn (Germany); Bernard, Veronique [Universite Louis Pasteur, Laboratoire de Physique Theorique (Germany); Meissner, Ulf G. [Helmholtz-Institut fuer Strahlen- und Kernphysik (Theorie), Universitaet Bonn (Germany); Bethe Center for Theoretical Physics, Universitaet Bonn (Germany); Institut fuer Kernphysik und Juelich Center for Hadron Physics, Forschungszentrum Juelich (Germany)
2009-07-01
The self-energy of the Δ resonance in a finite volume is calculated by using chiral effective field theory with explicit spin-3/2 fields. The calculations are performed up to and including fourth order in the small scale expansion and yield an explicit parameterization of the energy spectrum of the interacting πN pair in a finite box in terms of both the quark mass and the box size L. We show that finite-volume corrections are sizable at small quark masses. The values of certain low-energy constants are extracted from fitting to the available data in lattice QCD.
XAFSmass: a program for calculating the optimal mass of XAFS samples
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements compared to the old Windows-based program XAFSmass: 1) it is truly platform independent, as it is written in Python; 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
Neural Approach for Calculating Permeability of Porous Medium
Institute of Scientific and Technical Information of China (English)
ZHANG Ji-Cheng; LIU Li; SONG Kao-Ping
2006-01-01
Permeability is one of the most important properties of porous media. It is considerably difficult to calculate reservoir permeability precisely using a single well-logging response and a simple formula, because reservoirs are highly heterogeneous and well-logging response curves are strongly affected by many complicated subsurface factors. We propose a neural network method to calculate the permeability of porous media. By improving the algorithm of the back-propagation neural network, convergence speed is enhanced and better results can be achieved. A four-layer back-propagation network is constructed to effectively calculate permeability from well log data.
p-adic description of Higgs mechanism; 3, calculation of elementary particle masses
Pitkänen, M
1994-01-01
This paper belongs to the series devoted to the calculation of particle masses in the framework of the p-adic conformal field theory limit of Topological GeometroDynamics. In paper II the general formulation of the p-adic Higgs mechanism was given. In this paper the calculation of the fermionic and bosonic masses is carried out. The calculation of the masses necessitates the evaluation of degeneracies of states as a function of conformal weight in a certain tensor product of Super Virasoro algebras. The masses are very sensitive to the degeneracy ratios: the Planck mass results unless the ratio of the degeneracies of the first excited states to the massless states is an integer multiple of 2/3. For leptons, quarks and gauge bosons this miracle occurs. The main deviation from the standard model is the prediction of light color-excited leptons and quarks as well as colored boson exotics. The Higgs particle is absent from the spectrum, as is the graviton: the latter is due to the basic approximation of p-adic TGD.
Higgs-boson masses and mixing matrices in the NMSSM: analysis of on-shell calculations
Energy Technology Data Exchange (ETDEWEB)
Drechsel, P.; Weiglein, G. [DESY, Hamburg (Germany); Groeber, R. [Durham University, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); INFN, Sezione di Roma Tre, Rome (Italy); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Universidad Autonoma de Madrid, Instituto de Fisica Teorica, (UAM/CSIC), Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Muehlleitner, M. [Karlsruhe Institute of Technology, Institute for Theoretical Physics, Karlsruhe (Germany); Rzehak, H. [University of Southern Denmark, CP3-Origins, Odense M (Denmark)
2017-06-15
We analyze the Higgs-boson masses and mixing matrices in the NMSSM based on an on-shell (OS) renormalization of the gauge-boson and Higgs-boson masses and the parameters of the top/scalar top sector. We compare the implementation of the OS calculations in the codes NMSSMCALC and NMSSM-FeynHiggs up to O(α_t α_s). We identify the sources of discrepancies at the one- and at the two-loop level. Finally we compare the OS and DR-bar evaluations as implemented in NMSSMCALC. The results are important ingredients for an estimate of the theoretical precision of Higgs-boson mass calculations in the NMSSM. (orig.)
$\pi_0$ pole mass calculation in a strong magnetic field and lattice constraints
Avancini, Sidney S; Pinto, Marcus Benghi; Tavares, William R; Timóteo, Varese S
2016-01-01
The $\pi_0$ neutral meson pole mass is calculated in a strongly magnetized medium using the SU(2) Nambu-Jona-Lasinio model within the random phase approximation (RPA) at zero temperature and zero baryonic density. We employ a magnetic field dependent coupling $G(eB)$ fitted to reproduce lattice QCD results for the quark condensates. Divergent quantities are handled with a magnetic field independent regularization scheme in order to avoid unphysical oscillations. A comparison between the running and the fixed couplings reveals that the former produces results much closer to the predictions from recent lattice calculations. In particular, we find that the $\pi_0$ meson mass systematically decreases when the magnetic field increases while the scalar mass remains almost constant. We also investigate how the magnetic background influences other mesonic properties such as $f_{\pi_0}$ and $g_{\pi_0 qq}$.
Wegner, T.; Grooss, J.; Mueller, R.; Stroh, F.; Lehmann, R.; Volk, C.; Hösen, E.; Vom Scheidt, M.; Wintel, J.; Riediger, O.; Schlager, H.; Scheibe, M.; Stock, P.; Ravegnani, F.; Ulanovsky, A.; Yushkov, V. A.; von Hobe, M.
2010-12-01
During the RECONCILE campaign in the Arctic winter 2009/10, an active Match experiment was performed, sampling the same air masses up to three times during two consecutive flights of the high-altitude research aircraft M55-Geophysica from Kiruna (67.83 N, 20.42 E). The first flight was westbound, and its flight path was designed to resample the air masses from the outbound leg during the return to Kiruna with a time difference of up to 3 hours. Another match was attempted during a second flight 72 hours later, when the air masses had moved into the Geophysica's range again. Flight plans were designed using trajectory calculations driven by ECMWF wind fields. In situ measurements of N2O and NOy revealed strong gradients inside the vortex, thus allowing us to examine the accuracy of such trajectory calculations with wind fields of different spatial and temporal resolution.
Modified lattice-statics approach to dislocation calculations. I - Formalism
Esterling, D. M.
1978-01-01
A modified lattice-statics method to calculate the atomic displacements associated with a screw dislocation is outlined. The model incorporates an anharmonic region wherein the forces are derived from a pair potential. Appropriate energy and force expressions are derived. The modifications necessary for the implementation of the conjugate-gradient function minimization method are also derived.
Mass and heat balance approach for oil sand flowsheets
Energy Technology Data Exchange (ETDEWEB)
Salama, A.I.A. [Natural Resources Canada, Ottawa, ON (Canada). CANMET Energy Technology Centre
2009-07-01
Plant flowsheet mass balance is carried out in many industrial applications to evaluate overall plant performance and to optimize plant recoveries. This information is necessary for improving the economics of the operation and improving profitability. Flowsheet mass balance begins with the collection of plant stream samples using well-known sampling schemes. Stream samples collected using ASTM sampling standards are then analyzed using ASTM analytical techniques to characterize stream components, which often contain sampling and analytical errors. The paper presented an approach for oil sands flowsheet mass and heat balance in which different objective functions were presented depending on the nature of the stream error distributions. Hot water or steam is used to heat plant streams in oil sands extraction and froth treatment plants. As such, an approach is needed to integrate mass and heat balance. The mass and heat balance approach proposed in this paper integrated mass and heat balance and minimized the deviations/errors between the raw/observed and estimated data sets. The estimated data set was constrained to satisfy mass and heat balance conditions around the flowsheet internal nodes. Stream normalization conditions were enforced. The relationships between the flowsheet's independent, dependent, and reference streams were identified. The number of independent stream mass splits was expressed in terms of the number of streams, the number of nodes, and the number of reference streams. 9 refs., 3 tabs., 2 figs.
Mass transfer in rolling rotary kilns : a novel approach
Heydenrych, M.D.; Greeff, P.; Heesink, A. Bert M.; Versteeg, G.F.
2002-01-01
A novel approach to modeling mass transfer in rotary kilns or rotating cylinders is explored. The movement of gas in the interparticle voids in the bed of the kiln is considered, where particles move concentrically with the geometry of the kiln and gas is entrained by these particles. The approach c
A Novel Approach to Conductor Resistance Calculation Considering Skin Effect
Jiang, Li-Min; Yan, Hua-Guang; Meng, Jun-Xia; Yin, Zhong-Dong; Lin, Zhi
2017-05-01
With the rapid growth of power electronics, more and more non-linear loads access the power grid, aggravating harmonic pollution. Calculating harmonic loss requires the resistance of a conductor with skin effect taken into account; however, most methods are based on the current density on a semi-infinite plane. Starting from the principles of engineering electromagnetics, this paper derives the distribution of current density on a finite plane, which is also applicable to transmission lines, and thereby develops a more practical and sufficiently precise frequency-dependent resistance for a current-carrying bar or transmission line considering skin effect. A corresponding physical experiment was also conducted, confirming the feasibility of the method.
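For contrast with the paper's finite-plane derivation, the textbook semi-infinite (surface-annulus) approximation of skin-effect resistance can be sketched as follows; the conductor dimensions and conductivity are illustrative, not taken from the paper:

```python
import math

def skin_depth(f, sigma=5.8e7, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)); copper by default."""
    mu = mu_r * 4e-7 * math.pi
    return math.sqrt(2.0 / (2.0 * math.pi * f * mu * sigma))

def ac_resistance_round_wire(f, radius, length=1.0, sigma=5.8e7):
    """High-frequency approximation: current confined to an annulus of
    thickness delta at the surface (valid for radius >> delta)."""
    d = min(skin_depth(f, sigma), radius)   # clamp so the annulus fits
    area = math.pi * (2.0 * radius * d - d * d)
    return length / (sigma * area)

# 50 Hz vs 5 kHz for a 10 mm radius copper bar, 1 m long:
r50 = ac_resistance_round_wire(50.0, 0.01)
r5k = ac_resistance_round_wire(5000.0, 0.01)
```

The paper's point is precisely that this semi-infinite-plane picture loses accuracy for realistic geometries, which motivates the finite-plane current-density derivation.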
A Parallel Approach to Cosine Calculation Using OpenCL
Directory of Open Access Journals (Sweden)
Faris Cakaric
2015-03-01
Fast calculation of the high-precision cosine function is increasingly important in many areas of computer science. This paper proposes an algorithm for high-performance calculation of the cosine function using the Maclaurin series, exploiting the benefits of parallelisation. It presents several parallel implementations of the algorithm in the OpenCL framework, progressing from a naïve to an optimised implementation. The paper compares execution times when the algorithm runs sequentially on a CPU and in parallel on a GPU, confirming a large decrease in execution time for the parallel version. Finally, it draws conclusions about the scalability of the algorithm and the percentage of total execution time lost to communication overheads.
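A sequential (CPU) version of the Maclaurin-series cosine is easy to sketch; the recurrence below builds each term from the previous one, which is also the form parallel implementations typically partition. The term count and range reduction are illustrative choices, not taken from the paper:

```python
import math

def cos_maclaurin(x, terms=20):
    """cos(x) ~ sum_{n=0}^{terms-1} (-1)^n x^(2n) / (2n)!
    Each term follows from the previous via a cheap multiplicative update."""
    x = math.fmod(x, 2.0 * math.pi)   # range reduction improves convergence
    term, total = 1.0, 1.0
    for n in range(1, terms):
        term *= -x * x / ((2 * n - 1) * (2 * n))
        total += term
    return total
```

In an OpenCL version, groups of terms (or many independent arguments) would be evaluated by separate work items and then reduced.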
Non-perturbative Calculation of the Positronium Mass Spectrum in Basis Light-Front Quantization
Wiecki, Paul; Zhao, Xingbo; Maris, Pieter; Vary, James P
2015-01-01
We report on recent improvements to our non-perturbative calculation of the positronium spectrum. Our Hamiltonian is a two-body effective interaction which incorporates one-photon exchange terms, but neglects fermion self-energy effects. This effective Hamiltonian is diagonalized numerically in a harmonic oscillator basis at strong coupling ($\\alpha=0.3$) to obtain the mass eigenvalues. We find that the mass spectrum compares favorably to the Bohr spectrum of non-relativistic quantum mechanics evaluated at this unphysical coupling.
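The Bohr spectrum used for comparison follows from the positronium reduced mass mu = m_e/2, giving E_n = -m_e alpha^2 / (4 n^2). A minimal sketch at the paper's coupling alpha = 0.3, with standard constants (the numbers are textbook values, not the paper's numerical results):

```python
# Bohr spectrum of positronium: reduced mass mu = m_e/2, so
# E_n = -mu * alpha^2 / (2 n^2) = -m_e * alpha^2 / (4 n^2)
M_E = 0.511e6  # electron mass in eV

def bohr_level(n, alpha=0.3):
    return -M_E * alpha ** 2 / (4.0 * n ** 2)

def positronium_mass(n, alpha=0.3):
    # bound-state mass eigenvalue = 2 m_e + binding energy
    return 2.0 * M_E + bohr_level(n, alpha)

levels = [bohr_level(n) for n in (1, 2, 3)]
```

These are the non-relativistic values the basis-light-front mass eigenvalues are compared against at the unphysically strong coupling.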
Calculation of the Mass Spectrum and Deconfining Temperature in Non-Abelian Gauge Theory.
Vohwinkel, Claus
1989-03-01
Using a small-volume expansion, the mass spectrum and deconfining temperature of SU(2) and SU(3) gauge theory are evaluated. By including non-perturbative features through the restoration of symmetries broken in perturbation theory, we obtain results that are valid up to intermediate volumes. The mass spectrum obtained is in good agreement with Lüscher's small-volume expansion in the small-volume limit and with Monte Carlo data in medium-sized volumes. Using asymmetric volumes we are able to derive the deconfining temperature and find reasonable agreement with Monte Carlo calculations.
Energy Technology Data Exchange (ETDEWEB)
Kneur, J.L
2006-06-15
This document is divided into 2 parts. The first part describes a particular re-summation technique for perturbative series that can give non-perturbative results in some cases. We detail some applications in field theory and in condensed matter, such as the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that can be inferred from the models.
Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
Borsanyi, S.; Fodor, Z.; Guenther, J.; Kampert, K.-H.; Katz, S. D.; Kawanai, T.; Kovacs, T. G.; Mages, S. W.; Pasztor, A.; Pittler, F.; Redondo, J.; Ringwald, A.; Szabo, K. K.
2016-11-01
Unlike the electroweak sector of the standard model of particle physics, quantum chromodynamics (QCD) is surprisingly symmetric under time reversal. As there is no obvious reason for QCD being so symmetric, this phenomenon poses a theoretical problem, often referred to as the strong CP problem. The most attractive solution for this requires the existence of a new particle, the axion—a promising dark-matter candidate. Here we determine the axion mass using lattice QCD, assuming that these particles are the dominant component of dark matter. The key quantities of the calculation are the equation of state of the Universe and the temperature dependence of the topological susceptibility of QCD, a quantity that is notoriously difficult to calculate, especially in the most relevant high-temperature region (up to several gigaelectronvolts). But by splitting the vacuum into different sectors and re-defining the fermionic determinants, its controlled calculation becomes feasible. Thus, our twofold prediction helps most cosmological calculations to describe the evolution of the early Universe by using the equation of state, and may be decisive for guiding experiments looking for dark-matter axions. In the next couple of years, it should be possible to confirm or rule out post-inflation axions experimentally, depending on whether the axion mass is found to be as predicted here. Alternatively, in a pre-inflation scenario, our calculation determines the universal axionic angle that corresponds to the initial condition of our Universe.
A new approach to calculate the hydration of DNA molecules
Energy Technology Data Exchange (ETDEWEB)
Hummer, G. [Los Alamos National Lab., NM (United States); Soumpasis, D.M. [Max-Planck-Institut fuer Biophysikalische Chemie (Karl-Friedrich-Bonhoeffer-Institut), Goettingen (Germany)
1993-09-01
A new method to calculate approximate water density distributions around DNA is presented. Formal and computational simplicity are emphasized in order to allow routine hydration studies. The method is based on the application of pair and triplet correlation functions of water-oxygen calculated by computer simulation. These correlation functions are combined with the configurational data of the electronegative atoms on DNA (oxygen and nitrogen) taken from crystal structures. For three B-DNA structures water density distributions are calculated and discussed. The observed characteristic features agree well with the prevalent picture from experiments. The minor groove shows a more structured hydration than the major groove. Also, the minor groove hydration of A{center_dot}T basepair tracts differs from that found in G{center_dot}C basepair regions. In A{center_dot}T tracts single peaks of high water density appear, whereas in G{center_dot}C regions the minor groove is occupied by two side-by-side ribbons of water.
Calculation of mass discharge of the Greenland ice sheet in the Earth System Model
Directory of Open Access Journals (Sweden)
O. O. Rybak
2016-01-01
Mass discharge calculation is a challenging task for ice sheet modeling aimed at evaluating the ice sheets' contribution to global sea level rise during past interglacials, as well as under future climate change. In Greenland, ablation is the major source of fresh-water runoff; it is approximately equal to the dynamical discharge (iceberg calving), and its share might have been still larger during past interglacials, when the margins of the GrIS retreated inland. Refreezing of meltwater and its retention are two poorly known processes that counteract melting and thus influence the runoff. Interaction of ice sheets and climate is driven by energy and mass exchange processes and is complicated by numerous feedbacks. Studying this complex of processes requires coupling an ice sheet model and a climate model (i.e. models of the atmosphere and the ocean) in one model, often called an Earth System Model (ESM). Formalizing the interaction between ice sheets and climate within an ESM requires special techniques to handle the dramatic differences in spatial and temporal variability scales among the three ESM blocks. In this paper, we focus on the method of coupling a Greenland ice sheet model (GrISM) with the climate model INMCM developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences. Our coupling approach uses a special buffer model that serves as an interface between GrISM and INMCM. A simple energy and water exchange model (EWBM-G) allows a realistic description of surface air temperature and precipitation fields adjusted to the elevation of the GrIS surface. In a series of diagnostic numerical experiments with the present-day GrIS geometry and the modeled climate, we studied the sensitivity of the modeled surface mass balance and runoff to the key EWBM-G parameters and compared
Simplified approach for quantitative calculations of optical pumping
Atoneche, Fred; Kastberg, Anders
2017-07-01
We present a simple and pedagogical method for quickly calculating optical pumping processes based on linearised population rate equations. The method can easily be implemented on mathematical software run on modest personal computers, and can be generalised to any number of concrete situations. We also show that the method is still simple with realistic experimental complications taken into account, such as high level degeneracy, impure light polarisation, and an added external magnetic field. The method and the associated mathematical toolbox should be of value in advanced physics teaching, and can also facilitate the preparation of research tasks.
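The linearised population rate equations can be solved for their steady state with elementary linear algebra. The sketch below uses a toy three-sublevel ground state with assumed pumping rates; the level scheme and rates are illustrative, not the paper's worked examples:

```python
import numpy as np

# Toy 3-sublevel ground state (m = -1, 0, +1) pumped with sigma+ light:
# population is transferred one step toward m = +1 per pumping cycle.
# Columns of the rate matrix sum to zero (population is conserved).
R = np.array([
    [-0.6,  0.0, 0.0],   # m = -1 loses population
    [ 0.6, -0.3, 0.0],   # m = 0 gains from m = -1, loses upward
    [ 0.0,  0.3, 0.0],   # m = +1 is the dark (stretched) state
])

def steady_state(R):
    # solve R N = 0 with sum(N) = 1 by replacing one row with normalisation
    a = R.copy()
    a[-1, :] = 1.0
    b = np.zeros(len(R))
    b[-1] = 1.0
    return np.linalg.solve(a, b)

N = steady_state(R)   # all population accumulates in m = +1
```

Degeneracy, impure polarisation, and a magnetic field, as discussed in the abstract, enter simply as extra states and extra off-diagonal rates in `R`.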
A simplified approach to calculate atomic partition functions in plasmas
Energy Technology Data Exchange (ETDEWEB)
D' Ammando, Giuliano [Dipartimento di Chimica, Universita di Bari, Via Orabona 4, 70125 Bari (Italy); Colonna, Gianpiero [CNR-IMIP, Via Amendola 122/D, 70126 Bari (Italy); Capitelli, Mario [Dipartimento di Chimica, Universita di Bari, Via Orabona 4, 70125 Bari (Italy); CNR-IMIP, Via Amendola 122/D, 70126 Bari (Italy)
2013-03-15
A simplified method to calculate the electronic partition functions and the corresponding thermodynamic properties of atomic species is presented and applied to C(I) up to C(VI) ions. The method consists in reducing the complex structure of an atom to three lumped levels. The ground level of the lumped model describes the ground term of the real atom, while the second lumped level represents the low lying states and the last one groups all the other atomic levels. It is also shown that for the purpose of thermodynamic function calculation, the energy and the statistical weight of the upper lumped level, describing high-lying excited atomic states, can be satisfactorily approximated by an analytic hydrogenlike formula. The results of the simplified method are in good agreement with those obtained by direct summation over a complete set (i.e., including all possible terms and configurations below a given cutoff energy) of atomic energy levels. The method can be generalized to include more lumped levels in order to improve the accuracy.
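The lumped-level partition function described above reduces to a three-term sum. The sketch below is illustrative only; the statistical weights and energies are placeholders, not the paper's C(I)-C(VI) data:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def lumped_partition_function(T, levels):
    """Z = sum_i g_i * exp(-E_i / kT) over a few lumped levels.
    `levels` is a list of (g, E_eV) pairs: the ground term, the low-lying
    states, and one effective high-lying (hydrogen-like) level."""
    return sum(g * math.exp(-e / (K_B * T)) for g, e in levels)

# illustrative carbon-like atom (placeholder g and E values):
levels = [(9, 0.0), (5, 1.26), (400, 9.0)]
z_cold = lumped_partition_function(5000.0, levels)
z_hot = lumped_partition_function(20000.0, levels)
```

At low temperature only the ground term contributes; the large-weight upper lumped level switches on at high temperature, which is exactly the behaviour the lumped model is built to capture.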
The impact of nuclear mass models on r-process nucleosynthesis network calculations
Vaughan, Kelly
2002-10-01
An insight into various nucleosynthesis processes is gained by modelling them with network calculations. My project focuses on r-process network calculations, where the r-process is nucleosynthesis via rapid neutron capture, thought to take place in high-entropy supernova bubbles. One of the main uncertainties of the simulations is the nuclear physics input, and my project investigates the role that nuclear masses play in the resulting abundances. The network code involves rapid (n,γ) capture reactions in competition with photodisintegration and β decay onto seed nuclei. In order to fully analyze the effects of nuclear mass models on the relative isotopic abundances, calculations were done with the network code keeping the initial environmental parameters constant throughout. The supernova model investigated is that of Qian et al. (1996), in which two r-processes, of high and low frequency, with seed nucleus ^90Se and fixed luminosity (L_νe(0)/r_7(0)^2 ≈ 8.77), contribute to the nucleosynthesis of the heavier elements. These two r-processes, however, do not contribute equally to the total abundance observed; the total isotopic abundance produced by both events was therefore calculated as Y(H+L) = (Y(H) + f·Y(L))/(f + 1). The applicability of the P-scheme, in relation to the other mass models, to the r-process network calculations is also assessed. References: Aprahamian, A., Gadala-Maria, A. & Cuka, N. 1996, Revista Mexicana de Física, 42, 1; Surman, R. & Engel, J. 1998, Phys. Rev. C, 54, 4.
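The abundance combination quoted in the abstract is a one-line weighted average; a minimal sketch, where f weights the low-frequency event relative to the high-frequency one, as in the text:

```python
def combined_abundance(y_high, y_low, f):
    """Total abundance from high- and low-frequency r-process events:
    Y(H+L) = (Y(H) + f * Y(L)) / (f + 1)."""
    return (y_high + f * y_low) / (f + 1.0)
```

For f = 0 only the high-frequency event contributes; for f = 1 the two events are averaged equally.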
Li, Hui; Shi, LiLi; Zhang, Min; Su, Zhongmin; Wang, XiuJun; Hu, LiHong; Chen, GuanHua
2007-04-14
The combination of a genetic algorithm and a neural network approach (GANN) has been developed to improve the calculation accuracy of density functional theory. As a demonstration, this combined quantum mechanical calculation and GANN correction approach has been applied to evaluate the optical absorption energies of 150 organic molecules. The neural network approach reduces the root-mean-square (rms) deviation of the calculated absorption energies of the 150 organic molecules from 0.47 to 0.22 eV for the TD-DFT/B3LYP/6-31G(d) calculation, and the newly developed GANN correction approach reduces the rms deviation to 0.16 eV.
A Brownian Dynamics Approach to ESR Line Shape Calculations
Wright, Matthew P.
The work presented in this thesis uses a Monte Carlo technique to simulate spectra for 14N and 15N spin labels. The algorithm presented here can also produce simulated spectra for any admixture of 14N and 15N. It makes use of iterative loops to model Brownian rotational diffusion and to repeatedly evaluate the spectral correlation function (relaxation function). The method starts with a derivation of an angle-dependent spin Hamiltonian that, when diagonalized, yields orientation-dependent eigenvalues. The resulting eigenvalue equations are then used to calculate the energy trajectories of a nitroxide spin label undergoing rotational diffusion, and these trajectories are used to evaluate the relaxation function. The absorption spectrum is obtained by applying a Fourier transform to the relaxation function. However, the Fourier transform of a truncated relaxation function produces "leakage" effects that manifest as spurious peaks in the first-derivative spectrum; to counter these effects, a data windowing function was applied to the relaxation function prior to the Fourier transform. To test the accuracy of the algorithm, simulated spectra for 14N and 15N spin labels diffusing in a glycerol-water mixture, as well as for a 14N-15N admixture diffusing in the same solvent, were produced and compared to experimental spectra. The level of agreement was quantified by calculating the mean square residual between the simulated and experimental spectra. The main spectral features were reproduced with reasonable fidelity by the simulated spectra.
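The windowing-plus-Fourier-transform step can be sketched with a toy relaxation function. The damping rate and oscillation frequency below are arbitrary, and a magnitude spectrum is used in place of a properly phased absorption signal:

```python
import numpy as np

def absorption_spectrum(relax, dt):
    """Magnitude FFT of a relaxation (correlation) function after a Hanning
    window; the window suppresses the truncation 'leakage' described above.
    (A real ESR simulation would phase the complex spectrum; this is a
    simplified magnitude version.)"""
    windowed = relax * np.hanning(len(relax))
    spec = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(relax), dt)
    return freqs, np.abs(spec)

# toy relaxation function: damped oscillation at 5 (arbitrary frequency units)
t = np.arange(0.0, 20.0, 0.01)
relax = np.exp(-0.5 * t) * np.cos(2.0 * np.pi * 5.0 * t)
freqs, spec = absorption_spectrum(relax, 0.01)
peak_freq = freqs[np.argmax(spec)]   # close to the input frequency of 5
```

The first-derivative spectrum mentioned in the abstract would then be obtained by numerically differentiating the absorption curve with respect to field/frequency.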
New Global Calculation of Nuclear Masses and Fission Barriers for Astrophysical Applications
Möller, P.; Sierk, A. J.; Bengtsson, R.; Ichikawa, T.; Iwamoto, A.
2008-05-01
The FRDM(1992) mass model [1] has an accuracy of 0.669 MeV in the region where its parameters were determined; for the 529 masses that have been measured since, its accuracy is 0.46 MeV, which is encouraging for applications far from stability in astrophysics. We are developing an improved mass model, the FRDM(2008). The improvements with respect to the FRDM(1992) are in two main areas. (1) The macroscopic model parameters are better optimized. By simulation (adjusting to a limited set of now-known nuclei) we can show that this actually makes the results more reliable in new regions of nuclei. (2) The ground-state deformation parameters are more accurately calculated. We minimize the energy in a four-dimensional deformation space (ε2, ε3, ε4, ε6) using a grid interval of 0.01 in all four deformation variables. The (non-finalized) FRDM(2008-a) has an accuracy of 0.596 MeV with respect to the 2003 Audi mass evaluation before triaxial shape degrees of freedom are included (in progress). When triaxiality effects are incorporated, preliminary results indicate that the model accuracy will improve further, to about 0.586 MeV. We also discuss very large-scale fission-barrier calculations in the related FRLDM(2002) model, which has been shown to reproduce known fission properties very satisfactorily, for example barrier heights from 70Se to the heaviest elements, multiple fission modes in the Ra region, asymmetry of mass division in fission, and the triple-humped barrier structure found in the light actinides. In the superheavy region we find barriers consistent with the observed half-lives. We have completed production calculations and obtain barrier heights for 5254 nuclei heavier than A = 170, for all nuclei between the proton and neutron drip lines. The energy is calculated for 5009325 different shapes for each nucleus, and the optimum barrier between the ground state and separated fragments is determined by use of an "immersion" technique.
Fast heat transfer calculations in supercritical fluids versus hydrodynamic approach
Nikolayev, Vadim; Garrabos, Y; Beysens, D
2016-01-01
This study investigates heat transfer in a simple pure fluid whose temperature is slightly above its critical temperature. We propose an efficient numerical method to predict the heat transfer in such fluids when gravity can be neglected. The method, based on a simplified thermodynamic approach, is compared with direct numerical simulations of the Navier-Stokes and energy equations performed for CO2 and SF6; a realistic equation of state is used to describe both fluids. The proposed method agrees with the full hydrodynamic solution and provides a huge gain in computation time. The connection between the purely thermodynamic and hydrodynamic descriptions is also discussed.
Dysphagia and dyspnea by lingual thyroid mass: An appropriate approach
Directory of Open Access Journals (Sweden)
Samad Ghiasi
2015-03-01
Lingual thyroid is a rare embryological anomaly resulting from failure of the thyroid gland to descend from the foramen cecum to its normal eutopic pre-laryngeal site. The case in this study was a 39-year-old female presenting with a foreign-body sensation, progressive dysphagia and dyspnea. Indirect laryngoscopy revealed a large well-defined mass at the tongue base, and imaging studies confirmed the diagnosis of a large ectopic lingual thyroid. Because of the mass size, surgery was performed via an external cervical approach. The choice of the best treatment depends on the mass position, size, symptoms, airway emergency and available medical facilities.
A New Approach to Axial Vector Model Calculations, 2
Dilkes, F. A.; Schubert, Christian
1999-01-01
We further develop the new approach, proposed in part I (hep-th/9807072), to computing the heat kernel associated with a Fermion coupled to vector and axial vector fields. We first use the path integral representation obtained for the heat kernel trace in a vector-axialvector background to derive a Bern-Kosower type master formula for the one-loop amplitude with $M$ vectors and $N$ axialvectors, valid in any even spacetime dimension. For the massless case we then generalize this approach to the full off-diagonal heat kernel. In the D=4 case the SO(4) structure of the theory can be broken down to $SU(2) \\times SU(2)$ by use of the 't Hooft symbols. Various techniques for explicitly evaluating the spin part of the path integral are developed and compared. We also extend the method to external fermions, and to the inclusion of isospin. On the field theory side, we obtain an extension of the second order formalism for fermion QED to an abelian vector-axialvector theory.
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.
Nikolova, Gergana; Toshev, Yuli
2008-01-01
On the basis of a representative anthropological investigation of 5290 individuals (2435 males and 2855 females) of the Bulgarian population at the age of 30-40 years (Yordanov et al. [1]) we proposed a 3D biomechanical model of human body of the average Bulgarian male and female and compared two different possible approaches to calculate analytically and to evaluate numerically the corresponding geometric and inertial characteristics of all the segments of the body. In the framework of the first approach, we calculated the positions of the centres of mass of the segments of human body as well as their inertial characteristics merely by using the initial original anthropometrical data, while in the second approach we adjusted the data by using the method based on regression equations. Wherever possible, we presented a comparison of our data with those available in the literature on other Caucasians and determined in which cases the use of which approach is more reliable.
Directory of Open Access Journals (Sweden)
M.A. Zayed
2017-03-01
Naproxen (C14H14O3) is a non-steroidal anti-inflammatory drug (NSAID). It is important to investigate its structure to identify the active groups and weak bonds responsible for its medical activity. In the present study, naproxen was investigated by mass spectrometry (MS) and thermal analysis (TA) measurements (TG/DTG and DTA), confirmed by semi-empirical molecular orbital (MO) calculations using the PM3 procedure. These calculations included bond length, bond order, bond strain, partial charge distribution, ionization energy and heat of formation (ΔHf). The mass-spectrometric and thermal fragmentation pathways were proposed and compared to select the most suitable scheme representing the correct fragmentation pathway of the drug in both techniques. The PM3 procedure reveals that the primary cleavage site of the charged molecule is the rupture of the COOH group (lowest bond order and high strain), which is followed by CH3 loss from the methoxy group. Thermal analysis of the neutral drug reveals a rapid response to temperature variation: it decomposes in several sequential steps in the temperature range 80-400 °C. These mass losses appear as two endothermic peaks and one exothermic peak, requiring energies of 255.42, 10.67 and 371.49 J g−1, respectively. The initial thermal ruptures are similar to those obtained by mass-spectral fragmentation (COOH rupture), followed by loss of the methyl group and finally by ethylene loss. Comparison between MS and TA therefore helps in selecting the proper pathway representing the drug's fragmentation, and this comparison is successfully confirmed by the MO calculations.
Large-scale subduction of continental crust implied by India-Asia mass-balance calculation
Ingalls, Miquela; Rowley, David B.; Currie, Brian; Colman, Albert S.
2016-11-01
Continental crust is buoyant compared with its oceanic counterpart and resists subduction into the mantle. When two continents collide, the mass balance for the continental crust is therefore assumed to be maintained. Here we use estimates of pre-collisional crustal thickness and convergence history derived from plate kinematic models to calculate the crustal mass balance in the India-Asia collisional system. Using the current best estimates for the timing of the diachronous onset of collision between India and Eurasia, we find that about 50% of the pre-collisional continental crustal mass cannot be accounted for in the crustal reservoir preserved at Earth's surface today--represented by the mass preserved in the thickened crust that makes up the Himalaya, Tibet and much of adjacent Asia, as well as southeast Asian tectonic escape and exported eroded sediments. This implies large-scale subduction of continental crust during the collision, with a mass equivalent to about 15% of the total oceanic crustal subduction flux since 56 million years ago. We suggest that similar contamination of the mantle by direct input of radiogenic continental crustal materials during past continent-continent collisions is reflected in some ocean crust and ocean island basalt geochemistry. The subduction of continental crust may therefore contribute significantly to the evolution of mantle geochemistry.
Temperature Distribution in Solar Cells Calculated in Three Dimensional Approach
Directory of Open Access Journals (Sweden)
Hamdy K. Elminir
2000-01-01
Field testing is costly, time consuming and depends heavily on prevailing weather conditions; adequate security and weather protection must also be provided at the test site, and delays can be caused by bad weather and system failures. To overcome these problems, a photovoltaic (PV) array simulation may be used. For system design purposes, the model must reflect the details of the physical processes occurring in the cell, to give a closer insight into device operation as well as optimization of particular device parameters. PV cell temperature has a great effect on cell performance; hence the need for an exact technique to calculate the temperature distribution of a PV cell accurately and efficiently, from which safe and proper operation at maximum ratings can be ensured. The scope of this work is to describe the development of 3D thermal models, which are used to update the operating temperature, to give a closer insight into the response behavior and to estimate the overall performance.
Variational Approach to Enhanced Sampling and Free Energy Calculations
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem, many based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, including the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
A Variational Approach to Enhanced Sampling and Free Energy Calculations
Parrinello, Michele
2015-03-01
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and flexibility of this approach on a number of examples, including the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
Towards NLO calculations in the parton Reggeization approach
Nefedov, Maxim
2016-01-01
The parton Reggeization approach is a scheme of kT-factorization for multiscale hard processes, based on Lipatov's gauge-invariant effective field theory (EFT) for high-energy processes in QCD. A new type of rapidity divergence, associated with log(1/x) corrections, appears in the loop corrections in this formalism. A covariant procedure for regularizing the rapidity divergences, preserving the gauge invariance of the effective action, is described. As an example application, the one-loop correction to the propagator of the Reggeized quark and the γQq scattering vertex are computed. The results obtained are used to construct the Regge limit of the one-loop γγ → qq̄ amplitude. The cancellation of rapidity divergences and the consistency of the EFT prediction with the full QCD result are demonstrated. The rapidity renormalization group within the EFT is discussed.
UAV-based NDVI calculation over grassland: An alternative approach
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters in seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers such as MODIS, with moderate ground resolution of up to 250 m. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small and light instruments are particularly suitable for mounting on unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolution on the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. We therefore propose an alternative, considerably cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum, its internal infrared filter having been removed; a mounted optical filter additionally blocks all wavelengths below 700 nm; (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons. First, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola); all imaging data and reflectance maps are processed using the PIX4D software. In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained by ground spectral measurements using a
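The NDVI computation itself is a per-pixel band ratio; assuming co-registered, radiometrically calibrated reflectance arrays from the two cameras (an assumption, since the abstract's calibration details are not given), it can be sketched as:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
    `nir` and `red` are reflectance arrays; `eps` guards against
    division by zero over dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# dense grass reflects strongly in NIR and absorbs red light,
# so it yields a much higher NDVI than bare soil:
veg = ndvi(np.array([0.50]), np.array([0.05]))
soil = ndvi(np.array([0.30]), np.array([0.25]))
```

With the two-camera setup described above, `nir` would come from the filter-modified Ricoh GR and `red` from the red channel of the RGB-configured one.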
Energy Technology Data Exchange (ETDEWEB)
Tsukamoto, T.; Sagawa, N. [The Institute of Energy Economics, Tokyo (Japan)]
1996-02-01
In order to optimize the introduction of wide-area power transmission access under the amended Electric Power Business Law, a theoretical approach to calculating transmission-line access fees is discussed. The amended law is intended not to limit newcomers in the power generation business to operations within the supply areas of general electric power business operators, but to form wholesale power markets over a wider area. Since a wholesale power market is structured around a single transmission network, the access conditions for transmission lines largely govern the economic viability for new market entrants. Too high an access fee prevents high-efficiency power generation units from entering the market, and thus prevents reductions in energy prices. Conversely, if the fee is too low, harmful effects on system operation will result, such as the entry of low-efficiency generation units. An operation based on a marginal-cost approach would maximize economic gains and give incentives to system participants, but a number of problems also exist. The overall cost allocation method is simple and easy to operate, but raises economic issues related to technical constraints, fairness, and efficiency of system operation. 5 refs., 5 figs., 1 tab.
A simple approach to calculate active power of electrosurgical units
Directory of Open Access Journals (Sweden)
André Luiz Regis Monteiro
Full Text Available Abstract. Introduction: Despite more than a hundred years of electrosurgery, only a few electrosurgical equipment manufacturers have developed methods to regulate the active power delivered to the patient, usually around an arbitrary setpoint. In fact, no manufacturer has a method to measure the active power actually delivered to the load. Measuring the delivered power and computing it fast enough to avoid injury to the organic tissue is challenging. If voltage and current signals can be sampled in time and discretized in the frequency domain, a simple and very fast multiplication process can be used to determine the active power. Methods: This paper presents an approach for measuring active power at the output power stage of electrosurgical units with mathematical shortcuts based on a simple multiplication procedure of discretized variables (frequency-domain vectors) obtained through the Discrete Fourier Transform (DFT) applied to time-sampled voltage and current vectors. Results: Comparative results between simulations and a practical experiment are presented, all in accordance with the requirements of the applicable industry standards. Conclusion: An analysis is presented comparing the active power obtained analytically from well-known voltage and current signals against a computational methodology based on vector manipulation, using the DFT only for the time-to-frequency domain transformation. The greatest advantage of this method is that it determines the active power of noisy and phase-shifted signals without complex transform methodologies or sophisticated computing techniques such as convolution. All results showed errors substantially lower than the thresholds defined by the applicable standards.
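The frequency-domain multiplication described above follows from Parseval's relation for the DFT; a minimal sketch (the 50 Hz test signals and amplitudes below are illustrative, not the paper's experimental values):

```python
import numpy as np

def active_power_freq(v, i):
    """Active (average) power from time-sampled voltage and current,
    computed in the frequency domain.  By Parseval's relation for the
    DFT, mean(v * i) = (1/N^2) * sum_k Re(V[k] * conj(I[k]))."""
    V, I = np.fft.fft(v), np.fft.fft(i)
    # np.vdot conjugates its first argument: vdot(I, V) = sum conj(I) * V
    return np.real(np.vdot(I, V)) / len(v) ** 2

# Illustrative 50 Hz signals, sampled over an integer number of periods:
fs = 10_000.0
t = np.arange(0, 0.1, 1 / fs)
v = 325.0 * np.sin(2 * np.pi * 50 * t)               # voltage, V
i = 10.0 * np.sin(2 * np.pi * 50 * t - np.pi / 6)    # current lags by 30 deg
p = active_power_freq(v, i)   # Vpk * Ipk / 2 * cos(30 deg), ~1407 W
```

The same number follows from np.mean(v * i); the frequency-domain form is attractive when the spectra are needed anyway, e.g. for discarding noise bins before the multiplication.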
Orchestrating Masses of Sensors; A Design-Driven Development Approach
Kabáč, Milan; Consel, Charles
2015-01-01
This paper proposes a design-driven development approach that is dedicated to the domain of orchestration of masses of sensors. The developer declares what an application does using a domain-specific language (DSL). Our compiler processes domain-specific declarations to generate a customized programming framework that guides and supports the programming phase.
Complete nucleosynthesis calculations for low-mass stars from NuGrid
Pignatari, Marco; Bennett, Michael; Diehl, Steven; Fryer, Christopher L; Hirschi, Raphael; Hungerford, Aimee; Magkotsios, Georgios; Rockefeller, Gabriel; Timmes, Francis X; Young, Patrick
2008-01-01
Many nucleosynthesis and mixing processes of low-mass stars as they evolve from the Main Sequence to the thermally pulsing Asymptotic Giant Branch phase (TP-AGB) are well understood (although important physics components, e.g. rotation, magnetic fields, and gravity-wave mixing, remain poorly known). Nevertheless, in recent years high-resolution presolar grain measurements have posed new puzzles and placed strong constraints on nucleosynthesis processes in stars. The goal of the NuGrid collaboration is to provide uniform yields for a large range of masses and metallicities, including low-mass and massive stars and their explosions. Here we present the first calculations of stellar evolution and high-resolution post-processing simulations of an AGB star with an initial mass of 2 M_sun and solar-like metallicity (Z=0.01), based on the post-processing code PPN. In particular, we analyze the formation and evolution of the radiative 13C-pocket between the 17th and the 18th TP. The s-proc...
Large-scale shell-model calculations of nuclei around mass 210
Teruya, E.; Higashiyama, K.; Yoshinaga, N.
2016-06-01
Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For a phenomenological effective two-body interaction, one set of monopole-pairing and quadrupole-quadrupole interactions, including multipole-pairing interactions, is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.
Anharmonic effects in neutron cross-section calculation for nuclei in the mass range 48 ≤ A ≤ 58
Energy Technology Data Exchange (ETDEWEB)
Lubian, J.; Cabezas, R. (Center for Applied Studies to Nuclear Development, Havana (Cuba))
1993-08-01
In this paper, the deviation of the target-nucleus wavefunction from the harmonic vibrator in neutron scattering from medium-mass nuclei at low energies is studied. Two forms of anharmonicity are used: anharmonicities due to higher-order terms in the Hamiltonian, and those due to different deformation parameters corresponding to transitions between nuclear states. For the calculation of neutron cross sections, the coupled-channel method is used in combination with the statistical Hauser-Feshbach-Moldauer theory. It is shown that both kinds of anharmonicity introduce a correction (about 10% in some cases) to the neutron cross sections at low energies. (author).
Directory of Open Access Journals (Sweden)
M. A. Hernández-Ceballos
2011-01-01
Full Text Available The Guadalquivir valley favors the channeling of air masses from coastal areas to inland Andalusia. This paper presents a first approximation of the spatial variation along the Guadalquivir valley of some representative thermodynamic properties of air masses. We have selected three representative sites on its lower, middle and upper course, analyzing for all of them the daily trajectories and hourly records of potential temperature, specific humidity and wind speed during the period 2000-2007. The set of trajectories has been calculated using the HYSPLIT model (Hybrid Single-Particle Lagrangian Integrated Trajectory), establishing 12 UTC as the arrival time, a duration of 120 hours and a final incidence height of 500 m. Cluster analysis allowed the identification of ten different types of air masses, from which those with a clear westerly origin were selected. Analysis at the three sites of the daily cycles of potential temperature shows a gradual cooling (3-4 K) during the cold period of the year (November-February) and a warming in the range of 5-6 K during the warm period (June-September) between the ends of the valley. The specific humidity drops, regardless of the period and type of air mass, as the air mass travels through the valley, more intensely during the warm period (up to 8 g kg-1, compared with 1-2 g kg-1 in the cold period). The wind speed cycles show a progressive drop in intensity along the valley, more marked in the final section with a reduction of up to 3 m s-1 per 100 km; the most intense values are recorded during the warm period of the year, with average values of up to 4 m s-1.
Penning de Vries, M. J. M.; Tuinder, O. N. E.; Wagner, T.; Fromm, M.
2012-04-01
The Wallow wildfire of 2011 was one of the most devastating fires ever in Arizona, burning over 2,000 km2 in the states of Arizona and New Mexico. The fire originated in the Bear Wallow Wilderness area in June 2011 and raged for more than a month. The intense heat of the fire caused the formation of a pyro-convective cloud. The resulting smoke plume, partially located above low-lying clouds, was detected by several satellite instruments, including GOME-2 on June 2. The UV Aerosol Index, indicative of aerosol absorption, reached a maximum of 12 on that day, pointing to an elevated plume with moderately absorbing aerosols. We have performed extensive model calculations assuming different aerosol optical properties to determine the total aerosol optical depth of the plume. The plume altitude, needed to constrain the aerosol optical depth, was obtained from independent satellite measurements. The model results were compared with the UV Aerosol Index and UV reflectances measured by the GOME-2 polarization measurement devices, which have a spatial resolution of roughly 10x40 km2. Although neither the exact aerosol optical properties nor the optical depth can be obtained with this method, the range in aerosol optical depth values that we calculate, combined with an assumed mass-specific extinction factor of 5 m2/kg, leads us to a rough estimate of the smoke plume mass that cannot, at present, be assessed in any other way.
Calculating Internal Structure and Mass-Radius Relationships of Rocky Exoplanets
Desch, Steve; Lorenzo, Alejandro; Ko, Byeongkwan
2015-12-01
We present a code (ExoPlex) we have written to calculate the internal structures and mass-radius relationships of rocky exoplanets. Existing codes described in the literature consider only a limited range of compositions for the core and mantle, and they generally assume that mineral phases are always present as a single high-pressure polymorph. These restrictions arise from the need to specify material properties, such as bulk modulus, at every depth in the planet, which requires knowledge of the phases present. Existing codes also neglect the effects of temperature on material properties, assuming values attained in the low-temperature limit. Our code circumvents these problems. We specify a stoichiometry for the core and for the mantle, we find the pressure at depth by integrating the equation of hydrostatic equilibrium, and we assume adiabatic temperature gradients in the mantle and in the core. We then supply pressure, temperature, and composition as inputs to the PerpleX software package that calculates the mineral phases present in thermodynamic equilibrium, and their material properties. This allows us to explore mass-radius relationships across a wide range of compositional and mineralogical parameter space. We discuss preliminary results.
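The hydrostatic-equilibrium step described above can be illustrated with a deliberately simplified sketch: a constant-density interior stands in for the PerpleX-derived equation of state (that substitution, and the numerical values, are assumptions for illustration only, not the ExoPlex method):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def radius_for_central_pressure(p_c, rho, dr=1000.0):
    """Integrate hydrostatic equilibrium dP/dr = -G m(r) rho / r^2
    outward from the center of a constant-density planet until the
    pressure drops to zero.  Constant rho is a hypothetical stand-in
    for a real pressure- and temperature-dependent equation of state."""
    r = dr
    m = (4.0 / 3.0) * math.pi * dr**3 * rho   # mass enclosed so far
    p = p_c
    while p > 0:
        p -= G * m * rho / r**2 * dr          # hydrostatic pressure drop
        r += dr
        m += 4.0 * math.pi * r**2 * rho * dr  # mass of the next shell
    return r

# Analytic check: P_c = (2/3) * pi * G * rho^2 * R^2 for constant density.
rho = 5500.0     # ~ Earth's mean density, kg/m^3
R = 6.371e6      # Earth radius, m
p_c = (2.0 / 3.0) * math.pi * G * rho**2 * R**2
ratio = radius_for_central_pressure(p_c, rho) / R   # ~ 1.0
```

A real mass-radius code replaces the constant rho with the density returned by the equation-of-state solver at each (P, T) and iterates until the surface boundary conditions are met.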
Bencs, László; Laczai, Nikoletta; Ajtony, Zsolt
2015-07-01
A combination of former convective-diffusive vapor-transport models is described to extend the calculation scheme for sensitivity (characteristic mass, m0) in graphite furnace atomic absorption spectrometry (GFAAS). This approach encompasses the influence of forced convection of the internal furnace gas (mini-flow), combined with concentration diffusion of the analyte atoms, on the residence time in a spatially isothermal furnace, i.e., the standard design of the transversely heated graphite atomizer (THGA). Several relationships for the diffusional and convectional residence times were studied and compared, including factors accounting for the effects of the sample/platform dimension and the dosing hole. These model approaches were subsequently applied to the particular cases of the analytes Ag, As, Cd, Co, Cr, Cu, Fe, Hg, Mg, Mn, Mo, Ni, Pb, Sb, Se, Sn, V and Zn. To verify the accuracy of the calculations, the experimental m0 values were determined with a standard THGA furnace, operating under either stopped flow or mini-flow (50 cm3 min-1) of the internal sheath gas during atomization. The theoretical and experimental ratios of m0(mini-flow) to m0(stop-flow) were closely similar for each analyte studied. Likewise, the calculated m0 data agreed fairly well with the corresponding experimental m0 values for stopped and mini-flow conditions, i.e., the calculated-to-experimental ratio ranged between 0.62 and 1.8 with an average of 1.05 ± 0.27. This indicates the usability of the current model calculations for checking the operation of a given GFAAS instrument and the applied methodology.
Institute of Scientific and Technical Information of China (English)
Zun Peng; Yan-ping Bao; Ya-nan Chen; Li-kang Yang; Cao Xie; Feng Zhang
2014-01-01
An unsteady, two-dimensional, explicitly solved finite difference heat transfer model of a billet caster was presented to clarify the influence of the thermal conductivity of steel on model accuracy. Different approaches were utilized for calculating the thermal conductivity of solid, mushy and liquid steels. Model results predicted by these approaches were compared, and the advantages of the advocated approaches were discussed. It is found that the approach used for calculating the thermal conductivity of solid steel notably influences model predictions. Convection effects of liquid steel should be considered properly while calculating the thermal conductivity of mushy steel. The different values of the effective thermal conductivity of liquid steel adopted in the literature can partly be explained by the fact that different models adopted dissimilar approaches for calculating the thermal conductivity of solid and mushy steels.
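A common way to fold the liquid-steel convection effect into the mushy-zone conductivity, as suggested above, is a solid-fraction-weighted mixture rule with an "effective conductivity" multiplier on the liquid term; the sketch below is an illustration, and the numerical values (including the enhancement factor of 4.0) are hypothetical, not taken from the paper:

```python
def mushy_conductivity(f_solid, k_solid, k_liquid, enhancement=4.0):
    """Solid-fraction-weighted thermal conductivity of the mushy zone.

    Convection in the liquid is mimicked by multiplying k_liquid by an
    'effective conductivity' enhancement factor; the default of 4.0 is
    purely illustrative, since models disagree on its value.
    """
    if not 0.0 <= f_solid <= 1.0:
        raise ValueError("solid fraction must be in [0, 1]")
    return f_solid * k_solid + (1.0 - f_solid) * enhancement * k_liquid

# Fully solid -> k_solid; fully liquid -> enhanced liquid conductivity.
print(mushy_conductivity(1.0, 33.0, 35.0))   # 33.0
print(mushy_conductivity(0.0, 33.0, 35.0))   # 140.0
```

Inside a finite-difference caster model, f_solid would come from the local temperature via the solidus-liquidus interval, so the conductivity varies smoothly across the mushy zone.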
Institute of Scientific and Technical Information of China (English)
Liu Yu-Min; Yu Zhong-Yuan
2009-01-01
Calculations of the electronic structures of a semiconductor quantum dot and a semiconductor quantum ring are presented in this paper. To reduce the computational cost, simplified axially symmetric shapes are used for both the quantum dot and the quantum ring. An energy-dependent effective mass is taken into account in solving the Schrödinger equations in the single-band effective mass approximation. The calculated results show that the energy-dependent effective mass needs to be considered only for relatively small quantum dots or quantum rings. For large quantum structures, the energy-dependent effective mass and the parabolic effective mass give the same results. The energy states and the effective masses of the quantum dot and the quantum ring as functions of geometric parameters are also discussed in detail.
Surgical technique: Retroperitoneoscopic approach for adrenal masses in children.
Yankovic, F; Undre, S; Mushtaq, I
2014-04-01
Laparoscopic adrenalectomy is considered the standard of care for the surgical excision of adrenal masses. Both the transperitoneal laparoscopic and the retroperitoneoscopic approaches are described. Both are safe and as effective as open adrenalectomy, with the added benefit of being minimally invasive. The technique can be utilized for patients requiring surgery for a phaeochromocytoma, adrenal adenoma, adrenal adenocarcinoma, Cushing's syndrome, neuroblastoma, or an incidentaloma. Relative contraindications include previous surgery of the liver or kidney, large tumours (>8-10 cm in diameter) and coagulation disorders. Although the transperitoneal route is more widely used, the retroperitoneal approach provides direct access to the adrenal gland and easy visualization of the adrenal vein. It also avoids colonic mobilization, minimizes the risk of injury to hollow viscera, and reduces the potential for adhesion formation. However, the reversed orientation of the kidney and hilum, combined with a significantly smaller working space, may make this approach difficult to master.
A new approach to UTC calculation by means of the Kalman filter
Parisi, Federica; Panfilo, Gianna
2016-10-01
In this paper a new approach to Coordinated Universal Time (UTC) calculation based on the Kalman filter is presented. An ensemble of atomic clocks participating in UTC is selected for analyzing and testing the potential of this new method.
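The recursion at the heart of such a filter can be sketched for a single clock tracked as a random-walk offset; this toy model (all noise levels invented for illustration) is far simpler than the multi-clock ensemble filter the paper develops:

```python
import random

def kalman_track(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter tracking a slowly drifting clock offset.

    Random-walk state model: x_k = x_{k-1} + w,  w ~ N(0, q);
    measurement model:       z_k = x_k + v,      v ~ N(0, r).
    A toy stand-in for the multi-clock ensemble filter of the abstract.
    """
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p += q                      # predict: variance grows by q
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update toward the measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

random.seed(1)
truth, xs_true, zs = 0.0, [], []
for _ in range(500):
    truth += random.gauss(0, 1e-2)            # true offset random walk
    xs_true.append(truth)
    zs.append(truth + random.gauss(0, 0.1))   # noisy clock reading
est = kalman_track(zs, q=1e-4, r=1e-2)
```

With the process and measurement variances matched to the simulated noise, the filtered offset tracks the truth far more closely than the raw readings do.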
Energy Technology Data Exchange (ETDEWEB)
Bencs, László, E-mail: bencs.laszlo@wigner.mta.hu [Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest (Hungary); Laczai, Nikoletta [Institute for Solid State Physics and Optics, Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest (Hungary); Ajtony, Zsolt [Institute of Food Science, University of West Hungary, H-9200 Mosonmagyaróvár, Lucsony utca 15–17 (Hungary)
2015-07-01
A combination of former convective-diffusive vapor-transport models is described to extend the calculation scheme for sensitivity (characteristic mass, m0) in graphite furnace atomic absorption spectrometry (GFAAS). This approach encompasses the influence of forced convection of the internal furnace gas (mini-flow), combined with concentration diffusion of the analyte atoms, on the residence time in a spatially isothermal furnace, i.e., the standard design of the transversely heated graphite atomizer (THGA). Several relationships for the diffusional and convectional residence times were studied and compared, including factors accounting for the effects of the sample/platform dimension and the dosing hole. These model approaches were subsequently applied to the particular cases of the analytes Ag, As, Cd, Co, Cr, Cu, Fe, Hg, Mg, Mn, Mo, Ni, Pb, Sb, Se, Sn, V and Zn. To verify the accuracy of the calculations, the experimental m0 values were determined with a standard THGA furnace, operating under either stopped flow or mini-flow (50 cm3 min-1) of the internal sheath gas during atomization. The theoretical and experimental ratios of m0(mini-flow) to m0(stop-flow) were closely similar for each analyte studied. Likewise, the calculated m0 data agreed fairly well with the corresponding experimental m0 values for stopped and mini-flow conditions, i.e., the calculated-to-experimental ratio ranged between 0.62 and 1.8 with an average of 1.05 ± 0.27. This indicates the usability of the current model calculations for checking the operation of a given GFAAS instrument and the applied methodology. - Highlights: • A calculation scheme for convective-diffusive vapor loss in GFAAS is described. • Residence time (τ) formulas were compared for sensitivity (m0) in a THGA furnace. • Effects of the sample/platform dimension and dosing hole on τ were assessed. • Theoretical m0 of 18 analytes were
Martínez-Cifuentes, Maximiliano; Clavijo-Allancan, Graciela; Zuñiga-Hormazabal, Pamela; Aranda, Braulio; Barriga, Andrés; Weiss-López, Boris; Araya-Maturana, Ramiro
2016-01-01
A series of a new type of tetracyclic carbazolequinones incorporating a carbonyl group at the position ortho to the quinone moiety was synthesized and analyzed by tandem electrospray ionization mass spectrometry (ESI/MS-MS), using Collision-Induced Dissociation (CID) to dissociate the protonated species. Theoretical parameters such as the molecular electrostatic potential (MEP), local Fukui functions and the local Parr function for electrophilic attack, as well as proton affinity (PA) and gas-phase basicity (GB), were used to explain the preferred protonation sites. Transition states of some main fragmentation routes were obtained, and the energies calculated at the density functional theory (DFT) B3LYP level were compared with those obtained by ab initio quadratic configuration interaction with single and double excitation (QCISD). The results are in accordance with the observed distribution of ions. The nature of the substituents on the aromatic ring has a notable impact on the fragmentation routes of the molecules. PMID:27399676
Mishra, Saurabh
In the past two decades, numerical transport phenomena based models have provided useful information about the thermal cycles and weld pool geometry. However, no effort has been made to apply these concepts to design weld consumables, to study the weld bead shape on welding two plates with different sulfur contents and to tailor weld pool geometry to specified dimensions. The present research focuses on these unexplored areas. The research proposed here seeks to develop a quantitative understanding of mass transport during fusion welding, with special emphasis on the role of surface active elements and the effect of solute distribution on weld defects like liquation cracking. A comprehensive model, incorporating numerical three-dimensional calculations of temperature and velocity fields and solute distribution in the weld pool is developed for the proposed quantitative study. The study identifies the factors that affect the weld pool geometry on joining two plates with different sulfur contents, and predicts the susceptibility of an aluminum-copper alloy GMA weld to liquation cracking. The specific contributions of the present thesis research include (i) development of a numerical solute transport model for fusion welding; (ii) improving the reliability of output of the numerical model; (iii) achieving computational efficiency and economy by developing a neural network trained by data generated by the numerical model; (iv) creating a bi-directional methodology where a target weld attribute like weld pool geometry can be attained via multiple combinations of input process parameters like arc current, voltage and welding speed; (v) calculating sulfur distribution during gas tungsten arc welding of stainless steel plates with different sulfur contents and predicting the arc welding of aluminum-copper alloys by incorporating the heat and mass addition from filler metal and a non-equilibrium solidification model, and using the copper content of the mushy zone to predict
Proteomics by mass spectrometry: approaches, advances, and applications.
Yates, John R; Ruse, Cristian I; Nakorchevsky, Aleksey
2009-01-01
Mass spectrometry (MS) is the most comprehensive and versatile tool in large-scale proteomics. In this review, we dissect the overall framework of the MS experiment into its key components. We discuss the fundamentals of proteomic analyses as well as recent developments in the areas of separation methods, instrumentation, and overall experimental design. We highlight both the inherent strengths and limitations of protein MS and offer a rough guide for selecting an experimental design based on the goals of the analysis. We emphasize the versatility of the Orbitrap, a novel mass analyzer that features high resolution (up to 150,000), high mass accuracy (2-5 ppm), a mass-to-charge range of 6000, and a dynamic range greater than 10^3. The high mass accuracy of the Orbitrap expands the arsenal of data acquisition and analysis approaches compared with a low-resolution instrument. We discuss various chromatographic techniques, including multidimensional separation and ultra-performance liquid chromatography. Multidimensional protein identification technology (MudPIT) integrates sample preparation, orthogonal separations, and MS and software solutions. We discuss several aspects of MudPIT applications to quantitative phosphoproteomics. MudPIT application to large-scale analysis of phosphoproteins includes (a) a fractionation procedure for motif-specific enrichment of phosphopeptides, (b) development of informatics tools for interrogation and validation of shotgun phosphopeptide data, and (c) in-depth data analysis for simultaneous determination of protein expression and phosphorylation levels, analogous to Western blot measurements. We illustrate MudPIT application to quantitative phosphoproteomics of the beta-adrenergic pathway. We discuss several biological discoveries made via mass spectrometry pipelines with a focus on cell signaling proteomics.
Luo, Li; Duan, Nan; Wang, Xiaochang C; Guo, Wenshan; Ngo, Huu Hao
2017-12-15
Although the eutrophication phenomenon has been studied for a long time, there are still no quantifiable parameters available for a comprehensive assessment of its impacts on the water environment. As contamination alters the thermodynamic equilibrium of a water system to a state of imbalance, a novel method for its quantitative evaluation was proposed in this study. Based on thermodynamic analyses of the algal growth process, the proposed method targeted, both theoretically and experimentally, the typical algae species encountered in the water environment. By calculating the molar enthalpy of algal biomass production, the heat energy dissipated in the photosynthetic process was first evaluated. The associated entropy production (ΔS) in the aquatic system could then be obtained. For six algae strains of distinct molecular formulae, the heat energy consumed for the production of a unit of algal biomass was found to be proportional to the mass of nitrogen (N) or phosphorus (P) taken up through photosynthesis. A proportionality relationship between ΔS and the algal biomass, with a coefficient of circa 44 kJ/g, was obtained. By the principle of energy conservation, the heat energy consumed in the process of algal biomass production is stored in the algal biomass. Furthermore, by measuring the heat of combustion of mature algae of Microcystis flos-aquae, Anabaena flos-aquae, and Chlorella vulgaris, the proportionality relationships between the heat energy and the N and P contents were validated experimentally at the 90% and 85% confidence levels, respectively. As the discharge of excess N and P from domestic wastewater treatment plants is usually the main cause of eutrophication, the proposed impact assessment approach estimates that, for a receiving water body, the ΔS due to a unit mass of N and P discharge is 268.9 kJ/K and 1870.1 kJ/K, respectively. Consequently, P discharge control would be more important for environmental water protection.
Aramendía-Vidaurreta, Verónica; Cabeza, Rafael; Villanueva, Arantxa; Navallas, Javier; Alcázar, Juan Luis
2016-03-01
The discrimination between benign and malignant adnexal masses in ultrasound images represents one of the most challenging problems in gynecologic practice. In the study described here, a new method for automatic discrimination of adnexal masses based on a neural network approach was tested. The proposed method first calculates seven different types of characteristics (local binary pattern, fractal dimension, entropy, invariant moments, gray level co-occurrence matrix, Laws texture energy and Gabor wavelet) from ultrasound images of the ovary, from which several features are extracted and collected together with the patient's age. The proposed technique was validated using 106 benign and 39 malignant images obtained from 145 patients, a ratio reflecting the prevalence of malignancy in the general population. On evaluation of the classifier, an accuracy of 98.78%, a sensitivity of 98.50%, a specificity of 98.90% and an area under the curve of 0.997 were obtained.
Energy Technology Data Exchange (ETDEWEB)
Lee, C.E.
1976-08-01
The Volterra method of the multiplicative integral is used to determine the isotopic density, mass, and energy production in linear systems. The solution method, assumptions, and limitations are discussed. The method allows a rapid accurate calculation of the change in isotopic density, mass, and energy production independent of the magnitude of the time steps, production or decay rates, or flux levels.
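For a linear system dN/dt = AN with a constant matrix A, the multiplicative integral reduces to the matrix exponential N(t) = exp(At) N(0), which is why the result is independent of the size of the time step. A minimal sketch for a two-isotope decay chain (the decay constants are invented for illustration):

```python
import numpy as np

def expm(a, terms=40):
    """Matrix exponential by truncated Taylor series (adequate for the
    small, well-scaled matrices of a short decay chain)."""
    result = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k          # accumulates a^k / k!
        result = result + term
    return result

# Two-isotope chain: parent decays into daughter (the Bateman problem).
lam1, lam2 = 0.3, 0.1                # decay constants, 1/s (illustrative)
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])
N0 = np.array([1.0, 0.0])            # start with pure parent
t = 5.0
N = expm(A * t) @ N0                 # exact for any step size t
```

The result matches the analytic Bateman solution for any t, with no step-size restriction; production terms and flux-driven transmutation add further entries to A without changing the solution method.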
2010-07-01
40 CFR § 75.71 (Protection of Environment; Monitoring NOX Mass Emissions Provisions): Specific provisions for monitoring NOX and heat input for the purpose of calculating NOX mass emissions. 2010-07-01.
A NEW APPROACH TO CALCULATION OF THE ELLIPTICAL BEARING OF MULTI-SHEARING-SURFACE
Directory of Open Access Journals (Sweden)
Emin GÜLLÜ
1995-03-01
Full Text Available In this study, a new approach is proposed for calculating the performance characteristics of elliptic bearings of the Multi-Shearing-Surface (MSS) type, which are widely used. The number of studies available in this area falls far short of what is needed. This paper presents the proposed approach and the bearing performance obtained without the use of superposition.
Sustainability of algae derived biodiesel: a mass balance approach.
Pfromm, Peter H; Amanor-Boadu, Vincent; Nelson, Richard
2011-01-01
A rigorous chemical engineering mass balance/unit operations approach is applied here to biodiesel from algae mass culture. An equivalent of 50,000,000 gallons per year (0.006002 m3/s) of petroleum-based Number 2 fuel oil (US diesel for compression-ignition engines, about 0.1% of annual US consumption) from oleaginous algae is the target. Methyl algaeate and ethyl algaeate diesel can, according to this analysis, conceptually be produced largely in a technologically sustainable way, albeit at a lower available diesel yield. About 11 square miles of algae ponds would be needed under the optimistic assumption of 50 g biomass yield per day per m2 of pond area. CO2 to foster algae growth should be supplied from a sustainable source, such as biomass-based ethanol production. Reliance on fossil-based CO2 from power plants or fertilizer production renders algae diesel non-sustainable in the long term.
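The pond-area figure can be reproduced with a back-of-the-envelope mass balance; the lipid fraction and diesel density below are assumptions of ours, chosen for illustration, not values quoted in the abstract:

```python
GAL_TO_L = 3.785
DIESEL_DENSITY = 0.85    # kg/L, typical of No. 2 fuel oil (assumed)
LIPID_FRACTION = 0.30    # oil content of dry algae (hypothetical)
YIELD = 50.0             # g dry biomass per m^2 per day (from the abstract)

diesel_kg = 50e6 * GAL_TO_L * DIESEL_DENSITY      # annual fuel mass, kg
biomass_kg = diesel_kg / LIPID_FRACTION           # dry algae required, kg
area_m2 = biomass_kg / (YIELD / 1000.0 * 365.0)   # pond area, m^2
area_sq_miles = area_m2 / 2.59e6                  # ~ 11, as in the abstract
```

Any of the assumed factors shifts the area proportionally, which is exactly the sensitivity a mass-balance treatment is meant to expose.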
An efficient approach of attractor calculation for large-scale Boolean gene regulatory networks.
He, Qinbin; Xia, Zhile; Lin, Bin
2016-11-07
Boolean network models provide an efficient way of studying gene regulatory networks. The main dynamics of a Boolean network are determined by its attractors, so attractor calculation plays a key role in analyzing Boolean gene regulatory networks. An approach to attractor calculation that improves on the predecessor-based approach was proposed in this study. Furthermore, the proposed approach is combined with the identification of constant nodes and with network simplification to accelerate attractor calculation. The proposed algorithm can effectively calculate all attractors of large-scale Boolean gene regulatory networks. If the average degree of the network is not too large, the algorithm can find all attractors of a Boolean network with dozens or even hundreds of nodes.
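What such an algorithm computes can be shown with a deliberately naive brute-force enumerator, feasible only for small n, unlike the reduction-based method above; the two-gene network is a made-up example:

```python
from itertools import product

def attractors(update, n):
    """Enumerate all attractors of an n-node synchronous Boolean network.

    `update` maps a state tuple to the next state tuple.  Brute force
    over all 2^n states: follow each trajectory until it hits a state
    already explored (attractor known) or revisits the current path
    (new cycle found).
    """
    found, seen = [], set()
    for start in product((0, 1), repeat=n):
        path, state = [], start
        while state not in seen and state not in path:
            path.append(state)
            state = update(state)
        if state in path:                       # closed a new cycle
            found.append(path[path.index(state):])
        seen.update(path)
    return found

# Toy 2-gene network: x1' = NOT x2, x2' = x1  (one cyclic attractor)
step = lambda s: (1 - s[1], s[0])
cycles = attractors(step, 2)
```

The constant-node identification and network simplification of the abstract shrink the state space before any such enumeration is attempted, which is what makes networks with hundreds of nodes tractable.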
TWO APPROACHES TO CALCULATION OF SPLIT PHASE DANCING OF OVERHEAD ELECTRICAL TRANSMISSION LINE
Directory of Open Access Journals (Sweden)
I. I. Sergey
2005-01-01
Full Text Available The paper presents two approaches to the mathematical modeling of split-phase dancing of an overhead electrical transmission line. The first approach is based on a computational method in which the phase is modeled as flexible elastic threads connected by rigid rods. In the second approach, the phase is represented by an equivalent single wire. Relations from the principles of mechanics have been used to set up the combined boundary problem of split-phase dynamics. Two computer program packages for calculating split-phase dancing of overhead electric power lines have been developed and tested.
Textural Approach for Mass Abnormality Segmentation in Mammographic Images
Djaroudib, Khamsa; Ahmed, Abdelmalik Taleb; Zidani, Abdelmadjid
2014-01-01
Mass abnormality segmentation is a vital step in the medical diagnostic process and is attracting more and more interest from research groups. Currently, most of the work in this area has used the Gray Level Co-occurrence Matrix (GLCM) as texture features within a region-based approach. These features are computed in a phase preceding segmentation or are used as inputs to a classification stage. The work discussed in this paper experiments with the GLCM method under a con...
Piermattei, Livia; Carturan, Luca; Calligaro, Simone; Blasone, Giacomo; Guarnieri, Alberto; Tarolli, Paolo; Dalla Fontana, Giancarlo; Vettore, Antonio
2014-05-01
Digital elevation models (DEMs) of glaciated terrain are commonly used to measure changes in geometry and hence to infer the mass balance of glaciers. Different tools and methods exist to obtain information about the 3D geometry of terrain. Recent improvements in the quality and performance of digital cameras for close-range photogrammetry, and the development of automatic digital photogrammetric processing, make the 'structure from motion' (SfM) photogrammetric technique competitive for high-quality 3D model production compared to efficient but expensive and logistically demanding survey technologies such as airborne and terrestrial laser scanning (TLS). The purpose of this work is to test the SfM approach, using a consumer-grade SLR camera and the low-cost computer-vision-based software package Agisoft Photoscan (Agisoft LLC), to monitor the mass balance of the Montasio Occidentale glacier, a 0.07 km2, low-altitude, debris-covered glacier located in the Eastern Italian Alps. The quality of the 3D models produced by the SfM process was assessed by comparison with digital terrain models obtained through TLS surveys carried out on the same dates. The TLS technique has indeed proved very effective in determining the volume change of this glacier in recent years. Our results show that the photogrammetric approach can produce point cloud densities comparable to those derived from TLS measurements. Furthermore, the horizontal and vertical accuracies are of the same order of magnitude as for TLS (centimetric to decimetric). The effect of different landscape characteristics (e.g. distance from the camera or terrain gradient) and of different substrata (rock, debris, ice, snow and firn) was also evaluated in terms of SfM reconstruction accuracy vs. TLS. Given the good results obtained on the Montasio Occidentale glacier, it can be concluded that terrestrial photogrammetry, with the advantageous features of portability, ease of use and above all low costs
A New Approach for Surface Energy Calculations Applicable to High-throughput Design of New Interfaces
Ratsch, Christian; Kaminski, Jakub
In this talk we will present a new approach for the calculation of surface energies of periodic crystals. For slabs of non-polar materials, which are terminated by two identical surfaces, the task of calculating the surface energy is trivial. It is more problematic for polar systems, where the two terminating surfaces differ, as there is no single established method allowing equal treatment of a wide range of surface morphologies and orientations. Our proposed new approach addresses this problem. It relies on carefully chosen capping atoms and the assumption that their bond energy contributions can be used to approximate the total energy of the surface. The choice of the capping atoms is governed by a set of simple guidelines that are applicable for surfaces with different terminations. We present results for different semiconductor materials and show that our approach leads to surface energies with errors as low as 2%. We show that hydrogen is not always the best choice of capping atom if accurate surface energies are the target of the calculations. Our approach is suitable for high-throughput screening of new material interfaces, as accurate calculations of surface energies can be performed in an unsupervised algorithm.
Mass sensitivity calculation of the protein layer using a Love wave SAW biosensor.
Lee, Sangdae; Kim, Ki Bok; Il Kim, Yong
2012-07-01
Love waves, a variety of surface acoustic waves (SAWs), can be used to detect very small biological surface interactions and so have a wide range of potential applications. To demonstrate the practicality of a Love wave SAW biosensor, we fabricated a 155-MHz Love wave SAW biosensor and compared it with a commercial surface plasmon resonance (SPR) biosensor, using glycerol-water solutions of known densities and viscosities to calibrate the response signals of the biosensors. The mass per unit area of anti-mouse IgG bound with protein G onto the sensitive layer of the biosensor was then calculated on the basis of the calibration result. The sensitivity of the Love wave SAW biosensor was the same as or greater than that of the SPR biosensor. Furthermore, the Love wave SAW biosensor was capable of measuring a much wider range of viscosities than the SPR biosensor. Although the operating principle of the Love wave SAW biosensor is completely different from that of the SPR biosensor, the subtle changes in the viscoelastic properties of the biological layer that accompany biological binding reactions on the sensitive layer can be monitored and measured in the same ways as with the SPR biosensor.
Density functional theory approach for calculation of dielectric properties of warm dense matter
Saitov, Ilnur
2015-06-01
The reflectivity of shocked xenon was measured at a wavelength of 1064 nm in the experiments of Mintsev and Zaporoghets, but there is no adequate theoretical explanation of these reflectivity results within the standard methods of nonideal plasma theory. The assumption of a significant width of the shock front gives good agreement with the experimental data; however, there is no evidence for this effect in the experiment. Here, the reflectivity of shock-compressed xenon plasma is calculated in the framework of the density functional theory approach. Dependencies on the frequency of the incident radiation and on the plasma density are analyzed. The Fresnel formula for the reflectivity is used. The longitudinal expression in the long-wavelength limit is applied for the calculation of the imaginary part of the dielectric function, and the real part of the dielectric function is calculated by means of the Kramers-Kronig transformation. An approach for the calculation of the plasma frequency is also developed.
A new approach to calculate the transport matrix in RF cavities
Energy Technology Data Exchange (ETDEWEB)
Eidelman, Yu.; /Novosibirsk, IYF; Mokhov, N.; Nagaitsev, S.; Solyak, N.; /Fermilab
2011-03-01
A realistic approach to calculating the transport matrix in RF cavities is developed. It is based on the joint solution of the equations of longitudinal and transverse motion of a charged particle in the electromagnetic field of the linac. This field is given by a distribution (measured or calculated) of the longitudinal electric field component on the axis of the linac. The new approach is compared with other matrix methods for solving the same problem. A comparison with the code ASTRA has been carried out, and complete agreement of the tracking results for a TESLA-type cavity is achieved. The corresponding algorithm will be implemented in the MARS15 code.
Basel II Approaches for the Calculation of the Regulatory Capital for Operational Risk
Directory of Open Access Journals (Sweden)
Ivana Valová
2011-01-01
Full Text Available The final version of the New Capital Accord, which includes operational risk, was released by the Basel Committee on Banking Supervision in June 2004. The article "Basel II approaches for the calculation of the regulatory capital for operational risk" is devoted to the issue of operational risk in credit financial institutions. The paper discusses the methods of operational risk calculation and the advantages and disadvantages of the particular methods.
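For concreteness, the simplest of the Basel II methods discussed in such papers, the Basic Indicator Approach, sets the capital charge to a fixed fraction (alpha = 15%) of the average positive annual gross income over the preceding three years; loss years are excluded from both numerator and denominator. A minimal sketch with illustrative figures:

```python
def basic_indicator_capital(gross_income, alpha=0.15):
    """Basel II Basic Indicator Approach: the capital charge equals
    alpha (15%) times the average of the *positive* annual gross
    income figures over the preceding three years; years with zero
    or negative income are excluded from the average entirely."""
    positive = [gi for gi in gross_income[-3:] if gi > 0]
    if not positive:
        return 0.0
    return alpha * sum(positive) / len(positive)

# Example: three years of gross income, one of them a loss year (millions)
k = basic_indicator_capital([120.0, -30.0, 150.0])  # 0.15 * (120 + 150) / 2
```

The Standardised and Advanced Measurement Approaches mentioned in the Accord refine this by business line and by internal loss modeling, respectively.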
Green Function Approach to the Calculation of the Local Density of States in the Graphitic Nanocone
Directory of Open Access Journals (Sweden)
Smotlacha Jan
2016-01-01
Full Text Available Graphene and other nanostructures are at the center of interest of today's physics research. The local density of states of a graphitic nanocone influenced by the spin-orbit interaction was calculated. Numerical calculations and the Green function approach were used to solve this problem. In the latter case it was shown that the second-order approximation is not sufficient for this purpose.
A hybrid approach to calculate the Shielding Failure-Caused Trip-out Rate
Directory of Open Access Journals (Sweden)
Zhou Liang
2016-01-01
Full Text Available Lightning is a major threat to the safe operation of main transmission lines. Reasonable and accurate calculation of the shielding failure rate plays an important role in transmission line and tower design. This paper proposes a hybrid approach to calculating the shielding failure-caused trip-out rate, based on the typical electro-geometric model and the regulation method. A case study proves the validity and correctness of the approach by comparison with the actual operational shielding failure rate.
A New Approach to Calculate Indirect GWPs using the UIUC 2-D CRT and RTM Model
Li, Y.; Youn, D.; Patten, K.; Wuebbles, D.
2006-12-01
Global warming potentials (GWPs) are defined as the total impact over time of adding a unit mass of a greenhouse gas to the atmosphere. Indirect GWPs arise from a compound's ozone depletion effects in the stratosphere and therefore represent long-term global cooling effects. Previously, indirect GWPs were calculated using a box model, which could not capture the complex processes in the atmosphere. As a step towards obtaining indirect GWPs through a more robust approach, the UIUC 2-D CRT model was used as the computational tool to derive ozone changes. The 2-D model has more realistic chemical, physical, and dynamical processes in the atmosphere and a relatively complete transport system, which makes it useful for a more accurate analysis. Furthermore, the University of Illinois at Urbana-Champaign (UIUC) radiative transfer model (RTM) is employed to derive the corresponding time-dependent radiative forcings from the 2-D CRT outputs. Two halon compounds, Halon-1211 and Halon-1301, were selected for study of their indirect GWPs. The results showed that the instantaneous and stratospheric-adjusted indirect GWPs for a 100-year horizon are -10004.8 and -10237.1 for Halon-1211, while for Halon-1301 they are -19218.0 and -19627.6. The indirect GWPs for Halon-1211 and -1301 presented here are two to three times smaller than the results in the WMO (2006) draft. Further analysis of indirect GWPs will be carried out using our 3-D MOZART-3 model.
A fully relativistic approach for calculating atomic data for highly charged ions
Energy Technology Data Exchange (ETDEWEB)
Sampson, Douglas H. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Zhang Honglin [Applied Physics Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)], E-mail: zhang@lanl.gov; Fontes, Christopher J. [Applied Physics Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)], E-mail: cjf@lanl.gov
2009-07-15
We present a review of our fully relativistic approach to calculating atomic data for highly charged ions, highlighting a research effort that spans twenty years. Detailed discussions of both theoretical and numerical techniques are provided. Our basic approach is expected to provide accurate results for ions that range from approximately half ionized to fully stripped. Options for improving the accuracy and range of validity of this approach are also discussed. In developing numerical methods for calculating data within this framework, considerable emphasis is placed on techniques that are robust and efficient. A variety of fundamental processes are considered including: photoexcitation, electron-impact excitation, electron-impact ionization, autoionization, electron capture, photoionization and photorecombination. Resonance contributions to a variety of these processes are also considered, including discussions of autoionization, electron capture and dielectronic recombination. Ample numerical examples are provided in order to illustrate the approach and to demonstrate its usefulness in providing data for large-scale plasma modeling.
A Real-Time Temperature Data Transmission Approach for Intelligent Cooling Control of Mass Concrete
Directory of Open Access Journals (Sweden)
Peng Lin
2014-01-01
Full Text Available The primary aim of the study presented in this paper is to propose a real-time temperature data transmission approach for intelligent cooling control of mass concrete. A mathematical description of a digital temperature control model is introduced in detail. Based on pipe-mounted, electrically linked temperature sensors, together with post-processing hardware and software, a stable, real-time, highly effective temperature data transmission technique is developed and utilized within the intelligent mass concrete cooling control system. Once the user has issued the relevant command, the proposed programmable logic controller (PLC) code performs all necessary steps without further interaction: it controls the hardware, acquires and processes the measurements, and displays the data accurately. Hardening concrete is an aggregate of complex physicochemical processes, including the liberation of heat. Based on an application case study, the proposed control system prevented unwanted structural changes within the massive concrete blocks caused by these exothermic processes. In conclusion, the proposed temperature data transmission approach has proved very useful for the temperature monitoring of a high arch dam and is able to control thermal stresses in mass concrete for similar projects.
First-principle calculation of the electronic structure, DOS and effective mass TlInSe2
Ismayilova, N. A.; Orudzhev, G. S.; Jabarov, S. H.
2017-05-01
The electronic structure, density of states (DOS) and effective masses are calculated for tetragonal TlInSe2 from first principles in the framework of density functional theory (DFT). The electronic structure of TlInSe2 has been investigated with QuantumWise within the GGA. The band structure calculated with Hartwigsen-Goedecker-Hutter (HGH) pseudopotentials shows both the valence band maximum and the conduction band minimum located at the T point of the Brillouin zone. The valence band maximum at the T point and its surroundings originate mainly from 6s states of univalent Tl ions. The bottom of the conduction band is due to the contribution of 6p states of Tl and 5s states of In atoms. The calculated DOS effective masses for holes and electrons are m*(h) = 0.830 m_e and m*(e) = 0.492 m_e, respectively. The electron effective masses are fairly isotropic, while the hole effective masses show strong anisotropy. The calculated electronic structure, density of states and DOS effective masses of TlInSe2 are in good agreement with existing theoretical and experimental results.
Nitrogen isotope and mass balance approach in the Elbe Estuary
Sanders, Tina; Wankel, Scott D.; Dähnke, Kirstin
2017-04-01
The supply of bioavailable nitrogen is crucial to primary production in the world's oceans. Especially in estuaries, which act as a nutrient filter for coastal waters, microbial nitrogen turnover and removal have particular significance. Nitrification, as well as other nitrogen-based processes, changes the natural abundance of the stable isotopes, which can be used as proxies for sources and sinks as well as for process identification. The eutrophic Elbe estuary in northern Germany is loaded with fertilizer-derived nitrogen, but management efforts have started to reduce this load effectively. However, an internal nitrate source has in turn gained importance, and the estuary has changed from a sink to a source of dissolved inorganic nitrogen: nitrification is responsible for significant estuarine nutrient regeneration, especially in the Hamburg port. In our study, we aimed to quantify sources and sinks of nitrogen based on a mass and stable isotope budget of the Elbe estuary. A model was developed to reproduce internal N-cycling and the associated isotope changes. For this approach we measured dissolved inorganic nitrogen (DIN), particulate nitrogen and their stable isotopes in a case study in July 2013. We found an almost closed mass balance of nitrogen, with only small losses or gains, which we attribute to sediment resuspension. The isotope values of the different DIN components and the model approach both support a high fractionation of up to -25‰ during nitrification. However, the nitrogen balance and nitrogen stable isotopes suggest that the most important processes are the remineralization of organic matter to ammonium and its further oxidation to nitrate. Denitrification and nitrate assimilation play a subordinate role in the Elbe estuary.
2007-03-01
...by Equation 42 exist, with the most successful based upon numerical fits to quantum mechanical Monte Carlo calculations on the ground state of a ... mass with speed. The fifth term is known as the Darwin term, and is a result of zitterbewegung, or trembling motion; it is a result of the Heisenberg ... Benchmark Database. Technical Report, August 2005. NIST Standard Reference Database Number 101. 2. Adamo, Carlo and Vincenzo Barone. "Toward reliable...
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2010-08-01
Reference intervals (RI) play a key role in the clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation (parametric, transformed parametric, and quantile-based bootstrapping) were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could reach 20% or more. The transformed parametric method was found to be the best for calculating the RI of the non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For the simulated Gaussian population, parametric calculations were, as expected, the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach can thus help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included.
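The resampling comparison described above can be sketched in a few lines: draw bootstrap samples from the data, compute a reference limit by each method on every sample, and compare the spread of the resulting limits. The helper names and the toy Gaussian data are assumptions for illustration, and the study's transformed-parametric method is omitted for brevity:

```python
import random
import statistics

def parametric_ri(xs):
    """Parametric 95% reference interval assuming normality: mean +/- 1.96 SD."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return (m - 1.96 * s, m + 1.96 * s)

def quantile_ri(xs):
    """Nonparametric RI: empirical 2.5th and 97.5th percentiles (rank-based)."""
    ys = sorted(xs)
    return (ys[int(0.025 * (len(ys) - 1))], ys[int(0.975 * (len(ys) - 1))])

def resample_compare(data, n_boot=1000, seed=1):
    """Draw bootstrap samples, compute the lower RI limit by each method,
    and report the quartiles of each limit so the more stable (narrower
    interquartile range) method can be identified."""
    rng = random.Random(seed)
    lows = {"param": [], "quant": []}
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        lows["param"].append(parametric_ri(sample)[0])
        lows["quant"].append(quantile_ri(sample)[0])
    return {k: statistics.quantiles(v, n=4) for k, v in lows.items()}

# Simulated normally distributed population of 81 observations, as in the study
gen = random.Random(0)
data = [gen.gauss(100, 10) for _ in range(81)]
spread = resample_compare(data)
```

For Gaussian data like this, the parametric limits are expected to vary less across bootstrap replicates, which is the selection criterion the abstract describes.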
Generic model for calculating carbon footprint of milk using four different LCA modelling approaches
DEFF Research Database (Denmark)
Dalgaard, Randi; Schmidt, Jannick Højrup; Flysjö, Anna
2014-01-01
The aim of the study is to develop a tool, which can be used for calculation of carbon footprint (using a life cycle assessment (LCA) approach) of milk both at a farm level and at a national level. The functional unit is ‘1 kg energy corrected milk (ECM) at farm gate’ and the applied methodology ...
Desch, Steven; Lorenzo, Alejandro; Ko, Byeongkwan
2016-06-01
We present a computer code we have written for general release that calculates the interior structure and mass-radius relationships of solid exoplanets up to a few Earth masses. The basic algorithm is that of Seager et al. (2007), Zeng & Sasselov (2013) and Dorn et al. (2015): the code integrates the 1-D (spherical) equation of hydrostatic equilibrium to find pressure in shells of various depths assuming a gravitational acceleration, uses the bulk modulus of the materials as inputs to an equation of state to convert pressures into density and volume in each shell, recomputes the shell thicknesses and gravitational acceleration, and iterates the solution to convergence. Unlike most existing codes, we do not impose a particular mineralogy in each shell. Instead we adopt the approach of Dorn et al. (2015), in which we impose a stoichiometry in each shell; for rocky shells and the metal core the code calls the PerpleX code (Connolly et al. 2005) to compute the mineralogy and material properties appropriate to that shell’s stoichiometry, pressure and temperature. Unique attributes of the code are as follows. The mineralogy is complete in the Fe-Mg-Si-O system, including species like FeSi and FeO in the core. We also include FeS (VII) in the core. We have also included an approximate phase diagram for water ice to account for an icy mantle. We also include the effects of adiabatic temperature profiles and a temperature jump at the core-mantle boundary. Finally, we have created a user-friendly interface allowing the code to be downloaded and used as a teaching tool. Results of the code and a demonstration of its use will be presented at the meeting.
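The basic iteration described above can be caricatured in a few lines: integrate hydrostatic equilibrium, dP/dr = -G m(r) rho / r^2, with an equation of state converting pressure to density. This is a hedged sketch of one common variant of such solvers (shooting outward from an assumed central pressure), with a toy linearized bulk-modulus EOS and illustrative material constants standing in for the PerpleX mineralogy of the actual code:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def density(P, rho0=4000.0, K=2.0e11):
    """Toy linearized EOS: rho = rho0 * (1 + P/K), with constant bulk
    modulus K (illustrative values only, not a real mineralogy)."""
    return rho0 * (1.0 + P / K)

def integrate_structure(P_central, dr=1000.0):
    """Integrate dP/dr = -G m rho / r^2 outward from the centre until
    the pressure drops to zero; returns the total mass and radius
    implied by the chosen central pressure."""
    r, m, P = dr, 0.0, P_central
    while P > 0.0:
        rho = density(P)
        m += 4.0 * math.pi * r * r * rho * dr   # mass of this shell
        P -= G * m * rho / (r * r) * dr         # hydrostatic pressure drop
        r += dr
    return m, r

# A roughly Earth-like central pressure (~3e11 Pa) for illustration
M, R = integrate_structure(3.0e11)
```

The released code instead iterates shell thicknesses and gravity to convergence and recomputes the mineralogy per shell; the sketch only shows the structural equation at its core.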
Liu, Jinfeng; Zhang, John Z H; He, Xiao
2016-01-21
Geometry optimization and vibrational spectra (infrared and Raman spectra) calculations of proteins are carried out by a quantum chemical approach using the EE-GMFCC (electrostatically embedded generalized molecular fractionation with conjugate caps) method (J. Phys. Chem. A, 2013, 117, 7149). The first and second derivatives of the EE-GMFCC energy are derived and employed in geometry optimization and vibrational frequency calculations for several test systems, including a polypeptide ((GLY)6), an α-helix (AKA), a β-sheet (Trpzip2) and ubiquitin (76 residues with 1231 atoms). Comparison of the present results with those obtained from full system QM (quantum mechanical) calculations shows that the EE-GMFCC approach can give accurate molecular geometries, vibrational frequencies and vibrational intensities. The EE-GMFCC method is also employed to simulate the amide I vibration of proteins, which has been widely used for the analysis of peptide and protein structures, and the results are in good agreement with the experimental observations.
DEFF Research Database (Denmark)
Vegge, Tejs; Sethna, J.P.; Cheong, S.-A.;
2001-01-01
Several experiments indicate that there are atomic tunneling defects in plastically deformed metals. How this is possible has not been clear, given the large mass of the metal atoms. Using a classical molecular-dynamics calculation, we determine the structures, energy barriers, effective masses..., and quantum tunneling rates for dislocation kinks and jogs in copper screw dislocations. We find that jogs are unlikely to tunnel, but the kinks should have large quantum fluctuations. The kink motion involves hundreds of atoms each shifting a tiny amount, leading to a small effective mass and tunneling...
Directory of Open Access Journals (Sweden)
Ahmadi R.
2012-01-01
Full Text Available In this work, a new approach is described for the calculation of the relaxation time and magnetic anisotropy energy of magnetic nanoparticles. Ferrofluids containing monodispersed magnetite nanoparticles were synthesized via a hydrothermal method and then heated in a 10 kA/m external AC magnetic field at three different frequencies: 10, 50 and 100 kHz. By measuring the temperature variations during the application of the magnetic field, the total magnetic time constant, including both the Brownian and Neel relaxation times, can be calculated. By measuring the magnetic core size and the hydrodynamic size of the particles, the magnetic anisotropy can be calculated as well. The synthesized ferrofluids were characterized via TEM, XRD, VSM and PCS techniques, and the results were used for the above calculations.
Model operator approach to the Lamb shift calculations in relativistic many-electron atoms
Shabaev, V M; Yerokhin, V A
2013-01-01
A model operator approach to calculations of the QED corrections to energy levels in relativistic many-electron atomic systems is developed. The model Lamb shift operator is represented by a sum of local and nonlocal potentials which are defined using the results of ab initio calculations of the diagonal and nondiagonal matrix elements of the one-loop QED operator with H-like wave functions. The model operator can be easily included in any calculations based on the Dirac-Coulomb-Breit Hamiltonian. Efficiency of the method is demonstrated by comparison of the model QED operator results for the Lamb shifts in many-electron atoms and ions with exact QED calculations.
Orthogonal polynomial approach to calculate the two-nucleon transition operator in three dimensions
Energy Technology Data Exchange (ETDEWEB)
Topolnicki, Kacper; Golak, Jacek; Skibinski, Roman; Witala, Henryk [Jagiellonian University, M. Smoluchowski Institute of Physics, Krakow (Poland)
2016-02-15
We give a short report on the possibility of using orthogonal polynomials (OP) in calculations that involve the two-nucleon (2N) transition operator. The presented work adds another approach to the set of previously developed methods (described in Phys. Rev. C 81, 034006 (2010); Few-Body Syst. 53, 237 (2012); K. Topolnicki, PhD thesis, Jagiellonian University (2014)) and is applied to the transition operator calculated at a laboratory kinetic energy of 300 MeV. The new results for neutron-neutron and neutron-proton scattering observables converge to the results presented in Few-Body Syst. 53, 237 (2012) and to results obtained using the Arnoldi algorithm (Y. Saad, Iterative Methods for Sparse Linear Systems (SIAM, Philadelphia, PA, USA, 2003)). The numerical cost of the calculations performed using the new scheme is large, and the new method can serve only as a backup to cross-check the previously used calculation schemes.
New efficient optimal mass transport approach for single freeform surface design
Bösel, Christoph
2015-01-01
We present a new optimal mass transport approach for the design of a continuous single freeform surface for collimated beams. By applying the law of reflection/refraction and the well-known integrability condition, it is shown that the design process in a small angle approximation can be decoupled into the calculation of a raymapping by optimal mass transport methods and the subsequent construction of the freeform surface by a steady linear advection equation. It is shown that the solution of this linear advection equation can be obtained by a decomposition into two dimensional subproblems and solving these by standard integrals. The efficiency of the method is demonstrated by applying it to two challenging design examples.
Stanke, Monika; Palikot, Ewa; Adamowicz, Ludwik
2016-05-01
Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H2 and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.
Optical response of metallic and insulating VO{sub 2} calculated with the LDA approach
Energy Technology Data Exchange (ETDEWEB)
Mossanek, R J O; Abbate, M [Departamento de Fisica, Universidade Federal do Parana, Caixa Postal 19081, 81531-990 Curitiba PR (Brazil)
2007-08-29
We calculated the optical response of metallic and insulating VO{sub 2} using the local density approximation (LDA) approach. The band structure calculation was based on the full-potential linear-muffin-tin method. The imaginary part of the dielectric function {epsilon}{sub 2}({omega}) is related to the different optical transitions. The Drude tail in the calculation of the metallic phase corresponds to intraband d-d transitions. The calculation in the insulating phase is characterized by the transitions to the d{sub parallel}* band. The low-frequency features, 0.0-5.0 eV, correspond to V 3d-V 3d transitions, whereas the high-frequency structures, 5.0-12 eV, are related to O 2p-V 3d transitions. The calculation helps to explain the imaginary part of the dielectric function {epsilon}{sub 2}({omega}), as well as the electron-energy-loss and reflectance spectra. The results reproduce not only the energy position and relative intensity of the features in the spectra, but also the main changes across the metal-insulator transition and the polarization dependence. The main difference is a shift of about 0.6 eV in the calculation of the insulating phase. This discrepancy arises because the LDA calculation underestimates the value of the band gap.
A finite element approach to self-consistent field theory calculations of multiblock polymers
Ackerman, David M; Fredrickson, Glenn H; Ganapathysubramanian, Baskar
2016-01-01
Self-consistent field theory (SCFT) has proven to be a powerful tool for modeling equilibrium microstructures of soft materials, particularly for multiblock polymers. A very successful approach to numerically solving the SCFT set of equations is based on using a spectral approach. While widely successful, this approach has limitations especially in the context of current technologically relevant applications. These limitations include non-trivial approaches for modeling complex geometries, difficulties in extending to non-periodic domains, as well as non-trivial extensions for spatial adaptivity. As a viable alternative to spectral schemes, we develop a finite element formulation of the SCFT paradigm for calculating equilibrium polymer morphologies. We discuss the formulation and address implementation challenges that ensure accuracy and efficiency. We explore higher order chain contour steppers that are efficiently implemented with Richardson Extrapolation. This approach is highly scalable and suitable for s...
An alternative approach to calculate the posterior probability of GNSS integer ambiguity resolution
Yu, Xianwen; Wang, Jinling; Gao, Wang
2017-03-01
When precise positioning is carried out via GNSS carrier phases, it is important to exploit the property that every ambiguity should be an integer. Given the float solution, any integer vector with the same degree of freedom as the ambiguity vector is, with some probability, the true ambiguity vector. For both integer aperture estimation and integer equivariant estimation, it is of great significance to know these posterior probabilities. However, calculating a posterior probability involves the thorny problem that the defining equation contains an infinite sum over integer vectors. In this paper, using the float solution of the ambiguity and its variance matrix, a new approach to rapidly and accurately calculate the posterior probability is proposed. The approach consists of four steps. First, the ambiguity vector is transformed via decorrelation. Second, the range of admissible integers for every component is obtained directly via formulas, yielding a finite number of integer vectors by combination. Third, using these integer vectors, the principal value of the posterior probability and a correction factor are worked out. Finally, the posterior probability of every integer vector, together with an upper bound on its error, is obtained. The paper presents the detailed calculation process and the derivations of the formulas. Theory and numerical examples indicate that the proposed approach offers a small amount of computation, high accuracy, and strong adaptability.
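The posterior weighting over a finite candidate set described above can be sketched as follows. This is a generic illustration assuming a normally distributed float solution; it omits the paper's decorrelation and error-bound steps, and the function name and all numbers are made up:

```python
import itertools
import numpy as np

def posterior_probabilities(a_float, Q, radius=2):
    """Posterior probability of candidate integer ambiguity vectors,
    assuming a normally distributed float solution (a sketch; the
    decorrelation and error-bound steps of the paper are omitted)."""
    Qinv = np.linalg.inv(Q)
    center = np.rint(a_float).astype(int)
    # Enumerate a finite box of integer candidates around the float solution.
    offsets = itertools.product(range(-radius, radius + 1), repeat=len(a_float))
    cands, weights = [], []
    for off in offsets:
        z = center + np.array(off)
        d = a_float - z
        weights.append(np.exp(-0.5 * d @ Qinv @ d))
        cands.append(z)
    w = np.array(weights)
    return cands, w / w.sum()   # normalized posterior over the candidate set

# Hypothetical 2-component float ambiguity and covariance.
a_hat = np.array([1.1, -0.2])
Q = np.array([[0.04, 0.01], [0.01, 0.09]])
cands, probs = posterior_probabilities(a_hat, Q)
best = cands[int(np.argmax(probs))]   # most probable integer vector
```

Restricting the sum to a box around the float solution is the crude analogue of the paper's second step, which derives the component ranges rigorously.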
Calculation of the information content of retrieval procedures applied to mass spectral data bases
Marlen, G. van; Dijkstra, Auke; Klooster, H.A. van 't
1979-01-01
A procedure has been developed for estimating the information content of retrieval systems with binary-coded mass spectra, as well as mass spectra coded by other methods, from the statistical properties of a reference file. For a reference file, binary-coded with a threshold of 1% of the intensity o
Intermolecular interaction potentials of methane-argon complex calculated using LDA approaches
Institute of Scientific and Technical Information of China (English)
Bai Yu-Lin; Chen Xiang-Rong; Zhou Xiao-Lin; Yang Xiang-Dong; Wang Hai-Yan
2004-01-01
The intermolecular interaction potential of the methane-argon complex is calculated by local density approximation (LDA) approaches. The calculated potential has a minimum at an intermolecular distance of 6.75 a.u.; the corresponding well depth is 0.0163 eV, in good agreement with experimental data. We have also made a nonlinear fit of our results to the Lennard-Jones (12-6) potential function, obtaining V(R) = 143794365.332/R^12 - 3032.093/R^6 (R in a.u. and V(R) in eV).
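As a sanity check on the quoted fit (not part of the paper): a 12-6 potential V(R) = A/R^12 - B/R^6 has its analytic minimum at R_min = (2A/B)^(1/6) with well depth B^2/(4A), and plugging in the quoted coefficients roughly reproduces the stated minimum:

```python
# Consistency check of the fitted Lennard-Jones (12-6) parameters quoted above:
# V(R) = A/R**12 - B/R**6  (R in a.u., V in eV).
A, B = 143794365.332, 3032.093

R_min = (2 * A / B) ** (1 / 6)   # analytic location of the minimum, ~6.75 a.u.
depth = B**2 / (4 * A)           # analytic well depth -V(R_min), ~0.016 eV
```

The analytic depth of the fit (about 0.016 eV) is close to, though not identical with, the quoted 0.0163 eV, consistent with the fit being an approximation to the computed potential.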
A New Approach to Calculate Parameters of Induction Generator with Double Windings
Institute of Scientific and Technical Information of China (English)
LIU Chang-hong; YAO Ruo-ping
2005-01-01
Accurate prediction of the performance of an induction generator depends strongly on the parameters of its equivalent circuit. This paper presents a new way to calculate these parameters for an induction generator with double windings. The method is based on a 2D time-dependent magnetic field coupled with the electric circuit. An application example of a 12-phase self-excited induction generator (SEIG) is provided to demonstrate the effectiveness of the presented approach. Some of the calculated results agree well with experimental values.
Calculation of Complexity Costs – An Approach for Rationalizing a Product Program
DEFF Research Database (Denmark)
Hansen, Christian Lindschou; Mortensen, Niels Henrik; Hvam, Lars
2012-01-01
This paper proposes an operational method for rationalizing a product program based on the calculation of complexity costs. The method takes its starting point in the calculation of complexity costs at the product program level. This is done throughout the value chain, ranging from component...... inventories at the factory sites, all the way to the distribution of finished goods from distribution centers to the customers. The method proposes a step-wise approach including the analysis, quantification and allocation of product program complexity costs by means of identifying a number...
Benvenuto, O G; Althaus, L G; Barba, R H; Morrell, N I
2002-01-01
We present a method to calculate the masses of the components of both eclipsing and non-eclipsing binary systems, provided their apsidal motion rates are available. The method rests on the fact that the equation giving the rate of apsidal motion is a supplementary equation that allows computation of the component masses, if their radii and internal structure constants can be obtained from theoretical models; the use of this equation therefore makes the method model dependent. We apply the method to calculate the component masses of the non-eclipsing massive binary system HD 93205 (O3V+O8V), which is suspected to be a very young system. To this end, we computed a grid of evolutionary models covering the mass range of interest and, taking the mass of the primary (M_1) as the only independent variable, solved the equation of apsidal motion for M_1 as a function of the age of the system. The mass of the primary we find ranges from M_1 = 60 ± 19 Msun for ZAMS mode...
A first look at maximally twisted mass lattice QCD calculations at the physical point
Energy Technology Data Exchange (ETDEWEB)
Abdel-Rehim, A. [The Cyprus Institute, Nicosia (Cyprus). CaSToRC; Boucaud, P. [Paris XI Univ., Orsay (France). Laboratoire de Physique Theorique; Carrasco, N. [Valencia-CSIC Univ. (Spain). Dept. de Fisica Teorica; IFIC, Valencia (Spain); and others
2013-11-15
In this contribution, a first look at simulations using maximally twisted mass Wilson fermions at the physical point is presented. A lattice action including clover and twisted mass terms is presented and the Monte Carlo histories of one run with two mass-degenerate flavours at a single lattice spacing are shown. Measurements from the light and heavy-light pseudoscalar sectors are compared to previous N{sub f}=2 results and their phenomenological values. Finally, the strategy for extending simulations to N{sub f}=2+1+1 is outlined.
An algorithm for mass matrix calculation of internally constrained molecular geometries.
Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz
2008-01-28
Dynamic models for molecular systems require determination of the corresponding mass matrix. For constrained geometries, these computations are often non-trivial and need special consideration. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solutions of the different possible cases, depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.
An Alternative Approach to the Calculation and Analysis of Connectivity in the World City Network
Hennemann, Stefan
2012-01-01
Empirical research on world cities often draws on Taylor's (2001) notion of an 'interlocking network model', in which office networks of globalized service firms are assumed to shape the spatialities of urban networks. In spite of its many merits, this approach is limited because the resultant adjacency matrices are not really fit for network-analytic calculations. We therefore propose a fresh analytical approach using a primary linkage algorithm that produces a one-mode directed graph based on Taylor's two-mode city/firm network data. The procedure has the advantage of creating less dense networks when compared to the interlocking network model, while nonetheless retaining the network structure apparent in the initial dataset. We randomize the empirical network with a bootstrapping simulation approach, and compare the simulated parameters of this null-model with our empirical network parameter (i.e. betweenness centrality). We find that our approach produces results that are comparable to those of the standa...
Distributed Multipolar Expansion Approach to Calculation of Excitation Energy Transfer Couplings.
Błasiak, Bartosz; Maj, Michał; Cho, Minhaeng; Góra, Robert W
2015-07-14
We propose a new approach for estimating the electrostatic part of the excitation energy transfer (EET) coupling between electronically excited chromophores based on the transition density-derived cumulative atomic multipole moments (TrCAMM). In this approach, the transition potential of a chromophore is expressed in terms of truncated distributed multipolar expansion and analytical formulas for the TrCAMMs are derived. The accuracy and computational feasibility of the proposed approach is tested against the exact Coulombic couplings, and various multipole expansion truncation schemes are analyzed. The results of preliminary calculations show that the TrCAMM approach is capable of reproducing the exact Coulombic EET couplings accurately and efficiently and is superior to other widely used schemes: the transition charges from electrostatic potential (TrESP) and the transition density cube (TDC) method.
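For orientation, a generic sketch (not the TrCAMM implementation): the lowest-order, TrESP-like member of this family keeps only a point transition charge per atom, so the Coulombic coupling reduces to a double sum over atomic transition charges; TrCAMM generalizes this by retaining higher multipoles on each atom. All charges and coordinates below are made up:

```python
import numpy as np

def coulomb_coupling(q_A, r_A, q_B, r_B):
    """Point-charge approximation to the Coulombic EET coupling:
    V = sum_ij q_i^A q_j^B / |r_i^A - r_j^B|  (atomic units)."""
    # Pairwise distances between all atoms of chromophore A and B.
    d = np.linalg.norm(r_A[:, None, :] - r_B[None, :, :], axis=-1)
    return float(np.sum(np.outer(q_A, q_B) / d))

# Hypothetical transition charges/positions for two two-atom chromophores
# separated by 10 bohr along z.
qA = np.array([0.1, -0.1]); rA = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
qB = np.array([0.1, -0.1]); rB = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])
V = coulomb_coupling(qA, rA, qB, rB)
```

At this separation the result approaches the dipole-dipole limit mu_A mu_B / R^3, as expected for well-separated chromophores.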
Position-dependent mass quantum Hamiltonians: general approach and duality
Rego-Monteiro, M. A.; Rodrigues, Ligia M. C. S.; Curado, E. M. F.
2016-03-01
We analyze a general family of position-dependent mass (PDM) quantum Hamiltonians which are not self-adjoint and include, as particular cases, some Hamiltonians obtained in phenomenological approaches to condensed matter physics. We build a general family of self-adjoint Hamiltonians which are quantum mechanically equivalent to the non-self-adjoint proposed ones. Inspired by the probability density of the problem, we construct an ansatz for the solutions of the family of self-adjoint Hamiltonians. We use this ansatz to map the solutions of the time independent Schrödinger equations generated by the non-self-adjoint Hamiltonians into the Hilbert space of the solutions of the respective dual self-adjoint Hamiltonians. This mapping depends on both the PDM and on a function of position satisfying a condition that assures the existence of a consistent continuity equation. We identify the non-self-adjoint Hamiltonians here studied with a very general family of Hamiltonians proposed in a seminal article of Harrison (1961 Phys. Rev. 123 85) to describe varying band structures in different types of metals. Therefore, we have self-adjoint Hamiltonians that correspond to the non-self-adjoint ones found in Harrison’s article.
Mackie, Jane E; Bruce, Catherine D
2016-05-01
Accurate calculation of medication dosages can be challenging for nursing students. Specific interventions targeting the types of errors made by nursing students may improve the learning of this important skill. The objective of this study was to determine areas of challenge for students in performing medication dosage calculations in order to design interventions to improve this skill. Strengths and weaknesses in the teaching and learning of medication dosage calculations were assessed. These data were used to create online interventions, which were then measured for their impact on student ability to perform medication dosage calculations. The setting of the study is one university in Canada. The qualitative research participants were 8 nursing students from years 1-3 and 8 faculty members. Quantitative results are based on test data from the same second-year clinical course during the academic years 2012 and 2013. Students and faculty participated in one-to-one interviews; responses were recorded and coded for themes. Tests were implemented and scored, then the data were assessed to classify the types and number of errors. Students identified conceptual understanding deficits, anxiety, low self-efficacy, and numeracy skills as primary challenges in medication dosage calculations. Faculty identified long division as a particular content challenge, and a lack of online resources for students to practice calculations. Lessons and online resources designed as an intervention to target mathematical concepts and skills led to improved results and increases in overall pass rates for second-year students on medication dosage calculation tests. This study suggests that with concerted effort and a multi-modal approach to supporting nursing students, their ability to calculate dosages can be improved. The positive results in this study also point to the promise of cross-discipline collaborations between nursing and education.
Novel Approach for Calculation and Analysis of Eigenvalues and Eigenvectors in Microgrids: Preprint
Energy Technology Data Exchange (ETDEWEB)
Li, Y.; Gao, W.; Muljadi, E.; Jiang, J.
2014-02-01
This paper proposes a novel approach based on matrix perturbation theory to calculate and analyze eigenvalues and eigenvectors in a microgrid system. Rigorous theoretical analysis to solve eigenvalues and the corresponding eigenvectors for a system under various perturbations caused by fluctuations of irradiance, wind speed, or loads is presented. A computational flowchart is proposed for the unified solution of eigenvalues and eigenvectors in microgrids, and the effectiveness of the matrix perturbation-based approach in microgrids is verified by numerical examples on a typical low-voltage microgrid network.
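The core identity behind matrix-perturbation eigenanalysis can be illustrated as follows (a generic textbook sketch, not the authors' microgrid code): to first order, the eigenvalues of A + ΔA shift by w_i·ΔA·u_i, where u_i and w_i are the right and left eigenvectors of A:

```python
import numpy as np

def first_order_eigs(A, dA):
    """First-order (matrix perturbation theory) eigenvalue update for A + dA:
    lam_i' ~= lam_i + w_i . dA . u_i, with right eigenvectors u_i (columns
    of U) and left eigenvectors w_i (rows of inv(U)), so that w_i . u_j = delta_ij."""
    lam, U = np.linalg.eig(A)
    W = np.linalg.inv(U)                       # rows are the left eigenvectors
    shifts = np.array([W[i] @ dA @ U[:, i] for i in range(len(lam))])
    return lam + shifts

# Small diagonal perturbation of a 2x2 test matrix.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
dA = 1e-3 * np.array([[1.0, 0.0], [0.0, -1.0]])
approx = np.sort_complex(first_order_eigs(A, dA))
exact = np.sort_complex(np.linalg.eigvals(A + dA))
```

The appeal in the microgrid setting is that the update reuses the unperturbed eigen-decomposition instead of re-solving the full eigenproblem after every fluctuation.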
Hypervirial approach to calculating expectation values of the many-body Hamiltonian
Adam, R. M.; Fiedeldey, H.
1995-05-01
We present a new method, based on the hypervirial operator, for calculating expectation values of many-body Hamiltonians for local velocity-independent potentials. Our approach enables us to calculate the contributions of different components of an interaction [e.g., tensor, one pion exchange part (OPEP)] to the binding energy when all components are acting. In particular, using the integro-differential equation approach we investigate the contributions of different components of realistic nucleon-nucleon potentials to the triton and α particle ground-state binding energies. Although the tensor force contributes the most to the expectation value of the potential energy, we find that its overall contribution to the binding energy is much reduced by its large contribution to the expectation value of the kinetic energy.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
A new numerical approach has been developed for vapor-solid equilibrium calculations and for predicting the vapor-solid equilibrium constant and the compositions of the vapor and solid phases in gas hydrate formation. Equation-of-state methods generally do a good job of determining vapor-phase properties, but the solid phase is much more difficult to treat accurately. The proposed model calculates the vapor-solid equilibrium constant and the vapor- and solid-phase compositions as functions of temperature and partial pressure. The results of this numerical approach for vapor-solid equilibrium agree well with the available reported data. The model also allows its coefficients to be tuned to cover different sets of experimental data accurately.
Directory of Open Access Journals (Sweden)
P.Orea
2003-01-01
We have performed Monte Carlo simulations in the canonical ensemble of a hard-sphere fluid adsorbed in microporous media. The pressure of the adsorbed fluid is calculated by using an original procedure that includes the calculation of the pressure tensor components during the simulation. In order to confirm the equivalence of bulk and adsorbed fluid pressures, we have exploited the mechanical condition of equilibrium and performed additional canonical Monte Carlo simulations in a super-system "bulk fluid + adsorbed fluid". When the configuration of a model porous medium permits each of its particles to be in contact with adsorbed fluid particles, we found that these pressures are equal. Unlike the grand canonical Monte Carlo method, the proposed calculation approach can be used efficiently to obtain adsorption isotherms over a wide range of fluid densities and porosities of the adsorbent.
Analytical approach to calculation of response spectra from seismological models of ground motion
Safak, Erdal
1988-01-01
An analytical approach to calculate response spectra from seismological models of ground motion is presented. Seismological models have three major advantages over empirical models: (1) they help in an understanding of the physics of earthquake mechanisms, (2) they can be used to predict ground motions for future earthquakes and (3) they can be extrapolated to cases where there are no data available. As shown with this study, these models also present a convenient form for the calculation of response spectra, by using the methods of random vibration theory, for a given magnitude and site conditions. The first part of the paper reviews the past models for ground motion description, and introduces the available seismological models. Then, the random vibration equations for the spectral response are presented. The nonstationarity, spectral bandwidth and the correlation of the peaks are considered in the calculation of the peak response.
Energy Technology Data Exchange (ETDEWEB)
Cifka, I.
1987-01-01
Advantages and limitations of some methods used for calculating underground climatic conditions are summarized. Measurement data are evaluated concerning the underground climatic conditions of various galleries in a deep ore mine now under construction. The heat amount absorbed by the air flowing in drifts is to be calculated from the rock side by a corrected heat flow factor. The absorption of vapor has to be calculated by a factor that expresses the ratio of heat and vapor absorptions. These latter values have to be determined reliably by in situ measurements.
Energy Technology Data Exchange (ETDEWEB)
Gao, Zhongming [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Russell, Eric S. [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Missik, Justine E. C. [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Huang, Maoyi [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Strickland, Chris E. [Pacific Northwest National Laboratory, Richland Washington USA; Clayton, Ray [Pacific Northwest National Laboratory, Richland Washington USA; Arntzen, Evan [Pacific Northwest National Laboratory, Richland Washington USA; Ma, Yulong [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Liu, Heping [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA
2017-07-12
We evaluated nine methods of soil heat flux calculation using field observations. All nine methods underestimated the soil heat flux by at least 19%. This large underestimation is mainly caused by uncertainties in soil thermal properties.
Sibiriakova Olena Oleksandrivna
2015-01-01
In this research the author examines changing approaches to the observation of mass communication. By systematizing the key theoretical models of communication, the author concludes that ideas about measuring mass communication have evolved from linear models toward multisided, multiple ones.
Seiler, Christian; Evers, Ferdinand
2016-10-01
A formalism for electronic-structure calculations is presented that is based on the functional renormalization group (FRG). The traditional FRG has been formulated for systems that exhibit a translational symmetry with an associated Fermi surface, which can provide the organization principle for the renormalization group (RG) procedure. We here advance an alternative formulation, where the RG flow is organized in the energy-domain rather than in k space. This has the advantage that it can also be applied to inhomogeneous matter lacking a band structure, such as disordered metals or molecules. The energy-domain FRG (ɛ FRG) presented here accounts for Fermi-liquid corrections to quasiparticle energies and particle-hole excitations. It goes beyond the state of the art G W -BSE , because in ɛ FRG the Bethe-Salpeter equation (BSE) is solved in a self-consistent manner. An efficient implementation of the approach that has been tested against exact diagonalization calculations and calculations based on the density matrix renormalization group is presented. Similar to the conventional FRG, also the ɛ FRG is able to signalize the vicinity of an instability of the Fermi-liquid fixed point via runaway flow of the corresponding interaction vertex. Embarking upon this fact, in an application of ɛ FRG to the spinless disordered Hubbard model we calculate its phase boundary in the plane spanned by the interaction and disorder strength. Finally, an extension of the approach to finite temperatures and spin S =1 /2 is also given.
Walitt, L.
1982-01-01
The VANS successive approximation numerical method was extended to the computation of three dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic, centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was generated in blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed locally mass flux conservative cross-sectional computer code. Both blade-to-blade and cross sectional modes of calculation were implemented for this problem. A triplet point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow; and cross-sectional computation, with a locally mass flux conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.
Institute of Scientific and Technical Information of China (English)
Yuan Xiaoming; Sun Jing; Sun Rui
2006-01-01
An error analysis of the dynamic shear modulus of stiff specimens was conducted for tests performed with a new resonant column device developed by the Institute of Engineering Mechanics, China. A modified approach for calculating the dynamic shear modulus of stiff specimens is presented. The error formula of the tests was deduced and the parameters that affect the accuracy of the test were identified. Using six steel specimens with known standard stiffness as a base, a revised dynamic shear modulus calculation for stiff specimens was formulated by comparing three of the models. The maximum error between the test results and the calculated results, shown by curves from both the free-vibration and the resonant-vibration tests, is less than 6%. The free-vibration and resonant-vibration tests for three types of stiff samples with a known modulus indicate that the maximum deviation between the actual and the tested value using the modified approach was less than 10%. As a result, the modified approach presented here is shown to be reliable, and the new device can be used for testing the dynamic shear modulus of any stiff material at low shear strain levels.
Molina, Pablo A; Li, Hui; Jensen, Jan H
2003-12-01
Two divide-and-conquer (DAQ) approaches for building multipole-based molecular electrostatic potentials of proteins are presented and evaluated for use in QM/MM calculations. One approach is a further development of the neutralization method of Bellido and Rullmann (J Comput Chem 1989, 10, 479-487) while the other is based on removing part of the electron density before performing the multipole expansion. Both methods create systems with integer charges without using charge renormalization. To determine their performance in terms of location of cuts and distance to QM region, the new DAQ approaches are tested in calculations of the proton affinity of N(zeta) of Lys55 in the inhibitor turkey ovomucoid third domain. Finally, the two methods are used to build a variety of MM regions, applied to calculations of the pK(a) of Lys55, and compared to other computational methodologies in which force field charges are employed. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1971-1979, 2003
Energy Technology Data Exchange (ETDEWEB)
Laureau, A., E-mail: laureau.axel@gmail.com; Heuer, D.; Merle-Lucotte, E.; Rubiolo, P.R.; Allibert, M.; Aufiero, M.
2017-05-15
Highlights: • Neutronic ‘Transient Fission Matrix’ approach coupled to the CFD OpenFOAM code. • Fission Matrix interpolation model for fast spectrum homogeneous reactors. • Application for coupled calculations of the Molten Salt Fast Reactor. • Load following, over-cooling and reactivity insertion transient studies. • Validation of the reactor intrinsic stability for normal and accidental transients. - Abstract: In this paper we present transient studies of the Molten Salt Fast Reactor (MSFR). This generation IV reactor is characterized by a liquid fuel circulating in the core cavity, requiring specific simulation tools. An innovative neutronic approach called “Transient Fission Matrix” is used to perform spatial kinetic calculations with a reduced computational cost through a pre-calculation of the Monte Carlo spatial and temporal response of the system. Coupled to this neutronic approach, the Computational Fluid Dynamics code OpenFOAM is used to model the complex flow pattern in the core. An accurate interpolation model developed to take into account the thermal hydraulics feedback on the neutronics including reactivity and neutron flux variation is presented. Finally different transient studies of the reactor in normal and accidental operating conditions are detailed such as reactivity insertion and load following capacities. The results of these studies illustrate the excellent behavior of the MSFR during such transients.
Mclean, J. D.; Randall, J. L.
1979-01-01
A system of computer programs for calculating three dimensional transonic flow over wings, including details of the three dimensional viscous boundary layer flow, was developed. The flow is calculated in two overlapping regions: an outer potential flow region, and a boundary layer region in which the first order, three dimensional boundary layer equations are numerically solved. A consistent matching of the two solutions is achieved iteratively, thus taking into account viscous-inviscid interaction. For the inviscid outer flow calculations, the Jameson-Caughey transonic wing program FLO 27 is used, and the boundary layer calculations are performed by a finite difference boundary layer prediction program. Interface programs provide communication between the two basic flow analysis programs. Computed results are presented for the NASA F8 research wing, both with and without distributed surface suction.
Fragment approach to constrained density functional theory calculations using Daubechies wavelets
Energy Technology Data Exchange (ETDEWEB)
Ratcliff, Laura E., E-mail: lratcliff@anl.gov [Argonne Leadership Computing Facility, Argonne National Laboratory, Lemont, Illinois 60439 (United States); Université de Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Genovese, Luigi; Mohr, Stephan; Deutsch, Thierry [Université de Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France)
2015-06-21
In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.
On the Calculation of Optimum Mass Distribution of a Multi-Stage Rocket Vehicle
Directory of Open Access Journals (Sweden)
V. B. Tawakley
1967-04-01
The effect of gravity on the optimum distribution of total required mass among the various stages of a multiple-stage rocket arranged in series has been considered by minimizing the payload ratio so as to obtain a specified mission velocity at the end of powered flight. The special case in which the physical parameters of all the step rockets are equal has been discussed in detail. It has also been shown that if the mission requirement is to achieve a given all-burnt height, this cannot be met, even at the expense of more total initial mass, merely by achieving a given all-burnt velocity. Finally, it is proved that achieving a given all-burnt velocity by arranging the stages in parallel results in an increase in the total initial mass compared with the case when they are arranged in series, and the magnitude of this increase depends upon the number of stages.
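For the special case of identical stages, the classical optimum is to split the mission velocity equally among the stages. A minimal sketch of the resulting mass build-up (gravity losses, which the paper does treat, are neglected here; the exhaust velocity `ve` and structural coefficient `eps` are illustrative parameters, not values from the paper):

```python
import math

def stage_masses(payload, dv_total, ve, eps, n_stages):
    """Cumulative masses of an n-stage series rocket with identical stages.

    Each stage supplies an equal share of the mission velocity dv_total.
    ve  -- effective exhaust velocity, identical for all stages
    eps -- structural coefficient m_struct / (m_struct + m_prop)
    Returns the list of cumulative masses, payload first, full vehicle last.
    """
    r = math.exp(dv_total / (n_stages * ve))  # per-stage mass ratio
    if r * eps >= 1.0:
        raise ValueError("mission velocity unattainable with this eps")
    masses = [payload]
    m_above = payload
    for _ in range(n_stages):
        # stage (structure + propellant) mass needed to push everything above it
        m_stage = m_above * (r - 1.0) / (1.0 - r * eps)
        m_above += m_stage
        masses.append(m_above)
    return masses
```

With `eps = 0` the total initial mass reduces to the single-stage rocket-equation value `payload * exp(dv_total / ve)` regardless of the number of stages; a nonzero structural coefficient is what makes staging matter.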
Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.
Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A
2015-10-26
Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange.
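The thermodynamic-integration half of the workflow reduces, in its simplest form, to a quadrature of window-averaged ⟨∂U/∂λ⟩ values over the alchemical coupling parameter. A minimal sketch (the swarm-enhanced replica coupling of sesMD is not reproduced here; in practice the window averages come from the MD engine):

```python
def ti_free_energy(lambdas, dudl_means):
    """Trapezoidal-rule thermodynamic integration:
    dF = integral over lambda of <dU/dlambda>, from window averages."""
    dF = 0.0
    for i in range(1, len(lambdas)):
        dF += 0.5 * (dudl_means[i] + dudl_means[i - 1]) * (lambdas[i] - lambdas[i - 1])
    return dF
```

A toy check: if ⟨∂U/∂λ⟩ were exactly 3λ², the integral over [0, 1] is 1.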
VAMP: A computer program for calculating volume, area, and mass properties of aerospace vehicles
Norton, P. J.; Glatt, C. R.
1974-01-01
A computerized procedure developed for analyzing aerospace vehicles evaluates the properties of elemental surface areas with specified thickness, accumulating and combining them with arbitrarily specified mass elements to form a complete evaluation. The program can also generate picture-like images of the geometric description.
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity
Harbin Li; Steven G. McNulty
2007-01-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...
Methods and Approaches to Mass Spectroscopy Based Protein Identification
This book chapter is a review of current mass spectrometers and the role in the field of proteomics. Various instruments are discussed and their strengths and weaknesses are highlighted. In addition, the methods of protein identification using a mass spectrometer are explained as well as data vali...
Mass Society/Culture/Media: An Eclectic Approach.
Clavner, Jerry B.
Instructors of courses in mass society, culture, and communication start out facing three types of difficulties: the historical orientation of learning, the parochialism of various disciplines, and negative intellectually elitist attitudes toward mass culture/media. Added to these problems is the fact that many instructors have little or no…
Kataev, A L
2016-01-01
A summary of the available semi-analytical results for the three-loop corrections to the QCD static potential and for the $\mathcal{O}(\alpha_s^4)$ contributions to the ratio of the running and pole heavy-quark masses is presented. The procedure for determining the dependence of the four-loop contribution to the pole-running heavy-quark mass ratio on the number of quark flavours, based on application of the least-squares method, is described. The necessity of clarifying the reason for the discrepancy between the numerical uncertainties of the $\alpha_s^4$ coefficients in the mass ratio obtained by this mathematical method and by direct numerical calculations is emphasised.
Crawford, Ben; Grimmond, Sue; Kent, Christoph; Gabey, Andrew; Ward, Helen; Sun, Ting; Morrison, William
2017-04-01
Remotely sensed data from satellites have potential to enable high-resolution, automated calculation of urban surface energy balance terms and inform decisions about urban adaptations to environmental change. However, aerodynamic resistance methods to estimate sensible heat flux (QH) in cities using satellite-derived observations of surface temperature are difficult in part due to spatial and temporal variability of the thermal aerodynamic resistance term (rah). In this work, we extend an empirical function to estimate rah using observational data from several cities with a broad range of surface vegetation land cover properties. We then use this function to calculate spatially and temporally variable rah in London based on high-resolution (100 m) land cover datasets and in situ meteorological observations. In order to calculate high-resolution QH based on satellite-observed land surface temperatures, we also develop and employ novel methods to i) apply source area-weighted averaging of surface and meteorological variables across the study spatial domain, ii) calculate spatially variable, high-resolution meteorological variables (wind speed, friction velocity, and Obukhov length), iii) incorporate spatially interpolated urban air temperatures from a distributed sensor network, and iv) apply a modified Monte Carlo approach to assess uncertainties with our results, methods, and input variables. Modeled QH using the aerodynamic resistance method is then compared to in situ observations in central London from a unique network of scintillometers and eddy-covariance measurements.
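In bulk form, the aerodynamic resistance method at the core of the study is a one-line flux expression. A minimal sketch (the default air density and heat capacity are illustrative constants; the hard part, estimating the spatially and temporally variable r_ah, is exactly what the paper's empirical function addresses):

```python
def sensible_heat_flux(t_surf, t_air, r_ah, rho=1.2, cp=1005.0):
    """Bulk aerodynamic-resistance estimate of sensible heat flux (W m-2):
    QH = rho * cp * (T_surface - T_air) / r_ah.

    t_surf -- satellite-observed land surface temperature (degrees C or K)
    t_air  -- near-surface air temperature (same units as t_surf)
    r_ah   -- thermal aerodynamic resistance (s m-1)
    """
    return rho * cp * (t_surf - t_air) / r_ah
```

For example, a 5 K surface-air temperature difference with r_ah = 50 s m-1 gives a flux of about 120 W m-2 with these illustrative constants.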
Open-Ended Recursive Approach for the Calculation of Multiphoton Absorption Matrix Elements.
Friese, Daniel H; Beerepoot, Maarten T P; Ringholm, Magnus; Ruud, Kenneth
2015-03-10
We present an implementation of single residues for response functions to arbitrary order using a recursive approach. Explicit expressions in terms of density-matrix-based response theory for the single residues of the linear, quadratic, cubic, and quartic response functions are also presented. These residues correspond to one-, two-, three- and four-photon transition matrix elements. The newly developed code is used to calculate the one-, two-, three- and four-photon absorption cross sections of para-nitroaniline and para-nitroaminostilbene, making this the first treatment of four-photon absorption in the framework of response theory. We find that the calculated multiphoton absorption cross sections are not very sensitive to the size of the basis set as long as a reasonably large basis set with diffuse functions is used. The choice of exchange-correlation functional, however, significantly affects the calculated cross sections of both charge-transfer transitions and other transitions, in particular, for the larger para-nitroaminostilbene molecule. We therefore recommend the use of a range-separated exchange-correlation functional in combination with the augmented correlation-consistent double-ζ basis set aug-cc-pVDZ for the calculation of multiphoton absorption properties.
A Monte Carlo Resampling Approach for the Calculation of Hybrid Classical and Quantum Free Energies.
Cave-Ayland, Christopher; Skylaris, Chris-Kriton; Essex, Jonathan W
2017-02-14
Hybrid free energy methods allow estimation of free energy differences at the quantum mechanics (QM) level with high efficiency by performing sampling at the classical mechanics (MM) level. Various approaches to allow the calculation of QM corrections to classical free energies have been proposed. The single step free energy perturbation approach starts with a classically generated ensemble, a subset of structures of which are postprocessed to obtain QM energies for use with the Zwanzig equation. This gives an estimate of the free energy difference associated with the change from an MM to a QM Hamiltonian. Owing to the poor numerical properties of the Zwanzig equation, however, recent developments have produced alternative methods which aim to provide access to the properties of the true QM ensemble. Here we propose an approach based on the resampling of MM structural ensembles and application of a Monte Carlo acceptance test which in principle, can generate the exact QM ensemble or intermediate ensembles between the MM and QM states. We carry out a detailed comparison against the Zwanzig equation and recently proposed non-Boltzmann methods. As a test system we use a set of small molecule hydration free energies for which hybrid free energy calculations are performed at the semiempirical Density Functional Tight Binding level. Equivalent ensembles at this level of theory have also been generated allowing the reverse QM to MM perturbations to be performed along with a detailed analysis of the results. Additionally, a previously published nucleotide base pair data set simulated at the QM level using ab initio molecular dynamics is also considered. We provide a strong rationale for the use of the Monte Carlo Resampling and non-Boltzmann approaches by showing that configuration space overlaps can be estimated which provide useful diagnostic information regarding the accuracy of these hybrid approaches.
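The two ingredients compared above, the Zwanzig estimator and a Metropolis-style resampling of the MM ensemble, can be sketched as follows (the energy differences are illustrative inputs; a real application would take E_QM − E_MM from postprocessed snapshots):

```python
import math
import random

def zwanzig(delta_e, kT):
    """Zwanzig (exponential-averaging) estimate of F_QM - F_MM from
    energy differences delta_e = E_QM - E_MM over MM-sampled frames."""
    xs = [-de / kT for de in delta_e]
    m = max(xs)  # log-sum-exp shift for numerical stability
    return -kT * (m + math.log(sum(math.exp(x - m) for x in xs) / len(xs)))

def resample_toward_target(frames, delta_e, kT, n_draws, rng):
    """Monte Carlo resampling of an MM ensemble toward the QM one: a proposed
    frame is accepted against the current one with Metropolis probability
    min(1, exp(-(dE_prop - dE_curr) / kT))."""
    idx = rng.randrange(len(frames))
    draws = []
    for _ in range(n_draws):
        prop = rng.randrange(len(frames))
        # downhill moves are always accepted (also avoids exp overflow)
        if delta_e[prop] <= delta_e[idx] or \
                rng.random() < math.exp(-(delta_e[prop] - delta_e[idx]) / kT):
            idx = prop
        draws.append(frames[idx])
    return draws
```

When all frames have the same energy difference c, the Zwanzig estimate is exactly c; the poor numerical behaviour noted above arises when the exponential average is dominated by a few rare frames.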
The charged Higgs boson mass of the MSSM in the Feynman-diagrammatic approach
Energy Technology Data Exchange (ETDEWEB)
Frank, M. [Karlsruhe Univ. (Germany). Inst. fuer Theoretische Physik; Galeta, L.; Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Hahn, T.; Hollik, W. [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Muenchen (Germany); Rzehak, H. [CERN, Geneva (Switzerland); Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-06-15
The interpretation of the Higgs signal at ~126 GeV within the Minimal Supersymmetric Standard Model (MSSM) depends crucially on the predicted properties of the other Higgs states of the model, such as the mass of the charged Higgs boson, M_{H^±}. This mass is calculated in the Feynman-diagrammatic approach within the MSSM with real parameters. The result includes the complete one-loop contributions and the two-loop contributions of O(α_t α_s). The one-loop contributions lead to sizable shifts in the M_{H^±} prediction, reaching up to ~8 GeV for relatively small values of M_A. Even larger effects can occur depending on the sign and size of the μ parameter that enters the corrections affecting the relation between the bottom-quark mass and the bottom Yukawa coupling. The two-loop O(α_t α_s) terms can shift M_{H^±} by more than 2 GeV. The two-loop contributions amount to typically about 30% of the one-loop corrections for the examples that we have studied. These effects can be relevant for precision analyses of the charged MSSM Higgs boson.
Stegeby, Henrik; Karlsson, Hans O; Lindh, Roland; Froelich, Piotr
2012-01-01
The problem of proton-antiproton motion in the ${\rm H}$--${\rm \bar{H}}$ system is investigated by means of the variational method. We introduce a modified nuclear interaction through mass-scaling of the Born-Oppenheimer potential. This improved treatment of the interaction includes the nondivergent part of the otherwise divergent adiabatic correction and shows the correct threshold behavior. Using this potential we calculate the vibrational energy levels with angular momentum 0 and 1 and the corresponding nuclear wave functions, as well as the S-wave scattering length. We obtain a full set of all bound states together with a large number of discretized continuum states that might be utilized in variational four-body calculations. The results of our calculations give an indication of resonance states in the hydrogen-antihydrogen system.
A consistent approach for mixed detailed and statistical calculation of opacities in hot plasmas
Porcherot, Quentin; Gilleron, Franck; Blenski, Thomas
2011-01-01
Absorption and emission spectra of plasmas with multicharged ions contain transition arrays with a huge number of coalescent electric-dipole (E1) lines, which are well suited for treatment by the unresolved-transition-array and derivative methods. However, some transition arrays show detailed features whose description requires diagonalization of the Hamiltonian matrix. We have developed a hybrid opacity code, called SCORCG, which consistently combines statistical approaches with fine-structure calculations. Data required for the computation of detailed transition arrays (atomic configurations and atomic radial integrals) are calculated by the super-configuration code SCO (Super-Configuration Opacity), which provides an accurate description of plasma screening effects on the wave functions. Level energies as well as positions and strengths of spectral lines are computed by an adapted RCG routine of R. D. Cowan. The resulting code provides opacities for hot plasmas and can handle mid-Z elements. The code is also a po...
Guo, Yang; Li, Wei; Li, Shuhua
2014-10-02
An improved cluster-in-molecule (CIM) local correlation approach is developed to make electron correlation calculations of large systems more accurate and faster. We have proposed a refined strategy for constructing the virtual LMOs of the various clusters, which is suitable for basis sets of various types. To recover medium-range electron correlation, which is important for quantitative descriptions of large systems, we find that a larger distance threshold (ξ) is necessary for highly accurate results. Our illustrative calculations show that the present CIM-MP2 (second-order Møller-Plesset perturbation theory, MP2) or CIM-CCSD (coupled cluster singles and doubles, CCSD) scheme with a suitable ξ value is capable of recovering more than 99.8% of the correlation energy for a wide range of systems with different basis sets. Furthermore, the present CIM-MP2 scheme can provide relative energy differences as reliable as those of the conventional MP2 method for secondary structures of polypeptides.
The length of the world's glaciers - a new approach for the global calculation of center lines
DEFF Research Database (Denmark)
Machguth, Horst; Huss, M.
2014-01-01
Glacier length is an important measure of glacier geometry. Nevertheless, global glacier inventories mostly lack length data. Only recently have semi-automated approaches to measure glacier length been developed and applied regionally. Here we present a first global assessment of glacier length using an automated method that relies on glacier surface slope, distance to the glacier margins and a set of trade-off functions. The method is developed for East Greenland, evaluated for East Greenland as well as for Alaska and eventually applied to all ~200 000 glaciers around the globe. The evaluation highlights accurately calculated glacier length where digital elevation model (DEM) quality is high (East Greenland) and limited accuracy on low-quality DEMs (parts of Alaska). Measured length of very small glaciers is subject to a certain level of ambiguity. The global calculation...
A finite element approach to self-consistent field theory calculations of multiblock polymers
Ackerman, David M.; Delaney, Kris; Fredrickson, Glenn H.; Ganapathysubramanian, Baskar
2017-02-01
Self-consistent field theory (SCFT) has proven to be a powerful tool for modeling equilibrium microstructures of soft materials, particularly for multiblock polymers. A very successful approach to numerically solving the SCFT set of equations is based on using a spectral approach. While widely successful, this approach has limitations especially in the context of current technologically relevant applications. These limitations include non-trivial approaches for modeling complex geometries, difficulties in extending to non-periodic domains, as well as non-trivial extensions for spatial adaptivity. As a viable alternative to spectral schemes, we develop a finite element formulation of the SCFT paradigm for calculating equilibrium polymer morphologies. We discuss the formulation and address implementation challenges that ensure accuracy and efficiency. We explore higher order chain contour steppers that are efficiently implemented with Richardson Extrapolation. This approach is highly scalable and suitable for systems with arbitrary shapes. We show spatial and temporal convergence and illustrate scaling on up to 2048 cores. Finally, we illustrate confinement effects for selected complex geometries. This has implications for materials design for nanoscale applications where dimensions are such that equilibrium morphologies dramatically differ from the bulk phases.
Energy Technology Data Exchange (ETDEWEB)
Hofmann, H.M.; Mertelmeier, T. (Erlangen-Nuernberg Univ., Erlangen (Germany, F.R.). Inst. fuer Theoretische Physik); Mello, P.A. (Instituto Nacional de Investigaciones Nucleares, Mexico City. Lab. del Acelerador); Seligman, T.H. (Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Fisica)
1981-12-14
A comparison is presented between predictions of the entropy approach to statistical nuclear reactions, and numerical calculations performed by generating an ensemble of S-matrices in terms of K-matrices with specified statistical distributions for their parameters. The comparison is done for: (a) the 2nd, 3rd and 4th moments of S in a 4-channel case and (b) the actual distribution of the S-matrix elements in a 2-channel case. In both cases the agreement is found to be very good in the domain of strong absorption.
Comparison Latent Semantic and WordNet Approach for Semantic Similarity Calculation
Wicaksana, I Wayan Simri
2011-01-01
Information exchange among the many sources on the Internet is increasingly autonomous, dynamic and free, which leads to differing views of concepts among sources. For example, the word 'bank' means an economic institution in the economy domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run for concepts from different domains, with reference judgments provided by human experts. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
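The latent-semantic side of such a comparison ultimately rests on a vector-space similarity. A minimal bag-of-words sketch, without the SVD dimension reduction of full LSA (the WordNet path-based measure is not reproduced here):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Bag-of-words cosine similarity between two text snippets: the
    distance measure underlying latent-semantic comparisons."""
    va = Counter(doc_a.lower().split())
    vb = Counter(doc_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)  # Counter returns 0 for missing words
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

On raw term vectors, 'bank' in an economics snippet and 'bank' in an ecology snippet match regardless of sense; it is the SVD projection (or the WordNet hierarchy) that separates such domain-dependent meanings.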
Calvarial masses of infants and children. A radiological approach
Energy Technology Data Exchange (ETDEWEB)
Willatt, J.M.G. E-mail: jonwillatt@doctors.org.uk; Quaghebeur, G
2004-06-01
Children frequently present with asymptomatic head lumps that have been discovered by their parents or by their hairdressers. Other children present with painful lumps or symptoms of intra-cranial masses with calvarial involvement. Imaging plays an important role in the diagnosis of such masses and in subsequent surgical planning. We present a review of the types of lesion that may present in these ways.
The Low-Scale Approach to Neutrino Masses
Directory of Open Access Journals (Sweden)
Sofiane M. Boucenna
2014-01-01
In this short review we revisit the broad landscape of low-scale SU(3)_c ⊗ SU(2)_L ⊗ U(1)_Y models of neutrino mass generation, with a view to their phenomenological potential. This includes signatures associated with direct neutrino mass messenger production at the LHC, as well as messenger-induced lepton flavor violation processes. We also briefly comment on the presence of WIMP cold dark matter candidates.
Dynamical approach to fusion-fission process in superheavy mass region
Directory of Open Access Journals (Sweden)
Aritomo Y.
2012-10-01
In order to describe heavy-ion fusion reactions around the Coulomb barrier with an actinide target nucleus, we propose a model which combines the coupled-channels approach and a fluctuation-dissipation model for dynamical calculations. This model takes into account couplings to the collective states of the interacting nuclei in the penetration of the Coulomb barrier and the subsequent dynamical evolution of a nuclear shape from the contact configuration. In the fluctuation-dissipation model with a Langevin equation, the effect of nuclear orientation at the initial impact on the prolately deformed target nucleus is considered. Fusion-fission, quasifission and deep quasifission are separated as different Langevin trajectories on the potential energy surface. Using this model, we analyze the experimental data for the mass distribution of fission fragments (MDFF) in the reaction of 36S+238U at several incident energies around the Coulomb barrier.
Minimally invasive approaches for histological diagnosis of anterior mediastinal masses
Institute of Scientific and Technical Information of China (English)
FANG Wen-tao; XU Mei-ying; CHEN Gang; CHEN Yong; CHEN Wen-hu
2007-01-01
Background: Anterior mediastinal masses include a wide variety of diseases, from benign lesions to extremely malignant tumors. Management strategies are highly diverse and depend strongly on the histological diagnosis as well as the extent of the disease. We report a prospective study comparing the usefulness of core needle biopsy and mini-mediastinotomy under local anesthesia for histological diagnosis of anterior mediastinal masses. Methods: A total of 40 patients with masses of unknown histology located at or near the anterior mediastinum received biopsy prior to treatment. The diagnostic methods were core needle biopsy in 28 patients and biopsy through mini-mediastinotomy under local anesthesia in 15 patients (including 3 patients for whom core needle biopsy failed to yield a definite diagnosis). Results: Histological diagnosis was achieved in 18 of the 28 patients receiving core needle biopsy. Of them, all 4 patients with pleural fibromas and 9 of the 12 patients (75%) with pulmonary masses were diagnosed definitively. In the remaining 12 patients with mediastinal masses, histological diagnosis was achieved in only 5 patients (41.7%). In contrast, biopsy through a mini-mediastinotomy failed in only 3 patients. In the remaining 12 patients with huge mediastinal masses who underwent mini-mediastinotomy, a definitive histological diagnosis was reached by pathological and/or immunohistochemical study (diagnostic yield 85.7%, 12 of 14 cases of mediastinal mass; P=0.038 vs core needle biopsy). For the 9 patients with thymic epithelial tumors, the diagnostic yield was 40% (2 of 5 cases) for core needle biopsy and 83.3% (5 of 6 cases) for mini-mediastinotomy. There was no morbidity in patients receiving mini-mediastinotomy. In the 30 patients with biopsy-proven histological diagnosis, the results contributed to therapeutic decision making in 25 cases (83.3%). Conclusions: Core needle biopsy is effective in the diagnosis of pulmonary and pleural diseases. Yet its
Institute of Scientific and Technical Information of China (English)
WU Tao; WANG Xin-Bing
2011-01-01
An ion flux and its kinetic energy spectrum are obtained using a self-similar spherically symmetric fluid model of expansion of a collisionless plasma into vacuum. According to the ion flux and energy distribution, the collector optical lifetime is estimated from knowledge of the sputtering yield of conventional Mo/Si multilayer coatings for CO2 and Nd:YAG pulsed-laser-produced plasmas based on a minimum-mass tin droplet target without debris mitigation. The results show that the longer-wavelength CO2 laser-produced plasma light source is more suitable for extreme ultraviolet lithography than the Nd:YAG laser with respect to fast-ion debris-induced sputtering damage to the collector mirror.
Omori, S.; Gross, K. W.
1973-01-01
The turbulent kinetic energy equation is coupled with boundary layer equations to solve the characteristics of compressible turbulent boundary layers with mass injection and combustion. The Reynolds stress is related to the turbulent kinetic energy using the Prandtl-Wieghardt formulation. When a lean mixture of hydrogen and nitrogen is injected through a porous plate into the subsonic turbulent boundary layer of air flow and ignited by external means, the turbulent kinetic energy increases twice as much as that of noncombusting flow with the same mass injection rate of nitrogen. The magnitudes of eddy viscosity between combusting and noncombusting flows with injection, however, are almost the same due to temperature effects, while the distributions are different. The velocity profiles are significantly affected by combustion. If pure hydrogen as a transpiration coolant is injected into a rocket nozzle boundary layer flow of combustion products, the temperature drops significantly across the boundary layer due to the high heat capacity of hydrogen. At a certain distance from the wall hydrogen reacts with the combustion products, liberating an extensive amount of heat.
Energy Technology Data Exchange (ETDEWEB)
Pozdniakov, Sergey; Tsang, Chin-Fu
2004-01-02
In this paper, we consider an approach for estimating the effective hydraulic conductivity of a 3D medium with a binary distribution of local hydraulic conductivities. The medium heterogeneity is represented by a combination of matrix medium conductivity with spatially distributed sets of inclusions. Estimation of effective conductivity is based on a self-consistent approach introduced by Shvidler (1985). The tensor of effective hydraulic conductivity is calculated numerically by using a simple system of equations for the main diagonal elements. Verification of the method is done by comparison with theoretical results for special cases and numerical results of Desbarats (1987) and our own numerical modeling. The method was applied to estimating the effective hydraulic conductivity of a 2D and 3D fractured porous medium. The medium heterogeneity is represented by a combination of matrix conductivity and a spatially distributed set of highly conductive fractures. The tensor of effective hydraulic conductivity is calculated for parallel- and random-oriented sets of fractures. The obtained effective conductivity values coincide with Romm's (1966) and Snow's (1969) theories for infinite fracture length. These values are also physically acceptable for the sparsely-fractured-medium case with low fracture spatial density and finite fracture length. Verification of the effective hydraulic conductivity obtained for a fractured porous medium is done by comparison with our own numerical modeling for a 3D case and with Malkovsky and Pek's (1995) results for a 2D case.
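A scalar analogue of such a self-consistent estimate is the classical Bruggeman equation for spherical 3D inclusions. The sketch below is a simplification for illustration only, not the authors' tensor scheme after Shvidler (which also handles anisotropic, non-spherical inclusion sets such as fractures):

```python
def effective_conductivity(k_matrix, k_incl, frac_incl, tol=1e-12):
    """Self-consistent (Bruggeman-type) effective conductivity of a binary
    medium with spherical 3D inclusions: solve
        sum_i f_i * (k_i - k_eff) / (k_i + 2*k_eff) = 0
    for k_eff by bisection on [min(k), max(k)]."""
    f = (1.0 - frac_incl, frac_incl)
    k = (k_matrix, k_incl)

    def g(ke):
        return sum(fi * (ki - ke) / (ki + 2.0 * ke) for fi, ki in zip(f, k))

    lo, hi = min(k), max(k)
    if lo == hi:
        return lo
    while hi - lo > tol * max(k):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:   # g decreases monotonically in k_eff
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The effective value always lies between the two local conductivities, and with a vanishing inclusion fraction it reduces to the matrix conductivity, as any consistent homogenization must.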
Energy Technology Data Exchange (ETDEWEB)
Zhang, Hong Lin; Sampson, D.H. (Pennsylvania State Univ., University Park, PA (USA). Dept. of Astronomy)
1990-10-22
The rapid relativistic distorted-wave method of Zhang et al. for excitation, which uses the atomic-structure data of Sampson et al., has been extended to ionization. In this approach the same Dirac-Fock-Slater potential, evaluated using a single mean configuration, is used in calculating the orbitals of all electrons, bound and free. Values for the cross sections Q for ionization of various ions have been calculated, and generally good agreement is obtained with other recent relativistic calculations. When results are expressed in terms of the reduced ionization cross section Q_R, which is proportional to I²Q, they are close to the non-relativistic Coulomb-Born-exchange values of Moores et al. for hydrogenic ions except for high Z and/or high energies. This suggests that fits of the Q_R to simple functions of the impact electron energy in threshold units, with coefficients that are quite slowly varying functions of an effective Z, can probably be made. This would be convenient for plasma modeling applications. 24 refs., 2 tabs.
Zhang, Hong Lin; Sampson, Douglas H.
1990-11-01
The rapid relativistic distorted-wave method of Zhang, Sampson, and Mohanty [Phys. Rev. A 40, 616 (1989)] for excitation, which uses the atomic-structure data of Sampson et al. [Phys. Rev. A 40, 604 (1989)], has been extended to ionization. In this approach the same Dirac-Fock-Slater potential evaluated using a single mean configuration is used in calculating the orbitals of all electrons bound and free. Values for the cross sections Q for ionization of various ions have been calculated, and generally good agreement is obtained with other recent relativistic calculations. When results are expressed in terms of the reduced ionization cross section QR, which is proportional to I2Q, they are close to the nonrelativistic Coulomb-Born-exchange values of Moores, Golden, and Sampson [J. Phys. B 13, 385 (1980)] for hydrogenic ions except for high Z and/or high energies. This suggests that fits of the QR to simple functions of the impact electron energy in threshold units with coefficients that are quite slowly varying functions of an effective Z can probably be made. This would be convenient for plasma-modeling applications.
Seiler, Christian
2016-01-01
A formalism for electronic-structure calculations is presented that is based on the functional renormalization group (FRG). The traditional FRG has been formulated for systems that exhibit a translational symmetry with an associated Fermi surface, which can provide the organization principle for the renormalization group (RG) procedure. We here advance an alternative formulation, where the RG flow is organized in the energy domain rather than in k-space. This has the advantage that it can also be applied to inhomogeneous matter lacking a band structure, such as disordered metals or molecules. The energy-domain FRG (εFRG) presented here accounts for Fermi-liquid corrections to quasi-particle energies and particle-hole excitations. It goes beyond the state-of-the-art GW-BSE approach, because in εFRG the Bethe-Salpeter equation (BSE) is solved in a self-consistent manner. An efficient implementation of the approach has been tested against exact diagonalization calculations and calculations based on...
Exciton scattering approach for optical spectra calculations in branched conjugated macromolecules
Li, Hao; Wu, Chao; Malinin, Sergey V.; Tretiak, Sergei; Chernyak, Vladimir Y.
2016-12-01
The exciton scattering (ES) technique is a multiscale approach based on the concept of a particle in a box and developed for efficient calculations of excited-state electronic structure and optical spectra in low-dimensional conjugated macromolecules. Within the ES method, electronic excitations in molecular structure are attributed to standing waves representing quantum quasi-particles (excitons), which reside on the graph whose edges and nodes stand for the molecular linear segments and vertices, respectively. Exciton propagation on the linear segments is characterized by the exciton dispersion, whereas exciton scattering at the branching centers is determined by the energy-dependent scattering matrices. Using these ES energetic parameters, the excitation energies are then found by solving a set of generalized "particle in a box" problems on the graph that represents the molecule. Similarly, unique energy-dependent ES dipolar parameters permit calculations of the corresponding oscillator strengths, thus, completing optical spectra modeling. Both the energetic and dipolar parameters can be extracted from quantum-chemical computations in small molecular fragments and tabulated in the ES library for further applications. Subsequently, spectroscopic modeling for any macrostructure within a considered molecular family could be performed with negligible numerical effort. We demonstrate the ES method application to molecular families of branched conjugated phenylacetylenes and ladder poly-para-phenylenes, as well as structures with electron donor and acceptor chemical substituents. Time-dependent density functional theory (TD-DFT) is used as a reference model for electronic structure. The ES calculations accurately reproduce the optical spectra compared to the reference quantum chemistry results, and make possible to predict spectra of complex macromolecules, where conventional electronic structure calculations are unfeasible.
Feizi, H.; A. A., Rajabi; M. R., Shojaei
2012-07-01
In this work, the binding energies and wavefunctions of three-nucleon systems are obtained using the hyperspherical harmonic approach. We have used a mathematical modification method to obtain the eigenvalues and eigenfunctions of the Schrödinger equation for three-nucleon systems. Next, we have used a simple approach to obtain the difference between the binding energies of 3H and 3He, which gives the mass splitting of three-nucleon systems. We have compared our results with other works and with experimental values.
Diagnosing Prion Diseases: Mass Spectrometry-Based Approaches
Mass spectrometry is an established means of quantitating the prions present in infected hamsters. Calibration curves relating the area ratios of the selected analyte peptides and their oxidized analogs to stable isotope labeled internal standards were prepared. The limit of detection (LOD) and limi...
Mass Spectrometric Approaches to Detecting and Quantifying Prions
Until recently, the use of mass spectrometry has been limited to identifying covalent posttranslational modifications of PrPSc and PrPC. These efforts support the hypothesis that PrPC and PrPSc possess identical covalent posttranslational modifications. Technical advances in instrumentation now all...
Mass transit security issue: the Standex program approach
Charrue, P.; Ruiter, C.J. de; Kuznetsov, A.; Gorshkov, I.; Ter-Martirosyan, A.; Palucci, A.; Simonet, F.; Planchet, J.L.; Carvalho-Rodrigues, F.; Becker, W.
2011-01-01
Since 2005, the NATO Explosive Detection Working Group has dedicated its work to the issue of suicide-bomber detection, mainly in order to protect mass transit infrastructure (railway and subway stations). After an analysis of the potential requirements and problems to face in such co
Directory of Open Access Journals (Sweden)
H. M. Arafa
2014-12-01
Full Text Available In this work captopril (KPL), an antihypertensive drug, was investigated using thermal analysis (TA) measurements (TG-DTA) in comparison with electron impact (EI) mass spectral (MS) fragmentation at 70 eV. Semi-empirical molecular orbital (MO) calculations were performed, using the PM3 method, on the neutral and positively charged forms of the drug. These include molecular geometry, bond order, charge distribution, heats of formation and ionization energy. The behavior of the drug under TA decomposition reveals a moderate stability up to 160 °C, followed by complete decomposition in the range 160-240 °C. The initial decomposition is due to COOH + CH3 loss, followed by SH loss. On the other hand, the molecular ion can easily be fragmented by CO2 loss followed by SH loss. This is the best-selected pathway, comparable with the decomposition observed by TA. MO calculations are used to explain these observations.
Zinoviev, A. N.; Nordlund, K.
2017-09-01
The interatomic potential determines the nuclear stopping power in materials. Most ion irradiation simulation models are based on the universal Ziegler-Biersack-Littmark (ZBL) potential (Ziegler et al., 1983), which, however, is an average and hence may not describe the stopping of all ion-material combinations well. Here we consider pair-specific interatomic potentials determined experimentally and by density-functional theory simulations with the DMol approach (DMol software, 1997) used to choose the basis wave functions. The interatomic potentials calculated using the DMol approach demonstrate an unexpectedly good agreement with experimental data. Differences are mainly observed for heavy-atom systems, which suggests they can be improved by extending the basis set and by treating relativistic effects more accurately. Experimental data prove that the approach of determining interatomic potentials from quasielastic scattering can be successfully used for modeling collision cascades in ion-solid collisions. The data obtained clearly indicate that the use of any universal potential is limited to internuclear distances R < 7 af (af is the Firsov length).
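For context, the universal ZBL potential referenced above screens the bare Coulomb interaction with a four-exponential function of the reduced distance; a minimal sketch, with the published Ziegler-Biersack-Littmark fit coefficients:

```python
import math

def zbl_screening(x):
    """Universal ZBL screening function phi(r/a_U).

    The pair potential is V(r) = Z1*Z2*e^2/(4*pi*eps0*r) * phi(r/a_U),
    where a_U is the universal screening length. Coefficients are the
    standard four-exponential ZBL fit.
    """
    terms = [(0.18175, 3.19980), (0.50986, 0.94229),
             (0.28022, 0.40290), (0.02817, 0.20162)]
    return sum(c * math.exp(-d * x) for c, d in terms)
```

At zero separation the screening function reduces to unity (full nuclear charge), and it decays monotonically with distance, which is the behavior a pair-specific potential must reproduce or improve upon.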
Zelovich, Tamar; Kronik, Leeor; Hod, Oded
2014-08-12
We propose a new method for simulating electron dynamics in open quantum systems out of equilibrium, using a finite atomistic model. The proposed method is motivated by the intuitive and practical nature of the driven Liouville-von-Neumann equation approach of Sánchez et al. [J. Chem. Phys. 2006, 124, 214708] and Subotnik et al. [J. Chem. Phys. 2009, 130, 144105]. A key ingredient of our approach is a transformation of the Hamiltonian matrix from an atomistic to a state representation of the molecular junction. This allows us to uniquely define the bias voltage across the system while maintaining a proper thermal electronic distribution within the finite lead models. Furthermore, it allows us to investigate complex molecular junctions, including multilead configurations. A heuristic derivation of our working equation leads to explicit expressions for the damping and driving terms, which serve as appropriate electron sources and sinks that effectively "open" the finite model system. Although the method does not forbid it, in practice we find neither violation of Pauli's exclusion principles nor deviation from density matrix positivity throughout our numerical simulations of various tight-binding model systems. We believe that the new approach offers a practical and physically sound route for performing atomistic time-dependent transport calculations in realistic molecular junction models.
The mass balance calculation of hydrothermal alteration in Sarcheshmeh porphyry copper deposit
Directory of Open Access Journals (Sweden)
Mohammad Maanijou
2013-10-01
Sarcheshmeh porphyry copper deposit is located 65 km southwest of Rafsanjan in Kerman province. The Sarcheshmeh deposit belongs to the southeastern part of the Urumieh-Dokhtar magmatic assemblage (i.e., the Dehaj-Sarduyeh zone). Intrusion of the Sarcheshmeh granodiorite stock into faulted and thrusted early-Tertiary volcano-sedimentary deposits led to mineralization in the Miocene. In this research, the mass changes and element mobilities during the hydrothermal process were studied for potassic alteration relative to fresh rock from the deeper parts of the plutonic body, phyllic relative to potassic, argillic relative to phyllic, and propylitic alteration relative to the fresh andesites surrounding the deposit. In the potassic zone, enrichment in Fe2O3 and K2O is clear, owing to Fe released by biotite alteration and to the presence of K-feldspar, respectively. Copper and molybdenum enrichments result from chalcopyrite, bornite and molybdenite mineralization in this zone. Enrichment of SiO2 and depletion of CaO, MgO, Na2O and K2O in the phyllic zone result from leaching of sodium, calcium and magnesium from the aluminosilicate rocks and from alteration of K-feldspar to sericite and quartz. In the argillic zone, Al2O3, CaO, MgO, Na2O and MnO are also enriched, where the increase in Al2O3 may reflect kaolinite and illite formation. Also, enrichment in SiO2, Al2O3 and CaO in the propylitic alteration zone can be attributed to the formation of chlorite, epidote and calcite as the indicative minerals of this zone.
Energy Technology Data Exchange (ETDEWEB)
Borodkin, P.G.; Borodkin, G.I.; Khrennikov, N.N. [Scientific and Engineering Centre for Nuclear and Radiation Safety SEC NRS, Building 5, Malaya Krasnoselskaya Street, 2/8, 107140 Moscow (Russian Federation)
2011-07-01
An approach for an improved, uncertainty-accounted conservative evaluation of water-water energetic reactor (VVER) pressure-vessel (RPV) radiation loading parameters is proposed. The approach is based on a calculational-experimental procedure which takes into account the C/E ratio, depending on over- or underestimation, and the uncertainties of the measured and calculated results. An application of the elaborated approach to full-scale ex-vessel neutron dosimetry experiments on Russian VVERs, combined with neutron-transport calculations, is demonstrated in the paper. (authors)
One-loop kink mass shifts: a computational approach
Alonso-Izquierdo, Alberto
2011-01-01
In this paper we develop a procedure to compute the one-loop quantum correction to the kink masses in generic (1+1)-dimensional one-component scalar field theoretical models. The procedure uses the generalized zeta function regularization method, aided by the Gilkey-de Witt asymptotic expansion of the heat function via Mellin's transform. We find a formula for the one-loop kink mass shift that depends only on the part of the energy density with no field derivatives, evaluated by means of a symbolic software algorithm that automates the computation. The improved algorithm with respect to earlier work in this subject has been tested in the sine-Gordon and $\lambda(\phi^4)_2$ models. The quantum corrections of the sG-soliton and $\lambda(\phi^4)_2$-kink masses have been estimated with a relative error of 0.00006% and 0.00007%, respectively. Thereafter, the algorithm is applied to other models. In particular, an interesting one-parametric family of double sine-Gordon models interpolating between the ordinary sine-...
Energy Technology Data Exchange (ETDEWEB)
Diwisch, M.; Fabian, B.; Kuzminchuk, N. [Justus Liebig University Giessen (Germany); Knoebel, R.; Geissel, H.; Plass, W.R.; Scheidenberger, C.; Boutin, D.; Brandau, C.; Chen, L. [Justus Liebig University Giessen (Germany); GSI, Darmstadt (Germany); Patyk, Z. [Soltan Institute for Nuclear Studies, Warszawa (Poland); Weick, H.; Beckert, K.; Bosch, F.; Dimopoulou, C.; Dolinskii, A.; Klepper, O.; Kozhuharov, C.; Kurcewicz, J.; Litvinov, S.A.; Litvinov, Yu.A.; Mazzocco, M.; Muenzenberg, G.; Nociforo, C.; Nolden, F.; Steck, M.; Winkler, M. [GSI, Darmstadt (Germany); Cullen, I.J.; Liu, Z.; Walker, P.M. [University of Surrey, Guildford (United Kingdom); Hausmann, M.; Montes, F. [Michigan State University, East Lansing (United States); Musumarra, A. [Laboratori Nazionali del Sud, INFN Catania (Italy); Nakajima, S.; Suzuki, T.; Yamaguchi, T. [Saitama University, Saitama (Japan); Ohtsubo, T. [Niigata University, Niigata (Japan); Ozawa, A. [University of Tsukuba, Tsukuba (Japan); Sun, B. [GSI, Darmstadt (Germany); School of Physics, Peking University, Beijing (China); Winckler, N. [Max Planck Institut fuer Kernphysik, Heidelberg (Germany)
2014-07-01
Isochronous Mass Spectrometry (IMS) and Schottky Mass Spectrometry (SMS) are powerful tools for measuring the masses of rare exotic nuclei in a storage ring. While the SMS method provides very high accuracy, it does not give access to rare isotopes with lifetimes in the sub-second range, because beam cooling has to be performed for a few seconds before the measurements start. As a complementary method, IMS can be used without beam cooling to reach isotopes with lifetimes of only a few tens of microseconds. A drawback of the IMS method is that, until now, it could not achieve the high mass accuracy of the SMS method. For the evaluation of SMS data, a correlation matrix method has been successfully applied in the past. In order to improve the accuracy of the IMS measurements, the same method will now be used, which will allow data from different IMS measurements to be combined and correlated with each other. Applying this method to the analysis of previous experiments with uranium fission fragments at the FRS-ESR facility at GSI, and to future experiments, will increase the accuracy of the IMS method and may lead to new mass values with reasonable accuracies for very rare nuclei important for nuclear astrophysics, such as {sup 130}Cd, which were not accessible before.
Relevant XML Documents - Approach Based on Vectors and Weight Calculation of Terms
Directory of Open Access Journals (Sweden)
Abdeslem DENNAI
2016-10-01
Three classes of documents, based on their data, circulate on the web: unstructured documents (.doc, .html, .pdf, ...), semi-structured documents (.xml, .owl, ...) and structured documents (database tables, for example). A semi-structured document is organized around tags that are predefined or defined by its author. However, many studies classify documents by taking into account only their textual content, underestimating their structure. In this paper we propose a representation of these semi-structured web documents based on weighted vectors, allowing their content to be exploited for further processing. The weight of terms is calculated using: the normal frequency for a single document, TF-IDF (Term Frequency - Inverse Document Frequency), and the logical (Boolean) frequency for a set of documents. To assess and demonstrate the relevance of our proposed approach, we carry out several experiments on different corpora.
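The TF-IDF weighting named in the abstract can be sketched as follows; the function name and the token-list input format are illustrative choices, not from the paper:

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Weight each term of each document by TF-IDF.

    docs: list of token lists (one per document).
    Returns one {term: weight} dict per document, where
    weight = (term frequency in doc) * log(N / document frequency).
    """
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors
```

A term appearing in every document gets an IDF of log(1) = 0, so terms shared by the whole corpus carry no discriminative weight in the resulting vectors.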
Olejniczak, Malgorzata; Gomes, Andre Severo Pereira
2016-01-01
We report an implementation of the nuclear magnetic resonance (NMR) shielding ($\\sigma$), isotope-independent indirect spin-spin coupling ($K$) and the magnetizability ($\\xi$) tensors in the frozen density embedding (FDE) scheme using the four-component (4c) relativistic Dirac--Coulomb (DC) Hamiltonian and the non-collinear spin density functional theory (SDFT). The formalism takes into account the magnetic balance between the large and the small components of molecular spinors and assures the gauge-origin independence of NMR shielding and magnetizability results. This implementation has been applied to hydrogen-bonded HXH$\\cdots$OH$_2$ complexes (X = Se, Te, Po) and compared with the supermolecular calculations and with the approach based on the integration of the magnetically induced current density vector. A comparison with the approximate Zeroth-Order Regular Approximation (ZORA) Hamiltonian indicates non-negligible differences in $\\sigma$ and $K$ in the HPoH$\\cdots$OH$_2$ complex, and calls for a thourou...
Actuarial calculation for PSAK-24 purposes post-employment benefit using market-consistent approach
Effendie, Adhitya Ronnie
2015-12-01
In this paper we use a market-consistent approach to calculate the present value of the obligation of a company's post-employment benefit in accordance with PSAK-24 (the Indonesian accounting standard). We set actuarial assumptions such as the Indonesian TMI 2011 mortality tables for mortality, an accumulated salary function for wages, a disability assumption scaled to mortality, and a predefined turnover rate for termination. For the economic assumption, we use a binomial tree method with the estimated discount rate as its average movement. In accordance with PSAK-24, the Projected Unit Credit method has been adopted to determine the present value of the obligation (actuarial liability), so we use this method with a modification in its discount function.
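A bare-bones sketch of the Projected Unit Credit idea mentioned above, using illustrative scalar inputs; the paper's actual valuation uses full mortality, disability and turnover decrement tables and a binomial-tree discount rate rather than the single survival probability and flat rate assumed here:

```python
def puc_obligation(current_service, total_service, projected_benefit,
                   years_to_retirement, discount_rate, p_stay):
    """Projected Unit Credit: attribute the projected benefit pro rata
    to service rendered to date, then discount to today, weighted by
    the probability of remaining in service until retirement.

    All inputs are illustrative scalars (assumed, not from the paper).
    """
    # Benefit earned so far, attributed linearly over total service.
    accrued = projected_benefit * current_service / total_service
    # Present-value factor at a flat discount rate.
    discount = (1 + discount_rate) ** (-years_to_retirement)
    return accrued * discount * p_stay
```

The obligation is linear in the projected benefit, which is why salary-growth assumptions feed straight through to the actuarial liability.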
Calculation of illumination conditions at the lunar south pole - parallel programming approach
Figuera, R. Marco; Gläser, P.; Oberst, J.; De Rosa, D.
2014-04-01
In this paper we present a parallel programming approach to evaluating illumination conditions at the lunar south pole. Due to the small inclination (1.54°) of the lunar rotational axis with respect to the ecliptic plane and the topography of the lunar south pole, which allows long illumination periods, the study of illumination conditions is of great importance. Several tests were conducted in order to check the viability of the study and to optimize the tool used to calculate the illumination. First results using a simulated case study showed a reduction of the computation time on the order of 8-12 times using parallel programming on the Graphics Processing Unit (GPU) in comparison with sequential programming on the Central Processing Unit (CPU).
The FASTER Approach: A New Tool for Calculating Real-Time Tsunami Flood Hazards
Wilson, R. I.; Cross, A.; Johnson, L.; Miller, K.; Nicolini, T.; Whitmore, P.
2014-12-01
In the aftermath of the 2010 Chile and 2011 Japan tsunamis that struck the California coastline, emergency managers requested that the state tsunami program provide more detailed information about the flood potential of distant-source tsunamis well ahead of their arrival time. The main issue is that existing tsunami evacuation plans call for evacuation of the predetermined "worst-case" tsunami evacuation zone (typically at a 30- to 50-foot elevation) during any "Warning"-level event; the alternative is to not call an evacuation at all. A solution providing more detailed information for secondary evacuation zones has been the development of tsunami evacuation "playbooks" to plan for tsunami scenarios of various sizes and source locations. To determine a recommended level of evacuation during a distant-source tsunami, an analytical tool has been developed called the "FASTER" approach, an acronym for the factors that influence the tsunami flood hazard for a community: Forecast Amplitude, Storm, Tides, Error in forecast, and Run-up potential. Within the first couple of hours after a tsunami is generated, the National Tsunami Warning Center provides tsunami forecast amplitudes and arrival times for approximately 60 coastal locations in California. At the same time, the regional NOAA Weather Forecast Offices in the state calculate the forecasted coastal storm and tidal conditions that will influence tsunami flooding. To provide added conservatism in calculating tsunami flood potential, we include an error factor of 30% for the forecast amplitude, which is based on observed forecast errors during recent events, and a site-specific run-up factor which is calculated from the existing state tsunami modeling database. The factors are added together into a cumulative FASTER flood potential value for the first five hours of tsunami activity and used to select the appropriate tsunami phase evacuation "playbook", which is provided to each coastal community shortly after the forecast
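Read literally, the additive FASTER combination can be sketched as a plain sum of the named factors; this is our reading of the abstract (the 30% error fraction is stated there, but the exact way the operational tool combines terms, and the run-up parameterization, are assumptions):

```python
def faster_flood_potential(forecast_amp, storm_surge, tide, runup_factor,
                           error_fraction=0.30):
    """Cumulative FASTER flood potential for one site and time step:
    Forecast Amplitude + Storm + Tides + Error (30% of the forecast
    amplitude) + site-specific Run-up contribution.

    runup_factor: assumed here to scale the forecast amplitude; in
    practice it is derived from the state tsunami modeling database.
    All units in the same length unit (e.g. meters).
    """
    error = error_fraction * forecast_amp
    runup = runup_factor * forecast_amp
    return forecast_amp + storm_surge + tide + error + runup
```

Evaluated hourly over the first five hours of tsunami activity, the maximum of these values would drive the choice of evacuation playbook.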
Comparison of different approaches to the numerical calculation of the LMJ focal spot
Directory of Open Access Journals (Sweden)
Bourgeade A.
2013-11-01
The beam smoothing in the focal plane of high-power lasers is of particular importance to laser-plasma interaction studies, in order to minimize parametric plasma and hydrodynamic instabilities on the target. Here we investigate the focal spot structure in different geometrical configurations where standard paraxial hypotheses are no longer valid. We present numerical studies for a single flat-top square beam, an LMJ quadruplet, and a complete ring of quads with a large azimuth angle. Different calculations are made with a Fresnel diffraction propagation model in the paraxial approximation and with the full vector Maxwell's equations. The first model is based on a Fourier-transform near-to-far-field method. The second model first decomposes a spherical wave into plane waves via a Fourier transform and propagates them to the focal spot. These two approaches are compared with Miró [1] modeling results using the paraxial or Feit and Fleck options. The methods presented here are generic for focal spot calculations. They can be used for other complex geometric configurations and various smoothing techniques. The results will be used as boundary conditions in plasma interaction computations.
Shell model approach for nuclei with mass around 220
Kaiura, Yukiko; Yoshinaga, Naotaka; Higashiyama, Koji
2014-09-01
Ra and Th isotopes with mass around 220, belonging to a transitional region between spherical and deformed nuclei, have long attracted interest. In particular, since a large number of negative-parity states are observed among the low-lying states, collective octupole correlations are expected to be important. In this talk we report the nuclear structure of Po, Rn, Ra and Th isotopes in terms of the pair-truncated shell model, whose basic ingredients derive from nuclear collective models. 208Pb is taken as the doubly-magic core. The conventional pairing-plus-quadrupole interaction is employed. Energy levels and electric transitions are compared between theory and experiment.
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
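The exactness claim above rests on building a provably complete candidate set (via the k-best single-objective solutions) and then applying a non-dominance filter. The filter itself is standard and can be sketched as follows; the enumeration of candidates is problem-specific and omitted here:

```python
def pareto_front(solutions):
    """Exact Pareto front (minimization) of a finite candidate set.

    solutions: list of objective-value tuples. A point is kept iff no
    other point is at least as good in every objective and strictly
    better in at least one (i.e., no point dominates it).
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```

Applied to a complete candidate set, this returns the real Pareto front rather than an approximation, which is the key advantage the paper claims over population-based search.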
Energy Technology Data Exchange (ETDEWEB)
Graf, Peter; Damiani, Rick R.; Dykes, Katherine; Jonkman, Jason M.
2017-01-09
A new adaptive stratified importance sampling (ASIS) method is proposed as an alternative approach for the calculation of the 50-year extreme load under operational conditions, as in design load case 1.1 of the International Electrotechnical Commission design standard. ASIS combines elements of the binning and extrapolation technique currently described by the standard and of the importance sampling (IS) method to estimate load probabilities of exceedance (POEs). Whereas a Monte Carlo (MC) approach would reach the sought level of POE only with a daunting number of simulations, IS-based techniques are promising because they target the sampling of the input parameters on the parts of the distributions that are most responsible for the extreme loads, thus reducing the number of runs required. We compared the various methods on select load channels output from FAST, an aero-hydro-servo-elastic tool for the design and analysis of wind turbines developed by the National Renewable Energy Laboratory (NREL). Our newly devised method, although still in its infancy in terms of tuning of the subparameters, is comparable to the others in terms of load estimation and its variance versus computational cost, and offers great promise going forward due to the incorporation of adaptivity into the already powerful importance sampling concept.
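The variance-reduction idea behind IS-based POE estimation can be illustrated on a toy Gaussian tail; this is generic importance sampling with a shifted proposal, not NREL's ASIS algorithm:

```python
import math
import random

def tail_poe_importance_sampling(threshold, n=100_000, shift=3.0, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from the
    shifted proposal N(shift, 1) and reweighting each exceedance by the
    likelihood ratio phi(x) / phi(x - shift). Concentrating samples in
    the tail slashes the variance relative to plain Monte Carlo, which
    would see almost no exceedances at rare-event levels."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-x * x / 2 + (x - shift) ** 2 / 2)
    return total / n
```

With the proposal centered on the threshold, roughly half the samples land in the region of interest, so even a modest run pins down a probability near 1.3e-3 that plain MC would need millions of samples to resolve.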
New approach on calculating multiview 3D crosstalk for autostereoscopic displays
Jung, Sung-Min; Lee, Kyeong-Jin; Kang, Ji-Na; Lee, Seung-Chul; Lim, Kyoung-Moon
2012-03-01
In this study, we suggest a new concept of 3D crosstalk for auto-stereoscopic displays and obtain 3D crosstalk values of several multi-view systems based on the suggested definition. First, we measure the angular dependence of the luminance for auto-stereoscopic displays under various test patterns corresponding to each view of a multi-view system, and then calculate the 3D crosstalk based on our new definition with respect to the measured luminance profiles. Our new approach gives a single 3D crosstalk value for a single device without any ambiguity, and yields values of a similar order to those of conventional stereoscopic displays. These results are compared with the conventional 3D crosstalk values of selected auto-stereoscopic displays, such as 4-view and 9-view systems. From the results, we believe that this new approach is very useful for controlling 3D crosstalk in 3D display manufacturing and for benchmarking 3D performance among various auto-stereoscopic displays.
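One common way to collapse per-view luminance measurements into a single crosstalk figure is a leakage-to-signal ratio averaged over viewing zones; the sketch below uses that illustrative definition, not necessarily the paper's exact formula:

```python
def multiview_crosstalk(luminance):
    """Single crosstalk figure for an N-view display.

    luminance[i][j]: luminance measured in viewing zone i when only
    view j is driven white (all others black). Crosstalk per zone is
    (leaked luminance from all other views) / (intended luminance),
    and the device value is the mean over zones.
    """
    n = len(luminance)
    per_zone = []
    for i in range(n):
        intended = luminance[i][i]
        leaked = sum(luminance[i][j] for j in range(n) if j != i)
        per_zone.append(leaked / intended)
    return sum(per_zone) / n
```

Averaging over zones is what yields one unambiguous number per device, which is the property the abstract emphasizes for benchmarking different multi-view systems.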
Vertical and lateral flight optimization algorithm and missed approach cost calculation
Murrieta Mendoza, Alejandro
A new method of calculating the fuel burned during a missed approach, and the associated emissions, is developed and explained. This calculation was performed using an emissions database and a Visual Basic for Applications code in Excel.
Numerical probabilistic analysis for slope stability in fractured rock masses using DFN-DEM approach
Directory of Open Access Journals (Sweden)
Alireza Baghbanan
2017-06-01
Due to the existence of uncertainties in the input geometrical properties of fractures, there is no unique solution for assessing the stability of slopes in jointed rock masses. Therefore, applying probabilistic analysis in these cases is inevitable. In this study, a probabilistic analysis procedure together with the relevant algorithms is developed using the Discrete Fracture Network-Distinct Element Method (DFN-DEM) approach. In the right abutment of the Karun 4 dam and downstream of the dam body, five joint sets and one major joint have been identified. According to the geometrical properties of fractures in the Karun river valley, instability is probable in this abutment. In order to evaluate the stability of the rock slope, different combinations of joint set geometrical parameters are selected, and a series of numerical DEM simulations are performed on generated and validated DFN models in the DFN-DEM approach to determine the minimum required support patterns in dry and saturated conditions. Results indicate that the distribution of required bolt length is well fitted by a lognormal distribution in both circumstances. In dry conditions, the calculated mean value is 1125.3 m, and more than 80 percent of the models need only 1614.99 m of bolts, which corresponds to a bolt pattern with 2 m spacing and 12 m length. For slopes in the saturated condition, however, the calculated mean value is 1821.8 m, and more than 80 percent of the models need only 2653.49 m of bolts, equivalent to a bolt pattern with 15 m length and 1.5 m spacing. Comparison of the results obtained with the numerical and empirical methods shows that investigating slope stability with different DFN realizations, conducted with different block patterns, is more efficient than the empirical methods.
Institute of Scientific and Technical Information of China (English)
Hanjie Guo; Weijie Zhao; Xuemin Yang
2007-01-01
Calculation models of mass action concentrations for the electrolyte aqueous solutions NaBr-H2O, LiNO3-H2O, HNO3-H2O and KF-H2O have been developed at 298.15 K, for molalities ranging from 0.1 mol/kg to saturation, according to the ion and molecule coexistence theory as well as the mass action law. The calculated mass action concentration is based on the pure species as the standard state and the mole fraction as the concentration unit, whereas the reported activities are usually based on infinite dilution as the standard state and molality as the concentration unit. Hence, the calculated mass action concentration must be transformed to the same standard state and concentration unit. The transformation coefficients between the calculated mass action concentrations and the reported activities of the same component fluctuate in a very narrow range. Thus, the transformed mass action concentrations not only agree well with the reported activities, but also strictly obey the mass action law. The calculated results show that the newly developed models can embody the intrinsic structure of the four investigated electrolyte aqueous solutions. The results also indicate that the mass action law is widely applicable to binary electrolyte aqueous solutions.
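The standard-state/unit transformation above hinges on converting between molality and mole fraction. For a single non-dissociating solute the conversion is elementary (illustrative only; the paper's models treat dissociated ionic and molecular species explicitly):

```python
def molality_to_mole_fraction(m, molar_mass_water=18.015):
    """Convert solute molality m (mol per kg of water) to the solute
    mole fraction, the unit used by the calculated mass action
    concentrations. Assumes one non-dissociating solute species.
    """
    n_water = 1000.0 / molar_mass_water  # moles of water per kg of water
    return m / (m + n_water)
```

Because 1 kg of water is about 55.5 mol, dilute molalities map to small mole fractions, which is why the two activity conventions diverge and a transformation coefficient is needed for a fair comparison.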
Modelling aeolian sand transport using a dynamic mass balancing approach
Mayaud, Jerome R.; Bailey, Richard M.; Wiggs, Giles F. S.; Weaver, Corinne M.
2017-03-01
Knowledge of the changing rate of sediment flux in space and time is essential for quantifying surface erosion and deposition in desert landscapes. Whilst many aeolian studies have relied on time-averaged parameters such as wind velocity (U) and wind shear velocity (u*) to determine sediment flux, there is increasing field evidence that high-frequency turbulence is an important driving force behind the entrainment and transport of sand. At this scale of analysis, inertia in the saltation system causes changes in sediment transport to lag behind de/accelerations in flow. However, saltation inertia has yet to be incorporated into a functional sand transport model that can be used for predictive purposes. In this study, we present a new transport model that dynamically balances the sand mass being transported in the wind flow. The 'dynamic mass balance' (DMB) model we present accounts for high-frequency variations in the horizontal (u) component of wind flow, as saltation is most strongly associated with the positive u component of the wind. The performance of the DMB model is tested by fitting it to two field-derived datasets of wind velocity and sediment transport from Namibia's Skeleton Coast: (i) a 10-min dataset (10 Hz measurement resolution); (ii) a 2-h dataset (1 Hz measurement resolution). The DMB model is shown to outperform two existing models that rely on time-averaged wind velocity data (e.g. Radok, 1977; Dong et al., 2003) when predicting sand transport over the two experiments. For all measurement averaging intervals presented in this study (10 Hz-10 min), the DMB model predicted total saltation count to within at least 0.48%, whereas the Radok and Dong models over- or underestimated the total count by up to 5.50% and 20.53%, respectively. The DMB model also produced more realistic (less 'peaky') time series of sand flux than the other two models, and a more accurate distribution of sand flux data. The best predictions of total sand transport are achieved using
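The saltation inertia the authors highlight can be caricatured as first-order relaxation of the transported mass toward an equilibrium flux set by the instantaneous wind; this toy relaxation form, the response time, and the cubic excess-velocity law are our illustrative assumptions, not the published DMB equations:

```python
def dmb_flux(u_series, dt, t_response, u_threshold, k):
    """Mass-balance style saltation sketch: the transported mass q
    relaxes toward an equilibrium flux q_eq(u) with a finite response
    time, so transport lags behind wind de/accelerations.

    u_series: horizontal wind speeds sampled every dt seconds.
    t_response: saltation response time (s); k, u_threshold: assumed
    parameters of a cubic excess-velocity equilibrium law.
    """
    q = 0.0
    out = []
    for u in u_series:
        q_eq = k * max(u - u_threshold, 0.0) ** 3  # equilibrium flux
        q += dt * (q_eq - q) / t_response          # forward-Euler step
        out.append(q)
    return out
```

For a step increase in wind, the modeled flux rises smoothly toward its equilibrium value instead of jumping, which is exactly the lag behavior that time-averaged models cannot reproduce.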
Deference, Denial, and Beyond: A Repertoire Approach to Mass Media and Schooling
Rymes, Betsy
2011-01-01
In this article, the author outlines two general research approaches, within the education world, to these mass-mediated formations: "Deference" and "Denial." Researchers who recognize the social practices that give local meaning to mass media formations and ways of speaking do not attempt to recontextualize youth media in their own social…
2010-11-01
An Approach to Mass Customization of Military Uniforms Using Superoleophobic Nonwoven Fabrics (POSTPRINT; report AFRL-RX-TY-TP-2010-0051; contract FA8650-07-1-5916)
Dnyanada...
Hydroentangled nonwovens and nylon-cotton blended woven fabrics were modified, and made superhydrophobic and superoleophobic to protect soldiers against the
Kartashov, D A; Shurshakov, V A
2012-01-01
The article presents a new procedure for calculating the shielding functions of irregular objects formed from a set of nonintersecting (adjacent) triangles completely covering the surface of each object. Calculated and experimentally derived distributions of space ionizing radiation doses in the spherical tissue-equivalent phantom (experiment MATRYOSHKA-R) inside the International Space Station were in good agreement over the bulk of phantom depths, within the measurement error (~10%). The procedure can be applied in modeling radiation loads on cosmonauts, in calculating the effectiveness of secondary protection in spacecraft, and in design reviews of radiation protection for future space exploration missions.
Ormand, W. E.; Brown, B. A.; Hjorth-Jensen, M.
2017-08-01
We present calculations for the c coefficients of the isobaric mass multiplet equation for nuclei from A =42 to A =54 based on input from three realistic nucleon-nucleon interactions. We demonstrate that there is a clear dependence on the short-range charge-symmetry-breaking (CSB) part of the strong interaction and that there is significant disagreement in the CSB part between the commonly used CD-Bonn, chiral effective field theory at next-to-next-to-next-to-leading-order, and Argonne V18 nucleon-nucleon interactions. In addition, we show that all three interactions give a CSB contribution to the c coefficient that is too large when compared to experiment.
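For an isospin triplet, the c coefficient follows directly from the curvature of the isobaric multiplet mass equation $M(T_z) = a + bT_z + cT_z^2$ evaluated at the three members; a minimal sketch:

```python
def imme_c_coefficient(m_minus, m_zero, m_plus):
    """c coefficient of the isobaric multiplet mass equation
    M(Tz) = a + b*Tz + c*Tz**2, from the masses of the three members
    of an isospin triplet at Tz = -1, 0, +1:

        c = (M(+1) + M(-1) - 2*M(0)) / 2

    i.e. half the discrete second difference (the curvature of the
    mass parabola across the multiplet).
    """
    return (m_plus + m_minus - 2.0 * m_zero) / 2.0
```

Fitting masses on an exact parabola recovers the quadratic coefficient, which is the quantity the paper compares against the charge-symmetry-breaking predictions of the three nucleon-nucleon interactions.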
Multi-reference approach to the calculation of photoelectron spectra including spin-orbit coupling
Grell, Gilbert; Winter, Bernd; Seidel, Robert; Aziz, Emad F; Aziz, Saadullah G; Kühn, Oliver
2015-01-01
X-ray photoelectron spectra provide a wealth of information on the electronic structure. The extraction of molecular details requires adequate theoretical methods, which in the case of transition metal complexes have to account for effects due to the multi-configurational and spin-mixed nature of the many-electron wave function. Here, the restricted active space self-consistent field method including spin-orbit coupling is used to cope with this challenge and to calculate valence and core photoelectron spectra. The intensities are estimated within the frameworks of the Dyson orbital formalism and the sudden approximation. Thereby, we utilize an efficient computational algorithm that is based on a biorthonormal basis transformation. The approach is applied to the valence photoionization of the gas phase water molecule and to the core ionization spectrum of the [Fe(H2O)6]2+ complex. The results show good agreement with the experimental data obtained in this work, whereas the sudden approximation demonstrates distinct deviations from experiments.
Ng, C. N.; Chu, T. P.; Wu, Huasheng; Tong, S. Y.; Huang, Hong
1997-03-01
We compare multiple scattering results of angle-resolved photoelectron diffraction spectra between the exact slab method and the separable propagator perturbation method. In the slab method [C.H. Li, A.R. Lubinsky and S.Y. Tong, Phys. Rev. B 17, 3128 (1978)], the source wave and multiple scattering within the strong-scattering atomic layers are expanded in spherical waves, while interlayer scattering is expressed in plane waves. The transformation between spherical waves and plane waves is done exactly. The plane waves are then matched across the solid-vacuum interface to a single outgoing plane wave in the detector's direction. The separable propagator perturbation approach uses two approximations: (i) a separable representation of the Green's function propagator and (ii) a perturbation expansion of multiple scattering terms. Results for c(2x2) S-Ni(001) show that this approximate method fails to converge, due to the very slow convergence of the separable representation for scattering angles less than 90°. However, this method is accurate in the backscattering regime and may be applied to XAFS calculations [J.J. Rehr and R.C. Albers, Phys. Rev. B 41, 8139 (1990)]. The use of this method for angle-resolved photoelectron diffraction spectra is substantially less reliable.
A first approach to calculate BIOCLIM variables and climate zones for Antarctica
Wagner, Monika; Trutschnig, Wolfgang; Bathke, Arne C.; Ruprecht, Ulrike
2017-02-01
For testing the hypothesis that macroclimatological factors determine the occurrence, biodiversity, and species specificity of both symbiotic partners of Antarctic lecideoid lichens, we present a first approach for the computation of the full set of 19 BIOCLIM variables, as available at http://www.worldclim.org/ for all regions of the world with the exception of Antarctica. Annual mean temperature (Bio 1) and annual precipitation (Bio 12) were chosen to define climate zones of the Antarctic continent and adjacent islands as required for ecological niche modeling (ENM). The zones are based on data for the years 2009-2015, obtained from the Antarctic Mesoscale Prediction System (AMPS) database of the Ohio State University. For both temperature and precipitation, two separate zonings were specified; temperature values were divided into 12 zones (named 1 to 12) and precipitation values into five (named A to E). By combining these two partitions, we defined climate zonings in which each geographical point can be uniquely assigned to exactly one zone, which allows an immediate explicit interpretation. The soundness of the newly calculated climate zones was tested by comparison with already published data, which used only three zones defined on climate information from the literature. The newly defined climate zones result in a more precise assignment of species distribution to the single habitats. This study provides the basis for a more detailed continent-wide ENM using a comprehensive dataset of lichen specimens located within 21 different climate regions.
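The combined zoning described above (12 temperature bins × 5 precipitation bins, each point in exactly one zone) can be sketched as a simple lookup; the breakpoint values here are illustrative placeholders, not the study's actual bins:

```python
import bisect

# Placeholder bin edges, NOT the study's actual breakpoints:
# 11 temperature edges -> 12 zones (1..12); 4 precipitation edges -> 5 zones (A..E).
TEMP_EDGES = [-40 + 5 * i for i in range(11)]   # deg C
PREC_EDGES = [100, 200, 400, 800]               # mm per year

def climate_zone(t_mean_c: float, p_annual_mm: float) -> str:
    """Assign a point to exactly one combined zone, e.g. '7C'."""
    t_zone = bisect.bisect_right(TEMP_EDGES, t_mean_c) + 1
    p_zone = "ABCDE"[bisect.bisect_right(PREC_EDGES, p_annual_mm)]
    return f"{t_zone}{p_zone}"
```

Because each coordinate falls into exactly one bin of each partition, every geographical point maps to a unique zone label, which is what makes the zoning immediately interpretable.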
A computational approach to calculate the heat of transport of aqueous solutions
Di Lecce, Silvia; Albrecht, Tim; Bresme, Fernando
2017-01-01
Thermal gradients induce concentration gradients in alkali halide solutions, and the salt migrates towards hot or cold regions depending on the average temperature of the solution. This effect has been interpreted using the heat of transport, which provides a route to rationalize thermophoretic phenomena. Early theories provide estimates of the heat of transport at infinite dilution. These values are used to interpret thermodiffusion (Soret) and thermoelectric (Seebeck) effects. However, accessing heats of transport of individual ions at finite concentration remains an outstanding question both theoretically and experimentally. Here we discuss a computational approach to calculate heats of transport of aqueous solutions at finite concentrations, and apply our method to study lithium chloride solutions at concentrations >0.5 M. The heats of transport are significantly different for Li+ and Cl− ions, unlike what is expected at infinite dilution. We find theoretical evidence for the existence of minima in the Soret coefficient of LiCl, where the magnitude of the heat of transport is maximized. The Seebeck coefficient obtained from the ionic heats of transport varies significantly with temperature and concentration. We identify thermodynamic conditions leading to a maximization of the thermoelectric response of aqueous solutions.
An efficient approach to calculating Wannier states and extension to inhomogeneous systems
Energy Technology Data Exchange (ETDEWEB)
Bissbort, Ulf; Hofstetter, Walter [ITP, Goethe-Universitaet Frankfurt (Germany)
2013-07-01
Wannier states are a fundamental and central constituent in the construction of many-body models, as they are restricted to the single-particle Hilbert subspace of the respective band while minimizing the spatial spread. Although simple in their initial definition as discrete Fourier transforms of the Bloch states, their actual computation amounts to a non-trivial high-dimensional minimization problem of the spatial variance with respect to the complex phases of the single-particle Bloch states. Various involved techniques have been devised to treat this minimization problem efficiently, which quickly becomes numerically demanding for all but the simplest lattice geometries. We present an alternative approach, which allows for an efficient numerical calculation of the maximally localized Wannier states and entirely circumvents the pitfalls associated with the minimization technique, such as getting stuck in local minima. The computational effort scales favorably with increasing dimension and lattice complexity in comparison to the minimization technique. Furthermore, it allows for the first clear and unambiguous definition of Wannier states in inhomogeneous systems.
New approach to calculate the true-coincidence effect of HpGe detector
Energy Technology Data Exchange (ETDEWEB)
Alnour, I. A., E-mail: aaibrahim3@live.utm.my, E-mail: ibrahim.elnour@yahoo.com [Department of Physics, Faculty of Pure and Applied Science, International University of Africa, 12223 Khartoum (Sudan); Wagiran, H. [Department of Physics, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Skudai,Johor (Malaysia); Ibrahim, N. [Faculty of Defence Science and Technology, National Defence University of Malaysia, Kem Sungai Besi, 57000 Kuala Lumpur (Malaysia); Hamzah, S.; Elias, M. S. [Malaysia Nuclear Agency (MNA), Bangi, 43000 Kajang, Selangor D.E. (Malaysia); Siong, W. B. [Chemistry Department, Faculty of Resource Science & Technology, Universiti Malaysia Sarawak, 94300 Kota Samarahan, Sarawak (Malaysia)
2016-01-22
The corrections for true-coincidence effects in HpGe detectors are important, especially at small source-to-detector distances. This work established an approach to determine the true-coincidence effects experimentally for HpGe detectors of type Canberra GC3018 and Ortec GEM25-76-XLB-C, which are in operation at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (MNA). The correction for true-coincidence effects was performed close to the detector, at distances of 2 and 5 cm, using ⁵⁷Co, ⁶⁰Co, ¹³³Ba and ¹³⁷Cs as standard point sources. The correction factors ranged between 0.93-1.10 at 2 cm and 0.97-1.00 at 5 cm for the Canberra HpGe detector, whereas for the Ortec HpGe detector they ranged between 0.92-1.13 and 0.95-1.00 at 2 and 5 cm, respectively. The change in the efficiency calibration curve of the detector at 2 and 5 cm after correction was found to be less than 1%. Moreover, polynomial parameter functions were fitted with a MATLAB program in order to find an accurate fit to the experimental data points.
Biologically optimized helium ion plans: calculation approach and its in vitro validation
Mairani, A.; Dokic, I.; Magro, G.; Tessonnier, T.; Kamp, F.; Carlson, D. J.; Ciocca, M.; Cerutti, F.; Sala, P. R.; Ferrari, A.; Böhlen, T. T.; Jäkel, O.; Parodi, K.; Debus, J.; Abdollahi, A.; Haberer, T.
2016-06-01
Treatment planning studies on the biological effect of raster-scanned helium ion beams should be performed, together with their experimental verification, before their clinical application at the Heidelberg Ion Beam Therapy Center (HIT). For this purpose, we introduce a novel calculation approach based on integrating data-driven biological models in our Monte Carlo treatment planning (MCTP) tool. Dealing with a mixed radiation field, the biological effect of the primary 4He ion beams, of the secondary 3He and 4He (Z = 2) fragments and of the produced protons, deuterons and tritons (Z = 1) has to be taken into account. A spread-out Bragg peak (SOBP) in water, representative of a clinically relevant scenario, has been biologically optimized with the MCTP and then delivered at HIT. Predictions of cell survival and RBE for a tumor cell line, characterized by (α/β)_ph = 5.4 Gy, have been successfully compared against measured clonogenic survival data. The mean absolute survival variation (μ_ΔS) between model predictions and experimental data was 5.3% ± 0.9%. A sensitivity study, i.e. quantifying the variation of the estimations for the studied plan as a function of the applied phenomenological modelling approach, has been performed. The feasibility of a simpler biological modelling based on dose-averaged LET (linear energy transfer) has been tested. Moreover, comparisons with biophysical models such as the local effect model (LEM) and the repair-misrepair-fixation (RMF) model were performed. μ_ΔS values for the LEM and the RMF model were, respectively, 4.5% ± 0.8% and 5.8% ± 1.1%. The satisfactory agreement found in this work for the studied SOBP, representative of a clinically relevant scenario, suggests that the introduced approach could be applied for an accurate estimation of the biological effect for helium ion radiotherapy.
Shykoff, Barbara E.; Swanson, Harvey T.
1987-01-01
A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
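The two-sided deconvolution filter idea can be sketched as follows: shifted copies of the measured step response form the columns of a regression matrix, and least squares finds the filter taps that map the response back onto an ideal step. All signal parameters below (time constant, record length, filter half-width) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Simulated first-order step response standing in for measured mass
# spectrometer data (time constant and record length are illustrative).
n = np.arange(64)
tau = 4.0
step = np.ones(64)
response = 1.0 - np.exp(-n / tau)

# Two-sided FIR deconvolution filter h (lags -L..L), found by least
# squares so that filtering the response reproduces the ideal step.
L = 8
cols = []
for k in range(-L, L + 1):
    shifted = np.roll(response, k)
    if k > 0:
        shifted[:k] = 0.0              # before the step: signal is zero
    elif k < 0:
        shifted[k:] = response[-1]     # past the record: hold settled value
    cols.append(shifted)
A = np.column_stack(cols)              # each column = response shifted by k
h, *_ = np.linalg.lstsq(A, step, rcond=None)

corrected = A @ h
err_raw = float(np.mean((response - step) ** 2))
err_corr = float(np.mean((corrected - step) ** 2))
```

The mean squared error of the corrected signal against the ideal step is far below that of the raw response, mirroring the paper's finding that the regression-built filter outperforms a second-order model correction; in practice the filter would be built once from a clean (or ensemble-averaged) step response and then applied to other data after removing the measured delay.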
Zhang, Xiaolei
2016-01-01
Using the potential-density phase shift approach developed by the present authors in earlier publications, we estimate the magnitude of radial mass accretion/excretion rates across the disks of six nearby spiral galaxies having a range of Hubble types. Our goal is to examine these rates in the context of bulge building and secular morphological evolution along the Hubble sequence. Stellar surface density maps of the sample galaxies are derived from SINGS 3.6 μm and SDSS i-band images. Corresponding molecular and atomic gas surface densities are derived from published CO(1-0) and HI interferometric observations of the BIMA SONG, THINGS, and VIVA surveys. The mass flow rate calculations utilize a volume-type torque integral to calculate the angular momentum exchange rate between the basic state disk matter and density wave modes. The potential-density phase shift approach yields angular momentum transport rates several times higher than those estimated using the Lynden-Bell and Kalnajs (1972) approach. The curre...
Sago, Norichika; Nakano, Hiroyuki
2016-01-01
We revisit the accuracy of the post-Newtonian (PN) approximation and its region of validity for quasi-circular orbits of a point particle in the Kerr spacetime, by using the analytically known highest-order post-Newtonian gravitational energy flux and accurate numerical results from the black hole perturbation approach. It is found that the regions of validity become larger for higher PN order results, although there are several local maxima in the regions of validity for relatively low PN order results. This might imply that higher PN order calculations are also encouraged for comparable-mass binaries.
Euser, P.; Knorr, K.T.; Nicolaas, H.J.; Velde, A. van der
1984-01-01
Simplified methods and design aids for the calculation of cooling loads and room air temperatures are described. The complicated influence of the room mass is approximated by introducing the 'thermal effective mass' of the room. This quantity accounts for the restricted penetration of the fluctuatin
Performance of the Levenberg–Marquardt neural network approach in nuclear mass prediction
Zhang, Hai Fei; Hao Wang, Li; Yin, Jing Peng; Chen, Peng Hui; Zhang, Hong Fei
2017-04-01
Resorting to a neural network approach, we refined several representative and sophisticated global nuclear mass models within the latest atomic mass evaluation (AME2012). In the training process, a quite robust algorithm named the Levenberg–Marquardt (LM) method is employed to determine the weights and biases of the neural network. As a result, this LM neural network approach proves to be a very useful tool for further improving the accuracy of mass models. For a simple liquid-drop formula the root-mean-square (rms) deviation between the predictions and the 2353 experimentally known masses is sharply reduced from 2.455 MeV to 0.235 MeV, and for the other revisited mass models the rms deviation is remarkably improved, by about 30%.
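The refinement strategy, training a small network to absorb the residuals a mass model leaves against experiment, can be sketched schematically. Everything below is synthetic: the "residuals" are a toy smooth function of (Z, N), and plain full-batch gradient descent stands in for the Levenberg–Marquardt optimizer used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: (Z, N) inputs and a smooth toy "residual" between a
# mass formula and experiment (illustrative values, not AME2012 data).
Z = rng.integers(20, 80, size=200)
N = rng.integers(20, 120, size=200)
X = np.column_stack([Z, N]).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize inputs
resid = 0.5 * np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2

# One hidden tanh layer trained by full-batch gradient descent
# (a simple stand-in for the Levenberg-Marquardt optimizer of the paper).
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, 16); b2 = 0.0
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                     # (200, 16) hidden activations
    err = H @ W2 + b2 - resid                    # (200,) prediction error
    gW2 = H.T @ err / len(X); gb2 = err.mean()
    dH = np.outer(err, W2) * (1.0 - H ** 2)      # back-prop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
rms_before = float(np.sqrt(np.mean(resid ** 2)))
rms_after = float(np.sqrt(np.mean((resid - pred) ** 2)))
```

The refined model is then the original mass formula plus the trained network's output; the rms of the unexplained residual drops, which is the mechanism behind the 2.455 MeV → 0.235 MeV improvement quoted above.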
Klement, Laura; Bach, Martin; Breuer, Lutz; Häußermann, Uwe
2017-04-01
The latest inventory of the EU Water Framework Directive determined that 26.3% of Germany's groundwater bodies are in a poor chemical state regarding nitrate. As of late October 2016, the European Commission has filed a lawsuit against Germany for not taking appropriate measures against high nitrate levels in water bodies and thus failing to comply with the EU Nitrate Directive. Due to over-fertilization and high-density animal production, agriculture was identified as the main source of nitrate pollution. One way to characterize the potential impact of reactive nitrogen on water bodies is the soil-surface nitrogen balance, where all agricultural nitrogen inputs within an area are contrasted with the output, i.e. the harvest. The surplus nitrogen (given in kg N per ha arable land and year) can potentially leach into the groundwater and thus can be used as a risk indicator. In order to develop and advocate appropriate measures to mitigate the agricultural nitrogen surplus with spatial precision, high-resolution data for the nitrogen surplus is needed. In Germany, not all nitrogen input data is available with the required spatial resolution; in particular, the use of mineral fertilizers is only given statewide. Therefore, some elements of the nitrogen balance need to be estimated based on agricultural statistics. Hitherto, statistics from the Federal Statistical Office and the statistical offices of the 16 federal states of Germany were used to calculate the soil-surface balance annually at the spatial resolution of the 402 districts of Germany (mean size 890 km²). In contrast, this study presents an approach to estimate the nitrogen surplus at a much higher spatial resolution by using the comprehensive agricultural census data collected in 2010, providing data for 326000 agricultural holdings. This resulted in a nitrogen surplus map with a 5 km × 5 km grid, which was subsequently used to calculate the nitrogen concentration of percolation water. This provides a
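The soil-surface balance itself is a simple bookkeeping identity, inputs minus harvest removal per hectare; a minimal sketch (the input categories and numbers are illustrative, not the study's data):

```python
# Soil-surface N balance: all agricultural N inputs to an area minus the
# N removed with the harvest, in kg N per ha of arable land and year.
# The input categories below are typical examples, not the study's exact list.
def n_surplus(inputs_kg_n_ha: dict, harvest_removal_kg_n_ha: float) -> float:
    return sum(inputs_kg_n_ha.values()) - harvest_removal_kg_n_ha

surplus = n_surplus(
    {"mineral_fertilizer": 100, "manure": 60, "deposition": 15, "fixation": 5},
    harvest_removal_kg_n_ha=120,
)  # kg N per ha per year that can potentially leach
```

The study's contribution is not the identity but the resolution at which its terms are estimated: census data per holding instead of statewide fertilizer statistics.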
A simplified approach to calculate slurry production of growing pigs at farm level
Directory of Open Access Journals (Sweden)
Franco Tagliapietra
2010-01-01
A simplified approach to predict the amount of slurry produced by growing pigs at farm level is proposed. The inputs are initial (LWi) and final (LWf) live weights, production (t) and empty (tempty) periods, feed consumption (FC), dry matter (DMD) and N digestibilities, and farm water consumption per pig (FWC). Estimates of the amount of water required (or arising) per kg of feed for the various physiological functions were obtained by running a published mathematical model using data representing ordinary rearing conditions. Water excretion was estimated in two ways, depending on: 1) free access (ad libitum) to water; 2) restricted (forced) access. In the first case, the proportions of water consumed (wi,ad lib) and excreted with the urine (wu,ad lib) and the faeces (wfec) were quantified as 2.9, 1.72 and 0.33 kg per kg of feed, respectively. From the urinary excretions of N and minerals, obtained as the difference between the digestible nutrient intakes and the retentions, the model predicted a urinary DM content of 2.1% (by weight). In the second case, for pigs receiving drinking water in a forced ratio with the feed (wi,forced), the urinary production was calculated as wu,forced = (wi,forced + wf + wo) - (wd + ws + wg + wfec + we), where wf = water content in feed (0.12 kg/kg), wo = water arising from nutrient oxidation (0.25 kg/kg), wd = water required for digestion (0.08 kg/kg), ws = water demand for protein and lipid synthesis (0.06 kg/kg), wg = water retained in body tissues (0.14 kg/kg) and we = water lost through evaporation (0.96 kg/kg). Estimates of fresh slurry production (faeces + urine) were regressed against the values resulting from empirical literature equations referring to pigs fed water:feed ratios of 2.5:1, 2.9:1 and 4:1. The resulting regression (R² = 0.97), with a slope close to unity (1.05), indicated that the approach can be extended to predict farm fresh slurry production for pigs having free access to water or kept on different water:feed ratios. In agreement with
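The forced-watering branch of the balance, wu,forced = (wi,forced + wf + wo) - (wd + ws + wg + wfec + we), can be sketched directly from the quoted coefficients; the function name is ours, the constants are the abstract's (kg water per kg feed):

```python
# Water-balance coefficients (kg water per kg feed) quoted in the abstract.
WF, WO = 0.12, 0.25              # water in feed; water from nutrient oxidation
WD, WS, WG = 0.08, 0.06, 0.14    # digestion, synthesis, body-tissue retention
WFEC, WE = 0.33, 0.96            # faecal water; evaporative loss

def urinary_water(wi_forced: float) -> float:
    """Urinary water (kg per kg feed) for pigs on a forced water:feed
    ratio: wu = (wi + wf + wo) - (wd + ws + wg + wfec + we)."""
    return (wi_forced + WF + WO) - (WD + WS + WG + WFEC + WE)
```

For a water intake of 2.9 kg/kg, the balance returns a urinary excretion of about 1.70 kg/kg, close to the 1.72 kg/kg quoted for the ad libitum case.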
Roadmaps through free energy landscapes calculated using the multi-dimensional vFEP approach.
Lee, Tai-Sung; Radak, Brian K; Huang, Ming; Wong, Kin-Yiu; York, Darrin M
2014-01-14
The variational free energy profile (vFEP) method is extended to two dimensions and tested with molecular simulation applications. The proposed 2D-vFEP approach effectively addresses the two major obstacles to constructing free energy profiles from simulation data using traditional methods: the need for overlap in the re-weighting procedure and the problem of data representation. This is especially evident as these problems are shown to be more severe in two dimensions. The vFEP method is demonstrated to be highly robust and able to provide stable, analytic free energy profiles with only a paucity of sampled data. The analytic profiles can be analyzed with conventional search methods to easily identify stationary points (e.g. minima and first-order saddle points) as well as the pathways that connect these points. These "roadmaps" through the free energy surface are useful not only as a post-processing tool to characterize mechanisms, but can also serve as a basis from which to direct more focused "on-the-fly" sampling or adaptive force biasing. Test cases demonstrate that 2D-vFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct stable, converged analytic free energy profiles. In a classic test case, the two dimensional free energy profile of the backbone torsion angles of alanine dipeptide, 2D-vFEP needs less than 1% of the original data set to reach a sampling accuracy of 0.5 kcal/mol in free energy shifts between windows. A new software tool for performing one and two dimensional vFEP calculations is herein described and made publicly available.
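Locating and classifying stationary points on an analytic surface, as described above, reduces to a Newton search on the gradient plus an eigenvalue test of the Hessian; a minimal sketch on a toy double-well surface (not a vFEP output):

```python
import numpy as np

# Toy analytic 2D "free energy" surface f = x^4 - 2x^2 + y^2 (illustrative,
# not vFEP output): two minima at x = +/-1 and a first-order saddle at 0.
def grad(p):
    x, y = p
    return np.array([4 * x**3 - 4 * x, 2 * y])

def hess(p):
    x, y = p
    return np.array([[12 * x**2 - 4, 0.0], [0.0, 2.0]])

def newton_stationary(p0, tol=1e-12, max_iter=50):
    """Newton search for a nearby stationary point (grad = 0)."""
    p = np.array(p0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(p), grad(p))
        p -= step
        if np.linalg.norm(step) < tol:
            break
    return p

def classify(p):
    """Minimum if all Hessian eigenvalues > 0; first-order saddle if exactly one < 0."""
    evals = np.linalg.eigvalsh(hess(p))
    if np.all(evals > 0):
        return "minimum"
    return "first-order saddle" if np.sum(evals < 0) == 1 else "other"
```

Because the vFEP profile is analytic, exactly this kind of conventional stationary-point search can be run on it directly, and paths connecting minima through first-order saddles give the "roadmaps" the abstract refers to.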
Calculation of the Cost of an Adequate Education in Kentucky: A Professional Judgment Approach
Directory of Open Access Journals (Sweden)
Deborah A. Verstegen
2004-02-01
What is an adequate education and how much does it cost? In 1989, Kentucky's State Supreme Court found the entire system of education unconstitutional: "all of its parts and parcels". The Court called for all children to have access to an adequate education, one that is uniform and has as its goal the development of seven capacities, including: (i) "sufficient oral and written communication skills to enable students to function in a complex and rapidly changing civilization . . ." and (vii) "sufficient levels of academic or vocational skills to enable public school students to compete favorably with their counterparts in surrounding states, in academics or in the job market". Now, over a decade later, key questions remain regarding whether these objectives have been fulfilled. This research is designed to calculate the cost of an adequate education by aligning resources to State standards, laws and objectives, using a professional judgment approach. Seven focus groups were convened for this purpose, and the scholarly literature was reviewed to provide multiple inputs into study findings. The study produced a per-pupil base cost for each of three prototype school districts and a total statewide cost, with a funding gap between existing revenue and the revenue needed for current operations of $1.097 billion per year (2001-02). Additional key resource requirements needed to achieve an adequate education, identified by professional judgment panels, include: (1) extending the school year for students and teachers, (2) adding voluntary half-day preschool for three- and four-year-olds, and (3) raising teacher salaries. This increases the funding gap to $1.23 billion and suggests that significant new funding is required over time if the Commonwealth of Kentucky is to provide an adequate and equitable education of high quality for all children and youth as directed by the State Supreme Court.
Utama, R; Prosper, H B
2016-01-01
Besides their intrinsic nuclear-structure value, nuclear mass models are essential for astrophysical applications, such as r-process nucleosynthesis and neutron-star structure. To overcome the intrinsic limitations of existing "state-of-the-art" mass models, we propose a refinement based on a Bayesian Neural Network (BNN) formalism. A novel BNN approach is implemented with the goal of optimizing mass residuals between theory and experiment. A significant improvement (of about 40%) in the mass predictions of existing models is obtained after BNN refinement. Moreover, these improved results are now accompanied by proper statistical errors. Finally, by constructing a "world average" of these predictions, a mass model is obtained that is used to predict the composition of the outer crust of a neutron star. The power of the Bayesian neural network method has been successfully demonstrated by a systematic improvement in the accuracy of the predictions of nuclear masses. Extension to other nuclear observables is a n...
A New Approach for Offshore Wind Farm Energy Yields Calculation with Mixed Hub Height Wind Turbines
DEFF Research Database (Denmark)
Hou, Peng; Hu, Weihao; Soltani, Mohsen
2016-01-01
In this paper, a mathematical model for calculating the energy yields of an offshore wind farm with mixed types of wind turbines is proposed. The Jensen model is selected as the base and developed into a three-dimensional wake model to estimate the energy yields. Since the wind turbines have different hub heights, the wind shear effect is also taken into consideration. The results show that the proposed wake model is effective in calculating the wind speed deficit. The calculation framework is applicable for energy yields calculation in offshore wind farms.
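The classic single-wake Jensen model that the paper builds on gives the fractional velocity deficit from the thrust coefficient and a linearly expanding wake; a minimal sketch (the wake-decay constant k = 0.04 and all inputs are illustrative assumptions, and the paper's 3D, mixed-hub-height extension is not reproduced here):

```python
import math

def jensen_deficit(ct: float, r0: float, x: float, k: float = 0.04) -> float:
    """Fractional velocity deficit at distance x downstream of a rotor of
    radius r0 and thrust coefficient ct, in the classic top-hat Jensen model;
    k = 0.04 is a typical offshore wake-decay constant (an assumption here)."""
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

# Waked wind speed behind a turbine in a 9 m/s free stream (illustrative):
v_wake = 9.0 * (1.0 - jensen_deficit(ct=0.8, r0=40.0, x=400.0))
```

The deficit decays quadratically with downstream distance as the wake radius grows linearly (r = r0 + k·x), which is why the deficit, and hence the energy loss, is largest for closely spaced rows.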
Institute of Scientific and Technical Information of China (English)
刘松芬; 胡北来
2003-01-01
The internal energy and pressure of dense hydrogen plasma are calculated by the direct path integral Monte Carlo approach. The Kelbg potential is used as the interaction potential both between electrons and between protons and electrons in the calculation. The complete formulae for internal energy and pressure in dense hydrogen plasma derived for the simulation are presented. The correctness of the derived formulae is validated by the obtained simulation results. The numerical results are discussed in detail.
Global nuclear-structure calculations
Energy Technology Data Exchange (ETDEWEB)
Moeller, P.; Nix, J.R.
1990-04-20
The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980s was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom in addition to the ε₂ and ε₄ used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps and β-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential.
Silva, A. M.; Silva, B. P.; Sales, F. A. M.; Freire, V. N.; Moreira, E.; Fulco, U. L.; Albuquerque, E. L.; Maia, F. F., Jr.; Caetano, E. W. S.
2012-11-01
Density functional theory (DFT) computations within the local-density approximation and the generalized gradient approximation, in pure form and with dispersion correction (GGA+D), were carried out to investigate the structural, electronic, and optical properties of L-aspartic acid anhydrous crystals. The electronic (band structure and density of states) and optical absorption properties were used to interpret the light absorption measurements we have performed on L-aspartic acid anhydrous crystalline powder at room temperature. We show the important role of the layered spatial disposition of L-aspartic acid molecules in anhydrous L-aspartic crystals in explaining the observed electronic and optical properties. There is good agreement between the GGA+D calculated and experimental lattice parameters, with (Δa, Δb, Δc) deviations of (0.029, -0.023, -0.024) Å. Mulliken [J. Chem. Phys. 23, 1833 (1955)] and Hirshfeld [Theor. Chim. Acta 44, 129 (1977)] population analyses were also performed to assess the degree of charge polarization in the zwitterion state of the L-aspartic acid molecules in the DFT-converged crystal. The lowest-energy optical absorption peaks, related to transitions between the top of the valence band and the bottom of the conduction band, involve O 2p valence states and C 2p and O 2p conduction states, with the carboxyl and COOH lateral chain groups contributing significantly to the energy band gap. Among the calculated band gaps, the lowest, GGA+D (4.49 eV), is smaller than the experimental estimate of 5.02 eV obtained by optical absorption. Such a wide band-gap energy, together with the small carrier effective masses estimated from band curvatures, allows us to suggest that an L-aspartic acid anhydrous crystal can behave as a wide-gap semiconductor. A comparison of effective masses along directions parallel and perpendicular to the L-aspartic molecular layers reveals that charge
Stur, J.; Bos, M.; van der Linden, W.E.
1984-01-01
The very fast calculation procedure described earlier is applied to calculate the titration curves of complicated redox systems. The theory is extended slightly to cover inhomogeneous redox systems. Titrations of iodine or 2,6-dichloroindophenol with ascorbic acid are described. It is shown that cor
Heat and mass transfer intensification and shape optimization a multi-scale approach
2013-01-01
Is heat and mass transfer intensification a new paradigm of process engineering, or is it just a common and old idea, renamed and given a current taste? Where might intensification occur? How can intensification be achieved? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfer? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...
Zayed, M. A.; Hawash, M. F.; Fahmey, M. A.; El-Habeeb, Abeer A.
2007-11-01
Sertraline (C17H17Cl2N), an antidepressant drug, was investigated using thermal analysis (TA) measurements (TG/DTG and DTA) in comparison with electron impact (EI) mass spectral (MS) fragmentation at 70 eV. Semi-empirical MO calculations using the PM3 procedure were carried out on the neutral molecule and on positively charged species. These calculations included bond length, bond order, bond strain, partial charge distribution, and heats of formation (ΔHf). In the present work a sertraline-iodine product was also prepared and its structure was investigated using elemental analyses, IR, 1H NMR, 13C NMR, MS and TA. It was also subjected to molecular orbital calculations (MOC) in order to confirm its fragmentation behavior in both MS and TA in comparison with the parent sertraline drug. In the MS of sertraline the initial rupture was the loss of a CH3NH2+ fragment ion via H-rearrangement, while in the sertraline-iodine product the initial rupture was due to the loss of I+ and/or HI+ fragment ions, followed by the loss of a CH2=NH+ fragment ion. In thermal analysis (TA) the initial rupture in sertraline is due to the loss of C6H3Cl2, followed by the loss of CH3-NH to form a tetralin molecule, which thermally decomposes to give C4H8 and C6H6, or by the loss of H2 to form a naphthalene molecule, which sublimes thermally. In the sertraline-iodine daughter product the initial thermal rupture is due to successive losses of HI and CH3NH, followed by the loss of C6H5HI and HCl. Sertraline's biological activity increases with the introduction of iodine into its skeleton. The activities of the drug and its daughter mainly depend upon their fragmentation to give their metabolites in in vivo systems, which are very similar to the fragments identified in both MS and TA. The importance of the present work also lies in determining the possible fragmentation mechanisms of the drug and its daughter and confirming them by MOC.
Algebraic-Eikonal approach to medium energy proton scattering from odd-mass nuclei
Bijker, R
1995-01-01
We extend the algebraic-eikonal approach to medium energy proton scattering from odd-mass nuclei by combining the eikonal approximation for the scattering with a description of odd-mass nuclei in terms of the interacting boson-fermion model. We derive closed expressions for the transition matrix elements for one of the dynamical symmetries and discuss the interplay between collective and single-particle degrees of freedom in an application to elastic and inelastic proton scattering from ^{195}Pt.
Dolatabadi, N.; Littlefair, B.; De la Cruz, M.; Theodossiades, S.; Rothberg, S. J.; Rahnejat, H.
2015-09-01
An analytical/numerical methodology is presented to calculate the radiated noise due to internal combustion engine piston impacts on the cylinder liner through a film of lubricant. Both quasi-static and transient dynamic analyses coupled with impact elasto-hydrodynamics are reported. The local impact impedance is calculated, as well as the transferred energy onto the cylinder liner. The simulations are verified against experimental results for different engine operating conditions and for noise levels calculated in the vicinity of the engine block. Continuous wavelet signal processing is performed to identify the occurrence of piston slap noise events and their spectral content, showing good conformance between the predictions and experimentally acquired signals.
New approach to 3D electrostatic calculations for micro-pattern detectors
Lazic, P; Formaggio, J A; Abraham, H; Stefancic, H
2011-01-01
We demonstrate practically approximation-free electrostatic calculations of micromesh detectors that can be extended to any other type of micropattern detector. Using a newly developed Boundary Element Method called the Robin Hood Method, we can easily handle objects with a huge number of boundary elements (hundreds of thousands) without any compromise in numerical accuracy. In this paper we show how such calculations can be applied to Micromegas detectors by comparing electron transparencies and gains for four different types of meshes. We demonstrate the inclusion of dielectric material by calculating the electric field around different types of dielectric spacers.
New approach to 3D electrostatic calculations for micro-pattern detectors
Lazić, P.; Dujmić, D.; Formaggio, J. A.; Abraham, H.; Štefancić, H.
2011-12-01
We demonstrate nearly approximation-free electrostatic calculations of micromesh detectors that can be extended to any other type of micropattern detector. Using a newly developed Boundary Element Method called the Robin Hood Method, we can easily handle objects with a huge number of boundary elements (hundreds of thousands) without any compromise in numerical accuracy. In this paper we show how such calculations can be applied to Micromegas detectors by comparing electron transparencies and gains for four different types of meshes. We also demonstrate the inclusion of dielectric material by calculating the electric field around different types of dielectric spacers.
Indian Academy of Sciences (India)
Mrinal Kumar Das; Mahadev Patgiri; N Nimai Singh
2005-12-01
We briefly outline the two popular approaches to radiative corrections to neutrino masses and mixing angles, and then carry out a detailed numerical analysis for a consistency check between them in the MSSM. We find that the two approaches are nearly consistent, with a discrepancy of 4.2% in the mass eigenvalues at the low-energy scale for a running vacuum expectation value (VEV) (13% for a scale-independent VEV), while the predictions for the mixing angles are almost consistent. We check the stability of the three types of neutrino models, i.e., hierarchical, inverted hierarchical and degenerate models, under radiative corrections, using both approaches, and find consistent conclusions. The neutrino mass models found to be stable under radiative corrections in the MSSM are the normal hierarchical model and the inverted hierarchical model with opposite CP parity. We also carry out numerical analysis of some important conjectures related to radiative corrections in the MSSM, viz., radiative magnification of solar and atmospheric mixings in the case of a nearly degenerate model having the same CP parity (MPR conjecture) and radiative generation of the solar mass scale in an exactly two-fold degenerate model with opposite CP parity and nonzero θ13 (JM conjecture). We observe certain exceptions to these conjectures. We find a new result that both the solar mass scale and θ13 can be generated through radiative corrections at the low-energy scale. Finally, the effect of a scale-dependent vacuum expectation value on neutrino mass renormalisation is discussed.
An integrated approach to calculate life cycle costs of arms and military equipment
Directory of Open Access Journals (Sweden)
Vlada S. Sokolović
2013-12-01
, costs are one of the most dominant parameters in decision-making. Modern trends in this area comprehensively consider all costs during the life cycle of assets. In general, in the analysis of life-cycle costs of AME there are two sets of costs: visible and invisible (hidden) costs. The visible part of the costs is mainly present in decision-making and usually includes the cost of equipping units or purchasing assets. The invisible part of the costs is far more significant: although it is larger than the visible part and covers more groups of costs, decision-makers often do not take it into account. The hidden costs include distribution costs, operating costs, maintenance costs, training costs, inventory costs, information-systems costs, the cost of disposal and write-offs, etc. The decision-making problem of investment in AME purchase and equipping is obviously of a multicriteria nature, whether an optimal combination of costs for one technical system (AME) is in question, or a choice of an AME system among many offered. COST ANALYSIS OF A PARTICULAR ASSET: To illustrate an integrated approach to the analysis of the cost of assets over their life cycle, a model from the US Naval Postgraduate School was adjusted and applied to an example of a real asset. The model is applied to the case of two squadrons of identical aircraft based at different airports. With regard to the availability, confidentiality and variability of the costs and the reliability of the elements of AME, the calculations in the model are implemented on the basis of estimated or approximate parameters. Essentially, the goal is to demonstrate the interdependence, mutual relations and influences of the parameters and their ultimate impact on the overall cost of military assets. Applying the model to a particular example points to the fact that, in the first years of asset life, the dominant cost is that of asset procurement (cost of acquisition, cost of assets
Schmiedt, Hanno; Schlemmer, Stephan; Yurchenko, Sergey N.; Yachmenev, Andrey
2017-01-01
We report a new semi-classical method to compute highly excited rotational energy levels of an asymmetric-top molecule. The method forgoes the idea of a full quantum mechanical treatment of the ro-vibrational motion of the molecule. Instead, it employs a semi-classical Green's function approach to describe the rotational motion, while retaining a quantum mechanical description of the vibrations. Similar approaches have existed for some time, but the method proposed here has two novel features. First, inspired by the path integral method, periodic orbits in the phase space and tunneling paths are naturally obtained by means of molecular symmetry analysis. Second, the rigorous variational method is employed for the first time to describe the molecular vibrations. In addition, we present a new robust approach to generating rotational energy surfaces for vibrationally excited states; this is done in a fully quantum-mechanical, variational manner. The semi-classical approach of the present work is applied to calculating the energies of very highly excited rotational states and it reduces dramatically the computing time as well as the storage and memory requirements when compared to the fully quantum-mechanical variational approach. Test calculations for excited states of SO2 yield semi-classical energies in very good agreement with the available experimental data and the results of fully quantum-mechanical calculations. PMID:28000807
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
In the present study, we analyze the variation of the added mass for a circular cylinder in the lock-in (synchronization) range of vortex-induced vibration (VIV) and the relationship between added mass and natural frequency. A theoretical minimum value of the added-mass coefficient for a circular cylinder at lock-in is given, and semi-empirical formulas are developed for the added mass of a circular cylinder at lock-in as a function of flow speed and mass ratio. A comparison between experiments and numerical simulations shows that the semi-empirical formulas describing the variation of the added mass for a circular cylinder at lock-in perform better than the ideal added mass. In addition, computational models such as the wake oscillator model using the present formulas can predict the amplitude response of a circular cylinder at lock-in more accurately than those using the ideal added mass.
Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap
Energy Technology Data Exchange (ETDEWEB)
Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Kujawa-Cichy, Agnieszka [Poznan Univ. (Poland). Faculty of Physics; Szyniszewski, Marcin [Poznan Univ. (Poland). Faculty of Physics; Manchester Univ. (United Kingdom). NOWNano DTC
2012-12-15
We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows us to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10^-6 %.
1995-05-01
A Hybrid Analytical/Simulation Modeling Approach for Planning and Optimizing Mass Tactical Airborne Operations, by David Douglas Briggs, M.S.B.A. (technical report thesis, May 1995). ... are present. Thus, simulation modeling presents itself as an excellent alternate tool for planning because it allows for the modeling of highly complex
Energy Technology Data Exchange (ETDEWEB)
Zhu, G.; Lewandowski, A.
2012-11-01
A new analytical method -- First-principle OPTical Intercept Calculation (FirstOPTIC) -- is presented here for optical evaluation of trough collectors. It employs first-principle optical treatment of collector optical error sources and derives analytical mathematical formulae to calculate the intercept factor of a trough collector. A suite of MATLAB code is developed for FirstOPTIC and validated against theoretical/numerical solutions and ray-tracing results. It is shown that FirstOPTIC can provide fast and accurate calculation of intercept factors of trough collectors. The method makes it possible to carry out fast evaluation of trough collectors for design purposes. The FirstOPTIC techniques and analysis may be naturally extended to other types of CSP technologies such as linear-Fresnel collectors and central-receiver towers.
Directory of Open Access Journals (Sweden)
Mo Gao
2016-01-01
Calculating the carrying capacity of a high-speed railway with mathematical programming leads to a multiobjective mixed integer programming problem. The model is complex and difficult to solve, and it is difficult to account comprehensively for the various factors influencing train operation. Multiagent theory is therefore employed to calculate high-speed railway carrying capacity. In accordance with real high-speed railway operations, a three-layer agent model is developed to simulate the operating process. In the proposed model, railway network, line, station, and train agents are designed, respectively. To validate the proposed model, a case study is performed for the Beijing–Shanghai high-speed railway using NetLogo software. The results are consistent with the actual data, which implies that the proposed multiagent method is feasible for calculating the carrying capacity of a high-speed railway.
2014-01-01
Heidegger’s two modes of thinking, calculative and meditative, were used as the thematic basis for this qualitative study of physicians from seven countries (Canada, China, India, Ireland, Japan, Korea, & Thailand). Focus groups were conducted in each country with 69 physicians who cared for the elderly. Results suggest that physicians perceived ethical issues primarily through the lens of calculative thinking (76%) with emphasis on economic concerns. Meditative responses represented 24% of the statements and were mostly generated by Canadian physicians whose patients typically were not faced with economic barriers to treatment due to Canada’s universal health care system. PMID:25381149
Boundary-projection acceleration: A new approach to synthetic acceleration of transport calculations
Energy Technology Data Exchange (ETDEWEB)
Adams, M.L.; Martin, W.R.
1987-01-01
We present a new class of synthetic acceleration methods which can be applied to transport calculations regardless of geometry, discretization scheme, or mesh shape. Unlike other synthetic acceleration methods, which base their acceleration on P1 equations, these methods use acceleration equations obtained by projecting the transport solution onto a coarse angular mesh only on cell boundaries. We demonstrate, via Fourier analysis of a simple model problem as well as numerical calculations of various problems, that the simplest of these methods are unconditionally stable with spectral radius less than or equal to c/3 (c being the scattering ratio) for several different discretization schemes in slab geometry. 28 refs., 4 figs., 3 tabs.
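The spectral-radius language in the abstract above can be illustrated with a generic numerical check: for any linear fixed-point iteration x_{k+1} = M x_k + b, the asymptotic error-reduction factor is the spectral radius ρ(M), which a few power iterations estimate well. The sketch below is only a toy (a random symmetric matrix with a known spectrum, not a transport error-propagation operator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a matrix with a known spectrum via an orthogonal similarity
# transform; its spectral radius is 0.3 by construction.
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigs = np.linspace(-0.25, 0.3, 50)
M = Q @ np.diag(eigs) @ Q.T

# Power iteration: the ratio of successive error norms tends to rho(M),
# the asymptotic per-iteration error-reduction factor.
v = rng.standard_normal(50)
for _ in range(300):
    w = M @ v
    rho_est = np.linalg.norm(w) / np.linalg.norm(v)
    v = w / np.linalg.norm(w)
```

An unconditionally stable scheme with ρ ≤ c/3 converges even for scattering ratios c near 1, where unaccelerated source iteration (ρ ≈ c) stalls.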
Mateu, J.; Collado, C.; Menéndez, O.; O'Callaghan, J. M.
2003-01-01
We report on a general procedure to calculate intermodulation distortion in cavities with superconducting endplates that is applicable to the dielectric-loaded cavities currently used for measurement of surface resistance in high-temperature superconductors. The procedure would enable the use of such cavities for intermodulation characterization of unpatterned superconducting films, and would remove the uncertainty of measuring intermodulation on patterned devices, in which the effect of patterning damage might influence the outcome of the measurements. We have verified the calculation method by combining superconducting and copper endplates in a rutile-loaded cavity.
A numerical approach to calculate the induced voltage in the case of conducted perturbations
Energy Technology Data Exchange (ETDEWEB)
Andretzko, J.P.; Hedjiedj, A.; Babouri, A.; Guendouz, L.; Nadi, M. [Nancy-1 Univ. Henri Poincare, Lab. d' Instrumentation Electronique de Nancy, Faculte des Sciences, 54 - Vandoeuvre les Nancy (France)
2006-07-01
This paper presents a numerical simulation method that makes it possible to calculate the voltage induced at the terminals of a cardiac pacemaker subjected to conducted disturbances. The physical model used for the simulation is an experimental test bed which allows the behaviour of the pacemaker to be studied in vitro when subjected to electromagnetic disturbances in the low-frequency range (50 Hz - 500 kHz). The test bed in which the pacemaker is implanted is described in this article. The calculation uses the admittance method adapted to the case of conducted disturbances. Results obtained by numerical simulation are close to experimental values. (authors)
A Computationally Efficient Approach for Calculating Galaxy Two-Point Correlations
Demina, Regina; BenZvi, Segev; Hindrichs, Otto
2016-01-01
We develop a modification to the calculation of the two-point correlation function commonly used in the analysis of large scale structure in cosmology. An estimator of the two-point correlation function is constructed by contrasting the observed distribution of galaxies with that of a uniformly populated random catalog. Using the assumption that the distribution of random galaxies in redshift is independent of angular position allows us to replace pairwise combinatorics with fast integration over probability maps. The new method significantly reduces the computation time while simultaneously increasing the precision of the calculation.
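The baseline that the abstract's method accelerates is the pairwise Landy-Szalay estimator, ξ = (DD - 2DR + RR)/RR, built from histograms of pair separations. A brute-force sketch in Python (catalog sizes and seed are illustrative, not from the paper):

```python
import numpy as np

def pair_counts(a, b, bins):
    """Histogram of pairwise separations between point sets a and b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if a is b:
        # auto-pairs: keep each unordered pair once, drop self-distances
        d = d[np.triu_indices(len(a), k=1)]
    else:
        d = d.ravel()
    return np.histogram(d, bins=bins)[0]

def landy_szalay(data, rand, bins):
    """Classic pairwise Landy-Szalay estimator of xi(r)."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, bins) / (nd * (nd - 1) / 2)
    rr = pair_counts(rand, rand, bins) / (nr * (nr - 1) / 2)
    dr = pair_counts(data, rand, bins) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr

rng = np.random.default_rng(1)
data = rng.uniform(size=(300, 3))  # toy "galaxy" catalog in a unit box
rand = rng.uniform(size=(600, 3))  # uniform random catalog
bins = np.linspace(0.0, 1.0, 11)   # separation bins of width 0.1
xi = landy_szalay(data, rand, bins)
```

For uniform toy data ξ scatters around zero; the cost of the pair counts above is O(N²), which is exactly what replacing pairwise combinatorics with integration over probability maps avoids.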
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
Based on the phase diagrams and the mass action law, in combination with the coexistence theory of metallic melt structure, calculation models of the mass action concentration for Mg-Al, Sr-Al and Ba-Al were built, and their thermodynamic parameters were determined. The agreement between calculated and measured results shows that the models and the determined thermodynamic parameters reflect the structural characteristics of the relevant melts. However, the fact that the thermodynamic parameters from the literature do not yield values that agree with the measured results may be due to the nonconformity of these parameters to the real chemical reactions in metallic melts.
Qian, Michael; Reineccius, G A
2003-03-01
Potentially important aroma compounds in Parmigiano Reggiano cheese were quantified. Free fatty acids were isolated by ion-exchange chromatography and quantified by gas chromatography. Neutral aroma compounds were quantified by purge-and-trap gas chromatography-mass spectrometry with a selective mass ion technique. Odor activity values were calculated based on sensory thresholds reported in the literature. The calculated odor activity values suggest that 3-methylbutanal, 2-methylpropanal, 2-methylbutanal, dimethyl trisulfide, diacetyl, methional, phenylacetaldehyde, ethyl butanoate, ethyl hexanoate, ethyl octanoate, and acetic, butanoic, hexanoic, and octanoic acids are the most important aroma contributors to Parmigiano Reggiano cheese.
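An odor activity value (OAV) is simply concentration divided by sensory threshold, with OAV > 1 taken as odor-active. A minimal sketch with invented numbers (the concentrations and thresholds below are hypothetical, not the paper's measured values):

```python
# Hypothetical illustration: concentration and odor threshold must share
# units (e.g. ug/kg); these numbers are invented, not the paper's data.
measurements = {
    # compound: (concentration, odor_threshold)
    "ethyl hexanoate": (450.0, 5.0),
    "methional": (40.0, 0.2),
    "butanoic acid": (3000.0, 240.0),
}

oav = {name: conc / thr for name, (conc, thr) in measurements.items()}

# Compounds with OAV > 1 are usually considered odor-active;
# rank them by decreasing OAV.
odor_active = sorted((name for name, v in oav.items() if v > 1.0),
                     key=lambda n: -oav[n])
```

The ranking, not the raw concentration, is what identifies the dominant aroma contributors: a trace compound with a very low threshold (like methional here) can outrank an abundant one.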
Bamber, Jonathan L.; Schoen, Nana; Zammit-Mangion, Andrew; Rougier, Jonty; Luthcke, Scott; King, Matt
2013-04-01
Quantifying ice mass loss from the Antarctic Ice Sheet remains an important, yet still challenging problem. Although some agreement has been reached as to the order of magnitude of ice loss over the last two decades, in general methods lack statistical rigour in deriving uncertainties and for East Antarctica and the Peninsula significant inconsistencies remain. Here, we present rigorously-derived, error-bounded mass balance trends for part of the Antarctic ice sheet from a combination of satellite, in situ and regional climate model data sets for 2003-2009. Estimates for glacial isostatic adjustment (GIA), surface mass balance (SMB) anomaly, and ice mass change are derived from satellite gravimetry (the Gravity Recovery and Climate Experiment, GRACE), laser altimetry (ICESat, the Ice, Cloud and land Elevation Satellite) and GPS bedrock elevation rates. We use a deterministic Bayes approach to simultaneously solve for the unknown parameters and the covariance matrix which provides the uncertainties. The data were distributed onto a finite element grid the resolution of which reflects the gradients in the underlying process: here ice dynamics and surface mass balance. In this proof of concept study we solve for the time averaged, spatial distribution of mass trends over the 7 year time interval. The results illustrate the potential of the approach, especially for the Antarctic Peninsula (AP), where, due to its narrow width and steep orography, data coverage is sparse and error-prone for satellite altimetry. Results for the ice mass balance estimates are consistent with previous estimates and demonstrate the strength of the approach. Well-known patterns of ice mass change over the WAIS, like the stalled Kamb Ice Stream and the rapid thinning in the Amundsen Sea Embayment, are reproduced in terms of mass trend. Also, without relying on information on ice dynamics, the method correctly places ice loss maxima at the outlets of major glaciers on the AP. Combined ice mass
Symbolic algebra approach to the calculation of intraocular lens power following cataract surgery
Hjelmstad, David P.; Sayegh, Samir I.
2013-03-01
We present a symbolic approach based on matrix methods that allows for the analysis and computation of intraocular lens power following cataract surgery. We extend the basic matrix approach corresponding to paraxial optics to include astigmatism and other aberrations. The symbolic approach allows for a refined analysis of the potential sources of errors ("refractive surprises"). We demonstrate the computation of lens powers including toric lenses that correct for both defocus (myopia, hyperopia) and astigmatism. A specific implementation in Mathematica allows an elegant and powerful method for the design and analysis of these intraocular lenses.
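In the paraxial 2x2 matrix formalism the abstract builds on, an eye model is a product of refraction and translation matrices, and the IOL power that focuses parallel rays on the retina makes the system matrix's ray-height output vanish. A sketch under assumed (hypothetical) biometry values, not the paper's implementation: 43 D cornea, 23.5 mm axial length, 5 mm effective lens position, uniform media index 1.336.

```python
import numpy as np

N = 1.336        # refractive index of aqueous/vitreous (assumed uniform)
P_CORNEA = 43.0  # corneal power in diopters (hypothetical biometry)
AXIAL = 0.0235   # axial length in meters
ELP = 0.005      # effective lens position behind the cornea, meters

def refraction(power):
    # Acts on (height y, reduced angle nu): nu' = nu - power * y
    return np.array([[1.0, 0.0], [-power, 1.0]])

def translation(t, n):
    # y' = y + (t / n) * nu
    return np.array([[1.0, t / n], [0.0, 1.0]])

def retinal_height(iol_power):
    """Ray height at the retina for a parallel input ray of unit height."""
    system = (translation(AXIAL - ELP, N) @ refraction(iol_power)
              @ translation(ELP, N) @ refraction(P_CORNEA))
    y, _ = system @ np.array([1.0, 0.0])
    return y

# retinal_height is linear in iol_power, so two samples locate the root.
y0, y1 = retinal_height(0.0), retinal_height(30.0)
iol = 30.0 * y0 / (y0 - y1)
```

With these assumed values the emmetropic power comes out near 21 D, a plausible clinical magnitude; the symbolic treatment in the paper extends the same matrices to astigmatism and toric lenses.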
A new fragment-based approach for calculating electronic excitation energies of large systems.
Ma, Yingjin; Liu, Yang; Ma, Haibo
2012-01-14
We present a new fragment-based scheme to calculate the excited states of large systems without the necessity of a Hartree-Fock (HF) solution of the whole system. This method is based on an implementation of the renormalized excitonic method [M. A. Hajj et al., Phys. Rev. B 72, 224412 (2005)] at the ab initio level, which assumes that the excitation of the whole system can be expressed as a linear combination of various local excitations. We decomposed the whole system into several blocks and then constructed the effective Hamiltonians for the intra- and inter-block interactions with block canonical molecular orbitals instead of the widely used localized molecular orbitals. Accordingly, we avoided the prerequisite HF solution and the molecular-orbital localization procedure of the popular local correlation methods. Test calculations were implemented for hydrogen molecule chains at the full configuration interaction, symmetry adapted cluster/symmetry adapted cluster configuration interaction, and HF/configuration interaction singles (CIS) levels, and for more realistic polyene systems at the HF/CIS level. The calculated vertical excitation energies for the lowest excited states are in reasonable accordance with those determined by calculations on the whole systems with traditional methods, showing that our new fragment-based method can give good estimates of the low-lying energy spectra of both weakly and moderately interacting systems at economic computational cost.
Approaches to calculating P balance at the field-scale in Europe
Tunney, H.; Csathó, P.; Ehlert, P.A.I.
2003-01-01
Policies for mitigating phosphorus (P) loss from agriculture are being developed in a number of European countries and calculation of P balance at farm-gate or field-scale is likely to be a part of such policies. The aim of the paper was to study P balance at the field-scale in 18 countries that par
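A field-scale P balance of the kind these policies rely on is arithmetic: P inputs minus P offtake. A minimal sketch with invented figures (all values hypothetical, kg P per hectare per year; real calculations use farm records and national coefficients):

```python
# Hypothetical field-scale phosphorus balance (kg P / ha / yr).
inputs = {
    "mineral fertilizer": 20.0,
    "manure": 10.0,
    "atmospheric deposition": 0.5,
}
outputs = {
    "harvested crop offtake": 22.0,
    "crop residue removal": 1.5,
}

p_balance = sum(inputs.values()) - sum(outputs.values())
# A sustained positive balance indicates soil P accumulation and a
# growing risk of P loss to water; a negative balance mines soil P.
```

The policy-relevant differences between countries lie less in this arithmetic than in which input and offtake terms are counted and at what scale (farm gate vs. field).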
Review article number 50 - The Maxwell-Stefan approach to mass transfer
Krishna, R.; Wesselingh, J.A
1997-01-01
The limitations of Fick's law for describing diffusion are discussed. It is argued that the Maxwell-Stefan formulation provides the most general, and convenient, approach for describing mass transport, taking proper account of thermodynamic non-idealities and the influence of external force fiel
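For reference, the Maxwell-Stefan formulation advocated in the review can be written, for an n-component mixture with molar fluxes N_i and total molar concentration c_t, in the standard textbook form (this is the generic formulation, not a quotation from the review):

```latex
% Generalized driving force on species i balanced by pairwise friction:
-\frac{x_i}{RT}\,\nabla_{T,p}\,\mu_i
  \;=\; \sum_{j \neq i}^{n} \frac{x_j \mathbf{N}_i - x_i \mathbf{N}_j}{c_t\,\mathcal{D}_{ij}},
  \qquad i = 1, \dots, n-1,
% where the \mathcal{D}_{ij} are the Maxwell-Stefan diffusivities.
% For an ideal mixture the left-hand side reduces to -\nabla x_i;
% Fick's law is recovered only in the binary or dilute limits.
```

The chemical-potential driving force is what lets the formulation handle the thermodynamic non-idealities mentioned in the abstract.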
Nitrate Removal in Two Relict Oxbow Urban Wetlands: A 15N Mass-balance Approach
A 15N-tracer method was used to quantify nitrogen (N) removal processes in two relict oxbow wetlands located adjacent to the Minebank Run restored stream reach in Baltimore County (Maryland, USA) during summer 2009 and early spring 2010. A mass-balance approach was used to determ...
Sliding Mode Control for Mass Moment Aerospace Vehicles Using Dynamic Inversion Approach
Directory of Open Access Journals (Sweden)
Xiao-Yu Zhang
2013-01-01
The moving mass actuation technique offers significant advantages over conventional aerodynamic control surfaces and reaction control systems, because the actuators are contained entirely within the airframe's geometric envelope. The modeling, control, and simulation of Mass Moment Aerospace Vehicles (MMAVs) utilizing moving-mass actuators are discussed. The dynamics of the MMAV are separated into two parts on the basis of two-time-scale separation theory: the fast-state dynamics and the slow-state dynamics. Then, in order to restrain system chattering and maintain tracking performance under aerodynamic parameter perturbations, a flight control system is designed for the two subsystems, respectively, using a fuzzy sliding mode control approach. The simulation results demonstrate the effectiveness of the proposed autopilot design approach. Meanwhile, the chattering phenomenon that frequently appears in conventional variable structure systems is also eliminated without deteriorating system robustness.
Yamamoto, Y.; Ando, S.
1987-01-01
The unsteady aerodynamics of a two-dimensional wing at sonic speed are studied using the classical (linear) sonic theories, approached from supersonic flow (M=1+0) or subsonic flow (M=1-0). In the former approach, exact expressions for the lift and lift distribution are obtained in terms of Fresnel integrals, while in the latter an integral equation must be solved, whose kernel function is obtained from the subsonic Possio equation and has a root singularity. The discrete analysis is based on the semicircle method (SCM) and the weighting function for subsonic-flow Gauss quadrature, and the modified characteristics obtained from the two approaches agree quite well with each other. The results of the present computations are compared with those of DLM-C (a subsonic 2D code) developed by Ando et al., and are found to give a reasonable outer boundary for subsonic unsteady aerodynamics.
Song, Linze; Shi, Qiang
2015-05-07
We present a new non-perturbative method to calculate the charge carrier mobility using the imaginary-time path integral approach, which is based on the Kubo formula for the conductivity and a saddle-point approximation to perform the analytic continuation. The new method is first tested against a benchmark calculation from the numerically exact hierarchical equations of motion method. Imaginary-time path integral Monte Carlo simulations are then performed to explore the temperature dependence of charge carrier delocalization and mobility in organic molecular crystals (OMCs) within the Holstein and Holstein-Peierls models. The effects of nonlocal electron-phonon interaction on mobility in different charge transport regimes are also investigated.
Perrelli, L; Calisti, A; Molle, P
1981-01-01
A large series of malignant and benign conditions are generally collected under the term abdominal masses. Their common feature is the lack, in most cases, of peculiar clinical features that may help early differential diagnosis. In many cases the mass is detected late, after a long period of vague, aspecific symptoms. Forty percent of these space-occupying lesions of the abdomen are of malignant origin, and delayed detection and investigation affect the clinical course. Preoperative study of abdominal masses is a problem of primary importance in pediatric surgical practice. A changing attitude is registered towards many diagnostic procedures, and the role of widely used techniques like angiography is controversial. The introduction of ultrasonography makes intensive radiologic investigation unwarranted and academic in many cases. The authors discuss the real role and targets of preoperative investigation of abdominal masses and report their experience based on 52 cases, underlining some clinical aspects and analysing their diagnostic approach to this pathology.
Midya, Bikashkali; Roychoudhury, Rajkumar
2010-01-01
Here we study first- and second-order intertwining approaches to generate isospectral partner potentials of the position-dependent (effective) mass Schroedinger equation. The second-order intertwiner is constructed directly by taking it as a second-order linear differential operator with position-dependent coefficients, and the system of equations arising from the intertwining relationship is solved for the coefficients by means of an ansatz. A complete scheme for obtaining the general solution is derived, valid for any arbitrary potential and mass function. The proposed technique allows us to generate isospectral potentials with the following spectral modifications: (i) to add new bound state(s), (ii) to remove bound state(s) and (iii) to leave the spectrum unaffected. To illustrate our findings, we use a point canonical transformation (PCT) to obtain the general solution of the position-dependent mass Schroedinger equation corresponding to a given potential and mass function. It is...
Schmiedt, Hanno; Schlemmer, Stephan; Yurchenko, Sergei N.; Yachmenev, Andrey; Jensen, Per
2017-06-01
We report a new semi-classical method to compute highly excited rotational energy levels of an asymmetric-top molecule. The method forgoes the idea of a full quantum mechanical treatment of the ro-vibrational motion of the molecule. Instead, it employs a semi-classical Green's function approach to describe the rotational motion, while retaining a quantum mechanical description of the vibrations. Similar approaches have existed for some time, but the method proposed here has two novel features. First, inspired by the path integral method, periodic orbits in the phase space and tunneling paths are naturally obtained by means of molecular symmetry analysis. Second, the rigorous variational method is employed for the first time to describe the molecular vibrations. In addition, we present a new robust approach to generating rotational energy surfaces for vibrationally excited states; this is done in a fully quantum-mechanical, variational manner. The semi-classical approach of the present work is applied to calculating the energies of very highly excited rotational states, and it dramatically reduces the computing time as well as the storage and memory requirements compared to the fully quantum-mechanical variational approach. Test calculations for excited states of SO_2 yield semi-classical energies in very good agreement with the available experimental data and the results of fully quantum-mechanical calculations. We also hope to present semi-classical calculations of transition intensities at the meeting. See also the open-access paper Phys. Chem. Chem. Phys. 19, 1847-1856 (2017). DOI: 10.1039/C6CP05589C
A Modal Approach to the Numerical Calculation of Primordial non-Gaussianities
Funakoshi, Hiroyuki
2012-01-01
We propose a new method to numerically calculate higher-order correlation functions of primordial fluctuations generated in any early-universe scenario. Our key starting point is the realization that the tree-level in-in formalism is intrinsically separable. This enables us to use modal techniques to efficiently calculate and represent non-Gaussian shapes in a separable form well suited to data analysis. We demonstrate the feasibility and accuracy of our method by applying it to simple single-field inflationary models for which analytical results are available, and we perform non-trivial consistency checks such as verification of the single-field consistency relation. We also point out that the i epsilon prescription is automatically taken into account in our method, obviating the need for ad hoc tricks to implement it numerically.
Fast calculation of two-electron-repulsion integrals: a numerical approach
Lopes, Pedro E M
2016-01-01
An alternative methodology to evaluate two-electron-repulsion integrals based on numerical approximation is proposed. Computational chemistry has branched into two major fields with methodologies based on quantum mechanics and classical force fields. However, there are significant shadowy areas not covered by any of the available methods. Many relevant systems are often too big for traditional quantum chemical methods while being chemically too complex for classical force fields. Examples include systems in nanomedicine, studies of metalloproteins, etc. There is an urgent need to develop fast quantum chemical methods able to study large and complex systems. This work is a proof-of-concept on the numerical techniques required to develop accurate and computationally efficient algorithms for the fast calculation of electron-repulsion integrals, one of the most significant bottlenecks in the extension of quantum chemistry to large systems. All concepts and calculations were developed for the three-center integral...
A Calculation Approach to Elastic Constants of Crystallines at High Pressure and Finite Temperature
Institute of Scientific and Technical Information of China (English)
向士凯; 蔡灵仓; 张林; 经福谦
2002-01-01
Elastic constants of Na and Li metals are calculated successfully for temperatures up to 350 K and pressures up to 30 GPa using a scheme that involves no adjustable parameters. The elastic constants are assumed to depend only on an effective pair potential, which is itself determined solely by the average interatomic distance. Temperature affects the elastic constants by changing the equilibrium state. The elastic constants are obtained by fitting the relationship between total energy and strain tensor, using the new set of lattice parameters obtained by calculating the displacement of atoms at finite temperature and fixed pressure. The relationship between the effective pair potential and the interatomic distance is fitted using a series of cohesive-energy data corresponding to different lattice parameters.
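The energy-strain fitting step described above can be sketched generically: apply small strains, fit the total energy to a parabola, and read the elastic constant from the quadratic coefficient. The volume, strain range, and numerical values below are assumptions for illustration, not the paper's data.

```python
import numpy as np

def elastic_constant(strains, energies, v0):
    """Fit E(eps) ~ a2*eps^2 + a1*eps + a0 and return C = 2*a2/v0."""
    a2, _a1, _a0 = np.polyfit(strains, energies, 2)
    return 2.0 * a2 / v0

# synthetic check: E = E0 + 0.5*V0*C*eps^2 with C = 50, V0 = 2 (arbitrary units)
eps = np.linspace(-0.01, 0.01, 11)
energies = 100.0 + 0.5 * 2.0 * 50.0 * eps ** 2
print(round(elastic_constant(eps, energies, 2.0), 6))  # → 50.0
```

In practice one strain tensor component is perturbed at a time, and the curvature of the total energy with respect to that component yields the corresponding elastic constant.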
An Approach to Calculate the Efficiency for an N-Receiver Wireless Power Transfer System
Directory of Open Access Journals (Sweden)
Thabat Thabet
2015-09-01
A wireless power transfer system with more than one receiver is a realistic proposition for charging multiple devices such as phones and tablets. It is therefore necessary to consider systems with a single transmitter and multiple receivers in terms of efficiency; current offerings only consider single-device charging systems. A problem encountered is that the efficiency of one receiver can be affected by another because of the mutual inductance between them. In this paper, an efficiency calculation method is presented for a wireless power transfer system with one to N receivers. The mutual inductance between coils is calculated implicitly for different spatial positions and verified by practical experimentation. The effect of changing parameters such as resonant frequency, coil size and distance between coils on the efficiency has been studied. A clarification of the special performance of a wireless power transfer system at a specific point is also presented.
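The abstract does not give its N-receiver formula; as background, a single-receiver sketch using the standard series-resonant two-coil model shows how the mutual inductance M enters the link efficiency. All component values here are assumptions, not the paper's.

```python
import math

def link_efficiency(omega, M, R1, R2, RL):
    """Two-coil series-resonant link efficiency at resonance."""
    refl = (omega * M) ** 2 / (R2 + RL)          # impedance reflected into the primary
    return refl / (R1 + refl) * RL / (R2 + RL)   # primary split times secondary split

omega = 2 * math.pi * 100e3          # assumed 100 kHz resonant frequency
for M in (1e-6, 2e-6, 4e-6):         # stronger coupling -> higher efficiency
    print(f"M = {M:.0e} H  eta = {link_efficiency(omega, M, 0.5, 0.5, 10.0):.3f}")
```

With multiple receivers, cross-coupling terms between the receiver coils enter the impedance matrix, which is precisely the complication the paper addresses.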
Papp, Z
1996-01-01
We demonstrate the feasibility and efficiency of the Coulomb-Sturmian separable expansion method for generating accurate solutions of the Faddeev equations. Results obtained with this method are reported for several benchmark cases of bosonic and fermionic three-body systems. Correct bound-state results in agreement with the ones established in the literature are achieved for short-range interactions. We outline the formalism for the treatment of three-body Coulomb systems and present a bound-state calculation for a three-boson system interacting via Coulomb plus short-range forces. The corresponding result is in good agreement with the answer from a recent stochastic-variational-method calculation.
McMillin, Gwendolyn A; Marin, Stephanie J; Johnson-Davis, Kamisha L; Lawlor, Bryan G; Strathmann, Frederick G
2015-02-01
The major objective of this research was to propose a simplified approach for the evaluation of medication adherence in chronic pain management patients, using liquid chromatography time-of-flight (TOF) mass spectrometry, performed in parallel with select homogeneous enzyme immunoassays (HEIAs). We called it a "hybrid" approach to urine drug testing. The hybrid approach was defined based on anticipated positivity rates, availability of commercial reagents for HEIAs, and assay performance, particularly analytical sensitivity and specificity for drug(s) of interest. Subsequent to implementation of the hybrid approach, time to result was compared with that observed with other urine drug testing approaches. Opioids, benzodiazepines, zolpidem, amphetamine-like stimulants, and methylphenidate metabolite were detected by TOF mass spectrometry to maximize specificity and sensitivity of these 37 drug analytes. Barbiturates, cannabinoid metabolite, carisoprodol, cocaine metabolite, ethyl glucuronide, methadone, phencyclidine, propoxyphene, and tramadol were detected by HEIAs that performed adequately and/or for which positivity rates were very low. Time to result was significantly reduced compared with the traditional approach. The hybrid approach to urine drug testing provides a simplified and analytically specific testing process that minimizes the need for secondary confirmation. Copyright© by the American Society for Clinical Pathology.
A novel approach to calculating the thermic effect of food in a metabolic chamber.
Ogata, Hitomi; Kobayashi, Fumi; Hibi, Masanobu; Tanaka, Shigeho; Tokuyama, Kumpei
2016-02-01
The thermic effect of food (TEF) is a well-known concept despite being difficult to measure. The gold standard for evaluating the TEF is the difference in energy expenditure between fed and fasting states (ΔEE). Alternatively, energy expenditure at zero activity (EE0) is estimated from the intercept of the linear relationship between energy expenditure and physical activity, to eliminate activity thermogenesis from the measurement; the TEF is then calculated as the difference between EE0 and the postabsorptive resting metabolic rate (RMR) or sleeping metabolic rate (SMR). However, the accuracy of these alternative methods has been questioned. To improve TEF estimation, we propose an original method that calculates EE0 using physical activity integrated over a specific time interval. We aimed to identify which of the alternative TEF calculation methods return reasonable estimates, that is, positive values close to ΔEE. Seven men participated in two sessions (with and without breakfast) of whole-body indirect calorimetry, and physical activity was monitored with a triaxial accelerometer. Estimates of TEF by three simplified methods were compared to ΔEE. ΔEE, EE0 above SMR, and our original method returned positive values for the TEF after breakfast in all measurements. TEF estimates from our original method were indistinguishable from those based on ΔEE, whereas estimates as EE0 above RMR and EE0 above SMR were slightly lower and higher, respectively. Our original method was the best of the three simplified TEF methods, as it provided positive estimates in all measurements that were close to the gold-standard values.
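The intercept-based estimate described above can be sketched as a simple linear regression; the activity counts, EE values, and SMR below are invented for illustration.

```python
import numpy as np

def tef_from_intercept(activity, ee, smr):
    """TEF = EE0 - SMR, with EE0 the zero-activity intercept of EE vs activity."""
    _slope, ee0 = np.polyfit(activity, ee, 1)
    return ee0 - smr

activity = np.array([0.1, 0.3, 0.5, 0.8, 1.2])   # accelerometer counts (arbitrary)
ee = 1.10 + 0.40 * activity                      # kcal/min, exact line for the demo
print(round(tef_from_intercept(activity, ee, 0.95), 4))  # EE0 = 1.10, SMR = 0.95 → 0.15
```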
Tang, Grace; Earl, Matthew A.; Luan, Shuang; Wang, Chao; Cao, Daliang; Yu, Cedric X.; Naqvi, Shahid A.
2008-09-01
Dose calculations for radiation arc therapy are traditionally performed by approximating continuous delivery arcs with multiple static beams. For 3D conformal arc treatments, the shape and weight variation per degree is usually small enough to allow arcs to be approximated by static beams separated by 5°-10°. But with intensity-modulated arc therapy (IMAT), the variation in shape and dose per degree can be large enough to require a finer angular spacing. With the increase in the number of beams, a deterministic dose calculation method, such as collapsed-cone convolution/superposition, will require proportionally longer computational times, which may not be practical clinically. We propose to use a homegrown Monte Carlo kernel-superposition technique (MCKS) to compute doses for rotational delivery. The IMAT plans were generated with 36 static beams, which were subsequently interpolated into finer angular intervals for dose calculation to mimic the continuous arc delivery. Since MCKS uses random sampling of photons, the dose computation time only increased insignificantly for the interpolated-static-beam plans that may involve up to 720 beams. Ten past IMRT cases were selected for this study. Each case took approximately 15-30 min to compute on a single CPU running Mac OS X using the MCKS method. The need for a finer beam spacing is dictated by how fast the beam weights and aperture shapes change between the adjacent static planning beam angles. MCKS, however, obviates the concern by allowing hundreds of beams to be calculated in practically the same time as for a few beams. For more than 43 beams, MCKS usually takes less CPU time than the collapsed-cone algorithm used by the Pinnacle3 planning system.
McVeety, Bruce D.; Hites, Ronald A.
A mass balance model was developed to explain the movement of polycyclic aromatic hydrocarbons (PAH) into and out of Siskiwit Lake, which is located on a wilderness island in northern Lake Superior. Because of its location, the PAH found in this lake must have originated exclusively from atmospheric sources. Using gas chromatographic mass spectrometry, 11 PAH were quantified in rain, snow, air, lake water, sediment core and sediment trap samples. From the dry deposition fluxes, an aerosol deposition velocity of 0.99 ± 0.15 cm s⁻¹ was calculated for indeno[1,2,3-cd]pyrene and benzo[ghi]perylene, two high-molecular-weight PAH which are not found in the gas phase. Dry aerosol deposition was found to dominate the wet removal mechanism by an average ratio of 9:1. The dry gas flux was negative, indicating that surface volatilization was taking place; it accounted for 10-80% of the total output flux, depending on the volatility of the PAH. The remaining PAH were lost to sedimentation. From the dry gas flux, an overall mass transfer coefficient for PAH was calculated to be 0.18 ± 0.06 m d⁻¹. In this case, the overall mass transfer is dominated by the liquid-phase resistance.
Approach to Improve Speed of Sound Calculation within PC-SAFT Framework
DEFF Research Database (Denmark)
Liang, Xiaodong; Maribo-Mogensen, Bjørn; Thomsen, Kaj;
2012-01-01
An extensive comparison of SRK, CPA and PC-SAFT for speed of sound in normal alkanes has been performed. The results reveal that PC-SAFT captures the curvature of speed of sound better than cubic EoS but the accuracy is not satisfactory. Two approaches have been proposed to improve PC-SAFT’s accu...... keeping acceptable accuracy for the primary properties, i.e. vapor pressure (2.1%) and liquid density (1.5%). The two approaches have also been applied to methanol, and both give very good results....
Teo, Boon K.; Li, Wai-Kee
2011-01-01
This article is divided into two parts. In the first part, the atomic unit (au) system is introduced and the scales of time, space (length), and speed, as well as those of mass and energy, in the atomic world are discussed. In the second part, the utility of atomic units in quantum mechanical and spectroscopic calculations is illustrated with…
DEFF Research Database (Denmark)
Göksu, Ömer; Teodorescu, Remus; Bak-Jensen, Birgitte;
2012-01-01
As more renewable energy sources, especially wind turbines, are installed in the power system, analysis of the power system with renewable energy sources becomes more important. Short-circuit calculation is a well-known fault analysis method which is widely used for early-stage analysis...... and design purposes and tuning of the network protection equipment. However, due to the current-controlled power-converter-based grid connection of the wind turbines, short-circuit calculation cannot be performed in its current form for networks with power-converter-based wind turbines. In this paper......, an iterative approach for short-circuit calculation of networks with power-converter-based wind turbines is developed for both symmetrical and asymmetrical short-circuit grid faults. As a contribution to existing solutions, negative-sequence current injection from the wind turbines is also taken into account...
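As an illustration of why iteration is needed (this is not the paper's algorithm), a converter that injects a magnitude-limited current responds to the fault-bus voltage, which in turn depends on that injection; a fixed-point loop resolves the circular dependence. The Thevenin impedance, fault impedance, and current limit below are all assumed values.

```python
def fault_voltage(E=1.0, Zth=0.1j, Zf=0.05j, Imax=1.2, tol=1e-9, iters=200):
    """Fixed-point iteration for the fault-bus voltage with a limited current source."""
    V = E + 0j                          # start from the pre-fault voltage
    for _ in range(iters):
        # converter injects a magnitude-limited reactive current that supports V
        Iwt = -1j * Imax * V / abs(V)
        # nodal equation at the fault bus: V*(1/Zth + 1/Zf) = E/Zth + Iwt
        V_new = (E / Zth + Iwt) / (1.0 / Zth + 1.0 / Zf)
        if abs(V_new - V) < tol:
            break
        V = V_new
    return V

print(f"|V_fault| = {abs(fault_voltage()):.4f} pu")  # vs. 0.3333 pu without injection
```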
Institute of Scientific and Technical Information of China (English)
Z.J.YANG; A.J.DEEKS
2008-01-01
A frequency-domain approach based on the semi-analytical scaled boundary finite element method (SBFEM) was developed to calculate dynamic stress intensity factors (DSIFs) at bimaterial interface cracks subjected to transient loading. Because the stress solutions of the SBFEM in the frequency domain are analytical in the radial direction, and the complex stress singularity at the bimaterial interface crack tip is explicitly represented in the stress solutions, the mixed-mode DSIFs were calculated directly by definition. The complex frequency-response functions of the DSIFs were then used with the fast Fourier transform (FFT) and the inverse FFT to calculate time histories of the DSIFs. A benchmark example was modelled. Good results were obtained with a small number of degrees of freedom owing to the semi-analytical nature of the SBFEM.
van Wüllen, Christoph
2009-10-29
Antiferromagnetic coupling in multinuclear transition metal complexes usually leads to electronic ground states that cannot be described by a single Slater determinant and that are therefore difficult to describe by Kohn-Sham density functional methods. Density functional calculations in such cases are usually converged to broken symmetry solutions which break spin and, in many cases, also spatial symmetry. While a procedure exists to extract isotropic Heisenberg (exchange) coupling constants from such calculations, no such approach is yet established for the calculation of magnetic anisotropy energies or zero field splitting parameters. This work proposes such a procedure. The broken symmetry solutions are not only used to extract the exchange couplings but also single-ion D tensors which are then used to construct a (phenomenological) spin Hamiltonian, from which the magnetic anisotropy and the zero-field energy levels can be computed. The procedure is demonstrated for a bi- and a trinuclear Mn(III) model compound.
Two $\\Lambda(1405)$ states in a chiral unitary approach with a fully-calculated loop function
Dong, Fang-Yong; Pang, Jing-Long
2016-01-01
The Bethe-Salpeter equation is solved in the framework of unitary coupled-channel approximation by using the pseudoscalar meson-baryon octet interaction. The loop function of the intermediate meson and baryon is deduced accurately in a fully dimensional regularization scheme, where the off-shell correction is supplemented. Two $\\Lambda(1405)$ states are generated dynamically in the strangeness $S=-1$ and isospin $I=0$ sector, and their masses, decay widths and couplings to the meson and the baryon are similar to those values obtained in the on-shell factorization. However, the scattering amplitudes at these two poles become weaker than the cases in the on-shell factorization.
Energy Technology Data Exchange (ETDEWEB)
Whitfield, R. G.; Buehring, W. A.; Bassett, G. W. (Decision and Information Sciences)
2011-04-08
Get a GRiP (Gravitational Risk Procedure) on risk by using an approach inspired by the physics of gravitational forces between body masses! In April 2010, U.S. Department of Homeland Security Special Events staff (Protective Security Advisors [PSAs]) expressed concern about how to calculate risk given measures of consequence, vulnerability, and threat. The PSAs believed that it is not 'right' to assign zero risk, as a multiplicative formula would imply, to cases in which the threat is reported to be extremely small, and perhaps could even be assigned a value of zero, but for which consequences and vulnerability are potentially high. They needed a different way to aggregate the components into an overall measure of risk. To address these concerns, GRiP was proposed and developed. The inspiration for GRiP is Sir Isaac Newton's universal law of gravitation: the attractive force between two bodies is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. The total force on one body is the sum of the forces from the 'other bodies' that influence that body. In the case of risk, the 'other bodies' are the components of risk (R): consequence, vulnerability, and threat (denoted C, V, and T, respectively). GRiP treats risk as if it were a body within a cube. Each vertex (corner) of the cube represents one of the eight combinations of minimum and maximum 'values' for consequence, vulnerability, and threat. The risk at each of the vertices is a variable that can be set. Naturally, maximum risk occurs when consequence, vulnerability, and threat are at their maximum values; minimum risk occurs when they are at their minimum values. Analogous to gravitational forces among body masses, the GRiP formula states that the risk at any interior point of the cube depends on the squares of the distances from that point to each of the eight vertices. The risk
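A minimal sketch of the vertex-weighting idea described above, assuming (as the abstract suggests but does not fully specify) that the risk at an interior point of the unit (C, V, T) cube is the inverse-square-distance weighted average of the eight vertex risks; the coordinates and vertex values are invented for illustration.

```python
import itertools

def grip_risk(point, vertex_risk):
    """Inverse-square-distance weighted average of the eight vertex risks."""
    num = den = 0.0
    for v in itertools.product((0.0, 1.0), repeat=3):
        d2 = sum((p - q) ** 2 for p, q in zip(point, v))
        if d2 == 0.0:
            return vertex_risk[v]        # the point sits exactly on a vertex
        num += vertex_risk[v] / d2
        den += 1.0 / d2
    return num / den

# assumed vertex risks: 0 at the all-minimum vertex, 100 at the all-maximum, 50 elsewhere
risks = {v: 50.0 for v in itertools.product((0.0, 1.0), repeat=3)}
risks[(0.0, 0.0, 0.0)] = 0.0
risks[(1.0, 1.0, 1.0)] = 100.0
print(round(grip_risk((0.5, 0.5, 0.5), risks), 1))  # symmetric centre → 50.0
```

Unlike a multiplicative C·V·T formula, this weighting never returns zero risk when threat alone is near zero, which is the PSAs' concern described above.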
DeVane, Russell; Space, Brian; Jansen, Thomas L C; Keyes, T
2006-12-21
The fifth order, two-dimensional Raman response in liquid xenon is calculated via a time correlation function (TCF) theory and the numerically exact finite field method. Both employ classical molecular dynamics simulations. The results are shown to be in excellent agreement, suggesting the efficacy of the TCF approach, in which the response function is written approximately in terms of a single classical multitime TCF.
Energy Technology Data Exchange (ETDEWEB)
Schramm, S.M., E-mail: schramm@physics.leidenuniv.nl [Leiden University, Kamerlingh Onnes Laboratorium, P.O. Box 9504, NL-2300 RA Leiden (Netherlands); Pang, A.B. [School of Physics and Electronic Information, Huaibei Normal University, Huaibei, Anhui 235000 (China); Department of Physics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (China); Altman, M.S. [Department of Physics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (China); Tromp, R.M. [Leiden University, Kamerlingh Onnes Laboratorium, P.O. Box 9504, NL-2300 RA Leiden (Netherlands); IBM T.J. Watson Research Center, 1101 Kitchawan Road, P.O. Box 218, Yorktown Heights, NY 10598 (United States)
2012-04-15
We introduce an extended Contrast Transfer Function (CTF) approach for the calculation of image formation in low energy electron microscopy (LEEM) and photo electron emission microscopy (PEEM). This approach considers aberrations up to fifth order, appropriate for image formation in state-of-the-art aberration-corrected LEEM and PEEM. We derive Scherzer defocus values for both weak and strong phase objects, as well as for pure amplitude objects, in non-aberration-corrected and aberration-corrected LEEM. Using the extended CTF formalism, we calculate contrast and resolution of one-dimensional and two-dimensional pure phase, pure amplitude, and mixed phase and amplitude objects. PEEM imaging is treated by adapting this approach to the case of incoherent imaging. Based on these calculations, we show that the ultimate resolution in aberration-corrected LEEM is about 0.5 nm, and in aberration-corrected PEEM about 3.5 nm. The aperture sizes required to achieve these ultimate resolutions are precisely determined with the CTF method. The formalism discussed here is also relevant to imaging with high resolution transmission electron microscopy. -- Highlights: ▶ We introduce an extended Contrast Transfer Function (CTF) approach for the calculation of image formation in low energy electron microscopy (LEEM) and photo electron emission microscopy (PEEM). ▶ We consider aberrations up to fifth order, appropriate for image formation in state-of-the-art aberration-corrected LEEM and PEEM. ▶ We derive Scherzer defocus values for both weak and strong phase objects, as well as for pure amplitude objects, in non-aberration-corrected and aberration-corrected LEEM. ▶ We show that the ultimate resolution in aberration-corrected LEEM is about 0.5 nm, and in aberration-corrected PEEM about 3.5 nm.
A new approach for thermal performance calculation of cross-flow heat exchangers
Energy Technology Data Exchange (ETDEWEB)
Navarro, H.A. [Universidade Estadual Paulista, Rio Claro (Brazil). Dpto. de Estatistica; Cabezas-Gomez, L. [Universidade de Sao Paulo, Sao Carlos (Brazil). Dpto. de Engenharia Mecanica
2005-08-01
A new numerical methodology for thermal performance calculation in cross-flow heat exchangers is developed. Effectiveness-number of transfer units (ε-NTU) data for several standard and complex flow arrangements are obtained using this methodology. The results are validated through comparison with analytical solutions for one-pass cross-flow heat exchangers with one to four rows, and with an approximate series solution for an unmixed-unmixed heat exchanger, obtaining very small errors in all cases. New effectiveness data for some complex configurations are provided. (author)
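For the unmixed-unmixed arrangement mentioned above, a widely used closed-form approximation (a textbook correlation, not the authors' numerical method) relates the effectiveness ε to NTU and the capacity ratio Cr = Cmin/Cmax:

```python
import math

def eps_crossflow_unmixed(ntu, cr):
    """Approximate effectiveness, both streams unmixed (cr = Cmin/Cmax)."""
    return 1.0 - math.exp((ntu ** 0.22 / cr) * (math.exp(-cr * ntu ** 0.78) - 1.0))

# sanity check: as cr -> 0 the relation reduces to the single-stream limit 1 - exp(-NTU)
print(round(eps_crossflow_unmixed(2.0, 1e-9), 4))  # ≈ 1 - exp(-2) = 0.8647
```

Numerical methodologies such as the paper's are needed because closed forms like this exist only for a handful of flow arrangements.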
A compressive sensing approach to the calculation of the inverse data space
Khan, Babar Hasan
2012-01-01
Seismic processing in the Inverse Data Space (IDS) has its advantages: for example, the task of removing multiples simply becomes muting the zero-offset and zero-time data in the inverse domain. Calculation of the inverse data space by sparse inversion techniques has seen mitigation of some artifacts. We reformulate the problem by taking advantage of developments from the field of compressive sensing. The seismic data are compressed at the sensor level by recording projections of the traces. We then process this compressed data directly to estimate the inverse data space. Because of the smaller data set, we also gain in terms of computational complexity.
Chong, D. P.; Langhoff, S. R.
1986-01-01
A modified coupled pair functional (CPF) method is presented for the configuration interaction problem that dramatically improves properties for cases where the Hartree-Fock reference configuration is not a good zeroth-order wave function description. It is shown that the tendency for CPF to overestimate the effect of higher excitations arises from the choice of the geometric mean for the partial normalization denominator. The modified method is demonstrated for ground state dipole moment calculations of the NiH, CuH, and ZnH transition metal hydrides, and compared to singles-plus-doubles configuration interaction and the Ahlrichs et al. (1984) CPF method.
Energy Technology Data Exchange (ETDEWEB)
Sieres, Jaime; Fernandez-Seara, Jose [University of Vigo, Area de Maquinas y Motores Termicos, E.T.S. de Ingenieros Industriales, Vigo (Spain)
2008-08-15
The ammonia purification process is critical in ammonia-water absorption refrigeration systems. In this paper, a detailed and a simplified analytical model are presented to characterize the performance of the ammonia rectification process in packed columns. The detailed model is based on mass and energy balances and simultaneous heat and mass transfer equations. The simplified model is derived and compared with the detailed model. The range of applicability of the simplified model is determined. A calculation procedure based on the simplified model is developed to determine the volumetric mass transfer coefficients in the vapour phase from experimental data. Finally, the proposed model and other simple calculation methods found in the general literature are compared. (orig.)
Evaluation of a mass-balance approach to determine consumptive water use in northeastern Illinois
Mills, Patrick C.; Duncker, James J.; Over, Thomas M.; Marian Domanski,; ,; Engel, Frank
2014-01-01
A principal component of evaluating and managing water use is consumptive use. This is the portion of water withdrawn for a particular use, such as residential, which is evaporated, transpired, incorporated into products or crops, consumed by humans or livestock, or otherwise removed from the immediate water environment. The amount of consumptive use may be estimated by a water (mass)-balance approach; however, because of the difficulty of obtaining necessary data, its application typically is restricted to the facility scale. The general governing mass-balance equation is: Consumptive use = Water supplied - Return flows.
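The governing equation quoted above is trivially codified; a minimal sketch aggregating facility-scale records follows, with facility names and flow figures invented for illustration.

```python
def consumptive_use(supplied, returned):
    """Consumptive use = water supplied - return flows."""
    return supplied - returned

# assumed facility-scale records: (supplied, returned) in million gallons per day
facilities = {"residential": (10.0, 7.5), "industrial": (4.0, 3.2)}
total = sum(consumptive_use(s, r) for s, r in facilities.values())
print(round(total, 2))  # → 3.3
```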
Evaluation of the decision-making process in the conservative approach to small testicular masses.
Benelli, Andrea; Varca, Virginia; Derchi, Lorenzo; Gregori, Andrea; Carmignani, Giorgio; Simonato, Alchiede
2017-04-28
We evaluated the clinical outcome of patients treated with a conservative approach for small testicular masses (STMs). We analyzed the steps that led to the selection of the therapeutic approach, from clinical presentation through imaging and laboratory studies. We considered 18 patients who underwent an organ-sparing approach for STMs from 2005 to 2014. The selection criteria were the dimension of the mass and the absence of clinical, laboratory and/or radiological suspicion of malignancy. Preoperative scrotal ultrasound (US) was carried out in all patients by the same radiologist. The postoperative fertility profile was evaluated in patients younger than 40 years. We performed 13 enucleations, one partial orchiectomy (PO) and four active surveillances. During surgery, a frozen section examination (FSE) was always requested, and no discrepancies were noted between its results and the definitive histology. Only one seminomatous tumor was identified; the remaining masses were four necroses, four epidermoid cysts, three Leydig tumors, one Sertoli tumor and one chronic orchitis. After a mean follow-up of 41.6 ± 24.7 months, all patients were free of disease and hypogonadism, and five of them achieved fatherhood after surgery. Clinical and instrumental evaluation allowed an accurate selection of patients eligible for the organ-preserving approach. We believe that testis-sparing surgery yields good functional and aesthetic results in patients with benign lesions; it is a safe option for STMs when a reliable pathologist performs FSE, and is an important goal in young patients with a desire for fatherhood.
Directory of Open Access Journals (Sweden)
Michael J. Leamy
2011-12-01
Dispersion calculations are presented for cylindrical carbon nanotubes using a manifold-based continuum-atomistic finite element formulation combined with Bloch analysis. The formulated finite elements allow any (n,m) chiral nanotube, or mixed tubes formed by periodically repeating heterojunctions, to be examined quickly and accurately using only three input parameters (radius, chiral angle, and unit cell length) and a trivial structured mesh, thus avoiding the tedious geometry generation and energy minimization tasks associated with ab initio and lattice-dynamics-based techniques. A critical assessment of the technique is pursued to determine the validity range of the resulting dispersion calculations, and to identify any dispersion anomalies. Two small anomalies in the dispersion curves are documented, which can be easily identified and therefore rectified. They are a difficulty in achieving a zero-energy point for the acoustic twisting phonon, and a branch veering in nanotubes with nonzero chiral angle. The twisting mode quickly restores its correct group velocity as the wavenumber increases, while the branch veering is associated with a rapid exchange of eigenvectors at the veering point, which also lessens its impact. By taking the two noted anomalies into account, accurate predictions of acoustic and low-frequency optical branches can be achieved out to the midpoint of the first Brillouin zone.
An approach of sensitivity and uncertainty analyses methods installation in a safety calculation
Energy Technology Data Exchange (ETDEWEB)
Pepin, G.; Sallaberry, C. [Agence nationale pour la gestion des dechets radioactifs (Andra), DS/CS, 92 - Chatenay-Malabry (France)
2003-07-01
Simulation of migration in deep geological formations leads to solving convection-diffusion equations in porous media, associated with the computation of hydrogeologic flow. Different time scales (simulation over 1 million years), spatial scales, and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through the use of uncertainty and sensitivity analysis. ANDRA (French national agency for the management of radioactive wastes) carries out studies on the treatment of input data uncertainties and their propagation in safety models, in order to quantify the influence of input data uncertainties on the various safety indicators selected. ANDRA's approach initially consists of two studies undertaken in parallel: the first is an international review of the choices made by ANDRA's foreign counterparts in carrying out their uncertainty and sensitivity analyses; the second is a review of the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case which gathers all the specific constraints (physical, numerical and data-processing) of the problem studied by ANDRA.
Bodermann, Bernd; Ehret, Gerd
2005-08-01
High resolution optical microscopy is still an important instrument for the dimensional characterisation of micro- and nanostructures. For precise measurements of dimensional quantities, highly accurate modelling of the optical imaging on the basis of rigorous diffraction calculation is essential, accounting for both polarisation effects and the 2D or 3D geometry of the structures. Some applications, for example linewidth measurements on photomasks, demand measurement uncertainties of a few nm or less. For these requirements the numerical and model-induced uncertainties, respectively, may be limiting factors even for sophisticated modelling software. At PTB we use two different rigorous grating diffraction models for modelling the intensity distribution in the image plane: the rigorous coupled wave analysis (RCWA) method and the finite element method (FEM). In order to evaluate the performance of both methods we performed comparative calculations on the basis of a test suite of binary chrome-on-glass gratings with line widths ranging from 100 nm to 10 μm and with line/space ratios between 0.01 and 100. We present results of this comparison for TE, TM and unpolarised Koehler illumination of the grating. Residual deviations between the two methods, the resulting measurement uncertainty, and the corresponding computation times are considered.
Mass-balance Approach to Interpreting Weathering Reactions in Watershed Systems
Bricker, O. P.; Jones, B. F.; Bowser, C. J.
2003-12-01
The mass-balance approach is conceptually simple and has found widespread applications in many fields over the years. For example, chemists use mass balance (Stumm and Morgan, 1996) to sum the various species containing an element in order to determine the total amount of that element in the system (free ion, complexes). Glaciologists use mass balance to determine the changes in mass of glaciers (Mayo et al., 1972 and references therein). Groundwater hydrologists use this method to interpret changes in water balance in groundwater systems (Rasmussen and Andreasen, 1959; Bredehoeft et al., 1982; Heath, 1983; Konikow and Mercer, 1988; Freeze and Cherry, 1979; Ingebritsen and Sanford, 1998). This method has also been used to determine changes in chemistry along a flow path (Plummer et al., 1983; Bowser and Jones, 1990) and to quantify lake hydrologic budgets using stable isotopes (Krabbenhoft et al., 1994). Blum and Erel (see Chapter 5.12) discuss the use of strontium isotopes, Chapelle (see Chapter 5.14) treats carbon isotopes in groundwater, and Kendall and Doctor (see Chapter 5.11) and Kendall and McDonnell (1998) discuss the use of stable isotopes in mass balance. Although the method is conceptually simple, the parameters that define a mass balance are not always easy to measure. Watershed investigators use mass balance to determine physical and chemical changes in watersheds (Garrels and Mackenzie, 1967; Plummer et al., 1991; O'Brien et al., 1997; Drever, 1997). Here we focus on describing the mass-balance approach to interpreting weathering reactions in watershed systems, including shallow groundwater. Because mass balance is simply an accounting of the flux of material into a system minus the flux of material out of the system, the geochemical mass-balance approach is well suited to interpreting weathering reactions in watersheds (catchments) and in other environmental settings (Drever, 1997). It is, perhaps, the most accurate and reliable way of defining
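The accounting described above (flux in minus flux out) can be sketched in a few lines. All solute names and flux values below are hypothetical illustrations, not data from the chapter:

```python
def net_weathering_release(input_flux, output_flux):
    """Geochemical mass balance for one solute in a watershed.

    Material released within the watershed (e.g. by weathering
    reactions) is the output flux minus the input flux; a positive
    value indicates net release, a negative value net retention.
    Fluxes are in consistent units, e.g. mol/ha/yr.
    """
    return output_flux - input_flux

# Hypothetical annual fluxes (mol/ha/yr): atmospheric/precipitation
# inputs versus stream-water outputs for two solutes
precip_in = {"SiO2": 5.0, "Na": 120.0}
stream_out = {"SiO2": 350.0, "Na": 300.0}

release = {s: net_weathering_release(precip_in[s], stream_out[s])
           for s in precip_in}
# release["SiO2"] -> 345.0 mol/ha/yr attributed to silicate weathering
```

In practice each term on the right-hand side is itself a product of concentration and water flux, which is where the measurement difficulty noted in the abstract enters.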
Approaches to calculate the dielectric function of ZnO around the band gap
Energy Technology Data Exchange (ETDEWEB)
Agocs, E., E-mail: agocs.emil@ttk.mta.hu [Institute for Technical Physics and Materials Science (MFA), Research Center for Natural Sciences, Konkoly Thege Rd. 29-33, 1121 Budapest (Hungary); Doctoral School of Molecular- and Nanotechnologies, Faculty of Information Technology, University of Pannonia, Egyetem u. 10, 8200 Veszprem (Hungary); Fodor, B. [Institute for Technical Physics and Materials Science (MFA), Research Center for Natural Sciences, Konkoly Thege Rd. 29-33, 1121 Budapest (Hungary); Faculty of Science, University of Pécs, 7624 Pécs, Ifjuság útja 6 (Hungary); Pollakowski, B.; Beckhoff, B.; Nutsch, A. [Physikalisch-Technische Bundesanstalt (PTB), Abbestr. 2-12, 10587 Berlin (Germany); Jank, M. [Fraunhofer Institute for Integrated Systems and Device Technology, Schottkystrasse 10, 91058 Erlangen (Germany); Petrik, P. [Institute for Technical Physics and Materials Science (MFA), Research Center for Natural Sciences, Konkoly Thege Rd. 29-33, 1121 Budapest (Hungary); Doctoral School of Molecular- and Nanotechnologies, Faculty of Information Technology, University of Pannonia, Egyetem u. 10, 8200 Veszprem (Hungary)
2014-11-28
Being one of the most sensitive methods for optical thin film metrology, ellipsometry is widely used for the characterization of zinc oxide (ZnO), a key material for optoelectronics, photovoltaics and printable electronics, in a range of critical applications. The dielectric function of ZnO has a special feature around the band gap, dominated by a relatively sharp absorption feature and an excitonic peak. In this work we summarize and compare direct (point-by-point) and parametric approaches for the description of the dielectric function. We also investigate how the choice of the wavelength range influences the result, the fit quality and the sensitivity. Results on ZnO layers prepared by sputtering are presented. - Highlights: • Dielectric function of zinc oxide thin film measured by spectroscopic ellipsometry • Direct and parametric approaches summarized and compared • Influence of chosen wavelength range on fit results investigated.
Reid, Madison L.; Evans, Stephen G.
2016-04-01
The magnitude and frequency of glacial hazards is central to the discussion of the effect of climate change on the mountain glacial environment and has persisted as a research question since the 1990s. We propose a new approach to evaluating mass flow (including landslide) hazard in the glacier environment conditioned by temporal and elevation changes in glacier-ice loss. Using digital topographic data sets and InSAR techniques we investigate the hypsometry of ice loss in a well-defined glacial environment in the southwest Coast Mountains of SW British Columbia (the Mount Meager Volcanic Complex - MMVC). The volume and elevation of major mass movements that have taken place in the MMVC since the 1930s is established and compared to the volume and hypsometry of glacial ice loss in the same time period. In the analysis, the volumes of ice loss and landslides are converted to units of mass. The elevations of a sequence of large-scale mass movements do not suggest a close correlation with the elevation or temporal sequence of greatest ice loss. Instead, the temporal relationship between the mass of ice loss and the mass lost from slopes in landslides (including ice, rock, and debris) is suggestive of a steady state. The same approach is then applied to the Cordillera Blanca (Peruvian Andes), where we show that the greatest mass moved from the glacier system by glacier-related mass flows since the 1930s corresponded generally to the period of greatest ice loss, suggesting a decay-based response to recent glacier ice loss. As in the MMVC, the elevation of mass flow events is not correlated with the estimated hypsometry of glacial ice loss; in both regions the largest landslide in the period investigated occurred from a high mountain peak defining a topographic divide and where ice loss was minimal. It thus appears that mountain glacial environments exhibit different landslide responses to glacier ice loss that may be conditioned by the rate of ice loss and strongly influenced
Breuer, H P; Petruccione, F; Breuer, Heinz-Peter; Kappler, Bernd; Petruccione, Francesco
1997-01-01
Within the framework of probability distributions on projective Hilbert space a scheme for the calculation of multitime correlation functions is developed. The starting point is the Markovian stochastic wave function description of an open quantum system coupled to an environment consisting of an ensemble of harmonic oscillators in arbitrary pure or mixed states. It is shown that matrix elements of reduced Heisenberg picture operators and general time-ordered correlation functions can be expressed by time-symmetric expectation values of extended operators in a doubled Hilbert space. This representation allows the construction of a stochastic process in the doubled Hilbert space which enables the determination of arbitrary matrix elements and correlation functions. The numerical efficiency of the resulting stochastic simulation algorithm is investigated and compared with an alternative Monte Carlo wave function method proposed first by Dalibard et al. [Phys. Rev. Lett. 68, 580 (1992)]. By means of a stan...
A Transport Equation Approach to Green Functions and Self-force Calculations
Wardell, Barry
2010-01-01
In a recent work, we presented the first application of the Poisson-Wiseman-Anderson method of `matched expansions' to compute the self-force acting on a point particle moving in a curved spacetime. The method employs two expansions for the Green function which are respectively valid in the `quasilocal' and `distant past' regimes, and which may be matched together within the normal neighbourhood. In this article, we introduce the method of matched expansions and discuss transport equation methods for the calculation of the Green function in the quasilocal region. These methods allow the Green function to be evaluated throughout the normal neighbourhood and are also relevant to a broad range of problems from radiation reaction to quantum field theory in curved spacetime and quantum gravity.
Continuum Electrostatics Approaches to Calculating pKas and Ems in Proteins
Energy Technology Data Exchange (ETDEWEB)
Gunner, Marilyn R.; Baker, Nathan A.
2016-06-20
Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energies of these reactions are dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding their underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as to outline directions for future theoretical and computational research.
Energy Technology Data Exchange (ETDEWEB)
Ortenzi, Luciano
2013-10-17
In this thesis I study the interplay between magnetism and superconductivity in itinerant magnets and superconductors. I do this by applying a semiphenomenological method to four representative compounds. In particular I use the discrepancies (whenever present) between density functional theory (DFT) calculations and the experiments in order to construct phenomenological models which explain the magnetic, superconducting and optical properties of four representative systems. I focus my attention on the superconducting and normal state properties of the recently discovered APt3P superconductors, on superconducting hole-doped CuBiSO, on the optical properties of LaFePO and finally on the ferromagnetic-paramagnetic transition of Ni3Al under pressure. At the end I present a new method which aims to describe the effect of spin fluctuations in itinerant magnets and superconductors that can be used to monitor the evolution of the electronic structure from non magnetic to magnetic in systems close to a quantum critical point.
An approach to first principles electronic structure calculation by symbolic-numeric computation
Directory of Open Access Journals (Sweden)
Akihito Kikuchi
2013-04-01
There is a wide variety of electronic structure calculations cooperating with symbolic computation. The main purpose of the latter is to play an auxiliary role (but not one without importance) to the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models, and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former case, when one uses a special atomic basis for a specific purpose, expressing the integrals as a combination of already known analytic functions may sometimes be very difficult. In the latter, one must rearrange a number of creation and annihilation operators into a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression to a tractable and computable form. This is the main motive for the introduction of symbolic computation as a forerunner of the numerical one, and their collaboration has won considerable successes. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation. The present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.
Energy Technology Data Exchange (ETDEWEB)
Whitfield, R. G.; Buehring, W. A.; Bassett, G. W. (Decision and Information Sciences)
2011-04-08
Get a GRiP (Gravitational Risk Procedure) on risk by using an approach inspired by the physics of gravitational forces between body masses! In April 2010, U.S. Department of Homeland Security Special Events staff (Protective Security Advisors [PSAs]) expressed concern about how to calculate risk given measures of consequence, vulnerability, and threat. The PSAs believed that it is not 'right' to assign zero risk, as a multiplicative formula would imply, to cases in which the threat is reported to be extremely small, and perhaps could even be assigned a value of zero, but for which consequences and vulnerability are potentially high. They needed a different way to aggregate the components into an overall measure of risk. To address these concerns, GRiP was proposed and developed. The inspiration for GRiP is Sir Isaac Newton's Universal Law of Gravitation: the attractive force between two bodies is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. The total force on one body is the sum of the forces from 'other bodies' that influence that body. In the case of risk, the 'other bodies' are the components of risk (R): consequence, vulnerability, and threat (which we denote as C, V, and T, respectively). GRiP treats risk as if it were a body within a cube. Each vertex (corner) of the cube represents one of the eight combinations of minimum and maximum 'values' for consequence, vulnerability, and threat. The risk at each of the vertices is a variable that can be set. Naturally, maximum risk occurs when consequence, vulnerability, and threat are at their maximum values; minimum risk occurs when they are at their minimum values. Analogous to gravitational forces among body masses, the GRiP formula for risk states that the risk at any interior point of the box depends on the squares of the distances from that point to each of the eight vertices. The risk
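The abstract does not state the exact GRiP formula, but the description (risk at an interior point governed by squared distances to the eight vertices) is consistent with inverse-square-distance interpolation over the cube. A minimal sketch under that assumption, with all vertex risk values purely illustrative:

```python
import itertools

def grip_risk(c, v, t, vertex_risk):
    """Inverse-square-distance interpolation over the C-V-T unit cube.

    (c, v, t) are consequence, vulnerability, and threat scaled to
    [0, 1]; vertex_risk maps each of the 8 corners (tuples of 0/1)
    to a preset risk value. Nearby corners dominate, so risk stays
    nonzero even when one component is zero -- unlike the
    multiplicative C*V*T formula the PSAs objected to.
    """
    weights, total = {}, 0.0
    for corner in itertools.product((0, 1), repeat=3):
        d2 = sum((p - q) ** 2 for p, q in zip((c, v, t), corner))
        if d2 == 0.0:
            return vertex_risk[corner]  # exactly at a corner
        weights[corner] = 1.0 / d2      # "gravitational" 1/r^2 weight
        total += weights[corner]
    return sum(w * vertex_risk[k] for k, w in weights.items()) / total

# Illustrative corner risks: 0 at the all-minimum corner, 100 at the
# all-maximum corner, proportional to the number of maxed components
vr = {k: 100.0 * sum(k) / 3.0 for k in itertools.product((0, 1), repeat=3)}
r = grip_risk(0.9, 0.9, 0.0, vr)  # high C and V, zero T: risk well above 0
```

The design point matches the motivation in the abstract: with zero threat but high consequence and vulnerability, the interpolated risk is pulled up by the nearby high-risk corners rather than collapsing to zero.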
3D Multiscale Integrated Modeling Approach of Complex Rock Mass Structures
Directory of Open Access Journals (Sweden)
Mingchao Li
2014-01-01
Based on abundant geological data of different regions and different scales in hydraulic engineering, a new approach to 3D engineering-scale and statistical-scale integrated modeling was put forward, considering the complex relationships among geological structures, discontinuities and hydraulic structures. For engineering-scale geological structures, the 3D rock mass model of the study region was built by the exact match modeling method and the reliability analysis technique. For statistical-scale jointed rock mass, the random network simulation modeling method was realized, including the Baecher structure plane model, Monte Carlo simulation, and dynamic checking of random discontinuities, and the corresponding software program was developed. Finally, the refined model was reconstructed by integrating the engineering-scale model of rock structures, the statistical-scale model of the discontinuity network, and the hydraulic structures model. It has been applied to a practical hydraulic project and provides the model basis for the analysis of hydraulic rock mass structures.
Ji, Ji; Nie, Lei; Qiao, Liang; Li, Yixin; Guo, Liping; Liu, Baohong; Yang, Pengyuan; Girault, Hubert H
2012-08-07
A versatile microreactor protocol based on microfluidic droplets has been developed for on-line protein digestion. Proteins separated by liquid chromatography are fractionated in water-in-oil droplets and digested in sequence. The microfluidic reactor acts also as an electrospray ionization emitter for mass spectrometry analysis of the peptides produced in the individual droplets. Each droplet is an enzymatic micro-reaction unit with efficient proteolysis due to rapid mixing, enhanced mass transfer and automated handling. This droplet approach eliminates sample loss, cross-contamination, non-specific absorption and memory effect. A protein mixture was successfully identified using the droplet-based micro-reactor as interface between reverse phase liquid chromatography and mass spectrometry.
Empirical approach to solid-liquid mass transfer in a three-phase sparged reactor
Energy Technology Data Exchange (ETDEWEB)
Gogoi, N.C.; Dutta, N.N. [Regional Research Laboratory, Jorhat (India). Chemical Engineering Division
1996-08-01
Solid-liquid mass transfer coefficients were determined in three-phase sparged reactors (TPSRs) using benzoic acid dissolution. Experiments were performed in three acrylic column reactors of internal diameter 0.1, 0.2 and 0.3 m respectively. The superficial gas velocities were varied up to 0.35 m s{sup -1}. Using experimental data generated in this work and data reported in the literature for a 0.4-m diameter reactor, the effect of the reactor diameter on the solid-liquid mass transfer coefficient, k{sub SL}, was investigated. It is demonstrated that an empirical approach can be used to determine k{sub SL} from an appropriate mass transfer correlation useful for the design of TPSRs. 20 refs., 5 figs., 3 tabs.
A DYNAMIC APPROACH TO CALCULATE SHADOW PRICES OF WATER RESOURCES FOR NINE MAJOR RIVERS IN CHINA
Institute of Scientific and Technical Information of China (English)
Jing HE; Xikang CHEN; Yong SHI
2006-01-01
China is experiencing serious water issues. There are many differences among the Nine Major River basins of China in the construction of dikes, reservoirs, floodgates, flood discharge projects, flood diversion projects, water ecological construction, water conservancy management, etc. The shadow prices of water resources for the Nine Major Rivers can provide suggestions to the Chinese government. This article develops a dynamic shadow price approach based on a multiperiod input-output optimizing model. Unlike previous approaches, the new model is based on the dynamic computable general equilibrium (DCGE) model to solve the problem of the marginal long-term prices of water resources. First, definitions and algorithms of DCGE are elaborated. Second, the shadow prices of water resources for the Nine Major Rivers in China over 1949-2050, computed using the 1999 National Water Conservancy input-holding-output table for the Nine Major Rivers, are listed. A conclusion of this article is that the shadow prices of water resources for the Nine Major Rivers are largely determined by the extent of scarcity. Selling prices of water resources should be revised using parameters representing the shadow prices.
Mali, Bhupesh C; Badgujar, Shamkant B; Shukla, Kunal K; Bhanushali, Paresh B
2017-02-01
We describe a chromatographic approach for the purification of urinary free light chains (FLCs), viz., lambda free light chains (λ-FLCs) and kappa free light chains (κ-FLCs). Isolated urinary FLCs were analyzed by SDS-PAGE, immunoblotting and mass spectrometry (MS). The relative molecular masses of λ-FLC and κ-FLC are 22,933.397 and 23,544.336 Da respectively. Moreover, dimer forms of each FLC were also detected in the mass spectrum, corresponding to 45,737.747 and 47,348.028 Da for λ-FLCs and κ-FLCs respectively. Peptide mass fingerprint analysis of the purified λ-FLCs and κ-FLCs yielded peptides that partially match known light chain sequences, viz., gi|218783338 and gi|48475432 respectively. The tryptic digestion profiles of the isolated FLCs indicate their distinctive nature, and they may be new additions to the dictionary of urinary proteins. This is the first report of characterization and validation of FLCs from large-volume samples by peptide sequencing. This simple and cost-effective approach to the purification of FLCs, together with the easy availability of urine samples, makes large-scale production of FLCs possible, allowing exploration of various bioclinical as well as biodiagnostic applications.
Mass Transport Modelling in low permeability Fractured Rock: Eulerian versus Lagrangian approaches.
Capilla, J. E.; Rodrigo, J.; Llopis, C.; Grisales, C.; Gomez-Hernandez, J. J.
2003-04-01
Modeling flow and mass transport in fractured rocks cannot always be successfully addressed by means of discrete fracture models, which can fail owing to the difficulty of calibrating them to experimental measurements. This is due to the need for accurate knowledge of the fracture geometry and of the two-dimensional distribution of hydrodynamic parameters on the fractures. Besides, these models tend to be too rigid, in the sense of not being able to re-adapt themselves to correct deficiencies or errors in the fracture definition. An alternative approach is to assume a pseudo-continuum medium in which fractures are represented by the introduction of discretization blocks of very high hydraulic conductivity (K). This kind of model has been successfully tested in some real cases where stochastic inversion of the flow equation has been performed to obtain equally likely K fields. However, in this framework, Eulerian mass transport modeling yields numerical dispersion and oscillations that make the analysis of tracer tests, and the inversion of concentration data to identify K fields, very difficult. In this contribution we present flow and mass transport modelling results in a fractured medium approximated by a pseudo-continuum. The case study considered is based on data from a low-permeability formation, and both Eulerian and Lagrangian approaches have been applied. K fields in fractures are modeled as realizations of a stochastic process conditioned on piezometric head data. Both multiGaussian and non-multiGaussian approaches are evaluated. The final goal of this research is to obtain K fields able to reproduce field tracer tests. Results show the important numerical problems found when applying an Eulerian approach and the possibility of avoiding them with a 3D implementation of the Lagrangian random walk method. Besides, we see how different mass transport predictions can be when Gaussian and non-Gaussian models are assumed for the K fields in fractures.
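The Lagrangian random walk method mentioned above can be sketched in one dimension: each particle is advected by the local velocity and given a Gaussian dispersive step, so the particle cloud reproduces the advection-dispersion solution without grid-based numerical dispersion. A minimal sketch with illustrative parameter values (not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk_transport(n_particles=10000, v=1.0, D=0.01,
                          dt=0.01, steps=100):
    """Lagrangian random-walk solution of 1D advection-dispersion.

    Each particle moves by v*dt (advection) plus a Gaussian step of
    standard deviation sqrt(2*D*dt) (dispersion). For an instantaneous
    point source at x = 0 the cloud after time t = steps*dt has mean
    v*t and variance 2*D*t, matching the analytical plume.
    """
    x = np.zeros(n_particles)  # all particles start at the source
    for _ in range(steps):
        x += v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), n_particles)
    return x

x = random_walk_transport()
# After t = 1: sample mean near v*t = 1.0, variance near 2*D*t = 0.02
```

Because concentrations are recovered only by binning particles at the end, no spatial differencing of sharp fronts is ever performed, which is precisely why the scheme avoids the oscillations of the Eulerian approach.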
A survey of existing and proposed classical and quantum approaches to the photon mass
Energy Technology Data Exchange (ETDEWEB)
Spavieri, G.; Quintero, J. [Centro de Fisica Fundamental, Universidad de Los Andes, 5101 Merida (Venezuela, Bolivarian Republic of); Gillies, G.T. [Department of Physics, University of Virginia, Charlottesville, VA, 22904-4714 (United States); Rodriguez, M. [Departamento de Fisica, FACYT, Universidad de Carabobo, Valencia (Venezuela, Bolivarian Republic of)
2011-02-15
Over the past twenty years, there have been several careful experimental, observational and phenomenological investigations aimed at searching for and establishing ever tighter bounds on the possible mass of the photon. There are many fascinating and paradoxical physical implications that would arise from the presence of even a very small value for it, and thus such searches have always been well motivated in terms of the new physics that would result. We provide a brief overview of the theoretical background and classical motivations for this work and the early tests of the exactness of Coulomb's law that underlie it. We then go on to address the modern situation, in which quantum physics approaches come to attention. Among them we focus especially on the implications that the Aharonov-Bohm and Aharonov-Casher class of effects have on searches for a photon mass. These arise in several different ways and can lead to experiments that might involve the interaction of magnetic dipoles, electric dipoles, or charged particles with suitable potentials. Still other quantum-based approaches employ measurements of the g-factor of the electron. Plausible target sensitivities for limits on the photon mass as sought by the various quantum approaches are in the range of 10^-53 to 10^-54 g. Possible experimental arrangements for the associated experiments are discussed. We close with an assessment of the state of the art and a prognosis for future work. (authors)
Temperature issues with white laser diodes, calculation and approach for new packages
Lachmayer, Roland; Kloppenburg, Gerolf; Stephan, Serge
2015-01-01
Bright white light sources are of significant importance for automotive front lighting systems. Today's upper class systems mainly use HID or LED light sources. As a further step, laser diode based systems offer high luminance and efficiency and allow the realization of new dynamic and adaptive light functions and styling concepts. The use of white laser diode systems in automotive applications is still limited to laboratories and prototypes, even though announcements of laser based front lighting systems have been made. But the environment conditions for vehicles and other industry sectors differ from laboratory conditions. Therefore a model of the system's thermal behavior is set up. The power loss of a laser diode is transported as thermal flux from the junction layer to the diode's case and on to the environment. Therefore its optical power is limited by the maximum junction temperature (for blue diodes typically 125 - 150 °C), the environment temperature and the diode's packaging with its thermal resistances. In a car's headlamp the environment temperature can reach up to 80 °C. As the difference between the allowed case temperature and the environment temperature becomes small or negative, the available heat flux also becomes small or negative. In early stages of LED development similar challenges had to be solved. Adapting LED packages to the conditions in a vehicle environment led to today's efficient and bright headlights. In this paper the need to transfer these results to laser diodes is shown by calculating the diodes' lifetimes based on the presented model.
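The thermal chain described above (junction to case to environment through series thermal resistances) can be sketched with the standard steady-state relation. The resistance and temperature values below are illustrative assumptions, not the paper's measured data:

```python
def junction_temperature(t_ambient, p_loss, r_jc, r_ca):
    """Steady-state junction temperature of a laser diode.

    The dissipated power flows through the junction-to-case (r_jc)
    and case-to-ambient (r_ca) thermal resistances, both in K/W:
        T_j = T_amb + P_loss * (r_jc + r_ca)
    """
    return t_ambient + p_loss * (r_jc + r_ca)

def max_dissipation(t_junction_max, t_ambient, r_jc, r_ca):
    """Maximum allowed power loss before the junction limit is hit."""
    return (t_junction_max - t_ambient) / (r_jc + r_ca)

# Illustrative numbers: 80 C headlamp environment, 125 C junction
# limit, hypothetical package with r_jc = 5 K/W and r_ca = 10 K/W
p_max = max_dissipation(125.0, 80.0, r_jc=5.0, r_ca=10.0)  # -> 3.0 W
```

The formula makes the packaging problem in the abstract concrete: at 80 °C ambient the allowed temperature rise shrinks to 45 K, so either the thermal resistances must drop or the dissipated power budget does.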
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.
Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard
2016-09-01
Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially, that is, time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUClog d and AUCor d, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUClog d and AUCor d . Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting.
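The conventional and the two proposed AUC indices can be sketched numerically. This is a minimal illustration (not the authors' code), assuming delays are positive and indifference points are already normalized to [0, 1] by the delayed amount:

```python
import numpy as np

def auc_indices(delays, indifference, transform="none"):
    """Normalized area under the delay discounting curve via the trapezoid rule.

    transform selects the x-axis scaling: "none" (conventional AUC),
    "log" (base-10 log of delay, AUClog-d), or "ordinal" (delay ranks, AUCor-d).
    AUClog-d assumes all delays are > 0 (a zero delay would need special handling).
    """
    d = np.asarray(delays, dtype=float)
    v = np.asarray(indifference, dtype=float)
    if transform == "log":
        x = np.log10(d)                          # pseudoexponential delays become ~even
    elif transform == "ordinal":
        x = np.arange(1, len(d) + 1, dtype=float)  # equal weight per delay
    else:
        x = d                                    # conventional AUC
    x = (x - x[0]) / (x[-1] - x[0])              # normalize x-axis to [0, 1]
    # Sum of trapezoid areas between successive delay-indifference pairs.
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(x)))
```

The log and ordinal transforms equalize the spacing of the delays, so indifference points at short delays contribute comparably to those at long delays, which is exactly the imbalance the abstract describes.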
Energy Technology Data Exchange (ETDEWEB)
Manfrini, Rozangela Magalhaes; Teixeira, Flavia Rodrigues; Pilo-Veloso, Dorila; Alcantara, Antonio Flavio de Carvalho, E-mail: aalcantara@zeus.qui.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Inst. de Ciencias Exatas. Dept. de Quimica; Nelson, David Lee [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Fac. de Farmacia. Dept. de Quimica; Siqueira, Ezequias Pessoa de [Centro de Pesquisas Rene Rachou (FIOCRUZ), Belo Horizonte, MG (Brazil)
2012-07-01
The stability of N-propylbutanimine (1) was investigated under different experimental conditions. The acid-catalyzed self-condensation that produced the E-enimine (4) and Z-enimine (5) was studied by experimental analyses and theoretical calculations. Since the calculations indicated that 5 had a lower energy than 4, yet 4 was the principal product, the self-condensation of 1 must be kinetically controlled. (author)
Sun, Y Y; Kim, Yong-Hyun; Lee, Kyuho; Zhang, S B
2008-10-21
Density functional theory (DFT) in the commonly used local density or generalized gradient approximation fails to describe van der Waals (vdW) interactions that are vital to organic, biological, and other molecular systems. Here, we propose a simple, efficient, yet accurate local atomic potential (LAP) approach, named DFT+LAP, for including vdW interactions in the framework of DFT. The LAPs for H, C, N, and O are generated by fitting the DFT+LAP potential energy curves of small molecule dimers to those obtained from coupled cluster calculations with single, double, and perturbatively treated triple excitations, CCSD(T). Excellent transferability of the LAPs is demonstrated by remarkable agreement with the JSCH-2005 benchmark database [P. Jurecka et al. Phys. Chem. Chem. Phys. 8, 1985 (2006)], which provides the interaction energies of CCSD(T) quality for 165 vdW and hydrogen-bonded complexes. For over 100 vdW dominant complexes in this database, our DFT+LAP calculations give a mean absolute deviation from the benchmark results less than 0.5 kcal/mol. The DFT+LAP approach involves no extra computational cost other than standard DFT calculations and no modification of existing DFT codes, which enables straightforward quantum simulations, such as ab initio molecular dynamics, on biomolecular systems, as well as on other organic systems.
An assumed pdf approach for the calculation of supersonic mixing layers
Baurle, R. A.; Drummond, J. P.; Hassan, H. A.
1992-01-01
In an effort to predict the effect that turbulent mixing has on the extent of combustion, a one-equation turbulence model is added to an existing Navier-Stokes solver with finite-rate chemistry. To average the chemical-source terms appearing in the species-continuity equations, an assumed pdf approach is also used. This code was used to analyze the mixing and combustion caused by the mixing layer formed by supersonic coaxial H2-air streams. The chemistry model employed allows for the formation of H2O2 and HO2. Comparisons are made with recent measurements using laser Raman diagnostics. Comparisons include temperature and its rms, and concentrations of H2, O2, N2, H2O, and OH. In general, good agreement with experiment was noted.
Energy Technology Data Exchange (ETDEWEB)
Kirby, B.; King, J.; Milligan, M.
2012-06-01
The anticipated increase in variable generation in the Western Interconnection over the next several years has raised concerns about how to maintain system balance, especially in smaller Balancing Authority Areas (BAAs). Given renewable portfolio standards in the West, it is possible that more than 50 gigawatts of wind capacity will be installed by 2020. Significant quantities of solar generation are likely to be added as well. The consequent increase in variability and uncertainty that must be managed by the conventional generation fleet and responsive loads has resulted in a proposal for an Energy Imbalance Market (EIM). This paper extends prior work to estimate the reserve requirements for regulation, spinning, and non-spinning reserves with and without the EIM. We also discuss alternative approaches to allocating reserve requirements and show that some apparently attractive allocation methods have undesired consequences.
Energy Technology Data Exchange (ETDEWEB)
Luboldt, W. [University Hospital Essen, Clinic and Policlinic of Angiology, Essen (Germany); Multiorgan Screening Foundation (Germany); Tryon, C. [Philips Medical Systems, Best (Netherlands); Kroll, M.; Vogl, T.J. [University Hospital Frankfurt, Department of Radiology, Frankfurt (Germany); Toussaint, T.L. [Multiorgan Screening Foundation (Germany); Holzer, K. [University Hospital Frankfurt, Department of Visceral and Vascular Surgery, Frankfurt (Germany); Hoepffner, N. [University Hospital Frankfurt, Department of Gastroenterology, Frankfurt (Germany)
2005-02-01
The purpose of this feasibility study was to design and test an algorithm for automating mass detection in contrast-enhanced CT colonography (CTC). Five patients with known colorectal masses underwent a pre-surgical contrast-enhanced (120 ml volume, 1.6 g iodine/s injection rate, 60 s scan delay) CTC in high spatial resolution (16-slice CT: collimation 16 x 0.75 mm, table feed 24 mm/0.5 s, reconstruction increment 0.5 mm). A CT-density- and volume-based algorithm searched for masses in the colonic wall, which had first been extracted by segmenting and dilating the colonic air lumen and subtracting the inner air. A radiologist analyzed the detections and causes of false positives. All masses were detected, and false positives were easy to identify. Combining CT density with volume as a cut-off is a promising approach for automating mass detection that should be further refined and also tested in contrast-enhanced MR colonography. (orig.)
Dynamical Running Mass of Quark in the Dyson-Schwinger Equation Approach
Institute of Scientific and Technical Information of China (English)
MA Wei-Xing; SHEN Peng-Nian; ZHOU Li-Juan
2002-01-01
Based on the Dyson-Schwinger equations of QCD in the "rainbow" approximation, the fully dressed quark propagator Sf(p) is investigated, and an algebraic parametrization form of the propagator is then obtained as a solution of the equations. The dressed quark amplitudes Af and Bf, which build up the fully dressed quark propagator, and the dynamical running masses Mf defined by Af and Bf are calculated for the light quarks u, d and s, respectively. Using the predicted running masses Mf, the quark condensates ⟨0|q̄(0)q(0)|0⟩ = -(0.255 GeV)³ for the u and d quarks and ⟨0|s̄(0)s(0)|0⟩ = 0.8⟨0|q̄(0)q(0)|0⟩ for the s quark, and the experimental pion decay constant fπ = 0.093 GeV, the masses of the Goldstone bosons K, π, and η are also evaluated. The numerical results show that the quark masses depend on the momentum p². The fully dressed quark amplitudes Af and Bf have the correct behavior and can be used for many purposes in our future research on nonperturbative QCD.
Bryans, P; Savin, D W
2008-01-01
We have reanalyzed SUMER observations of a parcel of coronal gas using new collisional ionization equilibrium (CIE) calculations. These improved CIE fractional abundances were calculated using state-of-the-art electron-ion recombination data for K-shell, L-shell, Na-like, and Mg-like ions of all elements from H through Zn and, additionally, Al- through Ar-like ions of Fe. Improved CIE calculations based on these data are presented here. We have also developed a new systematic method for determining the average emission measure (EM) and electron temperature (T_e) of an emitting plasma. With our new CIE data and our new approach for determining the average EM and T_e we have reanalyzed SUMER observations of the solar corona. We have compared our results with those of previous studies and found some significant differences for the derived EM and T_e. We have also calculated the enhancement of coronal elemental abundances compared to their photospheric abundances, using the SUMER observations themselves to determ...
Tröster, A.; Oettel, M.; Block, B.; Virnau, P.; Binder, K.
2012-02-01
A recently proposed method to obtain the surface free energy σ(R) of spherical droplets and bubbles of fluids, using a thermodynamic analysis of two-phase coexistence in finite boxes at fixed total density, is reconsidered and extended. Building on a comprehensive review of the basic thermodynamic theory, it is shown that from this analysis one can extract both the equimolar radius Re as well as the radius Rs of the surface of tension. Hence the free energy barrier that needs to be overcome in nucleation events where critical droplets and bubbles are formed can be reliably estimated for the range of radii that is of physical interest. It is found that the conventional theory of nucleation, where the interface tension of planar liquid-vapor interfaces is used to predict nucleation barriers, leads to a significant overestimation, and this failure is particularly large for bubbles. Furthermore, different routes to estimate the effective radius-dependent Tolman length δ(Rs) from simulations in the canonical ensemble are discussed. Thus we obtain an instructive exemplification of the basic quantities and relations of the thermodynamic theory of metastable droplets/bubbles using simulations. However, the simulation results for δ(Rs) employing a truncated Lennard-Jones system suffer to some extent from unexplained finite size effects, while no such finite size effects are found in corresponding density functional calculations. The numerical results are compatible with the expectation that δ(Rs → ∞) is slightly negative and of the order of one tenth of a Lennard-Jones diameter, but much larger systems need to be simulated to allow more precise estimates of δ(Rs → ∞).
On the role of the fine structure constant in the alpha/beta rule for calculation of particle masses
Energy Technology Data Exchange (ETDEWEB)
Greulich, Karl Otto [Fritz Lipmann Institut, Beutenbergstr.11, 07745 Jena (Germany)
2016-07-01
The masses of essentially all elementary particles are given almost exactly by the α/β rule (K. O. Greulich, Spring Meeting 2014, German Physical Society, T 99.4), i.e. particle masses depend on the fine structure constant (the Sommerfeld constant, α ≈ 1/137). This is somewhat surprising, since α is known as a spectroscopic constant rather than as a mass ratio. One key to understanding this is the observation that the Bohr energy is exactly 1/α times the ionization energy of the hydrogen atom (the Rydberg energy, 13.6 eV). The Bohr energy is in turn the de Broglie energy of the electron in the ground state (on the Bohr radius). A second mass or energy ratio, that between the rest energy of the electron and the Bohr energy, can be derived analytically to be α⁻². Both results together suggest a general dependence of rest energies or rest masses on α. Simply from the hypothesis that this observation can be extrapolated to higher values of n, the α/β rule follows immediately. Only the β term (1 or 1836.12) has to be added empirically.
Miller, R. A.; Kohl, F. J.
1977-01-01
Two FORTRAN computer programs for the interpretation of low resolution mass spectra were prepared and tested. One is for the calculation of the molecular isotopic distribution of any species from stored elemental distributions. The program requires only the input of the molecular formula and was designed for compatibility with any computer system. The other program is for the determination of all possible combinations of atoms (and radicals) which may form an ion having a particular integer mass. It also uses a simplified input scheme and was designed for compatibility with any system.
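The first program's task, building a molecular isotopic distribution from stored elemental distributions, amounts to repeated convolution of per-atom distributions. A sketch with illustrative (approximate) abundance values, not the original FORTRAN program:

```python
from collections import defaultdict

# Nominal mass -> fractional abundance per atom; values are approximate and
# included only for demonstration.
ISOTOPES = {
    "H": {1: 0.99989, 2: 0.00011},
    "C": {12: 0.9893, 13: 0.0107},
    "O": {16: 0.99757, 17: 0.00038, 18: 0.00205},
}

def convolve(a, b):
    """Combine two isotopic distributions (dicts of nominal mass -> abundance)."""
    out = defaultdict(float)
    for m1, p1 in a.items():
        for m2, p2 in b.items():
            out[m1 + m2] += p1 * p2   # independent atoms: abundances multiply
    return dict(out)

def molecular_distribution(formula):
    """Isotopic distribution of a molecule given as {element: count}, e.g. water {"H": 2, "O": 1}."""
    dist = {0: 1.0}                   # identity element for convolution
    for element, count in formula.items():
        for _ in range(count):
            dist = convolve(dist, ISOTOPES[element])
    return dist
```

For water, the resulting distribution peaks at nominal mass 18, with small satellites at 19 and 20 from ²H, ¹⁷O, and ¹⁸O.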
Center-of-mass corrections revisited: a many-body expansion approach
Mihaila, B; Mihaila, Bogdan; Heisenberg, Jochen H.
1999-01-01
A many-body expansion for the computation of the charge form factor in the center-of-mass system is proposed. For convergence testing purposes, we apply our formalism to the case of the harmonic oscillator shell model, where an exact solution exists. We also work out the details of the calculation involving realistic nuclear wave functions. Results obtained for the Argonne v18 two-nucleon and Urbana-IX three-nucleon interactions are reported. No corrections due to the meson-exchange charge density are taken into account.
Center-of-mass corrections reexamined: A many-body expansion approach
Mihaila, Bogdan; Heisenberg, Jochen H.
1999-11-01
A many-body expansion for the computation of the charge form factor in the center-of-mass system is proposed. For convergence testing purposes, we apply our formalism to the case of the harmonic oscillator shell model, where an exact solution exists. We also work out the details of the calculation involving realistic nuclear wave functions. Results obtained for the Argonne v18 two-nucleon and Urbana-IX three-nucleon interactions are reported. No corrections due to the meson-exchange charge density are taken into account.
Directory of Open Access Journals (Sweden)
Bo Dong
2015-01-01
During geomagnetic disturbances, telluric currents driven by the induced electric fields flow in the conducting Earth. An approach to modelling Earth conductivity structures with lateral conductivity changes for calculating geoelectric fields is presented in this paper. Numerical results, obtained by the Finite Element Method (FEM) with a planar grid in two-dimensional modelling and a solid grid in three-dimensional modelling, are compared, and the flow of induced telluric currents in different conductivity regions is demonstrated. A three-dimensional conductivity structure is then modelled, and the induced currents at different depths and the geoelectric field at the Earth's surface are shown. The geovoltages obtained by integrating the geoelectric field along specific paths are very important for calculations of geomagnetically induced currents (GIC) in ground-based technical networks such as power systems.
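The geovoltage step described above, integrating the surface geoelectric field along a path, can be sketched numerically. Here `e_field` is a hypothetical stand-in for the FEM solution, not part of the paper's model:

```python
import numpy as np

def geovoltage(path_xy, e_field):
    """Integrate the horizontal geoelectric field along a surface path.

    path_xy: (N, 2) array of waypoint coordinates in metres.
    e_field: callable (x, y) -> (Ex, Ey) in V/m (e.g. interpolated FEM output).
    Returns the geovoltage in volts via midpoint-rule integration of E . dl.
    """
    p = np.asarray(path_xy, dtype=float)
    seg = np.diff(p, axis=0)                 # segment vectors dl
    mid = 0.5 * (p[:-1] + p[1:])             # segment midpoints
    e = np.array([e_field(x, y) for x, y in mid])
    return float(np.sum(np.einsum("ij,ij->i", e, seg)))  # sum of E . dl per segment
```

For a uniform eastward field of 1 mV/km-scale magnitude, the voltage grows linearly with the east-west extent of the path, which is why long transmission lines are the networks most affected by GIC.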
Wu, D; Yu, W; Fritzsche, S
2016-01-01
A physical model based on a Monte-Carlo approach is proposed to calculate the ionization dynamics of warm dense matter within particle-in-cell simulations, where impact ionization, electron-ion recombination and ionization potential depression (IPD) by surrounding plasmas are taken into consideration self-consistently. Compared with other models, which are applied in the literature for plasmas near thermal equilibrium, the proposed model can also simulate the temporal relaxation of ionization, with the final thermal equilibrium determined by the competition between impact ionization and its inverse process, i.e., electron-ion recombination. Our model is general and can be applied to both single elements and alloys with quite different compositions. The proposed model is implemented into a particle-in-cell (PIC) simulation code, and the average ionization degree of bulk aluminium as a function of temperature is calculated, showing good agreement with the data provided by the FLYCHK code.
Yamada, Shunsuke; Akashi, Ryosuke; Tsuneyuki, Shinji
2016-01-01
We present an efficient post-processing method for calculating the electronic structure of nanosystems based on the divide-and-conquer approach to density functional theory (DC-DFT), in which a system is divided into subsystems whose electronic structure is solved separately. In this post process, the Kohn-Sham Hamiltonian of the total system is easily derived from the orbitals and orbital energies of subsystems obtained by DC-DFT without time-consuming and redundant computation. The resultant orbitals spatially extended over the total system are described as linear combinations of the orbitals of the subsystems. The size of the Hamiltonian matrix can be much reduced from that for conventional calculation, so that our method is fast and applicable to general huge systems for investigating the nature of electronic states.
Yamada, Shunsuke; Shimojo, Fuyuki; Akashi, Ryosuke; Tsuneyuki, Shinji
2017-01-01
We present an efficient postprocessing method for calculating the electronic structure of nanosystems based on the divide-and-conquer approach to density functional theory (DC-DFT), in which a system is divided into subsystems whose electronic structure is solved separately. In this postprocess, the Kohn-Sham Hamiltonian of the total system is easily derived from the orbitals and orbital energies of subsystems obtained by DC-DFT without time-consuming and redundant computation. The resultant orbitals spatially extended over the total system are described as linear combinations of the orbitals of the subsystems. The size of the Hamiltonian matrix can be much reduced from that for the conventional calculation, so our method is fast and applicable to general huge systems for investigating the nature of electronic states.
Lear, Sam; Cobb, Steven L
2016-03-01
The ability to calculate molecular properties such as molecular weights, isoelectric points, and extinction coefficients is vital for scientists using and/or synthesizing peptides and peptoids for research. A suite of two web utilities, Peptide Calculator and Peptoid Calculator, available free at http://www.pep-calc.com, is presented. Both tools allow the calculation of peptide/peptoid chemical formulae and molecular weights, ChemDraw structure file export, and automatic assignment of mass spectral peaks to deletion sequences and metal/protecting-group adducts. Peptide Calculator also provides a calculated isoelectric point, molar extinction coefficient, graphical peptide charge summary and β-strand contiguity profile (for aggregation-prone sequences), indicating potential regions of synthesis difficulty. In addition to the unique automatic spectral assignment features offered across both utilities, Peptoid Calculator represents a first-of-its-kind resource for researchers in the field of peptoid science. With a constantly expanding database of over 120 amino acids, non-natural peptide building blocks and peptoid building blocks, it is anticipated that Pep-Calc.com will be a valuable asset to those working on the synthesis and/or application of peptides and peptoids in the biophysical and life sciences.
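A molecular weight calculation of the kind these utilities perform reduces to a residue-mass lookup plus one water for the termini. The masses below are approximate illustrative values, not Pep-Calc.com's database:

```python
# Approximate average residue masses in daltons (illustrative values only;
# real calculators use higher-precision tables and support modified residues).
RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "L": 113.16, "I": 113.16, "D": 115.09, "E": 129.12,
    "K": 128.17, "R": 156.19, "F": 147.18, "Y": 163.18, "W": 186.21,
    "H": 137.14, "C": 103.14, "M": 131.19, "N": 114.10, "Q": 128.13,
}
WATER = 18.02  # each peptide bond releases one water, leaving one per chain

def peptide_mw(sequence):
    """Average molecular weight of an unmodified linear peptide."""
    return sum(RESIDUE_MASS[aa] for aa in sequence.upper()) + WATER
```

The same lookup structure extends naturally to peptoid monomers: only the mass table changes, which is presumably why one codebase can serve both utilities.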
An Optimization-Based Approach to Calculate Confidence Interval on Mean Value with Interval Data
Directory of Open Access Journals (Sweden)
Kais Zaman
2014-01-01
In this paper, we propose a methodology for the construction of confidence intervals on mean values with interval data for input variables in uncertainty analysis and design optimization problems. The construction of a confidence interval with interval data is known as a combinatorial optimization problem. Finding confidence bounds on the mean with interval data has generally been considered NP-hard, because it involves a search among combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the confidence interval on mean values with interval data. Through numerical experimentation, we show that the proposed confidence bound algorithms scale polynomially with an increasing number of intervals. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. In contrast to current practice for design optimization with interval data, which typically implements constraints on interval variables through bounds on mean values computed from the sampled data, the proposed construction of confidence intervals enables a more complete implementation of design optimization under interval uncertainty.
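The combinatorial character of the problem can be made concrete with a brute-force endpoint search, the exponential baseline that the paper's continuous-optimization algorithms are designed to replace. A sketch assuming a small data set and a user-supplied t critical value:

```python
import itertools
import math
import statistics

def ci_bounds_interval_data(intervals, t_crit):
    """Envelope of the t-confidence interval over all endpoint combinations.

    intervals: list of (low, high) pairs, one per observation.
    For each choice of one endpoint per interval, compute the usual
    mean +/- t * s / sqrt(n) interval and return the union (widest envelope).
    Exponential in len(intervals): illustration only, not the paper's method.
    """
    n = len(intervals)
    lo, hi = math.inf, -math.inf
    for point in itertools.product(*intervals):
        m = statistics.mean(point)
        s = statistics.stdev(point)
        half = t_crit * s / math.sqrt(n)
        lo = min(lo, m - half)
        hi = max(hi, m + half)
    return lo, hi
```

Restricting the search to endpoints is justified here because the lower bound m - t*s/sqrt(n) is concave and the upper bound convex in the data, so their extrema over the box of intervals occur at vertices; the cost is nevertheless 2^n combinations, which motivates the continuous-optimization reformulation.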
Calculation of Total Cost and Tolerance Based on Taguchi's Asymmetric Quality Loss Function Approach
Directory of Open Access Journals (Sweden)
R. S. Kumar
2009-01-01
Problem statement: The current world market forces manufacturing sectors to develop high-quality product and process designs at the minimum possible cost. About 80% of problems in production units may be attributed to 20% of design tolerance causes. While design typically represents the smallest actual cost element in products (around 5%), it leverages the largest cost influence (around 70%). Design engineers therefore continually confront the problem of designing for high quality performance at lower cost. The objectives of this study were: (i) simultaneous selection of design and manufacturing tolerances; (ii) minimization of total cost (the sum of the manufacturing cost and Taguchi's asymmetric quality cost); and (iii) determination of the minimum cost and its machining tolerance. Approach: A rotor key base assembly was considered as a case study for minimizing the assembly's total cost and machining tolerance. A global nonlinear optimization technique called the pattern search algorithm was implemented to find the optimal tolerance allocation and total cost. Results: In this study the minimum cost arrived at was 45.15 Cr, and the corresponding tolerances for the machining processes turning, drilling, face milling, face milling and drilling were 0.063, 0.0508, 0.2127, 0.2127 and 0.2540 mm, respectively, under worst-case conditions. Conclusion: The results indicated that optimization by integer programming, sequential quadratic programming and exhaustive search, nonlinear programming, genetic algorithms, simulated annealing, fuzzy logic, number set theory and Monte Carlo simulation did not give as low a total cost, and also showed that the pattern search algorithm is a robust method. Second, the method, generally termed concurrent tolerance synthesis, is well suited to engineering environments where high-quality products with low total cost are designed and manufactured.
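The pattern search family the abstract refers to can be illustrated with a generic compass search. This is a sketch of the method class on a toy objective, not the authors' implementation or their tolerance cost model:

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Minimal compass (pattern) search: derivative-free local minimization.

    Polls +/- step along each coordinate; moves to any improving point.
    If no poll improves, the step (mesh) size is shrunk until it falls
    below tol. Suitable for nonsmooth objectives like tolerance-cost models.
    """
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:            # accept any improving poll point
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink             # refine the mesh and poll again
    return x, fx
```

Because it never needs gradients, this kind of search handles the discontinuous machining-cost curves that appear in tolerance allocation, at the price of many function evaluations.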
Energy Technology Data Exchange (ETDEWEB)
Penfold, S; Miller, A [University of Adelaide, Adelaide, SA (Australia)
2015-06-15
Purpose: Stoichiometric calibration of Hounsfield Units (HUs) for conversion to proton relative stopping powers (RStPs) is vital for accurate dose calculation in proton therapy. However proton dose distributions are not only dependent on RStP, but also on relative scattering power (RScP) of patient tissues. RScP is approximated from material density but a stoichiometric calibration of HU-density tables is commonly neglected. The purpose of this work was to quantify the difference in calculated dose of a commercial TPS when using HU-density tables based on tissue substitute materials and stoichiometric calibrated ICRU tissues. Methods: Two HU-density calibration tables were generated based on scans of the CIRS electron density phantom. The first table was based directly on measured HU and manufacturer quoted density of tissue substitute materials. The second was based on the same CT scan of the CIRS phantom followed by a stoichiometric calibration of ICRU44 tissue materials. The research version of Pinnacle³ proton therapy was used to compute dose in a patient CT data set utilizing both HU-density tables. Results: The two HU-density tables showed significant differences for bone tissues; the difference increasing with increasing HU. Differences in density calibration table translated to a difference in calculated RScP of −2.5% for ICRU skeletal muscle and 9.2% for ICRU femur. Dose-volume histogram analysis of a parallel opposed proton therapy prostate plan showed that the difference in calculated dose was negligible when using the two different HU-density calibration tables. Conclusion: The impact of HU-density calibration technique on proton therapy dose calculation was assessed. While differences were found in the calculated RScP of bony tissues, the difference in dose distribution for realistic treatment scenarios was found to be insignificant.
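The HU-density lookup that a treatment planning system performs between calibration points can be sketched as piecewise-linear interpolation. The calibration points below are hypothetical round numbers, not the CIRS or stoichiometric values from the study:

```python
import numpy as np

# Hypothetical HU -> mass-density calibration points (illustrative only).
HU_TABLE = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 2000.0])
DENSITY_TABLE = np.array([0.001, 0.93, 1.00, 1.07, 1.6, 2.2])  # g/cm^3

def hu_to_density(hu):
    """Piecewise-linear lookup of mass density from a Hounsfield Unit value.

    Values outside the table are clamped to the end points, as planning
    systems typically do for out-of-range HUs (metal artifacts, air).
    """
    return float(np.interp(hu, HU_TABLE, DENSITY_TABLE))
```

The study's finding amounts to the observation that two such tables can diverge noticeably in the high-HU (bone) region while producing near-identical dose distributions for realistic beam arrangements.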
Institute of Scientific and Technical Information of China (English)
DING De-Sheng; ZHANG Yu
2004-01-01
We present a simple calculation approach for the fundamental and second-harmonic sound beams with an arbitrary distribution source in the quasilinear approximation. The analysis is based on the assumption that the source function with an arbitrary geometry and distribution is expanded into the sum of a set of two-dimensional Gaussian functions. The two- and five-dimensional integral solutions for the fundamental and second-harmonic fields are, respectively, reduced in terms of Gaussian functions and simple one-dimensional integrals. The numerical evaluation of field distributions is then greatly simplified.
Institute of Scientific and Technical Information of China (English)
Shen Weifeng; Jiang Libing; Zhang Mao; Ma Yuefeng; Jiang Guanyu; He Xiaojun
2014-01-01
Objective: To review the research methods of mass casualty incidents (MCIs) systematically and introduce the concept and characteristics of complexity science and the artificial systems, computational experiments and parallel execution (ACP) method. Data sources: We searched the PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Study selection: Articles were searched using the above keywords, and only those involving the research methods of MCIs were enrolled. Results: Research methods for MCIs have increased markedly over the past few decades. At present, the dominant research methods for MCIs are the theory-based approach, the empirical approach, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, the scenario approach and complexity science. Conclusions: This article provides an overview of the development of research methodology for MCIs. The progress of routine research approaches and of complexity science is briefly presented. Furthermore, the authors conclude that the reductionism underlying the exact sciences is not suitable for complex MCI systems, and that the only feasible alternative is complexity science. Finally, this summary is followed by a review showing that the ACP method, combining artificial systems, computational experiments and parallel execution, provides a new way to address research on complex MCIs.
Karthikeyan, N; Prince, J Joseph; Ramalingam, S; Periandy, S
2015-03-15
In this research work, the vibrational IR, polarized Raman, NMR and mass spectra of terephthalic acid (TA) were recorded. The observed fundamental peaks (IR, Raman) were assigned according to their characteristic regions. Hybrid computational calculations of the geometrical and vibrational parameters were carried out by DFT (B3LYP and B3PW91) methods with the 6-31++G(d,p) and 6-311++G(d,p) basis sets, and the corresponding results were tabulated. The molecular mass spectral data related to the base molecule and the substituent group of the compound were analyzed. The modification of the chemical properties by the reaction mechanism of the injection of the dicarboxylic group into the base molecule was investigated. The (13)C and (1)H NMR spectra were simulated using the gauge-independent atomic orbital (GIAO) method, and the absolute chemical shifts relative to TMS were compared with the experimental spectra. The electronic and optical properties (absorption wavelengths, excitation energies, dipole moment and frontier molecular orbital energies) were studied by hybrid Gaussian calculation methods. The orbital energies of different HOMO and LUMO levels were calculated, and the molecular orbital lobe overlap showed the charge transfer between the base molecule and the ligand group. From the frontier molecular orbitals (FMO), the possibility of electrophilic and nucleophilic attack was also analyzed. The NLO activity of the title compound, in terms of polarizability and hyperpolarizability, was also discussed. The molecule was fragmented with respect to atomic mass, and the dependence of the mass variation on the substituents was also studied. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Mass Spectrometry Imaging of Biological Tissue: An Approach for Multicenter Studies
Energy Technology Data Exchange (ETDEWEB)
Rompp, Andreas; Both, Jean-Pierre; Brunelle, Alain; Heeren, Ronald M.; Laprevote, Olivier; Prideaux, Brendan; Seyer, Alexandre; Spengler, Bernhard; Stoeckli, Markus; Smith, Donald F.
2015-03-01
Mass spectrometry imaging has become a popular tool for probing the chemical complexity of biological surfaces. This led to the development of a wide range of instrumentation and preparation protocols. It is thus desirable to evaluate and compare the data output from different methodologies and mass spectrometers. Here, we present an approach for the comparison of mass spectrometry imaging data from different laboratories (often referred to as multicenter studies). This is exemplified by the analysis of mouse brain sections in five laboratories in Europe and the USA. The instrumentation includes matrix-assisted laser desorption/ionization (MALDI)-time-of-flight (TOF), MALDI-QTOF, MALDI-Fourier transform ion cyclotron resonance (FTICR), atmospheric-pressure (AP)-MALDI-Orbitrap, and cluster TOF-secondary ion mass spectrometry (SIMS). Experimental parameters such as measurement speed, imaging bin width, and mass spectrometric parameters are discussed. All datasets were converted to the standard data format imzML and displayed in a common open-source software with identical parameters for visualization, which facilitates direct comparison of MS images. The imzML conversion also allowed exchange of fully functional MS imaging datasets between the different laboratories. The experiments ranged from overview measurements of the full mouse brain to detailed analysis of smaller features (depending on spatial resolution settings), but common histological features such as the corpus callosum were visible in all measurements. High spatial resolution measurements of AP-MALDI-Orbitrap and TOF-SIMS showed comparable structures in the low-micrometer range. We discuss general considerations for planning and performing multicenter studies in mass spectrometry imaging. This includes details on the selection, distribution, and preparation of tissue samples as well as on data handling. Such multicenter studies in combination with ongoing activities for reporting guidelines, a common
Institute of Scientific and Technical Information of China (English)
ZHAO Xue-ling; ZHAO Hong-bin; WANG Bin; ZHU Xiao-song; LI Lin-zhi; ZHANG Chun-qiang
2005-01-01
Objective: To treat injury of the lower cervical spine C6 to C7 with cervical lateral mass plates and T1 pedicle screws through a posterior approach. Methods: The data of 8 patients with lower cervical spine C6 or C7 injury (6 patients with fracture and dislocation in C6 and C7 and 2 with fracture in C7) were analyzed retrospectively in this study. For the preoperative American Spinal Injury Association (ASIA) classification, Grade C was found in 3 cases and Grade D in 5 cases. Screws were placed on the lateral masses and the first thoracic pedicle with the Magerl technique. Lamina or facet bone allografting was used to achieve long-term stability. Results: All the 8 patients were followed up for 5-37 months (mean: 15 months). No operative death occurred. There were no examples of aggravation of spinal cord injury, vertebral artery injury, cerebrospinal fluid leak, nerve root injury, screw malposition or back-out, loss of alignment, or implant failure. Clinical symptoms and ASIA classification were improved in all the patients. Postoperative MRI scanning confirmed satisfactory screw placement in all the cases. Conclusions: Lateral mass plates and pedicle screws through a posterior approach are safe and beneficial for patients with lower cervical spine C6 or C7 injury.
Spectrum-splitting approach for Fermi-operator expansion in all-electron Kohn-Sham DFT calculations
Motamarri, Phani; Gavini, Vikram; Bhattacharya, Kaushik; Ortiz, Michael
2017-01-01
We present a spectrum-splitting approach to conduct all-electron Kohn-Sham density functional theory (DFT) calculations by employing Fermi-operator expansion of the Kohn-Sham Hamiltonian. The proposed approach splits the subspace containing the occupied eigenspace into a core subspace, spanned by the core eigenfunctions, and its complement, the valence subspace, and thereby enables an efficient computation of the Fermi-operator expansion by reducing the expansion to the valence-subspace projected Kohn-Sham Hamiltonian. The key ideas used in our approach are as follows: (i) employ Chebyshev filtering to compute a subspace containing the occupied states followed by a localization procedure to generate nonorthogonal localized functions spanning the Chebyshev-filtered subspace; (ii) compute the Kohn-Sham Hamiltonian projected onto the valence subspace; (iii) employ Fermi-operator expansion in terms of the valence-subspace projected Hamiltonian to compute the density matrix, electron density, and band energy. We demonstrate the accuracy and performance of the method on benchmark materials systems involving silicon nanoclusters up to 1330 electrons, a single gold atom, and a six-atom gold nanocluster. The benchmark studies on silicon nanoclusters revealed a staggering fivefold reduction in the Fermi-operator expansion polynomial degree by using the spectrum-splitting approach for accuracies in the ground-state energies of ~10^-4 Ha/atom with respect to reference calculations. Further, numerical investigations on gold suggest that spectrum splitting is indispensable to achieve meaningful accuracies, while employing Fermi-operator expansion.
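The Chebyshev filtering in step (i) can be sketched numerically: polynomials T_m grow rapidly outside [-1, 1], so mapping the unwanted part of the spectrum onto [-1, 1] amplifies the occupied part. The toy diagonal "Hamiltonian" and all parameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy diagonal "Hamiltonian": occupied spectrum below 0, unwanted part in [0, 8].
evals = np.linspace(-2.0, 8.0, 40)
H = np.diag(evals)

def chebyshev_filter(H, X, degree, a, b):
    """Amplify components of X with eigenvalues below a, by mapping the
    unwanted interval [a, b] onto [-1, 1] where T_m stays bounded."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y, Y_prev = (H @ X - c * X) / e, X          # T_1 of the scaled operator
    for _ in range(2, degree + 1):              # three-term recurrence
        Y, Y_prev = 2.0 * (H @ Y - c * Y) / e - Y_prev, Y
    return Y

X = rng.standard_normal((40, 5))                # random starting block
Xf = chebyshev_filter(H, X, degree=12, a=0.0, b=8.0)
Q, _ = np.linalg.qr(Xf)   # orthonormal basis enriched in the occupied subspace
```

After filtering, almost all of the weight of the block sits on the low (occupied) end of the spectrum, which is what makes the subsequent localization and projection steps cheap.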
Dryga, Anatoly; Warshel, Arieh
2010-01-01
Simulations of long-time processes in condensed phases in general, and in biomolecules in particular, present a major challenge that cannot be overcome at present by brute-force molecular dynamics (MD) approaches. This work takes the renormalization method, introduced by us some time ago, and establishes its reliability and potential in extending the time scale of molecular simulations. The validation involves a truncated gramicidin system in the gas phase that is small enough to allow very long explicit simulation and sufficiently complex to present the physics of realistic ion channels. The renormalization approach is found to be reliable and arguably presents the first approach that allows one to exploit the otherwise problematic steered molecular dynamics (SMD) treatments in quantitative and meaningful studies. It is established that we can reproduce the long time behavior of large systems by using Langevin dynamics (LD) simulations of a renormalized implicit model. This is done without spending the enormous time needed to obtain such trajectories in the explicit system. The present study also provides a promising advance in accelerated evaluation of free energy barriers. This is done by adjusting the effective potential in the implicit model to reproduce the same passage time as that obtained in the explicit model, under the influence of an external force. Here having a reasonable effective friction provides a way to extract the potential of mean force (PMF) without investing the time needed for regular PMF calculations. The renormalization approach, which is illustrated here in realistic calculations, is expected to provide a major help in studies of complex landscapes and in exploring long time dynamics of biomolecules. PMID:20836533
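The core machinery here, Langevin dynamics on an implicit potential with a tunable effective friction, can be illustrated with a one-dimensional overdamped integrator. The harmonic potential, friction, and step counts below are hypothetical stand-ins, not the paper's renormalized model:

```python
import math
import random

random.seed(1)

def langevin_step(x, force, dt, gamma, kT):
    """One Euler-Maruyama step of overdamped Langevin dynamics."""
    noise = math.sqrt(2.0 * kT * dt / gamma) * random.gauss(0.0, 1.0)
    return x + force(x) * dt / gamma + noise

# Hypothetical one-dimensional PMF U(x) = 0.5 k x^2; gamma plays the role of
# the effective friction that the renormalization procedure would adjust.
k, kT = 4.0, 1.0
x, traj = 1.0, []
for _ in range(20000):
    x = langevin_step(x, lambda y: -k * y, dt=1e-3, gamma=1.0, kT=kT)
    traj.append(x)

var = sum(t * t for t in traj) / len(traj)  # should approach kT/k = 0.25
```

The sampled variance converging to kT/k is the usual sanity check that the friction and noise amplitude satisfy the fluctuation-dissipation balance.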
Energy Technology Data Exchange (ETDEWEB)
Manning, Karessa L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dolislager, Fredrick G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bellamy, Michael B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-11-01
The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening level tools that set forth Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify isotopes contributing the highest risk and dose as well as establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil to plant interactions require updates be implemented into the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil to tissue transfer factors (TFs) for animals and soil to plant transfer factors (BV's). New biota have been added in both the produce and animal products categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.
A Machine Learning Approach for Dynamical Mass Measurements of Galaxy Clusters
Ntampaka, Michelle; Sutherland, Dougal J; Battaglia, Nicholas; Poczos, Barnabas; Schneider, Jeff
2014-01-01
We present a modern machine learning approach for cluster dynamical mass measurements that is a factor of two improvement over using a conventional scaling relation. Different methods are tested against a mock cluster catalog constructed using halos with mass >= 10^14 Msolar/h from Multidark's publicly-available N-body MDPL halo catalog. In the conventional method, we use a standard M(sigma_v) power law scaling relation to infer cluster mass, M, from line-of-sight (LOS) galaxy velocity dispersion, sigma_v. The resulting fractional mass error distribution is broad, with width = 0.86 (68% scatter), and has extended high-error tails. The standard scaling relation can be simply enhanced by including higher-order moments of the LOS velocity distribution. Applying the kurtosis as a linear correction term to log(sigma_v) reduces the width of the error distribution to 0.74 (15% improvement). Machine learning can be used to take full advantage of all the information in the velocity distribution. We employ the Support ...
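The conventional baseline described above, a power-law M(sigma_v) scaling relation evaluated by the width of the fractional mass error distribution, can be sketched on a synthetic toy catalog (the slope, scatter, and sample sizes below are illustrative, not the MDPL values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy catalog: true masses and LOS velocity dispersions obeying a
# rough power law with log-normal scatter (illustrative numbers only).
log_m_true = rng.uniform(14.0, 15.0, 500)                        # log10(M/Msun)
log_sigma = (log_m_true - 14.0) / 3.0 + 2.5 + rng.normal(0.0, 0.05, 500)

# Conventional method: fit log M = a log sigma_v + b on a training split,
# then predict masses on a held-out split.
a, b = np.polyfit(log_sigma[:400], log_m_true[:400], 1)
log_m_pred = a * log_sigma[400:] + b

# Fractional mass error epsilon = M_pred / M_true - 1; its 16th-84th
# percentile width is the "68% scatter" quoted in the abstract.
eps = 10.0 ** (log_m_pred - log_m_true[400:]) - 1.0
width = np.percentile(eps, 84) - np.percentile(eps, 16)
```

The machine-learning variants in the abstract replace the two-parameter fit with a regressor fed richer summaries of the LOS velocity distribution (e.g. its kurtosis), shrinking this width.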
Energy Technology Data Exchange (ETDEWEB)
Osborne, David L.; Zou, Peng; Johnsen, Howard; Hayden, Carl C.; Taatjes, Craig A.; Knyazev, Vadim D.; North, Simon W.; Peterka, Darcy S.; Ahmed, Musahid; Leone, Stephen R.
2008-08-28
We have developed a multiplexed time- and photon-energy-resolved photoionization mass spectrometer for the study of the kinetics and isomeric product branching of gas-phase, neutral chemical reactions. The instrument utilizes a side-sampled flow tube reactor, continuously tunable synchrotron radiation for photoionization, a multi-mass double-focusing mass spectrometer with 100% duty cycle, and a time- and position-sensitive detector for single ion counting. This approach enables multiplexed, universal detection of molecules with high sensitivity and selectivity. In addition to measurement of rate coefficients as a function of temperature and pressure, different structural isomers can be distinguished based on their photoionization efficiency curves, providing a more detailed probe of reaction mechanisms. The multiplexed 3-dimensional data structure (intensity as a function of molecular mass, reaction time, and photoionization energy) provides insights that might not be available in serial acquisition, as well as additional constraints on data interpretation.
An ensemble-based approach for breast mass classification in mammography images
Ribeiro, Patricia B.; Papa, João. P.; Romero, Roseli A. F.
2017-03-01
Mammography analysis is an important tool that helps detect breast cancer at the very early stages of the disease, thus increasing the quality of life of hundreds of thousands of patients worldwide. In Computer-Aided Detection systems, the identification of mammograms with and without masses (without clinical findings) is highly needed to reduce the false positive rates regarding the automatic selection of regions of interest that may contain some suspicious content. In this work, we introduce a variant of the Optimum-Path Forest (OPF) classifier for breast mass identification, and we employ an ensemble-based approach that can enhance the effectiveness of the individual classifiers for this purpose. The experimental results also comprise the naïve OPF and a traditional neural network, with the most accurate results obtained by the ensemble of classifiers, at an accuracy of nearly 86%.
Thomas-Fermi approach to nuclear mass formula (I). Spherical nuclei
Dutta, A. K.; Arcoragi, J.-P.; Pearson, J. M.; Behrman, R.; Tondeur, F.
1986-09-01
With a view to having a more secure basis for the nuclear mass formula than is provided by the drop(let) model, we make a preliminary study of the possibilities offered by the Skyrme-ETF method. Two ways of incorporating shell effects are considered: the "Strutinsky-integral" method of Chu et al., and the "expectation-value" method of Brack et al. Each of these methods is compared with the HF method in an attempt to see how reliably they extrapolate from the known region of the nuclear chart out to the neutron-drip line. The Strutinsky-integral method is shown to perform particularly well, and to offer a promising approach to a more reliable mass formula.
Findlater, Alexander D; Zahariev, Federico; Gordon, Mark S
2015-04-16
The local correlation "cluster-in-molecule" (CIM) method is combined with the fragment molecular orbital (FMO) method, providing a flexible, massively parallel, and near-linear scaling approach to the calculation of electron correlation energies for large molecular systems. Although the computational scaling of the CIM algorithm is already formally linear, previous knowledge of the Hartree-Fock (HF) reference wave function and subsequent localized orbitals is required; therefore, extending the CIM method to arbitrarily large systems requires the aid of low-scaling/linear-scaling approaches to HF and orbital localization. Through fragmentation, the combined FMO-CIM method linearizes the scaling, with respect to system size, of the HF reference and orbital localization calculations, achieving near-linear scaling at both the reference and electron correlation levels. For the 20-residue alanine α helix, the preliminary implementation of the FMO-CIM method captures 99.6% of the MP2 correlation energy, requiring 21% of the MP2 wall time. The new method is also applied to solvated adamantine to illustrate the multilevel capability of the FMO-CIM method.
Energy Technology Data Exchange (ETDEWEB)
Reitsma, F. E-mail: reitsma@aec.co.za; Naidoo, D
2003-06-01
The problem of modelling a highly absorbing region in a diffusion calculation is well known and many methods have been developed to accommodate the transport effects in diffusion theory. In this work the use of the equivalent cross-sections method for pebble bed type reactors is evaluated by applying it to calculations of control rod (CR) experiments performed at the ASTRA Critical Facility at the Russian Research Centre, Kurchatov Institute in Moscow. The measured reactivity worths of the CRs situated in the side reflector are compared with the calculated values making use of equivalent diffusion parameters in VSOP. Results obtained were favourable for CRs situated within the first ring of reflector blocks, with larger errors obtained for CRs situated further from the core. An additional method that was investigated is the use of equivalent boron concentrations (EBCs) to represent the absorber region. This is shown to be useful if applied correctly and with care especially in the case of differential CR worth. Practical difficulties exist with both approaches, which makes the investigation of an alternative method, which should remove these shortcomings, attractive.
Esque, Jeremy; Cecchini, Marco
2015-04-23
The calculation of the free energy of conformation is key to understanding the function of biomolecules and has attracted significant interest in recent years. Here, we present an improvement of the confinement method that was designed for use in the context of explicit solvent MD simulations. The development involves an additional step in which the solvation free energy of the harmonically restrained conformers is accurately determined by multistage free energy perturbation simulations. As a test-case application, the newly introduced confinement/solvation free energy (CSF) approach was used to compute differences in free energy between conformers of the alanine dipeptide in explicit water. The results are in excellent agreement with reference calculations based on both converged molecular dynamics and umbrella sampling. To illustrate the general applicability of the method, conformational equilibria of met-enkephalin (5 aa) and deca-alanine (10 aa) in solution were also analyzed. In both cases, smoothly converged free-energy results were obtained in agreement with equilibrium sampling or literature calculations. These results demonstrate that the CSF method may provide conformational free-energy differences of biomolecules with small statistical errors (below 0.5 kcal/mol) and at a moderate computational cost even with a full representation of the solvent.
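The multistage free energy perturbation step mentioned above rests on the Zwanzig estimator, dF = -kT ln <exp(-dU/kT)>, applied between neighbouring restraint stages. A minimal sketch with hypothetical Gaussian energy-gap samples (not data from the paper):

```python
import math
import random

random.seed(0)
kT = 0.593  # kcal/mol near room temperature

def fep_delta_f(du_samples, kT):
    """Zwanzig free-energy perturbation estimator: dF = -kT ln <exp(-dU/kT)>."""
    avg = sum(math.exp(-du / kT) for du in du_samples) / len(du_samples)
    return -kT * math.log(avg)

# Hypothetical samples of the energy gap between two neighbouring stages
# (Gaussian, mean 1.0 and std 0.3 kcal/mol, chosen for illustration).
du = [random.gauss(1.0, 0.3) for _ in range(5000)]
dF = fep_delta_f(du, kT)  # Gaussian analytic value: mu - sigma^2/(2 kT) ~ 0.92
```

Keeping the per-stage gap distribution narrow, as here, is what makes the exponential average converge with modest sample counts; wide gaps are split into more stages.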
Energy Technology Data Exchange (ETDEWEB)
Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)
2003-07-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas neutrons are produced causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shut down dose rates. This requires a suitable system of codes which is capable of simulating both the neutron induced material activation during operation and the decay gamma radiation transport after shut-down in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous 2-step (R2S) system in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1 step method (D1S). Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron induced production of a radio-isotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate in five positions of JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The scope is to supply the designers with the most reliable tool and data to calculate the dose rate on fusion machines. Results showed that there is a good agreement: the differences range between 5-35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
Ternary-fission mass distribution of 252Cf: A level-density approach
Balasubramaniam, M.; Karthikraj, C.; Selvaraj, S.; Arunachalam, N.
2014-11-01
We study here the ternary-fission mass distribution of the 252Cf nucleus for a fixed third fragment 48Ca using the level-density approach within the framework of statistical theory. For the evaluation of nuclear level densities, the single-particle energies of the finite-range droplet model are used. Our results for temperatures T =1 and 2 MeV reproduce qualitatively the experimental expectation of ternary fragmentation of 132Sn +72Ni +48Ca . In addition, different possible ternary-fission modes are highlighted.
First-principles approach to heat and mass transfer effects in model catalyst studies
Matera, S.; Reuter, K.
2009-01-01
We assess heat and mass transfer limitations in in situ studies of model catalysts with a first-principles based multiscale modeling approach that integrates a detailed description of the surface reaction chemistry and the macro-scale flow structures. Using the CO oxidation at RuO2(110) as a prototypical example we demonstrate that factors like a suppressed heat conduction at the backside of the thin single-crystal, and the build-up of a product boundary layer above the flat-faced surface pla...
Position-dependent mass approach and quantization for a torus Lagrangian
Yeşiltaş, Özlem
2016-09-01
We have shown that a Lagrangian for a torus surface can yield second-order nonlinear differential equations using the Euler-Lagrange formulation. It is seen that these second-order nonlinear differential equations can be transformed into the nonlinear quadratic and Mathews-Lakshmanan equations using the position-dependent mass approach developed by Mustafa (J. Phys. A: Math. Theor. 48, 225206 (2015)) for the classical systems. Then, we have applied the quantization procedure to the nonlinear quadratic and Mathews-Lakshmanan equations and found their exact solutions.
An approach to a multi walled carbon nanotube based mass sensor
DEFF Research Database (Denmark)
Mateiu, Ramona Valentina; Davis, Zachary James; Madsen, Dorte Nørgaard;
2004-01-01
We propose an approach to a nanoscale mass sensor based on a gold electrode structure, on which a multi-walled carbon nanotube (MWCNT) bridge can be placed and soldered. The structure is comprised of three electrodes with a width of 2 or 4 μm. Two outer electrodes with a length of 10 or 15 μm...... the bridging nanotube. The free standing MWCNTs were fabricated by chemical vapour deposition of Fe(H) phthalocyanine. A nanomanipulator with an x - y - z translation stage was used for placing the MWCNTs across the source-drain electrodes. The nanotubes were soldered onto the substrate by electron beam
Study of nuclear structure of odd mass 119-127I nuclei in a phenomenological approach
Singh, Dhanvir; Gupta, Anuradha; Kumar, Amit; Sharma, Chetan; Singh, Suram; Bharti, Arun; Khosa, S. K.; Bhat, G. H.; Sheikh, J. A.
2016-08-01
By using the phenomenological approach of the Projected Shell Model (PSM), the positive- and negative-parity band structures of the odd-mass neutron-rich 119-127I nuclei have been studied with the deformed single-particle states generated by the standard Nilsson potential. For these isotopes, the band structures have been analyzed in terms of quasi-particle configurations. The phenomenon of backbending in the moment of inertia is also studied in the present work. Besides this, the reduced transition probabilities, i.e. B(E2) and B(M1), are obtained from the PSM wavefunction for the first time for the yrast bands of these isotopes.
Total Land Water Storage Change over 2003 - 2013 Estimated from a Global Mass Budget Approach
Dieng, H. B.; Champollion, N.; Cazenave, A.; Wada, Y.; Schrama, E.; Meyssignac, B.
2015-01-01
We estimate the total land water storage (LWS) change between 2003 and 2013 using a global water mass budget approach. To this end, we compare the ocean mass change (estimated from GRACE space gravimetry on the one hand, and from the satellite altimetry-based global mean sea level corrected for steric effects on the other hand) to the sum of the main water mass components of the climate system: glaciers, Greenland and Antarctica ice sheets, atmospheric water and LWS (the latter being the unknown quantity to be estimated). For glaciers and ice sheets, we use published estimates of ice mass trends based on various types of observations covering different time spans between 2003 and 2013. From the mass budget equation, we derive a net LWS trend over the study period. The mean trend amounts to +0.30 +/- 0.18 mm/yr in sea level equivalent. This corresponds to a net decrease of 108 +/- 64 cu km/yr in LWS over the 2003-2013 decade. We also estimate the rate of change in LWS and find no significant acceleration over the study period. The computed mean global LWS trend over the study period is shown to be explained mainly by direct anthropogenic effects on land hydrology, i.e. the net effect of groundwater depletion and impoundment of water in man-made reservoirs, and to a lesser extent the effect of naturally-forced land hydrology variability. Our results compare well with independent estimates of human-induced changes in global land hydrology.
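The conversion between the two numbers quoted above (a sea-level-equivalent trend in mm/yr and a land-water volume trend in km^3/yr) is a one-line calculation, assuming the commonly used global ocean area:

```python
OCEAN_AREA_M2 = 3.618e14            # commonly used mean global ocean area
lws_trend_sle = 0.30                # LWS trend, mm/yr of sea level equivalent

# 1 mm of global mean sea level corresponds to this volume of water (km^3):
km3_per_mm = OCEAN_AREA_M2 * 1e-3 / 1e9        # area (m^2) x 1 mm -> m^3 -> km^3
lws_volume_trend = -lws_trend_sle * km3_per_mm  # negative: water lost from land
# about -108.5 km^3/yr, matching the quoted net decrease of 108 +/- 64 km^3/yr
```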
Nilsson, C.-M.; Jones, C. J. C.; Thompson, D. J.; Ryue, J.
2009-04-01
Engineering methods for modelling the generation of railway rolling noise are well established. However, these necessarily involve some simplifying assumptions to calculate the sound powers radiated by the wheel and the track. For the rail, this involves using an average vibration together with a radiation efficiency determined for a two-dimensional (2D) problem. In this paper, the sound radiation from a rail is calculated using a method based on a combination of waveguide finite elements and wavenumber boundary elements. This new method allows a number of the simplifying assumptions in the established methods to be avoided. It takes advantage of the 2D geometry of a rail to provide an efficient numerical approach but nevertheless takes into account the three-dimensional nature of the vibration and sound field and the infinite extent of the rail. The approach is used to study a conventional 'open' rail as well as an embedded tram rail of the type used for street running. In the former case it is shown that the conventional approach gives correct results and the complexity of the new method is mostly not necessary. However, for the embedded rail it is found that it is important to take into account the radiation from several wave types in the rail and embedding material. The damping effect of the embedding material on the rail vibration is directly taken into account and, for the example shown, causes the embedded rail to radiate less sound than the open rail above about 600 Hz. The free surface of the embedding material amplifies the sound radiation at some frequencies, while at other frequencies it moves out of phase with the rail and reduces the radiation efficiency. At low frequencies the radiation from the embedded rail resembles a line monopole source which produces greater power than the 'open' rail which forms a line dipole.
Yan, Qing; Gao, Xu; Huang, Lei; Gan, Xiu-Mei; Zhang, Yi-Xin; Chen, You-Peng; Peng, Xu-Ya; Guo, Jin-Song
2014-03-01
The occurrence and fate of twenty-one pharmaceutically active compounds (PhACs) were investigated in different steps of the largest wastewater treatment plant (WWTP) in Southwest China. Concentrations of these PhACs were determined in both the wastewater and sludge phases by high-performance liquid chromatography coupled with electrospray ionization tandem mass spectrometry. Results showed that 21 target PhACs were present in wastewater and 18 in sludge. The calculated total mass loads of PhACs per capita to the influent, the receiving water and the sludge were 4.95 mg d(-1) person(-1), 889.94 μg d(-1) person(-1) and 78.57 μg d(-1) person(-1), respectively. The overall removal efficiency of the individual PhACs ranged from "negative removal" to almost complete removal. Mass balance analysis revealed that biodegradation is believed to be the predominant removal mechanism, and sorption onto sludge was a relevant removal pathway for quinolone antibiotics, azithromycin and simvastatin, accounting for 9.35-26.96% of the initial loadings. However, the sorption of the other selected PhACs was negligible. The overall pharmaceutical consumption in Chongqing, China, was back-calculated from the influent concentrations by considering the pharmacokinetics of PhACs in humans. The back-estimated usage was in good agreement with the known usage of ofloxacin (agreement ratio: 72.5%). However, the back-estimated usage of PhACs requires further verification. Generally, the average influent mass loads and back-calculated annual per capita consumption of the selected antibiotics were comparable to or higher than those reported in developed countries, while the opposite was the case for the other target PhACs.
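The per-capita mass loads quoted above come from a straightforward conversion: measured concentration times plant flow, divided by the population served. A minimal sketch with hypothetical figures (not the paper's numbers):

```python
def mass_load_per_capita(conc_ng_per_l, flow_m3_per_day, population):
    """Influent mass load in mg/day/person from a measured concentration."""
    grams_per_day = conc_ng_per_l * 1e-9 * flow_m3_per_day * 1000.0  # ng/L x L/d
    return grams_per_day * 1000.0 / population                       # -> mg/d/person

# Hypothetical figures: 1200 ng/L summed influent concentration, a
# 600,000 m^3/day plant serving 2.5 million people.
load = mass_load_per_capita(1200.0, 6.0e5, 2.5e6)   # about 0.288 mg/d/person
```

Back-calculating consumption then divides such a load by the excreted fraction of the parent compound taken from pharmacokinetic data.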
Dries, M; Koopmans, L V E
2016-01-01
Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov Chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age, and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities, and IMFs. When systematic unc...
Planar localisation analyses: a novel application of a centre of mass approach.
Edmondson-Jones, A Mark; Irving, Samuel; Moore, David R; Hall, Deborah A
2010-08-01
Sound localisation is one of the key roles for listening, and measuring localisation performance is a mainstay of the hearing research laboratory. Such measurements may consider both accuracy and, for incorrect trials, the size of the error. In terms of error analysis, localisation studies have frequently used general purpose univariate techniques in conjunction with either mean signed or unsigned error measurements. This approach can make inappropriate distributional assumptions and so more suitable alternatives based on directional statistics have also been used. Here we investigate the use of a variety of methods, assess their performance, and comment on their use and availability. We also describe a novel use of a 'centre of mass' approach for describing localisation data jointly in terms of accuracy and size of error. This spatial method offers powerful, yet flexible, statistical analysis using standard multivariate analysis of variance (MANOVA).
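The centre-of-mass idea described above has a compact form: place each response on a unit circle at its reported azimuth and average the resulting (x, y) coordinates. The azimuth samples below are hypothetical, purely for illustration:

```python
import math

def centre_of_mass(azimuths_deg, radius=1.0):
    """Mean (x, y) of response locations; its distance from the origin jointly
    reflects accuracy and spread (1.0 = all responses at the same azimuth)."""
    xs = [radius * math.cos(math.radians(a)) for a in azimuths_deg]
    ys = [radius * math.sin(math.radians(a)) for a in azimuths_deg]
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Hypothetical responses to a 0-degree target: a tight cluster versus
# responses scattered over the whole plane.
tight = centre_of_mass([-5.0, 0.0, 5.0, 10.0])        # close to (1, 0)
scattered = centre_of_mass([-90.0, 0.0, 90.0, 180.0])  # close to the origin
```

Because the output is a point in the plane rather than a single angle, it can be fed directly into multivariate tests such as MANOVA, as the abstract suggests.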
Wu, D.; He, X. T.; Yu, W.; Fritzsche, S.
2017-02-01
A Monte Carlo approach to proton stopping in warm dense matter is implemented into an existing particle-in-cell code. This approach is based on multiple electron-electron, electron-ion, and ion-ion binary collisions and accounts for both the free and the bound electrons in the plasmas. This approach enables one to calculate the stopping of particles in a more natural manner than existing theoretical treatments. In the low-temperature limit, when "all" electrons are bound to the nucleus, the stopping power coincides with the predictions from the Bethe-Bloch formula and is consistent with the data from the National Institute of Standards and Technology database. At higher temperatures, some of the bound electrons are ionized, and this increases the stopping power in the plasmas, as demonstrated by A. B. Zylstra et al. [Phys. Rev. Lett. 114, 215002 (2015)], 10.1103/PhysRevLett.114.215002. At even higher temperatures, the degree of ionization reaches a maximum, and the stopping power then decreases owing to the suppression of the collision frequency between the projected proton beam and the hot plasma in the target.
Spectrum-splitting approach for Fermi-operator expansion in all-electron Kohn-Sham DFT calculations
Motamarri, Phani; Bhattacharya, Kaushik; Ortiz, Michael
2016-01-01
We present a spectrum-splitting approach to conduct all-electron Kohn-Sham density functional theory (DFT) calculations by employing Fermi-operator expansion of the Kohn-Sham Hamiltonian. The proposed approach splits the subspace containing the occupied eigenspace into a core-subspace, spanned by the core eigenfunctions, and its complement, the valence-subspace, and thereby enables an efficient computation of the Fermi-operator expansion by reducing the expansion to the valence-subspace projected Kohn-Sham Hamiltonian. The key ideas used in our approach are: (i) employ Chebyshev filtering to compute a subspace containing the occupied states followed by a localization procedure to generate non-orthogonal localized functions spanning the Chebyshev-filtered subspace; (ii) compute the Kohn-Sham Hamiltonian projected onto the valence-subspace; (iii) employ Fermi-operator expansion in terms of the valence-subspace projected Hamiltonian to compute the density matrix, electron-density and band energy. We demonstrate ...
A physical approach of the short-term wind power prediction based on CFD pre-calculated flow fields
Institute of Scientific and Technical Information of China (English)
LI Li; LIU Yong-qian; YANG Yong-ping; HAN Shuang; WANG Yi-mei
2013-01-01
A physical approach to wind power prediction based on CFD pre-calculated flow fields is proposed in this paper. The flow fields are obtained with a steady CFD model using discrete inflow wind conditions as the boundary conditions, and a database is established containing the important parameters, including the inflow wind conditions, the flow fields and the corresponding wind power for each wind turbine. The power is predicted from the database by taking the Numerical Weather Prediction (NWP) wind as the input data. In order to evaluate the approach, a short-term wind power prediction for an actual wind farm is conducted as an example over the year 2010. Compared with the measured power, the predicted results show high accuracy, with an annual Root Mean Square Error (RMSE) of 15.2% and an annual MAE of 10.80%. A good performance is shown in predicting the changing trend of the wind power. This approach is independent of historical data and can be widely used for all kinds of wind farms, including newly built ones. At the same time, it does not take much computation time, while it captures the local air flows more precisely through the CFD model, so it is especially practical for engineering projects.
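The database-lookup step described above, matching an NWP-forecast inflow condition to the nearest pre-calculated flow case, can be sketched as follows. The discrete condition grid, toy power curve, and measured values are all hypothetical placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical pre-calculated database: discrete inflow conditions
# (speed in m/s, direction in degrees) mapped to pre-computed farm power (kW).
conditions = np.array([[v, d] for v in range(4, 16) for d in range(0, 360, 30)],
                      dtype=float)
power_db = 50.0 * np.clip(conditions[:, 0] - 3.0, 0.0, 9.0) ** 1.5  # toy curve

def predict_power(nwp_speed, nwp_dir):
    """Return the pre-computed power of the database entry closest to the
    NWP-forecast inflow condition (nearest-neighbour lookup)."""
    d_dir = np.abs(conditions[:, 1] - nwp_dir)
    d_dir = np.minimum(d_dir, 360.0 - d_dir)       # circular direction distance
    dist = np.hypot(conditions[:, 0] - nwp_speed, d_dir / 30.0)
    return power_db[np.argmin(dist)]

predicted = np.array([predict_power(7.2, 100.0), predict_power(11.8, 310.0)])
measured = np.array([380.0, 1300.0])
rmse = np.sqrt(np.mean((predicted - measured) ** 2))  # evaluation metric
```

In practice the lookup would interpolate between neighbouring flow cases rather than snap to the nearest one, but the structure (offline CFD, online table lookup) is the same.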
In silico approaches to study mass and energy flows in microbial consortia: a syntrophic case study
Directory of Open Access Journals (Sweden)
Mallette Natasha
2009-12-01
Background: Three methods were developed for the application of stoichiometry-based network analysis approaches, including elementary mode analysis, to the study of mass and energy flows in microbial communities. Each has distinct advantages and disadvantages, suitable for analyzing systems with different degrees of complexity and a priori knowledge. These approaches were tested and compared using data from the thermophilic, phototrophic mat communities of Octopus and Mushroom Springs in Yellowstone National Park (USA). The models were based on three distinct microbial guilds: oxygenic phototrophs, filamentous anoxygenic phototrophs, and sulfate-reducing bacteria. Two phases, day and night, were modeled to account for differences in the sources of mass and energy and the routes available for their exchange. Results: The in silico models were used to explore fundamental questions in ecology, including the prediction of and explanation for measured relative abundances of primary producers in the mat, theoretical tradeoffs between overall productivity and the generation of toxic by-products, and the relative robustness of various guild interactions. Conclusion: The three modeling approaches represent a flexible toolbox for creating cellular metabolic networks to study microbial communities on scales ranging from cells to ecosystems. A comparison of the three methods highlights considerations for selecting the one most appropriate for a given microbial system. For instance, communities represented only by metagenomic data can be modeled using the pooled method, which analyzes a community's total metabolic potential without attempting to partition enzymes to different organisms. Systems with extensive a priori information on microbial guilds can be represented using the compartmentalized technique, employing distinct control volumes to separate guild-appropriate enzymes and metabolites. If the complexity of a compartmentalized network creates an
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, commonly referred to as the false negative rate (FNR). The two statistical sampling approaches discussed in this report are (1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and (2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
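For intuition, the detection-probability calculation that underlies sample-size formulas of this kind can be sketched as follows. This is an illustrative textbook formula, not necessarily the report's exact method: with a fraction of contaminated sampling locations and a per-sample false negative rate, the smallest number of random samples giving the desired confidence of at least one detection is

```python
import math

def samples_for_confidence(conf, frac, fnr=0.0):
    """Smallest n such that P(at least one detection) >= conf, when a
    fraction `frac` of sampling locations is contaminated and each sample
    of a contaminated location is missed with probability `fnr`.
    Illustrative formula: n = ceil(ln(1 - conf) / ln(miss))."""
    miss = 1.0 - frac * (1.0 - fnr)   # P(a single sample yields no detection)
    return math.ceil(math.log(1.0 - conf) / math.log(miss))

# 95% confidence of detecting contamination covering 10% of locations, FNR = 0
print(samples_for_confidence(0.95, 0.10))  # prints 29
```

A nonzero FNR inflates the required sample count, which is why the report treats the FNR = 0 and FNR > 0 cases separately.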
Nagai, Tetsuro
2017-01-01
Replica-exchange molecular dynamics (REMD) has demonstrated its efficiency by combining trajectories over a wide range of temperatures. As an extension of the method, the author formalizes the mass-manipulating replica-exchange molecular dynamics (MMREMD) method, which allows for arbitrary mass scaling with respect to temperature and individual particles. The formalism enables the versatile application of mass-scaling approaches to the REMD method. The key change introduced in the novel formalism is the generalized rules for velocity and momentum scaling after accepted replica-exchange attempts. As an application of this general formalism, a refinement of the viscosity-REMD (V-REMD) method [P. H. Nguyen, J. Chem. Phys. 132, 144109 (2010); https://doi.org/10.1063/1.3369626] is presented. Numerical results are provided using a pilot system, demonstrating easier and more optimized applicability of the new version of V-REMD as well as the importance of adherence to the generalized velocity scaling rules. With the new formalism, more sound and efficient simulations can be performed.
Yang, Yang; Liu, Fan; Franc, Vojtech; Halim, Liem Andhyk; Schellekens, Huub; Heck, Albert J. R.
2016-01-01
Many biopharmaceutical products exhibit extensive structural micro-heterogeneity due to an array of co-occurring post-translational modifications. These modifications often affect the functionality of the product and therefore need to be characterized in detail. Here, we present an integrative approach, combining two advanced mass spectrometry-based methods, high-resolution native mass spectrometry and middle-down proteomics, to analyse this micro-heterogeneity. Taking human erythropoietin and human plasma properdin as model systems, we demonstrate that this strategy bridges the gap between peptide- and protein-based mass spectrometry platforms, providing the most complete profiling of glycoproteins. Integration of the two methods enabled the discovery of three undescribed C-glycosylation sites on properdin, and additionally revealed unexpected heterogeneity in C-mannosylation occupancy. Furthermore, using various sources of erythropoietin, we define and demonstrate the use of a biosimilarity score to quantitatively assess structural similarity, which would also be beneficial for profiling other therapeutic proteins and even plasma protein biomarkers. PMID:27824045
Estimating riverine discharge of nitrogen from South Korea by the mass balance approach.
Kim, Taehoon; Kim, Geonha; Kim, Sungwon; Choi, Euiso
2008-01-01
The main objective of this research was to estimate the total mass of nitrogen discharged from various sources in Korea using the mass balance approach. Three different nitrogen mass balances were presented: (1) agricultural activities, including raising crops and animal husbandry; (2) domestic activities; and (3) activities in forest and urban areas. These nitrogen balances were combined to estimate riverine discharge of nitrogen to the ocean on a national scale. Nitrogen inputs include atmospheric deposition, biological nitrogen fixation, application of inorganic fertilizers/manures, animal feed/imported foodstuffs, and meat/fish. Nitrogen outputs include ammonia volatilization, denitrification, human/animal waste generation, crop/meat production, and riverine discharge to the ocean. The estimated total nitrogen input in Korea was 1,194.5 x 10(3) tons N/year. Nitrogen discharged into rivers was estimated as 408-422 x 10(3) tons N/year, of which 66-71% was diffuse in origin. Diffuse discharges by land use were estimated as 82 x 10(3) tons N/year from agricultural areas, 7 x 10(3) tons N/year from forestry and 75 x 10(3) tons N/year from urban and industrial areas.
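The budget closure used in such studies, with riverine discharge obtained as total inputs minus all non-riverine outputs, can be illustrated with a toy national budget. All figures below are hypothetical placeholders, not the paper's estimates:

```python
# Hypothetical national nitrogen budget (10^3 tons N/year), illustrating
# mass-balance closure: riverine discharge = total inputs - non-riverine outputs.
inputs = {
    "atmospheric deposition": 150.0,
    "biological N fixation": 120.0,
    "inorganic fertilizers/manures": 450.0,
    "animal feed/imported foodstuffs": 400.0,
}
non_riverine_outputs = {
    "ammonia volatilization": 180.0,
    "denitrification": 250.0,
    "crop/meat production exported in products": 280.0,
}
riverine = sum(inputs.values()) - sum(non_riverine_outputs.values())
print(f"riverine discharge ≈ {riverine:.0f} x 10^3 tons N/year")
```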
Sigmund, Gerd; Koch, Anja; Orlovius, Anne-Katrin; Guddat, Sven; Thomas, Andreas; Schänzer, Wilhelm; Thevis, Mario
2014-01-01
Since January 2014, the anti-anginal drug trimetazidine [1-(2,3,4-trimethoxybenzyl)-piperazine] has been classified as a prohibited substance by the World Anti-Doping Agency (WADA), necessitating specific and robust detection methods in sports drug testing laboratories. In the present study, the implementation of the intact therapeutic agent into two different initial testing procedures based on gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) is reported, along with the characterization of urinary metabolites by electrospray ionization-high resolution/high accuracy (tandem) mass spectrometry. For GC-MS analyses, urine samples were subjected to liquid-liquid extraction sample preparation, while LC-MS/MS analyses were conducted by established 'dilute-and-inject' approaches. Both screening methods were validated for trimetazidine concerning specificity, limits of detection (0.5-50 ng/mL), and intra-day and inter-day imprecision. Doping control samples were used to complement the LC-MS/MS-based assay, although intact trimetazidine was found at the highest abundance of the relevant trimetazidine-related analytes in all tested sports drug testing samples. Retrospective data mining regarding doping control analyses conducted between 1999 and 2013 at the Cologne Doping Control Laboratory revealed a considerable prevalence of trimetazidine, particularly in endurance and strength sports, accounting for up to 39 findings per year.
Musah, Rabi A.; Espinoza, Edgard O.; Cody, Robert B.; Lesiak, Ashton D.; Christensen, Earl D.; Moore, Hannah E.; Maleknia, Simin; Drijfhout, Falko P.
2015-07-01
A high throughput method for species identification and classification through chemometric processing of direct analysis in real time (DART) mass spectrometry-derived fingerprint signatures has been developed. The method entails introduction of samples to the open air space between the DART ion source and the mass spectrometer inlet, with the entire observed mass spectral fingerprint subjected to unsupervised hierarchical clustering processing. A range of both polar and non-polar chemotypes are instantaneously detected. The result is identification and species-level classification based on the entire DART-MS spectrum. Here, we illustrate how the method can be used to: (1) distinguish between endangered woods regulated by the Convention for the International Trade of Endangered Flora and Fauna (CITES) treaty; (2) assess the origin and by extension the properties of biodiesel feedstocks; (3) determine insect species from analysis of puparial casings; (4) distinguish between psychoactive plant products; and (5) differentiate between Eucalyptus species. An advantage of the hierarchical clustering approach to processing of the DART-MS-derived fingerprint is that it shows both similarities and differences between species based on their chemotypes. Furthermore, full knowledge of the identities of the constituents contained within the small-molecule profile of analyzed samples is not required.
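Unsupervised hierarchical clustering of spectral fingerprints operates on a pairwise distance matrix; computing that matrix (the input that a linkage step, e.g. scipy's, would then consume) can be sketched as follows. The binned intensity vectors and sample names here are hypothetical:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two binned intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

# Hypothetical binned mass spectral fingerprints (intensity per m/z bin)
spectra = {
    "sample_A": [10, 0, 5, 80, 3],
    "sample_B": [12, 0, 6, 75, 2],
    "sample_C": [0, 90, 1, 4, 60],
}
names = list(spectra)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        d = cosine_distance(spectra[names[i]], spectra[names[j]])
        print(f"{names[i]} vs {names[j]}: distance = {d:.3f}")
```

Samples of the same species chemotype yield near-zero distances and merge early in the dendrogram, which is how the method exposes both similarities and differences.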
Simpson, Scott; Gross, Michael S; Olson, James R; Zurek, Eva; Aga, Diana S
2015-02-17
The COnductor-like Screening MOdel for Realistic Solvents (COSMO-RS) was used to predict the boiling points of several polybrominated diphenyl ethers (PBDEs) and methylated derivatives (MeO-BDEs) of monohydroxylated BDE (OH-BDE) metabolites. The linear correlation obtained by plotting theoretical boiling points calculated by COSMO-RS against experimentally determined retention times from gas chromatography-mass spectrometry facilitated the identification of PBDEs and OH-BDEs. This paper demonstrates the applicability of COSMO-RS in identifying unknown PBDE metabolites of 2,2',4,4'-tetrabromodiphenyl ether (BDE-47) and 2,2',4,4',6-pentabromodiphenyl ether (BDE-100). Metabolites of BDE-47 and BDE-100 were formed through individual incubations of each PBDE with recombinant cytochrome P450 2B6. Using calculated boiling points and characteristic mass spectral fragmentation patterns of the MeO-BDE positional isomers, the identities of the unknown monohydroxylated metabolites were proposed to be 2'-hydroxy-2,3',4,4'-tetrabromodiphenyl ether (2'-OH-BDE-66) from BDE-47, and 2'-hydroxy-2,3',4,4',6-pentabromodiphenyl ether (2'-OH-BDE-119) and 4-hydroxy-2,2',3,4',6-pentabromodiphenyl ether (4-OH-BDE-91) from BDE-100. The collective use of boiling points predicted with COSMO-RS, and characteristic mass spectral fragmentation patterns provided a valuable tool toward the identification of isobaric compounds.
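The identification step hinges on the linear correlation between COSMO-RS-calculated boiling points and GC-MS retention times; an ordinary least-squares fit of that relationship can be sketched as follows (the boiling points and retention times below are hypothetical, not the paper's data):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical theoretical boiling points (K) vs GC retention times (min)
bp = [560.0, 585.0, 610.0, 640.0]
rt = [18.2, 20.1, 22.3, 24.8]
a, b = linear_fit(bp, rt)
print(f"rt ≈ {a:.4f} * bp + {b:.2f}")
```

An unknown isobaric metabolite's observed retention time can then be compared against the fitted prediction for each candidate positional isomer.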
Denoising peptide tandem mass spectra for spectral libraries: a Bayesian approach.
Shao, Wenguang; Lam, Henry
2013-07-05
With the rapid accumulation of data from shotgun proteomics experiments, it has become feasible to build comprehensive and high-quality spectral libraries of tandem mass spectra of peptides. A spectral library condenses experimental data into a retrievable format and can be used to aid peptide identification by spectral library searching. A key step in spectral library building is spectrum denoising, which is best accomplished by merging multiple replicates of the same peptide ion into a consensus spectrum. However, this approach cannot be applied to "singleton spectra," for which only one observed spectrum is available for the peptide ion. We developed a method, based on a Bayesian classifier, for denoising peptide tandem mass spectra. The classifier accounts for relationships between peaks, and can be trained on the fly from consensus spectra and immediately applied to denoise singleton spectra, without hard-coded knowledge about peptide fragmentation. A linear regression model was also trained to predict the number of useful "signal" peaks in a spectrum, thereby obviating the need for arbitrary thresholds for peak filtering. This Bayesian approach accumulates weak evidence systematically to boost the discrimination power between signal and noise peaks, and produces readily interpretable conditional probabilities that offer valuable insights into peptide fragmentation behaviors. By cross validation, spectra denoised by this method were shown to retain more signal peaks, and have higher spectral similarities to replicates, than those filtered by intensity only.
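The core Bayesian update for scoring a single peak as signal versus noise can be sketched as below. The prior and likelihood values are hypothetical, and the actual classifier conditions on relationships between multiple peaks rather than one feature:

```python
def signal_posterior(prior_signal, p_feature_given_signal, p_feature_given_noise):
    """Bayes' rule for one observed peak feature:
    P(signal | feature) = P(f|s)P(s) / (P(f|s)P(s) + P(f|n)P(n))."""
    num = p_feature_given_signal * prior_signal
    den = num + p_feature_given_noise * (1.0 - prior_signal)
    return num / den

# Hypothetical: 30% of peaks are signal a priori; the observed feature
# (e.g. "has a complementary fragment partner") is 10x likelier under signal.
print(round(signal_posterior(0.30, 0.50, 0.05), 3))
```

Accumulating such weak per-feature evidence multiplicatively is what boosts the discrimination power between signal and noise peaks.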
Gao, Ting; Shi, Li-Li; Li, Hai-Bin; Zhao, Shan-Shan; Li, Hui; Sun, Shi-Ling; Su, Zhong-Min; Lu, Ying-Hua
2009-07-07
The combination of genetic algorithm and back-propagation neural network correction approaches (GABP) has successfully improved the calculation accuracy of absorption energies. In this paper, the absorption energies of 160 organic molecules are corrected to test this method. First, GABP1 is introduced to determine the quantitative relationship between experimental results and calculations obtained using quantum chemical methods. After GABP1 correction, the root-mean-square (RMS) deviations of the calculated absorption energies are reduced from 0.32, 0.95 and 0.46 eV to 0.14, 0.19 and 0.18 eV for the B3LYP/6-31G(d), B3LYP/STO-3G and ZINDO methods, respectively. The corrected results of B3LYP/6-31G(d)-GABP1 are in good agreement with experiment. Then, GABP2 is introduced to determine the quantitative relationship between the results of the B3LYP/6-31G(d)-GABP1 method and calculations of the lower-accuracy methods (B3LYP/STO-3G and ZINDO). After GABP2 correction, the RMS deviations of the calculated absorption energies are reduced to 0.20 and 0.19 eV for the B3LYP/STO-3G and ZINDO methods, respectively. The results show that the RMS deviations after GABP1 and GABP2 correction are similar for the B3LYP/STO-3G and ZINDO methods. Thus, B3LYP/6-31G(d)-GABP1 is a better method for predicting absorption energies and can be used as an approximation to experiment where experimental results are unknown or uncertain. This method may be used for predicting absorption energies of larger organic molecules that are inaccessible to experiment and to high-accuracy theoretical methods with larger basis sets. The performance of the method was demonstrated by application to the absorption energy of the aldehyde carbazole precursor.
Han, Young-Soo; Tokunaga, Tetsu K
2014-12-01
Renewed interest in managing C balance in soils is motivated by increasing atmospheric concentrations of CO2 and consequent climate change. Here, experiments were conducted in soil columns to determine C mass balances with and without addition of CaSO4-minerals (anhydrite and gypsum), which were hypothesized to promote soil organic carbon (SOC) retention and soil inorganic carbon (SIC) precipitation as calcite under slightly alkaline conditions. Changes in C contents in three phases (gas, liquid and solid) were measured in unsaturated soil columns tested for one year and comprehensive C mass balances were determined. The tested soil columns had no C inputs, and only C utilization by microbial activity and C transformations were assumed in the C chemistry. The measurements showed that changes in C inventories occurred through two processes, SOC loss and SIC gain. However, the measured SOC losses in the treated columns were lower than their corresponding control columns, indicating that the amendments promoted SOC retention. The SOC losses resulted mostly from microbial respiration and loss of CO2 to the atmosphere rather than from chemical leaching. Microbial oxidation of SOC appears to have been suppressed by increased Ca(2+) and SO4(2-) from dissolution of CaSO4 minerals. For the conditions tested, SIC accumulation per m(2) soil area under CaSO4-treatment ranged from 130 to 260 g C m(-1) infiltrated water (20-120 g C m(-1) infiltrated water as net C benefit). These results demonstrate the potential for increasing C sequestration in slightly alkaline soils via CaSO4-treatment.
Topin, Jérémie; Diharce, Julien; Fiorucci, Sébastien; Antonczak, Serge; Golebiowski, Jérôme
2014-01-23
Hydrogenases are promising candidates for the catalytic production of green energy by biological means. The major impediment to such production is their inhibition under aerobic conditions. In this work, we model dioxygen migration rates in mutants of a hydrogenase of Desulfovibrio fructosovorans. The approach relies on the calculation of the whole potential of mean force for O2 migration within the wild-type channel as well as in the V74M, V74F, and V74Q mutant channels. The three free-energy barriers along the entire migration pathway are converted into chemical rates through modeling based on Transition State Theory. The use of such a model recovers the trend of O2 migration rates among the series.
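Converting a free-energy barrier into a chemical rate via Transition State Theory is commonly done with the Eyring form; a minimal sketch is below. The barrier heights used are hypothetical illustrations, not the values computed in the study:

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def tst_rate(delta_g_kjmol, temp=300.0):
    """Transition-state-theory (Eyring) rate for a free-energy barrier
    given in kJ/mol: k = (kB*T/h) * exp(-dG / (R*T))."""
    prefactor = KB * temp / H                        # attempt frequency, 1/s
    return prefactor * math.exp(-delta_g_kjmol * 1e3 / (R * temp))

# Hypothetical O2 migration barriers (kJ/mol) for two channels
for name, dg in [("wild-type", 12.0), ("bulky mutant", 25.0)]:
    print(f"{name}: k = {tst_rate(dg):.3e} 1/s")
```

The exponential dependence is why modest barrier increases from bulkier channel residues translate into orders-of-magnitude slower O2 migration.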
Zel, Jana; Gruden, Kristina; Cankar, Katarina; Stebih, Dejan; Blejec, Andrej
2007-01-01
Quantitative characterization of nucleic acids is becoming a frequently used method in routine analysis of biological samples, one use being the detection of genetically modified organisms (GMOs). Measurement uncertainty is an important factor to be considered in these analyses, especially where precise thresholds are set in regulations. Intermediate precision, defined as a measure between repeatability and reproducibility, is a parameter describing the real situation in laboratories dealing with quantitative aspects of molecular biology methods. In this paper, we describe the top-down approach to calculating measurement uncertainty, using intermediate precision, in routine GMO testing of food and feed samples. We illustrate its practicability in defining compliance of results with regulations. The method described is also applicable to other molecular methods for a variety of laboratory diagnostics where quantitative characterization of nucleic acids is needed.
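A minimal sketch of the top-down route described above, taking the intermediate-precision standard deviation of replicate results as the standard uncertainty and expanding it with a coverage factor (the replicate GMO-content values are hypothetical):

```python
import math

def expanded_uncertainty(replicates, k=2):
    """Top-down estimate: standard uncertainty taken as the intermediate-
    precision standard deviation of replicate results, expanded with
    coverage factor k (k = 2 corresponds to ~95% coverage)."""
    n = len(replicates)
    mean = sum(replicates) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    return k * s

# Hypothetical replicate GMO content results (%) for one sample
results = [0.92, 1.05, 0.98, 1.10, 0.95]
print(f"U = ±{expanded_uncertainty(results):.3f} %")
```

Compliance with a regulatory threshold can then be judged by whether the result plus or minus U overlaps the threshold.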
Directory of Open Access Journals (Sweden)
E. A. Drozd
2014-01-01
The methodical approach to calculating individualized internal doses rests on a confirmed original scientific hypothesis: every group of individuals that is homogeneous in demographic characteristics (gender and age) has a determined, time-constant location on the dose-distribution curve constructed from individual measurements of Cs-137 in the human body (whole-body measurements). That is, the percentiles of the dose distribution corresponding to the average internal dose of each age group of men and women occupy a definite location on the curve that is stable over time. Keywords: individualized internal dose, percentile of dose distribution, stability.
Kou, Qiang; Wu, Si; Tolic, Nikola; Paša-Tolic, Ljiljana; Liu, Yunlong; Liu, Xiaowen
2017-05-01
Although proteomics has rapidly developed in the past decade, researchers are still in the early stage of exploring the world of complex proteoforms, which are protein products with various primary structure alterations resulting from gene mutations, alternative splicing, post-translational modifications, and other biological processes. Proteoform identification is essential to mapping proteoforms to their biological functions as well as discovering novel proteoforms and new protein functions. Top-down mass spectrometry is the method of choice for identifying complex proteoforms because it provides a 'bird's eye view' of intact proteoforms. The combinatorial explosion of various alterations on a protein may result in billions of possible proteoforms, making proteoform identification a challenging computational problem. We propose a new data structure, called the mass graph, for efficient representation of proteoforms and design mass graph alignment algorithms. We developed TopMG, a mass graph-based software tool for proteoform identification by top-down mass spectrometry. Experiments on top-down mass spectrometry datasets showed that TopMG outperformed existing methods in identifying complex proteoforms. Availability: http://proteomics.informatics.iupui.edu/software/topmg/. Contact: xwliu@iupui.edu. Supplementary data are available at Bioinformatics online.
Yin, W.; Peyton, A. J.; Stefani, F.; Gerbeth, G.
2009-10-01
A completely contactless flow measurement technique based on the principle of EM induction measurements—contactless inductive flow tomography (CIFT)—has been previously reported by a team based at Forschungszentrum Dresden-Rossendorf (FZD). This technique is suited to the measurement of velocity fields in high conductivity liquids, and the possible applications range from monitoring metal casting and silicon crystal growth in industry to gaining insights into the working of the geodynamo. The forward problem, i.e. calculating the induced magnetic field from a known velocity profile, can be described as a linear relationship when the magnetic Reynolds number is small. Previously, an integral equation method was used to formulate the forward problem; however, although the sensitivity matrices were calculated, they were not explicitly expressed and computation involved the solution of an ill-conditioned system of equations using a so-called deflation method. In this paper, we present the derivation of the sensitivity matrix directly from electromagnetic field theory and the results are expressed very concisely as the cross product of two field vectors. A numerical method based on a finite difference method has also been developed to verify the formulation. It is believed that this approach provides a simple yet fast route to the forward solution of CIFT. Furthermore, a method for sensor design selection based on eigenvalue analysis is presented.
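The sensitivity-matrix entries are stated to reduce to the cross product of two field vectors; as a minimal illustration of that operation (the vectors below are placeholders, not actual CIFT field quantities):

```python
def cross(u, v):
    """Cross product u x v of two 3-vectors, the operation in which the
    CIFT sensitivity-matrix entries are expressed."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Placeholder field vectors for illustration
print(cross((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 0.0, 1.0)
```

Expressing each matrix entry in closed form this way avoids solving the ill-conditioned system required by the earlier deflation-based formulation.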
Orion Pad Abort 1 Crew Module Mass Properties Test Approach and Results
Herrera, Claudia; Harding, Adam
2012-01-01
The Flight Loads Laboratory at the Dryden Flight Research Center conducted tests to measure the inertia properties of the Orion Pad Abort 1 (PA-1) Crew Module (CM). These measurements were taken to validate analytical predictions of the inertia properties of the vehicle and assist in reducing uncertainty for derived aero performance coefficients to be calculated post-launch. The first test conducted was to determine the Ixx of the Crew Module. This test approach used a modified torsion pendulum test setup that allowed the suspended Crew Module to rotate about the x axis. The second test used a different approach to measure both the Iyy and Izz properties. This test used a Knife Edge fixture that allowed small rotation of the Crew Module about the y and z axes. Discussions of the techniques and equations used to accomplish each test are presented. Comparisons with the predicted values used for the final flight calculations are made. Problem areas, with explanations and recommendations where available, are addressed. Finally, an evaluation of the value and success of these techniques to measure the moments of inertia of the Crew Module is provided.
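The torsion-pendulum approach reduces to the standard relation between oscillation period and moment of inertia. The report's actual reduction equations are not reproduced here; this sketch assumes a simple torsion pendulum with known torsional stiffness, and the numbers are hypothetical:

```python
import math

def inertia_from_period(kappa, period):
    """Torsion pendulum relation I = kappa * T^2 / (4*pi^2), with kappa the
    torsional stiffness (N*m/rad) and T the measured oscillation period (s)."""
    return kappa * period ** 2 / (4.0 * math.pi ** 2)

# Hypothetical stiffness and measured period for illustration
kappa = 5.0e3    # N*m/rad
period = 2.2     # s
print(f"Ixx ≈ {inertia_from_period(kappa, period):.1f} kg*m^2")
```

In practice the fixture's own inertia is measured separately and subtracted, which is one source of the test uncertainties discussed above.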
Czerny, J.; Schulz, K. G.; Boxhammer, T.; Bellerby, R. G. J.; Büdenbender, J.; Engel, A.; Krug, S. A.; Ludwig, A.; Nachtigall, K.; Nondal, G.; Niehoff, B.; Silyakova, A.; Riebesell, U.
2013-05-01
Recent studies on the impacts of ocean acidification on pelagic communities have identified changes in carbon to nutrient dynamics with related shifts in elemental stoichiometry. In principle, mesocosm experiments provide the opportunity of determining temporal dynamics of all relevant carbon and nutrient pools and, thus, calculating elemental budgets. In practice, attempts to budget mesocosm enclosures are often hampered by uncertainties in some of the measured pools and fluxes, in particular due to uncertainties in constraining air-sea gas exchange, particle sinking, and wall growth. In an Arctic mesocosm study on ocean acidification applying KOSMOS (Kiel Off-Shore Mesocosms for future Ocean Simulation), all relevant element pools and fluxes of carbon, nitrogen and phosphorus were measured, using an improved experimental design intended to narrow down the mentioned uncertainties. Water-column concentrations of particulate and dissolved organic and inorganic matter were determined daily. New approaches for quantitative estimates of material sinking to the bottom of the mesocosms and gas exchange at 48 h temporal resolution, as well as estimates of wall growth, were developed to close the gaps in the element budgets. However, losses of elements from the budgets into a sum of insufficiently determined pools were detected, and these are principally unavoidable in mesocosm investigations. The comparison of variability patterns of all single measured datasets revealed analytic precision to be the main issue in the determination of budgets. Uncertainties in dissolved organic carbon (DOC), nitrogen (DON) and particulate organic phosphorus (POP) were much higher than the summed error in determination of the same elements in all other pools. With estimates provided for all other major elemental pools, mass balance calculations could be used to infer the temporal development of the DOC, DON and POP pools. Future elevated pCO2 was found to enhance net autotrophic community carbon uptake in two of
Institute of Scientific and Technical Information of China (English)
Milan M.Terzic; Jelena Dotlic; Ivana Likic; Nebojsa Ladjevic; Natasa Brndusic; Nebojsa Arsenovic; Sanja Maricic
2013-01-01
The aim of the study was to investigate which anamnestic, laboratory and ultrasound parameters used in routine practice could predict the nature of an adnexal mass, thus enabling referral to the relevant specialist. Methods: The study involved women treated for adnexal tumors over a period of 2 years. On admission, detailed anamnestic and laboratory data were obtained, an expert ultrasound scan was performed, and the power Doppler index (PDI), risk of malignancy index (RMI) and body mass index (BMI) were calculated for all patients. The obtained data were related to histopathological findings and statistically analyzed. Results: The study included 689 women (112 malignant, 544 benign, and 33 borderline tumors). Malignant and borderline tumors were more frequent in postmenopausal women (P = 0.000). Women who had benign tumors had the lowest BMI (P = 0.000). There were significant (P < 0.05) differences among tumor types regarding erythrocyte sedimentation rate, CA125 and carcinoembryonic antigen (CEA) levels. Among ultrasound findings, larger tumor diameter and ascites were more frequent in malignant tumors (P = 0.000). Women with malignant tumors had the highest values of RMI and PDI (P = 0.000). Conclusions: Anamnestic data, ultrasound parameters and laboratory analyses were all found to be good discriminating factors among malignant, benign and borderline tumors.
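The abstract does not state which RMI variant was used; the widely used RMI I of Jacobs et al. multiplies an ultrasound score, a menopausal score, and the serum CA-125 level, and can be sketched as follows (the patient values are hypothetical):

```python
def rmi(ultrasound_score, postmenopausal, ca125):
    """Risk of Malignancy Index, RMI I form (Jacobs et al.):
    RMI = U * M * CA-125, with U = 0, 1 or 3 depending on the number of
    suspicious ultrasound features, and M = 3 if postmenopausal else 1.
    Note: the study abstract does not specify which RMI variant it used."""
    m = 3 if postmenopausal else 1
    return ultrasound_score * m * ca125

# Hypothetical patient: 2 ultrasound features -> U = 3, postmenopausal, CA-125 = 80 U/mL
print(rmi(3, True, 80.0))  # prints 720.0
```

A commonly cited referral cut-off for RMI I is around 200, above which specialist gynaecological-oncology referral is considered.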
Shankar, R.
2000-03-01
Recent experiments have determined the magnetic fields B^c at which gapped quantum Hall states jump from one quantized value of polarization P to another [1-3], and for gapless states, the B^c's for saturation of P [3], the temperature dependence of P [4-5], and 1/T_1 [4]. New questions arise: why is m_p, the polarization mass, so different from m_a, the activation mass? At T>0, which should be used? Are CFs free? I answer these questions using the hamiltonian theory of Composite Fermions (CF) [6-7], focusing on ν = p/(2ps+1). This theory encodes in a hamiltonian (of unusual form) the binding of electrons to vortices to form CFs [8-9]. I compute m_a near ν = 1/2 and 1/4, using Zhang-Das Sarma's v(q) = 2πe² e^{-qΛ}/q (where Λ measures sample thickness), and B^c for the transition from one quantized value of P to the next. I find, as Park and Jain [10] did, that the energy differences can be fit to free fermions of mass m_p even though the underlying theory is interacting. I show how rotational invariance and d=2 are behind this. This tells us not to attempt to fit all data to free-field form. I compare my Hartree-Fock B^c (at T=0), and P and 1/T_1 at T>0, to experiments, taking Λ as a free parameter for each sample, fitting the theory to one data point. This single parameter is able to subsume the effects of interaction, disorder, and LL mixing. 1. R.R. Du, A.S. Yeh, H.L. Stormer, D.C. Tsui, L.N. Pfeiffer and K.W. West, Phys. Rev. Lett. 75, 3926 (1995). 2. A.S. Yeh, H.L. Stormer, D.C. Tsui, L.N. Pfeiffer, K.W. Baldwin and K.W. West, Phys. Rev. Lett. 82, 592 (1998). 3. I.V. Kukushkin, K.V. Klitzing and K. Eberl, Phys. Rev. Lett. 82, 3665 (1999). 4. A.E. Dementyev, N.N. Kuzma, P. Khandelwal, S.E. Barrett, L.N. Pfeiffer, and K.W. West, cond-mat/9907280. 5. S. Melinte, N. Freytag, M. Horvatic, C. Berthier, L.P. Levy, V. Bayot and M. Shayegan, cond-mat/9908098. 6. R. Shankar and G. Murthy, Phys. Rev. Lett. 79, 4437 (1997). G. Murthy and R. Shankar, in "Composite Fermions", Editor
Energy Technology Data Exchange (ETDEWEB)
Kurth, S.
2002-09-04
The renormalised quark mass in the Schroedinger functional is studied perturbatively with a non-vanishing background field. The definition and basic properties of the Schroedinger functional are reviewed, and it is shown how to make the theory converge faster towards its continuum limit by O(a) improvement. It is explained how the Schroedinger functional scheme avoids the problem of treating a large energy range on a single lattice when determining the scale dependence of renormalised quantities. The description of the scale dependence by the step scaling function is introduced both for the renormalised coupling and for the renormalised quark masses. The definition of the renormalised coupling in the Schroedinger functional is reviewed, and the concept of the renormalised mass being defined by the axial current and density via the PCAC relation is explained. The running of the renormalised mass, described by its step scaling function, is presented as a consequence of the fact that the renormalisation constant of the axial density is scale dependent. The central part of the thesis is the expansion of several correlation functions up to 1-loop order. The expansion coefficients are used to compute the critical quark mass at which the renormalised mass vanishes, as well as the 1-loop coefficient of the renormalisation constant of the axial density. Using the result for this renormalisation constant, the 2-loop anomalous dimension is obtained by conversion from the MS-scheme. Another important application of perturbation theory carried out in this thesis is the determination of discretisation errors. The critical quark mass at 1-loop order is used to compute the deviation of the coupling's step scaling function from its continuum limit at 2-loop order. Several lattice artefacts of the current quark mass, defined by the PCAC relation with the unrenormalised axial current and density, are computed at 1-loop order
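The PCAC definition of the quark mass referred to above can be summarized schematically. The following is the standard continuum form; the thesis-specific O(a)-improvement terms and Schroedinger-functional correlation functions are omitted here:

```latex
% PCAC relation: the divergence of the isovector axial current A is
% proportional to the axial density P, with the quark mass as the
% proportionality constant,
\partial_\mu A^a_\mu(x) = 2\, m\, P^a(x),
% so the renormalised mass inherits the scale dependence of the
% renormalisation constant of the axial density:
\overline{m}(\mu) = \frac{Z_A}{Z_P(\mu)}\, m .
```

Since Z_A is scale independent while Z_P(μ) runs, the running of the renormalised mass is driven entirely by Z_P, which is the point made in the abstract.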
Shamsipur, Mojtaba; Allahyari, Leila; Fasihi, Javad; Taherpour, Avat (Arman); Asfari, Zuhair; Valinejad, Azizollah
2016-03-01
Complexation of two 1,3-alternate calix[4]crown ligands with alkali metals (K+, Rb+ and Cs+) has been investigated by electrospray ionization mass spectrometry (ESI-MS) and density functional theory calculations. The binding selectivities of the ligands and the binding constants of their complexes in solution have been determined using the obtained mass spectra. The percentage of each complex species formed in the mixture of each ligand and alkali metal has also been evaluated experimentally. For both the calix[4]crown-5 and calix[4]crown-6 ligands, the experimental and theoretical selectivity of the alkali metal complexes was found to follow the trend K+ > Rb+ > Cs+. The structures of the ligands were optimized by the DFT-B3LYP/6-31G method and the structures of the complexes were obtained by the QM-SCF-MO/PM6 method, as discussed in the text.
Energy Technology Data Exchange (ETDEWEB)
Oyama, Y. [Hitachi Car Engineering, Ltd., Tokyo (Japan); Nishimura, Y.; Osuga, M.; Yamauchi, T. [Hitachi, Ltd., Tokyo (Japan)
1997-10-01
The air flow characteristics of hot-wire air flow meters for gasoline fuel-injection systems with supercharging and exhaust gas recirculation under transient conditions were investigated in order to develop a simple method for calculating the air mass in the cylinder. It was clarified that the air mass in the cylinder could be calculated by compensating for the change of air mass in the intake system using aerodynamic models of the intake system. 3 refs., 6 figs., 1 tab.
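The compensation idea can be sketched in a few lines: during a transient, the hot-wire meter reads the flow at the throttle, while the engine draws air from the manifold; treating the manifold as a filling/emptying volume of ideal gas, the cylinder inflow is the metered flow minus the rate of change of air mass stored in the manifold. This is a generic speed-density-style sketch, not the paper's model, and all numbers are illustrative assumptions.

```python
# Manifold filling/emptying compensation for a hot-wire meter reading.
# Assumed constants: 3 L manifold at 300 K, ideal gas.

R_AIR = 287.0   # J/(kg*K), specific gas constant of air
V_MAN = 0.003   # manifold volume, m^3 (assumed)
T_MAN = 300.0   # manifold air temperature, K (assumed)

def manifold_mass(p_man: float) -> float:
    """Air mass stored in the manifold from the ideal gas law (kg)."""
    return p_man * V_MAN / (R_AIR * T_MAN)

def cylinder_flow(metered_flow: float, p_prev: float, p_now: float,
                  dt: float) -> float:
    """Metered mass flow (kg/s) corrected for manifold filling/emptying."""
    dm_dt = (manifold_mass(p_now) - manifold_mass(p_prev)) / dt
    return metered_flow - dm_dt

# Throttle tip-in: manifold pressure rises from 50 to 60 kPa over 0.1 s,
# so part of the metered 0.02 kg/s fills the manifold instead of the
# cylinder.
flow = cylinder_flow(metered_flow=0.02, p_prev=50e3, p_now=60e3, dt=0.1)
print(flow)
```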
The general conditional equations which govern the phase equilibria in three-component systems are presented. Using the general conditional equations...a general method has been developed to precalculate the phase equilibria in three-component systems from first principles using computer techniques...The method developed has been applied to several model examples and to the system Ta-Hf-C. The phase equilibria in three-component systems calculated
Valavanis, A.; Ikonić, Z.; Kelsall, R. W.
2007-05-01
Intervalley mixing between conduction-band states in low-dimensional Si/SiGe heterostructures induces splitting between nominally degenerate energy levels. The symmetric double-valley effective mass approximation and the empirical pseudopotential method are used to find the electronic states in different types of quantum wells. A reasonably good agreement between the two methods is found, with the former being much faster computationally. Aside from being an oscillatory function of well width, the splitting is found to be almost independent of in-plane wave vector, and an increasing function of the magnitude of interface gradient. While the model is defined for symmetric envelope potentials, it is shown to remain reasonably accurate for slightly asymmetric structures such as a double quantum well, making it acceptable for simulation of multilayer intersubband optical devices. Intersubband optical transitions are investigated under both approximations and it is shown that in most cases valley splitting causes linewidth broadening, although under extreme conditions, transition line doublets may result.
Giussi, Juan M; Gastaca, Belen; Albesa, Alberto; Cortizo, M Susana; Allegretti, Patricia E
2011-02-01
The study of tautomeric equilibria is important because the reactivity of each compound with tautomeric capacity can be determined from the proportion of each tautomer. In the present work, the tautomeric equilibria in some γ,δ-unsaturated β-hydroxynitriles and γ,δ-unsaturated β-ketonitriles were studied. The first family of compounds presents two possible theoretical tautomers, nitrile and ketenimine, while the second presents four: keto-nitrile, enol (E and Z)-nitrile and keto-ketenimine. The equilibrium in the gas phase was studied by gas chromatography-mass spectrometry (GC-MS). Tautomerization enthalpies were calculated by this methodology, and the results were compared with those obtained by density functional theory (DFT) calculations, with good agreement between them. Nitrile tautomers were favored within the first family of compounds, while keto-nitrile tautomers were favored in the second family. Copyright © 2010 Elsevier B.V. All rights reserved.
Ormand, W E; Jensen, M Hjorth
2016-01-01
We present the first calculations of the $c$-coefficients of the isobaric mass multiplet equation (IMME) for nuclei from $A=42$ to $A=54$ based on input from several realistic nucleon-nucleon interactions. We show that there is a clear dependence on the short-range charge-symmetry-breaking (CSB) part of the strong interaction. There is significant variation in the CSB part between the commonly used CD-Bonn, N$^3$LO and Argonne V18 nucleon-nucleon interactions. All of them give a CSB contribution that is too large when compared to experiment.
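For context, the $c$-coefficients in question are the quadratic coefficients of the IMME, whose standard form for a multiplet of fixed mass number $A$ and isospin $T$ is:

```latex
% Isobaric mass multiplet equation: to first order in charge-dependent
% forces, the mass excesses of the 2T+1 members of a multiplet are a
% quadratic function of the isospin projection T_z:
\mathrm{ME}(A, T, T_z) \;=\; a(A,T) \;+\; b(A,T)\, T_z \;+\; c(A,T)\, T_z^{2}
```

The $a$ and $b$ coefficients are dominated by the Coulomb interaction; the $c$-coefficient is where the charge-symmetry-breaking part of the nuclear force discussed in the abstract leaves its imprint.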
Tarana, Michal; Čurík, Roman
2016-05-01
We introduce a computational method developed for the study of long-range molecular Rydberg states of systems that can be approximated by two electrons in a model potential of the atomic cores. The method is based on a two-electron R-matrix approach inside a sphere centered on one of the atoms. The wave function is then connected to a Coulomb region outside the sphere via a multichannel version of the Coulomb Green's function. This approach is applied to a study of Rydberg states of Rb2 for internuclear separations R from 40 to 320 bohrs and energies corresponding to n from 7 to 30. We report bound states associated with the low-lying 3Po resonance and with the virtual state of the rubidium atom that turn into ion-pair-like bound states in the Coulomb potential of the atomic Rydberg core. The results are compared with previous calculations based on single-electron models employing a zero-range contact potential and a short-range model potential. Czech Science Foundation (Project No. P208/14-15989P).
Valiente, Pedro A; Gil, Alejandro; Batista, Paulo R; Caffarena, Ernesto R; Pons, Tirso; Pascutti, Pedro G
2010-11-30
The standard parameterization of the Linear Interaction Energy (LIE) method has been applied with quite good results to reproduce the experimental absolute binding free energies for several protein-ligand systems. However, we found that this parameterization failed to reproduce the experimental binding free energy of Plasmepsin II (PlmII) in complexes with inhibitors belonging to four dissimilar scaffolds. To overcome this, we developed three LIE variants, which combine systematic approaches to predict the inhibitor-specific values of the α, β, and γ parameters, and gauged their ability to calculate the absolute binding free energies for these PlmII-inhibitor complexes. Specifically: (i) we modified the linear relationship between the weighted nonpolar desolvation ratio (WNDR) and the α parameter, introducing two models of the β parameter determined by the free energy perturbation (FEP) method in the absence of the constant term γ, and (ii) we developed a new parameterization model to investigate the linear correlation between WNDR and the correction term γ. Using these parameterizations, we were able to reproduce the experimental binding free energies for these systems with mean absolute errors lower than 1.5 kcal/mol.
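The "standard parameterization" the abstract starts from can be written down in a few lines. In the LIE approximation the binding free energy is a weighted difference of the ligand's average van der Waals and electrostatic interaction energies in the bound and free states, plus a constant term. The α and β defaults below are the commonly quoted standard values (the abstract's point is precisely that these can fail); the energy averages in the example are made-up numbers.

```python
# LIE estimate of absolute binding free energy (kcal/mol):
#   dG = alpha*(<V_vdw>_bound - <V_vdw>_free)
#      + beta *(<V_el>_bound  - <V_el>_free) + gamma
# alpha=0.18, beta=0.5 are the often-cited standard parameterization.

def lie_binding_free_energy(vdw_bound, vdw_free, el_bound, el_free,
                            alpha=0.18, beta=0.5, gamma=0.0):
    """Binding free energy from ensemble-averaged interaction energies."""
    return alpha * (vdw_bound - vdw_free) + beta * (el_bound - el_free) + gamma

# Illustrative averages from hypothetical bound/free simulations:
dg = lie_binding_free_energy(vdw_bound=-45.0, vdw_free=-30.0,
                             el_bound=-25.0, el_free=-18.0)
print(dg)  # 0.18*(-15) + 0.5*(-7) = approximately -6.2
```

The variants described in the abstract replace the fixed α, β, γ with inhibitor-specific values predicted from the weighted nonpolar desolvation ratio.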
Pallàs, R.; Vilaplana, J. M.; Guinau, M.; Falgàs, E.; Alemany, X.; Muñoz, A.
Current trends in landslide hazard assessment involve a complex combination of methodologies. In spite of being the most vulnerable and in need of mitigation policies, developing countries lack the general socioeconomic structures and technical facilities for such complex approaches to be implemented. The main difficulties commonly encountered in those countries are the scarcity of previous topographic, geological, geotechnical, historical and instrumental data, and the unavailability of aerial-photo coverages at suitable times and scales. In consequence, there is a strong need for developing simple methodologies of landslide hazard assessment and mitigation, which can be readily tested and implemented by developing countries themselves. To explore this line of research, we selected an area of about 20 square km severely hit by Hurricane Mitch, in the Departamento de Chinandega (NW Nicaragua). The abundant mass movements (mainly debris flows) produced during the Mitch rainfall event were investigated through aerial photographs at 1:60.000 scale (flight of December 1998), while much less conspicuous pre-Mitch landslides were detected on 1:40.000 aerial photographs (1996 flight). We mapped over one hundred mass movements at 1:10.000 scale in the field, and recorded information concerning regolith composition and thickness, mass movement dimensions and volumes, failure angle (around 22 degrees) and land use for each movement. We realised that, due to the extreme fragility of anthropic structures found in the area, any mass movement is highly destructive whatever its magnitude. On the other hand, we found an almost complete lack of data concerning the frequency of landsliding. Thus, the concepts of magnitude and frequency commonly used for hazard evaluation purposes were of little help in this case. With these considerations in mind, we found that hazard evaluation and zoning could be approached by combining two main concepts: (1) the observed degree of slope
von Manteuffel, Andreas; Schabinger, Robert M.
2017-04-01
We study a recently proposed approach to the numerical evaluation of multi-loop Feynman integrals using available sector decomposition programs. As our main example, we consider the two-loop integrals for the α α_s corrections to Drell-Yan lepton production with up to one massive vector boson in physical kinematics. As a reference, we evaluate these planar and non-planar integrals by the method of differential equations through to weight five. Choosing a basis of finite integrals for the numerical evaluation with SecDec 3 leads to tremendous performance improvements and renders the otherwise problematic seven-line topologies numerically accessible. As another example, basis integrals for massless three-loop QCD form factors are evaluated with FIESTA 4. Here, employing a basis of finite integrals results in an overall speedup of more than an order of magnitude.
Weisz, Daniel R; Hogg, David W; Rix, Hans-Walter; Dolphin, Andrew E; Dalcanton, Julianne J; Foreman-Mackey, Daniel T; Lang, Dustin; Johnson, L Clifton; Beerman, Lori C; Bell, Eric F; Gordon, Karl D; Gouliermis, Dimitrios; Kalirai, Jason S; Skillman, Evan D; Williams, Benjamin F
2012-01-01
We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (a) fully exploits the information content of a given dataset; (b) accounts for observational uncertainties in a straightforward way; (c) assigns meaningful uncertainties to the inferred parameters; (d) avoids the pitfalls associated with binning data; and (e) is applicable to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M > 1 Msun). Using simulated clusters and Markov chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and the stellar mass range they span, assuming that the uncertainties on individual masses and the completeness are well-characterized. Using idealized mock data, we compute the lower limit pr...
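The bin-free spirit of such an approach can be illustrated with the closed-form maximum-likelihood estimator for a pure power law p(m) ∝ m^(−α) on [m_min, ∞): α̂ = 1 + n / Σ ln(m_i/m_min). This toy version ignores the photometric uncertainties and completeness that the abstract's full method models explicitly, and the Salpeter-like test slope is just an example value.

```python
# Unbinned maximum-likelihood fit of a power-law mass function slope.
import math
import random

def powerlaw_sample(alpha: float, m_min: float, n: int, rng: random.Random):
    """Draw n masses from p(m) ~ m^-alpha, m >= m_min (inverse CDF)."""
    return [m_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def mle_slope(masses, m_min: float) -> float:
    """Closed-form unbinned MLE of the power-law slope alpha."""
    n = len(masses)
    return 1.0 + n / sum(math.log(m / m_min) for m in masses)

rng = random.Random(42)
masses = powerlaw_sample(alpha=2.35, m_min=1.0, n=5000, rng=rng)
alpha_hat = mle_slope(masses, m_min=1.0)
print(alpha_hat)  # close to the input slope 2.35
```

Because the estimator works on individual masses, there is no binning choice to bias the slope, which is point (d) in the abstract.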
[Historical and biological approaches to the study of Modern Age French plague mass burials].
Bianuccii, Raffaella; Tzortzis, Stéfan; Fornaciari, Gino; Signoli, Michel
2010-01-01
The "Black Death" and subsequent epidemics from 1346 to the early 18th century spread from the Caspian Sea all over Europe, six hundred years after the outbreak of the Justinian plague (541-767 AD). Plague has been one of the most devastating infectious diseases to affect humankind, causing approximately 200 million deaths historically. Here we describe the different approaches adopted in the study of several French putative plague mass burials dating to the Modern Age (16th-18th centuries). Through the complementation of historical, archaeological and paleobiological data, ample knowledge is gained both of the causes that favoured the spread of the Medieval plague in cities, towns and small villages and of the modifications of customary funerary practices in urban and rural areas due to plague.
Novel Approach to the Dark Matter Problem: Primordial Intermediate-Mass Black Holes
Frampton, Paul H
2016-01-01
A discussion, at a Scientific American level, of the idea that the constituents of the dark matter in galactic halos are primordial intermediate-mass black holes with masses between ten and one hundred thousand times the solar mass.
Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach
Energy Technology Data Exchange (ETDEWEB)
Hehn, L. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Armengaud, E.; Boissiere, T. de; Gros, M.; Navick, X.F.; Nones, C.; Paul, B. [CEA Saclay, DSM/IRFU, Gif-sur-Yvette Cedex (France); Arnaud, Q. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Queen' s University, Kingston (Canada); Augier, C.; Billard, J.; Cazes, A.; Charlieux, F.; Jesus, M. de; Gascon, J.; Juillard, A.; Queguiner, E.; Sanglard, V.; Vagneron, L. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Benoit, A.; Camus, P. [Institut Neel, CNRS/UJF, Grenoble (France); Berge, L.; Chapellier, M.; Dumoulin, L.; Giuliani, A.; Le-Sueur, H.; Marnieros, S.; Olivieri, E.; Poda, D. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Bluemer, J. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Broniatowski, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Eitel, K.; Kozlov, V.; Siebenborn, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Foerster, N.; Heuermann, G.; Scorza, S. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Jin, Y. [Laboratoire de Photonique et de Nanostructures, CNRS, Route de Nozay, Marcoussis (France); Kefelian, C. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Kleifges, M.; Tcherniakhovski, D.; Weber, M. 
[Karlsruher Institut fuer Technologie, Institut fuer Prozessdatenverarbeitung und Elektronik, Karlsruhe (Germany); Kraus, H. [University of Oxford, Department of Physics, Oxford (United Kingdom); Kudryavtsev, V.A. [University of Sheffield, Department of Physics and Astronomy, Sheffield (United Kingdom); Pari, P. [CEA Saclay, DSM/IRAMIS, Gif-sur-Yvette (France); Piro, M.C. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Rensselaer Polytechnic Institute, Troy, NY (United States); Rozov, S.; Yakushev, E. [JINR, Laboratory of Nuclear Problems, Dubna, Moscow Region (Russian Federation); Schmidt, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2016-10-15
We report on a dark matter search for a Weakly Interacting Massive Particle (WIMP) in the mass range m{sub χ} element of [4, 30] GeV/c{sup 2} with the EDELWEISS-III experiment. A 2D profile likelihood analysis is performed on data from eight selected detectors with the lowest energy thresholds leading to a combined fiducial exposure of 496 kg-days. External backgrounds from γ- and β-radiation, recoils from {sup 206}Pb and neutrons as well as detector intrinsic backgrounds were modelled from data outside the region of interest and constrained in the analysis. The basic data selection and most of the background models are the same as those used in a previously published analysis based on boosted decision trees (BDT) [1]. For the likelihood approach applied in the analysis presented here, a larger signal efficiency and a subtraction of the expected background lead to a higher sensitivity, especially for the lowest WIMP masses probed. No statistically significant signal was found and upper limits on the spin-independent WIMP-nucleon scattering cross section can be set with a hypothesis test based on the profile likelihood test statistics. The 90 % C.L. exclusion limit set for WIMPs with m{sub χ} = 4 GeV/c{sup 2} is 1.6 x 10{sup -39} cm{sup 2}, which is an improvement of a factor of seven with respect to the BDT-based analysis. For WIMP masses above 15 GeV/c{sup 2} the exclusion limits found with both analyses are in good agreement. (orig.)
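The profile-likelihood limit-setting described above can be illustrated in miniature with a single Poisson counting bin and a perfectly known background (the real analysis profiles nuisance parameters over many detectors and background components). Asymptotically, a one-sided 90% C.L. upper limit lies where the likelihood-ratio test statistic crosses 1.28². All counts below are invented.

```python
# Toy profile-likelihood upper limit for a Poisson counting experiment
# with known expected background b. Uses the asymptotic one-sided
# threshold z^2 with z = 1.2816 for 90% C.L. (a simplification of the
# full construction used in the paper).
import math

def q_mu(mu: float, n: int, b: float) -> float:
    """One-sided likelihood-ratio test statistic for signal strength mu."""
    mu_hat = max(0.0, n - b)          # MLE of the signal, bounded at zero
    if mu < mu_hat:
        return 0.0                    # one-sided: no penalty below the MLE
    def logl(s):
        lam = s + b                   # Poisson log-likelihood up to a constant
        return n * math.log(lam) - lam
    return -2.0 * (logl(mu) - logl(mu_hat))

def upper_limit(n: int, b: float, threshold: float = 1.2816 ** 2) -> float:
    """Scan mu upward until q(mu) crosses the asymptotic 90% threshold."""
    mu = 0.0
    while q_mu(mu, n, b) < threshold:
        mu += 0.001
    return mu

ul = upper_limit(n=7, b=5.0)
print(round(ul, 2))  # a few signal events excluded at 90% C.L.
```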
Mass Spectrometry-based Approaches to Understand the Molecular Basis of Memory
Directory of Open Access Journals (Sweden)
Arthur Henriques Pontes
2016-10-01
The central nervous system is responsible for an array of cognitive functions such as memory, learning, language and attention. These processes tend to take place in distinct brain regions; yet, they need to be integrated to give rise to adaptive or meaningful behavior. Since cognitive processes result from underlying cellular and molecular changes, genomics and transcriptomics assays have been applied to human and animal models to understand such events. Nevertheless, genes and RNAs are not the end products of most biological functions. In order to gain further insight into brain processes, the field of proteomics has been of increasing importance in recent years. Advancements in liquid chromatography-tandem mass spectrometry (LC-MS/MS) have enabled the identification and quantification of thousands of proteins with high accuracy and sensitivity, fostering a revolution in the neurosciences. Herein, we review the molecular bases of explicit memory in the hippocampus. We outline the principles of mass spectrometry (MS)-based proteomics, highlighting the use of this analytical tool to study memory formation. In addition, we discuss MS-based targeted approaches as the future of protein analysis.
García-Sevillano, M A; García-Barrera, T; Navarro, F; Montero-Lobato, Z; Gómez-Ariza, J L
2015-04-01
Mass spectrometry (MS)-based toxicometabolomics requires analytical approaches that yield unbiased metabolic profiles. The present work explores the general application of direct-infusion MS using a high-mass-resolution analyzer (a hybrid triple quadrupole time-of-flight system) and a complementary gas chromatography-MS analysis to mitochondrial extracts from mouse hepatic cells, with emphasis on mitochondria isolation from hepatic cells with a commercial kit, sample treatment after cell lysis, comprehensive metabolomic analysis, and pattern recognition from metabolic profiles. Finally, the metabolomic platform was successfully tested in a case study based on exposing mice (Mus musculus) to inorganic arsenic for 12 days. Alterations of endogenous metabolites were recognized by partial least squares-discriminant analysis. Subsequently, metabolites were identified by combining MS/MS analysis and metabolomics databases. This work reports for the first time the effects of As exposure on hepatic mitochondrial metabolic pathways based on MS, and reveals disturbances in the Krebs cycle, the β-oxidation pathway and amino acid degradation, as well as perturbations in creatine levels. This non-targeted analysis provides extensive metabolic information on the mitochondrial organelle, which could be applied to toxicology, pharmacology and clinical studies.
Depression, body mass index, and chronic obstructive pulmonary disease – a holistic approach
Catalfo, Giuseppe; Crea, Luciana; Lo Castro, Tiziana; Magnano San Lio, Francesca; Minutolo, Giuseppe; Siscaro, Gherardo; Vaccino, Noemi; Crimi, Nunzio; Aguglia, Eugenio
2016-01-01
Background Several clinical studies suggest common underlying pathogenetic mechanisms of COPD and depressive/anxiety disorders. We aim to evaluate psychopathological and physical effects of aerobic exercise, proposed in the context of pulmonary rehabilitation, in a sample of COPD patients, through the correlation of some psychopathological variables and physical/pneumological parameters. Methods Fifty-two consecutive subjects were enrolled. At baseline, the sample was divided into two subgroups consisting of 38 depression-positive and 14 depression-negative subjects according to the Hamilton Depression Rating Scale (HAM-D). After the rehabilitation treatment, we compared psychometric and physical examinations between the two groups. Results The differences after the rehabilitation program in all assessed parameters demonstrated a significant improvement in psychiatric and pneumological conditions. The reduction of BMI was significantly correlated with fat mass but only in the depression-positive patients. Conclusion Our results suggest that pulmonary rehabilitation improves depressive and anxiety symptoms in COPD. This improvement is significantly related to the reduction of fat mass and BMI only in depressed COPD patients, in whom these parameters were related at baseline. These findings suggest that depressed COPD patients could benefit from a rehabilitation program in the context of a multidisciplinary approach. PMID:26929612
Bowden, Peter; Beavis, Ron; Marshall, John
2009-11-02
A goodness-of-fit test may be used to assign tandem mass spectra of peptides to amino acid sequences and to directly calculate the expected probability of mis-identification. The product of the peptide expectation values directly yields the probability that the parent protein has been mis-identified. A relational database can capture the mass spectral data and the best-fit results, and permit subsequent calculations by a general statistical analysis system. The many files of the HUPO blood protein data, correlated by X!TANDEM against the proteins of ENSEMBL, were collected into a relational database. A redundant set of 247,077 proteins and peptides was correlated by X!TANDEM and collapsed to a set of 34,956 peptides from 13,379 distinct proteins. About 6875 distinct proteins were represented by only a single distinct peptide, 2866 proteins showed 2 distinct peptides, and 3454 proteins showed at least three distinct peptides by X!TANDEM. More than 99% of the peptides were associated with proteins that had cumulative expectation values, i.e. probabilities of false-positive identification, of one in one hundred or less. The distribution of peptides per protein from X!TANDEM was significantly different from that expected from random assignment of peptides.
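The key arithmetic in the abstract is simple: if each peptide assignment has an expectation value e_i (the expected probability that the match is random), the protein-level false-positive probability is taken as the product of its peptides' expectation values. The sketch below shows only that product; the example values are invented.

```python
# Protein-level false-positive probability as the product of
# peptide-level expectation values, as described in the abstract.
from math import prod

def protein_expectation(peptide_expectations):
    """Cumulative expectation value for a protein from its peptides."""
    return prod(peptide_expectations)

# Three independent peptides, each individually weak evidence, jointly
# give a one-in-a-thousand chance of a false protein identification:
p = protein_expectation([0.05, 0.1, 0.2])
print(p)  # approximately 0.001
```

This is why proteins backed by several distinct peptides dominate the confident identifications in the dataset: each extra peptide multiplies the mis-identification probability down.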
Sugimura, Natsuhiko; Igarashi, Yoko; Aoyama, Reiko; Shibue, Toshimichi
2017-02-01
Analysis of the fragmentation pathways of molecules in mass spectrometry gives a fundamental insight into gas-phase ion chemistry. However, the conventional intrinsic reaction coordinate method requires knowledge of the transition-state ion structures in the fragmentation pathways. Herein, we use the nudged elastic band method, which requires only the initial and final state ion structures in the fragmentation pathways, and report the advantages and limitations of the method. We found a minimum energy path of p-benzoquinone ion fragmentation with two saddle points and one intermediate structure. The primary energy barrier, which corresponded to the cleavage of the C-C bond adjacent to the CO group, was calculated to be 1.50 eV. An additional energy barrier, which corresponded to the cleavage of the CO group, was calculated to be 0.68 eV. We also found an energy barrier of 3.00 eV, which was the rate-determining step of the keto-enol tautomerization in CO elimination from the molecular ion of phenol. The nudged elastic band method allowed the determination of a minimum energy path using only the initial and final state ion structures in the fragmentation pathways, and it proved faster than the conventional intrinsic reaction coordinate method. In addition, this method was found to be effective in the analysis of the charge structures of the molecules during fragmentation in mass spectrometry.
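As a rough illustration of the nudged-elastic-band idea (only the initial and final structures are supplied, no transition-state guess), here is a minimal sketch on a two-dimensional model potential; the surface, image count, and step sizes are invented for the example and have nothing to do with p-benzoquinone:

```python
import numpy as np

def V(p):
    # Model surface: a double well with minima near (+-1, 0) plus a bump
    # at the origin that bends the minimum-energy path away from y = 0.
    x, y = p
    return (x**2 - 1)**2 + y**2 + 2.0 * np.exp(-10.0 * (x**2 + y**2))

def grad_V(p):
    x, y = p
    e = np.exp(-10.0 * (x**2 + y**2))
    return np.array([4*x*(x**2 - 1) - 40.0*x*e, 2*y - 40.0*y*e])

def neb(start, end, n_images=13, k=1.0, step=0.005, n_iter=4000):
    path = np.linspace(start, end, n_images)  # straight initial band
    path[:, 1] += 0.2 * np.sin(np.linspace(0, np.pi, n_images))  # break symmetry
    for _ in range(n_iter):
        new = path.copy()
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)
            # Spring force acts only along the band tangent ...
            f_spring = k * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * tau
            # ... while only the perpendicular part of the true force is kept
            # (this projection is the "nudge").
            g = -grad_V(path[i])
            f_perp = g - np.dot(g, tau) * tau
            new[i] = path[i] + step * (f_perp + f_spring)
        path = new
    return path

band = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
barrier = max(V(p) for p in band) - V(band[0])  # approximate saddle height
```

For this surface the band detours around the central bump, so the recovered barrier (about 1.4) is well below the straight-line maximum of about 3.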
A simplified calculation approach for settlement of pile groups
Institute of Scientific and Technical Information of China (English)
张乾青; 张忠苗
2012-01-01
The pile-head settlement of a single pile in homogeneous or multilayered soil can be subdivided into three components: the settlement at the pile tip induced by the mobilized tip load, the compression of the pile shaft, and the settlement caused by skin friction. Based on the settlement of a single pile and the equivalent pier method, a simplified approach for calculating the average settlement of pile groups is obtained. The method can account for the nonlinear relationship between pile-tip load and pile-tip settlement and for the nonlinear settlement induced by skin friction, as well as for the contribution of pile-shaft compression to the pile-head settlement. Worked examples show that the settlements of a single pile and of pile groups obtained with the present method are generally in good agreement with measured values and with values calculated by other methods, confirming the rationality of the approach.
Nam, Kwangho
2014-10-14
Development of a multiscale ab initio quantum mechanical/molecular mechanical (AI-QM/MM) method for periodic boundary molecular dynamics (MD) simulations, and its acceleration by a multiple-time-step approach, are described. The developed method achieves accuracy and efficiency by integrating the AI-QM/MM level of theory with the previously developed semiempirical (SE) QM/MM-Ewald sum method [J. Chem. Theory Comput. 2005, 1, 2], extended here to the smooth particle-mesh Ewald (PME) summation method. In the developed method, the total energy of the simulated system is evaluated at the SE-QM/MM-PME level of theory to include long-range QM/MM electrostatic interactions, and is then corrected on the fly using the AI-QM/MM level of theory within the real-space cutoff. The resulting energy expression enables decomposition of the total force on each atom into forces determined by the low-level SE-QM/MM method and correction forces at the AI-QM/MM level, so that the system can be integrated using the reversible reference system propagator algorithm. The method achieves a substantial speed-up of the entire calculation by minimizing the number of time-consuming energy and gradient evaluations at the AI-QM/MM level. Test calculations show that the developed multiple-time-step AI-QM/MM method yields MD trajectories and potential-of-mean-force profiles comparable to single-time-step QM/MM results. Together with message passing interface (MPI) parallelization, the developed method accelerates the present AI-QM/MM MD simulations about 30-fold relative to single-core AI-QM/MM simulations for the molecular systems tested here, making the method less than one order of magnitude slower than SE-QM/MM methods under periodic boundary conditions.
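The force splitting and reversible multiple-time-step integration described above can be caricatured for a one-dimensional toy system; the harmonic "reference" and "correction" forces below stand in for the cheap SE-QM/MM and expensive AI-QM/MM levels and are purely illustrative:

```python
def respa_step(x, v, m, f_ref, f_corr, dt_outer, n_inner):
    """One outer step of the reversible reference-system propagator:
    the expensive correction force is applied as half-kicks at the outer
    time step, while the cheap reference force is integrated with the
    smaller inner time step (velocity Verlet)."""
    v += 0.5 * dt_outer * f_corr(x) / m          # outer half-kick
    dt = dt_outer / n_inner
    for _ in range(n_inner):
        v += 0.5 * dt * f_ref(x) / m
        x += dt * v
        v += 0.5 * dt * f_ref(x) / m
    v += 0.5 * dt_outer * f_corr(x) / m          # outer half-kick
    return x, v

# Toy model: the full force is -k*x; the "cheap" level uses k_ref, and the
# "expensive" correction supplies only the remainder (k - k_ref).
k, k_ref, m = 4.0, 3.0, 1.0
f_ref  = lambda x: -k_ref * x
f_corr = lambda x: -(k - k_ref) * x

x, v = 1.0, 0.0
for _ in range(2000):
    x, v = respa_step(x, v, m, f_ref, f_corr, dt_outer=0.05, n_inner=5)

energy = 0.5 * m * v * v + 0.5 * k * x * x       # stays near the initial 2.0
```

Because the splitting is symplectic, the total energy stays bounded near its initial value even though the correction force is evaluated five times less often than the reference force.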
Energy Technology Data Exchange (ETDEWEB)
Ali, Ahmed [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Parkhomenko, Alexander Ya.; Rusov, Aleksey V. [P.G. Demidov Yaroslavl State Univ. (Russian Federation). Dept. of Theoretical Physics
2013-12-15
We present a precise calculation of the dilepton invariant-mass spectrum and the decay rate for B± → π± l+ l− (l± = e±, μ±) in the Standard Model (SM), based on the effective-Hamiltonian approach for the b → d l+ l− transitions. With the Wilson coefficients already known to next-to-next-to-leading-logarithmic (NNLL) accuracy, the remaining theoretical uncertainty in the short-distance contribution resides in the form factors f+(q²), f0(q²) and fT(q²). Of these, f+(q²) is well measured in the charged-current semileptonic decays B → π l ν_l, and we use the B-factory data to parametrize it. The corresponding form factors for the B → K transitions have been calculated in the lattice-QCD approach at large q² and extrapolated to the entire q² region using the so-called z-expansion. Using an SU(3)_F-breaking ansatz, we calculate the B → π tensor form factor, which is consistent with the recently reported lattice B → π analysis obtained at large q². The prediction for the total branching fraction B(B± → π± μ+ μ−) = (1.88 +0.32/−0.21) × 10^-8 is in good agreement with the experimental value obtained by the LHCb collaboration. In the low-q² region, heavy-quark symmetry (HQS) relates the three form factors to each other. Accounting for the leading-order symmetry-breaking effects, and using data from the charged-current process B → π l ν_l to determine f+(q²), we calculate the dilepton invariant-mass distribution in the low-q² region of the B± → π± l+ l− decay. This provides a model-independent and precise calculation of the partial branching ratio for this decay.
Wenzel, Jan; Holzer, Andre; Wormit, Michael; Dreuw, Andreas
2015-06-01
The extended second-order algebraic-diagrammatic construction (ADC(2)-x) scheme for the polarization operator, in combination with the core-valence separation (CVS) approximation, is well known to be a powerful quantum chemical method for the calculation of core-excited states and the description of X-ray absorption spectra. For the first time, the implementation and results of the third-order approach, CVS-ADC(3), are reported. To this end, the CVS approximation has been applied to the ADC(3) working equations, and the resulting terms have been implemented efficiently in the adcman program. By treating the α and β spins separately, the unrestricted variant CVS-UADC(3) for the treatment of open-shell systems has been implemented as well. The performance and accuracy of the CVS-ADC(3) method are demonstrated on a set of small and medium-sized organic molecules: results obtained at the CVS-ADC(3) level are compared with CVS-ADC(2)-x values as well as experimental data by calculating complete-basis-set limits. The influence of basis sets is further investigated by employing a large set of different basis sets. Besides the accuracy of core-excitation energies and oscillator strengths, the importance of Cartesian basis functions and the treatment of orbital-relaxation effects are analyzed in this work, as well as computational timings. It turns out that the CVS-ADC(3) results are not improved over CVS-ADC(2)-x when compared with experimental data, because the fortuitous error compensation inherent in the CVS-ADC(2)-x approach is broken: while CVS-ADC(3) overestimates core-excitation energies on average by 0.61% ± 0.31%, CVS-ADC(2)-x provides an average underestimation of -0.22% ± 0.12%. The best agreement with experiment is thus achieved using the CVS-ADC(2)-x method in combination with a diffuse Cartesian basis set of at least triple-ζ quality.
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.
2016-10-01
We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the proposed xLMA method inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present at equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a
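The equivalence between Gibbs energy minimization and mass-action equilibrium that the xLMA method builds on can be seen in a toy two-species system; the standard potentials, the ideal-mixing activity model, and the golden-section minimizer below are assumptions of this sketch, not Reaktoro's API:

```python
import math

# Toy closed system A <-> B at fixed T and P, working in units of RT = 1.
mu0_A, mu0_B = 0.0, -1.0          # assumed standard chemical potentials
n_total = 1.0                     # mass balance: n_A + n_B = n_total

def gibbs(n_A):
    """Total Gibbs energy of an ideal mixture with amounts (n_A, n_B)."""
    n_B = n_total - n_A
    G = 0.0
    for n, mu0 in ((n_A, mu0_A), (n_B, mu0_B)):
        if n > 0:
            G += n * (mu0 + math.log(n / n_total))
    return G

# Minimize G by golden-section search over the single mass-balance dof.
lo, hi = 1e-12, n_total - 1e-12
phi = (math.sqrt(5) - 1) / 2
for _ in range(200):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if gibbs(a) < gibbs(b):
        hi = b
    else:
        lo = a
n_A = 0.5 * (lo + hi)
n_B = n_total - n_A

# The GEM minimum reproduces the mass-action result:
# K = x_B / x_A = exp(mu0_A - mu0_B) = e in these units.
K = (n_B / n_total) / (n_A / n_total)
```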
Kiran Yildirim, Demet; Abdelnasser, Amr; Doner, Zeynep; Kumral, Mustafa
2016-04-01
The Halilar Cu-Pb (-Zn) mineralization, hosted in the volcanogenic metasediments of the Bagcagiz Formation in Balikesir province, NW Turkey, represents a locally vein-type deposit restricted to a fault gouge zone trending NE-SW along the lower boundary between the Bagcagiz Formation and the Duztarla granitic intrusion in the study area. This granite is traversed by numerous mineralized sheeted vein systems, which locally transgress into the surrounding metasediments. The mineralization is closely associated with intense hydrothermal alteration, brecciation, and quartz stockwork veining. The ore mineral assemblage includes chalcopyrite, galena, and some sphalerite with covellite and goethite, formed during three phases of mineralization (pre-ore, main ore, and supergene) within an abundant gangue of quartz and calcite. Geologic and field relationships together with petrographic and mineralogical studies reveal two alteration zones accompanying the Cu-Pb (-Zn) mineralization along the contact between the Bagcagiz Formation and the Duztarla granite: pervasive phyllic alteration (quartz, sericite, and pyrite) and selective propylitic alteration (albite, calcite, epidote, sericite and/or chlorite). Using mass-balance calculations, this work reports the mass/volume changes (gains and losses) of the chemical components of the hydrothermal alteration zones associated with the Halilar Cu-Pb (-Zn) mineralization in the Balikesir area (Turkey). The phyllic alteration shows enrichment in Si, Fe, K, Ba, and LOI and depletion in Mg, Ca, and Na, reflecting sericitization of alkali feldspar and destruction of ferromagnesian minerals. This zone, with high Cu and Pb and some Zn, represents the main mineralized zone. The propylitic zone, on the other hand, is characterized by addition of Ca, Na, K, Ti, P, and Ba with LOI and Cu (lower content), referring to the replacement of plagioclase and ferromagnesian minerals by albite, calcite, epidote, and sericite
Rai, Neeraj; Tiwari, Surya P; Maginn, Edward J
2012-09-06
Advances in computational algorithms and methodologies make it possible to use highly accurate quantum mechanical calculations to develop force fields (pair-wise additive intermolecular potentials) for condensed phase simulations. Despite these advances, this approach faces numerous hurdles for the case of actinyl ions, AcO2(n+) (high-oxidation-state actinide dioxo cations), mainly due to the complex electronic structure resulting from an interplay of s, p, d, and f valence orbitals. Traditional methods use a pair of molecules (“dimer”) to generate a potential energy surface (PES) for force field parametrization based on the assumption that many body polarization effects are negligible. We show that this is a poor approximation for aqueous phase uranyl ions and present an alternative approach for the development of actinyl ion force fields that includes important many body solvation effects. Force fields are developed for the UO2(2+) ion with the SPC/Fw, TIP3P, TIP4P, and TIP5P water models and are validated by carrying out detailed molecular simulations on the uranyl aqua ion, one of the most characterized actinide systems. It is shown that the force fields faithfully reproduce available experimental structural data and hydration free energies. Failure to account for solvation effects when generating PES leads to overbinding between UO2(2+) and water, resulting in incorrect hydration free energies and coordination numbers. A detailed analysis of arrangement of water molecules in the first and second solvation shell of UO2(2+) is presented. The use of a simple functional form involving the sum of Lennard-Jones + Coulomb potentials makes the new force field compatible with a large number of available molecular simulation engines and common force fields.
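The "Lennard-Jones + Coulomb" functional form that makes the force field broadly compatible can be written in a few lines; the parameter values below are placeholders for illustration, not the published UO2(2+) parameters:

```python
import math

def lj_coulomb(r, eps, sigma, q1, q2, ke=332.0637):
    """Pairwise-additive 12-6 Lennard-Jones plus Coulomb interaction.
    With r in Angstroms, charges in units of e, and ke (approximately the
    Coulomb constant in kcal*Angstrom/(mol*e^2)), energy is in kcal/mol."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6) + ke * q1 * q2 / r

# With charges switched off, the LJ well minimum sits at r = 2^(1/6)*sigma
# with depth -eps (placeholder eps and sigma, chosen only for the demo).
r_min = 2 ** (1 / 6) * 3.2
well_depth = lj_coulomb(r_min, 0.15, 3.2, 0.0, 0.0)
```

The simplicity of this sum is exactly what the abstract highlights: any simulation engine that evaluates LJ and Coulomb pair terms can use such a parameter set unchanged.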
Dries, M.; Trager, S. C.; Koopmans, L. V. E.
2016-11-01
Recent studies based on the integrated light of distant galaxies suggest that the initial mass function (IMF) might not be universal. Variations of the IMF with galaxy type and/or formation time may have important consequences for our understanding of galaxy evolution. We have developed a new stellar population synthesis (SPS) code specifically designed to reconstruct the IMF. We implement a novel approach combining regularization with hierarchical Bayesian inference. Within this approach, we use a parametrized IMF prior to regulate a direct inference of the IMF. This direct inference gives more freedom to the IMF and allows the model to deviate from parametrized models when demanded by the data. We use Markov chain Monte Carlo sampling techniques to reconstruct the best parameters for the IMF prior, the age and the metallicity of a single stellar population. We present our code and apply our model to a number of mock single stellar populations with different ages, metallicities and IMFs. When systematic uncertainties are not significant, we are able to reconstruct the input parameters that were used to create the mock populations. Our results show that if systematic uncertainties do play a role, this may introduce a bias on the results. Therefore, it is important to objectively compare different ingredients of SPS models. Through its Bayesian framework, our model is well suited for this.
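A stripped-down Metropolis sampler conveys the Markov chain Monte Carlo machinery used here; the Gaussian toy likelihood stands in for the far more complex stellar-population likelihood, and every number below is illustrative:

```python
import math, random

random.seed(1)

# Mock "observed" data: draws from a Gaussian with unknown mean mu and
# known sigma, standing in for a mock stellar population's observables.
true_mu, sigma = 3.0, 1.0
data = [random.gauss(true_mu, sigma) for _ in range(200)]

def log_posterior(mu):
    # Flat prior on mu; Gaussian log-likelihood with constants dropped.
    return -sum((d - mu) ** 2 for d in data) / (2.0 * sigma ** 2)

# Random-walk Metropolis sampling of the posterior for mu.
mu = 0.0
lp = log_posterior(mu)
chain = []
for step in range(12000):
    proposal = mu + random.gauss(0.0, 0.2)
    lp_prop = log_posterior(proposal)
    if math.log(random.random()) < lp_prop - lp:   # Metropolis acceptance
        mu, lp = proposal, lp_prop
    if step >= 2000:                               # discard burn-in
        chain.append(mu)

posterior_mean = sum(chain) / len(chain)           # close to true_mu = 3.0
```

As in the abstract's mock tests, when the data are generated by the model itself the sampler recovers the input parameter, and the spread of the chain quantifies the uncertainty.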
Bentley, I; Frauendorf, S
2013-01-01
A model with nucleons in a charge-independent potential well interacting via an isovector pairing force is considered. For a 24-dimensional valence space, the Hartree-Bogolyubov (HB) plus random-phase approximation (RPA) estimate of the lowest eigenvalue of the Hamiltonian is shown to be accurate except near values of the pairing coupling constant G where the HB solution shifts from a zero to a non-zero pair gap. In the limit G → ∞ the HB + RPA is asymptotically exact. The inaccuracy of the HB + RPA in the critical regions of G can be remedied by interpolation. The resulting algorithm is used to calculate pairing corrections in the framework of a Nilsson-Strutinsky calculation of nuclear masses near N = Z for A = 24-100, where N and Z are the numbers of neutrons and protons and A = N + Z. The dimension of the valence space is 2A in these calculations. Adjusting five liquid-drop parameters and a power-law expression for the coupling constant G as a function of A allows us to reproduce the measured binding energies of...
Biogeochemical mass balances in a turbid tropical reservoir. Field data and modelling approach
Phuong Doan, Thuy Kim; Némery, Julien; Gratiot, Nicolas; Schmid, Martin
2014-05-01
The turbid tropical Cointzio reservoir, located in the Trans-Mexican Volcanic Belt (TMVB), behaves as a warm monomictic water body (area = 6 km², capacity 66 Mm³, residence time ~ 1 year). It is strategic for the drinking water supply of the city of Morelia, capital of the state of Michoacán, and for downstream irrigation during the dry season. This reservoir is a perfect example of a human-impacted system, since its watershed is mainly composed of degraded volcanic soils and is subjected to intense erosion and agricultural losses. The reservoir is threatened by the accumulation of sediment and nutrients originating from untreated waters in the upstream watershed. The high content of very fine clay particles and the lack of water treatment plants lead to serious episodes of eutrophication (up to 70 μg chl. a L⁻¹) and high levels of turbidity. From field measurements (Secchi depth, water vertical profiles, reservoir inflow and outflow) we determined suspended sediment (SS), carbon (C), nitrogen (N) and phosphorus (P) mass balances. Watershed SS yields were estimated at 35 t km⁻² y⁻¹, of which 89-92% were trapped in the Cointzio reservoir. As a consequence, the reservoir has already lost 25% of its initial storage capacity since its construction in 1940. Nutrient mass balances showed that 50% and 46% of incoming P and N, respectively, were retained, by sedimentation and mainly through denitrification. Removal of about 30% of incoming C was also observed, both by sedimentation and through gas emission. To complete the field data analyses we examined the ability of vertical one-dimensional (1DV) numerical models (the Aquasim biogeochemical model coupled with a k-ε mixing model) to reproduce the main biogeochemical cycles in the Cointzio reservoir. The model can describe all the mineralization processes both in the water column and in the sediment. The values of the entire mass balance of nutrients and of the mineralization rates (denitrification and aerobic benthic mineralization) calculated from the model
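The retention figures above come from a simple input-output mass balance; a sketch with made-up loads is:

```python
def retention_fraction(load_in, load_out):
    """Fraction of the incoming load retained in the reservoir:
    (inputs - outputs) / inputs, the core of each mass balance."""
    return (load_in - load_out) / load_in

# Made-up annual loads: 100 t of sediment in and 10 t flowing out downstream
# give a trapping efficiency of 90%, in the range quoted above (89-92%).
trapping = retention_fraction(100.0, 10.0)
```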
Testing mixed action approaches to meson spectroscopy with twisted mass sea quarks
Berlin, Joshua; Wagner, Marc
2013-01-01
We explore and compare three mixed action setups with Wilson twisted mass sea quarks and different valence quark actions: (1) Wilson twisted mass, (2) Wilson twisted mass + clover and (3) Wilson + clover. Our main goal is to reduce lattice discretization errors in mesonic spectral quantities, in particular to reduce twisted mass parity and isospin breaking.
Maciejewska, Beata; Łabędzki, Paweł; Piasecki, Artur; Piasecka, Magdalena
The paper presents methods of heat transfer coefficient determination for boiling research during FC-72 flow in a minichannel. The boundary condition, in the form of temperature distributions on the outer side of the heated minichannel wall, was obtained using infrared thermography. Two-dimensional steady-state heat flow was assumed. The local values of the heat transfer coefficient on the surface between the heated foil and the boiling liquid were determined from a Robin boundary condition. Data necessary for the heat transfer coefficient evaluation were obtained from numerical computations using two approaches: a calculation procedure based on Trefftz functions, and FEM simulations with the ADINA software. The shape functions were linear combinations of Trefftz functions; such combinations satisfy the governing differential equation exactly. The coefficients of the linear combination in the approximate solution were chosen to minimize residuals on the domain boundary and along common edges of adjacent elements. Temperature measurement points were located at boundary nodes. In the FEM simulations, 4-node FCBI elements were used; the fluid flow was assumed laminar and incompressible, and the material constants of the fluid and the foil were taken to be independent of temperature. The results of the comparative analysis are presented and discussed.
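A minimal sketch of the Trefftz idea (basis functions that satisfy the governing equation exactly, with coefficients fitted to boundary data only) for the steady-state Laplace equation; the harmonic basis, the square domain, and the synthetic "measurements" are all invented for illustration:

```python
import numpy as np

def trefftz_basis(x, y):
    # Each basis function is harmonic, i.e. satisfies Laplace's equation
    # exactly, so only the boundary residual needs to be minimized.
    return np.array([np.ones_like(x), x, y,
                     x * x - y * y, x * y,
                     x**3 - 3 * x * y**2, 3 * x**2 * y - y**3]).T

# Synthetic "boundary temperatures" on the unit square, generated from an
# exact harmonic field T = x^2 - y^2 + 2x (a stand-in for infrared data).
t = np.linspace(0.0, 1.0, 25)
zeros, ones = np.zeros_like(t), np.ones_like(t)
xb = np.concatenate([t, t, zeros, ones])
yb = np.concatenate([zeros, ones, t, t])
Tb = xb**2 - yb**2 + 2 * xb

# Least-squares choice of coefficients minimizes the boundary residual.
coef, *_ = np.linalg.lstsq(trefftz_basis(xb, yb), Tb, rcond=None)

# The fitted combination then reproduces the field inside the domain too.
T_mid = (trefftz_basis(np.array([0.5]), np.array([0.5])) @ coef)[0]
```

Because the exact field lies in the span of the basis, the fit is essentially exact here; with real measurement noise the least-squares step is what keeps the interior reconstruction stable.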
Wu, D; Yu, W; Fritzsche, S
2016-01-01
A Monte Carlo approach to proton stopping in warm dense matter is implemented into an existing particle-in-cell code. The model is based on multiple binary collisions among electron-electron, electron-ion and ion-ion pairs, takes into account contributions from both free and bound electrons, and allows particle stopping to be calculated in a much more natural manner. In the low-temperature limit, when "all" electrons are bound to the nuclei, the stopping power converges to the predictions of Bethe-Bloch theory, showing good consistency with data provided by NIST. With rising temperature, more and more bound electrons are ionized, giving rise to a stopping power exceeding that of cold matter, consistent with a recent experimental measurement [Phys. Rev. Lett. 114, 215002 (2015)]. When the temperature is increased further, with ionization reaching its maximum, a lowered stopping power is observed, due to the suppression of the collision frequency between the projectile proton beam and h...
Directory of Open Access Journals (Sweden)
Maciejewska Beata
2017-01-01
Full Text Available The paper presents methods of heat transfer coefficient determination for boiling research during FC-72 flow in a minichannel. The boundary condition, in the form of temperature distributions on the outer side of the heated minichannel wall, was obtained using infrared thermography. Two-dimensional steady-state heat flow was assumed. The local values of the heat transfer coefficient on the surface between the heated foil and the boiling liquid were determined from a Robin boundary condition. Data necessary for the heat transfer coefficient evaluation were obtained from numerical computations using two approaches: a calculation procedure based on Trefftz functions, and FEM simulations with the ADINA software. The shape functions were linear combinations of Trefftz functions; such combinations satisfy the governing differential equation exactly. The coefficients of the linear combination in the approximate solution were chosen to minimize residuals on the domain boundary and along common edges of adjacent elements. Temperature measurement points were located at boundary nodes. In the FEM simulations, 4-node FCBI elements were used; the fluid flow was assumed laminar and incompressible, and the material constants of the fluid and the foil were taken to be independent of temperature. The results of the comparative analysis are presented and discussed.
Wu, D.; He, X. T.; Yu, W.; Fritzsche, S.
2017-02-01
A physical model based on a Monte Carlo approach is proposed to calculate the ionization dynamics of hot solid-density plasmas within particle-in-cell (PIC) simulations, in which impact (collision) ionization (CI), electron-ion recombination (RE), and ionization potential depression (IPD) by surrounding plasmas are taken into consideration self-consistently. In contrast to other models applied in the literature to plasmas near thermal equilibrium, the temporal relaxation of the ionization dynamics can also be simulated by the proposed model. Moreover, the model is general and can be applied to both single elements and alloys of quite different compositions. The proposed model is implemented into a PIC code, with final ionization equilibria sustained by the competition between CI and its inverse process (RE). Comparisons between the full model and models without IPD or RE are performed. Our results indicate that for bulk aluminium at temperatures of 1 to 1000 eV, (i) the average ionization degree increases when IPD is included, while (ii) the average ionization degree is significantly overestimated when RE is neglected. A direct comparison from the PIC code is made with existing models for the dependence of the average ionization degree on thermal-equilibrium temperature, and shows good agreement with results generated from the Saha-Boltzmann model and/or the FLYCHK code.
Pankoke, S.; Buck, B.; Woelfel, H. P.
1998-08-01
Long-term whole-body vibration can cause degeneration of the lumbar spine. Existing degeneration therefore has to be assessed, as do industrial working places, to prevent further damage. Hence the mechanical stress in the lumbar spine, especially in the three lower vertebrae, has to be known. This stress can be expressed as internal forces. These internal forces cannot be evaluated experimentally, because force transducers cannot be implanted in the force lines for ethical reasons. It is thus necessary to calculate the internal forces with a dynamic mathematical model of sitting man. A two-dimensional dynamic finite element model of sitting man is presented which allows calculation of these unknown internal forces. The model is based on an anatomical representation of the lower lumbar spine (L3-L5). This lumbar spine model is incorporated into a dynamic model of the upper torso with neck, head and arms, as well as a model of the body caudal to the lumbar spine with pelvis and legs. Additionally, a simple dynamic representation of the viscera is used. All these parts are modelled as rigid bodies connected by linear stiffnesses. Energy dissipation is modelled by assigning modal damping ratios to the calculated undamped eigenvalues. The geometry and inertial properties of the model are determined according to human anatomy. The stiffnesses of the spine model are derived from static in-vitro experiments in references [1] and [2]. The remaining stiffness parameters and the parameters for energy dissipation are determined by parameter identification against the measurements in reference [3]. The model, which is available in three different postures, allows one to adjust its parameters for body height and body mass to the values of the person for whom internal forces are to be calculated.
Depression, body mass index, and chronic obstructive pulmonary disease – a holistic approach
Directory of Open Access Journals (Sweden)
Catalfo G
2016-02-01
Full Text Available Giuseppe Catalfo,1 Luciana Crea,1 Tiziana Lo Castro,1 Francesca Magnano San Lio,1 Giuseppe Minutolo,1 Gherardo Siscaro,2 Noemi Vaccino,1 Nunzio Crimi,3 Eugenio Aguglia1 1Department of Psychiatry, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy; 2Operative Unit Neurorehabilitation, IRCCS Fondazione Salvatore Maugeri, Sciacca, Italy; 3Department of Pneumology, Policlinico “G. Rodolico” University Hospital, University of Catania, Catania, Italy Background: Several clinical studies suggest common underlying pathogenetic mechanisms of COPD and depressive/anxiety disorders. We aim to evaluate psychopathological and physical effects of aerobic exercise, proposed in the context of pulmonary rehabilitation, in a sample of COPD patients, through the correlation of some psychopathological variables and physical/pneumological parameters. Methods: Fifty-two consecutive subjects were enrolled. At baseline, the sample was divided into two subgroups consisting of 38 depression-positive and 14 depression-negative subjects according to the Hamilton Depression Rating Scale (HAM-D). After the rehabilitation treatment, we compared psychometric and physical examinations between the two groups. Results: The differences after the rehabilitation program in all assessed parameters demonstrated a significant improvement in psychiatric and pneumological conditions. The reduction of BMI was significantly correlated with fat mass but only in the depression-positive patients. Conclusion: Our results suggest that pulmonary rehabilitation improves depressive and anxiety symptoms in COPD. This improvement is significantly related to the reduction of fat mass and BMI only in depressed COPD patients, in whom these parameters were related at baseline. These findings suggest that depressed COPD patients could benefit from a rehabilitation program in the context of a multidisciplinary approach. Keywords: COPD, depression, aerobic exercise
Energy Technology Data Exchange (ETDEWEB)
Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Hogg, David W.; Foreman-Mackey, Daniel T. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Rix, Hans-Walter; Gouliermis, Dimitrios [Max Planck Institute for Astronomy, Koenigstuhl 17, D-69117 Heidelberg (Germany); Dolphin, Andrew E. [Raytheon Company, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Lang, Dustin [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Bell, Eric F. [Department of Astronomy, University of Michigan, 500 Church Street, Ann Arbor, MI 48109 (United States); Gordon, Karl D.; Kalirai, Jason S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Skillman, Evan D., E-mail: dweisz@astro.washington.edu [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)
2013-01-10
We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M_Sun). Using simulated clusters and Markov chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the
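The dependence of the slope estimate on sample size and observed mass range can be explored with an unbinned maximum-likelihood fit to mock data, in the spirit of the approach above (though the paper's full probabilistic model also treats measurement uncertainty and completeness, which this sketch omits):

```python
import math, random

random.seed(7)

def sample_powerlaw(alpha, m_lo, m_hi, n):
    # Inverse-CDF sampling of dN/dm proportional to m^-alpha on [m_lo, m_hi].
    a1 = 1.0 - alpha
    return [(m_lo**a1 + random.random() * (m_hi**a1 - m_lo**a1)) ** (1.0 / a1)
            for _ in range(n)]

def loglike(alpha, masses, m_lo, m_hi):
    # Unbinned log-likelihood of the truncated power law (alpha != 1).
    a1 = 1.0 - alpha
    log_norm = math.log((m_hi**a1 - m_lo**a1) / a1)
    return sum(-alpha * math.log(m) - log_norm for m in masses)

# Mock cluster: 2000 stars with a Salpeter-like slope over 1-100 M_sun.
masses = sample_powerlaw(2.35, 1.0, 100.0, 2000)

# Maximum-likelihood slope by golden-section search on [1.5, 3.5].
phi = (math.sqrt(5) - 1) / 2
lo, hi = 1.5, 3.5
for _ in range(100):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if loglike(a, masses, 1.0, 100.0) > loglike(b, masses, 1.0, 100.0):
        hi = b
    else:
        lo = a
alpha_hat = 0.5 * (lo + hi)
```

Rerunning with fewer stars or a narrower mass range visibly widens the scatter of `alpha_hat` around the input slope, which is the behavior the abstract's analytic approximation for Δα quantifies.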
Lawless, J. G.; Romiez, M. P.
1974-01-01
The application of an analytical approach combining gas chromatography with mass spectrometry (GC-MS) has shown that the amino acid composition of meteorite extracts is quite complex. A computer was used in the evaluation of the data obtained in the investigations. The computer programs developed have been concerned solely with the mass spectra of amino acids. Specialized programs have been written to determine the number of carbon atoms in an amino acid which is a member of any of three subclasses.
Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia
2014-04-01
Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit-cost estimation system developed in England (the cost calculator), together with a pilot test of its utility in the United States, where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes derived in England were perceived as very similar by child welfare staff in California county systems, with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two, with some possible advantages to the cost calculator approach, especially for estimating child welfare costs when evidence-based interventions are incorporated into routine practice.
Ōnuki, Yoshichika; Nakamura, Ai; Aoki, Dai; Boukahil, Mounir; Haga, Yoshinori; Takeuchi, Tetsuya; Harima, Hisatomo; Hedo, Masato; Nakama, Takao
2015-03-01
We succeeded in growing single crystals of EuCo2Si2 by the Bridgman method and carried out de Haas-van Alphen (dHvA) experiments. EuCo2Si2 was previously studied from the viewpoint of the trivalent electronic state on the basis of magnetic susceptibility and X-ray absorption experiments, whereas most other Eu compounds order magnetically in the divalent electronic state. The dHvA branches detected in the present experiments are found to be explained by full-potential linearized augmented plane wave energy band calculations, based on the local density approximation, for YCo2Si2 (LDA) and EuCo2Si2 (LDA + U), revealing the trivalent electronic state. The detected cyclotron effective masses are moderately large, ranging from 1.2 to 2.9 m0.
Carturan, L.; Cazorzi, F.; De Blasi, F.; Dalla Fontana, G.
2014-12-01
Glacier mass balance models rely on accurate spatial calculation of input data, in particular air temperature. Lower temperatures (the so-called glacier cooling effect) and lower temperature variability (the so-called glacier damping effect) generally occur over glaciers compared to ambient conditions. These effects, which depend on the geometric characteristics of glaciers and display high spatial and temporal variability, have so far been investigated mostly on medium- to large-size glaciers, while observations on smaller ice bodies are scarce. Using a dataset from 8 on-glacier and 4 off-glacier weather stations, collected in summer 2010 and 2011, we analyzed the air temperature variability and wind regime over three different glaciers in the Ortles-Cevedale. The magnitude of the cooling effect and the occurrence of katabatic boundary layer (KBL) processes showed remarkable differences among the three ice bodies, suggesting the likely existence of important reinforcing mechanisms during glacier decay and disintegration. None of the methods proposed in the literature for calculating on-glacier temperature from off-glacier data fully reproduced our observations. Among them, the more physically based procedure of Greuell and Böhm (1998) provided the best overall results where the KBL prevails, but it was not effective elsewhere (i.e. on smaller ice bodies and close to the glacier margins). The accuracy of air temperature estimates strongly affected the results of a mass balance model applied to the three investigated glaciers. Most importantly, even small temperature deviations caused distortions in parameter calibration, thus compromising the model's generalizability.
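The sensitivity of mass balance results to on-glacier temperature errors can be illustrated with a toy positive-degree-day melt model. This is a generic textbook scheme, not the model used in the study; the temperatures, the 2 °C cooling effect, and the degree-day factor are all invented for illustration.

```python
def pdd_melt(temps_c, ddf_mm_per_degday=7.0):
    """Positive-degree-day melt (mm w.e.): sum of positive daily mean
    temperatures times a degree-day factor. The DDF value is illustrative."""
    return ddf_mm_per_degday * sum(t for t in temps_c if t > 0)

# Synthetic summer: off-glacier daily means, then a uniform 2 C cooling effect
off_glacier = [4.0, 6.5, 8.0, 3.5, 5.0, 7.5, 9.0, 2.0]
on_glacier = [t - 2.0 for t in off_glacier]

melt_biased = pdd_melt(off_glacier)   # using ambient temperatures directly
melt_cooled = pdd_melt(on_glacier)    # after accounting for the cooling effect
bias_pct = 100.0 * (melt_biased - melt_cooled) / melt_cooled
```

Even this crude sketch shows a melt overestimate of tens of percent from a uniform 2 °C temperature bias, echoing the abstract's warning that small temperature deviations distort parameter calibration.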
Sarais, G; D'Urso, G; Lai, C; Pirisi, F M; Pizza, C; Montoro, P
2016-09-01
In the present study, the discrimination of the phytochemical content of Myrtus communis berries of different geographical origins and cultivars was explored by Liquid Chromatography-Electrospray Ionization-Fourier Transform-Mass Spectrometry (LC-ESI-FT-MS) metabolic profiling and quantitative analysis. Experiments were carried out on myrtle plants grown in an experimental area of the Sardinia region, obtained by germinating seeds taken from berries collected in each part of the region. A preliminary untargeted approach on fruit extracts collected LC-ESI-FT-(Orbitrap)-MS data in negative ion mode; principal component analysis of these data differentiated the samples. In a second step, a targeted analysis with a reduced number of variables was performed. A data matrix was obtained by fusing the positive and negative ionization LC-ESI-MS results, using the peak areas of each known compound as variables. Principal component analysis showed that anthocyanins, mainly cyanidin derivatives, are the principal marker compounds responsible for discriminating the samples by the geographical origin of the seeds. Based on this finding, an LC-diode array detector method was finally developed, validated and applied to the quantitative analysis of berry extracts, using 11 commercial standard compounds corresponding to the identified markers. Copyright © 2016 John Wiley & Sons, Ltd.
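The PCA step described above, applied to a matrix of peak areas, can be sketched as follows. The matrix is a made-up toy with two groups of samples and three "compounds"; PCA is computed directly from the SVD of the mean-centered data, which is a standard equivalent formulation.

```python
import numpy as np

# Toy peak-area matrix: rows = berry samples, columns = known compounds
# (e.g. cyanidin derivatives). All values are invented, not real data.
X = np.array([
    [10.0, 2.0, 1.0],
    [11.0, 2.2, 0.9],
    [ 9.5, 1.8, 1.1],
    [ 3.0, 8.0, 4.0],
    [ 2.5, 8.5, 4.2],
    [ 3.2, 7.8, 3.9],
])

Xc = X - X.mean(axis=0)               # mean-center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)       # variance fraction per component
```

In this toy case the first component separates the two sample groups and carries nearly all the variance, which is the kind of pattern the abstract attributes to the anthocyanin markers.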
Energy Technology Data Exchange (ETDEWEB)
Drechsler, S.L.; Efremov, D.; Grinenko, V. [IFW-Dresden (Germany); Johnston, S. [Inst. of Quantum Matter, University of British Columbia, Vancouver (Canada); Rosner, H. [MPI-CPfS, Dresden (Germany); Kikoin, K. [Tel Aviv University (Israel)
2015-07-01
Combining DFT calculations of the density of states and plasma frequencies with experimental thermodynamic, optical, ARPES, and dHvA data taken from the literature, we estimate both the high-energy (Coulomb, Hund's rule coupling) and the low-energy (el-boson coupling) electronic mass renormalization [H(L)EMR] for typical Fe-pnictides with Tc < 40 K, focusing on (K,Rb,Cs)Fe2As2, (Ca,Na)122, (Ba,K)122, LiFeAs, and LaFeO1-xFxAs with and without As-vacancies. Using Eliashberg theory we show that these systems cannot be described by a very strong el-boson coupling constant λ ≳ 2, which would conflict with the HEMR seen by DMFT, ARPES and optics. Instead, an intermediate s± coupling regime is realized, based mainly on interband spin fluctuations from one predominant pair of bands. For (Ca,Na)122, there is also a non-negligible intraband el-phonon/orbital-fluctuation contribution. The coexistence of magnetic As-vacancies and a high Tc = 28 K for LaFeO1-xFxAs1-δ excludes an orbital-fluctuation-dominated s++ scenario, at least for that system. In contrast, the line-nodal BaFe2(As,P)2 near the quantum critical point is found to be a very strongly coupled system. The role of a pseudogap is briefly discussed for some of these systems.
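The central bookkeeping of the abstract above, separating the measured mass enhancement into a high-energy factor and a low-energy el-boson factor (1 + λ) before judging the coupling strength, can be illustrated with invented numbers (none of these values are from the study):

```python
# All numbers are invented for illustration; they are not from the study.
m_star_over_band = 3.0   # total enhancement, e.g. dHvA mass vs. DFT band mass
hemr_factor = 2.0        # high-energy (Coulomb/Hund) renormalization, e.g. DMFT

# Naive estimate that ignores the high-energy part: looks "very strong"
lam_naive = m_star_over_band - 1.0

# Dividing out the HEMR first leaves only an intermediate el-boson coupling
lam = m_star_over_band / hemr_factor - 1.0
```

The point of the sketch is purely arithmetical: attributing the whole enhancement to the el-boson channel inflates λ, which is the inconsistency the abstract uses to rule out λ ≳ 2.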
Mass parasite control as an approach to stimulate community acceptance of environmental sanitation.
Trainer, E S
1983-01-01
worm density. Baseline community surveys are important for information management. Two basic components of a parasite control program are stool examination and provision of anthelmintics. The choice of which combination of examination and treatment is best for a given program depends on the budget, parasite prevalence, and desired health education impact. Health education begins with the baseline community surveys, where the general dialogue between project and community people can first develop. Specific health education is needed to promote personal and environmental hygiene. Messages can be disseminated to the people through mass media, print materials, and discussions. These pilot projects follow a snowball approach, in which a limited set of activities leads to greater expansion.
Mao, Yuezhi; Horn, Paul R; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin
2016-07-28
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results in the same large target basis set, the MAB-SCF (PC) approach shows only small deviations, and the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
Hu, Yecui; Du, Zhangliu; Wang, Qibing; Li, Guichun
2016-07-01
The conversion of natural vegetation to human-managed ecosystems, especially agricultural systems, may decrease soil organic carbon (SOC) and total nitrogen (TN) stocks. The objective of the present study was to assess SOC and TN stock losses upon land-use change by combining deep sampling with mass-based calculations in a typical karst area of southwestern China. We quantified the effects of the changes from native forest to grassland, secondary shrub, eucalyptus plantation, and sugarcane and corn fields (both defined as croplands) on the SOC and TN stocks down to 100 cm depth using fixed-depth (FD) and equivalent soil mass (ESM) approaches. The results showed that converting forest to cropland and the other land-use types led to significant SOC and TN losses, but the extent depended on both the sampling depth and the calculation method selected (i.e., FD or ESM). On average, the shift from native forest to cropland led to SOC losses of 19.1, 25.1, 30.6, 36.8 and 37.9 % for the soil depths of 0-10, 0-20, 0-40, 0-60 and 0-100 cm, respectively, which highlights that shallow sampling underestimates SOC losses. Moreover, the FD method underestimated SOC and TN losses for the upper 40 cm layer, but overestimated the losses in the deeper layers. We suggest that the ESM approach together with deep sampling should be used to detect differences in SOC stocks. In conclusion, the conversion of forest to managed systems, in particular croplands, significantly decreased SOC and TN stocks, although the magnitude of the effect depended to some extent on the sampling depth and calculation approach selected.
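The FD-versus-ESM contrast above can be sketched numerically. The profiles below are invented (croplands are assumed denser from compaction, with less carbon), and the ESM routine is a common simplified formulation of the equivalent-soil-mass idea, not the study's exact procedure.

```python
def soc_stock_fixed_depth(layers, depth_cm):
    """SOC stock (Mg C/ha) to a fixed depth.
    layers: list of (thickness_cm, bulk_density_g_cm3, soc_g_per_kg)."""
    stock, depth = 0.0, 0.0
    for thick, bd, soc in layers:
        use = min(thick, max(0.0, depth_cm - depth))
        # g C/cm2 = BD (g/cm3) * thickness (cm) * C fraction; x100 -> Mg C/ha
        stock += bd * use * (soc / 1000.0) * 100.0
        depth += thick
    return stock

def soc_stock_esm(layers, ref_mass_g_cm2):
    """Equivalent-soil-mass stock: accumulate carbon until a reference soil
    mass is reached (a common ESM simplification, illustrative only)."""
    stock, mass = 0.0, 0.0
    for thick, bd, soc in layers:
        layer_mass = bd * thick                       # g soil per cm2
        use = min(layer_mass, max(0.0, ref_mass_g_cm2 - mass))
        stock += use * (soc / 1000.0) * 100.0
        mass += layer_mass
    return stock

# Forest vs cropland profiles (thickness, bulk density, SOC concentration)
forest   = [(10, 1.0, 30.0), (10, 1.1, 20.0), (20, 1.2, 10.0)]
cropland = [(10, 1.3, 18.0), (10, 1.35, 14.0), (20, 1.4, 8.0)]

fd_forest  = soc_stock_fixed_depth(forest, 20)
fd_crop    = soc_stock_fixed_depth(cropland, 20)
ref_mass   = sum(bd * t for t, bd, _ in forest[:2])   # forest 0-20 cm soil mass
esm_forest = soc_stock_esm(forest, ref_mass)
esm_crop   = soc_stock_esm(cropland, ref_mass)
```

Because the compacted cropland packs more soil mass into the fixed 0-20 cm window, the FD comparison reports a smaller carbon loss than the ESM comparison, reproducing the direction of bias the abstract describes for the upper layers.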
Heidema, A.G.; Nagelkerke, N.
2008-01-01
To discriminate between breast cancer patients and controls, we used a three-step approach to obtain our decision rule. First, we ranked the mass/charge values using random forests, because it generates importance indices that take possible interactions into account. We observed that the top ranked
Troise, A.D.; Ferracane, R.; Palermo, M.; Fogliano, V.
2014-01-01
In this paper a new targeted metabolic profiling approach using Orbitrap high-resolution mass spectrometry is described. For each food matrix, various classes of bioactive compounds and some specific metabolites of interest were selected on the basis of existing knowledge, creating an easy-to-read fingerprint
Valiev, Marat; Garrett, Bruce C.; Tsai, Ming-Kang; Kowalski, Karol; Kathmann, Shawn M.; Schenter, Gregory K.; Dupuis, Michel
2007-08-01
We present an approach to calculate the free energy profile along a condensed-phase reaction path based on high-level electronic structure methods for the reactive region. The bulk of statistical averaging is shifted toward less expensive descriptions by using a hierarchy of representations that includes molecular mechanics, density functional theory, and coupled cluster theories. As an application of this approach we study the reaction of CHCl3 with OH- in aqueous solution.
Energy Technology Data Exchange (ETDEWEB)
Cassell, K.J. (Saint Luke's Hospital, Guildford (UK))
1983-02-01
A method, developed from the Quantisation Method, of calculating dose-rate distributions around uniformly and non-uniformly loaded brachytherapy sources is described. It allows accurate and straightforward corrections for oblique filtration and self-absorption to be made. Using this method, dose-rate distributions have been calculated for sources of radium 226, gold 198, iridium 192, caesium 137 and cobalt 60, all of which show very good agreement with existing measured and calculated data. This method is now the basis of the Interstitial and Intracavitary Dosimetry (IID) program on the General Electric RT/PLAN computerised treatment planning system.
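Dose-rate calculations of the kind described above classically reduce to a Sievert-type integral for a filtered line source, evaluated by numerical quadrature. The sketch below is a generic textbook form of that integral, not the paper's Quantisation Method; the function name and all physical values are illustrative.

```python
import math

def sievert_dose_factor(h_cm, L_cm, mu_per_cm, t_cm, n=2000):
    """Sievert-integral geometry/filtration factor for a uniformly loaded
    line source of active length L behind a filter of thickness t, at
    perpendicular distance h from the source centre. Unnormalized: a real
    dose rate also needs the source strength and dose-rate constants."""
    theta_max = math.atan((L_cm / 2.0) / h_cm)
    dtheta = 2.0 * theta_max / n
    total = 0.0
    for i in range(n):  # midpoint rule over the subtended angle
        theta = -theta_max + (i + 0.5) * dtheta
        # oblique filtration: the path through the filter grows as 1/cos(theta)
        total += math.exp(-mu_per_cm * t_cm / math.cos(theta)) * dtheta
    return total / (L_cm * h_cm)

unfiltered = sievert_dose_factor(2.0, 3.0, 0.0, 0.05)    # no wall attenuation
filtered   = sievert_dose_factor(2.0, 3.0, 11.0, 0.05)   # illustrative mu, t
farther    = sievert_dose_factor(4.0, 3.0, 0.0, 0.05)    # same source, larger h
```

The filtered factor is smaller than the unfiltered one (self-evidently, from the exponential), and the factor falls off with distance, which is the behaviour any tabulated distribution around such a source must reproduce.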
Energy Technology Data Exchange (ETDEWEB)
Madenoor Ramapriya, Gautham [Purdue University; Jiang, Zheyu [Purdue University; Tawarmalani, Mohit [Purdue University; Agrawal, Rakesh [Purdue University
2015-11-11
We propose a general method to consolidate distillation columns of a distillation configuration using heat and mass integration. The proposed method encompasses all heat and mass integrations known to date, and includes many more. Each heat and mass integration eliminates a distillation column, a condenser, a reboiler, and the heat duty associated with a reboiler. Thus, heat and mass integration can potentially offer significant capital and operating cost benefits. In this talk, we will study the various possible heat and mass integrations in detail and demonstrate their benefits using case studies. This work lays out a framework for synthesizing an entire new class of useful configurations based on heat and mass integration of distillation columns.
Relationship between salivary/pancreatic amylase and body mass index: a systems biology approach.
Bonnefond, Amélie; Yengo, Loïc; Dechaume, Aurélie; Canouil, Mickaël; Castelain, Maxime; Roger, Estelle; Allegaert, Frédéric; Caiazzo, Robert; Raverdy, Violeta; Pigeyre, Marie; Arredouani, Abdelilah; Borys, Jean-Michel; Lévy-Marchal, Claire; Weill, Jacques; Roussel, Ronan; Balkau, Beverley; Marre, Michel; Pattou, François; Brousseau, Thierry; Froguel, Philippe
2017-02-23
Salivary (AMY1) and pancreatic (AMY2) amylases hydrolyze starch. The copy number of AMY1A (encoding AMY1) was reported to be higher in populations with a high-starch diet and reduced in obese people. These results, based on quantitative PCR, have been challenged recently. We aimed to re-assess the relationship between amylase and adiposity using a systems biology approach. We assessed the association between the plasma enzymatic activity of AMY1 or AMY2 and several metabolic traits in almost 4000 French individuals from the D.E.S.I.R. longitudinal study. The effect of the copy number of AMY1A or AMY2A (encoding AMY2), measured through droplet digital PCR, on the same parameters was then analyzed in the same study. A Mendelian randomization analysis was also performed. We subsequently assessed the association between AMY1A copy number and obesity risk in two case-control studies (5000 samples in total). Finally, we assessed the association between body mass index (BMI)-related plasma metabolites and AMY1 or AMY2 activity. We found strong associations between AMY1 or AMY2 activity and lower BMI. However, we found a modest contribution of AMY1A copy number to lower BMI. Mendelian randomization identified a causal negative effect of BMI on AMY1 and AMY2 activities. Yet, we also found a significant negative contribution of AMY1 activity at baseline to the change in BMI during the 9-year follow-up, and a significant contribution of AMY1A copy number to lower obesity risk in children, suggesting a bidirectional relationship between AMY1 activity and adiposity. Metabonomics identified a BMI-independent association between AMY1 activity and lactate, a product of complex carbohydrate fermentation. These findings provide new insights into the involvement of amylase in adiposity and starch metabolism.
A deep learning approach for the analysis of masses in mammograms with minimal user intervention.
Dhungel, Neeraj; Carneiro, Gustavo; Bradley, Andrew P
2017-04-01
We present an integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention. This is a long-standing problem due to the low signal-to-noise ratio in the visualisation of breast masses, combined with their large variability in shape, size, appearance and location. We break the problem down into three stages: mass detection, mass segmentation, and mass classification. For the detection, we propose a cascade of deep learning methods to select hypotheses that are refined based on Bayesian optimisation. For the segmentation, we propose the use of deep structured output learning that is subsequently refined by a level set method. Finally, for the classification, we propose the use of a deep learning classifier, which is pre-trained with a regression to hand-crafted feature values and fine-tuned based on the annotations of the breast mass classification dataset. We test our proposed system on the publicly available INbreast dataset and compare the results with the current state-of-the-art methodologies. This evaluation shows that our system detects 90% of masses at 1 false positive per image, has a segmentation accuracy of around 0.85 (Dice index) on the correctly detected masses, and overall classifies masses as malignant or benign with a sensitivity (Se) of 0.98 and specificity (Sp) of 0.7.
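The Dice index quoted above as the segmentation accuracy metric is straightforward to compute on binary masks; a minimal sketch with made-up masks:

```python
def dice_index(seg, ref):
    """Dice similarity 2|A∩B|/(|A|+|B|) between two binary masks,
    given here as flat lists of 0/1 pixel labels."""
    inter = sum(a * b for a, b in zip(seg, ref))
    total = sum(seg) + sum(ref)
    return 2.0 * inter / total if total else 1.0

reference = [0, 1, 1, 1, 0, 0, 1, 1]   # annotated mass pixels (toy example)
predicted = [0, 1, 1, 0, 0, 1, 1, 1]   # predicted segmentation (toy example)
score = dice_index(predicted, reference)
```

A score of 1.0 means perfect overlap and 0.0 means none, so the paper's reported ~0.85 indicates substantial but imperfect agreement with the annotations.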
How the mass counts: an electrophysiological approach to the processing of lexical features.
Steinhauer, K; Pancheva, R; Newman, A J; Gennari, S; Ullman, M T
2001-04-17
Nouns may refer to countable objects such as tables, or to mass entities such as rice. The mass/count distinction has been discussed in terms of both semantic and syntactic features encoded in the mental lexicon. Here we show that event-related potentials (ERPs) can reflect the processing of such lexical features, even in the absence of any feature-related violations. We demonstrate that count (vs mass) nouns elicit a frontal negativity which is independent of the N400 marker for conceptual-semantic processing, but resembles anterior negativities related to grammatical processing. This finding suggests that the brain differentiates between count and mass nouns primarily on a syntactic basis.
Horenstein, Rachel E; Shefelbine, Sandra J; Mueske, Nicole M; Fisher, Carissa L; Wren, Tishya A L
2015-08-01
The pediatric spina bifida population suffers from decreased mobility and recurrent fractures. This study aimed to develop a method for quantifying bone mass along the entire tibia in youth with spina bifida. This will provide information about all potential sites of bone deficiencies. Computed tomography images of the tibia for 257 children (n=80 ambulatory spina bifida, n=10 non-ambulatory spina bifida, n=167 typically developing) were analyzed. Bone area was calculated at regular intervals along the entire tibia length and then weighted by calibrated pixel intensity for density weighted bone area. Integrals of density weighted bone area were used to quantify bone mass in the proximal and distal epiphyses and diaphysis. Group differences were evaluated using analysis of variance. Non-ambulatory children suffer from decreased bone mass in the diaphysis and proximal and distal epiphyses compared to ambulatory and control children (P≤0.001). Ambulatory children with spina bifida showed statistically insignificant differences in bone mass in comparison to typically developing children at these sites (P>0.5). This method provides insight into tibial bone mass distribution in the pediatric spina bifida population by incorporating information along the whole length of the bone, thereby providing more information than dual-energy x-ray absorptiometry and peripheral quantitative computed tomography. This method can be applied to any population to assess bone mass distribution across the length of any long bone. Copyright © 2015 Elsevier Ltd. All rights reserved.
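The density-weighted bone area measure described above can be sketched as follows: the cross-sectional area at each slice is weighted by its calibrated density, and the result is integrated over regions of the bone. The tibia shape, density profile, and region boundaries below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integral of samples y over positions x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Synthetic tibia, sampled every 1 cm along an assumed 36 cm length
z = np.linspace(0.0, 36.0, 37)                  # position along the tibia, cm
area = 8.0 - 4.0 * np.sin(np.pi * z / 36.0)     # wide epiphyses, thinner shaft
density = 1.0 + 0.5 * np.sin(np.pi * z / 36.0)  # denser cortical mid-shaft
dwba = area * density                           # density-weighted bone area

# Integrate over sub-regions as bone-mass surrogates (boundaries invented)
prox = trapezoid(dwba[z <= 6.0], z[z <= 6.0])                           # proximal epiphysis
shaft = trapezoid(dwba[(z >= 6.0) & (z <= 30.0)], z[(z >= 6.0) & (z <= 30.0)])  # diaphysis
dist = trapezoid(dwba[z >= 30.0], z[z >= 30.0])                         # distal epiphysis
total = trapezoid(dwba, z)
```

Because the regions tile the whole length, the regional integrals sum to the whole-bone integral, which is what lets this approach report both site-specific and total bone-mass surrogates from one scan.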
Institute of Scientific and Technical Information of China (English)
Weijie Zhao; Hanjie Guo; Xuemin Yang; higang Dan
2008-01-01
A universal thermodynamic model for calculating the mass action concentrations of components in a ternary strong electrolyte aqueous solution has been developed based on the ion and molecule coexistence theory, and verified in the NaCl-KCl-H2O ternary system at 298.15 K. To compare the behaviour of the thermodynamic model in binary and ternary strong electrolyte aqueous solutions, the mass action concentrations of components in the NaCl-H2O binary strong electrolyte aqueous solution were also computed at 298.15 K. A transformation coefficient was required to compare the calculated mass action concentrations and reported activities because they were obtained at different standard states and concentration units. The results show that the transformation coefficients between calculated mass action concentrations and reported activities of the same components change in a very narrow range. The calculated mass action concentrations of components in the NaCl-H2O and NaCl-KCl-H2O systems are in good agreement with the reported activities. This indicates that the developed thermodynamic model can reflect the structural characteristics of solutions, and that the mass action concentration also strictly follows the mass action law.
Noor, Fatimah A.; Abdullah, Mikrajuddin; Sukirno; Khairurrijal
2010-12-01
Analytical expressions for the electron transmittance and tunneling current in an anisotropic TiNx/HfO2/SiO2/p-Si(100) metal-oxide-semiconductor (MOS) capacitor were derived by considering the coupling of the transverse and longitudinal energies of an electron. Exponential and Airy wavefunctions were utilized to obtain the electron transmittance and the electron tunneling current. A transfer matrix method (TMM), as a numerical approach, was used as a benchmark to assess the analytical approaches. It was found that the transmittances calculated with the exponential- and Airy-wavefunction approaches and the TMM are similar at low electron energies. However, at high energies, only the transmittance calculated using the Airy-wavefunction approach agrees with that evaluated by the TMM. It was also found that only the tunneling currents calculated using the Airy-wavefunction approach match those obtained with the TMM over the whole range of oxide voltages. Therefore, a better analytical description of the tunneling phenomenon in the MOS capacitor is given by the Airy-wavefunction approach. Moreover, the tunneling current density decreases as the titanium concentration of the TiNx metal gate increases, because the electron effective mass of TiNx decreases with increasing nitrogen concentration. In addition, the mass anisotropy cannot be neglected, because the tunneling currents obtained under the isotropic and anisotropic masses are very different.
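A minimal one-dimensional transfer matrix method of the kind used as the benchmark above can be sketched for a single rectangular barrier with an isotropic, position-independent mass. This is a simplification: the actual calculation couples transverse and longitudinal energies with anisotropic effective masses, and all numerical values below (free-electron mass, 3 eV barrier, 10 Å width) are illustrative.

```python
import cmath

HBAR2_OVER_2M = 3.81  # hbar^2/(2m) in eV*Angstrom^2 for a free electron (assumed mass)

def transmittance(E, barriers):
    """Transfer-matrix transmittance at energy E (eV) through piecewise-constant
    potential segments [(width_A, height_eV), ...] between V = 0 leads."""
    def wavevector(V):
        return cmath.sqrt((E - V) / HBAR2_OVER_2M)  # imaginary inside a barrier

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    ks = [wavevector(0.0)] + [wavevector(V) for _, V in barriers] + [wavevector(0.0)]
    widths = [w for w, _ in barriers]
    M = [[1 + 0j, 0j], [0j, 1 + 0j]]
    for i in range(len(ks) - 1):
        k1, k2 = ks[i], ks[i + 1]
        # interface matrix from continuity of psi and psi' at the boundary
        D = [[(k2 + k1) / (2 * k2), (k2 - k1) / (2 * k2)],
             [(k2 - k1) / (2 * k2), (k2 + k1) / (2 * k2)]]
        M = matmul(D, M)
        if i < len(widths):  # free propagation across the finite segment
            phase = 1j * k2 * widths[i]
            M = matmul([[cmath.exp(phase), 0j], [0j, cmath.exp(-phase)]], M)
    # incident from the left: t = det(M)/M22, and det(M) = 1 for equal leads
    return abs(1.0 / M[1][1]) ** 2

T_tunnel = transmittance(0.5, [(10.0, 3.0)])  # deep tunneling regime
T_over = transmittance(6.0, [(10.0, 3.0)])    # energy above the barrier
```

The transmittance is exponentially small well below the barrier and close to unity above it, the two regimes in which the abstract compares the analytical approaches against the TMM.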
A squeeze-like operator approach to position-dependent mass in quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
Moya-Cessa, Héctor M.; Soto-Eguibar, Francisco [Instituto Nacional de Astrofísica, Óptica y Electrónica, Calle Luis Enrique Erro No. 1, Santa María Tonantzintla, San Andrés Cholula, Puebla CP 72840 (Mexico); Christodoulides, Demetrios N. [CREOL/College of Optics and Photonics, University of Central Florida, Orlando, Florida 32816-2700 (United States)
2014-08-15
We provide a squeeze-like transformation that allows one to remove a position-dependent mass from the Hamiltonian. Methods to solve the Schrödinger equation may then be applied to find the respective eigenvalues and eigenfunctions. As an example, we consider a position-dependent mass that leads to the integrable Morse potential and therefore to well-known solutions.
Mass spectrometry imaging of biological tissue: an approach for multicenter studies
Römpp, Andreas; Both, Jean-Pierre; Brunelle, Alain; Heeren, Ron M A; Laprévote, Olivier; Prideaux, Brendan; Seyer, Alexandre; Spengler, Bernhard; Stoeckli, Markus; Smith, Donald F
2015-01-01
Mass spectrometry imaging has become a popular tool for probing the chemical complexity of biological surfaces. This led to the development of a wide range of instrumentation and preparation protocols. It is thus desirable to evaluate and compare the data output from different methodologies and mass
Assembly of a Vacuum Chamber: A Hands-On Approach to Introduce Mass Spectrometry
Bussière, Guillaume; Stoodley, Robin; Yajima, Kano; Bagai, Abhimanyu; Popowich, Aleksandra K.; Matthews, Nicholas E.
2014-01-01
Although vacuum technology is essential to many aspects of modern physical and analytical chemistry, vacuum experiments are rarely the focus of undergraduate laboratories. We describe an experiment that introduces students to vacuum science and mass spectrometry. The students first assemble a vacuum system, including a mass spectrometer. While…