WorldWideScience

Sample records for 2-dimensional calculations

  1. An Algorithm to Calculate Phase-Center Offset of Aperture Antennas when Measuring 2-Dimensional Radiation Patterns

    2015-01-01

    Patrick Debroux and Berenice Verdin, Survivability/Lethality Analysis... (only report-documentation metadata is available in this record; no abstract text).

  2. On 2-dimensional topological field theories

    Dumitrescu, Florin

    2010-01-01

    In this paper we give a characterization of 2-dimensional topological field theories over a space $X$ as Frobenius bundles with connections over $LX$, the free loop space of $X$. This is a generalization of the folk theorem stating that 2-dimensional topological field theories (over a point) are described by finite-dimensional commutative Frobenius algebras. In another direction, this result extends the description of 1-dimensional topological field theories over a space $X$ as vector bundles with connections over $X$, cf. [DST].

  3. Local Duality for 2-Dimensional Local Ring

    Belgacem Draouil

    2008-11-01

    We prove a local duality for some schemes associated to a 2-dimensional complete local ring whose residue field is an n-dimensional local field in the sense of Kato–Parshin. Our results generalize Saito's work in the case n=0 and are applied to study the Bloch–Ogus complex for such rings in various cases.

  4. Lecture notes on 2-dimensional defect TQFT

    Carqueville, Nils

    2016-01-01

    These notes offer an introduction to the functorial and algebraic description of 2-dimensional topological quantum field theories `with defects', assuming only superficial familiarity with closed TQFTs in terms of commutative Frobenius algebras. The generalisation of this relation is a construction of pivotal 2-categories from defect TQFTs. We review this construction in detail, flanked by a range of examples. Furthermore we explain how open/closed TQFTs are equivalent to Calabi-Yau categories and the Cardy condition, and how to extract such data from pivotal 2-categories.

  5. Anisotropic 2-dimensional Robin Hood model

    Buldyrev, Sergey; Cwilich, Gabriel; Zypman, Fredy

    2009-03-01

    We have considered the Robin Hood model introduced by Zaitsev [1] to discuss flux creep and depinning of interfaces in a two-dimensional system. Although the model has been studied extensively analytically in 1-d [2], its scaling laws have been verified numerically only in that case. Recent work suggests that its properties might be important to understand surface friction [3], where its 2-dimensional properties are important. We show that in the 2-dimensional case scaling laws can be found provided one considers carefully the anisotropy of the model, and different ways of introducing that anisotropy lead to different exponents and scaling laws, in analogy with directed percolation, with which this model is closely related [4]. We show that breaking the rotational symmetry between the x and y axes does not change the scaling properties of the model, but the introduction of a preferential direction of accretion (``robbing'' in the language of the model) leads to new scaling exponents. [1] S.I. Zaitsev, Physica A189, 411 (1992) [2] M. Paczuski, S. Maslov and P. Bak, Phys. Rev. E53, 414 (1996) [3] S. Buldyrev, J. Ferrante and F. Zypman, Phys. Rev. E64, 066110 (2006) [4] G. Odor, Rev. Mod. Phys. 76, 663 (2004).

  6. The Random Discrete Action for 2-Dimensional Spacetime

    Benincasa, Dionigi M T; Schmitzer, Bernhard

    2010-01-01

    A one-parameter family of random variables, called the Discrete Action, is defined for a 2-dimensional Lorentzian spacetime of finite volume. The single parameter is a discreteness scale. The expectation value of this Discrete Action is calculated for various regions of 2D Minkowski spacetime. When a causally convex region of 2D Minkowski spacetime is divided into subregions using null lines the mean of the Discrete Action is equal to the alternating sum of the numbers of vertices, edges and faces of the null tiling, up to corrections that tend to zero as the discreteness scale is taken to zero. This result is used to predict that the mean of the Discrete Action of the flat Lorentzian cylinder is zero up to corrections, which is verified. The ``topological'' character of the Discrete Action breaks down for causally convex regions of the flat trousers spacetime that contain the singularity and for non-causally convex rectangles.
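
    The alternating sum quoted above is the Euler characteristic of the null tiling, which is what gives the mean of the Discrete Action its ``topological'' character. In our notation (not necessarily the authors'), with $\ell$ the discreteness scale,

        $\langle S \rangle \;=\; V - E + F + O(\ell) \;=\; \chi + O(\ell),$

    where $V$, $E$ and $F$ count the vertices, edges and faces of the tiling and $\chi$ is its Euler characteristic; the quoted prediction of zero mean for the flat Lorentzian cylinder is consistent with $\chi(\text{cylinder}) = 0$.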

  7. Skittle: A 2-Dimensional Genome Visualization Tool

    Sanford John C

    2009-12-01

    Background: It is increasingly evident that there are multiple and overlapping patterns within the genome, and that these patterns contain different types of information - regarding both genome function and genome history. In order to discover additional genomic patterns which may have biological significance, novel strategies are required. To partially address this need, we introduce a new data visualization tool entitled Skittle. Results: This program first creates a 2-dimensional nucleotide display by assigning four colors to the four nucleotides, and then text-wraps to a user-adjustable width. This nucleotide display is accompanied by a "repeat map" which comprehensively displays all local repeating units, based upon analysis of all possible local alignments. Skittle includes a smooth-zooming interface which allows the user to analyze genomic patterns at any scale. Skittle is especially useful in identifying and analyzing tandem repeats, including repeats not normally detectable by other methods. However, Skittle is also more generally useful for analysis of any genomic data, allowing users to correlate published annotations and observable visual patterns, and allowing for sequence and construct quality control. Conclusions: Preliminary observations using Skittle reveal intriguing genomic patterns not otherwise obvious, including structured variations inside tandem repeats. The striking visual patterns revealed by Skittle appear to be useful for hypothesis development, and have already led the authors to theorize that imperfect tandem repeats could act as information carriers, and may form tertiary structures within the interphase nucleus.
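
    The color-and-wrap display described above is simple enough to sketch in a few lines. This is a minimal sketch only, not Skittle's actual code; the particular color assignments and the example sequence are our own illustrative choices.

      # Map nucleotides to RGB colors and text-wrap the sequence into a
      # 2-dimensional image of a chosen width (one pixel per nucleotide).
      import numpy as np

      COLORS = {"A": (255, 0, 0), "C": (0, 255, 0),      # illustrative colors,
                "G": (0, 0, 255), "T": (255, 255, 0),    # not Skittle's palette
                "N": (128, 128, 128)}

      def nucleotide_display(sequence, width):
          """Return an (rows, width, 3) uint8 image of the wrapped sequence."""
          rows = -(-len(sequence) // width)              # ceiling division
          img = np.zeros((rows, width, 3), dtype=np.uint8)
          for i, base in enumerate(sequence.upper()):
              img[i // width, i % width] = COLORS.get(base, COLORS["N"])
          return img

      # Tandem repeats whose period divides `width` show up as vertical stripes.
      image = nucleotide_display("ACGT" * 64, width=16)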

  8. Calculation of U-value for Concrete Element

    Rose, Jørgen

    1997-01-01

    This report is a U-value calculation of a typical concrete element used in industrial buildings. The calculations are performed using a 2-dimensional finite difference calculation programme.
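
    The kind of 2-dimensional finite-difference calculation the report refers to can be sketched as follows. This is a minimal sketch only, not the report's programme: the geometry, conductivities and boundary temperatures are assumed values, surface heat-transfer resistances are ignored (so the result is a surface-to-surface U-value), and the solver is a plain Jacobi iteration.

      import numpy as np

      nx, ny = 60, 40                         # cells across thickness (x) and height (y)
      Lx, Ly = 0.30, 0.20                     # element thickness and modelled height [m]
      dx, dy = Lx / nx, Ly / ny               # both 5 mm, so the grid is square
      k = np.full((ny, nx), 0.04)             # insulation conductivity [W/(m K)] (assumed)
      k[:, :10] = 1.7                         # inner concrete leaf (assumed)
      k[:, -10:] = 1.7                        # outer concrete leaf (assumed)
      k[ny//2 - 2:ny//2 + 2, :] = 1.7         # concrete rib acting as a thermal bridge

      T_in, T_out = 20.0, 0.0                 # fixed surface temperatures [deg C]
      T = np.full((ny, nx), 0.5 * (T_in + T_out))
      T[:, 0], T[:, -1] = T_in, T_out
      for _ in range(20000):                  # Jacobi iteration for div(k grad T) = 0
          kT = k * T
          T[1:-1, 1:-1] = (
              (kT[1:-1, :-2] + kT[1:-1, 2:] + kT[:-2, 1:-1] + kT[2:, 1:-1])
              / (k[1:-1, :-2] + k[1:-1, 2:] + k[:-2, 1:-1] + k[2:, 1:-1])
          )
          T[0, :], T[-1, :] = T[1, :], T[-2, :]   # adiabatic top and bottom edges

      q = np.sum(k[:, 0] * (T[:, 0] - T[:, 1]) / dx * dy)  # heat flow per metre depth [W/m]
      U = q / (Ly * (T_in - T_out))                         # surface-to-surface U-value [W/(m2 K)]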

  9. Construction of 2-dimensional Grosse-Wulkenhaar Model

    Wang, Zhituo

    2011-01-01

    In this paper we construct the noncommutative Grosse-Wulkenhaar model on 2-dimensional Moyal plane with the method of loop vertex expansion. We treat renormalization with this new tool, adapt Nelson's argument and prove Borel summability of the perturbation series. This is the first non-commutative quantum field theory model to be built in a non-perturbative sense.

  10. 4-dimensional spacetimes from 2-dimensional conformal null data

    Goswami, Rituparno; Ellis, George F. R.

    2017-03-01

    In this paper we investigate whether the holographic principle proposed in string theory has a classical counterpart in general relativity theory. We show that there is a partial correspondence: at least in the case of vacuum Petrov type D spacetimes that admit a non-trivial Killing tensor, which encompass all the astrophysical black hole spacetimes, there exists a one-to-one correspondence between gravity in bulk and a 2-dimensional classical conformal scalar field on a null boundary.

  11. Constructive Renormalization of 2-dimensional Grosse-Wulkenhaar Model

    Wang, Zhituo

    2012-01-01

    In this talk we briefly report the recent work on the construction of the 2-dimensional Grosse-Wulkenhaar model with the method of loop vertex expansion. We treat renormalization with this new tool, adapt Nelson's argument and prove Borel summability of the perturbation series. This is the first non-commutative quantum field theory model to be built in a non-perturbative sense.

  12. 2-dimensional numerical modeling of active magnetic regeneration

    Nielsen, Kaspar Kirstein; Pryds, Nini; Smith, Anders

    2009-01-01

    Various aspects of numerical modeling of Active Magnetic Regeneration (AMR) are presented. Using a 2-dimensional numerical model for solving the unsteady heat transfer equations for the AMR system, a range of physical effects on both idealized and non-idealized AMR are investigated. The modeled...... system represents a linear, parallel-plate based AMR. The idealized version of the model is able to predict the theoretical performance of AMR in terms of cooling power and temperature span. This is useful to a certain extent, but a model reproducing experiments to a higher degree is desirable. Therefore...

  13. The Space Complexity of 2-Dimensional Approximate Range Counting

    Wei, Zhewei; Yi, Ke

    2013-01-01

    We study the problem of 2-dimensional orthogonal range counting with additive error. Given a set P of n points drawn from an n × n grid and an error parameter ε, the goal is to build a data structure, such that for any orthogonal range R, the data structure can return the number of points in P ∩ R...... structure. We first describe a data structure that uses bits that answers queries with error εn. We then prove a lower bound that any data structure that answers queries with error O(log n) must use Ω(n log n) bits. This lower bound has two consequences: 1) answering queries with error O(log n) is as hard...

  14. Development of a numerical 2-dimensional beach evolution model

    Baykal, Cüneyt

    2014-01-01

    This paper presents the description of a 2-dimensional numerical model constructed for the simulation of beach evolution under the action of wind waves only over the arbitrary land and sea topographies around existing coastal structures and formations. The developed beach evolution numerical model...... is composed of 4 submodels: a nearshore spectral wave transformation model based on an energy balance equation including random wave breaking and diffraction terms to compute the nearshore wave characteristics, a nearshore wave-induced circulation model based on the nonlinear shallow water equations...... to compute the nearshore depth-averaged wave-induced current velocities and mean water level changes, a sediment transport model to compute the local total sediment transport rates occurring under the action of wind waves, and a bottom evolution model to compute the bed level changes in time based...

  15. STATISTICAL PROBABILITY BASED ALGORITHM FOR EXTRACTING FEATURE POINTS IN 2-DIMENSIONAL IMAGE

    Guan Yepeng; Gu Weikang; Ye Xiuqing; Liu Jilin

    2004-01-01

    An algorithm for automatically extracting feature points is developed after the area of feature points in a 2-dimensional (2D) image is located by probability theory, correlation methods and a criterion for abnormity. In our approach, feature points in a 2D image can be extracted statistically, simply by calculating the standard deviation of gray levels within the sampled pixel areas. While extracting feature points, the limitation of having to set a threshold by trial and error according to a priori information about the processed image is avoided. The proposed algorithm is shown to be valid and reliable by extracting feature points on actual natural images with abundant and with weak texture, including multiple objects against complex backgrounds. It can meet the demand of extracting feature points of 2D images automatically in machine vision systems.
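
    A minimal sketch of the core idea only (not the authors' full algorithm): mark as candidate feature points the pixels whose local gray-level standard deviation is unusually high. The window size and the use of a percentile in place of a hand-tuned threshold are our assumptions.

      import numpy as np

      def local_std(gray, half=3):
          """Std. dev. of gray levels in a (2*half+1)^2 window around each pixel."""
          g = gray.astype(np.float64)
          pad = np.pad(g, half, mode="reflect")
          win = np.lib.stride_tricks.sliding_window_view(pad, (2*half + 1, 2*half + 1))
          return win.std(axis=(-2, -1))

      def candidate_feature_points(gray, half=3, percentile=99.0):
          s = local_std(gray, half)
          return np.argwhere(s >= np.percentile(s, percentile))  # (row, col) candidates

      # Example on a synthetic image: the corners/edges of a bright square stand out.
      img = np.zeros((64, 64)); img[20:40, 20:40] = 255
      points = candidate_feature_points(img)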

  16. Fully Automated Portable Comprehensive 2-Dimensional Gas Chromatography Device.

    Lee, Jiwon; Zhou, Menglian; Zhu, Hongbo; Nidetz, Robert; Kurabayashi, Katsuo; Fan, Xudong

    2016-10-06

    We developed a fully automated portable 2-dimensional (2-D) gas chromatography (GC×GC) device, which measured 60 cm × 50 cm × 10 cm and weighed less than 5 kg. The device incorporated a micropreconcentrator/injector, commercial columns, micro-Deans switches, microthermal injectors, microphotoionization detectors, data acquisition cards, and power supplies, as well as computer control and user interface. It employed multiple channels (4 channels) in the second dimension (²D) to increase the ²D separation time (up to 32 s) and hence ²D peak capacity. In addition, a nondestructive flow-through vapor detector was installed at the end of the ¹D column to monitor the eluent from ¹D and assist in reconstructing ¹D elution peaks. With the information obtained jointly from the ¹D and ²D detectors, ¹D elution peaks could be reconstructed with significantly improved ¹D resolution. In this Article, we first discuss the details of the system operating principle and the algorithm to reconstruct ¹D elution peaks, followed by the description and characterization of each component. Finally, 2-D separation of 50 analytes, including alkane (C6-C12), alkene, alcohol, aldehyde, ketone, cycloalkane, and aromatic hydrocarbon, in 14 min is demonstrated, showing a peak capacity of 430-530 and a peak capacity production of 40-80/min.

  17. Surprises in the Evaporation of 2-Dimensional Black Holes

    Ashtekar, Abhay; Ramazanoğlu, Fethi M

    2010-01-01

    Quantum evaporation of Callan-Giddings-Harvey-Strominger (CGHS) black holes is analyzed in the mean field approximation. The resulting semi-classical theory incorporates back reaction. Detailed analytical and numerical calculations show that, while some of the assumptions underlying the standard evaporation paradigm are borne out, several are not. Furthermore, if the black hole is initially macroscopic, the evaporation process exhibits remarkable universal properties. Although the literature on CGHS black holes is quite rich, these features had escaped previous analyses, in part because of lack of required numerical precision, and in part because certain properties and symmetries of the model were not recognized. Finally, our results provide support for the full quantum scenario recently developed by Ashtekar, Taveras and Varadarajan.

  18. Accretions of Dark Matter and Dark Energy onto ($n+2$)-dimensional Schwarzschild Black Hole and Morris-Thorne Wormhole

    Debnath, Ujjal

    2015-01-01

    We have studied accretion of dark matter and dark energy onto an $(n+2)$-dimensional Schwarzschild black hole and a Morris-Thorne wormhole. The mass and the rate of change of mass for the $(n+2)$-dimensional Schwarzschild black hole and the Morris-Thorne wormhole have been found. We have assumed some candidates of dark energy like holographic dark energy, new agegraphic dark energy, quintessence, tachyon, DBI-essence, etc. The black hole mass and the wormhole mass have been calculated in terms of redshift when dark matter and the above types of dark energy accrete onto them separately. We have shown that the black hole mass increases and the wormhole mass decreases for holographic dark energy, new agegraphic dark energy, quintessence and tachyon accretion, and that the slope of the increase/decrease of mass sensitively depends on the dimension. But for DBI-essence accretion, the black hole mass first increases and then decreases and the wormhole mass first decreases and then increases and the slope of increasing/decreasing of mass...

  19. Development of orthogonal 2-dimensional numerical code TFC2D for fluid flow with various turbulence models and numerical schemes

    Park, Ju Yeop; In, Wang Kee; Chun, Tae Hyun; Oh, Dong Seok [Korea Atomic Energy Research Institute, Taejeon (Korea)]

    2000-02-01

    An orthogonal 2-dimensional numerical code has been developed. The present code contains 9 widely used turbulence models: a standard k-ε model and 8 low-Reynolds-number models. It also includes 6 numerical schemes: 5 low-order schemes and 1 high-order scheme (QUICK). To verify the present numerical code, pipe flow, channel flow and expansion pipe flow are solved with various options of turbulence models and numerical schemes, and the calculated outputs are compared to experimental data. Furthermore, the discretization error that originates from the use of the standard k-ε turbulence model with a wall function is much reduced by introducing a new grid system in place of the conventional one in the present code. 23 refs., 58 figs., 6 tabs. (Author)

  20. Tschirnhausen transformation of a cubic generic polynomial and a $2$-dimensional involutive Cremona transformation

    HOSHI, Akinari; Miyake, Katsuya

    2007-01-01

    We study the field isomorphism problem for a cubic generic polynomial $X^3+sX+s$ via Tschirnhausen transformation. Through this process, there naturally appears a $2$-dimensional involutive Cremona transformation. We show that the fixed field under the action of the transformation is purely transcendental over an arbitrary base field.

  1. Investigation of two different anoxia models by 2-dimensional gel electrophoresis

    Wulff, Tune; Jessen, Flemming; Hoffmann, Else Kay

    2006-01-01

    anoxia obtained by NaN3 is a widely used model for simulating anoxia (Ossum et al., 2004). The effects of anoxia were studied by protein expression analysis using 2-dimensional gel electrophoresis followed by MS/MS. In this way we were able to separate more than 1500 protein spots with an apparent range...

  2. The finite size spectrum of the 2-dimensional O(3) nonlinear sigma-model

    Balog, Janos (Institute for Particle and Nuclear Physics, Wigner Research Centre for Physics, MTA Lendület Holographic QFT Group, 1525, Budapest 114, P.O.B. 49, Hungary); Hegedus, Arpad

    2009-01-01

    Nonlinear integral equations are proposed for the description of the full finite size spectrum of the 2-dimensional O(3) nonlinear sigma-model in a periodic box. Numerical results for the energy eigenvalues are compared to the rotator spectrum and perturbation theory for small volumes and with the recently proposed generalized Luscher formulas at large volumes.

  3. Chirality Made Simple: A 1 - and 2-Dimensional Introduction to Stereochemistry

    Gawley, Robert E.

    2005-01-01

    The introduction of chirality in one and two dimensions, along with the concepts of internal and external reflection, can be combined with concepts familiar to all students. Once students are familiar with 1-dimensional and 2-dimensional chirality, the same concepts can be extended to 3 dimensions, and by projecting 3-D back to two, it is possible to interpret…

  4. Generalized Donaldson-Thomas Invariants of 2-Dimensional sheaves on local P^2

    Gholampour, Amin; Sheshmani, Artan

    2013-01-01

    Let X be the total space of the canonical bundle of P^2. We study the generalized Donaldson-Thomas invariants, defined in the work of Joyce-Song, of the moduli spaces of the 2-dimensional Gieseker semistable sheaves on X with first Chern class equal to k times the class of the zero section of X...

  5. Isogeometric analysis of sound propagation through laminar flow in 2-dimensional ducts

    Nørtoft, Peter; Gravesen, Jens; Willatzen, Morten

    2015-01-01

    We consider the propagation of sound through a slowly moving fluid in a 2-dimensional duct. A detailed description of a flow-acoustic model of the problem using B-spline based isogeometric analysis is given. The model couples the non-linear, steady-state, incompressible Navier-Stokes equation in ...

  6. Large-Eddy Simulation on turbulent flow and plume dispersion over a 2-dimensional hill

    Nakayama, H.; Nagai, H.

    2010-05-01

    The dispersion analysis of airborne contaminants including radioactive substances from industrial or nuclear facilities is an important issue for air quality maintenance and safety assessment. In Japan, many nuclear power plants are located on complex coastal terrain. In such cases, terrain effects on the turbulent flow and plume dispersion should be investigated. In this study, we perform Large-Eddy Simulation (LES) of turbulent flow and plume dispersion over a 2-dimensional hill and investigate the characteristics of mean and fluctuating concentrations.

  7. Signature change in 2-dimensional black-hole models of loop quantum gravity

    Bojowald, Martin

    2016-01-01

    Signature change has been identified as a generic consequence of holonomy modifications in spherically symmetric models of loop quantum gravity with real connections, which includes modified Schwarzschild solutions. Here, this result is extended to 2-dimensional dilaton models and to different choices of canonical variables, including in particular the Callan-Giddings-Harvey-Strominger (CGHS) solution. New obstructions are found to coupling matter and to including operator-ordering effects in an anomaly-free manner.

  8. Invariants of pure 2-dimensional sheaves inside threefolds and modular forms

    Gholampour, Amin

    2013-01-01

    Motivated by S-duality modularity conjectures in string theory, we study the Donaldson-Thomas type invariants of pure 2-dimensional sheaves inside a nonsingular threefold X in three different situations: (1). X is a K3 fibration over a curve. We study the Donaldson-Thomas invariants of the 2 dimensional Gieseker stable sheaves in X supported on the fibers. Analogous to the Gromov-Witten theory formula established in the work of M.P., we express these invariants in terms of the Euler characteristic of the Hilbert scheme of points on the K3 surface and the Noether-Lefschetz numbers of the fibration, and prove that the invariants have modular properties. (2). X is the total space of the canonical bundle of P^2. We study the generalized Donaldson-Thomas invariants defined by J.S. of the moduli spaces of the 2-dimensional Gieseker semistable sheaves on X with first Chern class equal to k times the class of the zero section of X. When k=1,2 or 3, and semistability implies stability, we express the invariants in ter...

  9. Generalized Donaldson-Thomas Invariants of 2-Dimensional sheaves on local P^2

    Gholampour, Amin

    2013-01-01

    Let X be the total space of the canonical bundle of P^2. We study the generalized Donaldson-Thomas invariants, defined in the work of Joyce-Song, of the moduli spaces of the 2-dimensional Gieseker semistable sheaves on X with first Chern class equal to k times the class of the zero section of X. When k=1, 2 or 3, and semistability implies stability, we express the invariants in terms of known modular forms. We prove a combinatorial formula for the invariants when k=2 in the presence of the strictly semistable sheaves, and verify the BPS integrality conjecture of Joyce-Song in some cases.

  10. Declination Calculator

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  11. Assessment of segmental myocardial viability using regional 2-dimensional strain echocardiography.

    Migrino, Raymond Q; Zhu, Xiaoguang; Pajewski, Nicholas; Brahmbhatt, Tejas; Hoffmann, Raymond; Zhao, Ming

    2007-04-01

    We determined whether 2-dimensional strain echocardiography can distinguish viable from infarcted myocardium in a rat ischemia-reperfusion model. A total of 16 male Sprague-Dawley rats underwent left anterior descending coronary artery occlusion for 12 or 30 minutes followed by 60-minute reperfusion. Short-axis 2-dimensional strain echocardiography was performed at the mid-ventricle 60 minutes post-reperfusion. Post-sacrifice, triphenyl tetrazolium chloride was infused into the coronary circulation. Regional end-systolic radial and circumferential strain, and time to peak strain, were measured using software in all 96 segments and correlated with areas of infarct in corresponding histologic slices. Segments with greater than 50% area of infarct had lower end-systolic radial and circumferential strain and longer time to peak strain versus segments with 50% or less infarct or no infarct. Extent of infarct correlates with radial and circumferential strain. End-systolic radial strain less than 2% has 88% sensitivity and 95% specificity for detecting infarcted area greater than 50%. Two-dimensional strain echocardiography-derived strain is useful in distinguishing infarcted from viable myocardium.

  12. GO-2D: identifying 2-dimensional cellular-localized functional modules in Gene Ontology

    Yang Da

    2007-01-01

    Background: Rapid progress in high-throughput biotechnologies (e.g. microarrays) and exponential accumulation of gene functional knowledge make it promising to systematically understand complex human diseases at the level of functional modules. Based on Gene Ontology, a large number of automatic tools have been developed for the functional analysis and biological interpretation of high-throughput microarray data. Results: Different from existing tools such as Onto-Express and FatiGO, we develop a tool named GO-2D for identifying 2-dimensional functional modules based on combined GO categories. For example, it refines biological process categories by sorting their genes into different cellular component categories, and then extracts those combined categories enriched with the interesting genes (e.g., the differentially expressed genes) for identifying the cellular-localized functional modules. Applications of GO-2D to the analyses of two human cancer datasets show that very specific disease-relevant processes can be identified by using cellular location information. Conclusion: For studying complex human diseases, GO-2D can extract functionally compact and detailed modules such as the cellular-localized ones, characterizing disease-relevant modules in terms of both biological processes and cellular locations. The application results clearly demonstrate that the 2-dimensional approach, complementary to the current 1-dimensional approach, is powerful for finding modules highly relevant to diseases.
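
    A minimal sketch of the combined-category idea (not GO-2D itself): for each (biological process, cellular component) pair, test whether the genes of interest are over-represented in the intersection of the two annotation sets. The hypergeometric test and the cutoff are our assumptions, not necessarily the tool's statistics.

      from itertools import product
      from scipy.stats import hypergeom

      def enriched_pairs(bp_to_genes, cc_to_genes, interesting, background, alpha=0.01):
          """bp_to_genes / cc_to_genes map a GO category name -> set of annotated genes."""
          pop = len(background)                    # all genes on the array
          draws = len(interesting & background)    # e.g. differentially expressed genes
          hits = []
          for bp, cc in product(bp_to_genes, cc_to_genes):
              module = bp_to_genes[bp] & cc_to_genes[cc] & background
              k = len(module & interesting)
              if k and hypergeom.sf(k - 1, pop, len(module), draws) < alpha:
                  hits.append((bp, cc, k))         # enriched 2-D "cellular-localized" module
          return hits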

  13. Crossover from 2-dimensional to 3-dimensional aggregations of clusters on square lattice substrates

    Cheng, Yi; Zhu, Yu-Hong; Pan, Qi-Fa; Yang, Bo; Tao, Xiang-Ming; Ye, Gao-Xiang

    2015-11-01

    A Monte Carlo study on the crossover from 2-dimensional to 3-dimensional aggregations of clusters is presented. Based on the traditional cluster-cluster aggregation (CCA) simulation, a modified growth model is proposed. The clusters (including single particles and their aggregates) diffuse with diffusion step length l (1 ≤ l ≤ 7) and aggregate on a square lattice substrate. If the number of particles contained in a cluster is larger than a critical size sc, the particles at the edge of the cluster have a possibility to jump onto the upper layer, which results in the crossover from 2-dimensional to 3-dimensional aggregations. Our simulation results are in good agreement with the experimental findings. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374082 and 11074215), the Science Foundation of Zhejiang Province Department of Education, China (Grant No. Y201018280), the Fundamental Research Funds for Central Universities, China (Grant No. 2012QNA3010), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20100101110005).

  14. A Finger-Shaped Tactile Sensor for Fabric Surfaces Evaluation by 2-Dimensional Active Sliding Touch

    Haihua Hu

    2014-03-01

    Sliding tactile perception is a basic function used by human beings to determine the mechanical properties of object surfaces and recognize materials. Imitating this process, this paper proposes a novel finger-shaped tactile sensor based on a thin piezoelectric polyvinylidene fluoride (PVDF) film for surface texture measurement. A parallelogram mechanism is designed to ensure that the sensor applies a constant contact force perpendicular to the object surface, and a 2-dimensional movable mechanical structure is utilized to generate the relative motion at a certain speed between the sensor and the object surface. By controlling the 2-dimensional motion of the finger-shaped sensor along the object surface, small height/depth variations of the surface texture change the output charge of the PVDF film, and the surface texture can thereby be measured. In this paper, the finger-shaped tactile sensor is used to evaluate and classify five different kinds of linen. Fast Fourier Transformation (FFT) is utilized to get original attribute data of the surface in the frequency domain, principal component analysis (PCA) is used to compress the attribute data and extract feature information, and finally the low-dimensional features are classified by a Support Vector Machine (SVM). The experimental results show that this finger-shaped tactile sensor is effective and highly accurate in discriminating the five textures.
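
    The processing chain named above (FFT features, PCA compression, SVM classification) can be sketched with scikit-learn. This is a minimal sketch, not the authors' implementation; the synthetic traces, array shapes and all hyper-parameters are our assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      def fft_features(signals):
          """Magnitude spectrum of each trace: attribute data in the frequency domain."""
          return np.abs(np.fft.rfft(signals, axis=1))

      rng = np.random.default_rng(0)                 # placeholder data: 5 "fabric" classes
      labels = np.repeat(np.arange(5), 20)
      texture = np.sin(np.linspace(0.0, 40.0 * np.pi, 1024))
      signals = rng.normal(size=(100, 1024)) + 0.2 * labels[:, None] * texture

      model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
      scores = cross_val_score(model, fft_features(signals), labels, cv=5)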

  15. A (1 + 2)-Dimensional Simplified Keller–Segel Model: Lie Symmetry and Exact Solutions. II

    Roman Cherniha

    2017-01-01

    A simplified Keller–Segel model is studied by means of Lie symmetry based approaches. It is shown that a (1 + 2)-dimensional Keller–Segel type system, together with correctly-specified boundary and/or initial conditions, is invariant with respect to infinite-dimensional Lie algebras. A Lie symmetry classification of the Cauchy problem depending on the initial profile form is presented. The Lie symmetries obtained are used for reduction of the Cauchy problem to a (1 + 1)-dimensional one. Exact solutions of some (1 + 1)-dimensional problems are constructed. In particular, we have proved that the Cauchy problem for the (1 + 1)-dimensional simplified Keller–Segel system can be linearized and solved in explicit form. Moreover, additional biologically motivated restrictions were established in order to obtain a unique solution. The Lie symmetry classification of the (1 + 2)-dimensional Neumann problem for the simplified Keller–Segel system is derived. Because the Lie symmetry of boundary-value problems depends essentially on the geometry of the domain for which the problem is formulated, all realistic (from the applicability point of view) domains were examined. Reduction of the Neumann problem on a strip is derived using the symmetries obtained. As a result, an exact solution of a nonlinear two-dimensional Neumann problem on a finite interval was found.

  16. The value of preoperative 3-dimensional over 2-dimensional valve analysis in predicting recurrent ischemic mitral regurgitation after mitral annuloplasty

    Wijdh-den Hamer, Inez J.; Bouma, Wobbe; Lai, Eric K.; Levack, Melissa M.; Shang, Eric K.; Pouch, Alison M.; Eperjesi, Thomas J.; Plappert, Theodore J.; Yushkevich, Paul A.; Hung, Judy; Mariani, Massimo A.; Khabbaz, Kamal R.; Gleason, Thomas G.; Mahmood, Feroze; Acker, Michael A.; Woo, Y. Joseph; Cheung, Albert T.; Gillespie, Matthew J.; Jackson, Benjamin M.; Gorman, Joseph H.; Gorman, Robert C.

    2016-01-01

    Objectives: Repair for ischemic mitral regurgitation with undersized annuloplasty is characterized by high recurrence rates. We sought to determine the value of pre-repair 3-dimensional echocardiography over 2-dimensional echocardiography in predicting recurrence at 6 months. Methods: Intraoperative

  17. Determination of chemical concentration with a 2 dimensional CCD array in the Echelle grating spectrometer

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Stevens, C.G.

    1994-11-15

    The Echelle grating spectrometer (EGS) uses a stepped Echelle grating, prisms and a folded light path to miniaturize an infrared spectrometer. Light enters the system through a slit and is spread out along Y by a prism. This light then strikes the grating and is diffracted out along X. This spreading results in a superposition of spectral orders since the grating has a high spectral range. These orders are then separated by again passing through a prism. The end result of a measurement is a 2-dimensional image which contains the folded spectrum of the region under investigation. The data lie in bands from top to bottom, with wavenumber increments as small as 0.1, for example, running from left to right such that the right end of band N is the same as the left end of band N+1. This is the image which must be analyzed.
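
    A minimal sketch of the unfolding step described above, not the laboratory's code: each row of the 2-dimensional detector image holds one spectral band, and the right end of band N coincides with the left end of band N+1, so a 1-D spectrum is recovered by trimming that overlap and concatenating the bands. The band ordering, the known overlap width and the absence of wavelength calibration are simplifying assumptions.

      import numpy as np

      def unfold_echelle(image, overlap):
          """image: (n_bands, n_pixels) band intensities, top band first.
          overlap: number of pixels shared between consecutive bands (assumed known)."""
          bands = [image[0]]
          for row in image[1:]:
              bands.append(row[overlap:])       # drop the part already covered
          return np.concatenate(bands)          # 1-D spectrum, wavenumber-ordered

      spectrum = unfold_echelle(np.random.rand(12, 512), overlap=8)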

  18. Reducing of phase retrieval errors in Fourier analysis of 2-dimensional digital model interferograms

    Gladic, Jadranko; Lovric, Davorin; Vucic, Zlatko

    2006-01-01

    In order to measure the radial displacements of facets on the surface of a growing spherical Cu_{2-δ}Se crystal with sub-nanometer resolution, we have investigated the reliability and accuracy of the standard method of Fourier analysis of fringes obtained by applying the digital laser interferometry method. Guided by realistic experimental parameters (density and orientation of fringes), starting from 2-dimensional model interferograms and using an unconventional custom-designed Gaussian filtering window and unwrapping procedure for the retrieved phase, we have demonstrated that for a considerable portion of parameter space a non-negligible inherent phase retrieval error is present solely due to a non-integer number of fringes within the digitally recorded image (using a CCD camera). Our results indicate the range of experimentally adjustable parameters for which the generated error is acceptably small. We also introduce a modification of the last part of the usual phase retrieval algorithm which significantly reduces th...

  19. PARTIAL REGULARITY FOR THE 2-DIMENSIONAL WEIGHTED LANDAU-LIFSHITZ FLOW

    Ye Yunhua; Ding Shijin

    2007-01-01

    We consider the partial regularity of weak solutions to the weighted Landau-Lifshitz flow on a 2-dimensional bounded smooth domain by Ginzburg-Landau type approximation. Under the energy smallness condition, we prove uniform local C^∞ bounds for the approximating solutions. This shows that the approximating solutions are locally uniformly bounded in C^∞(Reg({u_ε}) ∩ (Ω × R^+)), which guarantees smooth convergence at these points. Energy estimates for the approximating equations are used to prove that the singularity set has locally finite two-dimensional parabolic Hausdorff measure and has at most finitely many points at each fixed time. From the uniform boundedness of the approximating solutions in C^∞(Reg({u_ε}) ∩ (Ω × R^+)), we then extract a subsequence converging to a global weak solution to the weighted Landau-Lifshitz flow which is in fact regular away from finitely many points.

  20. Prenatal 2-dimensional and 3-dimensional ultrasonography diagnosis and autoptic findings of isolated ectopia cordis.

    Bianca, S; Bartoloni, G; Auditore, S; Reale, A; Tetto, C; Ingegnosi, C; Pirruccello, B; Ettore, G

    2006-01-01

    Ectopia cordis is a very rare congenital malformation, commonly associated with intracardiac anomalies. It is due to a defect in fusion of the anterior chest wall resulting in an extrathoracic location of the heart. We report prenatal 2-dimensional (2D) and 3D ultrasonography diagnosis and postnatal autoptic findings of an isolated ectopia cordis with tricuspid atresia. Ectopia cordis prenatal diagnosis is easily made with ultrasound by visualizing the heart outside the thoracic cavity. 3D ultrasonography may add more detailed visualization of the heart anomaly even if the 2D ultrasonography alone permits the prenatal diagnosis. Obstetrical management should include a careful search for associated anomalies, especially cardiac, and the assessment of fetal karyotype. As this is considered a sporadic anomaly, the recurrence risk is low and no genetic origin is known.

  1. Exact vacuum solution of a (1+2)-dimensional Poincaré gauge theory: BTZ solution with torsion

    Garcia, Alberto A.; Hehl, Friedrich W.; Heinicke, Christian; Macias, Alfredo

    2003-01-01

    In (1+2)-dimensional Poincaré gauge gravity, we start from a Lagrangian depending on torsion and curvature which additionally includes translational and Lorentzian Chern-Simons terms. Limiting ourselves to a specific subcase, the Mielke-Baekler (MB) model, we derive the corresponding field equations (of Einstein-Cartan-Chern-Simons type) and find the general vacuum solution. We determine the properties of this solution, in particular its mass and its angular momentum. For vanishing torsion, we recover the BTZ solution. We also derive the general conformally flat vacuum solution with torsion. In this framework, we discuss Cartan's (3-dimensional) spiral staircase and find that it is not only a special case of our new vacuum solution, but can alternatively be understood as a solution of the 3-dimensional Einstein-Cartan theory with matter of constant pressure and constant torque.

  2. Mechanisms of seizure propagation in 2-dimensional centre-surround recurrent networks.

    David Hall

    Understanding how seizures spread throughout the brain is an important problem in the treatment of epilepsy, especially for implantable devices that aim to avert focal seizures before they spread to, and overwhelm, the rest of the brain. This paper presents an analysis of the speed of propagation in a computational model of seizure-like activity in a 2-dimensional recurrent network of integrate-and-fire neurons containing both excitatory and inhibitory populations and having a difference-of-Gaussians connectivity structure, an approximation to that observed in cerebral cortex. In the same computational model network, alternative mechanisms are explored in order to simulate the range of seizure-like activity propagation speeds (0.1-100 mm/s) observed in two animal-slice-based models of epilepsy: (1) low extracellular [Formula: see text], which creates excess excitation, and (2) introduction of gamma-aminobutyric acid (GABA) antagonists, which reduce inhibition. Moreover, two alternative connection topologies are considered: excitation broader than inhibition, and inhibition broader than excitation. It was found that the empirically observed range of propagation velocities can be obtained for both connection topologies. For the case of the GABA antagonist model simulation, consistent with other studies, it was found that there is an effective threshold in the degree of inhibition below which waves begin to propagate. For the case of the low extracellular [Formula: see text] model simulation, it was found that activity-dependent reductions in inhibition provide a potential explanation for the emergence of slowly propagating waves. This was simulated as a depression of inhibitory synapses, but it may also be achieved by other mechanisms. This work provides a localised network understanding of the propagation of seizures in 2-dimensional centre-surround networks that can be tested empirically.
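
    The ``difference of Gaussians'' connectivity named above has, in a common notation that we adopt here (the paper's own parameters may differ), the form

        $w(r) = A_{e}\,e^{-r^{2}/2\sigma_{e}^{2}} - A_{i}\,e^{-r^{2}/2\sigma_{i}^{2}},$

    where $r$ is the distance between two cells; the two topologies contrasted in the abstract correspond to $\sigma_{e} > \sigma_{i}$ (excitation broader than inhibition) and $\sigma_{e} < \sigma_{i}$ (inhibition broader than excitation).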

  3. MEMS Calculator

    SRD 166 MEMS Calculator (Web, free access)   This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
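
    A rough sketch of the final step described above, under the simplest uniaxial Hooke's-law assumption; the tool's own analysis sheets may apply additional corrections, and every number below is a made-up input rather than NIST data.

      # Residual stress and stress gradient from measured strain quantities,
      # once Young's modulus E has been obtained from resonating beams.
      E = 160e9                  # Young's modulus [Pa] (assumed value)
      residual_strain = -2.0e-4  # from fixed-fixed beam test structures (dimensionless)
      strain_gradient = 5.0      # from cantilever test structures [1/m]

      residual_stress = E * residual_strain   # [Pa]; compressive if negative
      stress_gradient = E * strain_gradient   # [Pa/m]
      print(residual_stress, stress_gradient)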

  4. (N+2)-Dimensional Anisotropic Charged Fluid Spheres with Pressure: Riccati Equation

    Bijalwan, Naveen

    2011-01-01

    General exact (N+2)-dimensional (N >= 2) solutions in the general theory of relativity of the Einstein-Maxwell field equations for a static anisotropic spherically symmetric distribution of charged fluid are expressed in terms of the radial pressure. Subsequently, the metric functions (e^λ and e^ν), matter density and electric intensity are expressible in terms of the pressure. We extend the methodology used by Bijalwan (2011a, 2011c, 2011d) for charged and anisotropic fluid. Consequently, the radial pressure is found to be an invertible arbitrary function of w(c1 + c2 r^2), where c1 and c2 (non-zero) are arbitrary constants and r is the radius of the star, i.e. p = p(w). We present a general solution for a static anisotropic charged pressure fluid in terms of w. We reduce the problem of finding solutions for an anisotropic charged fluid to that of finding solutions to a Riccati equation. Also, these solutions satisfy a barotropic equation of state relating the radial pressure to the energy density.
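
    For reference (generic notation of ours, not the paper's variables): a Riccati equation is a first-order ODE that is quadratic in the unknown,

        $y'(x) = q_{0}(x) + q_{1}(x)\,y(x) + q_{2}(x)\,y(x)^{2},$

    so the claim above is that, once the radial pressure is prescribed through p = p(w), determining the remaining unknowns reduces to solving a single ODE of this form.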

  5. Substrate induced changes in atomically thin 2-dimensional semiconductors: Fundamentals, engineering, and applications

    Sun, Yinghui; Wang, Rongming; Liu, Kai

    2017-03-01

    Substrates have a great influence on materials synthesis, properties, and applications. The influence is particularly crucial for atomically thin 2-dimensional (2D) semiconductors. Their thicknesses are less than 1 nm; however, the lateral sizes can reach up to several inches or more. Therefore, these materials must be placed onto a variety of substrates before subsequent post-processing techniques for final electronic or optoelectronic devices. Recent studies reveal that substrates have been employed as ways to modulate the optical, electrical, mechanical, and chemical properties of 2D semiconductors. In this review, we summarize recent progress on the effects of substrates on the properties of 2D semiconductors, mostly focused on 2D transition metal dichalcogenides, from the viewpoints of both fundamental physics and device applications. First, we discuss various effects of substrates, including interface strain, charge transfer, dielectric screening, and optical interference. Second, we show the modulation of 2D semiconductors by substrate engineering, including novel substrates (patterned substrates, 2D-material substrates, etc.) and active substrates (phase transition materials, ferroelectric materials, flexible substrates, etc.). Last, we present perspectives and challenges in this research field. This review provides a comprehensive understanding of the substrate effects, and may inspire new ideas of novel 2D devices based on substrate engineering.

  6. Directed 2-dimensional organisation of collagen: Role of cross-linking and denaturing agents

    Nishtar Nishad Fathima; Aruna Dhathathreyan; Thirumalachari Ramasami

    2010-11-01

    The effect of additives like curcumin and surfactants on the self-assembly of collagen has been studied in a simple 2-dimensional system of Langmuir films of the protein at the air/solution interface, using a quartz crystal microbalance (QCM) and a dynamic surface tensiometer. Though pure curcumin is not surface active, a synergistic effect of collagen with curcumin seems to lead to enhanced surface activity of the protein. In general, the presence of additives increases the surface activity of collagen even at the lowest concentration, and the largest change in surface activity is seen for collagen with sodium dodecyl sulfate (SDS). The results suggest an interplay between the unexposed hydrophobic groups and the opening out and solvation of the more charged or polar groups at the surface, leading to aggregation followed by self-assembly. Modulation of aggregation at the interface in collagen due to these additives may be an approach that could be explored for possible applications in bio-materials and for delivery of protein-drug complexes.

  7. Lie and Conditional Symmetries of a Class of Nonlinear (1 + 2-Dimensional Boundary Value Problems

    Roman Cherniha

    2015-08-01

    A new definition of conditional invariance for boundary value problems involving a wide range of boundary conditions (including initial value problems as a special case) is proposed. It is shown that other definitions, worked out in order to find Lie symmetries of boundary value problems with standard boundary conditions, follow as particular cases from our definition. Simple examples of direct applicability to nonlinear problems arising in applications are demonstrated. Moreover, the successful application of the definition to the Lie and conditional symmetry classification of a class of (1 + 2)-dimensional nonlinear boundary value problems governed by the nonlinear diffusion equation in a semi-infinite domain is realised. In particular, it is proven that there is a special exponent, k = -2, for the power diffusivity u^k when the problem in question with non-vanishing flux on the boundary admits additional Lie symmetry operators compared to the case k ≠ -2. In order to demonstrate the applicability of the symmetries derived, they are used for reducing the nonlinear problems with power diffusivity u^k and a constant non-zero flux on the boundary (such problems are common in applications and describe a wide range of phenomena) to (1 + 1)-dimensional problems. The structure and properties of the problems obtained are briefly analysed. Finally, some results demonstrating how Lie invariance of the boundary value problem in question depends on the geometry of the domain are presented.
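
    For concreteness, the equation referred to above as ``the nonlinear diffusion equation with power diffusivity u^k'' is, in a standard notation (our transcription, not a quotation from the paper),

        $u_{t} = \nabla \cdot \left( u^{k}\, \nabla u \right),$

    posed on a semi-infinite (1 + 2)-dimensional domain with a prescribed flux condition on the boundary; k = -2 is the exponent singled out in the classification above.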

  8. Energy Shift Caused by Non-isotropy of 2-Dimensional Anisotropic Quantum Dot in Presence of Uniform Magnetic Field

    FAN Hong-Yi; XU Xue-Fen

    2005-01-01

    Based on the squeezing mechanism in quantum dots in the presence of a uniform magnetic field, we derive the energy shift caused by the non-isotropy of a 2-dimensional anisotropic quantum dot. We also study sudden squeezing of the size of the quantum dot. The whole discussion proceeds smoothly by virtue of the entangled state representation.

  9. Phase transfer of 1- and 2-dimensional Cd-based nanocrystals

    Kodanek, Torben; Banbela, Hadeel M.; Naskar, Suraj; Adel, Patrick; Bigall, Nadja C.; Dorfs, Dirk

    2015-11-01

    In this work, luminescent CdSe@CdS dot-in-rod nanocrystals, CdSe@CdS/ZnS nanorods as well as CdSe-CdS core-crown nanoplatelets were transferred into aqueous phase via ligand exchange reactions. For this purpose, bifunctional thiol-based ligands were employed, namely mercaptoacetic acid (MAA), 3-mercaptopropionic acid (MPA), 11-mercaptoundecanoic acid (MUA) as well as 2-(dimethylamino)ethanthiol (DMAET). Systematic investigations by means of photoluminescence quantum yield measurements as well as photoluminescence decay measurements have shown that the luminescence properties of the transferred nanostructures are affected by hole traps (induced by the thiol ligands themselves) as well as by spatial insulation and passivation against the environment. The influence of the tips of the nanorods on the luminescence is, however, insignificant. Accordingly, different ligands yield optimum results for different nanoparticle samples, mainly depending on the inorganic passivation of the respective samples. In case of CdSe@CdS nanorods, the highest emission intensities have been obtained by using short-chain ligands for the transfer preserving more than 50% of the pristine quantum yield of the hydrophobic nanorods. As opposed to this, the best possible quantum efficiency for the CdSe@CdS/ZnS nanorods has been achieved via MUA. The gained knowledge could be applied to transfer for the first time 2-dimensional CdSe-CdS core-crown nanoplatelets into water while preserving significant photoluminescence (up to 12% quantum efficiency).

  10. 2 Dimensional Hydrodynamic Flood Routing Analysis on Flood Forecasting Modelling for Kelantan River Basin

    Azad Wan Hazdy

    2017-01-01

    Flood disaster occurs quite frequently in Malaysia and has been categorized as the most threatening natural disaster compared to landslides, hurricanes, tsunami, haze and others. A study by the Department of Irrigation and Drainage (DID) shows that 9% of land areas in Malaysia are prone to flood, which may affect approximately 4.9 million of the population. 2-dimensional flood routing modelling is becoming broadly utilized for flood plain modelling and is an extremely viable tool for evaluating floods. Flood propagation can be better understood by simulating the flow and water level using hydrodynamic modelling. Hydrodynamic flood routing can be characterized by the spatial complexity of the schematization, such as a 1D model or a 2D model. It was found that most available hydrological models for flood forecasting focus on short durations, as compared to long-duration hydrological models using the Probabilistic Distribution Moisture Model (PDM). The aim of this paper is to discuss preliminary findings on the development of a flood forecasting model using the Probabilistic Distribution Moisture Model (PDM) for the Kelantan river basin. Among the findings discussed in this paper is a preliminarily calibrated PDM model, which performed reasonably for the December 2014 event but underestimated the peak flows. Apart from that, this paper also discusses findings on the Soil Moisture Deficit (SMD) and flood plain analysis. Flood forecasting is a complex process that begins with an understanding of the geographical makeup of the catchment and knowledge of the preferential regions of heavy rainfall and flood behaviour for the area of responsibility. Therefore, to decrease the uncertainty in the model output, it is important to increase the complexity of the model.

  11. Calculator calculus

    McCarty, George

    1982-01-01

    How THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.SgX, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1^10, 1.01^100, 1.001^1000, .... Another example is t = 0.1, 0.01, in the functio...
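
    The quoted numerical illustration is easy to reproduce today in a couple of lines of Python instead of on a pocket calculator; the sequence (1 + 1/n)^n approaches e ≈ 2.71828 as n grows:

      # Reproducing the sequence 1.1^10, 1.01^100, 1.001^1000, ... quoted above.
      for n in (10, 100, 1000, 10_000):
          print(n, (1 + 1 / n) ** n)   # 2.5937..., 2.7048..., 2.7169..., 2.7181...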

  12. Gamma-effects on 2-dimensional transonic aerodynamics. [specific heat ratio due to shock induced separation]

    Tuzla, K.; Russell, D. A.; Wai, J. C.

    1976-01-01

    Nonlifting 10% biconvex airfoils are mounted in a 30 × 40 cm Ludwieg-tube-driven transonic test section and the flow field is recorded with a holographic interferometer. Nitrogen, argon, and carbon dioxide are used as the principal test gases. Experiments are conducted with Reynolds numbers based on chord of (0.5-3.5) × 10^6 and Mach numbers of 0.70, 0.75, and 0.80. Supporting calculations use inviscid transonic small-disturbance and full-potential computer codes coupled with simple integral boundary-layer modeling. Systematic studies show that significant gamma-effects can occur due to shock-induced separation.

  13. Dynamic Conduction in 2-Dimensional Conductor: Magneto-Conductivity Tensor under Rapid Oscillatory Electric Field

    Pijus Kanti Samanta

    2016-06-01

    The conduction mechanism of metals under a rapidly oscillating electric field and a static perpendicular magnetic field has been investigated within the regime ω ≫ 1/τ. The conventional Lorentz force equation has been used to calculate the conduction current density within the metal. It was found that the conductivity of the metal is anisotropic in nature. We also found that the diagonal elements of the conductivity tensor are equal, while the off-diagonal elements are equal in magnitude but opposite in sign. Further, it is found that the diagonal components are imaginary and vary inversely with ω, while the off-diagonal components are inversely proportional to ω².
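
    The stated structure is what a standard Drude/Lorentz-force treatment gives in this limit. In our notation (not necessarily the author's conventions, and with overall signs depending on the chosen e^{∓iωt} time convention), for carrier density n, charge e, mass m and cyclotron frequency ω_c = eB/m, when ω ≫ 1/τ and ω ≫ ω_c:

        $\sigma_{xx} = \sigma_{yy} \simeq \frac{i\,n e^{2}}{m\,\omega}, \qquad \sigma_{xy} = -\sigma_{yx} \simeq -\frac{n e^{2}\,\omega_{c}}{m\,\omega^{2}},$

    i.e. equal, purely imaginary diagonal elements falling off as 1/ω, and real off-diagonal elements of opposite sign falling off as 1/ω².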

  14. Classification of 2-dimensional array patterns: assembling many small neural networks is better than using a large one.

    Chen, Liang; Xue, Wei; Tokuda, Naoyuki

    2010-08-01

    In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed sized 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs could be used for problems such as these. However, a districted neural network can be used to reduce the training complexity. A districted neural network usually consists of two levels of sub-neural networks. Each of the lower level neural networks, called a regional sub-neural network, takes the elements in a region of the array as its inputs and is expected to output a temporary class label, called an individual opinion, based on the partial information of the entire array. The higher level neural network, called an assembling sub-neural network, uses the outputs (opinions) of regional sub-neural networks as inputs, and by consensus derives the label decision for the object. Each of the sub-neural networks can be trained separately and thus the training is less expensive. The regional sub-neural networks can be trained and performed in parallel and independently, therefore a high speed can be achieved. We prove theoretically in this paper, using a simple model, that a districted neural network is actually more stable than an undistricted neural network in noisy environments. We conjecture that the result is valid for all neural networks. This theory is verified by experiments involving gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in recognition or classification of 2-dimensional array patterns in highly noisy environments.
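
    A minimal sketch of the two-level districted architecture described above, using scikit-learn multilayer perceptrons as stand-ins for the sub-neural networks; the 2 × 2 partition of the array, the use of class probabilities as the regional ``opinions'', and all hyper-parameters are our assumptions rather than the paper's choices.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def split_regions(images, n_rows=2, n_cols=2):
          """Split (N, H, W) arrays into flattened per-region inputs (one block per region)."""
          h, w = images.shape[1] // n_rows, images.shape[2] // n_cols
          return [images[:, i*h:(i+1)*h, j*w:(j+1)*w].reshape(len(images), -1)
                  for i in range(n_rows) for j in range(n_cols)]

      def train_districted(images, labels):
          regions = split_regions(images)
          regional = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(r, labels)
                      for r in regions]                              # regional sub-networks
          opinions = np.hstack([net.predict_proba(r) for net, r in zip(regional, regions)])
          assembler = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(opinions, labels)
          return regional, assembler                                  # assembling sub-network

      def predict_districted(regional, assembler, images):
          regions = split_regions(images)
          opinions = np.hstack([net.predict_proba(r) for net, r in zip(regional, regions)])
          return assembler.predict(opinions)

      # Usage (given (N, H, W) image arrays and class labels):
      #   regional, assembler = train_districted(train_imgs, train_labels)
      #   predicted = predict_districted(regional, assembler, test_imgs)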

  15. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real time clustering, thus to reduce the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first is one called the ideal one which searches clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  16. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    Gkaitatzis, Stamatios; The ATLAS collaboration; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate and with a very small latency, of the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thereby reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, referred to as the ideal one, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...

  17. Simulation of Evacuation Characteristics Using a 2-Dimensional Cellular Automata Model for Pedestrian Dynamics

    Liqiang Ji

    2013-01-01

    Full Text Available In public places, high pedestrian density is one of the direct causes of crowding and trampling disasters, so it is very necessary to investigate the collective and evacuation characteristics of pedestrian movement. In the occupants' evacuation process, the people-people and people-environment interactions are fully considered in this paper; they are divided into the exit attraction, the repulsive force between people, the friction between people, the repulsive force between people and barriers, and the attraction of surrounding people. Through analysis of existing models, a new occupant evacuation cellular automata (CA) model based on the social force model is presented, which overcomes the shortcomings of high-density crowd simulation and combines the advantages of CA, namely simple rules and fast calculation. The simulation results show great applicability for evacuation under high-density crowd conditions, and segregation phenomena have also been found in the bidirectional pedestrian flow. Besides this, setting an isolation belt near the exit or entrance of an underpass not only remarkably decreases the density and the risk of trampling disasters but also increases the evacuation efficiency, so it provides a new idea for infrastructure design of exits and entrances.

  18. MEASUREMENT OF 2-DIMENSIONAL DISPLACEMENT USING 2-D ZERO-REFERENCE MARKS

    Wang Yingnan; Zhou Chenggang; Huang Wenhao

    2005-01-01

    Several 2-D displacement sensing methods are reviewed. The cross diffraction grating has no absolute zero-reference. With the optical fiber method, the output signal is strongly affected by the quality of the reflecting surface, and it is hard to achieve high resolution. With concentric-circle gratings, the displacement can only be obtained by complicated processing of the experimental data. Weighing the advantages and limitations of the methods above, a novel 2-D zero-reference mark is proposed and demonstrated. This kind of mark provides an absolute zero-reference when used in pairs, and the experimental results are simple to process. By superimposing a pair of specially coded 2-D marks, the correct alignment position of the two marks can be detected from the maximum of the sharp intensity peak, and each slope of the peak has good linearity, which can be used to achieve high resolution in positioning and alignment in two dimensions. The design and fabrication of such 2-D zero-reference marks are introduced in detail. The experimental results agree with the theoretical ones.
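
    The alignment principle (a sharp correlation peak when a pair of coded marks is correctly superimposed) can be sketched as follows; the random ±1 code and the use of scipy's 2-D correlation are illustrative stand-ins for the actual mark coding and optical readout.

```python
# Sketch of 2-D alignment detection with a coded zero-reference mark: the
# cross-correlation of the mark with a displaced copy peaks sharply at the
# correct relative position, and the peak location gives the 2-D displacement.
# A random +/-1 code stands in for the real mark pattern.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)
mark = 2 * rng.integers(0, 2, size=(16, 16)) - 1        # coded reference mark
moved = np.roll(mark, shift=(3, -2), axis=(0, 1))       # same mark displaced by (3, -2)

corr = correlate2d(moved, mark, mode="full")
peak = np.unravel_index(np.argmax(corr), corr.shape)
displacement = (peak[0] - (mark.shape[0] - 1), peak[1] - (mark.shape[1] - 1))
print(displacement)                                     # recovers (3, -2)
```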

  19. Focal cerebral ischemia measured by the intra-arterial 133xenon method. Limitations of 2-dimensional blood flow measurements

    Skyhøj Olsen, T; Larsen, B; Bech Skriver, E;

    1981-01-01

    Patients with infarcts involving cortical surface structures were included in the study. Eleven such patients were found among 43 consecutive patients with completed stroke, all investigated with CT-scan. The blood supply to the infarcted areas was evaluated using 3 different approaches: 1) the first minute washout of 133 xenon (rCBF), 2) the initial distribution of isotope during the first 5 sec, and 3) the cumulated counts recorded during 15 min. Compton scatter and the "look through phenomenon" were responsible for the majority of counts recorded from the infarcted areas, and the blood flow recorded was found to be grossly overestimated and much more influenced by the blood flow in the surroundings than by that in the ischemic area itself. However, using the 3 approaches, infarcted areas were always disclosed by our equipment. It is concluded that the 2-dimensional isotope technique is not reliable for quantifying focal cerebral ischemia.

  20. Transient and 2-Dimensional Shear-Wave Elastography Provide Comparable Assessment of Alcoholic Liver Fibrosis and Cirrhosis

    Thiele, Maja; Detlefsen, Sönke; Møller, Linda Maria Sevelsted;

    2016-01-01

    BACKGROUND & AIMS: Alcohol abuse causes half of all deaths from cirrhosis in the West, but few tools are available for noninvasive diagnosis of alcoholic liver disease. We evaluated 2 elastography techniques for diagnosis of alcoholic fibrosis and cirrhosis; liver biopsy with Ishak score and collagen-proportionate area were used as reference. METHODS: We performed a prospective study of 199 consecutive patients with ongoing or prior alcohol abuse, but without known liver disease. One group of patients had a high pretest probability of cirrhosis because they were identified at hospital liver clinics (in Southern Denmark). The second, lower-risk group was recruited from municipal alcohol rehabilitation centers and the Danish national public health portal. All subjects underwent same-day transient elastography (FibroScan), 2-dimensional shear wave elastography (Supersonic Aixplorer), and liver...

  1. Focal cerebral ischemia measured by the intra-arterial 133xenon method. Limitations of 2-dimensional blood flow measurements

    Skyhøj Olsen, T; Larsen, B; Bech Skriver, E;

    1981-01-01

    Patients with infarcts involving cortical surface structures were included in the study. Eleven such patients were found among 43 consecutive patients with completed stroke, all investigated with CT-scan. The blood supply to the infarcted areas was evaluated using 3 different approaches: 1) the first minute washout of 133 xenon (rCBF), 2) the initial distribution of isotope during the first 5 sec, and 3) the cumulated counts recorded during 15 min. Compton scatter and the "look through phenomenon" were responsible for the majority of counts recorded from the infarcted areas, and the blood flow recorded was found to be grossly overestimated and much more influenced by the blood flow in the surroundings than by that in the ischemic area itself. However, using the 3 approaches, infarcted areas were always disclosed by our equipment. It is concluded that the 2-dimensional isotope technique is not reliable for quantifying focal cerebral ischemia.

  2. Characterization of TES bolometers used in 2-dimensional Backshort-Under-Grid (BUG) arrays for far-infrared astronomy

    Staguhn, J.G. [NASA/GSFC, Greenbelt, MD 20771 (United States) and SSAI, 10210 Greenbelt Rd., Lanham, MD 20706 (United States)]. E-mail: johannes.staguhn@gsfc.nasa.gov; Allen, C.A. [NASA/GSFC, Greenbelt, MD 20771 (United States); Benford, D.J. [NASA/GSFC, Greenbelt, MD 20771 (United States); Chervenak, J.A. [NASA/GSFC, Greenbelt, MD 20771 (United States); Chuss, D.T. [NASA/GSFC, Greenbelt, MD 20771 (United States); Miller, T.M. [NASA/GSFC, Greenbelt, MD 20771 (United States); QSS, 4500 Forbes Blvd., Lanham, MD 20706 (United States); Moseley, S.H. [NASA/GSFC, Greenbelt, MD 20771 (United States); Wollack, E.J. [NASA/GSFC, Greenbelt, MD 20771 (United States)

    2006-04-15

    We have produced a laboratory demonstration of our new Backshort-Under-Grid (BUG) bolometer array architecture in a monolithic, 2-dimensional, 8x8 format. The detector array is designed as a square grid of suspended, 1 µm thick silicon bolometers with superconducting molybdenum/gold bilayer TESs. These detectors use an additional layer of gold bars deposited on top of the bilayer, oriented transverse to the direction of the current flow, for the suppression of excess noise. This detector design has earlier been shown to provide device performance near the fundamental noise limit. We present results from performance measurements of witness devices. In particular we demonstrate that the in-band excess noise level of the TES detectors is less than 20% above the thermodynamic phonon noise limit and not significantly higher out of band at frequencies that cannot be attenuated by the Nyquist filter. Our 8x8 BUG arrays will be used in the near future for astronomical observations in several (sub-)millimeter instruments.

  3. Experimental and model investigation of the time-dependent 2-dimensional distribution of binding in a herringbone microchannel.

    Foley, Jennifer O; Mashadi-Hossein, Afshin; Fu, Elain; Finlayson, Bruce A; Yager, Paul

    2008-04-01

    A microfluidic device known to mix bulk solutions, the herringbone microchannel, was incorporated into a surface-binding assay to determine if the recirculation of solution altered the binding of a model protein (streptavidin) to the surface. Streptavidin solutions were pumped over surfaces functionalized with its ligand, biotin, and the binding of streptavidin to those surfaces was monitored using surface plasmon resonance imaging. Surface binding was compared between a straight microchannel and herringbone microchannels in which the chevrons were oriented with and against the flow direction. A 3-dimensional finite-element model of the surface binding reaction was developed for each of the geometries and showed strong qualitative agreement with the experimental results. Experimental and model results indicated that the forward and reverse herringbone microchannels substantially altered the distribution of protein binding (2-dimensional binding profile) as a function of time when compared to a straight microchannel. Over short distances (less than 1.5 mm) down the length of the microchannel, the model predicted no additional protein binding in the herringbone microchannel compared to the straight microchannel, consistent with previous findings in the literature.

  4. Magnetic Field Calculator

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...

  5. Set-up verification and 2-dimensional electronic portal imaging device dosimetry during breath hold compared with free breathing in breast cancer radiation therapy

    Brouwers, Patricia J A M; Lustberg, Tim; Borger, Jacques H.; van Baardwijk, Angela A W; Jager, Jos J.; Murrer, Lars H P; Nijsten, Sebastian M J J G; Reymen, Bart H.; van Loon, Judith G M; Boersma, Liesbeth J.

    2015-01-01

    Purpose: To compare set-up and 2-dimensional (2D) electronic portal imaging device (EPID) dosimetry data of breast cancer patients treated during voluntary moderately deep inspiration breath hold (vmDIBH) and free breathing (FB). Methods and materials: Set-up data were analyzed for 29 and 51 consecu

  6. X-ray crystal-structure refinement of the nearly commensurate phase of 1T-TaS2 in (3+2)-dimensional superspace

    Spijkerman, A; deBoer, J.L.; Meetsma, A.; Wiegers, G.A; vanSmaalen, S.

    1997-01-01

    The structure of the nearly commensurate phase of 1T-TaS2 (1T(2)-TaS2) has been refined in (3 + 2)-dimensional superspace against single-crystal x-ray-diffraction data collected at 300 K. The intensities of main reflections and satellites up to sixth order were measured. The unique data set in P (3)

  7. Geochemical Calculations Using Spreadsheets.

    Dutch, Steven Ian

    1991-01-01

    Spreadsheets are well suited to many geochemical calculations, especially those that are highly repetitive. Some of the kinds of problems that can be conveniently solved with spreadsheets include elemental abundance calculations, equilibrium abundances in nuclear decay chains, and isochron calculations. (Author/PR)

  8. Autistic Savant Calendar Calculators.

    Patti, Paul J.

    This study identified 10 savants with developmental disabilities and an exceptional ability to calculate calendar dates. These "calendar calculators" were asked to demonstrate their abilities, and their strategies were analyzed. The study found that the ability to calculate dates into the past or future varied widely among these…

  9. How Do Calculators Calculate Trigonometric Functions?

    Underwood, Jeremy M.; Edwards, Bruce H.

    How does your calculator quickly produce values of trigonometric functions? You might be surprised to learn that it does not use series or polynomial approximations, but rather the so-called CORDIC method. This paper will focus on the geometry of the CORDIC method, as originally developed by Volder in 1959. This algorithm is a wonderful…
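
    A minimal software sketch of the rotation-mode CORDIC iteration (shift-and-add rotations toward the target angle) is shown below; real calculators implement this in fixed-point hardware, and the table length here is an arbitrary choice.

```python
# Minimal CORDIC sketch (rotation mode) for computing sin and cos, in the spirit
# of Volder's algorithm mentioned above; float arithmetic is used for clarity.
import math

ANGLES = [math.atan(2.0 ** -i) for i in range(40)]        # arctan lookup table
K = 1.0
for a in ANGLES:                                          # accumulated CORDIC gain
    K *= math.cos(a)

def cordic_sin_cos(theta):
    """Return (sin(theta), cos(theta)) for |theta| <= pi/2 using shift-add rotations."""
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(ANGLES):
        d = 1.0 if z >= 0 else -1.0                       # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y * K, x * K                                   # undo the CORDIC gain

print(cordic_sin_cos(math.pi / 6))   # ~ (0.5, 0.8660)
```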

  10. SOLUTION OF THE TIME-DEPENDENT SCHRODINGER-EQUATION FOR 2-DIMENSIONAL SPIN-1/2 HEISENBERG SYSTEMS

    DEVRIES, P; DERAEDT, H

    1993-01-01

    Numerical calculations on two-dimensional spin-1/2 XXZ Heisenberg models are presented. The computational technique entails solving the time-dependent Schrodinger equation. The time propagation of wave functions is performed employing Trotter-Suzuki product formulas. The algorithms are numerically s
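
    The product-formula idea can be sketched schematically: the short-time propagator exp(-iH dt) for H = A + B is replaced by a symmetric product of exponentials of the non-commuting parts. The toy example below uses small random Hermitian matrices rather than the spin-1/2 Hamiltonian of the paper.

```python
# Schematic illustration of Trotter-Suzuki time propagation, not the authors'
# implementation: exp(-i(A+B)dt) approximated by a symmetric product of
# exponentials, compared against the exact propagator for small matrices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A, B = random_hermitian(4), random_hermitian(4)   # H = A + B with [A, B] != 0
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
t, steps = 1.0, 200
dt = t / steps

# Second-order (symmetric) step: exp(-iA dt/2) exp(-iB dt) exp(-iA dt/2)
half_a, full_b = expm(-0.5j * A * dt), expm(-1j * B * dt)
psi = psi0.copy()
for _ in range(steps):
    psi = half_a @ (full_b @ (half_a @ psi))

exact = expm(-1j * (A + B) * t) @ psi0
print(np.linalg.norm(psi - exact))   # small, shrinks roughly as dt**2
```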

  11. Simulating the Osceola Mudflow Lahar Event in the Pacific Northwest using a GPU Based 2-Dimensional Hydraulic Model

    Katz, B. G.; Eppert, S.; Lohmann, D.; Li, S.; Goteti, G.; Kaheil, Y. H.

    2011-12-01

    At 4,400 meters, Mount Rainier has been the point of origin for several major lahar events. The largest event, termed the "Osceola Mudflow," occurred 5,500 years ago and covered an area of approximately 550 km² with a total volume of deposited material of 2 to 4 km³. Particularly deadly, large lahars are estimated to have maximum flow velocities of 100 km/h with a density often described as "Flowing Concrete." While rare, these events typically cause total destruction within a lahar inundation zone. It is estimated that approximately 150,000 people live on top of previous deposits left by lahars, which can be triggered by anything from earthquakes, to glacial and chemical erosion of volcanic bedrock over time, to liquefaction caused by extreme rainfall events. A novel methodology utilizing a 2-dimensional hydraulic model has been implemented, allowing high-resolution (30 m) lahar inundation maps to be generated. The utility of this model, above or in addition to other methodologies such as that of Iverson (1998), lies in its portability to other lahar zones as well as its ability to model any total volume specified by the user. The process for generating lahar flood plains requires few inputs: a Digital Terrain Model (DTM) of any resolution, a mask defining the locations of lahar genesis, a raster of friction coefficients, and a time series depicting uniform material accumulation over the genesis mask, which is allowed to flow down-slope. Finally, a significant improvement in speed has been made in solving the two-dimensional model by utilizing the latest graphics processing unit (GPU) technology, which has resulted in a greater than 200-fold speedup in model run time over previous CPU-based methods. The model runs for the Osceola Mudflow compare favorably with USGS inundation regions derived using field measurements and GIS-based approaches such as the LAHARZ program suite. The overall gradation from low to high risk matches well; however, the new

  12. Core calculations of JMTR

    Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    1998-03-01

    In materials testing reactors like the JMTR (Japan Materials Testing Reactor) of 50 MW at the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and neutron energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. In order to advance core calculation in the JMTR, the application of MCNP to the assessment of core reactivity, neutron flux and spectra has been investigated. In this study, in order to reduce the calculation time and variance, the results of calculations using the K-code and a fixed source, and the use of weight windows, were compared. As to the calculation method, the modeling of the total JMTR core, the conditions for the calculation and the adopted variance reduction technique are explained, and the results of the calculations are shown. No significant difference was observed in the neutron flux results arising from the different modeling of the fuel region in the K-code and fixed-source calculations. The method of assessing the results of the neutron flux calculation is also described. (K.I.)

  13. A 2-dimensional heat transfer analysis of a sheet-and-tube flat plate PV/thermal collector

    Carriere, J.; Harrison, S. [Queen' s Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering Solar Calorimetry Lab

    2008-08-15

    Temperature gradients in photovoltaic/thermal (PV/T) systems can have a significant impact on the reliability and life-span of system components. However, many simple PV/T models do not consider temperature gradients. In this study, a detailed heat transfer model was used to quantify temperature gradients within a PV/T panel in order to predict thermal and electrical performance as a function of fluid and atmospheric temperatures. The PV/T system consisted of a PV laminate bonded to a thermal collector. A glass cover was used as a secondary glazing system. The effect of increasing the thermal resistance between the various layers in the construction was evaluated in order to measure the temperature gradient through the absorber thickness. A 2-D finite difference model of heat flow in the collector was developed to study the magnitude of the temperature gradient. Steady-state heat flow was calculated along the width of the system as well as between the layers. Heat flux was calculated to the centre of each element. Total absorptivity in each layer was determined by adding the absorption of each portion of the spectrum. Heat losses through the top of the collector were estimated using a 1-D analysis. The study showed that current methods of calculating fin efficiency are not valid when temperature gradients are not considered. Future studies will examine the effect of thermal expansion and shear stresses. 9 refs., 8 figs.
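
    As a toy illustration of the kind of 2-D finite-difference heat analysis described (not the collector model itself), the sketch below solves the steady-state Laplace equation on a small grid with fixed edge temperatures and reports the through-thickness profile at mid-width.

```python
# Minimal sketch of a 2-D steady-state finite-difference heat conduction solve
# (Jacobi iteration on a uniform grid with fixed edge temperatures). Grid size,
# boundary temperatures, and material uniformity are illustrative assumptions,
# not the collector model described above.
import numpy as np

nx, ny = 40, 20
T = np.full((ny, nx), 25.0)      # initial guess, degrees C
T[0, :] = 60.0                   # hot top edge (e.g. absorber side)
T[-1, :] = 25.0                  # ambient bottom edge
T[:, 0] = T[:, -1] = 25.0        # ambient sides

for _ in range(5000):            # Jacobi sweeps of the 5-point Laplace stencil
    Tn = T.copy()
    Tn[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:])
    if np.max(np.abs(Tn - T)) < 1e-6:
        break
    T = Tn

# Vertical temperature profile through the "absorber thickness" at mid-width:
print(np.round(T[:, nx // 2], 2))
```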

  14. Electrical installation calculations advanced

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installatio

  15. Electrical installation calculations basic

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. Fo

  16. Calculating correct compilers

    Bahr, Patrick; Hutton, Graham

    2015-01-01

    In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms...

  17. Radar Signature Calculation Facility

    Federal Laboratory Consortium — FUNCTION: The calculation, analysis, and visualization of the spatially extended radar signatures of complex objects such as ships in a sea multipath environment and...

  18. Electronics Environmental Benefits Calculator

    U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...

  19. Calculators and Polynomial Evaluation.

    Weaver, J. F.

    The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
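
    The abstract does not name the particular evaluation scheme; Horner's rule is the classic way to evaluate a polynomial with only one stored running value, which suits a non-programmable calculator with limited data storage, and it is sketched below as an illustration.

```python
# Horner's rule: evaluate p(x) = a_n x^n + ... + a_1 x + a_0 with n multiplications
# and n additions, needing only one running value -- well suited to a calculator
# with little data storage. (Illustrative; the paper's specific scheme may differ.)
def horner(coeffs, x):
    """coeffs are ordered from the highest-degree term down to the constant."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

print(horner([2, -3, 0, 5], 2.0))   # 2*2**3 - 3*2**2 + 5 = 9.0
```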

  20. Interval arithmetic in calculations

    Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima

    2016-10-01

    Interval arithmetic is the mathematical structure which, for real intervals, defines operations analogous to the ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given mathematical model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. On the whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we considered the definition of interval mathematics, investigated its properties, proved a theorem, and showed the efficiency of the new interval arithmetic. Besides, we briefly reviewed the works devoted to interval analysis and observed the basic tendencies in the development of interval analysis and interval calculations.
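
    A minimal sketch of the basic idea (intervals as data objects with enclosure-preserving operations) follows; outward rounding, division, and the other refinements discussed in such papers are omitted.

```python
# Minimal sketch of interval arithmetic: an interval [lo, hi] as the basic data
# object, with +, - and * defined so that the result encloses every possible
# value of the operation on members of the operands.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

x = Interval(1.0, 2.0)      # a quantity known only approximately
y = Interval(-0.5, 0.5)     # e.g. a measurement with symmetric error
print(x + y, x * y)         # Interval(lo=0.5, hi=2.5) Interval(lo=-1.0, hi=1.0)
```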

  1. Unit Cost Compendium Calculations

    U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...

  2. Calculativeness and trust

    Frederiksen, Morten

    2014-01-01

    Williamson’s characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson’s argument from the perspective of Løgstrup’s phenomenological theory of trust. Contrary to Williamson, however, Løgstrup’s contention is that trust, not calculativeness, is the default attitude, and only when suspicion is awoken does trust falter. The paper argues that while Williamson’s distinction between calculativeness and trust is supported by phenomenology, the analysis needs to take actual subjective experience into consideration. It points out that, first, Løgstrup places trust alongside calculativeness as a different mode of engaging in social interaction, rather than conceiving of trust as a state or the outcome of a decision-making process. Secondly, the analysis must take...

  3. EFFECTIVE DISCHARGE CALCULATION GUIDE

    D.S.BIEDENHARN; C.R.THORNE; P.J.SOAR; R.D.HEY; C.C.WATSON

    2001-01-01

    This paper presents a procedure for calculating the effective discharge for rivers with alluvial channels. An alluvial river adjusts the bankfull shape and dimensions of its channel to the wide range of flows that mobilize the boundary sediments. It has been shown that time-averaged river morphology is adjusted to the flow that, over a prolonged period, transports the most sediment. This is termed the effective discharge. The effective discharge may be calculated provided that the necessary data are available or can be synthesized. The procedure for effective discharge calculation presented here is designed to have general applicability, to be capable of being applied consistently, and to represent the effects of the physical processes responsible for determining channel dimensions. An example of the necessary calculations and applications of the effective discharge concept is presented.
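
    In practice the effective discharge is usually located by combining a flow-frequency histogram with a sediment-transport rating curve and finding the discharge class that carries the most sediment over time; the sketch below illustrates that idea with a synthetic flow record and an assumed rating Qs = a·Q^b (placeholder coefficients, not values from this paper).

```python
# Illustrative effective-discharge calculation: bin a daily flow record into
# discharge classes, apply a sediment rating curve Qs = a * Q**b, and pick the
# class whose (frequency x transport rate) product is largest. The synthetic
# flows and rating coefficients are placeholders, not data from the paper.
import numpy as np

rng = np.random.default_rng(42)
flows = rng.lognormal(mean=3.0, sigma=0.8, size=365 * 20)   # 20 years of daily Q, m^3/s

a, b = 0.01, 1.8                                            # assumed sediment rating
counts, edges = np.histogram(flows, bins=25)
midpoints = 0.5 * (edges[:-1] + edges[1:])
sediment_per_class = counts * a * midpoints ** b            # total load per class

effective_q = midpoints[np.argmax(sediment_per_class)]
print(f"Effective discharge ~ {effective_q:.1f} m^3/s")
```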

  4. Magnetic Field Grid Calculator

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator will compute the estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly...

  5. Topological Structure of Disclination Lines in 2-Dimensional Liquid Crystals

    张慧; 杨国宏

    2002-01-01

    Using the φ-mapping method and topological current theory, the topological structure of disclination lines in 2-dimensional liquid crystals is studied. By introducing the strength density and the topological current of many disclination lines, it is pointed out that the disclination lines are determined by the singularities of the director field and are topologically quantized by the Hopf indices and Brouwer degrees. Due to the physical equivalence of the director fields n(x) and -n(x), the Hopf indices can be integers or half-integers, representing a generalization of our previous studies of integer Hopf indices.

  6. Current interruption transients calculation

    Peelo, David F

    2014-01-01

    Provides an original, detailed and practical description of current interruption transients, origins, and the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. This book provides an original, detailed and practical description of current interruption transients, origins,

  7. Source and replica calculations

    Whalen, P.P.

    1994-02-01

    The starting point of the Hiroshima-Nagasaki Dose Reevaluation Program is the energy and directional distributions of the prompt neutron and gamma-ray radiation emitted from the exploding bombs. A brief introduction to the neutron source calculations is presented. The development of our current understanding of the source problem is outlined. It is recommended that adjoint calculations be used to modify source spectra to resolve the neutron discrepancy problem.

  8. Scientific calculating peripheral

    Ethridge, C.D.; Nickell, J.D. Jr.; Hanna, W.H.

    1979-09-01

    A scientific calculating peripheral for small intelligent data acquisition and instrumentation systems and for distributed-task processing systems is established with a number-oriented microprocessor controlled by a single-component universal peripheral interface microcontroller. A MOS/LSI number-oriented microprocessor provides the scientific calculating capability with Reverse Polish Notation data format. Master processor task definition storage, input data sequencing, computation processing, result reporting, and interface protocol are managed by a single-component universal peripheral interface microcontroller.

  9. A Tunable Terahertz Detector based on Self-Assembled Plasmonic Structure on a GaAs 2-Dimensional Electron Gas

    Biradar, Anandrao Shesherao

    The work presented in this report concerns real-time estimation of wind and analysis of the current wind-correction algorithm in a commercial off-the-shelf autopilot board. The open-source ArduPilot Mega 2.5 (APM 2.5) board manufactured by 3D Robotics is used. There is currently a great deal of development in the field of unmanned aerial systems (UAVs), covering various aerial platforms and the corresponding autonomous systems for them. This technology has advanced to a stage where UAVs can be used for specifically designed missions and deployed reliably. However, missions requiring high maneuverability with greater efficiency are still an area of active research; progress here would help increase reliability and significantly extend the range of UAVs. One of the problems addressed through this thesis work is that current autopilot systems have an algorithm that handles wind by attitude correction with an appropriate crab angle, but the real-time wind vector (direction) and its calculated velocity are based on a geometric and algebraic transformation between the ground-speed and air-speed vectors. This method of wind estimation and prediction often leads to inaccuracy in the attitude correction, which is demonstrated in the following report with simulation and actual field testing. In the later part, new ways to handle flying in windy conditions are proposed.

  10. Revised users manual, Pulverized Coal Gasification or Combustion: 2-dimensional (87-PCGC-2): Final report, Volume 2. [87-PCGC-2

    Smith, P.J.; Smoot, L.D.; Brewster, B.S.

    1987-12-01

    A two-dimensional, steady-state model for describing a variety of reactive and non-reactive flows, including pulverized coal combustion and gasification, is presented. Recent code revisions and additions are described. The model, referred to as 87-PCGC-2, is applicable to cylindrical axi-symmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using either a flux method or a discrete ordinates method. The particle phase is modeled in a Lagrangian framework, such that mean paths of particle groups are followed. Several multi-step coal devolatilization schemes are included along with a heterogeneous reaction scheme that allows for both diffusion and chemical reaction. Major gas-phase reactions are modeled assuming local instantaneous equilibrium, and thus the reaction rates are limited by the turbulent mixing rate. A NOx finite-rate chemistry submodel is included which integrates chemical kinetics and the statistics of the turbulence. The gas phase is described by elliptic partial differential equations that are solved by an iterative line-by-line technique. Under-relaxation is used to achieve numerical stability. The generalized nature of the model allows for calculation of isothermal fluid mechanics, gaseous combustion, droplet combustion, particulate combustion and various mixtures of the above, including combustion of coal-water and coal-oil slurries. Both combustion and gasification environments are permissible. User information and theory are presented, along with sample problems. 106 refs.

  11. Calculations in apheresis.

    Neyrinck, Marleen M; Vrielink, Hans

    2015-02-01

    It's important to work smoothly with your apheresis equipment when you are an apheresis nurse. Attention should be paid to your donor/patient and the product you're collecting. It adds value to your work when you are able to calculate the efficiency of your procedures. You must be able to obtain an optimal product without putting your donor/patient at risk. Not only does the total blood volume (TBV) of the donor/patient play an important role; specific blood values also influence the apheresis procedure. Therefore, not all donors/patients should be addressed in the same way. Calculation of TBV, extracorporeal volume, and total plasma volume is needed. Many issues determine your procedure time. By knowing the collection efficiency (CE) of your apheresis machine, you can calculate the number of blood volumes to be processed to obtain specific results, and whether you need one procedure or more. It is not always necessary to process 3× the TBV. In this way, the donor/patient can be spared being connected to the apheresis device for longer than needed. By calculating the CE of each device, you can also compare the various devices, and the nurses/operators, for quality-control purposes.
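
    The kind of calculation meant here can be sketched as follows; Nadler's formula for total blood volume and the simple yield relation (yield = CE × pre-count × volume processed) are standard textbook expressions used purely as an illustration, with example numbers that are not taken from the article.

```python
# Worked sketch of common apheresis calculations: total blood volume (TBV) via
# Nadler's formula, and the blood volume that must be processed for a target
# cell yield given the device's collection efficiency (CE). Example numbers
# are illustrative, not values from the article.
def tbv_nadler(height_m, weight_kg, male=True):
    """Total blood volume in litres (Nadler's formula)."""
    if male:
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833

def volume_to_process(target_yield, precount_per_ul, ce_fraction):
    """Blood volume (litres) to process: V = yield / (CE x pre-count)."""
    cells_per_l = precount_per_ul * 1e6      # 1 litre = 1e6 microlitres
    return target_yield / (ce_fraction * cells_per_l)

tbv = tbv_nadler(1.80, 80.0, male=True)                      # ~5.3 L
v_needed = volume_to_process(4e11, 250e3, ce_fraction=0.5)   # e.g. a platelet target
print(f"TBV = {tbv:.2f} L, process {v_needed:.1f} L = {v_needed / tbv:.1f} x TBV")
```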

  12. Taphonomic implications from Upper Triassic mass flow deposits: 2-dimensional reconstructions of an ammonoid mass occurrence (Carnian, Taurus Mountains, Turkey)

    Mayrhofer, Susanne

    2014-10-01

    Ammonoid mass occurrences of Late Triassic age were investigated in sections from Aşağıyaylabel and Yukarıyaylabel, which are located in the Taurus Platform-Units of eastern Turkey. The cephalopod beds are almost monospecific, with > 99.9 % of individuals from the ceratitic genus Kasimlarceltites, which comprises hundreds of millions of ammonoid specimens. The ontogenetic composition of the event fauna varies from bed to bed, suggesting that these redeposited shell-rich sediments had different source areas. The geographical extent of the mass occurrence can be traced over large areas of up to 10 km². Each of the Early Carnian (Julian 2) ammonoid mass occurrences signifies a single storm (e.g. storm-wave action) or tectonic event (e.g. earthquake) that caused gravity flows and turbidity currents. Three types of ammonoid accumulation deposits are distinguished by their genesis: 1) matrix-supported floatstones, produced by low-density debris flows, 2) mixed floatstones and packstones, formed by high-density debris flows, and 3) densely packed, ammonoid-shell-supported packstones, which result from turbidity currents. Two-dimensional calculations on the mass occurrences, based on sectioning, reveal aligned ammonoid shells, implying transport in a diluted sediment. The ammonoid shells are predominantly redeposited, preserved as mixed autochthonous/parautochthonous/allochthonous communities based on biogenic and sedimentological concentration mechanisms (i.e. deposited in situ or post-mortem). This taphonomic evaluation of the Kasimlarceltites beds thus reveals new insights into the environment of deposition of the Carnian section, namely that it had a proximal position along a carbonate platform edge that was influenced by a nearby shallow-water regime. The Kasimlarceltites abundance zone is a marker zone in the study area, developed during the drowning of a shallow-water platform, which can be traced over long distances.

  13. Taphonomic implications from Upper Triassic mass flow deposits: 2-dimensional reconstructions of an ammonoid mass occurrence (Carnian, Taurus Mountains, Turkey

    Mayrhofer Susanne

    2014-10-01

    Full Text Available Ammonoid mass occurrences of Late Triassic age were investigated in sections from Aşağıyaylabel and Yukarıyaylabel, which are located in the Taurus Platform-Units of eastern Turkey. The cephalopod beds are almost monospecific, with > 99.9 % of individuals from the ceratitic genus Kasimlarceltites, which comprises hundreds of millions of ammonoid specimens. The ontogenetic composition of the event fauna varies from bed to bed, suggesting that these redeposited shell-rich sediments had different source areas. The geographical extent of the mass occurrence can be traced over large areas of up to 10 km². Each of the Early Carnian (Julian 2) ammonoid mass occurrences signifies a single storm (e.g. storm-wave action) or tectonic event (e.g. earthquake) that caused gravity flows and turbidity currents. Three types of ammonoid accumulation deposits are distinguished by their genesis: 1) matrix-supported floatstones, produced by low-density debris flows, 2) mixed floatstones and packstones, formed by high-density debris flows, and 3) densely packed, ammonoid-shell-supported packstones, which result from turbidity currents. Two-dimensional calculations on the mass occurrences, based on sectioning, reveal aligned ammonoid shells, implying transport in a diluted sediment. The ammonoid shells are predominantly redeposited, preserved as mixed autochthonous/parautochthonous/allochthonous communities based on biogenic and sedimentological concentration mechanisms (i.e. deposited in situ or post-mortem). This taphonomic evaluation of the Kasimlarceltites beds thus reveals new insights into the environment of deposition of the Carnian section, namely that it had a proximal position along a carbonate platform edge that was influenced by a nearby shallow-water regime. The Kasimlarceltites abundance zone is a marker zone in the study area, developed during the drowning of a shallow-water platform, which can be traced over long distances.

  14. INVAP's Nuclear Calculation System

    Ignacio Mochi

    2011-01-01

    Full Text Available Since its origins in 1976, INVAP has continuously developed the calculation system used for the design and optimization of nuclear reactors. The calculation codes have been polished and enhanced with new capabilities as they were needed or useful for the new challenges that the market imposed. The current state of the code packages enables INVAP to design nuclear installations with complex geometries using a set of easy-to-use input files that minimize user errors due to confusion or misinterpretation. A set of intuitive graphic postprocessors has also been developed, providing a fast and complete visualization tool for the parameters obtained in the calculations. The capabilities and general characteristics of this deterministic software package are presented throughout the paper, including several examples of its recent application.

  15. Calculating Quenching Weights

    Salgado, Carlos A.; Wiedemann, Urs Achim

    2003-01-01

    We calculate the probability ("quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus colli...

  16. OFTIFEL PERSONALIZED NUTRITIONAL CALCULATOR

    Malte BETHKE

    2016-11-01

    Full Text Available A food calculator for elderly people was elaborated by Centiv GmbH, an active partner in the European FP7 OPTIFEL project, based on the functional requirement specifications and the existing recommendations for daily allowances across Europe, data which were synthesized and used to set targets in amounts per portion. The OPTIFEL Personalised Nutritional Calculator is the only available online tool which allows the required nutrients for elderly people (65+) to be determined on a personalised level. It has been developed mainly to support nursing homes in providing the best possible (personalised) nutrient-enriched food to their patients. The European FP7 OPTIFEL project "Optimised Food Products for Elderly Populations" aims to develop innovative products based on vegetables and fruits for elderly populations to increase the length of independence. The OPTIFEL Personalised Nutritional Calculator is recommended to be used by nursing homes.

  17. Fast Near-Field Calculation for Volume Integral Equations for Layered Media

    Kim, Oleksiy S.; Meincke, Peter; Breinbjerg, Olav

    2005-01-01

    An efficient technique based on the Fast Fourier Transform (FFT) for calculating near-field scattering by dielectric objects in layered media is presented. A higher order method of moments technique is employed to solve the volume integral equation for the unknown induced volume current density. Afterwards, the scattered electric field can be easily computed at a regular rectangular grid on any horizontal plane using a 2-dimensional FFT. This approach provides significant speedup in the near-field calculation in comparison to a straightforward numerical evaluation of the radiation integral since...
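
    The speedup comes from the fact that, on a regular grid, the radiation integral becomes a 2-D convolution of the induced currents with a translation-invariant kernel, which can be evaluated with FFTs. The toy sketch below illustrates this with a placeholder 1/r kernel; the actual layered-media Green's function of the paper is far more involved.

```python
# Toy illustration of FFT-accelerated field evaluation on a regular grid: the
# "field" is a 2-D convolution of source samples with a translation-invariant
# kernel, computed with an FFT-based convolution and checked against the direct
# double sum. The 1/r kernel is a placeholder, not a layered-media Green's function.
import numpy as np
from scipy.signal import fftconvolve

n = 32
rng = np.random.default_rng(0)
source = rng.normal(size=(n, n))                      # induced current samples

# Kernel sampled at all relative offsets -(n-1)..(n-1); index n-1 is offset 0.
dy, dx = np.meshgrid(np.arange(-n + 1, n), np.arange(-n + 1, n), indexing="ij")
r = np.hypot(dx, dy)
kernel = np.where(r > 0, 1.0 / r, 0.0)                # placeholder 1/r interaction

# FFT route: roughly O(N^2 log N) instead of O(N^4) for an N x N observation grid.
field_fft = fftconvolve(kernel, source, mode="valid")

# Direct double sum for comparison.
field_direct = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        shifted = kernel[n - 1 + i - np.arange(n)[:, None],
                         n - 1 + j - np.arange(n)[None, :]]
        field_direct[i, j] = np.sum(source * shifted)

print(np.max(np.abs(field_fft - field_direct)))       # agreement to ~1e-12
```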

  18. Spin Resonance Strength Calculations

    Courant, E. D.

    2009-08-01

    In calculating the strengths of depolarizing resonances it may be convenient to reformulate the equations of spin motion in a coordinate system based on the actual trajectory of the particle, as introduced by Kondratenko, rather than the conventional one based on a reference orbit. It is shown that resonance strengths calculated by the conventional and the revised formalisms are identical. Resonances induced by radiofrequency dipoles or solenoids are also treated; with rf dipoles it is essential to consider not only the direct effect of the dipole but also the contribution from oscillations induced by it.

  19. Spin resonance strength calculations

    Courant,E.D.

    2008-10-06

    In calculating the strengths of depolarizing resonances it may be convenient to reformulate the equations of spin motion in a coordinate system based on the actual trajectory of the particle, as introduced by Kondratenko, rather than the conventional one based on a reference orbit. It is shown that resonance strengths calculated by the conventional and the revised formalisms are identical. Resonances induced by radiofrequency dipoles or solenoids are also treated; with rf dipoles it is essential to consider not only the direct effect of the dipole but also the contribution from oscillations induced by it.

  20. Curvature calculations with GEOCALC

    Moussiaux, A.; Tombal, P.

    1987-04-01

    A new method for calculating the curvature tensor has recently been proposed by D. Hestenes. This method is a particular application of geometric calculus, which has been implemented in an algebraic programming language in the form of a package called GEOCALC. The authors show how to apply this package to the Schwarzschild case and discuss the different results.

  1. Haida Numbers and Calculation.

    Cogo, Robert

    Experienced traders in furs, blankets, and other goods, the Haidas of the 1700's had a well-developed decimal system for counting and calculating. Their units of linear measure included the foot, yard, and fathom, or six feet. This booklet lists the numbers from 1 to 20 in English and Haida; explains the Haida use of ten, hundred, and thousand…

  2. Daylight calculations in practice

    Iversen, Anne; Roy, Nicolas; Hvass, Mette;

    Different daylight simulation programs can give different results. This can be due to restrictions in the program itself and/or to the skills of the persons setting up the models. This is crucial, as daylight calculations are used to document that the demands and recommendations for daylight levels outlined by building authorities...

  3. Dynamics Calculation of Spoke

    2011-01-01

    Compared with the elliptical cavity, the spoke cavity has many advantages, especially at low and medium beam energies, and it will be widely used in the superconducting accelerators of the future. Based on the spoke cavity, we design and calculate an accelerator

  4. Radioprotection calculations for MEGAPIE.

    Zanini, L

    2005-01-01

    The MEGAwatt PIlot Experiment (MEGAPIE) liquid lead-bismuth spallation neutron source will commence operation in 2006 at the SINQ facility of the Paul Scherrer Institut. Such an innovative system presents radioprotection concerns peculiar to a liquid spallation target. Several radioprotection issues have been addressed and studied by means of the Monte Carlo transport code FLUKA. The dose rates from the activated lead-bismuth and from the volatile species produced were calculated for the room above the target, where personnel access may be needed at times. Results indicate that the dose rate level is of the order of 40 mSv h⁻¹ 2 h after shutdown, but it can be reduced below the mSv h⁻¹ level with slight modifications to the shielding. Neutron spectra and dose rates from neutron transport, of interest for possible damage to radiation-sensitive components, have also been calculated.

  5. PIC: Protein Interactions Calculator.

    Tina, K G; Bhadra, R; Srinivasan, N

    2007-07-01

    Interactions within a protein structure and interactions between proteins in an assembly are essential considerations in understanding the molecular basis of the stability and functions of proteins and their complexes. There are several weak and strong interactions that render stability to a protein structure or an assembly. Protein Interactions Calculator (PIC) is a server which, given the coordinate set of the 3D structure of a protein or an assembly, computes various interactions such as disulphide bonds, interactions between hydrophobic residues, ionic interactions, hydrogen bonds, aromatic-aromatic interactions, aromatic-sulphur interactions and cation-pi interactions within a protein or between proteins in a complex. Interactions are calculated on the basis of standard, published criteria. The identified interactions between residues can be visualized using a RasMol and Jmol interface. The advantage of the PIC server is the easy availability of inter-residue interaction calculations in a single site. It also determines the accessible surface area and residue depth, which is the distance of a residue from the surface of the protein. Users can also identify specific kinds of interactions, such as apolar-apolar residue interactions or ionic interactions, that are formed between buried or exposed residues, or near the surface or deep inside the protein.

  6. On the uniqueness of the (2,2)-dimensional supertorus associated to a nontrivial representation of its underlying 2-torus, and having nontrivial odd brackets

    R. Peniche

    2004-01-01

    Full Text Available It is proved that up to isomorphism there is only one (2,2)-dimensional supertorus associated to a nontrivial representation of its underlying 2-torus, and that it has nontrivial odd brackets. This supertorus is obtained by first finding a canonical form for its Lie superalgebra, and then using Lie's technique to represent it faithfully as supervector fields on a supermanifold. Those supervector fields can be integrated, and through their various integral flows the composition law for the supergroup is straightforwardly deduced. It turns out that this supertorus is precisely the supergroup described by Guhr (1993) following a formal analogy with the classical unitary group U(2), but with no further intrinsic characterization.

  7. Research of 2-dimensional standing waves in a square plate

    方奕忠; 王钢; 沈韩; 崔新图; 廖德驹; 冯饶慧

    2014-01-01

    2-dimensional standing waves in square thin plates (Chladni figures) were studied both experimentally and theoretically for several kinds of vibration sources. The numbers of nodal lines n+1, m+1 and the wave vector k of the standing-wave figures were obtained at different frequencies, and the wave (phase) velocity u was deduced. The experimental results agree well with the exact analytic solutions of the theory.

  8. QSL Squasher: A Fast Quasi-Separatrix Layer Map Calculator

    Tassev, Svetlin

    2016-01-01

    Quasi-Separatrix Layers (QSLs) are a useful proxy for the locations where current sheets can develop in the solar corona, and give valuable information about the connectivity in complicated magnetic field configurations. However, calculating QSL maps even for 2-dimensional slices through 3-dimensional models of coronal magnetic fields is a non-trivial task as it usually involves tracing out millions of magnetic field lines with immense precision. Thus, extending QSL calculations to three dimensions has rarely been done until now. In order to address this challenge, we present QSL Squasher -- a public, open-source code, which is optimized for calculating QSL maps in both two and three dimensions on GPUs. The code achieves large processing speeds for three reasons, each of which results in an order-of-magnitude speed-up. 1) The code is parallelized using OpenCL. 2) The precision requirements for the QSL calculation are drastically reduced by using perturbation theory. 3) A new boundary detection criterion betwe...

  9. Calculations in furnace technology

    Davies, Clive; Hopkins, DW; Owen, WS

    2013-01-01

    Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi

  10. Acute calculous cholecystitis

    Angarita, Fernando A.; University Health Network; Acuña, Sergio A.; Mount Sinai Hospital; Jimenez, Carolina; University of Toronto; Garay, Javier; Pontificia Universidad Javeriana; Gömez, David; University of Toronto; Domínguez, Luis Carlos; Pontificia Universidad Javeriana

    2010-01-01

    Acute calculous cholecystitis is the most important cause of cholecystectomies worldwide. We review the physiopathology of the inflammatory process in this organ secondary to biliary tract obstruction, as well as its clinical manifestations, workup, and the treatment it requires.

  11. Zero Temperature Hope Calculations

    Rozsnyai, B F

    2002-07-26

    The primary purpose of the HOPE code is to calculate opacities over a wide temperature and density range. It can also produce equation of state (EOS) data. Since experimental data in the high-temperature region are scarce, comparisons of predictions with the ample zero-temperature data provide a valuable physics check of the code. In this report we show a selected few examples across the periodic table. Below we give brief general information about the physics of the HOPE code. The HOPE code is an "average atom" (AA) Dirac-Slater self-consistent code. The AA label in the case of finite temperature means that the one-electron levels are populated according to Fermi statistics; at zero temperature it means that the "aufbau" principle works, i.e. no a priori electronic configuration is set, although it can be done. As such, it is a one-particle model (any Hartree-Fock model is a one-particle model). The code is an "ion-sphere" model, meaning that the atom under investigation is neutral within the ion-sphere radius. Furthermore, the boundary conditions for the bound states are also set at the ion-sphere radius, which distinguishes the code from the INFERNO, OPAL and STA codes. Once the self-consistent AA state is obtained, the code proceeds to generate many-electron configurations and to calculate photoabsorption in the "detailed configuration accounting" (DCA) scheme. However, this last feature is meaningless at zero temperature. There is one important feature of the HOPE code which should be noted: any self-consistent model is self-consistent in the space of the occupied orbitals. The unoccupied orbitals, where electrons are lifted via photoexcitation, are unphysical. The rigorous way to deal with that problem is to carry out complete self-consistent calculations in both the initial and final states connected by the photoexcitation, an enormous computational task.

  12. Linewidth calculations and simulations

    Strandberg, Ingrid

    2016-01-01

    We are currently developing a new technique to further enhance the sensitivity of collinear laser spectroscopy in order to study the most exotic nuclides available at radioactive ion beam facilities, such as ISOLDE at CERN. The overall goal is to evaluate the feasibility of the new method. This report will focus on the determination of the expected linewidth (hence resolution) of this approach. Different effects which could lead to a broadening of the linewidth, e.g. the ions' energy spread and their trajectories inside the trap, are studied with theoretical calculations as well as simulations.

  13. Matlab numerical calculations

    Lopez, Cesar

    2015-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. This book is designed for use as a scientific/business calculator so that you can get numerical solutions to problems involving a wide array of mathematics using MATLAB. Just look up the function y

  14. Multilayer optical calculations

    Byrnes, Steven J

    2016-01-01

    When light hits a multilayer planar stack, it is reflected, refracted, and absorbed in a way that can be derived from the Fresnel equations. The analysis is treated in many textbooks, and implemented in many software programs, but certain aspects of it are difficult to find explicitly and consistently worked out in the literature. Here, we derive the formulas underlying the transfer-matrix method of calculating the optical properties of these stacks, including oblique-angle incidence, absorption-vs-position profiles, and ellipsometry parameters. We discuss and explain some strange consequences of the formulas in the situation where the incident and/or final (semi-infinite) medium are absorptive, such as calculating $T>1$ in the absence of gain. We also discuss some implementation details like complex-plane branch cuts. Finally, we derive modified formulas for including one or more "incoherent" layers, i.e. very thick layers in which interference can be neglected. This document was written in conjunction with ...
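
    As a concrete illustration of the transfer-matrix bookkeeping described above, here is a normal-incidence-only sketch (not the author's full package: no oblique incidence, absorption profiles, ellipsometry parameters, or incoherent layers).

```python
# Normal-incidence transfer-matrix sketch for a planar stack: build interface and
# propagation matrices, multiply them, and read off r and t.
import numpy as np

def stack_rt(n_list, d_list, wavelength):
    """n_list: refractive indices of [incidence medium, layers..., exit medium].
    d_list: thicknesses of the interior layers only (same units as wavelength)."""
    def interface(n1, n2):
        r, t = (n1 - n2) / (n1 + n2), 2 * n1 / (n1 + n2)   # Fresnel, normal incidence
        return np.array([[1, r], [r, 1]], dtype=complex) / t

    m = interface(n_list[0], n_list[1])
    for j, d in enumerate(d_list, start=1):
        delta = 2 * np.pi * n_list[j] * d / wavelength      # phase thickness of layer j
        prop = np.array([[np.exp(-1j * delta), 0], [0, np.exp(1j * delta)]])
        m = m @ prop @ interface(n_list[j], n_list[j + 1])
    r, t = m[1, 0] / m[0, 0], 1 / m[0, 0]
    R = abs(r) ** 2
    T = abs(t) ** 2 * np.real(n_list[-1]) / np.real(n_list[0])
    return R, T

# Quarter-wave coating (n = 1.38) on glass at 550 nm, illustrative indices:
print(stack_rt([1.0, 1.38, 1.52], [550 / (4 * 1.38)], 550))  # R ~ 1.3%, well below bare glass ~4.3%
```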

  15. Calculating Speed of Sound

    Bhatnagar, Shalabh

    2017-01-01

    Sound is an emerging source of renewable energy, but it has some limitations. The main limitation is that the amount of energy that can be extracted from sound is very small, and that is because of the velocity of sound. The velocity of sound changes with the medium. If we could increase the velocity of sound in a medium, we would probably be able to extract more energy from sound and transfer it at a higher rate. To increase the velocity of sound we should know the speed of sound. According to classical mechanics, speed is the distance travelled by a particle divided by time, whereas velocity is the displacement of the particle divided by time. The speed of sound in dry air at 20 °C (68 °F) is considered to be 343.2 meters per second, and it would not be wrong to say that 343.2 meters per second is the velocity of sound rather than the speed, as it refers to the displacement of the sound, not the total distance the sound wave covered. Sound travels in the form of a mechanical wave, so when calculating the speed of sound the whole path of the wave should be considered, not just the distance traveled by the sound. In this paper I focus on calculating the actual speed of the sound wave, which can help us extract more energy and make sound travel with a faster velocity.

  16. Molecular Dynamics Calculations

    1996-01-01

    The development of thermodynamics and statistical mechanics is very important in the history of physics, and it underlines the difficulty in dealing with systems involving many bodies, even if those bodies are identical. Macroscopic systems of atoms typically contain so many particles that it would be virtually impossible to follow the behavior of all of the particles involved. Therefore, the behavior of a complete system can only be described or predicted in statistical ways. Under a grant to the NASA Lewis Research Center, scientists at the Case Western Reserve University have been examining the use of modern computing techniques that may be able to investigate and find the behavior of complete systems that have a large number of particles by tracking each particle individually. This is the study of molecular dynamics. In contrast to Monte Carlo techniques, which incorporate uncertainty from the outset, molecular dynamics calculations are fully deterministic. Although it is still impossible to track, even on high-speed computers, each particle in a system of a trillion trillion particles, it has been found that such systems can be well simulated by calculating the trajectories of a few thousand particles. Modern computers and efficient computing strategies have been used to calculate the behavior of a few physical systems and are now being employed to study important problems such as supersonic flows in the laboratory and in space. In particular, an animated video (available in mpeg format--4.4 MB) was produced by Dr. M.J. Woo, now a National Research Council fellow at Lewis, and the G-VIS laboratory at Lewis. This video shows the behavior of supersonic shocks produced by pistons in enclosed cylinders by following exactly the behavior of thousands of particles. The major assumptions made were that the particles involved were hard spheres and that all collisions with the walls and with other particles were fully elastic. The animated video was voted one of two
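    A minimal Python sketch of the hard-sphere, fully elastic collision rule mentioned in the abstract (equal masses assumed); it is illustrative only and is not the code used in the NASA Lewis/Case Western simulations.

        import numpy as np

        def hard_sphere_collision(r1, r2, v1, v2):
            """Elastic collision between two equal-mass hard spheres in contact:
            the velocity component along the line of centres is exchanged,
            while the tangential components are unchanged."""
            n = (r2 - r1) / np.linalg.norm(r2 - r1)   # unit vector along line of centres
            dv_n = np.dot(v1 - v2, n)                 # relative speed along that line
            return v1 - dv_n * n, v2 + dv_n * n

        # head-on collision: the two spheres simply swap velocities
        print(hard_sphere_collision(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                    np.array([1.0, 0.0, 0.0]), np.zeros(3)))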

  17. Impact cratering calculations

    Ahrens, Thomas J.; Okeefe, J. D.; Smither, C.; Takata, T.

    1991-01-01

    In the course of carrying out finite difference calculations, it was discovered that for large craters a previously unrecognized type of crater (diameter) growth occurred, which was called lip wave propagation. This type of growth is illustrated for an impact of a 1000 km (2a) silicate bolide at 12 km/sec (U) onto a silicate half-space at earth gravity (1 g). The von Mises crustal strength is 2.4 kbar. The motion at the crater lip associated with this wave-type phenomenon is up, outward, and then down, similar to the particle motion of a surface wave. It is shown that the crater diameter grows from d/a of approximately 2.5 to d/a of approximately 4 via lip propagation from Ut/a = 5.56 to 17.0 during the time when rebound occurs. A new code is being used to study partitioning of energy and momentum and cratering efficiency with self-gravity for finite-sized objects rather than the previously discussed planetary half-space problems. These are important and fundamental subjects which can be addressed with smoothed particle hydrodynamic (SPH) codes. The SPH method was used to model various problems in astrophysics and planetary physics. The initial work demonstrates that the energy budget for normal and oblique impacts is distinctly different from earlier calculations for silicate projectile impact on a silicate half-space. Motivated by the first striking radar images of Venus obtained by Magellan, the effect of the atmosphere on impact cratering was studied. In order to further quantify the processes of meteor break-up and trajectory scattering upon break-up, the reentry physics of meteors striking Venus' atmosphere versus that of the Earth was studied.

  18. MBPT calculations with ABINIT

    Giantomassi, Matteo; Huhs, Georg; Waroquiers, David; Gonze, Xavier

    2014-03-01

    Many-Body Perturbation Theory (MBPT) defines a rigorous framework for the description of excited-state properties based on the Green's function formalism. Within MBPT, one can calculate charged excitations using e.g. Hedin's GW approximation for the electron self-energy. In the same framework, neutral excitations are also well described through the solution of the Bethe-Salpeter equation (BSE). In this talk, we report on the recent developments concerning the parallelization of the MBPT algorithms available in the ABINIT code (www.abinit.org). In particular, we discuss how to improve the parallel efficiency thanks to a hybrid version that employs MPI for the coarse-grained parallelization and OpenMP (a de facto standard for parallel programming on shared memory architectures) for the fine-grained parallelization of the most CPU-intensive parts. Benchmark results obtained with the new implementation are discussed. Finally, we present results for the GW corrections of amorphous SiO2 in the presence of defects and the BSE absorption spectrum. This work has been supported by the Prace project (PaRtnership for Advanced Computing in Europe, http://www.prace-ri.eu).

  19. Reconstruction 3-dimensional image from 2-dimensional image of status optical coherence tomography (OCT) for analysis of changes in retinal thickness

    Arinilhaq,; Widita, Rena [Department of Physics, Nuclear Physics and Biophysics Research Group, Institut Teknologi Bandung (Indonesia)

    2014-09-30

    Optical coherence tomography (OCT) is often used in medical image acquisition to diagnose retinal changes because it is easy to use and inexpensive. Unfortunately, this type of examination produces a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into a three-dimensional image to display the macular volume accurately. The system is built with three main stages: data acquisition, data extraction, and 3-dimensional reconstruction. In the data acquisition step, optical coherence tomography produced six *.jpg images for each patient, which were then extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method in SURFER 9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
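    A rough Python sketch of the reconstruction idea, interpolating scattered thickness samples onto a 481 x 481 grid; note that the study used kriging in SURFER 9, whereas this stand-in uses simple linear interpolation, and the sample positions and thickness values below are synthetic.

        import numpy as np
        from scipy.interpolate import griddata

        # Illustrative stand-in for the kriging step: scattered thickness samples
        # (here faked for 6 B-scans) are interpolated onto a regular 481 x 481 grid.
        rng = np.random.default_rng(1)
        pts = rng.uniform(0.0, 1.0, size=(6 * 481, 2))        # hypothetical (x, y) sample positions
        thickness = 250.0 + 40.0 * np.sin(3.0 * pts[:, 0])    # synthetic retinal thickness values

        gx, gy = np.mgrid[0:1:481j, 0:1:481j]                 # 481 x 481 target grid, as in the paper
        surface = griddata(pts, thickness, (gx, gy), method='linear')
        print(surface.shape)                                   # (481, 481) thickness map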

  20. Origin of fine oscillations in the photoluminescence spectrum of 2-dimensional electron gas formed in AlGaN/GaN high electron mobility transistor structures

    Jana, Dipankar, E-mail: dip2602@gmail.com; Porwal, S.; Oak, S. M.; Sharma, T. K., E-mail: tarun@rrcat.gov.in [Semiconductor Physics and Devices Laboratory, Raja Ramanna Centre for Advanced Technology, Indore 452013, Madhya Pradesh (India); Jain, Anubha [Solid State Physics Laboratory, Lucknow Road, New Delhi 110054 (India)

    2015-10-28

    An unambiguous identification of the fine oscillations observed in the low temperature photoluminescence (PL) spectra of AlGaN/GaN based high electron mobility transistor (HEMT) structures is carried out. In literature, such oscillations have been erroneously identified as the sub-levels of 2-dimensional electron gas (2DEG) formed at AlGaN/GaN heterointerface. Here, the origin of these oscillations is probed by performing the angle dependent PL and reflectivity measurements under identical conditions. Contrary to the reports available in literature, we find that the fine oscillations are not related to 2DEG sub-levels. The optical characteristics of these oscillations are mainly governed by an interference phenomenon. In particular, peculiar temperature dependent redshift and excitation intensity dependent blueshift, which have been interpreted as the characteristics of 2DEG sub-levels in HEMT structures by other researchers, are understood by invoking the wavelength and temperature dependence of the refractive index of GaN within the framework of interference phenomenon. The results of other researchers are also consistently explained by considering the fine oscillatory features as the interference oscillations.

  1. The rating reliability calculator

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation, and number of ratings for each subject rated. Additionally, the program estimates the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
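    Ebel's algorithm for incomplete rating designs is not reproduced here, but the Spearman-Brown prophecy step mentioned in the abstract is standard and simple enough to sketch in Python; the 0.60 single-rating reliability below is an invented example value.

        def spearman_brown(single_rating_reliability, k):
            """Predicted reliability of the mean of k ratings
            (Spearman-Brown prophecy formula): k*r / (1 + (k - 1)*r)."""
            r = single_rating_reliability
            return k * r / (1.0 + (k - 1.0) * r)

        print(round(spearman_brown(0.60, 3), 2))   # ~0.82 when averaging three judges per subject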

  2. Cosmological Calculations on the GPU

    Bard, Deborah; Allen, Mark T; Yepremyan, Hasmik; Kratochvil, Jan M

    2012-01-01

    Cosmological measurements require the calculation of nontrivial quantities over large datasets. The next generation of survey telescopes (such as DES, PanSTARRS, and LSST) will yield measurements of billions of galaxies. The scale of these datasets, and the nature of the calculations involved, make cosmological calculations ideal models for implementation on graphics processing units (GPUs). We consider two cosmological calculations, the two-point angular correlation function and the aperture mass statistic, and aim to improve the calculation time by constructing code for calculating them on the GPU. Using CUDA, we implement the two algorithms on the GPU and compare the calculation speeds to comparable code run on the CPU. We obtain a speed-up of between 10x and 180x compared to performing the same calculation on the CPU. The code has been made publicly available.
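    A minimal CPU-side Python sketch of the O(N^2) pair-counting kernel that underlies the two-point angular correlation function; this is the part that the paper moves to the GPU with CUDA, and the random coordinates and bin edges here are purely illustrative.

        import numpy as np

        def pair_count(ra_deg, dec_deg, bins_deg):
            """Brute-force count of galaxy pairs per angular-separation bin.
            This O(N^2) kernel is what benefits most from GPU acceleration."""
            ra, dec = np.radians(ra_deg), np.radians(dec_deg)
            xyz = np.stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)], axis=1)              # unit vectors on the sky
            cosang = np.clip(xyz @ xyz.T, -1.0, 1.0)
            ang = np.degrees(np.arccos(cosang))
            iu = np.triu_indices(len(ra_deg), k=1)             # unique pairs only
            counts, _ = np.histogram(ang[iu], bins=bins_deg)
            return counts

        rng = np.random.default_rng(0)
        print(pair_count(rng.uniform(0, 10, 500), rng.uniform(0, 10, 500),
                         bins_deg=np.linspace(0.0, 5.0, 6)))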

  3. New Arsenic Cross Section Calculations

    Kawano, Toshihiko [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-04

    This report presents calculations for the new arsenic cross section. Cross sections for 73,74,75 As above the resonance range were calculated with a newly developed Hauser-Feshbach code, CoH3.

  4. Global nuclear-structure calculations

    Moeller, P.; Nix, J.R.

    1990-04-20

    The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980's was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom in addition to the ε2 and ε4 used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps, and β-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential.

  5. Equilibrium calculations of firework mixtures

    Hobbs, M.L. [Sandia National Labs., Albuquerque, NM (United States); Tanaka, Katsumi; Iida, Mitsuaki; Matsunaga, Takehiro [National Inst. of Materials and Chemical Research, Tsukuba, Ibaraki (Japan)

    1994-12-31

    Thermochemical equilibrium calculations have been used to calculate detonation conditions for typical firework components including three report charges, two display charges, and black powder which is used as a fuse or launch charge. Calculations were performed with a modified version of the TIGER code which allows calculations with 900 gaseous and 600 condensed product species at high pressure. The detonation calculations presented in this paper are thought to be the first report on the theoretical study of firework detonation. Measured velocities for two report charges are available and compare favorably to predicted detonation velocities. However, the measured velocities may not be true detonation velocities. Fast deflagration rather than an ideal detonation occurs when reactants contain significant amounts of slow reacting constituents such as aluminum or titanium. Despite such uncertainties in reacting pyrotechnics, the detonation calculations do show the complex nature of condensed phase formation at elevated pressures and give an upper bound for measured velocities.

  6. Neural correlates underlying mental calculation in abacus experts: a functional magnetic resonance imaging study.

    Hanakawa, Takashi; Honda, Manabu; Okada, Tomohisa; Fukuyama, Hidenao; Shibasaki, Hiroshi

    2003-06-01

    Experts of abacus operation demonstrate extraordinary ability in mental calculation. There is psychological evidence that abacus experts utilize a mental image of an abacus to remember and manipulate large numbers in solving problems; however, the neural correlates underlying this expertise are unknown. Using functional magnetic resonance imaging, we compared the neural correlates associated with three mental-operation tasks (numeral, spatial, verbal) among six experts in abacus operations and eight nonexperts. In general, there was more involvement of neural correlates for visuospatial processing (e.g., right premotor and parietal areas) for abacus experts during the numeral mental-operation task. Activity of these areas and the fusiform cortex was correlated with the size of numerals used in the numeral mental-operation task. Particularly, the posterior superior parietal cortex revealed significantly enhanced activity for experts compared with controls during the numeral mental-operation task. Comparison with the other mental-operation tasks indicated that activity in the posterior superior parietal cortex was relatively specific to computation in 2-dimensional space. In conclusion, mental calculation of abacus experts is likely associated with enhanced involvement of the neural resources for visuospatial information processing in 2-dimensional space.

  7. CALCULATION OF LASER CUTTING COSTS

    Bogdan Nedic

    2016-09-01

    Full Text Available The paper presents a description of metal cutting methods and the calculation of treatment costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of cost-calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution makes it possible to calculate the cost of laser cutting, to compare it with the costs of other unconventional methods, and to provide documentation consisting of reports on the estimated costs.

  8. Calculator. Owning a Small Business.

    Parma City School District, OH.

    Seven activities are presented in this student workbook designed for an exploration of small business ownership and the use of the calculator in this career. Included are simulated situations in which students must use a calculator to compute property taxes; estimate payroll taxes and franchise taxes; compute pricing, approximate salaries,…

  9. Calculation of Spectra of Solids:

    Lindgård, Per-Anker

    1975-01-01

    The Gilat-Raubenheimer method simplified to tetrahedron division is used to calculate the real and imaginary part of the dynamical response function for electrons. A frequency expansion for the real part is discussed. The Lindhard function is calculated as a test for numerical accuracy. The condu...

  10. Closure and Sealing Design Calculation

    T. Lahnalampi; J. Case

    2005-08-26

    The purpose of the ''Closure and Sealing Design Calculation'' is to illustrate closure and sealing methods for sealing shafts and ramps, and to identify boreholes that require sealing in order to limit the potential of water infiltration. In addition, this calculation will provide a description of the magma bulkhead that can reduce the consequences of an igneous event intersecting the repository. This calculation will also include a listing of the project requirements related to closure and sealing. The scope of this calculation is to: summarize applicable project requirements and codes relating to backfilling nonemplacement openings, removal of uncommitted materials from the subsurface, installation of drip shields, and erecting monuments; compile an inventory of boreholes that are found in the area of the subsurface repository; describe the magma bulkhead feature and location; and include figures for the proposed shaft and ramp seals. The objective of this calculation is to: categorize the boreholes for sealing by depth and proximity to the subsurface repository; develop drawing figures which show the location and geometry for the magma bulkhead; include the shaft seal figures and a proposed construction sequence; and include the ramp seal figure and a proposed construction sequence. The intent of this closure and sealing calculation is to support the License Application by providing a description of the closure and sealing methods for the Safety Analysis Report. The closure and sealing calculation will also provide input for Post Closure Activities by describing the location of the magma bulkhead. This calculation is limited to describing the final configuration of the sealing and backfill systems for the underground area. The methods and procedures used to place the backfill and remove uncommitted materials (such as concrete) from the repository and detailed design of the magma bulkhead will be the subject of separate analyses or calculations. Post

  11. Practical astronomy with your calculator

    Duffett-Smith, Peter

    1989-01-01

    Practical Astronomy with your Calculator, first published in 1979, has enjoyed immense success. The author's clear and easy to follow routines enable you to solve a variety of practical and recreational problems in astronomy using a scientific calculator. Mathematical complexity is kept firmly in the background, leaving just the elements necessary for swiftly making calculations. The major topics are: time, coordinate systems, the Sun, the planetary system, binary stars, the Moon, and eclipses. In the third edition there are entirely new sections on generalised coordinate transformations, nutr

  12. Transfer Area Mechanical Handling Calculation

    B. Dianda

    2004-06-23

    This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer areas equipment. This calculation provides preliminary information only to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of Section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation will be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use

  13. MFTF-B performance calculations

    Thomassen, K.I.; Jong, R.A.

    1982-12-06

    In this report we document the operating scenario models and calculations as they exist and comment on those aspects of the models where performance is sensitive to the assumptions that are made. We also focus on areas where improvements need to be made in the mathematical descriptions of phenomena, work which is in progress. To illustrate the process of calculating performance, and to be very specific in our documentation, part 2 of this report contains the complete equations and sequence of calculations used to determine parameters for the MARS mode of operation in MFTF-B. Values for all variables for a particular set of input parameters are also given there. The point design so described is typical, but should be viewed as a snapshot in time of our ongoing estimations and predictions of performance.

  14. Insertion device calculations with mathematica

    Carr, R. [Stanford Synchrotron Radiation Lab., CA (United States); Lidia, S. [Univ. of California, Davis, CA (United States)

    1995-02-01

    The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL®, MatLab®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators and general radiation calculations for undulators.
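    Purely as an illustration of the trajectory-solution step (in Python rather than the authors' Mathematica routines), here is a sketch that integrates the Lorentz force for an electron in an idealized planar undulator field; the beam energy, peak field, and period are invented example values, and the field model is far simpler than one built from CSEM blocks.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Electron in an idealized planar undulator field B = (0, B0*cos(ku*z), 0).
        # Magnetic forces do no work, so gamma is constant and dv/dt = (q/(gamma*m)) v x B.
        q, m, c = -1.602176634e-19, 9.1093837015e-31, 2.99792458e8
        gamma = 3000.0                         # ~1.5 GeV beam (illustrative)
        B0, lam_u = 0.5, 0.03                  # 0.5 T peak field, 30 mm period (illustrative)
        ku = 2.0 * np.pi / lam_u

        def rhs(t, s):
            r, v = s[:3], s[3:]
            B = np.array([0.0, B0 * np.cos(ku * r[2]), 0.0])
            a = (q / (gamma * m)) * np.cross(v, B)
            return np.concatenate([v, a])

        v0 = c * np.sqrt(1.0 - 1.0 / gamma**2)
        sol = solve_ivp(rhs, [0.0, 5 * lam_u / v0],            # traverse five periods
                        np.array([0.0, 0.0, 0.0, 0.0, 0.0, v0]),
                        max_step=lam_u / (200 * v0), rtol=1e-9)
        print(sol.y[0].max())                  # peak horizontal excursion, ~K/(gamma*ku)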

  15. The Collective Practice of Calculation

    Schrøder, Ida

    The calculation of costs plays an increasingly large role in the decision-making processes of public sector human service organizations. This has brought scholars of management accounting to investigate the relationship between caring professions and demands to make economic entities of the service ... on the idea that professions are hybrids by introducing the notion of qualculation as an entry point to investigate decision-making in child protection work as an extreme case of calculating on the basis of other elements than quantitative numbers. The analysis reveals that it takes both calculation ... and judgement to reach decisions to invest in social services. The line is not drawn between the two, but between the material arrangements that make decisions possible. This implies that the insisting on qualitatively based decisions gives the professionals agency to collectively engage in practical ...

  16. Friction and wear calculation methods

    Kragelsky, I V; Kombalov, V S

    1981-01-01

    Friction and Wear: Calculation Methods provides an introduction to the main theories of a new branch of mechanics known as "contact interaction of solids in relative motion." This branch is closely bound up with other sciences, especially physics and chemistry. The book analyzes the nature of friction and wear, and some theoretical relationships that link the characteristics of the processes and the properties of the contacting bodies essential for practical application of the theories in calculating friction forces and wear values. The effect of the environment on friction and wear is a

  17. Multifragmentation calculated with relativistic forces

    Feldmeier, H; Papp, G

    1995-01-01

    A saturating Hamiltonian is presented in a relativistically covariant formalism. The interaction is described by scalar and vector mesons, with coupling strengths adjusted to nuclear matter. No explicit density dependence is assumed. The Hamiltonian is applied in a QMD calculation to determine the fragment distribution in O + Br collisions at different energies (50 -- 200 MeV/u) to test the applicability of the model at low energies. The results are compared with experiment and with previous non-relativistic calculations. PACS: 25.70Mn, 25.75.+r

  18. Molecular calculations with B functions

    Steinborn, E O; Ema, I; López, R; Ramírez, G

    1998-01-01

    A program for molecular calculations with B functions is reported and its performance is analyzed. All the one- and two-center integrals, and the three-center nuclear attraction integrals are computed by direct procedures, using previously developed algorithms. The three- and four-center electron repulsion integrals are computed by means of Gaussian expansions of the B functions. A new procedure for obtaining these expansions is also reported. Some results on full molecular calculations are included to show the capabilities of the program and the quality of the B functions to represent the electronic functions in molecules.

  19. Methods for Melting Temperature Calculation

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of Tantalum, high-pressure Sodium, and ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
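    A bare-bones Python sketch of the standard Widom test-particle estimate, mu_ex = -(1/beta) ln<exp(-beta dU)>, on which the improved insertion scheme builds; it uses a single synthetic Lennard-Jones-like configuration and does not include the cavity-sampling improvement or the small-cell coexistence method described in the thesis.

        import numpy as np

        def widom_mu_excess(coords, box, beta, n_insert=5000, eps=1.0, sigma=1.0, rng=None):
            """Plain Widom test-particle estimate of the excess chemical potential:
            average exp(-beta*dU) over random ghost insertions into a configuration
            (Lennard-Jones pair energies, minimum-image convention). A production
            calculation would average over many equilibrated configurations."""
            if rng is None:
                rng = np.random.default_rng(0)
            acc = 0.0
            for _ in range(n_insert):
                trial = rng.uniform(0.0, box, size=3)     # random trial insertion point
                d = coords - trial
                d -= box * np.round(d / box)              # minimum image
                r2 = np.sum(d * d, axis=1)
                inv6 = (sigma * sigma / r2) ** 3
                dU = np.sum(4.0 * eps * (inv6 * inv6 - inv6))
                acc += np.exp(-beta * dU)
            return -np.log(acc / n_insert) / beta

        # toy snapshot: 100 particles placed at random in a box of side 10 sigma
        rng = np.random.default_rng(1)
        snapshot = rng.uniform(0.0, 10.0, size=(100, 3))
        print(widom_mu_excess(snapshot, box=10.0, beta=1.0, rng=rng))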

  20. Ab Initio Calculations of Oxosulfatovanadates

    Frøberg, Torben; Johansen, Helge

    1996-01-01

    Restricted Hartree-Fock and multi-configurational self-consistent-field calculations together with second-order perturbation theory have been used to study the geometry, the electron density, and the electronic spectrum of (VO2SO4)-. A bidentate sulphate attachment to vanadium was found to be stable...

  1. Dead reckoning calculating without instruments

    Doerfler, Ronald W

    1993-01-01

    No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner

  2. ITER Port Interspace Pressure Calculations

    Carbajo, Juan J [ORNL; Van Hove, Walter A [ORNL

    2016-01-01

    The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.

  3. Calculations for cosmic axion detection

    Krauss, L.; Moody, J.; Wilczek, F.; Morris, D. E.

    1985-01-01

    Calculations are presented, using properly normalized couplings and masses for Dine-Fischler-Srednicki axions, of power rates and signal temperatures for axion-photon conversion in microwave cavities. The importance of the galactic-halo axion line shape is emphasized. Spin-coupled detection as an alternative to magnetic-field-coupled detection is mentioned.

  4. Theoretical Calculation of MMF's Bandwidth

    LI Xiao-fu; JIANG De-sheng; YU Hai-hu

    2004-01-01

    The difference between over-filled launch bandwidth (OFL BW) and restricted mode launch bandwidth (RML BW) is described. A theoretical model is developed to calculate the OFL BW of graded-index multimode fiber (GI-MMF), and the result is useful for guiding the modification of the manufacturing method.

  5. Data Acquisition and Flux Calculations

    Rebmann, C.; Kolle, O; Heinesch, B;

    2012-01-01

    In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation....

  6. Changes in protein abundance between tender and tough meat from bovine Longissimus thoracis muscle assessed by isobaric Tag for Relative and Absolute Quantitation (iTRAQ) and 2-dimensional gel electrophoresis analysis

    Bjarnadóttir, S G; Hollung, K; Høy, M

    2012-01-01

    The aim of this study was to find potential biomarkers for meat tenderness in bovine Longissimus thoracis muscle and to compare results from isobaric Tag for Relative and Absolute Quantitation (iTRAQ) and 2-dimensional gel electrophoresis (2-DE) analysis. The experiment included 4 tender and 4 ... 2-DE analysis (P < ...) ... flux through the tricarboxylate cycle [2-oxoglutarate dehydrogenase complex component E2 (OGDC-E2)], apoptosis (galectin-1) and regulatory role in the release of Ca2+ from intracellular stores (annexin A6). Even though the overlap in significantly changing proteins was relatively low between iTRAQ and 2-DE analysis, certain proteins predicted to have ...

  7. CONTRIBUTION FOR MINING ATMOSPHERE CALCULATION

    Franica Trojanović

    1989-12-01

    Full Text Available Humid air is an unavoidable feature of the mining atmosphere, and it plays a significant role in defining the climate conditions as well as the permissible circumstances for normal mining work. Saturated humid air prevents heat removal from the human body by means of evaporation. Consequently, it is of primary interest in mining practice to establish the relative air humidity by either direct or indirect methods. The percentage of water in the surrounding air may be determined by various procedures, including tables, diagrams, or particular calculations, where each technique has its specific advantages and disadvantages. The classical calculation is done according to Sprung's formula, in which case the partial steam pressure should also be taken from the steam table. A new method without the use of diagrams or tables, based on the functional relation of pressure and temperature on the saturation line, is presented here for the first time (the paper is published in Croatian).
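    A short Python sketch of a Sprung-type psychrometric calculation, pairing the wet-bulb formula with a Magnus-type saturation-pressure approximation so that no steam table is needed; the constants (A = 6.62e-4 per degC, Magnus coefficients) are common textbook values and are not taken from the paper.

        import math

        def saturation_vapour_pressure_hpa(t_c):
            """Magnus-type approximation for saturation vapour pressure over water, in hPa."""
            return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

        def relative_humidity(t_dry, t_wet, p_hpa=1013.25, A=6.62e-4):
            """Psychrometric (Sprung-type) formula: actual vapour pressure
            e = E(t_wet) - A * p * (t_dry - t_wet), then RH = e / E(t_dry)."""
            e = saturation_vapour_pressure_hpa(t_wet) - A * p_hpa * (t_dry - t_wet)
            return 100.0 * e / saturation_vapour_pressure_hpa(t_dry)

        print(round(relative_humidity(25.0, 20.0), 1))   # ~63 % for a 5 degC wet-bulb depression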

  8. Archimedes' calculations of square roots

    Davies, E B

    2011-01-01

    We reconsider Archimedes' evaluations of several square roots in 'Measurement of a Circle'. We show that several methods proposed over the last century or so for his evaluations fail one or more criteria of plausibility. We also provide internal evidence that he probably used an interpolation technique. The conclusions are relevant to the precise calculations by which he obtained upper and lower bounds on pi.
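    The paper argues that Archimedes probably used an interpolation technique; purely as an illustration of how rational upper and lower bounds of this kind can be generated, here is a Heron/Babylonian iteration in Python with exact fractions (this is not a claim about Archimedes' actual method).

        from fractions import Fraction

        def heron_bounds(n, start, steps=3):
            """Heron/Babylonian iteration with exact rationals: each iterate x
            over-estimates sqrt(n) while n/x under-estimates it, giving rational
            upper and lower bounds of the kind Archimedes quotes
            (e.g. 1351/780 > sqrt(3) > 265/153)."""
            x = Fraction(start)
            for _ in range(steps):
                x = (x + Fraction(n) / x) / 2      # upper bound tightens each step
            return Fraction(n) / x, x               # (lower bound, upper bound)

        lo, hi = heron_bounds(3, 2)
        print(lo, "< sqrt(3) <", hi, float(lo), float(hi))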

  9. Parallel plasma fluid turbulence calculations

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-12-31

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

  10. Anion release and uptake kinetics: structural changes of layered 2-dimensional ZnNiHN upon uptake of acetate and chlorinated acetate anions.

    Machingauta, Cleopas; Hossenlopp, Jeanne M

    2013-12-01

    X-ray diffraction and UV-vis spectroscopy were used to investigate the ion exchange reaction kinetics of nitrates with acetate (Ac), chloroacetate (ClAc), dichloroacetate (dClAc), and trichloroacetate (tClAc) anions, using zinc nickel hydroxy nitrate (ZnNiHN) as the exchange precursor. The exchange reactions conducted at 24, 30, 40, and 50°C revealed that rate constants were inversely related to the calculated anion electronic spatial extent (ESE), while a direct relationship between rate constants and the average oxygen charges was observed. Temporal solid-phase structural transformations were shown to be affected by the nature of the guest anions. The amount of nitrates released into solution was shown to decrease as the guest anions became more chlorinated. Use of an isoconversional approach revealed that activation energies changed more significantly with α during dClAc intercalation than for the other anions. The topotactic intercalation of the guest anions, except dClAc, followed the Avrami-Erofe'ev kinetic model for the entire reaction progress.

  11. Quantitative analysis of aortic regurgitation: real-time 3-dimensional and 2-dimensional color Doppler echocardiographic method--a clinical and a chronic animal study

    Shiota, Takahiro; Jones, Michael; Tsujino, Hiroyuki; Qin, Jian Xin; Zetts, Arthur D.; Greenberg, Neil L.; Cardon, Lisa A.; Panza, Julio A.; Thomas, James D.

    2002-01-01

    BACKGROUND: For evaluating patients with aortic regurgitation (AR), regurgitant volumes, left ventricular (LV) stroke volumes (SV), and absolute LV volumes are valuable indices. AIM: The aim of this study was to validate the combination of real-time 3-dimensional echocardiography (3DE) and semiautomated digital color Doppler cardiac flow measurement (ACM) for quantifying absolute LV volumes, LVSV, and AR volumes using an animal model of chronic AR and to investigate its clinical applicability. METHODS: In 8 sheep, a total of 26 hemodynamic states were obtained pharmacologically 20 weeks after the aortic valve noncoronary (n = 4) or right coronary (n = 4) leaflet was incised to produce AR. Reference standard LVSV and AR volume were determined using the electromagnetic flow method (EM). Simultaneous epicardial real-time 3DE studies were performed to obtain LV end-diastolic volumes (LVEDV), end-systolic volumes (LVESV), and LVSV by subtracting LVESV from LVEDV. Simultaneous ACM was performed to obtain LVSV and transmitral flows; AR volume was calculated by subtracting transmitral flow volume from LVSV. In a total of 19 patients with AR, real-time 3DE and ACM were used to obtain LVSVs and these were compared with each other. RESULTS: A strong relationship was found between LVSV derived from EM and those from the real-time 3DE (r = 0.93, P <.001, mean difference (3D - EM) = -1.0 +/- 9.8 mL). A good relationship between LVSV and AR volumes derived from EM and those by ACM was found (r = 0.88, P <.001). A good relationship between LVSV derived from real-time 3DE and that from ACM was observed (r = 0.73, P <.01, mean difference = 2.5 +/- 7.9 mL). In patients, a good relationship between LVSV obtained by real-time 3DE and ACM was found (r = 0.90, P <.001, mean difference = 0.6 +/- 9.8 mL). CONCLUSION: The combination of ACM and real-time 3DE for quantifying LV volumes, LVSV, and AR volumes was validated by the chronic animal study and was shown to be clinically applicable.

  12. AGING FACILITY CRITICALITY SAFETY CALCULATIONS

    C.E. Sanders

    2004-09-10

    The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the ''Project Design Criteria Document'' (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the ''SNF Aging System Description Document'' (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current ''SNF Aging System Description Document'' (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the

  13. Calculation of gas turbine characteristic

    Mamaev, B. I.; Murashko, V. L.

    2016-04-01

    The reasons and regularities of vapor flow and turbine parameter variation depending on the total pressure drop rate π* and rotor rotation frequency n are studied, as exemplified by a two-stage compressor turbine of a power-generating gas turbine installation. The turbine characteristic is calculated in a wide range of mode parameters using the method in which analytical dependences provide high accuracy for the calculated flow output angle and different types of gas dynamic losses are determined with account of the influence of blade row geometry, blade surface roughness, angles, compressibility, Reynolds number, and flow turbulence. The method provides satisfactory agreement of results of calculation and turbine testing. In the design mode, the operation conditions for the blade rows are favorable, the flow output velocities are close to the optimal ones, the angles of incidence are small, and the flow "choking" modes (with respect to consumption) in the rows are absent. High performance and a nearly axial flow behind the turbine are obtained. Reduction of the rotor rotation frequency and variation of the pressure drop change the flow parameters, the parameters of the stages and the turbine, as well as the form of the characteristic. In particular, for decreased n, nonmonotonic variation of the second stage reactivity with increasing π* is observed. It is demonstrated that the turbine characteristic is mainly determined by the influence of the angles of incidence and the velocity at the output of the rows on the losses and the flow output angle. The account of the growing flow output angle due to the positive angle of incidence for decreased rotation frequencies results in a considerable change of the characteristic: poorer performance, redistribution of the pressure drop at the stages, and change of reactivities, growth of the turbine capacity, and change of the angle and flow velocity behind the turbine.

  14. Rate calculation with colored noise

    Bartsch, Thomas; Benito, R M; Borondo, F

    2016-01-01

    The usual identification of reactive trajectories for the calculation of reaction rates requires very time-consuming simulations, particularly if the environment presents memory effects. In this paper, we develop a new method that permits the identification of reactive trajectories in a system under the action of a stochastic colored driving. This method is based on the perturbative computation of the invariant structures that act as separatrices for reactivity. Furthermore, using this perturbative scheme, we have obtained a formally exact expression for the reaction rate in multidimensional systems coupled to colored noisy environments.

  15. Electronics reliability calculation and design

    Dummer, Geoffrey W A; Hiller, N

    1966-01-01

    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  16. Band calculation of lonsdaleite Ge

    Chen, Pin-Shiang; Fan, Sheng-Ting; Lan, Huang-Siang; Liu, Chee Wee

    2017-01-01

    The band structure of Ge in the lonsdaleite phase is calculated using first principles. Lonsdaleite Ge has a direct band gap at the Γ point. For the conduction band, the Γ valley is anisotropic with the low transverse effective mass on the hexagonal plane and the large longitudinal effective mass along the c axis. For the valence band, both heavy-hole and light-hole effective masses are anisotropic at the Γ point. The in-plane electron effective mass also becomes anisotropic under uniaxial tensile strain. The strain response of the heavy-hole mass is opposite to the light hole.

  17. Semiclassical calculation of decay rates

    Bessa, A; Fraga, E S

    2008-01-01

    Several relevant aspects of quantum-field processes can be well described by semiclassical methods. In particular, the knowledge of non-trivial classical solutions of the field equations, and the thermal and quantum fluctuations around them, provide non-perturbative information about the theory. In this work, we discuss the calculation of the one-loop effective action from the semiclassical viewpoint. We intend to use this formalism to obtain an accurate expression for the decay rate of non-static metastable states.

  18. Digital calculations of engine cycles

    Starkman, E S; Taylor, C Fayette

    1964-01-01

    Digital Calculations of Engine Cycles is a collection of seven papers which were presented before technical meetings of the Society of Automotive Engineers during 1962 and 1963. The papers cover the spectrum of the subject of engine cycle events, ranging from an examination of composition and properties of the working fluid to simulation of the pressure-time events in the combustion chamber. The volume has been organized to present the material in a logical sequence. The first two chapters are concerned with the equilibrium states of the working fluid. These include the concentrations of var

  19. The Dental Trauma Internet Calculator

    Gerds, Thomas Alexander; Lauridsen, Eva Fejerskov; Christensen, Søren Steno Ahrensburg

    2012-01-01

    Background/Aim Prediction tools are increasingly used to inform patients about the future dental health outcome. Advanced statistical methods are required to arrive at unbiased predictions based on follow-up studies. Material and Methods The Internet risk calculator at the Dental Trauma Guide...... provides prognoses for teeth with traumatic injuries based on the Copenhagen trauma database: http://www.dentaltraumaguide.org The database includes 2191 traumatized permanent teeth from 1282 patients that were treated at the dental trauma unit at the University Hospital in Copenhagen (Denmark...

  20. Calculational Tool for Skin Contamination Dose Assessment

    Hill, R L

    2002-01-01

    A spreadsheet calculational tool was developed to automate the calculations performed for dose assessment of skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.

  1. Quantitative analysis of aortic regurgitation: real-time 3-dimensional and 2-dimensional color Doppler echocardiographic method--a clinical and a chronic animal study

    Shiota, Takahiro; Jones, Michael; Tsujino, Hiroyuki; Qin, Jian Xin; Zetts, Arthur D.; Greenberg, Neil L.; Cardon, Lisa A.; Panza, Julio A.; Thomas, James D.

    2002-01-01

    BACKGROUND: For evaluating patients with aortic regurgitation (AR), regurgitant volumes, left ventricular (LV) stroke volumes (SV), and absolute LV volumes are valuable indices. AIM: The aim of this study was to validate the combination of real-time 3-dimensional echocardiography (3DE) and semiautomated digital color Doppler cardiac flow measurement (ACM) for quantifying absolute LV volumes, LVSV, and AR volumes using an animal model of chronic AR and to investigate its clinical applicability. METHODS: In 8 sheep, a total of 26 hemodynamic states were obtained pharmacologically 20 weeks after the aortic valve noncoronary (n = 4) or right coronary (n = 4) leaflet was incised to produce AR. Reference standard LVSV and AR volume were determined using the electromagnetic flow method (EM). Simultaneous epicardial real-time 3DE studies were performed to obtain LV end-diastolic volumes (LVEDV), end-systolic volumes (LVESV), and LVSV by subtracting LVESV from LVEDV. Simultaneous ACM was performed to obtain LVSV and transmitral flows; AR volume was calculated by subtracting transmitral flow volume from LVSV. In a total of 19 patients with AR, real-time 3DE and ACM were used to obtain LVSVs and these were compared with each other. RESULTS: A strong relationship was found between LVSV derived from EM and those from the real-time 3DE (r = 0.93, P <.001, mean difference (3D - EM) = -1.0 +/- 9.8 mL). A good relationship between LVSV and AR volumes derived from EM and those by ACM was found (r = 0.88, P <.001). A good relationship between LVSV derived from real-time 3DE and that from ACM was observed (r = 0.73, P <.01, mean difference = 2.5 +/- 7.9 mL). In patients, a good relationship between LVSV obtained by real-time 3DE and ACM was found (r = 0.90, P <.001, mean difference = 0.6 +/- 9.8 mL). CONCLUSION: The combination of ACM and real-time 3DE for quantifying LV volumes, LVSV, and AR volumes was validated by the chronic animal study and was shown to be clinically applicable.

  2. Calculation of sound propagation in fibrous materials

    Tarnow, Viggo

    1996-01-01

    Calculations of attenuation and velocity of audible sound waves in glass wools are presented. The calculations use only the diameters of fibres and the mass density of glass wools as parameters. The calculations are compared with measurements.

  3. Flow Field Calculations for Afterburner

    Zhao Jianxing; Liu Quanzhong; et al.

    1995-01-01

    In this paper a calculation procedure for simulating the combustion flow in an afterburner with a heat shield, flame stabilizer, and contracting nozzle is described and evaluated by comparison with experimental data. The modified two-equation κ-ε model is employed to account for turbulence effects, and the κ-ε-g turbulent combustion model is used to determine the reaction rate. To take into account the influence of heat radiation on the gas temperature distribution, a heat flux model is applied to predict heat flux distributions. The solution domain spanned the entire region between the centerline and the afterburner wall, with the heat shield represented as a blockage to the mesh. The enthalpy equation and the wall boundary of the heat shield require special handling for the two passages in the afterburner. In order to make the computer program suitable for engineering applications, a subregional scheme is developed for calculating flow fields of complex geometries. The computational grids employed are 100×100 and 333×100 (non-uniformly distributed). The numerical results are compared with experimental data, and the agreement between predictions and measurements shows that the numerical method and the computational program used in the study are reasonable and appropriate for the preliminary design of the afterburner.

  4. 47 CFR 1.1623 - Probability calculation.

    2010-10-01

    ... Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be... determine their new intermediate probabilities. (g) Multiply each applicant's probability pursuant...

  5. Painless causality in defect calculations

    Cheung, C; Cheung, Charlotte; Magueijo, Joao

    1997-01-01

    Topological defects must respect causality, a statement leading to restrictive constraints on the power spectrum of the total cosmological perturbations they induce. Causality constraints have long been known to require the presence of an under-density in the surrounding matter compensating the defect network on large scales. This so-called compensation can never be neglected and significantly complicates calculations in defect scenarios, e.g. computing cosmic microwave background fluctuations. A quick and dirty way to implement the compensation is via the so-called compensation fudge factors. Here we derive the complete photon-baryon-CDM backreaction effects in defect scenarios. The fudge factor comes out as an algebraic identity and so we drop the negative qualifier "fudge". The compensation scale is computed and physically interpreted. Secondary backreaction effects exist, and neglecting them constitutes the well-defined approximation scheme within which one should consider compensation factor calculatio...

  6. Dyscalculia and the Calculating Brain.

    Rapin, Isabelle

    2016-08-01

    Dyscalculia, like dyslexia, affects some 5% of school-age children but has received much less investigative attention. In two thirds of affected children, dyscalculia is associated with another developmental disorder like dyslexia, attention-deficit disorder, anxiety disorder, visual and spatial disorder, or cultural deprivation. Infants, primates, some birds, and other animals are born with the innate ability, called subitizing, to tell at a glance whether small sets of scattered dots or other items differ by one or more item. This nonverbal approximate number system extends mostly to single digit sets as visual discrimination drops logarithmically to "many" with increasing numerosity (size effect) and crowding (distance effect). Preschoolers need several years and specific teaching to learn verbal names and visual symbols for numbers and school agers to understand their cardinality and ordinality and the invariance of their sequence (arithmetic number line) that enables calculation. This arithmetic linear line differs drastically from the nonlinear approximate number system mental number line that parallels the individual number-tuned neurons in the intraparietal sulcus in monkeys and overlying scalp distribution of discrete functional magnetic resonance imaging activations by number tasks in man. Calculation is a complex skill that activates both visual and spatial and visual and verbal networks. It is less strongly left lateralized than language, with approximate number system activation somewhat more right sided and exact number and arithmetic activation more left sided. Maturation and increasing number skill decrease associated widespread non-numerical brain activations that persist in some individuals with dyscalculia, which has no single, universal neurological cause or underlying mechanism in all affected individuals.

  7. Global Smooth Solutions for the 2-Dimensional Landau-Lifshitz-Darwin Coupled Model with Small Initial Data

    黄丙远; 赵坤

    2012-01-01

    The local existence of a smooth solution with the periodic initial value condition is first obtained by using the Galerkin method. Based on this, the global existence of the smooth solution for the 2-dimensional Landau-Lifshitz-Darwin coupled system with small initial data is further derived by making an a priori estimate that is global in time.

  8. Dynamic Self-Organizing Landmark Extraction Method Based on 2-Dimensional Growing Dynamic Self-Organizing Feature Map

    王作为; 张汝波

    2012-01-01

    A dynamic, self-organizing structural feature extraction method based on a distance sensor is presented. The procedure consists of three parts: design of an active exploration behavior, dimensionality reduction of the spatio-temporal information, and a self-organizing landmark extraction method. An active exploration behavior based on wall-following is designed to obtain highly correlated spatio-temporal sequence information. Active neurons based on change detection and activation intensity are used to reduce the dimensionality of the spatio-temporal sequence. Finally, a 2-dimensional growing dynamic self-organizing feature map (2-dimensional GDSOM) method is proposed to achieve self-organizing extraction and identification of environmental landmarks. The experimental results demonstrate the effectiveness of the method.

  9. Factors affecting calculation of L

    Ciotola, Mark P.

    2001-08-01

    A detectable extraterrestrial civilization can be modeled as a series of successive regimes over time, each of which is detectable for a certain proportion of its lifecycle. This methodology can be utilized to produce an estimate for L. Potential components of L include quantity of fossil fuel reserves, solar energy potential, quantity of regimes over time, lifecycle patterns of regimes, the proportion of its lifecycle during which a regime is actually detectable, and downtime between regimes. Relationships between these components provide a means of calculating the lifetime of communicative species in a detectable state, L. An example of how these factors interact is provided, utilizing values that are reasonable given known astronomical data for components such as solar energy potential, while existing knowledge about the terrestrial case is used as a baseline for other components, including fossil fuel reserves, quantity of regimes over time, lifecycle patterns of regimes, the proportion of its lifecycle during which a regime is actually detectable, and gaps of time between regimes due to recovery from catastrophic war or resource exhaustion. A range of values is calculated for L when parameters are established for each component so as to determine the lowest and highest values of L.

  10. RTU Comparison Calculator Enhancement Plan

    Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wang, Weimin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-07-01

    Over the past two years, the Department of Energy’s Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of the packaged rooftop units (RTUs) in the field. First, by issuing a challenge to the RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard. Second, by evaluating the performance of an advanced RTU controller that reduces the energy consumption by over 40%. BTO has previously also funded development of an RTU comparison calculator (RTUCC). RTUCC is a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and savings associated with this new class of products. This document provides the details of the enhancements that are required to support estimating energy savings from use of RTU challenge units or advanced controls on existing RTUs.

  11. Selfconsistent calculations for hyperdeformed nuclei

    Molique, H.; Dobaczewski, J.; Dudek, J.; Luo, W.D. [Universite Louis Pasteur, Strasbourg (France)

    1996-12-31

    Properties of the hyperdeformed nuclei in the A ~ 170 mass range are re-examined using the self-consistent Hartree-Fock method with the SOP parametrization. A comparison with the previous predictions that were based on a non-selfconsistent approach is made. The existence of the "hyper-deformed shell closures" at the proton and neutron numbers Z=70 and N=100 and their very weak dependence on the rotational frequency is suggested; the corresponding single-particle energy gaps are predicted to play a role similar to that of the Z=66 and N=86 gaps in the super-deformed nuclei of the A ~ 150 mass range. Selfconsistent calculations also suggest that the A ~ 170 hyperdeformed structures have negligible mass asymmetry in their shapes. Very importantly for the experimental studies, both the fission barriers and the "inner" barriers (that separate the hyperdeformed structures from those with smaller deformations) are predicted to be relatively high, up to a factor of ~2 higher than the corresponding ones in the 152Dy superdeformed nucleus used as a reference.

  12. RTU Comparison Calculator Enhancement Plan

    Miller, James D.; Wang, Weimin; Katipamula, Srinivas

    2014-03-31

    Over the past two years, the Department of Energy’s Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of the packaged rooftop units (RTUs) in the field. First, by issuing a challenge to the RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard. Second, by evaluating the performance of an advanced RTU controller that reduces the energy consumption by over 40%. BTO has previously also funded development of an RTU comparison calculator (RTUCC). RTUCC is a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and savings associated with this new class of products. This document provides the details of the enhancements that are required to support estimating energy savings from use of RTU challenge units or advanced controls on existing RTUs.

  13. Explosion Calculations of SN1987A

    Wooden, Diane H.; Morrison, David (Technical Monitor)

    1994-01-01

    Explosion calculations of SN1987A generate pictures of Rayleigh-Taylor fingers of radioactive Ni-56 which are boosted to velocities of several thousand km/s. From the KAO observations of the mid-IR iron lines, a picture of the iron in the ejecta emerges which is consistent with the "frothy iron fingers" having expanded to fill about 50% of the metal-rich volume of the ejecta. The ratio of the nickel line intensities yields a high ionization fraction of greater than or equal to 0.9 in the volume associated with the iron-group elements at day 415, before dust condenses in the ejecta. From the KAO observations of the dust's thermal emission, it is deduced that when the grains condense their infrared radiation is trapped, their apparent opacity is gray, and they have a surface area filling factor of about 50%. The dust emission from SN1987A is featureless: no 9.7 micrometer silicate feature, nor PAH features, nor dust emission features of any kind are seen at any time. The total dust opacity increases with time even though the surface area filling factor and the dust/gas ratio remain constant. This suggests that the dust forms along coherent structures which can maintain their radial line-of-sight opacities, i.e., along fat fingers. The coincidence of the filling factor of the dust and the filling factor of the iron strongly suggests that the dust condenses within the iron, and therefore the dust is iron-rich. It only takes approximately 4 x 10(exp -4) solar mass of dust for the ejecta to be optically thick out to approximately 100 micrometers; a lower limit of 4 x 10(exp -4) solar mass of condensed grains exists in the metal-rich volume, but much more dust could be present. The episode of dust formation started at about 530 days and proceeded rapidly, so that by 600 days 45% of the bolometric luminosity was being emitted in the IR; by 775 days, 86% of the bolometric luminosity was being reradiated by the dust. Measurements of the bolometric luminosity of SN1987A from

  14. A New Approach for Calculating Vacuum Susceptibility

    宗红石; 平加伦; 顾建中

    2004-01-01

    Based on the Dyson-Schwinger approach, we propose a new method for calculating vacuum susceptibilities. As an example, the vector vacuum susceptibility is calculated. A comparison with the results of the previous approaches is presented.

  15. Dynamics Calculation of Traveling Wave Tube

    2011-01-01

    During the dynamics calculation of a traveling wave tube, the field map in the tube must be obtained. The field map is affected not only by the beam loading but also by the attenuation coefficient. The calculation of the attenuation coefficient

  16. Pressure Vessel Calculations for VVER-440 Reactors

    Hordósy, G.; Hegyi, Gy.; Keresztúri, A.; Maráczy, Cs.; Temesvári, E.; Vértes, P.; Zsolnay, É.

    2003-06-01

    Monte Carlo calculations were performed for a selected cycle of the Paks NPP Unit II to test a computational model. In the model the source term was calculated by the core design code KARATE and the neutron transport calculations were performed by the MCNP. Different forms of the source specification were examined. The calculated results were compared with measurements and in most cases fairly good agreement was found.

  17. A general formalism for phase space calculations

    Norbury, John W.; Deutchman, Philip A.; Townsend, Lawrence W.; Cucinotta, Francis A.

    1988-01-01

    General formulas for calculating the interactions of galactic cosmic rays with target nuclei are presented. Methods for calculating the appropriate normalization volume elements and phase space factors are presented. Particular emphasis is placed on obtaining correct phase space factors for 2-, and 3-body final states. Calculations for both Lorentz-invariant and noninvariant phase space are presented.

  18. Status Report of NNLO QCD Calculations

    Klasen, M

    2005-01-01

    We review recent progress in next-to-next-to-leading order (NNLO) perturbative QCD calculations with special emphasis on results ready for phenomenological applications. Important examples are new results on structure functions and jet or Higgs boson production. In addition, we describe new calculational techniques based on twistors and their potential for efficient calculations of multiparticle amplitudes.

  19. Mathematical Creative Activity and the Graphic Calculator

    Duda, Janina

    2011-01-01

    Teaching mathematics using graphic calculators has been an issue of didactic discussions for years. Finding ways in which graphic calculators can enrich the development process of creative activity in mathematically gifted students between the ages of 16-17 is the focus of this article. Research was conducted using graphic calculators with…

  20. Decimals, Denominators, Demons, Calculators, and Connections

    Sparrow, Len; Swan, Paul

    2005-01-01

    The authors provide activities for overcoming some fraction misconceptions using calculators specially designed for learners in primary years. The writers advocate use of the calculator as a way to engage children in thinking about mathematics. By engaging with a calculator as part of mathematics learning, children are learning about and using the…

  1. Inverse Calculation of Power Density for Laser Surface Treatment

    Römer, G.R.B.E.; Meijer, J.

    2000-01-01

    Laser beam surface treatment requires a well-defined temperature profile. In this paper an analytic method is presented to solve the inverse problem of heat conduction in solids, based on the 2-dimensional Fourier transform. As a result, the required power density profile of the laser beam can be ca
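
    In spirit, the inverse approach described above amounts to a deconvolution in the spatial-frequency domain: if the steady-state surface temperature is (approximately) the convolution of the beam power density with a known temperature response kernel, dividing in Fourier space recovers the power density. The sketch below illustrates only that general idea; the Gaussian kernel, grid, flat-top target profile and regularization constant are assumptions made for the example and do not reproduce the authors' formulation.

      import numpy as np

      def power_density_from_temperature(T_desired, kernel, eps=1e-3):
          """Estimate a beam power density q(x, y) whose convolution with a known
          temperature response kernel reproduces T_desired, via 2-D FFT deconvolution.

          T_desired : 2-D array, target surface temperature rise [K]
          kernel    : 2-D array, assumed temperature response to a unit point source
          eps       : Tikhonov-style constant damping division by near-zero frequencies
          """
          T_hat = np.fft.fft2(T_desired)
          K_hat = np.fft.fft2(np.fft.ifftshift(kernel))
          # Regularized inverse filter: q_hat = conj(K) * T_hat / (|K|^2 + eps)
          q_hat = np.conj(K_hat) * T_hat / (np.abs(K_hat) ** 2 + eps)
          return np.real(np.fft.ifft2(q_hat))

      # Illustrative use: synthetic Gaussian response kernel, flat-top target profile.
      n = 128
      x = np.linspace(-1.0, 1.0, n)
      X, Y = np.meshgrid(x, x)
      kernel = np.exp(-(X**2 + Y**2) / 0.02)                # assumed response kernel
      T_target = (np.abs(X) < 0.3) & (np.abs(Y) < 0.3)      # desired uniformly heated patch
      q = power_density_from_temperature(T_target.astype(float), kernel)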

  2. On the Use of a Direct Radiative Transfer Equation Solver for Path Loss Calculation in Underwater Optical Wireless Channels

    Li, Changping

    2015-07-22

    In this letter, we propose a fast numerical solution for the steady-state radiative transfer equation based on the approach in [1] in order to calculate the optical path loss of light propagation suffering from attenuation due to absorption and scattering in various water types. We apply an optimal non-uniform method to discretize the angular space and an upwind-type finite difference method to discretize the spatial space. A Gauss-Seidel iterative method is then applied to solve the fully discretized system of linear equations. Finally, we extend the resulting radiance from two dimensions to three dimensions by the azimuthal symmetry assumption to compute the received optical power under the given receiver aperture and field of view. The accuracy and efficiency of the proposed scheme are validated by a uniform RTE solver and Monte Carlo simulations.
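
    The solver stage mentioned above, a Gauss-Seidel iteration on the fully discretized linear system, can be sketched generically as follows; the small diagonally dominant test matrix is purely illustrative and is not the actual RTE discretization.

      import numpy as np

      def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=10_000):
          """Solve A x = b with the Gauss-Seidel iteration.

          Each sweep updates x[i] in place using the latest available values,
          which is what distinguishes Gauss-Seidel from the Jacobi iteration.
          """
          n = len(b)
          x = np.zeros(n) if x0 is None else x0.astype(float).copy()
          for _ in range(max_iter):
              x_old = x.copy()
              for i in range(n):
                  s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                  x[i] = (b[i] - s) / A[i, i]
              if np.linalg.norm(x - x_old, np.inf) < tol:
                  break
          return x

      # Small diagonally dominant example (convergence is guaranteed in that case).
      A = np.array([[4.0, -1.0, 0.0],
                    [-1.0, 4.0, -1.0],
                    [0.0, -1.0, 4.0]])
      b = np.array([15.0, 10.0, 10.0])
      print(gauss_seidel(A, b))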

  3. Microscopic Calculations of 240Pu Fission

    Younes, W; Gogny, D

    2007-09-11

    Hartree-Fock-Bogoliubov calculations have been performed with the Gogny finite-range effective interaction for {sup 240}Pu out to scission, using a new code developed at LLNL. A first set of calculations was performed with constrained quadrupole moment along the path of most probable fission, assuming axial symmetry but allowing for the spontaneous breaking of reflection symmetry of the nucleus. At a quadrupole moment of 345 b, the nucleus was found to spontaneously scission into two fragments. A second set of calculations, with all nuclear moments up to hexadecapole constrained, was performed to approach the scission configuration in a controlled manner. Calculated energies, moments, and representative plots of the total nuclear density are shown. The present calculations serve as a proof-of-principle, a blueprint, and starting-point solutions for a planned series of more comprehensive calculations to map out a large set of scission configurations, and the associated fission-fragment properties.

  4. Calculation of the Moments of Polygons.

    1987-06-01

    The available abstract consists only of a garbled listing of the report's Fortran source; the recoverable steps are the calculation of the polygon area, its centroid, and its second moments (section properties) from the vertex coordinates.
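
    Since only the intent of the listing is recoverable, the following is a hedged sketch of the standard closed-form (shoelace-type) formulas for the area, centroid and second moments of a simple polygon; the function and variable names are illustrative and are not taken from the report.

      def polygon_moments(vertices):
          """Area, centroid and second moments of area of a simple polygon.

          vertices : list of (x, y) tuples in order (clockwise or counter-clockwise).
          Uses the standard shoelace-type closed formulas; the sign of each term
          follows the orientation of the vertex list, so totals are returned as abs().
          """
          n = len(vertices)
          area2 = cx = cy = ixx = iyy = 0.0
          for i in range(n):
              x0, y0 = vertices[i]
              x1, y1 = vertices[(i + 1) % n]
              cross = x0 * y1 - x1 * y0        # twice the signed area of triangle (0, v_i, v_i+1)
              area2 += cross
              cx += (x0 + x1) * cross
              cy += (y0 + y1) * cross
              ixx += (y0 * y0 + y0 * y1 + y1 * y1) * cross   # second moment about the x-axis
              iyy += (x0 * x0 + x0 * x1 + x1 * x1) * cross   # second moment about the y-axis
          area = area2 / 2.0
          centroid = (cx / (6.0 * area), cy / (6.0 * area))
          return abs(area), centroid, abs(ixx) / 12.0, abs(iyy) / 12.0

      # Unit square: area 1, centroid (0.5, 0.5), Ixx = Iyy = 1/3 about the axes.
      print(polygon_moments([(0, 0), (1, 0), (1, 1), (0, 1)]))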

  5. Surface Tension Calculation of Undercooled Alloys

    2001-01-01

    Based on the Butler equation and thermodynamic data of undercooled alloys extrapolated from those of stable liquid alloys, a method for the surface tension calculation of undercooled alloys is proposed. The surface tensions of stable liquid and undercooled Ni-Cu (xNi=0.42) and Ni-Fe (xNi=0.3 and 0.7) alloys are calculated using the STCBE (Surface Tension Calculation based on Butler Equation) program. The agreement between calculated values and experimental data is good, and the temperature dependence of the surface tension is reasonable down to 150-200 K below the liquidus temperature of the alloys.

  6. The conundrum of calculating carbon footprints

    Strobel, Bjarne W.; Erichsen, Anders Christian; Gausset, Quentin

    2016-01-01

    A pre-condition for reducing global warming is to minimise the emission of greenhouse gasses (GHGs). A common approach to informing people about the link between behaviour and climate change rests on developing GHG calculators that quantify the ‘carbon footprint’ of a product, a sector or an actor. ... There is, however, an abundance of GHG calculators that rely on very different premises and give very different estimates of carbon footprints. In this chapter, we compare and analyse the main principles of calculating carbon footprints, and discuss how calculators can inform (or misinform) people who wish ...

  7. MATNORM: Calculating NORM using composition matrices

    Pruseth, Kamal L.

    2009-09-01

    This paper discusses the implementation of an entirely new set of formulas to calculate the CIPW norm. MATNORM does not involve any sophisticated programming skill and has been developed using Microsoft Excel spreadsheet formulas. These formulas are easy to understand and a mere knowledge of the if-then-else construct in MS-Excel is sufficient to implement the whole calculation scheme outlined below. The sequence of calculation used here differs from that of the standard CIPW norm calculation, but the results are very similar. The use of MS-Excel macro programming and other high-level programming languages has been deliberately avoided for simplicity.

  8. Pile Load Capacity – Calculation Methods

    Wrana Bogumił

    2015-12-01

    The article is a review of the current problems of foundation pile capacity calculations. The article considers the main principles of pile capacity calculations presented in Eurocode 7 and other methods with adequate explanations. Two main methods are presented: the α-method used to calculate the short-term load capacity of piles in cohesive soils and the β-method used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on cone penetration test (CPTu) results are presented, as well as the pile capacity problem based on static tests.

  9. Myocardial dysfunction in patients with chronic kidney disease evaluated by global 2-dimensional strain imaging

    王泓; 曹铁生; 杨斌; 傅宁华; 李娟; 孙晖

    2011-01-01

    Objective To evaluate whether global 2-dimensional strain imaging can offer additional benefit over conventional echocardiography to detect subclinical myocardial damage in patients with chronic kidney disease (CKD). Methods Conventional echocardiography and global 2-dimensional strain imaging were performed in 39 patients with CKD [23 men and 16 women, mean age (45.6 ± 14.6) years] and 29 control subjects. Twenty patients had CKD stage 2 or 3 (group 1) and nineteen patients had CKD stage 4 or 5 (group 2). Left ventricular structure and function were evaluated by conventional echocardiography. Global longitudinal and circumferential strain and strain rate were analyzed. Results There were no differences in ejection fraction and fractional shortening between CKD patients and controls. Compared with controls, the CKD groups had significantly decreased global longitudinal strain and strain rate. Global longitudinal strain decreased from -(23.8 ± 3.1)% in controls to -(18.5 ± 2.4)% in group 1 and to -(15.2 ± 3.2)% in group 2 (P < 0.001). Compared with controls, there was no difference in global circumferential strain and strain rate in group 1, but global circumferential strain and strain rate of group 2 were reduced [-(17.1 ± 3.0)% vs -(21.2 ± 2.8)%, P < 0.05; -(1.0 ± 0.2)% vs -(1.3 ± 0.3)%, P < 0.05]. In correlation analyses, global longitudinal strain was positively related to eGFR (r = 0.376, P < 0.001) and inversely related to left ventricular mass index (r = -0.473, P < 0.01). Conclusions Global 2-dimensional strain imaging may represent a useful tool for the assessment of subclinical myocardial dysfunction in patients with CKD.

  10. Atomic Structure Calculations for Neutral Oxygen

    Norah Alonizan; Rabia Qindeel; Nabil Ben Nessib

    2016-01-01

    Energy levels and oscillator strengths for neutral oxygen have been calculated using the Cowan (CW), SUPERSTRUCTURE (SS), and AUTOSTRUCTURE (AS) atomic structure codes. The results obtained with these atomic codes have been compared with MCHF calculations and experimental values from the National Institute of Standards and Technology (NIST) database.

  11. 10 CFR 766.102 - Calculation methodology.

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology....

  12. Calculation of cohesive energy of actinide metals

    钱存富; 陈秀芳; 余瑞璜; 耿平; 段占强

    1997-01-01

    According to the empirical electron theory of solids and molecules (EET), an equation for calculating the cohesive energy of actinide metals is given. The cohesive energy of the 9 actinide metals with known crystal structure is calculated and agrees with the experimental values on the whole, and the cohesive energy of 6 actinide metals with unknown crystal structure is predicted.

  13. Calculation reliability in vehicle accident reconstruction.

    Wach, Wojciech

    2016-06-01

    The reconstruction of vehicle accidents is subject to assessment in terms of the reliability of a specific system of engineering and technical operations. In the article [26] a formalized concept of the reliability of vehicle accident reconstruction, defined using Bayesian networks, was proposed. The current article is focused on the calculation reliability since that is the most objective section of this model. It is shown that calculation reliability in accident reconstruction is not another form of calculation uncertainty. The calculation reliability is made dependent on modeling reliability, adequacy of the model and relative uncertainty of calculation. All the terms are defined. An example is presented concerning the analytical determination of the collision location of two vehicles on the road in the absence of evidential traces. It has been proved that the reliability of this kind of calculations generally does not exceed 0.65, despite the fact that the calculation uncertainty itself can reach only 0.05. In this example special attention is paid to the analysis of modeling reliability and calculation uncertainty using sensitivity coefficients and weighted relative uncertainty.

  14. Calculating "g" from Acoustic Doppler Data

    Torres, Sebastian; Gonzalez-Espada, Wilson J.

    2006-01-01

    Traditionally, the Doppler effect for sound is introduced in high school and college physics courses. Students calculate the perceived frequency for several scenarios relating a stationary or moving observer and a stationary or moving sound source. These calculations assume a constant velocity of the observer and/or source. Although seldom…

  15. Efficient Calculation of Earth Penetrating Projectile Trajectories

    2006-09-01

    Thesis by Daniel F. Youch, Lieutenant Commander, United States Navy (B.S., Temple), Naval Postgraduate School, Monterey, CA, September 2006; thesis advisor: Joshua Gordis. Only report documentation page metadata is recoverable from the available record.

  16. Direct calculation of wind turbine tip loss

    Wood, D.H.; Okulov, Valery; Bhattacharjee, D.

    2016-01-01

    ... We develop three methods for the direct calculation of the tip loss. The first is the computationally expensive calculation of the velocities induced by the helicoidal wake, which requires the evaluation of infinite sums of products of Bessel functions. The second uses the asymptotic evaluation

  17. Calculating Electromagnetic Fields Of A Loop Antenna

    Schieffer, Mitchell B.

    1987-01-01

    Approximate field values computed rapidly. MODEL computer program developed to calculate electromagnetic field values of large loop antenna at all distances to observation point. Antenna assumed to be in x-y plane with center at origin of coordinate system. Calculates field values in both rectangular and spherical components. Also solves for wave impedance. Written in MicroSoft FORTRAN 77.

  18. New tool for standardized collector performance calculations

    Perers, Bengt; Kovacs, Peter; Olsson, Marcus;

    2011-01-01

    A new tool for standardized calculation of solar collector performance has been developed in cooperation between SP Technical Research Institute Sweden, DTU Denmark and SERC Dalarna University. The tool is designed to calculate the annual performance for a number of representative cities in Europe...

  19. Calculation of Temperature Rise in Calorimetry.

    Canagaratna, Sebastian G.; Witt, Jerry

    1988-01-01

    Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)

  20. Calculation of LDL apoB

    Sniderman, A.D.; Tremblay, A.J.; Graaf, J. de; Couture, P.

    2014-01-01

    OBJECTIVES: This study tests the validity of the Hattori formula to calculate LDL apoB based on plasma lipids and total apoB. METHODS: In 2178 patients in a tertiary care lipid clinic, LDL apoB calculated as suggested by Hattori et al. was compared to directly measured LDL apoB isolated by ultracent

  1. Investment Return Calculations and Senior School Mathematics

    Fitzherbert, Richard M.; Pitt, David G. W.

    2010-01-01

    The methods for calculating returns on investments are taught to undergraduate level business students. In this paper, the authors demonstrate how such calculations are within the scope of senior school students of mathematics. In providing this demonstration the authors hope to give teachers and students alike an illustration of the power and the…

  2. 40 CFR 1065.850 - Calculations.

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the...

  3. Teaching Discrete Mathematics with Graphing Calculators.

    Masat, Francis E.

    Graphing calculator use is often thought of in terms of pre-calculus or continuous topics in mathematics. This paper contains examples and activities that demonstrate useful, interesting, and easy ways to use a graphing calculator with discrete topics. Examples are given for each of the following topics: functions, mathematical induction and…

  4. Using Calculators in Mathematics 12. Student Text.

    Rising, Gerald R.; And Others

    This student textbook is designed to incorporate programable calculators in grade 12 mathematics. The seven chapters contained in this document are: (1) Using Calculators in Mathematics; (2) Sequences, Series, and Limits; (3) Iteration, Mathematical Induction, and the Binomial Theorem; (4) Applications of the Fundamental Counting Principle; (5)…

  5. 46 CFR 154.520 - Piping calculations.

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Piping calculations. 154.520 Section 154.520 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Process Piping Systems § 154.520 Piping calculations. A piping system must be designed to meet...

  6. Data base to compare calculations and observations

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)

  7. 76 FR 71431 - Civil Penalty Calculation Methodology

    2011-11-17

    ... Uniform Fine Assessment (UFA) algorithm, which FMCSA currently uses for calculation of civil penalties. UFA takes into account the statutory penalty factors under 49 U.S.C. 521(b)(2)(D). The evaluation will... will impose a minimum civil penalty that is calculated by UFA. In many cases involving small...

  8. Identification and comparative proteomic study of quail and duck egg white protein using 2-dimensional gel electrophoresis and matrix-assisted laser desorption/ionization time-of-flight tandem mass spectrometry analysis.

    Hu, S; Qiu, N; Liu, Y; Zhao, H; Gao, D; Song, R; Ma, M

    2016-05-01

    A proteomic study of egg white proteins from 2 major poultry species, namely quail (Coturnix coturnix) and duck (Anas platyrhynchos), was performed with comparison to those of chicken (Gallus gallus) through 2-dimensional polyacrylamide gel electrophoresis (2-DE) analysis. By using matrix-assisted laser desorption/ionization time-of-flight tandem mass spectrometry (MALDI-TOF MS/MS), 29 protein spots representing 10 different kinds of proteins as well as 17 protein spots designating 9 proteins were successfully identified in quail and duck egg white, respectively. This report suggested a closer relationship between quail and chicken egg white proteome patterns, whereas the duck egg white protein distribution on the 2-DE map was more distinct. In duck egg white, some well-known major proteins, such as ovomucoid, clusterin, extracellular fatty acid-binding protein precursor (ex-FABP), and prostaglandin D2 synthase (PG D2 synthase), were not detected, while two major protein spots identified as "deleted in malignant brain tumors 1" protein (DMBT1) and vitellogenin-2 were found specific to duck in the corresponding range on the 2-DE gel map. These interspecies diversities may be associated with the egg white protein functions in cell defense or regulating/supporting the embryonic development to adapt to the inhabiting environment or reproduction demand during long-term evolution. The findings of this work will give insight into the advantages involved in the application on egg white proteins from various egg sources, which may present novel beneficial properties in the food industry or related to human health.

  9. Heat Calculation of Borehole Heat Exchangers

    S. Filatov

    2013-01-01

    The paper considers a heat calculation method for borehole heat exchangers (BHE) which can be used for the design and optimization of their design parameters and included in a comprehensive mathematical model of a heat supply system with a heat pump based on the utilization of low-grade heat from the ground. The developed calculation method is based on reducing the general solution of the heat transfer problem in a BHE, with due account of the heat transfer between the downward and upward flows of the heat carrier, to the solution for a boundary condition of one kind on the borehole wall. The method of electrothermal analogy has been used to calculate the thermal resistance, and the shape factors required for the calculation of the borehole filler thermal resistance have been obtained numerically. The paper presents results of heat calculations of various BHE designs in accordance with the proposed method.
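
    As an illustration of the electrothermal analogy mentioned above, the sketch below chains the standard per-metre thermal resistances (in-pipe convection, conduction through the pipe wall, conduction through the grout) in series; the concentric-grout approximation and all geometry and property values are assumptions for the example and stand in for the shape factors that the paper obtains numerically.

      import math

      def borehole_resistance(r_pipe_in, r_pipe_out, r_borehole,
                              h_fluid, k_pipe, k_grout):
          """Series thermal resistance per metre of borehole [m*K/W] from the
          electrothermal analogy: convection inside the pipe, conduction through
          the pipe wall, conduction through the grout (concentric approximation)."""
          R_conv = 1.0 / (2.0 * math.pi * r_pipe_in * h_fluid)
          R_pipe = math.log(r_pipe_out / r_pipe_in) / (2.0 * math.pi * k_pipe)
          R_grout = math.log(r_borehole / r_pipe_out) / (2.0 * math.pi * k_grout)
          return R_conv + R_pipe + R_grout

      # Illustrative values: 32 mm PE pipe in a 150 mm borehole.
      print(borehole_resistance(r_pipe_in=0.013, r_pipe_out=0.016, r_borehole=0.075,
                                h_fluid=1500.0, k_pipe=0.4, k_grout=1.5))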

  10. Spreadsheet Based Scaling Calculations and Membrane Performance

    Wolfe, T D; Bourcier, W L; Speth, T F

    2000-12-28

    Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO4·2H2O), BaSO4, SrSO4, SiO2, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and use the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product "Q." The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI
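
    The final comparison described above, an effective ion product "Q" against a temperature-adjusted solubility product, reduces to a saturation index SI = log10(Q/Ksp). The sketch below shows only that last step, with assumed ion activities and an assumed Ksp for gypsum; it is not taken from the TFSP spreadsheet.

      import math

      def saturation_index(ion_activity_product, ksp):
          """SI = log10(Q / Ksp): SI > 0 indicates supersaturation (scaling risk),
          SI = 0 equilibrium, SI < 0 undersaturation."""
          return math.log10(ion_activity_product / ksp)

      # Illustrative gypsum (CaSO4·2H2O) check with assumed activities and Ksp.
      a_Ca, a_SO4 = 2.0e-3, 1.5e-3        # assumed free-ion activities
      Q = a_Ca * a_SO4                    # water activity taken as ~1 here
      print(saturation_index(Q, ksp=2.5e-5))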

  11. Ti-84 Plus graphing calculator for dummies

    McCalla

    2013-01-01

    Get up-to-speed on the functionality of your TI-84 Plus calculator Completely revised to cover the latest updates to the TI-84 Plus calculators, this bestselling guide will help you become the most savvy TI-84 Plus user in the classroom! Exploring the standard device, the updated device with USB plug and upgraded memory (the TI-84 Plus Silver Edition), and the upcoming color screen device, this book provides you with clear, understandable coverage of the TI-84's updated operating system. Details the new apps that are available for download to the calculator via the USB cabl

  12. Energy of plate tectonics calculation and projection

    N. H. Swedan

    2013-02-01

    Mathematics and observations suggest that the energy of the geological activities resulting from plate tectonics is equal to the latent heat of melting, calculated at the mantle's pressure, of the new ocean crust created at mid-ocean ridges following sea floor spreading. This energy varies with the temperature of the ocean floor, which is correlated with surface temperature. The objective of this manuscript is to calculate the force that drives plate tectonics, estimate the energy released, verify the calculations based on experiments and observations, and project the increase of geological activities with the surface temperature rise caused by climate change.

  13. Assessment of seismic margin calculation methods

    Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.

    1989-03-01

    Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs.

  14. Program Calculates Current Densities Of Electronic Designs

    Cox, Brian

    1996-01-01

    PDENSITY computer program calculates current densities for use in calculating power densities of electronic designs. Reads parts-list file for given design, file containing current required for each part, and file containing size of each part. For each part in design, program calculates current density in units of milliamperes per square inch. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19588). PC version of program (NPO-19171).
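
    The per-part arithmetic described above is simply current divided by footprint area, reported in milliamperes per square inch; the sketch below illustrates that step with made-up part data and does not reproduce the original AWK parsing of the parts-list, current and size files.

      def current_density_ma_per_in2(current_ma, width_in, length_in):
          """Current density in milliamperes per square inch for one part."""
          return current_ma / (width_in * length_in)

      parts = {                      # hypothetical parts list: (current [mA], W [in], L [in])
          "U1": (120.0, 0.5, 0.5),
          "R7": (15.0, 0.1, 0.25),
      }
      for name, (i_ma, w, l) in parts.items():
          print(name, round(current_density_ma_per_in2(i_ma, w, l), 1), "mA/in^2")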

  15. Hamming generalized corrector for reactivity calculation

    Suescun-Diaz, Daniel; Ibarguen-Gonzalez, Maria C.; Figueroa-Jimenez, Jorge H. [Pontificia Universidad Javeriana Cali, Cali (Colombia). Dept. de Ciencias Naturales y Matematicas

    2014-06-15

    This work presents the generalized Hamming corrector method for numerically solving the differential equation for the delayed neutron precursor concentration from the point kinetics equations for reactivity calculation, without using the nuclear power history or the Laplace transform. A study was carried out of several correctors with their respective modifiers and different time steps, in order to offer stability and greater precision. Better results are obtained with some correctors than with other existing methods. Reactivity can be calculated with a precision of the order h^5, where h is the time step. (orig.)
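
    For reference, the classical Hamming predictor-corrector (shown here without the usual modifier terms, and not the generalized corrector of the paper) applied to a generic scalar ODE y' = f(t, y) looks as follows; the test problem and starting values are purely illustrative.

      import numpy as np

      def hamming_step(f, t, y, fvals, h):
          """One step of the classical Hamming predictor-corrector.

          y     : array [y_{n-3}, y_{n-2}, y_{n-1}, y_n]
          fvals : array [f_{n-2}, f_{n-1}, f_n]
          Returns (y_{n+1}, f_{n+1}).  Local truncation error is O(h^5).
          """
          f_nm2, f_nm1, f_n = fvals
          # Milne predictor
          y_pred = y[0] + 4.0 * h / 3.0 * (2.0 * f_n - f_nm1 + 2.0 * f_nm2)
          f_pred = f(t + h, y_pred)
          # Hamming corrector
          y_corr = (9.0 * y[3] - y[1]) / 8.0 + 3.0 * h / 8.0 * (f_pred + 2.0 * f_n - f_nm1)
          return y_corr, f(t + h, y_corr)

      # Illustration on y' = -y, y(0) = 1 (exact solution exp(-t)); the four
      # starting values are taken from the exact solution for simplicity.
      f = lambda t, y: -y
      h, t = 0.1, 0.3
      ys = np.exp(-np.array([0.0, 0.1, 0.2, 0.3]))
      fs = -ys[1:]
      y_next, _ = hamming_step(f, t, ys, fs, h)
      print(y_next, np.exp(-0.4))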

  16. Pressure vessel calculations for VVER-440 reactors.

    Hordósy, G; Hegyi, Gy; Keresztúri, A; Maráczy, Cs; Temesvári, E; Vértes, P; Zsolnay, E

    2005-01-01

    For the determination of the fast neutron load of the reactor pressure vessel a mixed calculational procedure was developed. The procedure was applied to the Unit II of Paks NPP, Hungary. The neutron source on the outer surfaces of the reactor was determined by a core design code, and the neutron transport calculations outside the core were performed by the Monte Carlo code MCNP. The reaction rate in the activation detectors at surveillance positions and at the cavity were calculated and compared with measurements. In most cases, fairly good agreement was found.

  17. The WFIRST Galaxy Survey Exposure Time Calculator

    Hirata, Christopher M.; Gehrels, Neil; Kneib, Jean-Paul; Kruk, Jeffrey; Rhodes, Jason; Wang, Yun; Zoubian, Julien

    2013-01-01

    This document describes the exposure time calculator for the Wide-Field Infrared Survey Telescope (WFIRST) high-latitude survey. The calculator works in both imaging and spectroscopic modes. In addition to the standard ETC functions (e.g. background and SN determination), the calculator integrates over the galaxy population and forecasts the density and redshift distribution of galaxy shapes usable for weak lensing (in imaging mode) and the detected emission lines (in spectroscopic mode). The source code is made available for public use.

  18. Temperature calculation in fire safety engineering

    Wickström, Ulf

    2016-01-01

    This book provides a consistent scientific background to engineering calculation methods applicable to analyses of materials reaction-to-fire, as well as fire resistance of structures. Several new and unique formulas and diagrams which facilitate calculations are presented. It focuses on problems involving high temperature conditions and, in particular, defines boundary conditions in a suitable way for calculations. A large portion of the book is devoted to boundary conditions and measurements of thermal exposure by radiation and convection. The concepts and theories of adiabatic surface temperature and measurements of temperature with plate thermometers are thoroughly explained. Also presented is a renewed method for modeling compartment fires, with the resulting simple and accurate prediction tools for both pre- and post-flashover fires. The final chapters deal with temperature calculations in steel, concrete and timber structures exposed to standard time-temperature fire curves. Useful temperature calculat...

  19. Measured and Calculated Volumes of Wetland Depressions

    U.S. Environmental Protection Agency — Measured and calculated volumes of wetland depressions This dataset is associated with the following publication: Wu, Q., and C. Lane. Delineation and quantification...

  20. Spectra: Time series power spectrum calculator

    Gallardo, Tabaré

    2017-01-01

    Spectra calculates the power spectrum of a time series, equally spaced or not, based on the Spectral Correlation Coefficient (Ferraz-Mello 1981, Astron. Journal 86 (4), 619). It is very efficient for the detection of low frequencies.

  1. Large Numbers and Calculators: A Classroom Activity.

    Arcavi, Abraham; Hadas, Nurit

    1989-01-01

    Described is an activity demonstrating how a scientific calculator can be used in a mathematics classroom to introduce new content while studying a conventional topic. Examples of reading and writing large numbers, and reading hidden results are provided. (YP)

  2. Fair and Reasonable Rate Calculation Data -

    Department of Transportation — This dataset provides guidelines for calculating the fair and reasonable rates for U.S. flag vessels carrying preference cargoes subject to regulations contained at...

  3. Quantum Monte Carlo Calculations of Light Nuclei

    Pieper, Steven C

    2007-01-01

    During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.

  4. Multigrid Methods in Electronic Structure Calculations

    Briggs, E L; Bernholc, J

    1996-01-01

    We describe a set of techniques for performing large scale ab initio calculations using multigrid accelerations and a real-space grid as a basis. The multigrid methods provide effective convergence acceleration and preconditioning on all length scales, thereby permitting efficient calculations for ill-conditioned systems with long length scales or high energy cut-offs. We discuss specific implementations of multigrid and real-space algorithms for electronic structure calculations, including an efficient multigrid-accelerated solver for Kohn-Sham equations, compact yet accurate discretization schemes for the Kohn-Sham and Poisson equations, optimized pseudopotentials for real-space calculations, efficacious computation of ionic forces, and a complex-wavefunction implementation for arbitrary sampling of the Brillouin zone. A particular strength of a real-space multigrid approach is its ready adaptability to massively parallel computer architectures, and we present an implementation for the Cray-T3D with essen...

  5. 46 CFR 170.090 - Calculations.

    2010-10-01

    ... necessary to compute and plot any of the following curves as part of the calculations required in this subchapter, these plots must also be submitted: (1) Righting arm or moment curves. (2) Heeling arm or...

  6. Representation and calculation of economic uncertainties

    Schjær-Jacobsen, Hans

    2002-01-01

    Management and decision making when certain information is available may be a matter of rationally choosing the optimal alternative by calculation of the utility function. When only uncertain information is available (which is most often the case) decision-making calls for more complex methods...... of representation and calculation and the basis for choosing the optimal alternative may become obscured by uncertainties of the utility function. In practice, several sources of uncertainties of the required information impede optimal decision making in the classical sense. In order to be able to better handle...... to uncertain economic numbers are discussed. When solving economic models for decision-making purposes calculation of uncertain functions will have to be carried out in addition to the basic arithmetical operations. This is a challenging numerical problem since improper methods of calculation may introduce...

  7. Note about socio-economic calculations

    Landex, Alex; Andersen, Jonas Lohmann Elkjær; Salling, Kim Bang

    2006-01-01

    these effects must be described qualitatively. This note describes the socio-economic evaluation based on market prices and not factor prices which has been the tradition in Denmark till now. This is due to the recommendation from the Ministry of Transport to start using calculations based on market prices......This note gives a short introduction of how to make socio-economic evaluations in connection with the teaching at the Centre for Traffic and Transport (CTT). It is not a manual for making socio-economic calculations in transport infrastructure projects – in this context we refer to the guidelines...... for socio-economic calculations within the transportation area (Ministry of Traffic, 2003). The note also explains the theory of socio-economic calculations – reference is here made to ”Road Infrastructure Planning – a Decision-oriented approach” (Leleur, 2000). Socio-economic evaluations of infrastructure...

  8. Obliged to Calculate: "My School", Markets, and Equipping Parents for Calculativeness

    Gobby, Brad

    2016-01-01

    This paper argues neoliberal programs of government in education are equipping parents for calculativeness. Regimes of testing and the publication of these results and other organizational data are contributing to a public economy of numbers that increasingly oblige citizens to calculate. Using the notions of calculative and market devices, this…

  9. A revised calculational model for fission

    Atchison, F.

    1998-09-01

    A semi-empirical parametrization has been developed to calculate the fission contribution to evaporative de-excitation of nuclei with a very wide range of charge, mass and excitation-energy and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: Nuclei from Ta to Cf, interactions involving nucleons up to medium energy and light ions. (author)

  10. A Java Interface for Roche Lobe Calculations

    Leahy, D. A.; Leahy, J. C.

    2015-09-01

    A Java interface for calculating various properties of the Roche lobe has been created. The geometry of the Roche lobe is important for studying interacting binary stars, particularly those with compact objects which have a companion that fills the Roche lobe. There is no known analytic solution to the Roche lobe problem. Here the geometry of the Roche lobe is calculated numerically to high accuracy and made available to the user for arbitrary input mass ratio, q.
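
    Although no analytic solution exists, Eggleton's (1983) fitting formula for the volume-equivalent Roche-lobe radius is a widely used approximation and gives a quick cross-check on numerical results such as those produced by the interface; the sketch below implements that approximation and is not the interface's own numerical method.

      import math

      def roche_lobe_radius(q):
          """Eggleton (1983) approximation to the volume-equivalent Roche-lobe
          radius in units of the orbital separation, for mass ratio q = M1/M2
          (accurate to about 1% over all q)."""
          q23 = q ** (2.0 / 3.0)
          return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

      for q in (0.1, 1.0, 10.0):
          print(q, round(roche_lobe_radius(q), 4))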

  11. Realistic level density calculation for heavy nuclei

    Cerf, N. [Institut de Physique Nucleaire, Orsay (France); Pichon, B. [Observatoire de Paris, Meudon (France); Rayet, M.; Arnould, M. [Institut d`Astronomie et d`Astrophysique, Bruxelles (Belgium)

    1994-12-31

    A microscopic calculation of the level density is performed, based on a combinatorial evaluation using a realistic single-particle level scheme. This calculation relies on a fast Monte Carlo algorithm, which allows heavy nuclei (i.e., large shell-model spaces) that could not previously be treated in combinatorial approaches to be considered. An exhaustive comparison of the predicted neutron s-wave resonance spacings with experimental data for a wide range of nuclei is presented.

  12. Flow calculation of a bulb turbine

    Goede, E.; Pestalozzi, J.

    1987-01-01

    In recent years remarkable progress has been made in the field of theoretical flow calculation. Studying the relevant literature one might receive the impression that most problems have been solved. But probing more deeply into details one becomes aware that by no means all questions are answered. The report tries to point out what may be expected of the quasi-three-dimensional flow calculation method employed and - much more important - what it must not be expected to accomplish. (orig.)

  13. Green's function calculations of light nuclei

    Sun, ZhongHao; Wu, Qiang; Xu, FuRong

    2016-09-01

    The influence of short-range correlations in nuclei was investigated with realistic nuclear forces. The nucleon-nucleon interaction was renormalized with the V_low-k technique and applied to the Green's function calculations. The Dyson equation was reformulated with the algebraic diagrammatic construction. We also analyzed the binding energy of 4He, calculated with a chiral potential and the CD-Bonn potential. The properties of the Green's function with realistic nuclear forces are also discussed.

  14. Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution

    Fog, Agner

    2008-01-01

    ... is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems.

  15. Users enlist consultants to calculate costs, savings

    1982-05-24

    Consultants who calculate payback provide expertise and a second opinion to back up energy managers' proposals. They can lower the costs of an energy-management investment by making complex comparisons of systems and recommending the best system for a specific application. Examples of payback calculations include simple payback for a school system, a university, and a Disneyland hotel, as well as internal rate of return for a corporate office building and a chain of clothing stores. (DCK)

  16. DOWNSCALE APPLICATION OF BOILER THERMAL CALCULATION APPROACH

    Zelený, Zbynĕk; Hrdlička, Jan

    2016-01-01

    Commonly used thermal calculation methods are intended primarily for large-scale boilers. Hot-water small-scale boilers, which are commonly used for home heating, have many specifics that distinguish them from large-scale boilers, especially steam boilers. This paper is focused on the application of a thermal calculation procedure that is designed for large-scale boilers to a small-scale boiler for biomass combustion with a load capacity of 25 kW. A special issue solved here is the influence of the formation of dep...

  17. Reciprocity Theorems for Ab Initio Force Calculations

    Wei, C; Mele, E J; Rappe, A M; Lewis, Steven P.; Rappe, Andrew M.

    1996-01-01

    We present a method for calculating ab initio interatomic forces which scales quadratically with the size of the system and provides a physically transparent representation of the force in terms of the spatial variation of the electronic charge density. The method is based on a reciprocity theorem for evaluating an effective potential acting on a charged ion in the core of each atom. We illustrate the method with calculations for diatomic molecules.

  18. R-matrix calculation for photoionization

    2000-01-01

    We have employed the R-matrix method to calculate differential cross sections for photoionization of helium leaving the helium ion in an excited state for incident photon energy between the N=2 and N=3 thresholds (69~73 eV) of the He+ ion. Differential cross sections for photoionization in the N=2 level at emission angle 0° are provided. Our results are in good agreement with available experimental data and theoretical calculations.

  19. Efficient Finite Element Calculation of Nγ

    Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.

    2007-01-01

    This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.

  20. Computerized calculation of material balances in carbonization

    Chistyakov, A.M.

    1980-09-01

    Charge formulations and carbonisation schedules are described by empirical formulae used to calculate the yield of coking products. An algorithm is proposed for calculating the material balance, and associated computer program. The program can be written in conventional languages, e.g. Fortran, Algol etc. The information obtained can be used for on-line assessment of the effects of charge composition and properties on the coke and by-products yields, as well as the effects of the carbonisation conditions.

  1. Calculating Cumulative Binomial-Distribution Probabilities

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
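
    The quantity such a program tabulates for k-out-of-n reliability is the cumulative binomial probability P(X >= k); a minimal sketch (not the original C implementation) is:

      from math import comb

      def cumulative_binomial(n, k, p):
          """P(X >= k) for X ~ Binomial(n, p): probability that at least k of n
          independent components succeed, each with success probability p."""
          return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

      # Reliability of a 2-out-of-3 system whose components each work with p = 0.9.
      print(cumulative_binomial(3, 2, 0.9))   # 0.972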

  2. PROSPECTS OF MANAGEMENT ACCOUNTING AND COST CALCULATION

    Marian ŢAICU

    2014-11-01

    Progress in improving production technology requires appropriate measures to achieve efficient management of costs. This raises the need for continuous improvement of management accounting and cost calculation. Accounting information in general, and management accounting information in particular, have gained importance in the current economic conditions, which are characterized by risk and uncertainty. The future development of management accounting and cost calculation is essential to meet the information needs of management.

  3. Linear Response Calculations of Spin Fluctuations

    Savrasov, S. Y.

    1998-09-01

    A variational formulation of the time-dependent linear response based on the Sternheimer method is developed in order to make practical ab initio calculations of dynamical spin susceptibilities of solids. Using gradient density functional and a muffin-tin-orbital representation, the efficiency of the approach is demonstrated by applications to selected magnetic and strongly paramagnetic metals. The results are found to be consistent with experiment and are compared with previous theoretical calculations.

  4. Environmental flow allocation and statistics calculator

    Konrad, Christopher P.

    2011-01-01

    The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
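
    As an illustration of the kind of daily-flow statistics such a tool computes, the sketch below derives a mean flow and a 7-day low flow from one year of synthetic daily values; the statistics chosen and the synthetic series are assumptions for the example and do not reproduce EFASC's output set or its VBA implementation.

      import numpy as np

      def daily_flow_statistics(flows):
          """Basic statistics for one year of daily streamflow values:
          mean flow and the minimum 7-day moving-average flow (a common low-flow statistic)."""
          flows = np.asarray(flows, dtype=float)
          seven_day_means = np.convolve(flows, np.ones(7) / 7.0, mode="valid")
          return {"mean": flows.mean(), "min_7day": seven_day_means.min()}

      # Synthetic example: a smooth seasonal signal with a summer low-flow period.
      days = np.arange(365)
      flows = 50.0 + 40.0 * np.sin(2.0 * np.pi * (days - 100) / 365.0)
      print(daily_flow_statistics(flows))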

  5. Reconstruction of the 2-Dimensional Inhomogeneous Conductivity Profile Using Iterative and Bi-Conjugate Gradient Method

    杨峰; 聂在平

    2000-01-01

    In this paper, the inversion and reconstruction of a 2-dimensional axisymmetric conductivity profile embedded in an inhomogeneous background medium is described for the low-frequency near field, using an iterative algorithm combined with the bi-conjugate gradient (BCG) method and only z-directed measurement data. Based on the electric field integral equations inside and outside the target region, an inversion integral equation is derived and discretized into matrix form by the method of moments, and the conductivity distribution in the target region is solved iteratively. The Green's function is updated at every iteration step, and the ill-posedness of the solution is controlled by Tikhonov regularization. Complex conductivity profiles are reconstructed from incomplete measurement data. Numerical results show that this method converges faster and yields better image quality than the method introduced in reference [8].
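    The authors' code is not available in this record; the sketch below only illustrates the bi-conjugate gradient step of such a scheme, solving one linearized (moment-method) system with SciPy's bicg. The matrix, right-hand side, and regularization term are stand-ins, not the paper's operators.

        import numpy as np
        from scipy.sparse.linalg import bicg

        rng = np.random.default_rng(0)
        n = 50
        # Stand-in for the discretized integral-equation matrix of one
        # linearized inversion step; the small diagonal shift plays the
        # role of Tikhonov regularization.
        A = rng.standard_normal((n, n))
        A = A @ A.T + 1e-2 * np.eye(n)
        b = rng.standard_normal(n)     # stand-in for the field-mismatch data

        x, info = bicg(A, b)           # info == 0 means the iteration converged
        print(info, np.linalg.norm(A @ x - b))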

  6. Proteome analysis of vaccinia virus IHD-W-infected HEK 293 cells with 2-dimensional gel electrophoresis and MALDI-PSD-TOF MS of on solid phase support N-terminally sulfonated peptides

    Bartel Sebastian

    2011-08-01

    Full Text Available Abstract Background Despite the successful eradication of smallpox by the WHO-led vaccination programme, pox virus infections remain a considerable health threat. The possible use of smallpox as a bioterrorism agent as well as the continuous occurrence of zoonotic pox virus infections document the relevance of deepening the understanding of virus-host interactions. Since the permissiveness of pox infections is independent of host surface receptors, but correlates with the ability of the virus to infiltrate the antiviral host response, it directly depends on the host's proteome. In this report the proteome of HEK293 cells infected with Vaccinia Virus strain IHD-W was analyzed by 2-dimensional gel electrophoresis and MALDI-PSD-TOF MS in a bottom-up approach. Results The cellular and viral proteomes of VACV IHD-W infected HEK293 cells, UV-inactivated VACV IHD-W-treated as well as non-infected cells were compared. Derivatization of peptides with 4-sulfophenyl isothiocyanate (SPITC) carried out on ZipTipμ-C18 columns enabled protein identification via the peptides' primary sequence, providing improved s/n ratios as well as signal intensities of the PSD spectra. The expression of more than 24 human proteins was modulated by the viral infection. Effects of UV-inactivated and infectious viruses on the host proteome concerning energy metabolism and proteins associated with gene expression and protein biosynthesis were quite similar. These effects might therefore be attributed to virus entry and virion proteins. However, the modulation of proteins involved in apoptosis was clearly correlated to infectious viruses. Conclusions The proteome analysis of infected cells provides insight into apoptosis modulation, regulation of cellular gene expression and the regulation of energy metabolism. The confidence of protein identifications was clearly improved by the peptides' derivatization with SPITC on a solid phase support. Some of the identified proteins

  7. Good Practices in Free-energy Calculations

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices are followed. For the most part, the theory upon which these good practices rely has been known for many years, but often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve efficiency and accuracy of free energy calculations without incurring any additional computational expense.
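    As a minimal illustration of two of the precepts mentioned (monitoring the estimator and performing the calculation bidirectionally), the sketch below applies the Zwanzig free-energy-perturbation estimator to synthetic forward and reverse energy differences; the numbers are made up, and a large gap between the two estimates would signal poor phase-space overlap.

        import numpy as np

        kT = 2.494          # kJ/mol at ~300 K
        rng = np.random.default_rng(1)

        # Synthetic samples of U1-U0 in state 0 (forward) and U0-U1 in state 1
        # (reverse), constructed so that both directions target dA ~ 3 kJ/mol.
        dU_fwd = rng.normal(3.0 + 1.5**2 / (2 * kT), 1.5, 5000)
        dU_rev = rng.normal(-3.0 + 1.5**2 / (2 * kT), 1.5, 5000)

        def fep(dU):
            """Zwanzig estimator: dA = -kT ln < exp(-dU/kT) >."""
            return -kT * np.log(np.mean(np.exp(-dU / kT)))

        dA_forward = fep(dU_fwd)
        dA_reverse = -fep(dU_rev)      # flipped to the forward direction
        print(dA_forward, dA_reverse, abs(dA_forward - dA_reverse))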

  8. Paramedics’ Ability to Perform Drug Calculations

    Eastwood, Kathyrn J

    2009-11-01

    Full Text Available Background: The ability to perform drug calculations accurately is imperative to patient safety. Research into paramedics' drug calculation abilities was first published in 2000, and for nurses' abilities the research dates back to the late 1930s. Yet there have been no studies investigating undergraduate paramedic students' ability to perform drug or basic mathematical calculations. The objective of this study was to review the literature and determine the ability of undergraduate and qualified paramedics to perform drug calculations. Methods: A search of the prehospital-related electronic databases was undertaken using the Ovid and EMBASE systems available through the Monash University Library. Databases searched included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, JSTOR, EMBASE and Google Scholar, from their beginning until the end of August 2009. We reviewed references from articles retrieved. Results: The electronic database search located 1,154 articles for review. Six additional articles were identified from reference lists of retrieved articles. Of these, 59 were considered relevant. After reviewing the 59 articles, only three met the inclusion criteria. All articles noted some level of mathematical deficiency amongst their subjects. Conclusions: This study identified only three articles. Results from these limited studies indicate a significant lack of mathematical proficiency amongst the paramedics sampled. A need exists to identify whether undergraduate paramedic students are capable of performing the required drug calculations in a non-clinical setting. [WestJEM. 2009;10:240-243.]

  9. Comparison of Polar Cap (PC) index calculations.

    Stauning, P.

    2012-04-01

    The Polar Cap (PC) index introduced by Troshichev and Andrezen (1985) is derived from polar magnetic variations and is mainly a measure of the intensity of the transpolar ionospheric currents. These currents relate to the polar cap antisunward ionospheric plasma convection driven by the dawn-dusk electric field, which in turn is generated by the interaction of the solar wind with the Earth's magnetosphere. Coefficients to calculate PCN and PCS index values from polar magnetic variations recorded at Thule and Vostok, respectively, have been derived by several different procedures in the past. The first published set of coefficients for Thule was derived by Vennerstrøm, 1991 and is still in use for calculations of PCN index values by DTU Space. Errors in the program used to calculate index values were corrected in 1999 and again in 2001. In 2005 DMI adopted a unified procedure proposed by Troshichev for calculations of the PCN index. Thus there exists 4 different series of PCN index values. Similarly, at AARI three different sets of coefficients have been used to calculate PCS indices in the past. The presentation discusses the principal differences between the various PC index procedures and provides comparisons between index values derived from the same magnetic data sets using the different procedures. Examples from published papers are examined to illustrate the differences.

  10. Accurate free energy calculation along optimized paths.

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
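    The path-construction step itself is not reproduced here; as a sketch of the downstream free-energy evaluation, the fragment below integrates a set of (synthetic) mean generalized forces over the path parameter with the trapezoidal rule, which is the usual thermodynamic-integration step.

        import numpy as np

        # Path parameter (0 = initial state, 1 = final state) and the sampled
        # mean of dU/d(lambda) in each window; the values here are placeholders.
        lam = np.linspace(0.0, 1.0, 11)
        mean_dU_dlam = 10.0 * np.cos(np.pi * lam)

        # Trapezoidal integration gives the free-energy difference along the path.
        dA = np.sum(0.5 * (mean_dU_dlam[1:] + mean_dU_dlam[:-1]) * np.diff(lam))
        print(dA)   # ~0 for this symmetric synthetic profile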

  11. Perturbation calculation of thermodynamic density of states.

    Brown, G; Schulthess, T C; Nicholson, D M; Eisenbach, M; Stocks, G M

    2011-12-01

    The density of states g (ε) is frequently used to calculate the temperature-dependent properties of a thermodynamic system. Here a derivation is given for calculating the warped density of states g*(ε) resulting from the addition of a perturbation. The method is validated for a classical Heisenberg model of bcc Fe and the errors in the free energy are shown to be second order in the perturbation. Taking the perturbation to be the difference between a first-principles quantum-mechanical energy and a corresponding classical energy, this method can significantly reduce the computational effort required to calculate g(ε) for quantum systems using the Wang-Landau approach.
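    As a hedged illustration of how a tabulated density of states feeds temperature-dependent properties (the perturbative warping itself is not reproduced), the fragment below evaluates the free energy from g(ε) with Boltzmann weights in a numerically stable way; the g(ε) used is a placeholder.

        import numpy as np

        E = np.linspace(0.0, 10.0, 1001)   # energy grid (arbitrary units, kB = 1)
        g = E**2                           # placeholder for an estimated g(E)

        def free_energy(E, g, T):
            """F(T) = -T ln sum_E g(E) exp(-E/T), evaluated with a log-sum-exp shift."""
            logw = np.log(np.where(g > 0, g, 1e-300)) - E / T
            m = logw.max()
            return -T * (m + np.log(np.sum(np.exp(logw - m))))

        for T in (0.5, 1.0, 2.0):
            print(T, free_energy(E, g, T))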

  12. Using Inverted Indices for Accelerating LINGO Calculations

    Kristensen, Thomas Greve; Nielsen, Jesper; Pedersen, Christian Nørgaard Storm

    2011-01-01

    The ever growing size of chemical data bases calls for the development of novel methods for representing and comparing molecules. One such method called LINGO is based on fragmenting the SMILES string representation of molecules. Comparison of molecules can then be performed by calculating the Tanimoto coefficient which is called the LINGOsim when used on LINGO multisets. This paper introduces a verbose representation for storing LINGO multisets which makes it possible to transform them into sparse fingerprints such that fingerprint data structures and algorithms can be used to accelerate queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialised hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices it is possible to calculate LINGOsim...

  13. Using inverted indices for accelerating LINGO calculations.

    Kristensen, Thomas G; Nielsen, Jesper; Pedersen, Christian N S

    2011-03-28

    The ever growing size of chemical databases calls for the development of novel methods for representing and comparing molecules. One such method called LINGO is based on fragmenting the SMILES string representation of molecules. Comparison of molecules can then be performed by calculating the Tanimoto coefficient, which is called LINGOsim when used on LINGO multisets. This paper introduces a verbose representation for storing LINGO multisets, which makes it possible to transform them into sparse fingerprints such that fingerprint data structures and algorithms can be used to accelerate queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialized hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices, it is possible to calculate LINGOsim similarity matrices roughly 2.6 times faster than existing methods without relying on specialized hardware.
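    The verbose representation and the tuned implementation benchmarked in the paper are not reproduced here; the following pure-Python sketch only illustrates the two ingredients named in the abstract, a LINGO multiset profile and an inverted index used to restrict LINGOsim (multiset Tanimoto) evaluation to molecules sharing at least one LINGO. The example SMILES strings are arbitrary.

        from collections import Counter, defaultdict

        def lingos(smiles, q=4):
            """Multiset of overlapping q-character substrings of a SMILES string."""
            return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

        def lingosim(a, b):
            """Multiset Tanimoto: sum of min counts over sum of max counts."""
            keys = set(a) | set(b)
            inter = sum(min(a[k], b[k]) for k in keys)
            union = sum(max(a[k], b[k]) for k in keys)
            return inter / union if union else 0.0

        db = ["CC(=O)O", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
        profiles = [lingos(s) for s in db]

        index = defaultdict(set)               # inverted index: LINGO -> molecule ids
        for i, prof in enumerate(profiles):
            for lng in prof:
                index[lng].add(i)

        query = lingos("CC(=O)OC")
        candidates = set().union(*(index[lng] for lng in query if lng in index))
        for i in sorted(candidates):
            print(db[i], lingosim(query, profiles[i]))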

  14. Automated one-loop calculations with GOSAM

    Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2011-11-15

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  15. Benchmarking calculations of excitonic couplings between bacteriochlorophylls

    Kenny, Elise P

    2015-01-01

    Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. We also caution ...
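    The quantum-chemical and TrEsp calculations themselves are not sketched here; the fragment below implements only the simplest approximation the abstract mentions, the point-dipole coupling, for an illustrative (non-crystallographic) geometry. The dipole magnitude of roughly 6 D is only a ballpark figure for a bacteriochlorophyll Qy transition, and the constants are standard SI values.

        import numpy as np

        EPS0 = 8.8541878128e-12    # F/m
        DEBYE = 3.33564e-30        # C*m
        HC100 = 1.986445857e-23    # J per cm^-1

        def point_dipole_coupling(u1, u2, r1, r2, mu_debye=6.0):
            """Point-dipole excitonic coupling (cm^-1) between two transition
            dipoles of magnitude mu_debye along unit directions u1, u2 placed
            at positions r1, r2 (metres), in vacuum."""
            d1 = mu_debye * DEBYE * u1 / np.linalg.norm(u1)
            d2 = mu_debye * DEBYE * u2 / np.linalg.norm(u2)
            R = r2 - r1
            r = np.linalg.norm(R)
            n = R / r
            V = (d1 @ d2 - 3.0 * (d1 @ n) * (d2 @ n)) / (4.0 * np.pi * EPS0 * r**3)
            return V / HC100

        # Two parallel dipoles, side by side, 10 Angstrom apart.
        print(point_dipole_coupling(np.array([0.0, 0.0, 1.0]),
                                    np.array([0.0, 0.0, 1.0]),
                                    np.array([0.0, 0.0, 0.0]),
                                    np.array([10e-10, 0.0, 0.0])))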

  16. Detailed Burnup Calculations for Research Reactors

    Leszczynski, F. [Centro Atomico Bariloche (CNEA), 8400 S. C. de Bariloche (Argentina)

    2011-07-01

    A general method (RRMCQ) has been developed by introducing a microscopic burnup scheme which uses the Monte Carlo calculated spatial power distribution of a research reactor core and a depletion code for burnup calculations as a basis for solving the nuclide material balance equations for each spatial region into which the system is divided. Continuous-energy cross-section libraries and the full 3D geometry of the system are input to the calculations. The resulting predictions for the system at successive burnup time steps are thus based on a calculation route in which both geometry and cross-sections are accurately represented, without geometry simplifications and with continuous-energy data. The main advantage of this method over the classical deterministic methods currently used is that the RRMCQ system is a direct 3D method without the limitations and errors introduced by the homogenization of geometry and condensation of energy of deterministic methods. The Monte Carlo and burnup codes adopted until now are the widely used MCNP5 and ORIGEN2 codes, but other codes can be used as well. For using this method, a well-established set of nuclear data is needed for the isotopes involved in the burnup chains, including burnable poisons, fission products and actinides. For fixing the data to be included in this set, a study of the present status of nuclear data has been performed as part of the development of the RRMCQ method. This study begins with a review of the available cross-section data for isotopes involved in burnup chains for research nuclear reactors. The main data needs for burnup calculations are neutron cross-sections, decay constants, branching ratios, fission energy and yields. The present work includes results of selected experimental benchmarks and conclusions about the sensitivity of different sets of cross-section data for burnup calculations, using some of the main available evaluated nuclear data files. Basically, the RRMCQ detailed burnup method includes four
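    None of the RRMCQ machinery is reproduced here; as a minimal sketch of the nuclide material-balance step that any such scheme solves between transport calculations, the fragment below propagates a three-member chain with a matrix exponential. The rate constants and initial densities are placeholders, not evaluated nuclear data.

        import numpy as np
        from scipy.linalg import expm

        lam_A, lam_B = 1.0e-6, 5.0e-7     # placeholder removal/production rates (1/s)
        A = np.array([[-lam_A,    0.0, 0.0],
                      [ lam_A, -lam_B, 0.0],
                      [   0.0,  lam_B, 0.0]])   # chain A -> B -> C

        N0 = np.array([1.0e20, 0.0, 0.0])       # initial number densities
        for t in (0.0, 1.0e6, 1.0e7):           # seconds
            print(t, expm(A * t) @ N0)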

  17. Dose calculations for intakes of ore dust

    O'Brien, R.S.

    1998-08-01

    This report describes a methodology for calculating the committed effective dose for mixtures of radionuclides, such as those which occur in natural radioactive ores and dusts. The formulae are derived from first principles, with the use of reasonable assumptions concerning the nature and behaviour of the radionuclide mixtures. The calculations are complicated because these 'ores' contain a range of particle sizes, have different degrees of solubility in blood and other body fluids, and also have different biokinetic clearance characteristics from the organs and tissues in the body. The naturally occurring radionuclides also tend to occur in series, i.e. one is produced by the radioactive decay of another 'parent' radionuclide. The formulae derived here can be used, in conjunction with a model such as LUDEP, for calculating total dose resulting from inhalation and/or ingestion of a mixture of radionuclides, and also for deriving annual limits on intake and derived air concentrations for these mixtures. 15 refs., 14 tabs., 3 figs.
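    The derived formulae and the LUDEP biokinetics are not reproduced in this record; the sketch below shows only the final bookkeeping step, a committed effective dose obtained as the sum of intake times dose coefficient over the mixture. The coefficients used are placeholders, not ICRP or LUDEP values.

        # E = sum_i intake_i [Bq] * dose_coefficient_i [Sv/Bq]
        inhaled_activity_Bq = {"U-238": 120.0, "Th-230": 80.0, "Ra-226": 60.0}
        dose_coeff_Sv_per_Bq = {"U-238": 3e-6, "Th-230": 1e-5, "Ra-226": 2e-6}  # placeholders

        E = sum(inhaled_activity_Bq[n] * dose_coeff_Sv_per_Bq[n]
                for n in inhaled_activity_Bq)
        print(f"committed effective dose: {E * 1e3:.2f} mSv")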

  18. Numerical inductance calculations based on first principles.

    Shatz, Lisa F; Christensen, Craig W

    2014-01-01

    A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration.
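    The paper's Mathematica notebooks are not part of this record; the fragment below carries out the same kind of first-principles evaluation in Python, discretizing the Neumann double integral for the mutual inductance of two coaxial circular loops. The loop radii and separation are illustrative.

        import numpy as np

        MU0 = 4.0e-7 * np.pi   # H/m

        def mutual_inductance(a, b, d, n=2000):
            """Neumann formula for two coaxial circular loops of radii a and b (m)
            separated axially by d (m), evaluated on an n x n quadrature grid."""
            phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
            p1, p2 = np.meshgrid(phi, phi, indexing="ij")
            dphi = 2.0 * np.pi / n
            num = a * b * np.cos(p1 - p2)
            den = np.sqrt(a**2 + b**2 - 2.0 * a * b * np.cos(p1 - p2) + d**2)
            return MU0 / (4.0 * np.pi) * np.sum(num / den) * dphi * dphi

        print(mutual_inductance(0.10, 0.10, 0.05))   # two 10 cm loops, 5 cm apart (henries)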

  19. Challenges in Large Scale Quantum Mechanical Calculations

    Ratcliff, Laura E; Huhs, Georg; Deutsch, Thierry; Masella, Michel; Genovese, Luigi

    2016-01-01

    During the past decades, quantum mechanical methods have undergone an amazing transition from pioneering investigations of experts into a wide range of practical applications, made by a vast community of researchers. First principles calculations of systems containing up to a few hundred atoms have become a standard in many branches of science. The sizes of the systems which can be simulated have increased even further during recent years, and quantum-mechanical calculations of systems up to many thousands of atoms are nowadays possible. This opens up new appealing possibilities, in particular for interdisciplinary work, bridging together communities of different needs and sensibilities. In this review we will present the current status of this topic, and will also give an outlook on the vast multitude of applications, challenges and opportunities stimulated by electronic structure calculations, making this field an important working tool and bringing together researchers of many different domains.

  20. Cosmology calculations almost without general relativity

    Jordan, T F

    2003-01-01

    The Friedmann equation can be derived for a Newtonian universe. Changing mass density to energy density gives exactly the Friedmann equation of general relativity. Accounting for work done by pressure then yields the two Einstein equations that govern the expansion of the universe. Descriptions and explanations of radiation pressure and vacuum pressure are added to complete a basic kit of cosmology tools. It provides a basis for teaching cosmology to undergraduates in a way that quickly equips them to do basic calculations. This is demonstrated with calculations involving: characteristics of the expansion for densities dominated by radiation, matter, or vacuum; the closeness of the density to the critical density; how much vacuum energy compared to matter energy is needed to make the expansion accelerate; and how little is needed to make it stop. Travel time and luminosity distance are calculated in terms of the redshift and the densities of matter and vacuum energy, using a scaled Friedmann equation with the...
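    As a sketch of one of the calculations listed (travel or lookback time in terms of redshift and the matter and vacuum densities), the fragment below integrates dt = dz / [(1+z) H(z)] for an illustrative flat model; the parameter values are not taken from the article.

        import numpy as np
        from scipy.integrate import quad

        H0 = 70.0 * 1000.0 / 3.0857e22     # 70 km/s/Mpc expressed in 1/s
        Om, Orad, Ol = 0.3, 0.0, 0.7       # illustrative density parameters

        def H(z):
            return H0 * np.sqrt(Om * (1 + z)**3 + Orad * (1 + z)**4 + Ol)

        def lookback_time_Gyr(z):
            t, _ = quad(lambda zp: 1.0 / ((1.0 + zp) * H(zp)), 0.0, z)
            return t / 3.156e16            # seconds per Gyr

        print(lookback_time_Gyr(1.0))      # roughly 7.7 Gyr for these parameters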

  1. Parallel scalability of Hartree–Fock calculations

    Chow, Edmond, E-mail: echow@cc.gatech.edu; Liu, Xing [School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0765 (United States); Smelyanskiy, Mikhail; Hammond, Jeff R. [Parallel Computing Lab, Intel Corporation, Santa Clara, California 95054-1549 (United States)

    2015-03-14

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.

  2. Lagrange interpolation for the radiation shielding calculation

    Isozumi, Y; Miyatake, H; Kato, T; Tosaki, M

    2002-01-01

    Based on some formulas of Lagrange interpolation derived in this paper, a computer program for table calculations has been prepared. The main features of the program are as follows: 1) the maximum degree of the polynomial in the Lagrange interpolation is 10; 2) tables with either one or two variables can be used; 3) logarithmic transformations of function and/or variable values can be included; and 4) tables with discontinuities and cusps can be handled. The program has been carefully tested using the data tables in the manual of shielding calculations for radiation facilities. For all available tables in the manual, calculations with the program have performed reasonably under the conditions of 1) logarithmic transformation of both function and variable values and 2) a polynomial of degree 4 or 5.
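    The original table program is not included in the record; a minimal Python version of the one-variable Lagrange formula, applied here to log-transformed values as the abstract describes, might look as follows (the table values are made up).

        import math

        def lagrange(xs, ys, x):
            """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
            total = 0.0
            for i, (xi, yi) in enumerate(zip(xs, ys)):
                term = yi
                for j, xj in enumerate(xs):
                    if j != i:
                        term *= (x - xj) / (xi - xj)
                total += term
            return total

        # Log-log interpolation of a shielding-style table (illustrative numbers).
        E = [0.5, 1.0, 2.0, 5.0]             # MeV
        mu = [0.096, 0.070, 0.049, 0.030]    # made-up attenuation coefficients
        log_mu = lagrange([math.log(e) for e in E],
                          [math.log(m) for m in mu], math.log(1.5))
        print(math.exp(log_mu))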

  3. eQuilibrator--the biochemical thermodynamics calculator.

    Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron

    2012-01-01

    The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and is cumbersome to perform calculations with manually. Even simple thermodynamic questions like 'how much Gibbs energy is released by ATP hydrolysis at pH 5?' are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use.
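    eQuilibrator's database and pH/ionic-strength transforms are not reproduced here; the sketch below shows only the final adjustment such a tool automates, dG' = dG'0 + RT ln Q, using a ballpark standard transformed energy for ATP hydrolysis (about -30 kJ/mol near pH 7) and illustrative concentrations rather than values taken from the tool.

        import math

        R = 8.314462618e-3   # kJ/(mol*K)
        T = 298.15           # K

        def reaction_gibbs(dG0_prime, Q):
            """dG' = dG'0 + RT ln Q for reaction quotient Q."""
            return dG0_prime + R * T * math.log(Q)

        # ATP + H2O -> ADP + Pi with illustrative cellular concentrations (M).
        atp, adp, pi = 3e-3, 0.5e-3, 5e-3
        Q = (adp * pi) / atp                 # water activity taken as 1
        print(reaction_gibbs(-30.0, Q))      # noticeably more negative than -30 kJ/mol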

  4. Daylight calculations using constant luminance curves

    Betman, E. [CRICYT, Mendoza (Argentina). Laboratorio de Ambiente Humano y Vivienda

    2005-02-01

    This paper presents a simple method to manually estimate daylight availability and to make daylight calculations using constant luminance curves calculated with local illuminance and irradiance data and the all-weather model for sky luminance distribution developed in the Atmospheric Science Research Center of the University of New York (ARSC) by Richard Perez et al. Work with constant luminance curves has the advantage that daylight calculations include the problem's directionality and preserve the information of the luminous climate of the place. This permits accurate knowledge of the resource and a strong basis to establish conclusions concerning topics related to the energy efficiency and comfort in buildings. The characteristics of the proposed method are compared with the method that uses the daylight factor. (author)

  5. Calculation of Radiation Damage in SLAC Targets

    Wirth, B D; Monasterio, P; Stein, W

    2008-04-03

    Ti-6Al-4V alloys are being considered as a positron producing target in the Next Linear Collider, with an incident photon beam and operating temperatures between room temperature and 300 C. Calculations of displacement damage in Ti-6Al-4V alloys have been performed by combining high-energy particle FLUKA simulations with SPECTER calculations of the displacement cross section from the resulting energy-dependent neutron flux plus the displacements calculated from the Lindhard model from the resulting energy-dependent ion flux. The radiation damage calculations have investigated two cases, namely the damage produced in a Ti-6Al-4V SLAC positron target where the irradiation source is a photon beam with energies between 5 and 11 MeV. As well, the radiation damage dose in displacements per atom, dpa, has been calculated for a mono-energetic 196 MeV proton irradiation experiment performed at Brookhaven National Laboratory (BLIP experiment). The calculated damage rate is 0.8 dpa/year for the Ti-6Al-4V SLAC photon irradiation target, and a total damage exposure of 0.06 dpa in the BLIP irradiation experiment. In both cases, the displacements are predominantly (approximately 80%) produced by recoiling ions (atomic nuclei) from photo-nuclear collisions or proton-nuclear collisions, respectively. Approximately 25% of the displacement damage results from the neutrons in both cases. Irradiation effects studies in titanium alloys have shown substantial increases in the yield and ultimate strength of up to 500 MPa and a corresponding decrease in uniform ductility for neutron and high energy proton irradiation at temperatures between 40 and 300 C. Although the data is limited, there is an indication that the strength increases will saturate by doses on the order of a few dpa. Microstructural investigations indicate that the dominant features responsible for the strength increases were dense precipitation of a β (body-centered cubic) phase precipitate along with a high number density

  6. Contribution of Disclination Lines to the Free Energy of 2-Dimensional Liquid Crystals in the Single-Elastic-Constant Approximation

    王玉生; 张慧; 杨国宏

    2004-01-01

    In light of the φ-mapping method, the contribution of disclination lines to the free energy density of 2-dimensional liquid crystals is studied in the single-elastic-constant approximation. It is pointed out that, compared with the previous theory, the free energy density can be divided into two parts. One is the usual distortion energy density of the director field around the disclination lines. The other is the free energy density of the disclination lines themselves, which is concentrated at the disclination lines and topologically quantized in units of kπ/2. The topological quantum numbers are determined by the Hopf indices and Brouwer degrees of the director field at the disclination lines, i.e., by the disclination strengths. Using the method of Lagrange multipliers, the equilibrium equation and the molecular field of 2-dimensional liquid crystals are also obtained. It is shown that the physical meaning of the Lagrange multiplier is just the distortion energy density.

  7. Precise calculations of the deuteron quadrupole moment

    Gross, Franz L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-06-01

    Recently, two calculations of the deuteron quadrupole moment have given predictions that agree with the measured value to within 1%, resolving a long-standing discrepancy. One of these uses the covariant spectator theory (CST) and the other chiral effective field theory (cEFT). In this talk I will first briefly review the foundations and history of the CST, and then compare these two calculations with emphasis on how the same physical processes are being described using very different language. The comparison of the two methods gives new insights into the dynamics of the low-energy NN interaction.

  8. Local orbitals in electron scattering calculations*

    Winstead, Carl L.; McKoy, Vincent

    2016-05-01

    We examine the use of local orbitals to improve the scaling of calculations that incorporate target polarization in a description of low-energy electron-molecule scattering. After discussing the improved scaling that results, we consider the results of a test calculation that treats scattering from a two-molecule system using both local and delocalized orbitals. Initial results are promising. Contribution to the Topical Issue "Advances in Positron and Electron Scattering", edited by Paulo Limao-Vieira, Gustavo Garcia, E. Krishnakumar, James Sullivan, Hajime Tanuma and Zoran Petrovic.

  9. Numerical calculation of impurity charge state distributions

    Crume, E. C.; Arnurius, D. E.

    1977-09-01

    The numerical calculation of impurity charge state distributions using the computer program IMPDYN is discussed. The time-dependent corona atomic physics model used in the calculations is reviewed, and general and specific treatments of electron impact ionization and recombination are referenced. The complete program and two examples relating to tokamak plasmas are given on a microfiche so that a user may verify that his version of the program is working properly. In the discussion of the examples, the corona steady-state approximation is shown to have significant defects when the plasma environment, particularly the electron temperature, is changing rapidly.

  10. The new pooled cohort equations risk calculator

    Preiss, David; Kristensen, Søren L

    2015-01-01

    ... total cardiovascular risk score. During development of joint guidelines released in 2013 by the American College of Cardiology (ACC) and American Heart Association (AHA), the decision was taken to develop a new risk score. This resulted in the ACC/AHA Pooled Cohort Equations Risk Calculator. This risk ... disease and any measure of social deprivation. An early criticism of the Pooled Cohort Equations Risk Calculator has been its alleged overestimation of ASCVD risk which, if confirmed in the general population, is likely to result in statin therapy being prescribed to many individuals at lower risk than ...

  11. Idiot savant calendrical calculators: maths or memory?

    O'Connor, N; Hermelin, B

    1984-11-01

    Eight idiot savant calendrical calculators were tested on dates in the years 1963, 1973, 1983, 1986 and 1993. The study was carried out in 1983. Speeds of correct response were minimal in 1983 and increased markedly into the past and the future. The response time increase was matched by an increase in errors. Speeds of response were uncorrelated with measured IQ, but the numbers were insufficient to justify any inference in terms of IQ-independence. Results are interpreted as showing that memory alone is inadequate to explain the calendrical calculating performance of the idiot savant subjects.

  12. Calculated Electron Fluxes at Airplane Altitudes

    Schaefer, R K; Stanev, T

    1993-01-01

    A precision measurement of atmospheric electron fluxes has been performed on a Japanese commercial airliner (Enomoto et al., 1991). We have performed a Monte Carlo calculation of the cosmic ray secondary electron fluxes expected in this experiment. The Monte Carlo uses the hadronic portion of our neutrino flux cascade program combined with the electromagnetic cascade portion of the CERN library program GEANT. Our results give good agreement with the data, provided we boost the overall normalization of the primary cosmic ray flux by 12% over the normalization used in the neutrino flux calculation.

  13. Program Calculates Power Demands Of Electronic Designs

    Cox, Brian

    1995-01-01

    CURRENT computer program calculates power requirements of electronic designs. For given design, CURRENT reads in applicable parts-list file and file containing current required for each part. Program also calculates power required for circuit at supply potentials of 5.5, 5.0, and 4.5 volts. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19590). PC version of program (NPO-19111).

  14. Calculated optical absorption of different perovskite phases

    Castelli, Ivano Eligio; Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel

    2015-01-01

    We present calculations of the optical properties of a set of around 80 oxides, oxynitrides, and organometal halide cubic and layered perovskites (Ruddlesden-Popper and Dion-Jacobson phases) with a bandgap in the visible part of the solar spectrum. The calculations show that for different classes of perovskites the solar light absorption efficiency varies greatly depending not only on bandgap size and character (direct/indirect) but also on the dipole matrix elements. The oxides generally exhibit a fairly weak absorption efficiency due to indirect bandgaps, while the most efficient absorbers are found in the classes of oxynitride and organometal halide perovskites with strong direct transitions.

  15. Relaxation Method For Calculating Quantum Entanglement

    Tucci, R R

    2001-01-01

    In a previous paper, we showed how entanglement of formation can be defined as a minimum of the quantum conditional mutual information (a.k.a. quantum conditional information transmission). In classical information theory, the Arimoto-Blahut method is one of the preferred methods for calculating extrema of mutual information. We present a new method akin to the Arimoto-Blahut method for calculating entanglement of formation. We also present several examples computed with a computer program called Causa Comun that implements the ideas of this paper.

  16. DFT calculations with the exact functional

    Burke, Kieron

    2014-03-01

    I will discuss several works in which we calculate the exact exchange-correlation functional of density functional theory, mostly using the density-matrix renormalization group method invented by Steve White, our collaborator. We demonstrate that a Mott-Hubbard insulator is a band metal. We also perform Kohn-Sham DFT calculations with the exact functional and prove that a simple algorithm always converges. But we find that convergence becomes harder as correlations get stronger. An example from transport through molecular wires may also be discussed. Work supported by DOE grant DE-SC008696.

  17. Calculating reliability measures for ordinal data.

    Gamsu, C V

    1986-11-01

    Establishing the reliability of measures taken by judges is important in both clinical and research work. Calculating the statistic of choice, the kappa coefficient, unfortunately is not a particularly quick and simple procedure. Two much-needed practical tools have been developed to overcome these difficulties: a comprehensive and easily understood guide to the manual calculation of the most complex form of the kappa coefficient, weighted kappa for ordinal data, has been written; and a computer program to run under CP/M, PC-DOS and MS-DOS has been developed. With simple modification the program will also run on a Sinclair Spectrum home computer.
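    The manual worked example and the CP/M-era program are not reproduced; as a hedged sketch of the same statistic, the fragment below computes weighted kappa for a k x k table of ordinal ratings using quadratic disagreement weights, which is one common weighting choice; the counts are illustrative.

        import numpy as np

        def weighted_kappa(table):
            """Weighted kappa for a k x k contingency table of counts (quadratic weights)."""
            O = np.asarray(table, dtype=float)
            O /= O.sum()
            k = O.shape[0]
            E = np.outer(O.sum(axis=1), O.sum(axis=0))   # chance-expected proportions
            i, j = np.indices((k, k))
            w = (i - j) ** 2 / (k - 1) ** 2              # quadratic disagreement weights
            return 1.0 - (w * O).sum() / (w * E).sum()

        table = [[11, 3, 1, 0],
                 [ 2, 9, 4, 1],
                 [ 0, 3, 8, 2],
                 [ 0, 1, 2, 7]]
        print(weighted_kappa(table))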

  18. Improving on calculation of martensitic phenomenological theory

    2003-01-01

    Exemplified by the martensitic transformation from DO3 to 18R in a Cu-14.2Al-4.3Ni alloy, and according to the principle that an invariant habit plane can be obtained by self-accommodation between variants with twin relationships, the volume fractions of two twin-related variants, the habit-plane indices, and the orientation relationships between martensite and austenite after the transformation can be calculated on the basis of the displacement vector. Because no additional rotation matrices need to be considered and mirror symmetry operations are used, the calculation process is simple and the results are accurate.

  19. Transmission pipeline calculations and simulations manual

    Menon, E Shashi

    2014-01-01

    Transmission Pipeline Calculations and Simulations Manual is a valuable time- and money-saving tool to quickly pinpoint the essential formulae, equations, and calculations needed for transmission pipeline routing and construction decisions. The manual's three-part treatment starts with gas and petroleum data tables, followed by self-contained chapters concerning applications. Case studies at the end of each chapter provide practical experience for problem solving. Topics in this book include pressure and temperature profile of natural gas pipelines, how to size pipelines for specified f

  20. Pumping slots: Coupling impedance calculations and estimates

    Kurennoy, S.

    1993-08-01

    Coupling impedances of small pumping holes in vacuum-chamber walls have been calculated at low frequencies, i.e., for wavelengths large compared to a typical hole size, in terms of the electric and magnetic polarizabilities of the hole. The polarizabilities can be found by solving an electro- or magnetostatic problem and are known analytically for the case of an elliptic hole in a thin wall. The present paper studies the case of pumping slots. Using results of numerical calculations and analytical approximations of the polarizabilities, we give formulae for practically important estimates of the slot contribution to low-frequency coupling impedances.

  1. Necessity of Exact Calculation for Transition Probability

    LIU Fu-Sui; CHEN Wan-Fang

    2003-01-01

    This paper shows that exact calculation of the transition probability can make some systems deviate significantly from the Fermi golden rule. This paper also shows that the corresponding exact calculation of the phonon-induced hopping rate for deuterons in the Pd-D system with many-body electron screening, proposed by Ichimaru, can explain the experimental facts observed in the Pd-D system, and predicts that the perfection and low dimensionality of the Pd lattice are very important for the phonon-induced hopping rate enhancement in the Pd-D system.

  2. A New Thermodynamic Calculation Method for Binary Alloys: Part I: Statistical Calculation of Excess Functions

    2002-01-01

    An improved form of the calculation formula for the activities of the components in binary liquid and solid alloys has been derived, based on the free volume theory considering excess entropy and Miedema's model for calculating the formation heat of binary alloys. A calculation method for the excess thermodynamic functions of binary alloys, with formulas for the integral molar excess properties and partial molar excess properties of solid ordered or disordered binary alloys, has been developed. The calculated results are in good agreement with the experimental values.

  3. Engineering calculations in radiative heat transfer

    Gray, W A; Hopkins, D W

    1974-01-01

    Engineering Calculations in Radiative Heat Transfer is a six-chapter book that first explains the basic principles of thermal radiation and direct radiative transfer. Total exchange of radiation within an enclosure containing an absorbing or non-absorbing medium is then described. Subsequent chapters detail the radiative heat transfer applications and measurement of radiation and temperature.

  4. Net analyte signal calculation for multivariate calibration

    Ferre, J.; Faber, N.M.

    2003-01-01

    A unifying framework for calibration and prediction in multivariate calibration is shown based on the concept of the net analyte signal (NAS). From this perspective, the calibration step can be regarded as the calculation of a net sensitivity vector, whose length is the amount of net signal when the

  5. Towards the exact calculation of medium nuclei

    Gandolfi, Stefano [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carlson, Joseph Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lonardoni, Diego [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wang, Xiaobao [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-19

    The prediction of the structure of light and medium nuclei is crucial to test our knowledge of nuclear interactions. The calculation of nuclei from two- and three-nucleon interactions obtained from first principles is, however, one of the most challenging problems of many-body nuclear physics.

  6. Complex Kohn calculations on an overset grid

    Greenman, Loren; Lucchese, Robert; McCurdy, C. William

    2016-05-01

    An implementation of the overset grid method for complex Kohn scattering calculations is presented, along with static exchange calculations of electron-molecule scattering for small molecules including methane. The overset grid method uses multiple numerical grids, for instance Finite Element Method - Discrete Variable Representation (FEM-DVR) grids, expanded radially around multiple centers (corresponding to the individual atoms in each molecule as well as the center-of-mass of the molecule). The use of this flexible grid allows the complex angular dependence of the wavefunctions near the atomic centers to be well described, and also allows scattering wavefunctions that oscillate rapidly at large distances to be accurately represented. Additionally, due to the use of multiple grids (and also grid shells), the method is easily parallelizable. The method has been implemented in ePolyscat, a multipurpose suite of programs for general molecular scattering calculations. It is interfaced with a number of quantum chemistry programs (including MolPro, Gaussian, GAMESS, and Columbus), from which it can read molecular orbitals and wavefunctions obtained using standard computational chemistry methods. The preliminary static exchange calculations serve as a test of the applicability.

  7. Calculation of Nucleon Electromagnetic Form Factors

    Renner, D B; Dolgov, D S; Eicker, N; Lippert, T; Negele, J W; Pochinsky, A V; Schilling, K; Lippert, Th.

    2002-01-01

    The formalism is developed to express nucleon matrix elements of the electromagnetic current in terms of form factors consistent with the translational, rotational, and parity symmetries of a cubic lattice. We calculate the number of these form factors and show how appropriate linear combinations approach the continuum limit.

  8. Calculating Free Energies Using Average Force

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.

  9. Calculating Traffic based on Road Sensor Data

    Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Sikora, Monika

    2014-01-01

    Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for la

  10. Computational chemistry: Making a bad calculation

    Winter, Arthur

    2015-06-01

    Computations of the energetics and mechanism of the Morita-Baylis-Hillman reaction are "not even wrong" when compared with experiments. While computational abstinence may be the purest way to calculate challenging reaction mechanisms, taking prophylactic measures to avoid regrettable outcomes may be more realistic.

  11. Ammonia synthesis from first principles calculations

    Honkala, Johanna Karoliina; Hellman, Anders; Remediakis, Ioannis

    2005-01-01

    The rate of ammonia synthesis over a nanoparticle ruthenium catalyst can be calculated directly on the basis of a quantum chemical treatment of the problem using density functional theory. We compared the results to measured rates over a ruthenium catalyst supported on magnesium aluminum spinel...

  12. Calculation of tubular joints as compound shells

    Golovanov, A. I.

    A scheme for joining isoparametric finite shell elements with a bend in the middle surface is described. A solution is presented for the problem of the stress-strain state of a T-joint loaded by internal pressure. A refined scheme is proposed for calculating structures of this kind with allowance for the stiffness of the welded joint.

  13. IOL Power Calculation after Corneal Refractive Surgery

    Maddalena De Bernardo

    2014-01-01

    Full Text Available Purpose. To describe the different formulas that try to overcome the problem of calculating the intraocular lens (IOL) power in patients who underwent corneal refractive surgery (CRS). Methods. A PubMed literature search of all published articles on keywords associated with IOL power calculation and corneal refractive surgery, as well as the reference lists of retrieved articles, was performed. Results. A total of 33 peer-reviewed articles dealing with methods that try to overcome the problem of calculating the IOL power in patients that underwent CRS were found. According to the information needed to overcome this problem, the methods were divided into two main categories: 18 methods were based on knowledge of the patient's clinical history and 15 methods do not require such knowledge. The first group was further divided into five subgroups based on the parameters needed to make such a calculation. Conclusion. In light of our findings, to avoid unpleasant postoperative surprises, we suggest using only those methods that have shown good results in a large number of patients, possibly by averaging the results obtained with these methods.

  14. Gaseous Nitrogen Orifice Mass Flow Calculator

    Ritrivi, Charles

    2013-01-01

    The Gaseous Nitrogen (GN2) Orifice Mass Flow Calculator was used to determine Space Shuttle Orbiter Water Spray Boiler (WSB) GN2 high-pressure tank source depletion rates for various leak scenarios, and the ability of the GN2 consumables to support cooling of Auxiliary Power Unit (APU) lubrication during entry. The data was used to support flight rationale concerning loss of an orbiter APU/hydraulic system and mission work-arounds. The GN2 mass flow-rate calculator standardizes a method for rapid assessment of GN2 mass flow through various orifice sizes for various discharge coefficients, delta pressures, and temperatures. The calculator utilizes a 0.9-lb (0.4 kg) GN2 source regulated to 40 psia (276 kPa). These parameters correspond to the Space Shuttle WSB GN2 Source and Water Tank Bellows, but can be changed in the spreadsheet to accommodate any system parameters. The calculator can be used to analyze a leak source, leak rate, gas consumables depletion time, and puncture diameter that simulates the measured GN2 system pressure drop.
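    The spreadsheet itself is not part of this record; the fragment below evaluates the standard choked (critical) orifice relation for an ideal gas, which is presumably the kind of expression such a calculator implements. The discharge coefficient, orifice size, and conditions are illustrative, and the relation applies only while the flow is actually choked.

        import math

        def choked_mass_flow(Cd, d_orifice_m, P0_Pa, T0_K, gamma=1.4, R=296.8):
            """Choked mass flow (kg/s) of an ideal gas through an orifice;
            R is the specific gas constant of N2 in J/(kg*K)."""
            A = math.pi * (d_orifice_m / 2.0) ** 2
            crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
            return Cd * A * P0_Pa * math.sqrt(gamma / (R * T0_K)) * crit

        # 0.5 mm hole, Cd = 0.8, 276 kPa upstream, 295 K: an illustrative leak case.
        print(choked_mass_flow(0.8, 0.5e-3, 276e3, 295.0))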

  15. Block Tridiagonal Matrices in Electronic Structure Calculations

    Petersen, Dan Erik

    This thesis focuses on some of the numerical aspects of the treatment of the electronic structure problem, in particular that of determining the ground state electronic density for the non-equilibrium Green's function formulation of two-probe systems and the calculation of transmission in the Lan...

  16. Vibrational Spectra and Quantum Calculations of Ethylbenzene

    Jian Wang; Xue-jun Qiu; Yan-mei Wang; Song Zhang; Bing Zhang

    2012-01-01

    Normal vibrations of ethylbenzene in the first excited state have been studied using resonant two-photon ionization spectroscopy. The band origin of the S1←S0 transition of ethylbenzene appears at 37586 cm-1. A vibrational spectrum extending 2000 cm-1 above the band origin in the first excited state has been obtained. Several chain torsions and normal vibrations are observed in the spectrum. The energies of the first excited state are calculated by the time-dependent density functional theory and configuration interaction singles (CIS) methods with various basis sets. The optimized structures and vibrational frequencies of the S0 and S1 states are calculated using Hartree-Fock and CIS methods with the 6-311++G(2d,2p) basis set. The calculated geometric structures in the S0 and S1 states are gauche conformations in which the symmetry plane of the ethyl group is perpendicular to the ring plane. All the observed spectral bands have been successfully assigned with the help of our calculations.

  17. Calculation of Thermochemical Constants of Propellants

    K. P. Rao

    1979-01-01

    Full Text Available A method for the calculation of thermochemical constants and products of explosion of propellants from knowledge of the molecular formulae and heats of formation of the ingredients is given. A computer programme in AUTOMATH-400 has been established for the method. The results of applying the method to a number of propellants are given.

  18. Calculations of dietary exposure to acrylamide

    Boon, P.E.; Mul, de A.; Voet, van der H.; Donkersgoed, van G.; Brette, M.; Klaveren, van J.D.

    2005-01-01

    In this paper we calculated the usual and acute exposure to acrylamide (AA) in the Dutch population and in young children (1-6 years). For this, AA levels of different food groups were used as collected by the Institute for Reference Materials and Measurements (IRMM) of the European Commission's Directo

  19. Precipitates/Salts Model Sensitivity Calculation

    P. Mariner

    2001-12-20

    The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, "Calculations", in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.

  20. Heat pipe thermosyphon heat performance calculation

    Novomestský, Marcel; Kapjor, Andrej; Papučík, Štefan; Siažik, Ján

    2016-06-01

    In this article the heat performance of a heat pipe thermosiphon is obtained from a numerical model. The heat performance is calculated from a few simplified equations which depend on the working fluid and the geometry. The effective thermal conductivity is also worth mentioning, because the differences between heat pipes and fully solid surfaces are substantial.

  1. Conductance calculations with a wavelet basis set

    Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel

    2003-01-01

    ... The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way...

  2. 40 CFR 1065.650 - Emission calculations.

    2010-07-01

    ... into the system boundary, this work flow rate signal becomes negative; in this case, include these negative work rate values in the integration to calculate total work from that work path. Some work paths... interval. When power flows into the system boundary, the power/work flow rate signal becomes negative;...

  3. 7 CFR 760.307 - Payment calculation.

    2010-01-01

    ...) The monthly feed cost calculated by using the normal carrying capacity of the eligible grazing land of...) By 56. (j) The monthly feed cost using the normal carrying capacity of the eligible grazing land... pastureland by (ii) The normal carrying capacity of the specific type of eligible grazing land or...

  4. Tubular stabilizer bars – calculations and construction

    Adam-Markus WITTEK

    2011-01-01

    Full Text Available The article outlines the calculation methods for tubular stabilizer bars. Modern technological and structural solutions in contemporary cars are reflected also in the construction, selection and manufacturing of tubular stabilizer bars. A proper construction and the selection of parameters influence the strength properties, the weight, durability and reliability as well as the selection of an appropriate production method.

  5. Stabilizer bars: Part 1. Calculations and construction

    Adam-Markus WITTEK

    2010-01-01

    Full Text Available The article outlines the calculation methods for stabilizer bars. Modern technological and structural solutions in contemporary cars are reflected also in the construction and manufacturing of stabilizer bars. A proper construction and the selection of parameters influence the strength properties, the weight, durability and reliability as well as the selection of an appropriate production method.

  6. 7 CFR 1416.704 - Payment calculation.

    2010-01-01

    ... for: (1) Seedlings or cuttings, for trees, bushes or vine replanting; (2) Site preparation and debris...) Replacement, rehabilitation, and pruning; and (6) Labor used to transplant existing seedlings established..., the county committee shall calculate payment based on the number of qualifying trees, bushes or...

  7. On the calculation of Mossbauer isomer shift

    Filatov, Michael

    2007-01-01

A quantum chemical computational scheme for the calculation of isomer shift in Mossbauer spectroscopy is suggested. Within the described scheme, the isomer shift is treated as a derivative of the total electronic energy with respect to the radius of a finite nucleus. The explicit use of a finite nucleus ...

  8. Normalisation of database expressions involving calculations

    Denneheuvel, S. van; Renardel de Lavalette, G.R.

    2008-01-01

    In this paper we introduce a relational algebra extended with a calculate operator and derive, for expressions in the corresponding language PCSJL, a normalisation procedure. PCSJL plays a role in the implementation of the Rule Language RL; the normalisation is to be used for query optimisation.

  9. Using Angle calculations to demonstrate vowel shifts

    Fabricius, Anne

    2008-01-01

This paper gives an overview of the long-term trends of diachronic changes evident within the short vowel system of RP during the 20th century. More specifically, it focusses on changing juxtapositions of the TRAP, STRUT and LOT, FOOT vowel centroid positions. The paper uses geometric calculation...

  10. Procedures for Calculating Residential Dehumidification Loads

    Winkler, Jon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Booten, Chuck [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-06-01

    Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads leading to smaller air conditioners and shorter cooling seasons. However due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it's becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however minor modifications to current Air-Conditioner Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% DP design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations; whereas the second method made more conservative assumptions impacting both sensible and latent loads.
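
As a rough, hedged illustration of the kind of part-load moisture arithmetic described above, the sketch below estimates a ventilation latent load and converts it to a dehumidifier capacity. The 0.68 coefficient is the standard grains-based latent factor used in Manual J-style calculations; the air flow, humidity ratios and internal gains are assumed values, not data from the report.

```python
# Minimal sketch (not the NREL procedure itself): estimating a residential
# part-load latent (moisture) load from ventilation and internal gains.
# All numbers are illustrative assumptions.

def ventilation_latent_load_btuh(cfm, w_outdoor_gr, w_indoor_gr):
    """Latent load of outdoor air, Btu/h.

    0.68 ~= 60 min/h * 0.075 lb/ft^3 * 1060 Btu/lb / 7000 gr/lb.
    Humidity ratios are in grains of water per pound of dry air.
    """
    return 0.68 * cfm * (w_outdoor_gr - w_indoor_gr)

def moisture_load_pints_per_day(latent_btuh):
    """Convert a latent load to a dehumidifier rating (pints/day).

    1 pint of water ~= 1.04 lb; latent heat ~= 1060 Btu/lb.
    """
    return latent_btuh * 24.0 / (1060.0 * 1.04)

if __name__ == "__main__":
    # Assumed conditions: indoor air at ~78 gr/lb (75 F / 60% RH), humid
    # part-load outdoor air at ~110 gr/lb, 60 cfm of fresh-air ventilation.
    vent = ventilation_latent_load_btuh(60.0, 110.0, 78.0)
    internal = 1200.0  # assumed internal latent gains (people, cooking), Btu/h
    total = vent + internal
    print(f"ventilation latent load: {vent:.0f} Btu/h")
    print(f"total moisture load:     {moisture_load_pints_per_day(total):.1f} pints/day")
```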

  11. Radionuclide release calculations for SAR-08

    Thomson, Gavin; Miller, Alex; Smith, Graham; Jackson, Duncan (Enviros Consulting Ltd, Wolverhampton (United Kingdom))

    2008-04-15

Following a review by the Swedish regulatory authorities of the post-closure safety assessment of the SFR 1 disposal facility for low and intermediate waste (L/ILW), SAFE, the SKB has prepared an updated assessment called SAR-08. This report describes the radionuclide release calculations that have been undertaken as part of SAR-08. The information, assumptions and data used in the calculations are reported and the results are presented. The calculations address issues raised in the regulatory review, but also take account of new information including revised inventory data. The scenarios considered include the main case of expected behaviour of the system, with variants; low probability releases, and so-called residual scenarios. Apart from these scenario uncertainties, data uncertainties have been examined using a probabilistic approach. Calculations have been made using the AMBER software. This allows all the component features of the assessment model to be included in one place. AMBER has previously been used to reproduce the results of the corresponding calculations in the SAFE assessment. It has also been used in demonstrations of the IAEA's near surface disposal assessment methodology ISAM, has been subject to very substantial verification tests, and has been used in verifying other assessment codes. Results are presented as a function of time for the release of radionuclides from the near field, and then from the far field into the biosphere. Radiological impacts of the releases are reported elsewhere. Consideration is given to each radionuclide and to each component part of the repository. The releases from the entire repository are also presented. The peak release rates are, for most scenarios, due to organic C-14. Other radionuclides which contribute to peak release rates include inorganic C-14, Ni-59 and Ni-63. (author)

  12. Calculating Contained Firing Facility (CFF) explosive

    Lyle, J W.

    1998-10-20

The University of California awarded LLNL contract No. B345381 for the design of the facility to Parsons Infrastructure Technology, Inc., of Pasadena, California. The Laboratory specified that the firing chamber be able to withstand repeated firings of 60 Kg of explosive located in the center of the chamber, 4 feet above the floor, and repeated firings of 35 Kg of explosive at the same height and located anywhere within 2 feet of the edge of a region on the floor called the anvil. Other requirements were that the chamber be able to accommodate the penetrations of the existing bullnose of the Bunker 801 flash X-ray machine and the roof of the underground camera room. These requirements and provisions for blast-resistant doors formed the essential basis for the design. The design efforts resulted in a steel-reinforced concrete structure measuring (on the inside) 55 x 51 feet by 30 feet high. The walls and ceiling are to be approximately 6 feet thick. Because the 60-Kg charge is not located in the geometric center of the volume and a 35-Kg charge could be located anywhere in a prescribed area, there will be different dynamic pressures and impulses on the various walls, floor, and ceiling, depending upon the weights and locations of the charges. The detailed calculations and specifications to achieve the design criteria were performed by Parsons and are included in Reference 1. Reference 2, Structures to Resist the Effects of Accidental Explosions (TM5-1300), is the primary design manual for structures of this type. It includes an analysis technique for the calculation of blast loadings within a cubicle or containment-type structure. Parsons used the TM5-1300 methods to calculate the loadings on the various firing chamber surfaces for the design criteria explosive weights and locations. At LLNL the same methods were then used to determine the firing zones for other weights and elevations that would give the same or lesser loadings. Although very laborious, a hand

  13. CANISTER HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS

    C.E. Sanders

    2005-04-07

    This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for

  14. First-principles calculations of novel materials

    Sun, Jifeng

Computational material simulation is becoming more and more important as a branch of material science. Depending on the scale of the systems, there are many simulation methods, i.e. first-principles calculation (or ab-initio), molecular dynamics, mesoscale methods and continuum methods. Among them, first-principles calculation, which involves density functional theory (DFT) and is based on quantum mechanics, has become a reliable tool in condensed matter physics. DFT is a single-electron approximation for solving many-body problems. Intrinsically speaking, both DFT and ab-initio belong to first-principles calculations, since the theoretical background of ab-initio is the Hartree-Fock (HF) approximation and both are aimed at solving the Schrodinger equation of the many-body system using the self-consistent field (SCF) method and calculating the ground state properties. The difference is that DFT introduces parameters either from experiments or from other molecular dynamics (MD) calculations to approximate the expressions of the exchange-correlation terms. The exchange term is accurately calculated but the correlation term is neglected in HF. In this dissertation, DFT based first-principles calculations were performed for all the novel and interesting materials introduced. Specifically, the DFT theory together with the rationale behind related properties (e.g. electronic, optical, defect, thermoelectric, magnetic) is introduced in Chapter 2. From Chapter 3 to Chapter 5, several representative materials are studied. In particular, a new semiconducting oxytelluride, Ba2TeO, is studied in Chapter 3. Our calculations indicate a direct semiconducting character with a band gap value of 2.43 eV, which agrees well with the optical experiment (~2.93 eV). Moreover, the optical and defect properties of Ba2TeO are also systematically investigated with a view to understanding its potential as an optoelectronic or transparent conducting material. We find

  15. On the Origins of Calculation Abilities

    A. Ardila

    1993-01-01

Full Text Available A historical review of calculation abilities is presented. Counting, starting with finger sequencing, has been observed in different ancient and contemporary cultures, whereas number representation and arithmetic abilities are found only during the last 5000–6000 years. The rationale for selecting a base of ten in most numerical systems and the clinical association between acalculia and finger agnosia are analyzed. Finger agnosia (as a restricted form of autotopagnosia), right–left discrimination disturbances, semantic aphasia, and acalculia are proposed to comprise a single neuropsychological syndrome associated with left angular gyrus damage. A classification of calculation disturbances resulting from brain damage is presented. It is emphasized that, using historical/anthropological analysis, it becomes evident that acalculia, finger agnosia, and disorders in right–left discrimination (as, in general, in the use of spatial concepts) must constitute a single clinical syndrome, resulting from the disruption of some common brain activity and the impairment of common cognitive mechanisms.

  16. High-Power Wind Turbine: Performance Calculation

    Goldaev Sergey V.

    2015-01-01

Full Text Available The paper is devoted to calculating the performance of a high-power wind turbine, using Pearson's chi-squared test of the statistical hypothesis that the general population of air velocities follows a Weibull-Gnedenko distribution. The distribution parameters are found by numerical solution of a transcendental equation, with the gamma function evaluated by an interpolation formula. Values of the operating characteristic of the incomplete gamma function are obtained by numerical integration using Weddle's rule. A comparison of the results calculated with the proposed methodology against those obtained by other authors found significant differences in the values of the sample variance and of the empirical Pearson statistic. The influence of the initial and maximum wind speeds on the performance of the high-power wind turbine is also analysed.
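
The statistical step described above can be illustrated with a short, hedged sketch: fit a Weibull distribution to a wind-speed sample and check the fit with Pearson's chi-squared statistic. The data are synthetic and the binning choices are assumptions; this is not the authors' methodology or code.

```python
# Illustrative sketch: Weibull fit to wind speeds plus a chi-squared check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
speeds = stats.weibull_min.rvs(c=2.0, scale=7.5, size=2000, random_state=rng)

# Fit shape (k) and scale (c) with the location fixed at zero.
k, loc, c = stats.weibull_min.fit(speeds, floc=0)

# Bin the sample and compare observed vs. expected counts.
edges = np.linspace(0.0, speeds.max(), 13)
observed, _ = np.histogram(speeds, bins=edges)
cdf = stats.weibull_min.cdf(edges, k, loc=0, scale=c)
expected = len(speeds) * np.diff(cdf)
expected *= observed.sum() / expected.sum()   # normalise to equal totals

chi2, p = stats.chisquare(observed, expected, ddof=2)  # 2 fitted parameters
print(f"k = {k:.2f}, c = {c:.2f} m/s, chi2 = {chi2:.1f}, p = {p:.3f}")
```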

  17. Isogeometric analysis in electronic structure calculations

    Cimrman, Robert; Kolman, Radek; Tůma, Miroslav; Vackář, Jiří

    2016-01-01

In electronic structure calculations, various material properties can be obtained by means of computing the total energy of a system as well as derivatives of the total energy w.r.t. atomic positions. The derivatives, also known as Hellman-Feynman forces, require, because of practical computational reasons, the discretized charge density and wave functions having continuous second derivatives in the whole solution domain. We describe an application of isogeometric analysis (IGA), a spline modification of finite element method (FEM), to achieve the required continuity. The novelty of our approach is in employing the technique of Bézier extraction to add the IGA capabilities to our FEM based code for ab-initio calculations of electronic states of non-periodic systems within the density-functional framework, built upon the open source finite element package SfePy. We compare FEM and IGA in benchmark problems and several numerical results are presented.

  18. Equation of State from Lattice QCD Calculations

    Gupta, Rajan [Los Alamos National Laboratory

    2011-01-01

    We provide a status report on the calculation of the Equation of State (EoS) of QCD at finite temperature using lattice QCD. Most of the discussion will focus on comparison of recent results obtained by the HotQCD and Wuppertal-Budapest collaborations. We will show that very significant progress has been made towards obtaining high precision results over the temperature range of T = 150-700 MeV. The various sources of systematic uncertainties will be discussed and the differences between the two calculations highlighted. Our final conclusion is that these lattice results of EoS are precise enough to be used in the phenomenological analysis of heavy ion experiments at RHIC and LHC.

  19. Labview virtual instruments for calcium buffer calculations.

    Reitz, Frederick B; Pollack, Gerald H

    2003-01-01

    Labview VIs based upon the calculator programs of Fabiato and Fabiato (J. Physiol. Paris 75 (1979) 463) are presented. The VIs comprise the necessary computations for the accurate preparation of multiple-metal buffers, for the back-calculation of buffer composition given known free metal concentrations and stability constants used, for the determination of free concentrations from a given buffer composition, and for the determination of apparent stability constants from absolute constants. As implemented, the VIs can concurrently account for up to three divalent metals, two monovalent metals and four ligands thereof, and the modular design of the VIs facilitates further extension of their capacity. As Labview VIs are inherently graphical, these VIs may serve as useful templates for those wishing to adapt this software to other platforms.
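
As a hedged, minimal illustration of the kind of computation such VIs perform, the sketch below solves the single-metal, single-ligand case: free Ca2+ from total Ca, total EGTA and an apparent dissociation constant. The concentrations and the Kd value are assumed for illustration only; the actual VIs handle up to three divalent metals, two monovalent metals and four ligands simultaneously.

```python
# Free metal concentration for 1:1 binding, from the mass-balance quadratic.
import math

def free_metal(total_metal, total_ligand, kd):
    """Free metal concentration for 1:1 metal-ligand binding (all units molar)."""
    b = total_ligand - total_metal + kd
    # Positive root of: Mf^2 + (Lt - Mt + Kd)*Mf - Kd*Mt = 0
    return (-b + math.sqrt(b * b + 4.0 * kd * total_metal)) / 2.0

# Assumed example: 2 mM total Ca, 5 mM EGTA, apparent Kd ~ 150 nM (pH ~7.2).
ca_free = free_metal(2e-3, 5e-3, 150e-9)
print(f"free [Ca2+] ~ {ca_free * 1e9:.0f} nM")   # ~100 nM for these inputs
```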

  20. Tearing mode stability calculations with pressure flattening

    Ham, C J; Cowley, S C; Hastie, R J; Hender, T C; Liu, Y Q

    2013-01-01

Calculations of tearing mode stability in tokamaks split conveniently into an external region, where marginally stable ideal MHD is applicable, and a resonant layer around the rational surface where sophisticated kinetic physics is needed. These two regions are coupled by the stability parameter Δ'. Pressure and current perturbations localized around the rational surface alter the stability of tearing modes. Equations governing the changes in the external solution and Δ' are derived for arbitrary perturbations in axisymmetric toroidal geometry. The relationship of Δ' with and without pressure flattening is obtained analytically for four pressure flattening functions. Resistive MHD codes do not contain the appropriate layer physics and therefore cannot predict stability directly. They can, however, be used to calculate Δ'. Existing methods (Ham et al. 2012 Plasma Phys. Control. Fusion 54 025009) for extracting Δ' from resistive codes are unsatisfactory when there is a finite pressure gradient at the rational surface ...

  1. Normal mode calculations of trigonal selenium

    Hansen, Flemming Yssing; McMurry, H. L.

    1980-01-01

The phonon dispersion relations for trigonal selenium have been calculated on the basis of a short range potential field model. Electrostatic long range forces have not been included. The force field is defined in terms of symmetrized coordinates which reflect partly the symmetry of the space group. The intrachain force field is projected from a valence type field including a bond stretch, angle bend, and dihedral torsion. With these coordinates we obtain the strong dispersion of the upper optic modes as observed by neutron scattering, where other models have failed and give flat bands. In this way we have eliminated the ambiguity in the choice of valence coordinates, which has been a problem in previous models which used valence type interactions. Calculated sound velocities and elastic moduli are also given.

  2. A Methodology for Calculating Radiation Signatures

    Klasky, Marc Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wilcox, Trevor [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bathke, Charles G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); James, Michael R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-01

A rigorous formalism is presented for calculating radiation signatures from both Special Nuclear Material (SNM) and radiological sources. The use of MCNP6 in conjunction with CINDER/ORIGEN is described to allow for the determination of both neutron and photon leakages from objects of interest. In addition, a description of the use of MCNP6 to properly model the background neutron and photon sources is also presented. The physics issues encountered in the modeling are examined so as to guide the user in discerning the relevant physics to incorporate into general radiation signature calculations. Furthermore, examples are provided to assist in delineating the pertinent physics that must be accounted for. Finally, examples of detector modeling utilizing MCNP are provided, along with a discussion of the generation of Receiver Operating Curves, which are the suggested means by which to determine the detectability of radiation signatures emanating from objects.

  3. Numerical calculations of magnetic properties of nanostructures

    Kapitan, Vitalii; Nefedev, Konstantin

    2015-01-01

Magnetic force microscopy and scanning tunneling microscopy data can be used to test computer numerical models of magnetism. The elaborated numerical model of a face-centered lattice of Ising spins is based on the pixel distribution in images of magnetic nanostructures obtained with a scanning microscope. Monte Carlo simulation of the magnetic structure model allowed us to define the temperature dependence of the magnetization and to calculate magnetic hysteresis curves and the distribution of magnetization on the surface of submonolayer and monolayer nanofilms of cobalt, depending on the experimental conditions. Our package of parallel supercomputer software is designed for numerical simulation of magnetic-force experiments and allows the distribution of magnetization in one-dimensional arrays of nanodots, and in structures built on their basis, to be obtained. An interpretation of magnetic-force microscopy images of the states of magnetic nanodots has been determined. The results of the supercomputer simulations and numerical calculations are in...
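
A minimal sketch of the kind of Monte Carlo calculation mentioned above is given below: a Metropolis simulation of a small square-lattice Ising model that produces the temperature dependence of the magnetisation. It is a generic toy on a simple square lattice, not the authors' face-centred-lattice or pixel-based model; the lattice size and sweep counts are arbitrary.

```python
# Metropolis Monte Carlo for a small 2D Ising lattice (J = 1, periodic).
import numpy as np

def metropolis_sweep(spins, beta, rng):
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb          # energy change of a single flip
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def magnetisation(L=16, T=2.0, sweeps=2000, thermalise=500, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    m = 0.0
    for s in range(sweeps):
        metropolis_sweep(spins, 1.0 / T, rng)
        if s >= thermalise:
            m += abs(spins.mean())
    return m / (sweeps - thermalise)

for T in (1.5, 2.27, 3.0):                   # Tc ~ 2.269 for the 2D Ising model
    print(f"T = {T:4.2f}  <|m|> ~ {magnetisation(T=T):.3f}")
```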

  4. A priori calculations for the rotational stabilisation

    Iwata Yoritaka

    2013-12-01

Full Text Available The synthesis of chemical elements is mostly realised by low-energy heavy-ion reactions. The synthesis of exotic and heavy nuclei, as well as that of superheavy nuclei, is essential not only to find out the origin and the limit of the chemical elements but also to clarify the historical/chemical evolution of our universe. Although the lifetimes of exotic nuclei are not so long, their indispensable roles in chemical evolution have been pointed out. Here we are interested in examining the rotational stabilisation. In this paper an a priori calculation (before microscopic density functional calculations) is carried out for the rotational stabilisation effect, in which the balance between the nuclear force, the Coulomb force and the centrifugal force is taken into account.

  5. Calculation of the $K\to\pi l\nu$ form factor

    Gamiz, E.; /CAFPE, Granada /Granada U., Theor. Phys. Astrophys. /Fermilab; DeTar, C.; /Utah U.; El-Khadra, A.X.; /Illinois U., Urbana; Kronfeld, A.S.; /Fermilab; Mackenzie, P.B.; /Fermilab; Simone, J.; /Fermilab

    2011-11-01

    We report on the status of the Fermilab-MILC calculation of the form factor f{sub +}{sup K}{pi}(q{sup 2} = 0), needed to extract the CKM matrix element |V{sub us}| from experimental data on K semileptonic decays. The HISQ formulation is used in the simulations for the valence quarks, while the sea quarks are simulated with the asqtad action (MILC N{sub f} = 2 + 1 configurations). We discuss the general methodology of the calculation, including the use of twisted boundary conditions to get values of the momentum transfer close to zero and the different techniques applied for the correlators fits. We present initial results for lattice spacings a {approx} 0.12 fm and a {approx} 0.09 fm, and several choices of the light quark masses.

  6. Pressure Correction in Density Functional Theory Calculations

    Lee, S H

    2008-01-01

First-principles calculations based on density functional theory have been widely used in studies of the structural, thermoelastic, rheological, and electronic properties of earth-forming materials. The exchange-correlation term, however, is implemented based on various approximations, and this is believed to be the main reason for discrepancies between experiments and theoretical predictions. In this work, by using periclase MgO as a prototype system we examine the discrepancies in pressure and Kohn-Sham energy that are due to the choice of the exchange-correlation functional. For instance, we choose the local density approximation and the generalized gradient approximation. We perform extensive first-principles calculations at various temperatures and volumes and find that the exchange-correlation-based discrepancies in Kohn-Sham energy and pressure should be independent of temperature. This implies that the physical quantities, such as the equation of state, heat capacity, and the Grüneisen parameter, estimated...

  7. The Gravity- Powered Calculator, a Galilean Exhibit

    Cerreta, Pietro

    2014-04-01

The Gravity-Powered Calculator is an exhibit at the Exploratorium in San Francisco. It is presented by its American creators as an amazing device that extracts the square roots of numbers using only the force of gravity. But if one analyzes its conceptual construction, one cannot help but recall Galileo's research on falling bodies, the inclined plane and projectile motion; exactly what the American creators did not put into prominence with their exhibit. Considering the equipment only for what it does is, in my opinion, very reductive compared to the historical roots of the Galilean mathematical physics contained therein. Moreover, if the deductions contained in S. Drake's famous study of the Galilean drawings, and in particular of Folio 167v, are accurate, the parabolic paths of the ball leaping from its launch pad after descending a slope really do actualize Galileo's experiments. The exhibit therefore might best be known as a 'Galilean calculator'.
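
The principle, as I read it, can be sketched with a few lines of Galilean kinematics: a ball released from height h leaves the ramp with v = sqrt(2gh), falls a fixed drop d, and therefore lands a distance 2*sqrt(h*d) away, so the landing scale reads off a square root. The drop height and the height-per-number scale below are assumptions, not the Exploratorium's actual dimensions.

```python
# Hedged sketch of the square-root-by-gravity principle (frictionless ideal).
import math

G = 9.81      # m/s^2
DROP = 0.25   # assumed fixed fall height below the launch lip, m

def landing_range(h):
    """Horizontal landing distance for a release height h (m): 2*sqrt(h*DROP)."""
    v = math.sqrt(2.0 * G * h)        # launch speed after descending the ramp
    t = math.sqrt(2.0 * DROP / G)     # time to fall the fixed drop
    return v * t

# If the release scale marks h = N centimetres for an input number N, the
# landing scale can be graduated so the reading is proportional to sqrt(N).
for n in (4, 9, 16, 25):
    print(n, round(landing_range(n / 100.0), 3), "m")   # ranges go as 2:3:4:5
```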

  8. On the Origins of Calculation Abilities

    Ardila, A.

    1993-01-01

    A historical review of calculation abilities is presented. Counting, starting with finger sequencing, has been observed in different ancient and contemporary cultures, whereas number representation and arithmetic abilities are found only during the last 5000–6000 years. The rationale for selecting a base of ten in most numerical systems and the clinical association between acalculia and finger agnosia are analyzed. Finger agnosia (as a restricted form of autotopagnosia), right–left discrimina...

  9. Scaling Calculations for a Relativistic Gyrotron.

    2014-09-26

... a relativistic gyrotron. The results of calculations are given in Section 3. The non-linear, slow-time-scale equations of motion used for these... corresponds to a cylindrical resonator and a thin annular electron beam, with the beam radius chosen to coincide with a maximum of the resonator... entering the cavity. A tractable set of non-linear equations based on a slow-time-scale formulation developed previously was used. For this

  10. A Paleolatitude Calculator for Paleoclimate Studies.

    van Hinsbergen, Douwe J J; de Groot, Lennart V; van Schaik, Sebastiaan J; Spakman, Wim; Bijl, Peter K; Sluijs, Appy; Langereis, Cor G; Brinkhuis, Henk

    2015-01-01

    Realistic appraisal of paleoclimatic information obtained from a particular location requires accurate knowledge of its paleolatitude defined relative to the Earth's spin-axis. This is crucial to, among others, correctly assess the amount of solar energy received at a location at the moment of sediment deposition. The paleolatitude of an arbitrary location can in principle be reconstructed from tectonic plate reconstructions that (1) restore the relative motions between plates based on (marine) magnetic anomalies, and (2) reconstruct all plates relative to the spin axis using a paleomagnetic reference frame based on a global apparent polar wander path. Whereas many studies do employ high-quality relative plate reconstructions, the necessity of using a paleomagnetic reference frame for climate studies rather than a mantle reference frame appears under-appreciated. In this paper, we briefly summarize the theory of plate tectonic reconstructions and their reference frames tailored towards applications of paleoclimate reconstruction, and show that using a mantle reference frame, which defines plate positions relative to the mantle, instead of a paleomagnetic reference frame may introduce errors in paleolatitude of more than 15° (>1500 km). This is because mantle reference frames cannot constrain, or are specifically corrected for the effects of true polar wander. We used the latest, state-of-the-art plate reconstructions to build a global plate circuit, and developed an online, user-friendly paleolatitude calculator for the last 200 million years by placing this plate circuit in three widely used global apparent polar wander paths. As a novelty, this calculator adds error bars to paleolatitude estimates that can be incorporated in climate modeling. The calculator is available at www.paleolatitude.org. We illustrate the use of the paleolatitude calculator by showing how an apparent wide spread in Eocene sea surface temperatures of southern high latitudes may be in part

  11. Prediction and calculation for new energy development

    Fu Yuhua; Fu Anjie

    2008-01-01

Some important questions for new energy development were discussed, such as the prediction and calculation of sea surface temperature, ocean waves, offshore platform prices, typhoon tracks, fire status, vibration due to earthquakes, energy prices, stock market trends and so on, with the fractal methods (including the four of constant dimension fractal, variable dimension fractal, complex number dimension fractal and fractal series) and the improved rescaled range analysis (R/S analysis).

  12. Calculation and application of liquidus projection

    CHEN Shuanglin; CAO Weisheng; YANG Ying; ZHANG Fan; WU Kaisheng; DU Yong; Y.Austin Chang

    2006-01-01

    Liquidus projection usually refers to a two-dimensional projection of ternary liquidus univariant lines at constant pressure. The algorithms used in Pandat for the calculation of liquidus projection with isothermal lines and invariant reaction equations in a ternary system are presented. These algorithms have been extended to multicomponent liquidus projections and have also been implemented in Pandat. Some examples on ternary and quaternary liquidus projections are presented.

  13. Flow calculation in a bulb turbine

    Goede, E.; Pestalozzi, J.

    1987-02-01

In recent years remarkable progress has been made in the field of computational fluid dynamics. Sometimes the impression may arise when reading the relevant literature that most of the problems in this field have already been solved. Upon studying the matter more deeply, however, it is apparent that some questions still remain unanswered. The use of the quasi-3D (Q3D) computational method for calculating the flow in a bulb hydraulic turbine is described.

  14. Calculation of reactor antineutrino spectra in TEXONO

    Chen Dong Liang; Mao Ze Pu; Wong, T H

    2002-01-01

In low energy reactor antineutrino physics experiments, whether for research on antineutrino oscillations and antineutrino reactions, or for the measurement of the anomalous magnetic moment of the antineutrino, the flux and the spectra of reactor antineutrinos must be described accurately. The method of calculation of reactor antineutrino spectra is discussed in detail. Furthermore, based on the actual circumstances of the NP2 reactors and the arrangement of the detectors, the flux and the spectra of reactor antineutrinos in TEXONO were worked out.

  15. Perturbative calculation of quasi-normal modes

    Siopsis, G

    2005-01-01

    I discuss a systematic method of analytically calculating the asymptotic form of quasi-normal frequencies. In the case of a four-dimensional Schwarzschild black hole, I expand around the zeroth-order approximation to the wave equation proposed by Motl and Neitzke. In the case of a five-dimensional AdS black hole, I discuss a perturbative solution of the Heun equation. The analytical results are in agreement with the results from numerical analysis.

  16. Theoretical Calculations of Atomic Data for Spectroscopy

    Bautista, Manuel A.

    2000-01-01

    Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and assess how reliable are spectral models based on those data.

  17. Calculation of Loudspeaker Cabinet Diffraction and Correction

    LE Yi; SHEN Yong; XIA Jie

    2011-01-01

A method of calculating the cabinet edge diffractions for a loudspeaker driver mounted in an enclosure is proposed, based on the extended Biot-Tolstoy-Medwin model. Up to the third order, cabinet diffractions are discussed in detail and the diffractive effects on the radiated sound field of the loudspeaker system are quantitatively described, with a correction function built to compensate for the diffractive interference. The method is applied to a practical loudspeaker enclosure that has rectangular facets. The diffractive effects of the cabinet on the forward sound radiation are investigated and predictions of the calculations show quite good agreement with experimental measurements. Most loudspeaker systems employ box-like cabinets. The response of a loudspeaker mounted in a box is much rougher than that of the same driver mounted on a large baffle. Although resonances in the box are partly responsible for the lack of smoothness, a major contribution is the diffraction at the cabinet edges, which aggravates the final response performance. Consequently, an analysis of the cabinet diffraction problem is required.

  18. Configuration mixing calculations in soluble models

    Cambiaggio, M. C.; Plastino, A.; Szybisz, L.; Miller, H. G.

    1983-07-01

    Configuration mixing calculations have been performed in two quasi-spin models using basis states which are solutions of a particular set of Hartree-Fock equations. Each of these solutions, even those which do not correspond to the global minimum, is found to contain interesting physical information. Relatively good agreement with the exact lowest-lying states has been obtained. In particular, one obtains a better approximation to the ground state than that provided by Hartree-Fock.

  19. A Paleolatitude Calculator for Paleoclimate Studies.

    Douwe J J van Hinsbergen

Full Text Available Realistic appraisal of paleoclimatic information obtained from a particular location requires accurate knowledge of its paleolatitude defined relative to the Earth's spin-axis. This is crucial to, among others, correctly assess the amount of solar energy received at a location at the moment of sediment deposition. The paleolatitude of an arbitrary location can in principle be reconstructed from tectonic plate reconstructions that (1) restore the relative motions between plates based on (marine) magnetic anomalies, and (2) reconstruct all plates relative to the spin axis using a paleomagnetic reference frame based on a global apparent polar wander path. Whereas many studies do employ high-quality relative plate reconstructions, the necessity of using a paleomagnetic reference frame for climate studies rather than a mantle reference frame appears under-appreciated. In this paper, we briefly summarize the theory of plate tectonic reconstructions and their reference frames tailored towards applications of paleoclimate reconstruction, and show that using a mantle reference frame, which defines plate positions relative to the mantle, instead of a paleomagnetic reference frame may introduce errors in paleolatitude of more than 15° (>1500 km). This is because mantle reference frames cannot constrain, or are specifically corrected for the effects of true polar wander. We used the latest, state-of-the-art plate reconstructions to build a global plate circuit, and developed an online, user-friendly paleolatitude calculator for the last 200 million years by placing this plate circuit in three widely used global apparent polar wander paths. As a novelty, this calculator adds error bars to paleolatitude estimates that can be incorporated in climate modeling. The calculator is available at www.paleolatitude.org. We illustrate the use of the paleolatitude calculator by showing how an apparent wide spread in Eocene sea surface temperatures of southern high

  20. Index calculation by means of harmonic expansion

    Imamura, Yosuke

    2015-01-01

We review the derivation of superconformal indices by means of supersymmetric localization and spherical harmonic expansion for 3d N=2, 4d N=1, and 6d N=(1,0) supersymmetric gauge theories. We demonstrate the calculation of indices for vector multiplets in each dimension by analysing energy eigenmodes on S^p x R. For the 6d index we consider the perturbative contribution only. We focus on the technical details of the harmonic expansion rather than physical applications.

  1. Bias in Dynamic Monte Carlo Alpha Calculations

    Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nolen, Steven Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adams, Terry R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-06

    A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
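
The log-of-a-stochastic-quantity effect can be illustrated with a small toy calculation (mine, not MCATK's): estimate a growth constant as the logarithm of a ratio of Poisson-fluctuating counts and watch the mean error shrink roughly like 1/N.

```python
# Toy demonstration: bias from taking the log of a noisy ratio scales ~1/N.
import numpy as np

rng = np.random.default_rng(1)
true_ratio = 1.10                     # assumed true population growth per step
alpha_true = np.log(true_ratio)

for n in (100, 1000, 10000, 100000):  # expected counts per generation
    # Poisson-fluctuating counts at two successive time points, many replicas.
    n0 = rng.poisson(n, size=200000)
    n1 = rng.poisson(n * true_ratio, size=200000)
    alpha_est = np.log(n1 / n0)       # log of a stochastic ratio
    bias = alpha_est.mean() - alpha_true
    print(f"N ~ {n:6d}   mean bias = {bias:+.2e}   N*bias = {n * bias:+.3f}")
```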

  2. Preconditioned iterations to calculate extreme eigenvalues

    Brand, C.W.; Petrova, S. [Institut fuer Angewandte Mathematik, Leoben (Austria)

    1994-12-31

Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
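
For readers unfamiliar with the baseline these methods improve on, the sketch below is a plain, unpreconditioned power iteration whose convergence rate is governed by the eigenvalue separation mentioned above. The test matrix is deliberately diagonal so the exact answer is obvious; this is not the preconditioned scheme of the paper.

```python
# Plain power iteration for the largest-magnitude eigenvalue of a sparse matrix.
import numpy as np
import scipy.sparse as sp

def power_iteration(A, tol=1e-10, max_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                     # Rayleigh quotient (x is normalised)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam_new, x

# Example: diagonal test matrix with one well-separated top eigenvalue,
# so convergence is fast and the exact answer is just the largest diagonal.
n = 300
vals = np.random.default_rng(3).uniform(0.0, 1.0, n)
vals[-1] += 12.0                            # well-separated extreme eigenvalue
A = sp.diags(vals).tocsr()
lam, _ = power_iteration(A)
print(f"power iteration: {lam:.6f}   exact: {A.diagonal().max():.6f}")
```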

  3. CALCULATION OF KAON ELECTROMAGNETIC FORM FACTOR

    WANG ZHI-GANG; WAN SHAO-LONG; WANG KE-LIN

    2001-01-01

The kaon meson electromagnetic form factor is calculated in the framework of coupled Schwinger-Dyson and Bethe-Salpeter formulation in simplified impulse approximation (dressed vertex) with modified flat-bottom potential, which is a combination of the flat-bottom potential taking into consideration the infrared and ultraviolet asymptotic behaviours of the effective quark-gluon coupling. All the numerical results give a good fit to experimental values.

  4. TINTE. Nuclear calculation theory description report

    Gerwin, H.; Scherer, W.; Lauer, A. [Forschungszentrum Juelich GmbH (DE). Institut fuer Energieforschung (IEF), Sicherheitsforschung und Reaktortechnik (IEF-6); Clifford, I. [Pebble Bed Modular Reactor (Pty) Ltd. (South Africa)

    2010-01-15

The Time Dependent Neutronics and Temperatures (TINTE) code system deals with the nuclear and the thermal transient behaviour of the primary circuit of the High-temperature Gas-cooled Reactor (HTGR), taking into consideration the mutual feedback effects in two-dimensional axisymmetric geometry. This document contains a complete description of the theoretical basis of the TINTE nuclear calculation, including the equations solved, solution methods and the nuclear data used in the solution. (orig.)

  5. Warhead Performance Calculations for Threat Hazard Assessment

    1996-08-01

... correlation can be drawn between an explosive's heat of combustion, heat of detonation, and its EWF. The method of Baroody and Peters [41] was used to calculate... from air-blast tests can be rationalized to a combination of an explosive's heat of combustion and heat of detonation ratioed to the heat of... Center, China Lake, California, NWC TM 3754, February 1979. 41. Baroody, E. and Peters, S., Heats of Explosion, Heat of Detonation, and Reaction

  6. Toward a nitrogen footprint calculator for Tanzania

    Hutton, Mary Olivia; Leach, Allison M.; Leip, Adrian; Galloway, James N.; Bekunda, Mateete; Sullivan, Clare; Lesschen, Jan Peter

    2017-03-01

    We present the first nitrogen footprint model for a developing country: Tanzania. Nitrogen (N) is a crucial element for agriculture and human nutrition, but in excess it can cause serious environmental damage. The Sub-Saharan African nation of Tanzania faces a two-sided nitrogen problem: while there is not enough soil nitrogen to produce adequate food, excess nitrogen that escapes into the environment causes a cascade of ecological and human health problems. To identify, quantify, and contribute to solving these problems, this paper presents a nitrogen footprint tool for Tanzania. This nitrogen footprint tool is a concept originally designed for the United States of America (USA) and other developed countries. It uses personal resource consumption data to calculate a per-capita nitrogen footprint. The Tanzania N footprint tool is a version adapted to reflect the low-input, integrated agricultural system of Tanzania. This is reflected by calculating two sets of virtual N factors to describe N losses during food production: one for fertilized farms and one for unfertilized farms. Soil mining factors are also calculated for the first time to address the amount of N removed from the soil to produce food. The average per-capita nitrogen footprint of Tanzania is 10 kg N yr‑1. 88% of this footprint is due to food consumption and production, while only 12% of the footprint is due to energy use. Although 91% of farms in Tanzania are unfertilized, the large contribution of fertilized farms to N losses causes unfertilized farms to make up just 83% of the food production N footprint. In a developing country like Tanzania, the main audiences for the N footprint tool are community leaders, planners, and developers who can impact decision-making and use the calculator to plan positive changes for nitrogen sustainability in the developing world.
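
The structure of such a footprint calculation can be sketched as consumption multiplied by virtual N factors, weighted by the fertilised/unfertilised split. All numbers below are placeholders chosen for illustration; they are not the Tanzanian virtual N factors or consumption data.

```python
# Schematic per-capita food N footprint: consumption-weighted virtual N factors.
food_n_consumed = {          # kg N per capita per year actually eaten (assumed)
    "cereals": 2.0,
    "legumes": 1.0,
    "animal products": 0.8,
}
vnf_fertilized = {"cereals": 3.0, "legumes": 1.5, "animal products": 6.0}
vnf_unfertilized = {"cereals": 2.0, "legumes": 1.2, "animal products": 5.0}
share_fertilized = 0.09      # assumed fraction of production from fertilised farms

def food_footprint():
    total = 0.0
    for food, n_eaten in food_n_consumed.items():
        # N consumed is eventually released as waste, plus production losses
        # described by a weighted virtual N factor.
        vnf = (share_fertilized * vnf_fertilized[food]
               + (1.0 - share_fertilized) * vnf_unfertilized[food])
        total += n_eaten + n_eaten * vnf
    return total

energy_footprint = 1.2       # placeholder kg N per capita per year
print(f"food N footprint:  {food_footprint():.1f} kg N/yr")
print(f"total N footprint: {food_footprint() + energy_footprint:.1f} kg N/yr")
```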

  7. Automation of 2-loop Amplitude Calculations

    Jones, S P

    2016-01-01

    Some of the tools and techniques that have recently been used to compute Higgs boson pair production at NLO in QCD are discussed. The calculation relies on the use of integral reduction, to reduce the number of integrals which must be computed, and expressing the amplitude in terms of a quasi-finite basis, which simplifies their numeric evaluation. Emphasis is placed on sector decomposition and Quasi-Monte Carlo (QMC) integration which are used to numerically compute the master integrals.
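
As a hedged illustration of why Quasi-Monte Carlo integration is attractive, the toy below compares plain Monte Carlo with scrambled Sobol points on a smooth low-dimensional integrand. It has nothing to do with the actual master integrals of the calculation; the integrand, dimension and sample size are arbitrary choices.

```python
# Plain MC vs. scrambled Sobol QMC on a smooth test integrand over [0,1]^4.
import numpy as np
from scipy.stats import qmc

def f(x):                      # product integrand; its exact integral is 1
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

exact = 1.0
rng = np.random.default_rng(0)
n = 2 ** 12                    # Sobol sequences prefer powers of two

mc_est = f(rng.random((n, 4))).mean()
sobol = qmc.Sobol(d=4, scramble=True, seed=0)
qmc_est = f(sobol.random(n)).mean()

print(f"plain MC error:    {abs(mc_est - exact):.2e}")
print(f"QMC (Sobol) error: {abs(qmc_est - exact):.2e}")
```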

  8. Uncertainty calculation in (operational) modal analysis

    Pintelon, R.; Guillaume, P.; Schoukens, J.

    2007-08-01

    In (operational) modal analysis the modal parameters of a structure are identified from the response of that structure to (unmeasurable operational) perturbations. A key issue that remains to be solved is the calculation of uncertainty bounds on the estimated modal parameters. The present paper fills this gap. The theory is illustrated by means of a simulation and a real measurement example (operational modal analysis of a bridge).

  9. Eigenvalue translation method for mode calculations.

    Gerck, E; Cruz, C H

    1979-05-01

A new method is described for calculating the first few modes in an interferometer; it has several advantages over the Allmat subroutine, the Prony method, and the Fox and Li method. The illustrative results shown for some cases show that the eigenvalue translation method is typically 100 times faster than the usual Fox and Li method and ten times faster than Allmat.

  10. Inductance Calculations of Variable Pitch Helical Inductors

    2015-08-01

... current. Using the classical skin depth definition, we can adjust the effective diameters used to calculate the inductances. The classical skin depth can... are not. The definition of classical skin depth is an approximation that assumes that all the current is flowing evenly within the region encompassed... inductance can be applied to other more complex forms of geometry, including tapered coils, by simply using the more general forms of the self- and

  11. Practical Rhumb Line Calculations on the Spheroid

    Bennett, G. G.

    About ten years ago this author wrote the software for a suite of navigation programmes which was resident in a small hand-held computer. In the course of this work it became apparent that the standard text books of navigation were perpetuating a flawed method of calculating rhumb lines on the Earth considered as an oblate spheroid. On further investigation it became apparent that these incorrect methods were being used in programming a number of calculator/computers and satellite navigation receivers. Although the discrepancies were not large, it was disquieting to compare the results of the same rhumb line calculations from a number of such devices and find variations of some miles when the output was given, and therefore purported to be accurate, to a tenth of a mile in distance and/or a tenth of a minute of arc in position. The problem has been highlighted in the past and the references at the end of this show that a number of methods have been proposed for the amelioration of this problem. This paper summarizes formulae that the author recommends should be used for accurate solutions. Most of these may be found in standard geodetic text books, such as, but also provided are new formulae and schemes of solution which are suitable for use with computers or tables. The latter also take into account situations when a near-indeterminate solution may arise. Some examples are provided in an appendix which demonstrate the methods. The data for these problems do not refer to actual terrestrial situations but have been selected for illustrative purposes only. Practising ships' navigators will find the methods described in detail in this paper to be directly applicable to their work and also they should find ready acceptance because they are similar to current practice. In none of the references cited at the end of this paper has the practical task of calculating, using either a computer or tabular techniques, been addressed.
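
A hedged sketch of one such spheroidal solution is given below: the course from the difference in (spheroidal) isometric latitude, the distance from the meridian arc divided by the cosine of the course, with the east-west case handled along the parallel. This is a generic textbook formulation on WGS-84 constants, not a transcription of the author's recommended formulae; the example positions are purely illustrative.

```python
# Spheroidal rhumb-line (loxodrome) course and distance, WGS-84 constants.
import math

A_WGS84 = 6378137.0
F_WGS84 = 1.0 / 298.257223563
E2 = F_WGS84 * (2.0 - F_WGS84)
E = math.sqrt(E2)

def isometric_lat(phi):
    """Spheroidal isometric latitude (radians in, dimensionless out)."""
    return math.atanh(math.sin(phi)) - E * math.atanh(E * math.sin(phi))

def meridian_arc(phi):
    """Meridian arc length from the equator, metres (series in e^2)."""
    e2, e4, e6 = E2, E2 ** 2, E2 ** 3
    return A_WGS84 * ((1 - e2/4 - 3*e4/64 - 5*e6/256) * phi
                      - (3*e2/8 + 3*e4/32 + 45*e6/1024) * math.sin(2*phi)
                      + (15*e4/256 + 45*e6/1024) * math.sin(4*phi)
                      - (35*e6/3072) * math.sin(6*phi))

def rhumb_line(lat1, lon1, lat2, lon2):
    """Constant course (deg) and distance (m) along the rhumb line."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    dpsi = isometric_lat(p2) - isometric_lat(p1)
    course = math.atan2(dlon, dpsi)
    if abs(p2 - p1) > 1e-12:
        dist = (meridian_arc(p2) - meridian_arc(p1)) / math.cos(course)
    else:   # due east-west: run along the parallel of latitude
        nu = A_WGS84 / math.sqrt(1 - E2 * math.sin(p1) ** 2)
        dist = abs(dlon) * nu * math.cos(p1)
    return math.degrees(course) % 360.0, abs(dist)

course, dist = rhumb_line(50.0, -4.0, 40.0, -74.0)   # illustrative positions
print(f"course {course:.2f} deg, distance {dist / 1852.0:.1f} n.mi.")
```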

  12. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  13. Coupled-cluster calculations of nucleonic matter

    Hagen, G; Ekström, A; Wendt, K A; Baardsen, G; Gandolfi, S; Hjorth-Jensen, M; Horowitz, C J

    2014-01-01

Background: The equation of state (EoS) of nucleonic matter is central for the understanding of bulk nuclear properties, the physics of neutron star crusts, and the energy release in supernova explosions. Purpose: This work presents coupled-cluster calculations of infinite nucleonic matter using modern interactions from chiral effective field theory (EFT). It assesses the role of correlations beyond particle-particle and hole-hole ladders, and the role of three-nucleon forces (3NFs) in nuclear matter calculations with chiral interactions. Methods: This work employs the optimized nucleon-nucleon NN potential NNLOopt at next-to-next-to-leading order, and presents coupled-cluster computations of the EoS for symmetric nuclear matter and neutron matter. The coupled-cluster method employs up to selected triples clusters and the single-particle space consists of a momentum-space lattice. We compare our results with benchmark calculations and control finite-size effects and shell oscillations via twist-averaged boundary conditions...

  14. Modified embedded atom method calculations of interfaces

    Baskes, M.I.

    1996-05-01

    The Embedded Atom Method (EAM) is a semi-empirical calculational method developed a decade ago to calculate the properties of metallic systems. By including many-body effects this method has proven to be quite accurate in predicting bulk and surface properties of metals and alloys. Recent modifications have extended this applicability to a large number of elements in the periodic table. For example the modified EAM (MEAM) is able to include the bond-bending forces necessary to explain the elastic properties of semiconductors. This manuscript will briefly review the MEAM and its application to the binary systems discussed below. Two specific examples of interface behavior will be highlighted to show the wide applicability of the method. In the first example a thin overlayer of nickel on silicon will be studied. Note that this example is representative of an important technological class of materials, a metal on a semiconductor. Both the structure of the Ni/Si interface and its mechanical properties will be presented. In the second example the system aluminum on sapphire will be examined. Again the class of materials is quite different, a metal on an ionic material. The calculated structure and energetics of a number of (111) Al layers on the (0001) surface of sapphire will be compared to recent experiments.

  15. Starting Time Calculation for Induction Motor.

    Abhishek Garg

    2015-05-01

Full Text Available This paper presents the starting time calculation for a squirrel cage induction motor. The importance of the starting time lies in determining the duration of the large current which flows during the starting of an induction motor. Normally, the starting current of an induction motor is six to eight times the full load current. Plenty of methods have been devised to start the motor quickly, but because they are uneconomic their use is limited. Hence, for large motors direct-on-line (DOL) starting is the most popular of all due to its economic and feasible nature. However, the large current drawn during DOL starting results in a heavy potential drop in the power system. Thus, special care and attention are required in order to design a healthy system. A very simple method to calculate the starting time of the motor is proposed in this paper, as sketched below. A corresponding simulation study has been carried out in the Matlab 7.8.0 environment, which demonstrates the effectiveness of the starting time calculation.
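
A hedged numerical sketch of such a starting-time estimate follows: integrate J dω/(T_motor − T_load) over speed, here with a simplified Kloss torque-speed curve and a fan-type load. All machine data are assumed values; this is not the paper's Matlab model.

```python
# Starting-time estimate: t = integral of J*d(omega)/(T_motor - T_load).
import numpy as np

J = 0.35              # total inertia, kg*m^2 (motor + load, assumed)
T_break = 180.0       # breakdown torque, N*m (assumed)
s_break = 0.22        # slip at breakdown torque (assumed)
omega_sync = 2 * np.pi * 50 / 2.0      # 4-pole machine on 50 Hz -> 157.1 rad/s
T_load_rated = 60.0   # fan load torque at synchronous speed, N*m (assumed)

def motor_torque(omega):
    s = np.clip((omega_sync - omega) / omega_sync, 1e-4, 1.0)
    return 2.0 * T_break / (s / s_break + s_break / s)   # Kloss formula

def load_torque(omega):
    return T_load_rated * (omega / omega_sync) ** 2      # fan-type load

# Integrate up to ~95% of synchronous speed, near the steady operating point.
omega = np.linspace(0.0, 0.95 * omega_sync, 2000)
accel_torque = motor_torque(omega) - load_torque(omega)
dt = J / accel_torque
t_start = float(np.sum(0.5 * (dt[:-1] + dt[1:]) * np.diff(omega)))
print(f"estimated direct-on-line starting time: {t_start:.2f} s")
```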

  16. How Accurately can we Calculate Thermal Systems?

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K{sub eff}, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  17. Vestibule and Cask Preparation Mechanical Handling Calculation

    N. Ambre

    2004-05-26

The scope of this document is to develop the size, operational envelopes, and major requirements of the equipment to be used in the vestibule, cask preparation area, and the crane maintenance area of the Fuel Handling Facility. This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Ref. 167124). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (Ref. 168751). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The limitations of this preliminary calculation lie within the assumptions of Section 5, as this calculation is part of an evolutionary design process.

  18. Methods for Calculation of Geogenetic Depth

    Liu Ruixun; Lü Guxian; Wang Fangzheng; Wei Changshan; Guo Chusun

    2004-01-01

Some current methods for the calculation of the geogenetic depth are based on the hydrostatic model, from which it is deduced that the depth at a certain underground place is equal to the pressure divided by the specific weight of the rock, on the assumption that the rock is in a hydrostatic state and subject to no force other than gravity. However, most rock is in a deformation environment and a non-hydrostatic state, especially in an orogenic belt, so the depth calculated from the hydrostatic formula may be exaggerated in comparison with the actual depth. In a finite, slight-deformation elastic model, a more realistic depth value was obtained from the three-axis strain data, with the strain measurement including the contribution of superimposed tectonic forces but excluding the time factor of the strain. If data on the strain rate are obtained, the depth can be calculated more realistically according to a rheological model, because geological bodies often experience long-term creep strains.
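
For concreteness, the hydrostatic (lithostatic) formula under discussion is simply z = P/(ρg); the one-liner below evaluates it for an assumed crustal density. The article's point is precisely that tectonic stresses can make this an overestimate of the true depth.

```python
# Lithostatic depth from pressure alone, z = P / (rho * g).
RHO = 2700.0   # assumed average crustal rock density, kg/m^3
G = 9.81       # m/s^2

def lithostatic_depth_km(pressure_mpa):
    return pressure_mpa * 1e6 / (RHO * G) / 1000.0

print(f"{lithostatic_depth_km(300.0):.1f} km for 300 MPa")   # ~11.3 km
```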

  19. Calculation of sulfide capacities of multicomponent slags

    Pelton, Arthur D.; Eriksson, Gunnar; Romero-Serrano, Antonio

    1993-10-01

    The Reddy-Blander model for the sulfide capacities of slags has been modified for the case of acid slags and to include Al2O3 and TiO2 as components. The model has been extended to calculate a priori sulfide capacities of multicomponent slags, from a knowledge of the thermodynamic activities of the component oxides, with no adjustable parameters. Agreement with measurements is obtained within experimental uncertainty for binary, ternary, and quinary slags involving the components SiO2-Al2O3-TiO2-CaO-MgO-FeO-MnO over wide ranges of composition. The oxide activities used in the computations are calculated from a database of model parameters obtained by optimizing thermodynamic and phase equilibrium data for oxide systems. Sulfur has now been included in this database. A computing system with automatic access to this and other databases has been developed to permit the calculation of the sulfur content of slags in multicomponent slag/metal/gas/solid equilibria.

  20. Quantum mechanical calculations and mineral spectroscopy

    Kubicki, J. D.

    2006-05-01

    Interpretation of spectra in systems of environmental interest is not generally straightforward due to the lack of close analogs and a clear structure of some components of the system. Computational chemistry can be used as an objective method to test interpretations of spectra. This talk will focus on applying ab initio methods to complement vibrational, NMR, and EXAFS spectroscopic information. Examples of systems studied include phosphate/Fe-hydroxides, arsenate/Al- and Fe-hydroxides, and fractured silica surfaces. Phosphate interactions with Fe-hydroxides are important in controlling nutrient availability in soils and transport within streams. In addition, organo-phosphate bonding may be a key attachment mechanism for bacteria at Fe-oxide surfaces. Interpretation of IR spectra is enhanced by model predictions of vibrational frequencies for various surface complexes. Ab initio calculations were used to help explain As(V) and As(III) adsorption behavior onto amorphous Al- and Fe-hydroxides in conjunction with EXAFS measurements. Fractured silica surfaces have been implicated in silicosis. These calculations test structures that could give rise to radical formation on silica surfaces. Calculations to simulate the creation of Si and SiO radical species on surfaces and their subsequent production of OH radicals will be discussed.

  1. Treatment Registration and Nuclide Decay Calculation System

    WU Jian-guo; XU Bo; CHEN Zhi-jun; ZHOU Ai-qing; WANG Xue-qin; ZHANG Bin; MA Tao; SHEN Jun-jin; LIU Jie; JIN Hai-xia

    2008-01-01

    Objective: To design software to perform the complicated and repetitive calculations of routine internal radionuclide therapy automatically, in order to avoid mistakes and shorten patients' waiting times. Methods: The software was designed on the Microsoft Windows XP operating system, with Visual Basic 5.0 as the programming language and Microsoft Access 2000 as the database system. The Data and DBGrid controls and the VB data window guide were used to control access to the Access database. Results: Not only can the radioactivity of any radionuclide be calculated, but the total administered iodine dose for the therapy of hyperthyroidism or thyroid cancer and the total administered 153Sm-EDTMP solution for the treatment of bone metastases of malignant tumors can also be calculated. Conclusion: The work becomes easier, faster, more correct and more interesting when the software performs the complicated and repetitive calculations automatically. Patients' information, diagnoses and treatments can be recorded for further study.
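    The decay arithmetic such a program automates is standard; the Python sketch below shows the basic step of correcting a calibrated activity for decay using A(t) = A0 exp(-ln2 t / T1/2). The half-life and activity numbers are illustrative and are not taken from the paper.

      # Minimal sketch of the kind of routine decay arithmetic the software automates:
      # the activity remaining at administration time, given a calibrated activity.
      # Numbers are illustrative, not taken from the paper.
      import math

      def activity_at(a0_mbq: float, half_life_days: float, elapsed_days: float) -> float:
          """Activity (MBq) after elapsed_days of decay from an initial activity a0_mbq."""
          decay_constant = math.log(2.0) / half_life_days
          return a0_mbq * math.exp(-decay_constant * elapsed_days)

      # e.g. 131-I calibrated at 600 MBq, administered 2 days later (T1/2 ~ 8.02 d):
      print(f"{activity_at(600.0, 8.02, 2.0):.1f} MBq")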

  2. Lattice QCD Calculation of Nucleon Structure

    Liu, Keh-Fei [University of Kentucky, Lexington, KY (United States). Dept. of Physics and Astronomy; Draper, Terrence [University of Kentucky, Lexington, KY (United States). Dept. of Physics and Astronomy

    2016-08-30

    It is emphasized in the 2015 NSAC Long Range Plan that "understanding the structure of hadrons in terms of QCD's quarks and gluons is one of the central goals of modern nuclear physics." Over the last three decades, lattice QCD has developed into a powerful tool for ab initio calculations of strong-interaction physics. Up until now, it is the only theoretical approach to solving QCD with controlled statistical and systematic errors. Since 1985, we have proposed and carried out first-principles calculations of nucleon structure and hadron spectroscopy using lattice QCD which entails both algorithmic development and large-scale computer simulation. We started out by calculating the nucleon form factors -- electromagnetic, axial-vector, πNN, and scalar form factors, the quark spin contribution to the proton spin, the strangeness magnetic moment, the quark orbital angular momentum, the quark momentum fraction, and the quark and glue decomposition of the proton momentum and angular momentum. The first round of calculations were done with Wilson fermions in the `quenched' approximation where the dynamical effects of the quarks in the sea are not taken into account in the Monte Carlo simulation to generate the background gauge configurations. Beginning in 2000, we have started implementing the overlap fermion formulation into the spectroscopy and structure calculations. This is mainly because the overlap fermion honors chiral symmetry as in the continuum. It is going to be more and more important to take the symmetry into account as the simulations move closer to the physical point where the u and d quark masses are as light as a few MeV only. We began with lattices which have quark masses in the sea corresponding to a pion mass at ~ 300 MeV and obtained the strange form factors, charm and strange quark masses, the charmonium spectrum and the Ds meson decay constant fDs, the strangeness and charmness, the meson mass

  3. Lattice QCD Calculation of Nucleon Structure

    Liu, Keh-Fei; Draper, Terrence

    2016-08-30

    It is emphasized in the 2015 NSAC Long Range Plan [1] that "understanding the structure of hadrons in terms of QCD's quarks and gluons is one of the central goals of modern nuclear physics." Over the last three decades, lattice QCD has developed into a powerful tool for ab initio calculations of strong-interaction physics. Up until now, it is the only theoretical approach to solving QCD with controlled statistical and systematic errors. Since 1985, we have proposed and carried out first-principles calculations of nucleon structure and hadron spectroscopy using lattice QCD which entails both algorithmic development and large-scale computer simulation. We started out by calculating the nucleon form factors -- electromagnetic [2], axial-vector [3], πNN [4], and scalar [5] form factors, the quark spin contribution [6] to the proton spin, the strangeness magnetic moment [7], the quark orbital angular momentum [8], the quark momentum fraction [9], and the quark and glue decomposition of the proton momentum and angular momentum [10]. This first round of calculations was done with Wilson fermions in the `quenched' approximation where the dynamical effects of the quarks in the sea are not taken into account in the Monte Carlo simulation to generate the background gauge configurations. Beginning in 2000, we have started implementing the overlap fermion formulation into the spectroscopy and structure calculations [11, 12]. This is mainly because the overlap fermion honors chiral symmetry as in the continuum. It is going to be more and more important to take the symmetry into account as the simulations move closer to the physical point where the u and d quark masses are as light as a few MeV only. We began with lattices which have quark masses in the sea corresponding to a pion mass at ~300 MeV and obtained the strange form factors [13], charm and strange quark masses, the charmonium spectrum and the Ds meson decay constant fDs [14], the strangeness and charmness [15], the

  4. Numerical precision calculations for LHC physics

    Reuschle, Christian Andreas

    2013-02-05

    In this thesis I present aspects of QCD calculations, which are related to the fully numerical evaluation of next-to-leading order (NLO) QCD amplitudes, especially of the one-loop contributions, and the efficient computation of associated collider observables. Two interrelated topics have thereby been of concern to the thesis at hand, which give rise to two major parts. One large part is focused on the general group-theoretical behavior of one-loop QCD amplitudes, with respect to the underlying SU(N{sub c}) theory, in order to correctly and efficiently handle the color degrees of freedom in QCD one-loop amplitudes. To this end a new method is introduced that can be used in order to express color-ordered partial one-loop amplitudes with multiple quark-antiquark pairs as shuffle sums over cyclically ordered primitive one-loop amplitudes. The other large part is focused on the local subtraction of divergences off the one-loop integrands of primitive one-loop amplitudes. A method for local UV renormalization has thereby been developed, which uses local UV counterterms and efficient recursive routines. Together with suitable virtual soft and collinear subtraction terms, the subtraction method is extended to the virtual contributions in the calculations of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions. The method has been successfully applied to the calculation of jet rates in electron-positron annihilation to NLO accuracy in the large-N{sub c} limit.

  5. Using reciprocity in Boundary Element Calculations

    Juhl, Peter Møller; Cutanda Henriquez, Vicente

    2010-01-01

    The concept of reciprocity is widely used in both theoretical and experimental work. In Boundary Element calculations reciprocity is sometimes employed in the solution of computationally expensive scattering problems, which sometimes can be more efficiently dealt with when formulated...... as the reciprocal radiation problem. The present paper concerns the situation of having a point source (which is reciprocal to a point receiver) at or near a discretized boundary element surface. The accuracy of the original and the reciprocal problem is compared in a test case for which an analytical solution...

  6. Parallel solutions of correlation dimension calculation

    2005-01-01

    The calculation of correlation dimension is a key problem in the study of fractals. The standard algorithm requires O(N^2) computations. Previous improvements reduce redundant computation sequentially, and only under the condition that phase spaces of many different dimensions are considered, which limits both their range of application and the achievable speedup. This paper presents two fast parallel algorithms: an O(N^2/p + log p) time, p-processor PRAM algorithm and an O(N^2/p) time, p-processor LARPBS algorithm. Analysis and numerical results indicate that the speedup of the parallel algorithms relative to the sequential algorithm is substantial. Compared with the PRAM algorithm, the LARPBS algorithm is practical, optimally scalable and cost optimal.
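    For reference, a minimal serial Python sketch of the O(N^2) Grassberger-Procaccia correlation sum that these parallel algorithms accelerate is given below; the data set and radii are arbitrary test values, and no attempt is made to reproduce the PRAM or LARPBS schemes.

      # Minimal serial sketch of the O(N^2) correlation-sum computation that the paper
      # parallelizes; embedding data and radii are illustrative.
      import numpy as np

      def correlation_sum(points: np.ndarray, r: float) -> float:
          """Fraction of point pairs closer than r (Grassberger-Procaccia C(r))."""
          n = len(points)
          dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          close = np.sum(dists < r) - n          # remove self-pairs on the diagonal
          return close / (n * (n - 1))

      rng = np.random.default_rng(0)
      pts = rng.random((500, 2))                 # points filling a 2-D region
      radii = np.logspace(-2, -0.5, 8)
      c = np.array([correlation_sum(pts, r) for r in radii])
      slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
      print(f"estimated correlation dimension ~ {slope:.2f}")   # ~2 for a filled square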

  7. Calculations in fundamental physics mechanics and heat

    Heddle, T

    2013-01-01

    Calculations in Fundamental Physics, Volume I: Mechanics and Heat focuses on the mechanisms of heat. The manuscript first discusses motion, including parabolic, angular, and rectilinear motions, relative velocity, acceleration of gravity, and non-uniform acceleration. The book then discusses combinations of forces, such as polygons and resolution, friction, center of gravity, shearing force, and bending moment. The text looks at force and acceleration, energy and power, and machines. Considerations include momentum, horizontal or vertical motion, work and energy, pulley systems, gears and chai

  8. Calculational Investigation for Mine-Clearance Experiments

    1981-08-31

    [Garbled figure and table residue from the scanned report; the recoverable captions are ''FIGURE 17. SAP Problem 5.0013 Initial Mesh Configuration'' (zones of ambient air surrounding the charge) and a plot for Problem 5.0008 labeled ''Reflected Shock'' and ''Brief Negative Phase''.]

  9. Rooftop Unit Comparison Calculator User Manual

    Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-04-30

    This document serves as a user manual for the Packaged rooftop air conditioners and heat pump units comparison calculator (RTUCC) and is an aggregation of the calculator’s website documentation. Content ranges from new-user guide material like the “Quick Start” to the more technical/algorithmic descriptions of the “Methods Pages.” There is also a section listing all the context-help topics that support the features on the “Controls” page. The appendix has a discussion of the EnergyPlus runs that supported the development of the building-response models.

  10. Speed mathematics secrets skills for quick calculation

    Handley, Bill

    2011-01-01

    Using this book will improve your understanding of math and have you performing like a genius! People who excel at mathematics use better strategies than the rest of us; they are not necessarily more intelligent. Speed Mathematics teaches simple methods that will enable you to make lightning calculations in your head - including multiplication, division, addition, and subtraction, as well as working with fractions, squaring numbers, and extracting square and cube roots. Here's just one example of this revolutionary approach to basic mathematics: 96 x 97 = ? Subtract each number from 100. 96 x 97 = ? 4 3 Subtract
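    As a hedged illustration of the "reference number 100" trick quoted in the blurb, the Python sketch below implements it for any pair of factors near 100 and checks it against ordinary multiplication; the function name is mine, not the book's.

      # Sketch of the near-100 trick (96 x 97): subtract each factor from 100,
      # cross-subtract for the leading digits, multiply the differences for the rest.
      def near_100_product(a: int, b: int) -> int:
          da, db = 100 - a, 100 - b          # 4 and 3 for 96 x 97
          head = a - db                      # 96 - 3 = 93 (equivalently 97 - 4)
          tail = da * db                     # 4 * 3 = 12
          return head * 100 + tail           # 9312 (algebraically exact)

      assert near_100_product(96, 97) == 96 * 97
      print(near_100_product(96, 97))        # 9312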

  11. What Factors Affect Intraocular Lens Power Calculation?

    Fayette, Rose M; Cakiner-Egilmez, Tulay

    2015-01-01

    Obtaining precise postoperative target refraction is of utmost importance in today's modern cataract and refractive surgery. Emerging literature has linked postoperative surprises to corneal curvature, axial length, and estimation of the effective IOL position. As demonstrated in this case presentation, an inaccuracy in the axial length measurement can lead to a myopic surprise. A review of the literature has demonstrated that prevention of postoperative refractive surprises requires highly experienced nurses, technicians, and/or biometrists to take meticulous measurements using biometry devices, and surgeons to re-evaluate these calculations prior to the surgery.

  12. A Lattice Calculation of Parton Distributions

    Alexandrou, Constantia; Hadjiyiannakou, Kyriakos; Jansen, Karl; Steffens, Fernanda; Wiese, Christian

    2016-01-01

    We present results for the $x$ dependence of the unpolarized, helicity, and transversity isovector quark distributions in the proton using lattice QCD, employing the method of quasi-distributions proposed by Ji in 2013. Compared to a previous calculation by us, the errors are reduced by a factor of about 2.5. Moreover, we present our first results for the polarized sector of the proton, which indicate an asymmetry in the proton sea in favor of the $u$ antiquarks for the case of helicity distributions, and an asymmetry in favor of the $d$ antiquarks for the case of transversity distributions.

  13. Motor Torque Calculations For Electric Vehicle

    Saurabh Chauhan

    2015-08-01

    Full Text Available Abstract It is estimated that 25% of the cars across the world will run on electricity by 2025. An important component that is an integral part of all electric vehicles is the motor. The amount of torque that the driving motor delivers is what plays a decisive role in determining the speed, acceleration and performance of an electric vehicle. The following work aims at simplifying the calculations required to decide the capacity of the motor that should be used to drive a vehicle of particular specifications.
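    A minimal sketch of the kind of tractive-force and torque arithmetic the paper describes is given below; every vehicle parameter (mass, drag coefficient, wheel radius, gear ratio, drivetrain efficiency) is an illustrative assumption rather than a value from the paper.

      # Sketch of the tractive-force / motor-torque arithmetic: rolling resistance +
      # aerodynamic drag + acceleration force, converted to motor torque via the wheel
      # radius and gear ratio. All parameter values are illustrative assumptions.
      RHO_AIR = 1.225      # kg/m^3

      def motor_torque(mass_kg, accel_ms2, speed_ms, c_rr, c_d, frontal_area_m2,
                       wheel_radius_m, gear_ratio, drivetrain_eff=0.9):
          """Motor torque (N*m) needed to sustain a given acceleration at a given speed."""
          f_roll = c_rr * mass_kg * 9.81
          f_aero = 0.5 * RHO_AIR * c_d * frontal_area_m2 * speed_ms ** 2
          f_accel = mass_kg * accel_ms2
          wheel_torque = (f_roll + f_aero + f_accel) * wheel_radius_m
          return wheel_torque / (gear_ratio * drivetrain_eff)

      # e.g. a 1200 kg car accelerating at 2 m/s^2 at 60 km/h with a 9:1 reduction:
      print(f"{motor_torque(1200, 2.0, 60/3.6, 0.012, 0.30, 2.2, 0.30, 9.0):.0f} N*m")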

  14. Configurational space continuity and free energy calculations

    Tian, Pu

    2016-01-01

    Free energy is arguably the most important function(al) for understanding molecular systems. A number of rigorous and approximate free energy calculation/estimation methods have been developed over many decades. One important issue, however, the continuity in configurational space of the macrostate (or path) of interest, has not been well articulated, even though some important special cases have been discussed intensively. In this perspective, I discuss the relevance of configurational space continuity to the development of more efficient and reliable next-generation free energy methodologies.

  15. Calculations in bridge aeroelasticity via CFD

    Brar, P.S.; Raul, R.; Scanlan, R.H. [Johns Hopkins Univ., Baltimore, MD (United States)

    1996-12-31

    The central focus of the present study is the numerical calculation of flutter derivatives. These aeroelastic coefficients play an important role in determining the stability or instability of long, flexible structures under ambient wind loading. A class of Civil Engineering structures most susceptible to such an instability are long-span bridges of the cable-stayed or suspended-span variety. The disastrous collapse of the Tacoma Narrows suspension bridge in the recent past, due to a flutter instability, has been a big impetus in motivating studies in flutter of bridge decks.

  16. Molecular transport calculations with Wannier Functions

    Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel

    2005-01-01

    We present a scheme for calculating coherent electron transport in atomic-scale contacts. The method combines a formally exact Green's function formalism with a mean-field description of the electronic structure based on the Kohn-Sham scheme of density functional theory. We use an accurate plane...... is applied to a hydrogen molecule in an infinite Pt wire and a benzene-dithiol (BDT) molecule between Au(111) surfaces. We show that the transmission function of BDT in a wide energy window around the Fermi level can be completely accounted for by only two molecular orbitals. (c) 2005 Elsevier B.V. All...

  17. Electrical Conductivity Calculations from the Purgatorio Code

    Hansen, S B; Isaacs, W A; Sterne, P A; Wilson, B G; Sonnad, V; Young, D A

    2006-01-09

    The Purgatorio code [Wilson et al., JQSRT 99, 658-679 (2006)] is a new implementation of the Inferno model describing a spherically symmetric average atom embedded in a uniform plasma. Bound and continuum electrons are treated using a fully relativistic quantum mechanical description, giving the electron-thermal contribution to the equation of state (EOS). The free-electron density of states can also be used to calculate scattering cross sections for electron transport. Using the extended Ziman formulation, electrical conductivities are then obtained by convolving these transport cross sections with externally-imposed ion-ion structure factors.

  18. On the Calculation of Formal Concept Stability

    Hui-lai Zhi

    2014-01-01

    Full Text Available The idea of stability has been used in many applications. However, computing stability is still a challenge, and the best algorithms known so far have algorithmic complexity quadratic in the size of the lattice. To improve the effectiveness, a critical notion is introduced in this paper, the minimal generator, which serves as the minimal set that keeps a concept stable when deleting some objects from the extent. Moreover, the minimal generators are derived from the irreducible elements. Finally, based on the inclusion-exclusion principle and minimal generators, formulas for the calculation of concept stability are proposed.

  19. Calculated Bulk Properties of the Actinide Metals

    Skriver, Hans Lomholt; Andersen, O. K.; Johansson, B.

    1978-01-01

    Self-consistent relativistic calculations of the electronic properties for seven actinides (Ac-Am) have been performed using the linear muffin-tin orbitals method within the atomic-sphere approximation. Exchange and correlation were included in the local spin-density scheme. The theory explains...... the variation of the atomic volume and the bulk modulus through the 5f series in terms of an increasing 5f binding up to plutonium followed by a sudden localisation (through complete spin polarisation) in americium...

  20. Quantum Statistical Calculation of Exchange Bias

    WANG Huai-Yu; DAI Zhen-Hong

    2004-01-01

    The phenomenon of exchange bias in ferromagnetic (FM) films coupled to an antiferromagnetic (AFM) film is studied with the Heisenberg model, using the many-body Green's function method of quantum statistical theory, for the uncompensated case. The exchange bias HE and coercivity Hc are calculated as functions of the FM film thickness L, the temperature, the strength of the exchange interaction across the interface between FM and AFM, and the anisotropy of the FM. Hc decreases with increasing L when the FM film is beyond a certain thickness. The dependence of the exchange bias HE on the FM film thickness and on temperature is also qualitatively in agreement with experiments.

  1. Calculation of thermal noise in grating reflectors

    Heinert, Daniel; Friedrich, Daniel; Hild, Stefan; Kley, Ernst-Bernhard; Leavey, Sean; Martin, Iain W; Nawrodt, Ronny; Tünnermann, Andreas; Vyatchanin, Sergey P; Yamamoto, Kazuhiro

    2013-01-01

    Grating reflectors have been repeatedly discussed to improve the noise performance of metrological applications due to the reduction or absence of any coating material. So far, however, no quantitative estimate on the thermal noise of these reflective structures exists. In this work we present a theoretical calculation of a grating reflector's noise. We further apply it to a proposed 3rd generation gravitational wave detector. Depending on the grating geometry, the grating material and the temperature we obtain a thermal noise decrease by up to a factor of ten compared to conventional dielectric mirrors. Thus the use of grating reflectors can substantially improve the noise performance in metrological applications.

  2. A Lattice Calculation of Parton Distributions

    Alexandrou, Constantia [Cyprus Univ. Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus); Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Drach, Vincent [Univ. of Southern Denmark, Odense (Denmark). CP3-Origins; Univ. of Southern Denmark, Odense (Denmark). Danish IAS; Garcia-Ramos, Elena [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Hadjiyiannakou, Kyriakos [Cyprus Univ. Nicosia (Cyprus). Dept. of Physics; Jansen, Karl; Steffens, Fernanda; Wiese, Christian [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2015-04-15

    We report on our exploratory study for the direct evaluation of the parton distribution functions from lattice QCD, based on a recently proposed new approach. We present encouraging results using N{sub f}=2+1+1 twisted mass fermions with a pion mass of about 370 MeV. The focus of this work is a detailed description of the computation, including the lattice calculation, the matching to an infinite momentum and the nucleon mass correction. In addition, we test the effect of gauge link smearing in the operator to estimate the influence of the Wilson line renormalization, which is yet to be done.

  3. Photoionization of zinc by TDLDA calculations

    Stener, M.; Decleva, P.

    1997-10-01

    Absolute photoionization cross section profiles of Zn have been calculated at the TDLDA and LDA levels, employing a very accurate B-spline basis set and the modified Sternheimer approach. The van Leeuwen-Baerends exchange-correlation potential has been used, since its correct asymptotic behaviour is able to support virtual states and describe core-excited resonances. A comparison with available theoretical and experimental data has been performed where possible. The present method has proven robust for analysing wide photon energy regions (from threshold up to 200 eV) and for discussing the various shapes of one-electron resonances.

  4. Influence of metallic dental implants and metal artefacts on dose calculation accuracy

    Maerz, Manuel; Koelbl, Oliver; Dobler, Barbara [Regensburg University Medical Center, Department of Radiotherapy, Regensburg (Germany)

    2014-10-31

    Metallic dental implants cause severe streaking artefacts in computed tomography (CT) data, which inhibit the correct representation of shape and density of the metal and the surrounding tissue. The aim of this study was to investigate the impact of dental implants on the accuracy of dose calculations in radiation therapy planning and the benefit of metal artefact reduction (MAR). A second aim was to determine the treatment technique which is less sensitive to the presence of metallic implants in terms of dose calculation accuracy. Phantoms consisting of homogeneous water-equivalent material surrounding dental implants were designed. Artefact-containing CT data were corrected using the correct density information. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated on corrected and uncorrected CT data and compared to 2-dimensional dose measurements using GafChromic EBT2 films. For all plans the accuracy of dose calculations is significantly higher if performed on corrected CT data (p = 0.015). The agreement of calculated and measured dose distributions is significantly higher for VMAT than for IMRT plans for calculations on uncorrected CT data (p = 0.011) as well as on corrected CT data (p = 0.029). For IMRT and VMAT the application of metal artefact reduction significantly increases the agreement of dose calculations with film measurements. VMAT was found to provide the highest accuracy on corrected as well as on uncorrected CT data. VMAT is therefore preferable over IMRT for patients with metallic implants, if plan quality is comparable for the two techniques. (orig.)

  5. Handbook for the calculation of reactor protections; Formulaire sur le calcul de la protection des reacteurs

    NONE

    1963-07-01

    This note constitutes the first edition of a Handbook for the calculation of reactor protections. This handbook makes it possible to calculate simply the different neutron and gamma fluxes and consequently, to fix the minimum quantities of materials necessary under general safety conditions both for the personnel and for the installations. It contains a certain amount of nuclear data, calculation methods, and constants corresponding to the present state of our knowledge. (authors)

  6. The spacing calculator software—A Visual Basic program to calculate spatial properties of lineaments

    Ekneligoda, Thushan C.; Henkel, Herbert

    2006-05-01

    A software tool is presented which calculates the spatial properties azimuth, length, spacing, and frequency of lineaments that are defined by their starting and ending co-ordinates in a two-dimensional (2-D) planar co-ordinate system. A simple graphical interface with five display windows creates a user-friendly interactive environment. All lineaments are considered in the calculations, and no secondary sampling grid is needed for the elaboration of the spatial properties. Several rule-based decisions are made to determine the nearest lineament in the spacing calculation. As a default procedure, the programme defines a window that depends on the mode value of the length distribution of the lineaments in a study area. This makes the results more consistent, compared to the manual method of spacing calculation. Histograms are provided to illustrate and elaborate the distribution of the azimuth, length and spacing. The core of the tool is the spacing calculation between neighbouring parallel lineaments, which gives direct information about the variation of block sizes in a given category of structures. The 2-D lineament frequency is calculated for the actual area that is occupied by the lineaments.
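    The per-lineament geometry step is straightforward; the Python sketch below computes azimuth and length from start/end co-ordinates. It does not reproduce the tool's spacing logic or its mode-based search window, and the function name is mine.

      # Sketch of the per-lineament geometry step: azimuth and length from endpoints.
      import math

      def azimuth_and_length(x1, y1, x2, y2):
          """Azimuth in degrees clockwise from north (0-180) and length of a lineament."""
          dx, dy = x2 - x1, y2 - y1
          length = math.hypot(dx, dy)
          azimuth = math.degrees(math.atan2(dx, dy)) % 180.0   # undirected line
          return azimuth, length

      print(azimuth_and_length(0.0, 0.0, 100.0, 100.0))   # (45.0, ~141.4)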

  7. Calculating lunar retreat rates using tidal rhythmites

    Kvale, E.P.; Johnson, H.W.; Sonett, C.P.; Archer, A.W.; Zawistoski, A.N.N.

    1999-01-01

    Tidal rhythmites are small-scale sedimentary structures that can preserve a hierarchy of astronomically induced tidal periods. They can also preserve a record of periodic nontidal sedimentation. If properly interpreted and understood, tidal rhythmites can be an important component of paleoastronomy and can be used to extract information on ancient lunar orbital dynamics including changes in Earth-Moon distance through geologic time. Herein we present techniques that can be used to calculate ancient Earth-Moon distances. Each of these techniques, when used on a modern high-tide data set, results in calculated estimates of lunar orbital periods and an Earth-Moon distance that fall well within 1 percent of the actual values. Comparisons to results from modern tidal data indicate that ancient tidal rhythmite data as short as 4 months can provide suitable estimates of lunar orbital periods if these tidal records are complete. An understanding of basic tidal theory allows for the evaluation of completeness of the ancient tidal record as derived from an analysis of tidal rhythmites. Utilizing the techniques presented herein, it appears from the rock record that lunar orbital retreat slowed sometime during the mid-Paleozoic. Copyright 1999, SEPM (Society for Sedimentary Geology).

  8. Fastlim. A fast LHC limit calculator

    Papucci, Michele [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Sakurai, Kazuki [King' s College London (United Kingdom). Physics Dept.; Weiler, Andreas [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Zeune, Lisa [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2014-02-15

    Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the exclusion p-value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straight-forward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide, and indicates the conventions and approximations used.
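    A heavily simplified sketch of the bookkeeping described above (not Fastlim's actual code) is shown below: each signal region's visible cross section is normalised to the reported upper limit and the most constraining region is identified; the signal-region labels and numbers are invented.

      # Toy sketch (not Fastlim itself): normalise each signal region's visible cross
      # section to the experimental 95% CL upper limit and flag the model point if any
      # ratio exceeds 1. All labels and numbers below are illustrative.
      def exclusion_status(visible_xs_fb: dict, upper_limit_fb: dict) -> tuple:
          """Return (most constraining signal region, ratio) for a model point."""
          ratios = {sr: visible_xs_fb[sr] / upper_limit_fb[sr] for sr in visible_xs_fb}
          best_sr = max(ratios, key=ratios.get)
          return best_sr, ratios[best_sr]

      vis = {"SR_A": 4.2, "SR_B": 1.1}   # hypothetical visible cross sections (fb)
      lim = {"SR_A": 3.0, "SR_B": 5.0}   # hypothetical upper limits (fb)
      sr, r = exclusion_status(vis, lim)
      print(f"{sr}: ratio = {r:.2f} -> {'excluded' if r >= 1 else 'allowed'}")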

  9. Fastlim: a fast LHC limit calculator

    Papucci, Michele [University of Michigan, Michigan Center for Theoretical Physics, Ann Arbor, MI (United States); Sakurai, Kazuki [King' s College London, Physics Department, London (United Kingdom); Weiler, Andreas [CERN TH-PH Division, Meyrin (Switzerland); Zeune, Lisa [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)

    2014-11-15

    Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections (cross sections after event selection cuts) from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the CL{sub s} value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straightforward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide and indicates the conventions and approximations used. (orig.)

  10. Calculation of fractional electron capture probabilities

    Schoenfeld, E

    1998-01-01

    A 'Table of Radionuclides' is being prepared which will supersede the 'Table de Radionucleides' formerly issued by the LMRI/LPRI (France). In this effort it is desirable to have a uniform basis for calculating theoretical values of fractional electron capture probabilities. A table has been compiled which allows one to calculate conveniently and quickly the fractional probabilities P{sub K}, P{sub L}, P{sub M}, P{sub N} and P{sub O}, their ratios and the assigned uncertainties for allowed and non-unique first forbidden electron capture transitions of known transition energy for radionuclides with atomic numbers from Z=3 to 102. These results have been applied to a total of 28 transitions of 14 radionuclides ({sup 7}Be, {sup 22}Na, {sup 51}Cr, {sup 54}Mn, {sup 55}Fe, {sup 68}Ge, {sup 68}Ga, {sup 75}Se, {sup 109}Cd, {sup 125}I, {sup 139}Ce, {sup 169}Yb, {sup 197}Hg, {sup 202}Tl). The values are in reasonable agreement with measure...

  11. Criticality Calculations with MCNP6 - Practical Lectures

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3)

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes; and, given that the diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present, identify in the context of these limitations a fissile system for which a diffusion theory solution would be adequate.

  12. Fastlim: a fast LHC limit calculator.

    Papucci, Michele; Sakurai, Kazuki; Weiler, Andreas; Zeune, Lisa

    Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections (cross sections after event selection cuts) from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the CL{sub s} value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straightforward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide and indicates the conventions and approximations used.

  13. Nonlinear calculating method of pile settlement

    贺炜; 王桂尧; 王泓华

    2008-01-01

    To study a calculating method for the settlement at the top of extra-long, large-diameter piles, the relevant research results were summarized. The hyperbola model, a nonlinear load transfer function, was introduced to establish the basic differential equation with the load transfer method. Assuming that the displacement of the pile shaft is a high-order power series of the buried depth, and by merging like terms and arranging the relevant coefficients, a power-series solution was obtained that takes into account the nonlinear pile-soil interaction and the stratum properties of the soil. On the basis of this solution, and by determining the load transfer depth with a criterion on the settlement at the pile tip, a method of making the boundary conditions compatible is advised for solving the load-settlement curve of the pile. The relevant flow chart and the mathematical expressions of the boundary conditions are also listed. Lastly, the load transfer methods based on both the two-broken-line model and the hyperbola model were applied to analyzing a real project. The correlation coefficients of the hyperbola fitting curves were not less than 0.96, which shows that the hyperbola model is realistic and helps to avoid subjective error. The calculated load-settlement curve agrees well with the measured one, which indicates that the method can be applied in engineering practice and makes it feasible to limit the design bearing capacity by the settlement at the pile top.
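    For orientation, the hyperbolic load transfer function referred to above has the form tau(s) = s / (a + b s), with initial stiffness 1/a and ultimate shaft resistance 1/b; the short Python sketch below evaluates it for illustrative parameter values (not the paper's).

      # Sketch of a hyperbolic load-transfer function: shaft resistance tau mobilized by
      # local pile-soil slip s is tau = s / (a + b*s); 1/a is the initial stiffness and
      # 1/b the ultimate shaft resistance. Parameter values are illustrative assumptions.
      def shaft_resistance(slip_mm: float, a: float = 0.5, b: float = 0.01) -> float:
          """Mobilized shaft resistance (kPa) for a given pile-soil relative slip (mm)."""
          return slip_mm / (a + b * slip_mm)

      for s in (0.5, 2.0, 10.0, 50.0):
          print(f"s = {s:5.1f} mm -> tau = {shaft_resistance(s):6.1f} kPa (limit 100 kPa)")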

  14. Quantum Monte Carlo Calculations of Neutron Matter

    Carlson, J; Ravenhall, D G

    2003-01-01

    Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $\vep$ two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of neutron gas energy is ~ half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...

  15. Phage therapy pharmacology: calculating phage dosing.

    Abedon, Stephen

    2011-01-01

    Phage therapy, which can be described as a phage-mediated biocontrol of bacteria (or, simply, biocontrol), is the application of bacterial viruses-also bacteriophages or phages-to reduce densities of nuisance or pathogenic bacteria. Predictive calculations for phage therapy dosing should be useful toward rational development of therapeutic as well as biocontrol products. Here, I consider the theoretical basis of a number of concepts relevant to phage dosing for phage therapy including minimum inhibitory concentration (but also "inundation threshold"), minimum bactericidal concentration (but also "clearance threshold"), decimal reduction time (D value), time until bacterial eradication, threshold bacterial density necessary to support phage population growth ("proliferation threshold"), and bacterial density supporting half-maximal phage population growth rates (K(B)). I also address the concepts of phage killing titers, multiplicity of infection, and phage peak densities. Though many of the presented ideas are not unique to this chapter, I nonetheless provide variations on derivations and resulting formulae, plus as appropriate discuss relative importance. The overriding goal is to present a variety of calculations that are useful toward phage therapy dosing so that they may be found in one location and presented in a manner that allows facile appreciation, comparison, and implementation. The importance of phage density as a key determinant of the phage potential to eradicate bacterial targets is stressed throughout the chapter.
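    Two of the simpler quantities mentioned above can be written down directly under standard mass-action assumptions; the Python sketch below computes the multiplicity of infection and the fraction of bacteria left unadsorbed after time t, exp(-kPt), with an illustrative adsorption constant k and illustrative densities.

      # Sketch of two dosing quantities under standard mass-action assumptions
      # (constant phage density P, adsorption rate constant k). Numbers are illustrative.
      import math

      def multiplicity_of_infection(phage_per_ml: float, bacteria_per_ml: float) -> float:
          return phage_per_ml / bacteria_per_ml

      def fraction_unadsorbed(phage_per_ml: float, k_ml_per_min: float, minutes: float) -> float:
          return math.exp(-k_ml_per_min * phage_per_ml * minutes)

      P, B = 1e8, 1e5                     # phages/ml, bacteria/ml (illustrative)
      print(f"MOI = {multiplicity_of_infection(P, B):.0f}")
      print(f"unadsorbed after 30 min = {fraction_unadsorbed(P, 2.5e-9, 30):.1%}")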

  16. Calculation of the CIPW norm: New formulas

    Pruseth, Kamal L.

    2009-02-01

    A completely new set of formulas, based on matrix algebra, has been suggested for the calculation of the CIPW norm for igneous rocks to achieve highly consistent and accurate norms. The suggested sequence of derivation of the normative minerals deviates greatly from the sequence followed in the classical scheme. The formulas are presented in a form convenient for error-free implementation in computer programs. Accurate formulas are given, along with the use of variable molecular weights for CaO and FeO, corrected formula weights for apatite, pyrite and fluorite, and suggested measures to avoid significant rounding-off errors, so as to achieve an absolute match between the sum of the input weights of the oxides and the sum of the weights of the estimated normative minerals. Using a procedure for determining the oxidation ratios of igneous rocks analogous to that used in the SINCLAS system of Verma et al (2002, 2003), the suggested calculation scheme exactly reproduces their results, except for apatite for reasons explained in the text, with a superior match between the totals for about 11,200 analyses representing rocks of a wide range of composition.
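    As a toy illustration (not the paper's formulas) of how a norm allocation can be cast in matrix form, the sketch below solves b = M x for a two-oxide, two-mineral subsystem, where M holds the oxide stoichiometry of each normative mineral; it is valid only for the silica-oversaturated case shown.

      # Toy matrix-algebra norm allocation: oxide moles b = M @ x, with M the oxide
      # stoichiometry of each normative mineral and x the mineral moles. Here only
      # SiO2-MgO with enstatite (MgSiO3) and quartz (SiO2), silica-oversaturated input.
      import numpy as np

      minerals = ["enstatite", "quartz"]
      M = np.array([[1.0, 1.0],    # moles of SiO2 per mole of (enstatite, quartz)
                    [1.0, 0.0]])   # moles of MgO  per mole of (enstatite, quartz)

      b = np.array([0.80, 0.30])   # illustrative oxide moles: SiO2, MgO
      x = np.linalg.solve(M, b)
      for name, moles in zip(minerals, x):
          print(f"{name:10s}: {moles:.2f} mol")   # enstatite 0.30, quartz 0.50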

  17. Calculation of the CIPW norm: New formulas

    Kamal L Pruseth

    2009-02-01

    A completely new set of formulas, based on matrix algebra, has been suggested for the calculation of the CIPW norm for igneous rocks to achieve highly consistent and accurate norms. The suggested sequence of derivation of the normative minerals deviates greatly from the sequence followed in the classical scheme. The formulas are presented in a form convenient for error-free implementation in computer programs. Accurate formulas are given, along with the use of variable molecular weights for CaO and FeO, corrected formula weights for apatite, pyrite and fluorite, and suggested measures to avoid significant rounding-off errors, so as to achieve an absolute match between the sum of the input weights of the oxides and the sum of the weights of the estimated normative minerals. Using a procedure for determining the oxidation ratios of igneous rocks analogous to that used in the SINCLAS system of Verma et al (2002, 2003), the suggested calculation scheme exactly reproduces their results, except for apatite for reasons explained in the text, with a superior match between the totals for about 11,200 analyses representing rocks of a wide range of composition.

  18. ARTc: Anisotropic reflectivity and transmissivity calculator

    Malehmir, Reza; Schmitt, Douglas R.

    2016-08-01

    While seismic anisotropy is known to exist within the Earth's crust and even deeper, isotropic or even highly symmetric elastic anisotropic assumptions for seismic imaging is an over-simplification which may create artifacts in the image, target mis-positioning and hence flawed interpretation. In this paper, we have developed the ARTc algorithm to solve reflectivity, transmissivity as well as velocity and particle polarization in the most general case of elastic anisotropy. This algorithm is able to provide reflectivity solution from the boundary between two anisotropic slabs with arbitrary symmetry and orientation up to triclinic. To achieve this, the algorithm solves full elastic wave equation to find polarization, slowness and amplitude of all six wave-modes generated from the incident plane-wave and welded interface. In the first step to calculate the reflectivity, the algorithm solves properties of the incident wave such as particle polarization and slowness. After calculation of the direction of generated waves, the algorithm solves their respective slowness and particle polarization. With this information, the algorithm then solves a system of equations incorporating the imposed boundary conditions to arrive at the scattered wave amplitudes, and thus reflectivity and transmissivity. Reflectivity results as well as slowness and polarization are then tested in complex computational anisotropic models to ensure their accuracy and reliability. ARTc is coded in MATLAB ® and bundled with an interactive GUI and bash script to run on single or multi-processor computers.
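    The core eigenproblem behind such solvers is the Christoffel equation; the Python sketch below (not ARTc itself, and assuming NumPy) builds G_ik = C_ijkl n_j n_l from a 6x6 Voigt stiffness matrix and returns phase velocities and polarizations, checked here against an isotropic example.

      # Sketch of the Christoffel eigenproblem: for a propagation direction n, the
      # matrix G_ik = C_ijkl n_j n_l yields phase velocities (eigenvalues rho*v^2)
      # and particle polarizations (eigenvectors). Stiffness in GPa, density in
      # g/cm^3, so velocities come out in km/s. Values below are illustrative.
      import numpy as np

      VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
               (1, 2): 3, (2, 1): 3, (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}

      def christoffel_velocities(c_voigt: np.ndarray, rho: float, n: np.ndarray):
          """Phase velocities (km/s) and polarizations for a unit direction n."""
          n = n / np.linalg.norm(n)
          gamma = np.zeros((3, 3))
          for i in range(3):
              for k in range(3):
                  gamma[i, k] = sum(c_voigt[VOIGT[i, j], VOIGT[k, l]] * n[j] * n[l]
                                    for j in range(3) for l in range(3))
          vals, vecs = np.linalg.eigh(gamma)
          return np.sqrt(vals / rho), vecs

      # Isotropic check: lambda = 30 GPa, mu = 30 GPa, rho = 2.7 g/cm^3
      lam, mu, rho = 30.0, 30.0, 2.7
      C = np.zeros((6, 6))
      C[:3, :3] = lam
      C[0, 0] = C[1, 1] = C[2, 2] = lam + 2 * mu
      C[3, 3] = C[4, 4] = C[5, 5] = mu
      v, pol = christoffel_velocities(C, rho, np.array([1.0, 0.0, 0.0]))
      print(np.round(v, 2))   # ~[3.33 3.33 5.77]: two S velocities, then P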

  19. An Efficient Algorithm to Calculate BICM Capacity

    Böcherer, Georg; Alvarado, Alex; Corroy, Steven; Mathar, Rudolf

    2012-01-01

    Bit-interleaved coded modulation (BICM) is a practical approach for reliable communication over the AWGN channel in the bandwidth limited regime. For a signal point constellation with 2^m points, BICM labels the signal points with bit strings of length m and then treats these m bits separately both at transmitter and receiver. To determine the capacity of BICM, the mutual information between input and output has to be maximized over the bit pmfs. This is a non-convex optimization problem. So far, the optimal pmfs were determined via exhaustive search, which is of exponential complexity in m. In this work, an algorithm called bit-alternating convex concave method (BACM) is developed. This algorithm calculates BICM capacity with a complexity that scales approximately as m^3. The algorithm iteratively applies convex optimization techniques. BACM is used to calculate BICM capacity of 4,8,16,32, and 64-PAM in AWGN. For constellations with more than 8 points, the presented values are the first results known in lite...

  20. Proton Affinity Calculations with High Level Methods.

    Kolboe, Stein

    2014-08-12

    Proton affinities, stretching from small reference compounds, up to the methylbenzenes and naphthalene and anthracene, have been calculated with high accuracy computational methods, viz. W1BD, G4, G3B3, CBS-QB3, and M06-2X. Computed and the currently accepted reference proton affinities are generally in excellent accord, but there are deviations. The literature value for propene appears to be 6-7 kJ/mol too high. Reported proton affinities for the methylbenzenes seem 4-5 kJ/mol too high. G4 and G3 computations generally give results in good accord with the high level W1BD. Proton affinity values computed with the CBS-QB3 scheme are too low, and the error increases with increasing molecule size, reaching nearly 10 kJ/mol for the xylenes. The functional M06-2X fails markedly for some of the small reference compounds, in particular, for CO and ketene, but calculates methylbenzene proton affinities with high accuracy.

  1. LIKEDM: Likelihood calculator of dark matter detection

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  2. EOSPEC: a complementary toolbox for MODTRAN calculations

    Dion, Denis

    2016-09-01

    For more than a decade, Defence Research and Development Canada (DRDC) has been developing a Library of computer models for the calculations of atmospheric effects on EO-IR sensor performances. The Library, called EOSPEC-LIB (EO-IR Sensor PErformance Computation LIBrary) has been designed as a complement to MODTRAN, the radiative transfer code developed by the Air Force Research Laboratory and Spectral Science Inc. in the USA. The Library comprises modules for the definition of the atmospheric conditions, including aerosols, and provides modules for the calculation of turbulence and fine refraction effects. SMART (Suite for Multi-resolution Atmospheric Radiative Transfer), a key component of EOSPEC, allows one to perform fast computations of transmittances and radiances using MODTRAN through a wide-band correlated-k computational approach. In its most recent version, EOSPEC includes a MODTRAN toolbox whose functions help generate in a format compatible to MODTRAN 5 and 6 atmospheric and aerosol profiles, user-defined refracted optical paths and inputs for configuring the MODTRAN sea radiance (BRDF) model. The paper gives an overall description of the EOSPEC features and capacities. EOSPEC provides augmented capabilities for computations in the lower atmosphere, and for computations in maritime environments.

  3. Hohlraum calculations for the NIF opacity platform

    Dodd, E. S.; Perry, T. S.; Tregillis, I. L.; Kline, J. L.; Heeter, R. F.; Liedahl, D. A.; Opachich, Y. P.

    2015-11-01

    A summary of initial hohlraum calculations for planned opacity experiments at the National Ignition Facility (NIF) will be given. The purpose of these experiments is to make LTE opacity measurements of iron at the same conditions as previous experiments on Sandia's Z facility: 156 eV and 190 eV. Ongoing discrepancies between opacity data and theory make corroborating data highly important. The target considered in these calculations is a standard cylindrical hohlraum, with diameter 5.75 mm, but baffles have been placed between the laser hot spot and the sample to maintain the iron in LTE. The hohlraum is driven with a 3 ns flat top laser pulse, but limited to 500 kJ and only the outer beams. The inner beams will be used to drive a capsule implosion, which backlights the iron for the absorption measurements. The iron itself is a thin disk, mixed with magnesium as a spectroscopic tracer, and tamped with beryllium to minimize expansion. A description of the experimental set-up will be given. Supported under the US DOE by the Los Alamos National Security, LLC under contract DE-AC52-06NA25396.

  4. Electron mobility calculation for graphene on substrates

    Hirai, Hideki; Ogawa, Matsuto [Department of Electrical and Electronic Engineering, Graduate School of Engineering, Kobe University, 1-1, Rokko-dai, Nada-ku, Kobe 657-8501 (Japan); Tsuchiya, Hideaki, E-mail: tsuchiya@eedept.kobe-u.ac.jp [Department of Electrical and Electronic Engineering, Graduate School of Engineering, Kobe University, 1-1, Rokko-dai, Nada-ku, Kobe 657-8501 (Japan); Japan Science and Technology Agency, CREST, Chiyoda, Tokyo 102-0075 (Japan); Kamakura, Yoshinari; Mori, Nobuya [Japan Science and Technology Agency, CREST, Chiyoda, Tokyo 102-0075 (Japan); Division of Electrical, Electronic and Information Engineering, Graduate School of Engineering, Osaka University, Suita, Osaka 565-0871 (Japan)

    2014-08-28

    By a semiclassical Monte Carlo method, the electron mobility in graphene is calculated for three different substrates: SiO{sub 2}, HfO{sub 2}, and hexagonal boron nitride (h-BN). The calculations account for polar and non-polar surface optical phonon (OP) scatterings induced by the substrates and charged impurity (CI) scattering, in addition to intrinsic phonon scattering in pristine graphene. It is found that HfO{sub 2} is unsuitable as a substrate, because the surface OP scattering of the substrate significantly degrades the electron mobility. The mobility on the SiO{sub 2} and h-BN substrates decreases due to CI scattering. However, the mobility on the h-BN substrate exhibits a high electron mobility of 170 000 cm{sup 2}/(V·s) for electron densities less than 10{sup 12 }cm{sup −2}. Therefore, h-BN should be an appealing substrate for graphene devices, as confirmed experimentally.

  5. Group Contribution Methods for Phase Equilibrium Calculations.

    Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian

    2015-01-01

    The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.

  6. A corrector for spacecraft calculated electron moments

    J. Geach

    2005-03-01

    Full Text Available We present the application of a numerical method to correct electron moments calculated on-board spacecraft from the effects of potential broadening and energy range truncation. Assuming a shape for the natural distribution of the ambient plasma and employing the scalar approximation, the on-board moments can be represented as non-linear integral functions of the underlying distribution. We have implemented an algorithm which inverts this system successfully over a wide range of parameters for an assumed underlying drifting Maxwellian distribution. The outputs of the solver are the corrected electron plasma temperature Te, density Ne and velocity vector Ve. We also make an estimation of the temperature anisotropy A of the distribution. We present corrected moment data from Cluster's PEACE experiment for a range of plasma environments and make comparisons with electron and ion data from other Cluster instruments, as well as the equivalent ground-based calculations using full 3-D distribution PEACE telemetry.

  7. Improved calculation of relic gravitational waves

    2007-01-01

    In this paper, we have improved the calculation of the relic gravitational waves (RGW) in two aspects. First, we investigate the transfer function by taking into consideration the redshift-suppression effect, the accelerating expansion effect, the damping effect of free-streaming relativistic particles, and the damping effect of cosmic phase transition, and give a simple approximate analytic expression, which clearly illustrates the dependence on the cosmological parameters. Second, we develop a numerical method to calculate the primordial power spectrum of RGW in a very wide frequency range, where the observed constraints on n_s (the scalar spectral index) and P_s(k_0) (the amplitude of the primordial scalar spectrum) and the Hamilton-Jacobi equation are used. This method is applied to two kinds of inflationary models, which satisfy the current constraints on n_s, α (the running of n_s) and r (the tensor-scalar ratio). We plot them in the r - Ω_g diagram, where Ω_g is the strength of RGW, and study their measurements from the cosmic microwave background (CMB) experiments and laser interferometers.

  8. Cognitive Reflection Versus Calculation in Decision Making

    Aleksandr eSinayev

    2015-05-01

    Full Text Available Scores on the three-item Cognitive Reflection Test (CRT) have been linked with dual-system theory and normative decision making (Frederick, 2005). In particular, the CRT is thought to measure monitoring of System 1 intuitions such that, if cognitive reflection is high enough, intuitive errors will be detected and the problem will be solved. However, CRT items also require numeric ability to be answered correctly and it is unclear how much numeric ability vs. cognitive reflection contributes to better decision making. In two studies, CRT responses were used to calculate Cognitive Reflection and numeric ability; a numeracy scale was also administered. Numeric ability, measured on the CRT or the numeracy scale, accounted for the CRT’s ability to predict more normative decisions (a subscale of decision-making competence), incentivized measures of impatient and risk-averse choice, and self-reported financial outcomes; Cognitive Reflection contributed no independent predictive power. Results were similar whether the two abilities were modeled (Study 1) or calculated using proportions (Studies 1 and 2). These findings demonstrate numeric ability as a robust predictor of superior decision making across multiple tasks and outcomes. They also indicate that correlations of decision performance with the CRT are insufficient evidence to implicate overriding intuitions in the decision-making biases and outcomes we examined. Numeric ability appears to be the key mechanism instead.

  9. Unified approach to alpha decay calculations

    C S Shastry; S M Mahadevan; K Aditya

    2014-05-01

    With the discovery of a large number of superheavy nuclei undergoing decay through α emission, there has been a revival of interest in α decay in recent years. In the theoretical study of α decay, the α-nucleus potential, which is the basic input in the study of α-nucleus systems, is also being studied using advanced theoretical methods. In the light of these, the Wentzel-Kramers-Brillouin (WKB) approximation method often used for the study of α decay is critically examined and its limitations are pointed out. At a given energy, the WKB expression uses the barrier penetration formula for the determination of the transmission coefficient. This approach utilizes the α-nucleus potential only at the barrier region and ignores it elsewhere. In the present era, when one has more precise experimental information on α-decay parameters and a better understanding of the α-nucleus potential, it is desirable to use a more precise method for the calculation of decay parameters. We describe the analytic S-matrix (SM) method, which gives a procedure for the calculation of decay energy and mean life in an integrated way by evaluating the resonance pole of the S-matrix in the complex momentum or energy plane. We make an illustrative comparative study of the WKB and S-matrix methods for the determination of decay parameters in a number of superheavy nuclei.
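
    A minimal numerical sketch of the WKB barrier-penetration formula that the record critiques, evaluated for a pure Coulomb barrier; the charge, Q value and nuclear radius are generic illustrative numbers rather than data for any specific superheavy nucleus, and the S-matrix pole method is not implemented here.

```python
# WKB transmission coefficient T ≈ exp(-2 ∫ sqrt(2m(V(r)-Q))/hbar dr) over the
# classically forbidden region of a Coulomb barrier. All inputs are illustrative.
import numpy as np

hbar_c = 197.327        # MeV fm
m_alpha = 3727.38       # MeV/c^2
Z_alpha, Z_daughter = 2, 84
e2 = 1.44               # MeV fm, e^2/(4 pi eps0)
Q = 6.0                 # MeV, illustrative decay energy
R = 9.0                 # fm, illustrative inner turning point (nuclear radius)

def V(r):
    return Z_alpha * Z_daughter * e2 / r     # Coulomb barrier only

r_outer = Z_alpha * Z_daughter * e2 / Q      # outer turning point where V = Q
r = np.linspace(R, r_outer, 20000)
integrand = np.sqrt(2.0 * m_alpha * np.clip(V(r) - Q, 0.0, None)) / hbar_c
G = 2.0 * np.trapz(integrand, r)             # Gamow exponent
print("WKB transmission coefficient ~", np.exp(-G))
```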

  10. Reactivity of Tourmaline by Quantum Chemical Calculations

    2007-01-01

    Ab initio calculations on the reactivity of tourmaline were performed using both Gaussian and the density functional theory discrete variation method (DFT-DVM). The HF and B3LYP methods and the basis sets STO-3G(3d,3p), 6-31G(3d,3p) and 6-311++G(3df,3pd) were used in the calculations. The results show that the energy value obtained from B3LYP with the 6-311++G(3df,3pd) basis set is more accurate than those from the other methods. The highest occupied molecular orbital (HOMO) of the tourmaline cluster mainly consists of the O atom of the hydroxyl group at a relatively high energy level, suggesting that a chemical bond between an electron acceptor and this site may readily form, indicating the higher reactivity of the hydroxyl group. The lowest unoccupied molecular orbital (LUMO) of the tourmaline cluster is dominantly composed of Si, O of the tetrahedron and Na at a relatively low energy level, suggesting that these atoms may tend to form chemical bonds with electron donors. The results also show that the O atoms of the tourmaline cluster have stronger reactivity than the other atoms.

  11. Free-Energy Calculations. A Mathematical Perspective

    Pohorille, Andrzej

    2015-01-01

    Ion channels are pore-forming assemblies of transmembrane proteins that mediate and regulate ion transport through cell walls. They are ubiquitous to all life forms. In humans and other higher organisms they play the central role in conducting nerve impulses. They are also essential to cardiac processes, muscle contraction and epithelial transport. Ion channels from lower organisms can act as toxins or antimicrobial agents, and in a number of cases are involved in infectious diseases. Because of their important and diverse biological functions they are frequent targets of drug action. Also, simple natural or synthetic channels find numerous applications in biotechnology. For these reasons, studies of ion channels are at the forefront of biophysics, structural biology and cellular biology. In the last decade, the increased availability of X-ray structures has greatly advanced our understanding of ion channels. However, their mechanism of action remains elusive. This is because, in order to assist controlled ion transport, ion channels are dynamic by nature, but X-ray crystallography captures the channel in a single, sometimes non-native state. To explain how ion channels work, X-ray structures have to be supplemented with dynamic information. In principle, molecular dynamics (MD) simulations can aid in providing this information, as this is precisely what MD has been designed to do. However, MD simulations suffer from their own problems, such as inability to access sufficiently long time scales or limited accuracy of force fields. To assess the reliability of MD simulations it is only natural to turn to the main function of channels - conducting ions - and compare calculated ionic conductance with electrophysiological data, mainly single channel recordings, obtained under similar conditions. If this comparison is satisfactory it would greatly increase our confidence that both the structures and our computational methodologies are sufficiently accurate. Channel

  12. Development of thermodynamic databases for geochemical calculations

    Arthur, R.C. [Monitor Scientific, L.L.C., Denver, Colorado (United States); Sasamoto, Hiroshi; Shibata, Masahiro; Yui, Mikazu [Japan Nuclear Cycle Development Inst., Tokai, Ibaraki (Japan); Neyama, Atsushi [Computer Software Development Corp., Tokyo (Japan)

    1999-09-01

    Two thermodynamic databases for geochemical calculations supporting research and development on geological disposal concepts for high level radioactive waste are described in this report. One, SPRONS.JNC, is compatible with thermodynamic relations comprising the SUPCRT model and software, which permits calculation of the standard molal and partial molal thermodynamic properties of minerals, gases, aqueous species and reactions from 1 to 5000 bars and 0 to 1000 °C. This database includes standard molal Gibbs free energies and enthalpies of formation, standard molal entropies and volumes, and Maier-Kelly heat capacity coefficients at the reference pressure (1 bar) and temperature (25 °C) for 195 minerals and 16 gases. It also includes standard partial molal Gibbs free energies and enthalpies of formation, standard partial molal entropies, and Helgeson, Kirkham and Flowers (HKF) equation-of-state coefficients at the reference pressure and temperature for 1147 inorganic and organic aqueous ions and complexes. SPRONS.JNC extends similar databases described elsewhere by incorporating new and revised data published in the peer-reviewed literature since 1991. The other database, PHREEQE.JNC, is compatible with the PHREEQE series of geochemical modeling codes. It includes equilibrium constants at 25 °C and 1 bar for mineral-dissolution, gas-solubility, aqueous-association and oxidation-reduction reactions. Reaction enthalpies, or coefficients in an empirical log K(T) function, are also included in this database, which permits calculation of equilibrium constants between 0 and 100 °C at 1 bar. All equilibrium constants, reaction enthalpies, and log K(T) coefficients in PHREEQE.JNC are calculated using SUPCRT and SPRONS.JNC, which ensures that these two databases are mutually consistent. They are also internally consistent insofar as all the data are compatible with basic thermodynamic definitions and functional relations in the SUPCRT model, and because primary
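
    A minimal sketch of a log K(T) capability of the kind described for PHREEQE.JNC, here reduced to a one-term van 't Hoff extrapolation from 25 °C using a reaction enthalpy; the log K and enthalpy values below are illustrative placeholders, not entries from the database.

```python
# van 't Hoff extrapolation:
# log K(T) = log K(T0) - (dH / (R ln 10)) * (1/T - 1/T0)
import math

R = 8.314462618e-3      # kJ/(mol K)

def log_k_van_t_hoff(log_k_25, delta_h_kj, t_celsius):
    t, t0 = t_celsius + 273.15, 298.15
    return log_k_25 - (delta_h_kj / (math.log(10) * R)) * (1.0 / t - 1.0 / t0)

# Illustrative calcite-like dissolution reaction (placeholder constants)
print(log_k_van_t_hoff(log_k_25=-8.48, delta_h_kj=-10.0, t_celsius=60.0))
```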

  13. Relativistic Few-Body Hadronic Physics Calculations

    Polyzou, Wayne [Univ. of Iowa, Iowa City, IA (United States)

    2016-06-20

    The goal of this research proposal was to use ``few-body'' methods to understand the structure and reactions of systems of interacting hadrons (neutrons, protons, mesons, quarks) over a broad range of energy scales. Realistic mathematical models of few-hadron systems have the advantage that they are sufficiently simple that they can be solved with mathematically controlled errors. These systems are also simple enough that it is possible to perform complete accurate experimental measurements on these systems. Comparison between theory and experiment puts strong constraints on the structure of the models. Even though these systems are ``simple'', both the experiments and computations push the limits of technology. The important property of ``few-body'' systems is that the ``cluster property'' implies that the interactions that appear in few-body systems are identical to the interactions that appear in complicated many-body systems. Of particular interest are models that correctly describe physics at distance scales that are sensitive to the internal structure of the individual nucleons. The Heisenberg uncertainty principle implies that in order to be sensitive to physics on distance scales that are a fraction of the proton or neutron radius, a relativistic treatment of quantum mechanics is necessary. The research supported by this grant involved 30 years of effort devoted to studying all aspects of interacting two and three-body systems. Realistic interactions were used to compute bound states of two- and three-nucleon, and two- and three-quark systems. Scattering observables for these systems were computed for a broad range of energies - from zero energy scattering to few GeV scattering, where experimental evidence of sub-nucleon degrees of freedom is beginning to appear. Benchmark calculations were produced, which when compared with calculations of other groups provided an essential check on these complicated calculations. In

  14. Void growth in metals: Atomistic calculations

    Traiviratana, Sirirat [Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093 (United States); Bringa, Eduardo M. [Materials Science Department, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Benson, David J. [Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093 (United States); Meyers, Marc A. [Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093 (United States); NanoEngineering, University of California, San Diego, La Jolla, CA 92093 (United States)], E-mail: mameyers@ucsd.edu

    2008-09-15

    Molecular dynamics simulations in monocrystalline and bicrystalline copper were carried out with LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) to reveal void growth mechanisms. The specimens were subjected to tensile uniaxial strains; the results confirm that the emission of (shear) loops is the primary mechanism of void growth. It is observed that many of these shear loops develop along two slip planes (and not one, as previously thought), in a heretofore unidentified mechanism of cooperative growth. The emission of dislocations from voids is the first stage, and their reaction and interaction is the second stage. These loops, forming initially on different {1 1 1} planes, join at the intersection, if the Burgers vector of the dislocations is parallel to the intersection of two {1 1 1} planes: a <1 1 0> direction. Thus, the two dislocations cancel at the intersection and a biplanar shear loop is formed. The expansion of the loops and their cross slip leads to the severely work-hardened region surrounding a growing void. Calculations were carried out on voids with different sizes, and a size dependence of the stress threshold to emit dislocations was obtained by MD, in disagreement with the Gurson model which is scale independent. This disagreement is most marked for the nanometer sized voids. The scale dependence of the stress required to grow voids is interpreted in terms of the decreasing availability of optimally oriented shear planes and increased stress required to nucleate shear loops as the void size is reduced. The growth of voids simulated by MD is compared with the Cocks-Ashby constitutive model and significant agreement is found. The density of geometrically necessary dislocations as a function of void size is calculated based on the emission of shear loops and their outward propagation. Calculations are also carried out for a void at the interface between two grains to simulate polycrystalline

  15. FUEL HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS

    C.E. Sanders

    2005-06-30

    The purpose of this design calculation is to perform a criticality evaluation of the Fuel Handling Facility (FHF) and the operations and processes performed therein. The current intent of the FHF is to receive transportation casks whose contents will be unloaded and transferred to waste packages (WP) or MGR Specific Casks (MSC) in the fuel transfer bays. Further, the WPs will also be prepared in the FHF for transfer to the sub-surface facility (for disposal). The MSCs will be transferred to the Aging Facility for storage. The criticality evaluation of the FHF features the following: (I) Consider the types of waste to be received in the FHF as specified below: (1) Uncanistered commercial spent nuclear fuel (CSNF); (2) Canistered CSNF (with the exception of horizontal dual-purpose canister (DPC) and/or multi-purpose canisters (MPCs)); (3) Navy canistered SNF (long and short); (4) Department of Energy (DOE) canistered high-level waste (HLW); and (5) DOE canistered SNF (with the exception of MCOs). (II) Evaluate the criticality analyses previously performed for the existing Nuclear Regulatory Commission (NRC)-certified transportation casks (under 10 CFR 71) to be received in the FHF to ensure that these analyses address all FHF conditions including normal operations, and Category 1 and 2 event sequences. (III) Evaluate FHF criticality conditions resulting from various Category 1 and 2 event sequences. Note that there are currently no Category 1 and 2 event sequences identified for FHF. Consequently, potential hazards from a criticality point of view will be considered as identified in the ''Internal Hazards Analysis for License Application'' document (BSC 2004c, Section 6.6.4). (IV) Assess effects of potential moderator intrusion into the fuel transfer bay for defense in depth. The SNF/HLW waste transfer activity (i.e., assembly and canister transfer) that is being carried out in the FHF has been classified as safety category in the

  16. Improving the accuracy of dynamic mass calculation

    Oleksandr F. Dashchenko

    2015-06-01

    Full Text Available With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, the weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determination of the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of the dynamic mass measurement, in particular for a wagon in motion. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
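
    A minimal sketch of the signal-processing idea: the strain-gauge signal of a moving load oscillates around the static value, so low-pass filtering before averaging improves the mass estimate. The synthetic signal, sampling rate and load below are illustrative, not the paper's measured data or its full time-series method.

```python
# Moving-average filter applied to a synthetic strain-gauge signal, then
# averaged to estimate the static load of a moving wagon (all values illustrative).
import numpy as np

fs = 1000.0                                   # Hz, sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)
true_load = 22_500.0                          # kg, illustrative axle load
signal = true_load \
         + 800.0 * np.sin(2 * np.pi * 7.0 * t) \
         + 150.0 * np.random.default_rng(0).normal(size=t.size)

window = int(0.2 * fs)                        # 0.2 s moving-average window
filtered = np.convolve(signal, np.ones(window) / window, mode="valid")
print("estimated load:", round(filtered.mean(), 1), "kg")
```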

  17. Calculation methods of the nuclear characteristics

    Dubovichenko, S B

    2010-01-01

    The book presents mathematical methods for calculating nuclear cross sections and elastic scattering phase shifts, as well as the energies and characteristics of bound states in two- and three-particle nuclear systems, for interaction potentials that contain not only a central but also a tensor component. Descriptions are given of the numerical calculation methods and of computer programs written in the BASIC language ("Turbo Basic" by Borland) for IBM PC AT-type computers. For the numerical solution of the initial Schroedinger equations, finite-difference and variational methods are used, as well as the Runge-Kutta method with automatic step-size control for the prescribed accuracy of the scattering phase shifts and binding energies. A description is also given of non-standard methods for solving the Schroedinger system of equations for bound states, and of an alternative to Schmidt's method for solving the generalized matrix eigenvalue problem. The developed...
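
    A generic sketch of the Runge-Kutta approach mentioned above, here integrating the s-wave radial Schroedinger equation through an attractive square well and reading off the phase shift; the potential, energy and units (hbar = 2m = 1) are illustrative and the example is not taken from the book.

```python
# Fixed-step RK4 integration of u'' = (V(r) - E) u, then matching to the free
# solution u ~ sin(kr + delta) beyond the well to extract the s-wave phase shift.
import math

V0, a = 2.0, 1.0                 # well depth and radius (illustrative)
E = 0.5
k = math.sqrt(E)

def V(r):
    return -V0 if r < a else 0.0

def rk4_step(r, y, h):
    def f(r, y):
        u, up = y
        return (up, (V(r) - E) * u)
    k1 = f(r, y)
    k2 = f(r + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
    k3 = f(r + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
    k4 = f(r + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

h, r, y = 1e-4, 1e-6, (1e-6, 1.0)            # u(0) = 0, u'(0) = 1 near the origin
while r < 3.0 * a:                            # integrate well past the well edge
    y = rk4_step(r, y, h)
    r += h

u, up = y
delta0 = math.atan2(k * u, up) - k * r        # tan(kr + delta) = k u / u'
print("s-wave phase shift (mod pi):", delta0 % math.pi)
```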

  18. Zero energy scattering calculation in Euclidean space

    Carbonell, J

    2016-01-01

    We show that the Bethe-Salpeter equation for the scattering amplitude in the limit of zero incident energy can be transformed into a purely Euclidean form, as it is the case for the bound states. The decoupling between Euclidean and Minkowski amplitudes is only possible for zero energy scattering observables and allows determining the scattering length from the Euclidean Bethe-Salpeter amplitude. Such a possibility strongly simplifies the numerical solution of the Bethe-Salpeter equation and suggests an alternative way to compute the scattering length in Lattice Euclidean calculations without using the Luscher formalism. The derivations contained in this work were performed for scalar particles and one-boson exchange kernel. They can be generalized to the fermion case and more involved interactions.

  19. Zero energy scattering calculation in Euclidean space

    J. Carbonell

    2016-03-01

    Full Text Available We show that the Bethe–Salpeter equation for the scattering amplitude in the limit of zero incident energy can be transformed into a purely Euclidean form, as it is the case for the bound states. The decoupling between Euclidean and Minkowski amplitudes is only possible for zero energy scattering observables and allows determining the scattering length from the Euclidean Bethe–Salpeter amplitude. Such a possibility strongly simplifies the numerical solution of the Bethe–Salpeter equation and suggests an alternative way to compute the scattering length in Lattice Euclidean calculations without using the Luscher formalism. The derivations contained in this work were performed for scalar particles and one-boson exchange kernel. They can be generalized to the fermion case and more involved interactions.

  20. Zero energy scattering calculation in Euclidean space

    Carbonell, J. [Institut de Physique Nucléaire, Université Paris-Sud, IN2P3-CNRS, 91406 Orsay Cedex (France); Karmanov, V.A., E-mail: karmanov@sci.lebedev.ru [Lebedev Physical Institute, Leninsky Prospekt 53, 119991 Moscow (Russian Federation)

    2016-03-10

    We show that the Bethe–Salpeter equation for the scattering amplitude in the limit of zero incident energy can be transformed into a purely Euclidean form, as it is the case for the bound states. The decoupling between Euclidean and Minkowski amplitudes is only possible for zero energy scattering observables and allows determining the scattering length from the Euclidean Bethe–Salpeter amplitude. Such a possibility strongly simplifies the numerical solution of the Bethe–Salpeter equation and suggests an alternative way to compute the scattering length in Lattice Euclidean calculations without using the Luscher formalism. The derivations contained in this work were performed for scalar particles and one-boson exchange kernel. They can be generalized to the fermion case and more involved interactions.

  1. Zero energy scattering calculation in Euclidean space

    Carbonell, J.; Karmanov, V. A.

    2016-03-01

    We show that the Bethe-Salpeter equation for the scattering amplitude in the limit of zero incident energy can be transformed into a purely Euclidean form, as it is the case for the bound states. The decoupling between Euclidean and Minkowski amplitudes is only possible for zero energy scattering observables and allows determining the scattering length from the Euclidean Bethe-Salpeter amplitude. Such a possibility strongly simplifies the numerical solution of the Bethe-Salpeter equation and suggests an alternative way to compute the scattering length in Lattice Euclidean calculations without using the Luscher formalism. The derivations contained in this work were performed for scalar particles and one-boson exchange kernel. They can be generalized to the fermion case and more involved interactions.
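
    A minimal sketch of the zero-energy limit these records exploit: outside the interaction region the zero-energy radial wave function is a straight line u(r) ∝ (r - a), so integrating outward and extrapolating to u = 0 yields the scattering length. A nonrelativistic square well (hbar = 2m = 1) stands in for the Bethe-Salpeter kernel here; all parameter values are illustrative.

```python
# Zero-energy radial equation u'' = V(r) u integrated outward with a simple
# explicit Euler scheme; the asymptotic line's zero crossing gives a.
import math

V0, R = 2.0, 1.0                     # well depth and radius (illustrative)

def V(r):
    return -V0 if r < R else 0.0

h = 1e-5
r, u, up = 1e-8, 1e-8, 1.0           # u(0) = 0, arbitrary initial slope
while r < 2.0 * R:
    upp = V(r) * u
    u += h * up
    up += h * upp
    r += h

a_numeric = r - u / up               # zero crossing of u(r) ~ c (r - a)
K = math.sqrt(V0)
a_analytic = R - math.tan(K * R) / K # known square-well result for comparison
print(a_numeric, a_analytic)
```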

  2. Density functional calculations on hydrocarbon isodesmic reactions

    Fortunelli, Alessandro; Selmi, Massimo

    1994-06-01

    Hartree-Fock, Hartree-Fock-plus-correlation and self-consistent Kohn-Sham calculations are performed on a set of hydrocarbon isodesmic reactions, i.e. reactions among hydrocarbons in which the number and type of carbon-carbon and carbon-hydrogen bonds is conserved. It is found that neither Hartree-Fock nor Kohn-Sham methods correctly predict standard enthalpies, ΔH_r(298 K), of these reactions, even though, for reactions involving molecules containing strained double bonds, the agreement between the theoretical estimates and the experimental values of ΔH_r seems to be improved by the self-consistent solution of the Kohn-Sham equations. The remaining discrepancies are attributed to intramolecular dispersion effects, which are not described by ordinary exchange-correlation functionals, and are eliminated by introducing corrections based on a simple semi-empirical model.

  3. ANALYTICAL METHODS FOR CALCULATING FAN AERODYNAMICS

    Jan Dostal

    2015-12-01

    Full Text Available This paper presents results obtained between 2010 and 2014 in the field of fan aerodynamics at the Department of Composite Technology at the VZLÚ aerospace research and experimental institute in Prague – Letnany. The need for rapid and accurate methods for the preliminary design of blade machinery led to the creation of a mathematical model based on the basic laws of turbomachine aerodynamics. The mathematical model, the derivation of which is briefly described below, has been encoded in a computer programme, which enables the theoretical characteristics of a fan of the designed geometry to be determined rapidly. The validity of the mathematical model is assessed continuously by measuring model fans in the measuring unit, which was developed and manufactured specifically for this purpose. The paper also presents a comparison between measured characteristics and characteristics determined by the mathematical model as the basis for a discussion on possible causes of measured deviations and calculation deviations.

  4. Distributed Function Calculation over Noisy Networks

    Zhidun Zeng

    2016-01-01

    Full Text Available Considering any connected network with unknown initial states for all nodes, the nearest-neighbor rule is utilized for each node to update its own state at every discrete-time step. The distributed function calculation problem is defined as one node computing some function of the initial values of all the nodes based on its own observations. In this paper, taking into account uncertainties in the network and observations, an algorithm is proposed to compute and explicitly characterize the value of the function in question when the number of successive observations is large enough. When the number of successive observations is not large enough, we provide an approach to obtain the tightest possible bounds on such a function by using linear programming optimization techniques. Simulations are provided to demonstrate the theoretical results.
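
    A minimal sketch of the nearest-neighbour update rule underlying this setting: each node repeatedly averages its state with its neighbours' and all states converge to a function (here the average) of the initial values. The ring topology and initial states are illustrative, and the paper's noise model and observation-based inversion are not reproduced.

```python
# Linear consensus iteration x_{k+1} = W x_k with W = I - eps * Laplacian,
# run on a 5-node ring until all nodes agree on the average of the initial values.
import numpy as np

A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)   # ring adjacency (illustrative)
deg = A.sum(axis=1)
eps = 0.4 / deg.max()
W = np.eye(5) - eps * (np.diag(deg) - A)        # consensus weight matrix

x0 = np.array([3.0, -1.0, 4.0, 0.5, 2.5])       # unknown initial states
x = x0.copy()
for _ in range(200):
    x = W @ x

print("consensus states:", np.round(x, 4))
print("average of initial values:", x0.mean())
```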

  5. Cooling rate calculations for silicate glasses.

    Birnie, D. P., III; Dyar, M. D.

    1986-03-01

    Series solution calculations of cooling rates are applied to a variety of samples with different thermal properties, including an analog of an Apollo 15 green glass and a hypothetical silicate melt. Cooling rates for the well-studied green glass and a generalized silicate melt are tabulated for different sample sizes, equilibration temperatures and quench media. Results suggest that cooling rates are heavily dependent on sample size and quench medium and are less dependent on values of physical properties. Thus cooling histories for glasses from planetary surfaces can be estimated on the basis of size distributions alone. In addition, the variation of cooling rate with sample size and quench medium can be used to control quench rate.

  6. Molecular orbital calculations using chemical graph theory

    Dias, Jerry Ray

    1993-01-01

    Professor John D. Roberts published a highly readable book on Molecular Orbital Calculations directed toward chemists in 1962. That timely book is the model for this book. The audience this book is directed toward are senior undergraduate and beginning graduate students as well as practicing bench chemists who have a desire to develop conceptual tools for understanding chemical phenomena. Although ab initio and more advanced semi-empirical MO methods are regarded as being more reliable than HMO in an absolute sense, there is good evidence that HMO provides reliable relative answers, particularly when comparing related molecular species. Thus, HMO can be used to rationalize electronic structure in π-systems, aromaticity, and the shape of simple molecular orbitals. Experimentalists still use HMO to gain insight into subtle electronic interactions for interpretation of UV and photoelectron spectra. Herein, it will be shown that one can use graph theory to streamline their HMO computational efforts and to arrive...
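
    A minimal sketch of the HMO/graph-theory connection described above: the π-orbital energies are E = α + xβ, where x are the eigenvalues of the molecular graph's adjacency matrix; benzene (a 6-cycle) is used as the standard textbook example.

```python
# Hückel (HMO) eigenvalues of benzene from the adjacency matrix of a 6-cycle.
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                 # ring bonds i -- i+1 (periodic)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

x = np.sort(np.linalg.eigvalsh(A))[::-1]
print("HMO eigenvalues x:", np.round(x, 3))   # expected: 2, 1, 1, -1, -1, -2
```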

  7. Calculation of topological connectivity index for minerals

    2001-01-01

    A topological method was applied for the first time to calculate the topological connectivity index of minerals (TCIM). The reciprocal of the effective atomic refractivity of the metal element in a mineral was chosen as its valence. The reasonability of TCIM as an activity criterion was tested through comparison of TCIM with two kinds of electronegativity parameter, i.e. the ionic percentage and energy criteria of Yang's electronegativity, the solubility product, the energy criterion according to the generalized perturbation theory, and the adsorption of flotation reagents on the surface of minerals. The results indicated that TCIM is an effective structural parameter of minerals for studying the structure-activity relationship. In addition, different minerals have different TCIM values, so TCIM is convenient for comparing the flotation activity of minerals.

  8. Calculating Outsourcing Strategies and Trials of Strength

    Christensen, Mark; Skærbæk, Peter; Tryggestad, Kjell

    was termed ‘internal optimization’ to first increase efficiency and be followed by anticipated sequential tenders to test the free market against internal provision. This option implied a time perspective where outsourcing, if not economically feasible, would be postponed and subsequently tested...... demonstrates the power of projects and their use of accounting calculation. We study how the two options emerged and were valued differently by the supra-national outsourcing program and the local Defense projects over 22 years and how that valuation process involved accounting. Drawing on Actor-Network Theory...... we show how the two strategic options emerged and were pitted against each other in what Latour and Callon describe as ‘trials of strength’. The contribution of the paper is in four parts: 1. highlights how accounting inscriptions take part in formulating, evaluating and advancing different...

  9. Development of automatic luminosity calculation framework

    Lavicka, Roman

    2015-01-01

    Up-to-date knowledge of the collected number of events and the integrated luminosity is crucial for ALICE data taking and trigger strategy planning. The purpose of the project is to develop a framework for the automatic recalculation of the achieved statistics and integrated luminosity on a daily basis using information from the ALICE database. We have been encouraged to work on the improvement of the available luminosity calculation algorithms, in particular accounting for pile-up corrections. Results are represented in the form of trending plots and summary tables for different trigger classes and stored on the personal web site of the author, with an outlook on the possibility to store them in the ALICE monitoring repository.

  10. pKa calculation of a polyprotic acid: histamine

    De Abreu, Heitor A.; De Almeida, Wagner B.; Duarte, Hélio A.

    2004-01-01

    Various theoretical studies have been reported addressing the performance of the solvation models available to estimate pKa values. However, no attention has been paid so far to the role played by the individual electronic, thermal and solvation energy contributions to the Gibbs free energy of the deprotonation process. In this work, we decompose the total Gibbs free energy into three distinct terms and then evaluate the dependence of each contribution on the level of theory and basis set employed for its determination. The three possible pKas of histamine have been estimated and compared with available experimental data. We found that the electronic energy term is sensitive to the level of theory and basis set and, therefore, could also be a source of error in the theoretical calculation of pKas.
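
    A minimal sketch of the thermodynamic-cycle arithmetic behind such estimates, pKa = ΔG_aq / (RT ln 10), with ΔG_aq assembled from a gas-phase deprotonation free energy and solvation-free-energy differences; all energy values below are illustrative placeholders, not the paper's computed contributions for histamine.

```python
# Thermodynamic-cycle pKa estimate from placeholder free energies (kJ/mol).
import math

R, T = 8.314462618e-3, 298.15           # kJ/(mol K), K

dG_gas     = 1430.0    # gas-phase deprotonation free energy (placeholder)
dG_solv_HA = -28.0     # solvation free energy of the acid (placeholder)
dG_solv_A  = -320.0    # solvation free energy of the conjugate base (placeholder)
dG_solv_H  = -1107.0   # solvation free energy of the proton (placeholder)

dG_aq = dG_gas + (dG_solv_A + dG_solv_H) - dG_solv_HA
pKa = dG_aq / (R * T * math.log(10))
print("estimated pKa:", round(pKa, 1))
```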

  11. Microscopic versus macroscopic calculation of dielectric nanospheres

    Kühn, M.; Kliem, H.

    2008-12-01

    The issue of nanodielectrics has recently become an important field of interest. The term describes nanometric dielectrics, i.e. dielectric materials with structural dimensions typically smaller than 100 nm. In contrast to the behaviour of a bulk material, nanodielectrics can behave completely differently. With shrinking dimensions the surface, or rather boundary, effects outweigh the volume effects. This leads to different observable physics at the nanoscale. A crucial point is the question whether a continuum model for the calculation of dielectric properties is still applicable to these nanomaterials. In order to answer this question we simulated dielectric nanospheres with a microscopic local-field method and compared the results to the macroscopic mean-field theory.

  12. Operational source receptor calculations for large agglomerations

    Gauss, Michael; Shamsudheen, Semeena V.; Valdebenito, Alvaro; Pommier, Matthieu; Schulz, Michael

    2016-04-01

    For Air quality policy an important question is how much of the air pollution within an urbanized region can be attributed to local sources and how much of it is imported through long-range transport. This is critical information for a correct assessment of the effectiveness of potential emission measures. The ratio between indigenous and long-range transported air pollution for a given region depends on its geographic location, the size of its area, the strength and spatial distribution of emission sources, the time of the year, but also - very strongly - on the current meteorological conditions, which change from day to day and thus make it important to provide such calculations in near-real-time to support short-term legislation. Similarly, long-term analysis over longer periods (e.g. one year), or of specific air quality episodes in the past, can help to scientifically underpin multi-regional agreements and long-term legislation. Within the European MACC projects (Monitoring Atmospheric Composition and Climate) and the transition to the operational CAMS service (Copernicus Atmosphere Monitoring Service) the computationally efficient EMEP MSC-W air quality model has been applied with detailed emission data, comprehensive calculations of chemistry and microphysics, driven by high quality meteorological forecast data (up to 96-hour forecasts), to provide source-receptor calculations on a regular basis in forecast mode. In its current state, the product allows the user to choose among different regions and regulatory pollutants (e.g. ozone and PM) to assess the effectiveness of fictive emission reductions in air pollutant emissions that are implemented immediately, either within the agglomeration or outside. The effects are visualized as bar charts, showing resulting changes in air pollution levels within the agglomeration as a function of time (hourly resolution, 0 to 4 days into the future). The bar charts not only allow assessing the effects of emission

  13. Marginal Loss Calculations for the DCOPF

    Eldridge, Brent [Federal Energy Regulatory Commission, Washington, DC (United States); Johns Hopkins Univ., Baltimore, MD (United States); O' Neill, Richard P. [Federal Energy Regulatory Commission, Washington, DC (United States); Castillo, Andrea R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-12-05

    The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper will simplify many aspects of practical electric dispatch implementations in use today, but will include sufficient detail to demonstrate a few points with regard to the handling of losses.
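
    A minimal sketch of why marginal losses change the merit order: weighting each generator's offer by a penalty factor 1/(1 - marginal loss factor) can make a cheap remote unit effectively more expensive than a pricier local one. The costs and loss factors are illustrative and this is not the paper's DCOPF formulation.

```python
# Penalty-factor comparison of two generators' effective marginal costs.
# All numbers are illustrative placeholders.
gens = {
    "remote": {"cost": 28.0, "marginal_loss_factor": 0.08},  # $/MWh, 8% marginal losses
    "local":  {"cost": 30.0, "marginal_loss_factor": 0.01},
}

for name, g in gens.items():
    effective = g["cost"] / (1.0 - g["marginal_loss_factor"])
    print(name, "effective marginal cost:", round(effective, 2), "$/MWh")
# With these placeholders the local unit becomes the cheaper delivered resource.
```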

  14. Towards reliable calculations of the correlation function

    Maj, Radoslaw; 10.1142/S0218301307009221

    2008-01-01

    The correlation function of two identical pions interacting via the Coulomb potential is computed for the general case of an anisotropic particle source of finite lifetime. The effect of the halo is taken into account as an additional particle source of large spatial extension. Due to the Coulomb interaction, the effect of the halo is not limited to very small relative momenta but influences the correlation function in a relatively large domain. The relativistic effects are discussed in detail and it is argued that the calculations have to be performed in the center-of-mass frame of the particle pair, where the (nonrelativistic) wave function of the relative motion is meaningful. The Bowler-Sinyukov procedure to remove the Coulomb interaction is tested and is shown to significantly underestimate the source's lifetime.

  15. Relativistic calculations of coalescing binary neutron stars

    Joshua Faber; Phillippe Grandclément; Frederic Rasio

    2004-10-01

    We have designed and tested a new relativistic Lagrangian hydrodynamics code, which treats gravity in the conformally flat approximation to general relativity. We have tested the resulting code extensively, finding that it performs well for calculations of equilibrium single-star models, collapsing relativistic dust clouds, and quasi-circular orbits of equilibrium solutions. By adding a radiation reaction treatment, we compute the full evolution of a coalescing binary neutron star system. We find that the amount of mass ejected from the system, much less than a per cent, is greatly reduced by the inclusion of relativistic gravitation. The gravity wave energy spectrum shows a clear divergence away from the Newtonian point-mass form, consistent with the form derived from relativistic quasi-equilibrium fluid sequences.

  16. Calculating Auroral Oval Pattern by AE Index

    CHEN Anqin; LI Jiawei; YANG Guanglin; WANG Jingsong

    2008-01-01

    The relationship between the auroral oval pattern, i.e. its location, size, shape, and intensity, and the auroral electrojet activity index (AE index) is studied. It is found that the maximal auroral intensity is elliptically distributed, and the lengths of the semimajor and semiminor axes are positively correlated with AE. The intensity along the normal of the auroral oval can be satisfyingly described by a Gaussian distribution, and the maximum and the full width at half maximum of the Gaussian distribution are both positively correlated with AE. Based on these statistical results, a series of empirical formulas as functions of AE are developed to calculate the location, size, shape, and intensity of the auroral oval. These formulas are validated against the auroral images released by SWPC/NOAA.
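
    A minimal sketch of the kind of parameterisation described: a Gaussian cross-oval intensity profile whose peak and full width at half maximum grow linearly with AE. The linear coefficients below are placeholders, not the fitted formulas of the paper.

```python
# Gaussian intensity across the oval, with AE-dependent peak and FWHM
# (placeholder linear fits, arbitrary intensity units).
import math

def oval_cross_section(d_km, ae_nt):
    peak = 1.0 + 0.004 * ae_nt                   # placeholder linear dependence
    fwhm = 300.0 + 0.5 * ae_nt                   # km, placeholder linear dependence
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return peak * math.exp(-0.5 * (d_km / sigma) ** 2)

for ae in (100, 500, 1000):
    print(ae, [round(oval_cross_section(d, ae), 2) for d in (0, 200, 400)])
```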

  17. Calculation of light emission in sonoluminescence

    LI ChaoHui; AN Yu

    2009-01-01

    We modify a uniform model of single bubble sonoluminescence, in which heat diffusion, water vapor diffusion and chemical reactions are included to describe the bubble dynamics, and the processes of electron-atom bremsstrahlung, electron-ion bremsstrahlung and recombination radiation, radiative attachment of electrons to atoms and molecules, and line emissions of OH radicals and Na atoms are taken into account to calculate the light emission. With this model, we compute the light pulse width, the photon number per flash, the continuum and line spectra and the gas species as the products of chemical reactions, and try to compare with all the experimental data available. We obtain good agreement with the observations of Ar and Xe bubbles in many cases, but fail to match the experimental data of the photon number per flash. We also find that for the He bubble the computed photon number is always too small to interpret the observations.

  18. Columbia River flow-time calculations

    Soldat, J.K.

    1958-11-21

    Re-appraisal of the available data on flow times of the Columbia River between the reactor areas and Pasco was undertaken to permit extrapolation of the flow-time curves to lower river flow rates. Comparisons were made between data collected by the US Corps of Engineers and Regional Monitoring and with the equation for calculation of flow times developed by H.T. Norton. Extrapolation of the Regional Monitoring float study data to a flow of 3 × 10^5 gallons per second was accomplished by comparison with the slope of the curve obtained from the US Corps of Engineers data; the latter covered flow times from 100-F Area to Pasco over a range of 3.4 × 10^5 gps to 3.7 × 10^6 gps. The revised flow-time curves are illustrated in Figures 1 through 6.

  19. Variational Calculation of the Effective Action

    Sugihara, T

    1998-01-01

    An indication of spontaneous symmetry breaking is found in the two-dimensional $\lambda\phi^4$ model, where attention is paid to the functional form of an effective action. An effective energy, which is an effective action for a static field, is obtained as a functional of the classical field from the ground state of the Hamiltonian $H[J]$ interacting with a constant external field. The energy and wavefunction of the ground state are calculated in terms of DLCQ (Discretized Light-Cone Quantization) under antiperiodic boundary conditions. A field configuration which is physically meaningful is found as a solution of the quantum mechanical Euler-Lagrange equation in the $J \to 0$ limit. It is shown that there exists a nontrivial field configuration in the broken phase of $Z_2$ symmetry because of a boundary effect.

  20. Calculation of light emission in sonoluminescence

    2009-01-01

    We modify a uniform model of single bubble sonoluminescence, in which heat diffusion, water vapor diffusion and chemical reactions are included to describe the bubble dynamics, and the processes of electron-atom bremsstrahlung, electron-ion bremsstrahlung and recombination radiation, radiative attachment of electrons to atoms and molecules, and line emissions of OH radicals and Na atoms are taken into account to calculate the light emission. With this model, we compute the light pulse width, the photon number per flash, the continuum and line spectra and the gas species as the products of chemical reactions, and try to compare with all the experimental data available. We obtain good agreement with the observations of Ar and Xe bubbles in many cases, but fail to match the experimental data of the photon number per flash. We also find that for the He bubble the computed photon number is always too small to interpret the observations.

  1. COSTS CALCULATION OF TARGET COSTING METHOD

    Sebastian UNGUREANU

    2014-06-01

    Full Text Available The cost information system plays an important role in every organization's decision-making process. An important task of management is ensuring control over operations, processes, sectors, and, not least, costs. Although several control systems (production control, quality control, etc.) contribute to achieving an organization's objectives, the cost information system is important because it monitors the results of the others. Detailed analysis of costs, production cost calculation, quantification of losses and estimation of work efficiency provide a solid basis for financial control. Knowledge of costs is a decisive factor in taking decisions and planning future activities. Managers are concerned about the costs that will appear in the future, since their level underpins supply and production decisions as well as price policy. An important factor is the efficiency of the cost information system, so that the information it provides is useful for decisions and for planning the work.

  2. Calculating Organic Carbon Stock from Forest Soils

    Lucian Constantin DINCĂ

    2015-12-01

    Full Text Available The organic carbon stock (SOC, t/ha) was calculated using different approaches in order to highlight the differences among methods and their utility for specific studies. Using data obtained in Romania (2000-2012) from 4,500 profiles and 9,523 soil horizons, the organic carbon stock was calculated for the main forest soils (18 types) using three different methods: (1) on pedogenetic horizons, by soil bulk density and depth class/horizon thickness; (2) by soil type and standard depths; (3) using regression equations between the quantity of organic C and sampling depth. Even though the same data were used, the differences between the values of the C stock obtained from the three methods were relatively high. The first method led to an overvaluation of the C stock. The differences between methods 1 and 2 were high (reaching 33% for andosol), while the differences between methods 2 and 3 were smaller (a maximum of 23% for rendzic leptosol). The differences between methods 2 and 3 were significantly lower especially for andosol, arenosol and vertisol. A thorough analysis of all three methods concluded that the best way to evaluate the organic C stock is to distribute the obtained values over the following standard depths: 0-10 cm; 10-20 cm; 20-40 cm; > 40 cm. For each soil type, a correlation between the quantity of organic C and the sample harvesting depth was also established. These correlations were significant for all soil types; however, lower correlation coefficients were registered for rendzic leptosol, haplic podzol and fluvisol.
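
    A minimal sketch of the per-layer arithmetic common to such stock calculations, SOC (t/ha) = C concentration x bulk density x layer thickness, summed over the standard depths; the profile values below are illustrative, not data from the study.

```python
# Per-layer SOC stock: C fraction x bulk density (g/cm^3) x thickness (cm),
# converted to t/ha (factor 100). Profile values are illustrative placeholders.
layers = [  # (C %, bulk density g/cm^3, thickness cm) for the standard depths
    (3.2, 1.10, 10),   # 0-10 cm
    (2.1, 1.25, 10),   # 10-20 cm
    (1.2, 1.35, 20),   # 20-40 cm
]

soc_t_per_ha = sum(c / 100.0 * bd * th * 100.0 for c, bd, th in layers)
print("SOC stock:", round(soc_t_per_ha, 1), "t/ha")
```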

  3. An integrated tool for loop calculations: AITALC

    Lorca, Alejandro; Riemann, Tord

    2006-01-01

    AITALC, a new tool for automating loop calculations in high energy physics, is described. The package creates Fortran code for two-fermion scattering processes automatically, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool, the intercommunication between them and illustrate its use with three examples. Program summary: Title of the program: AITALC version 1.2.1 (9 August 2005). Catalogue identifier: ADWO. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: PC i386. Operating system: GNU/Linux, tested on different distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on SOLARIS. Programming language used: GNU MAKE, DIANA, FORM, FORTRAN77. Additional programs/libraries used: DIANA 2.35 (QGRAF 2.0), FORM 3.1, LOOPTOOLS 2.1 (FF). Memory required to execute with typical data: up to about 10 MB. No. of processors used: 1. No. of lines in distributed program, including test data, etc.: 40 926. No. of bytes in distributed program, including test data, etc.: 371 424. Distribution format: tar gzip file. High-speed storage required: from 1.5 to 30 MB, depending on modules present and unfolding of examples. Nature of the physical problem: calculation of differential cross sections for e+e- annihilation in the one-loop approximation. Method of solution: generation and perturbative analysis of Feynman diagrams with later evaluation of matrix elements and form factors. Restriction of the complexity of the problem: the limit of application is, for the moment, the 2→2 particle reactions in the electro-weak standard model. Typical running time: a few minutes, depending strongly on the complexity of the process and the FORTRAN compiler.

  4. Enhancing calculation of thin sea ice growth

    Appel, Igor

    2016-12-01

    The goal of the present study is to develop, generate, and integrate into operational practice a new model of ice growth. The development of this Sea Ice Growth Model for Arctic (SIGMA), a description of its theoretical foundation, the model's advantages and an analysis of its results are considered in the paper. The enhanced model includes two principal modifications. The surface temperature of snow on ice is defined as an internal model parameter, maintaining rigorous consistency between the processes of atmosphere-ice thermodynamic interaction and ice growth. The snow depth on ice is naturally defined as a function of the local snowfall rate and depends linearly on time rather than on ice thickness. The model was initially outlined in the Visible Infrared Radiometer Suite (VIIRS) Sea Ice Characterization Algorithm Theoretical Basis Document (Appel et al., 2005), which included two different approaches to retrieve sea ice age: reflectance analysis for daytime and derivation of ice thickness using the energy balance for nighttime. Only the latter method is considered in this paper. The improved account of the influence of surface temperature and snow depth increases the reliability of ice thickness calculations and is used to develop an analytical Snow Depth/Ice Thickness look-up table suitable for the VIIRS observations as well as for other instruments. The applicability of SIGMA to retrieving ice thickness from VIIRS satellite observations and the comparison of its results with the One-dimensional Thermodynamic Ice Model (OTIM) are also considered. The comparison of the two models, which demonstrates the difference between their assessments of heat fluxes and the radical distinction between the influences of snow depth uncertainty on ice thickness errors, is of great significance for further improving the retrieval of ice thickness from satellite observations.

  5. Data calculation program for RELAP 5 code

    Silvestre, Larissa J.B.; Sabundjian, Gaiane, E-mail: larissajbs@usp.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    As the criteria and requirements for a nuclear power plant are extremely rigid, computer programs for simulation and safety analysis are required for certifying and licensing a plant. Based on this scenario, sophisticated computational tools have been used, such as the Reactor Excursion and Leak Analysis Program (RELAP5), which is the most used code for the thermo-hydraulic analysis of accidents and transients in nuclear reactors. A major difficulty in simulation with the RELAP5 code is the amount of information required for the simulation of thermal-hydraulic accidents or transients. The preparation of the input data leads to a very large number of mathematical operations for calculating the geometry of the components. Therefore, a user-friendly mathematical preprocessor was developed in order to perform these calculations and prepare RELAP5 input data. Visual Basic for Applications (VBA) combined with Microsoft EXCEL proved to be an efficient tool to perform a number of tasks in the development of the program. Because necessary information about some RELAP5 components was absent, this work aims to make improvements to the Mathematic Preprocessor for RELAP5 code (PREREL5). For the new version of the preprocessor, new screens for components that were not programmed in the original version were designed; moreover, screens of pre-existing components were redesigned to improve the program. In addition, an English version was provided for the new version of PREREL5. The new design of PREREL5 contributes to saving time and minimizing mistakes made by users of the RELAP5 code. The final version of this preprocessor will be applied to Angra 2. (author)

  6. Activation calculation of the EURISOL mercury target

    Rapp, B.; David, J.C.; Blideanu, V.; Dore, D.; Ridikas, D.; Thiolliere, N

    2006-08-15

    We have used MCNPX coupled to CINDER to estimate the production of radioactive nuclides in the EURISOL 4 MW liquid mercury target during the 40-year lifetime of the installation. The calculations have been done with different combinations of intra-nuclear cascade and fission-evaporation models. A benchmark exercise has allowed a better understanding of the differences seen between these models for the production of tritium and fission products. To obtain a realistic production yield for tritium gas in proton-induced spallation reactions, we recommend using the ISABEL-RAL model, while both CEM2k and BERTINI-RAL overestimate the production rate above 1 GeV incident proton energy. The best combinations of models for calculating the residual nuclei production are those using the ABLA fission-evaporation model; CEM2k or combinations using the RAL model give too broad mass distributions when compared to the available data. An extensive list of radionuclides was obtained and is available in tabular format. We show that the four nuclides whose contributions to the total activity of the mercury target (after 40 years of irradiation) are the most important are the following: -) 1 day after shutdown: Y-91 (15%), Y-90 (13%), Hg-197 (6%) and Sr-89 (5%); -) 1 year after shutdown: H-3 (19%), Y-90 (17%), Sr-90 (17%) and Nb-93* (10%); -) 10 years after shutdown: Y-90 (22%), Sr-90 (22%), H-3 (18%) and Nb-93* (14%); and -) 100 years after shutdown: Mo-93 (34%), Nb-93* (32%), Pt-193 (9%) and Y-90 (8%). (A.C.)

  7. a Relativistic Calculation of Baryon Masses

    Giammarco, Joseph Michael

    1990-01-01

    We calculate ground state baryon masses using a saddle-point variational (SPV) method, which permits us the use of fully relativistic 4-component Dirac spinors without the need for positive energy projection operators. This variational approach has been shown to work in the relativistic domain for one particle in an external potential (Dirac equation). We have extended its use to the relativistic 3-body Breit equation. Our procedure is as follows: we pick a trial wave function having the appropriate spin, flavor and color dependence. This can be accomplished with a non-symmetric relativistic spatial wave function having two different size parameters if the first two quarks are always chosen to be identical. We then calculate an energy eigenvalue for the particle state and vary the parameters in our wave function to search for a "saddle-point". We minimize the energy with respect to the two size parameters and maximize with respect to two parameters that measure the contribution from the negative-energy states. This gives the baryon's mass as a function of four input parameters: the masses of the up, down and strange quarks (m_u = m_d, m_s), and the strengths of the coupling constants for the potentials (α_s, μ). We do this for the eight baryon ground states and fit these to experimental data. This fit gives the values of the input parameters. For the potentials we use a coulombic term to represent one-gluon exchange and a linear term for confinement. For both terms we include a retardation term required by relativity. We also add delta function and spin-spin terms to account for the large contribution of the coulomb interaction at the origin. The results we obtain from our SPV method are in good agreement with experimental data. The actual search for the saddle-point parameters and the fitting of the quark masses and the values of the coupling strengths was done on a CDC Cyber 860.

  8. Using Financial Calculators in a Business Mathematics Course.

    Heller, William H.; Taylor, Monty B.

    2000-01-01

    Discusses the authors' experiences with integrating financial calculators into a business mathematics course. Presents a brief overview of the operation of financial calculators, reviews some of the more common models, discusses how to use the equation solver utility on other calculators to emulate a financial calculator, and explores the…

  9. 42 CFR 403.253 - Calculation of benefits.

    2010-10-01

    ... values on the initial calculation date of— (A) Expected incurred benefits in the loss ratio calculation period, to— (B) The total policy reserve at the last day of the loss ratio calculation period: and (ii... Ratio Provisions § 403.253 Calculation of benefits. (a) General provisions. (1) Except as provided...

  10. Accurate calculation of the pKa of trifluoroacetic acid using high-level ab initio calculations

    Namazian, Mansoor; Zakery, Maryam; Noorbala, Mohammad R.; Coote, Michelle L.

    2008-01-01

    The pKa value of trifluoroacetic acid has been successfully calculated using high-level ab initio methods such as G3 and CBS-QB3. Solvation energies have been calculated using the CPCM continuum model of solvation at the HF and B3-LYP levels of theory with various basis sets. Excellent agreement with experiment (to within 0.4 pKa units) was obtained using CPCM solvation energies at the B3-LYP/6-31+G(d) level (or larger) in conjunction with CBS-QB3 or G3 gas-phase energies of trifluoroacetic acid and its anion.

  11. Exact Calculation of Antiferromagnetic Ising Model on an Inhomogeneous Surface Recursive Lattice to Investigate Thermodynamics and Glass Transition on Surface/Thin Film

    Huang, Ran; Gujrati, Purushottam D.

    2017-01-01

    An inhomogeneous 2-dimensional recursive lattice formed by planar elements has been designed to investigate the thermodynamics of Ising spin system on the surface/thin film. The lattice is constructed as a hybrid of partial Husimi square lattice representing the bulk and 1D single bonds representing the surface. Exact calculations can be achieved with the recursive property of the lattice. The model has an anti-ferromagnetic interaction to give rise to an ordered phase identified as crystal, and a solution with higher energy to represent the amorphous/metastable phase. Free energy and entropy of the ideal crystal and supercooled liquid state of the model on the surface are calculated by the partial partition function. By analyzing the free energies and entropies of the crystal and supercooled liquid state, we are able to identify the melting and ideal glass transition on the surface. The results show that due to the variation of coordination number, the transition temperatures on the surface decrease significantly compared to the bulk system. Our calculation qualitatively agrees with both experimental and simulation works on the thermodynamics of surfaces and thin films conducted by others. Interactions between particles farther than the nearest neighbor distance are taken into consideration, and their effects are investigated. Supported by the National Natural Science Foundation of China under Grant No. 11505110, the Shanghai Pujiang Talent Program under Grant No. 16PJ1431900, and the China Postdoctoral Science Foundation under Grant No. 2016M591666
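
    A minimal sketch of exact (brute-force) thermodynamics for an antiferromagnetic Ising model, here on a tiny periodic square lattice rather than the paper's recursive Husimi/surface hybrid; the lattice size, coupling and temperature are illustrative.

```python
# Exact enumeration of a 4x4 periodic antiferromagnetic Ising lattice:
# Z = sum over all spin configurations of exp(-E/T), then F = -T ln Z and
# S = (U - F)/T. Parameters are illustrative (J < 0 is antiferromagnetic).
import itertools, math

L, J, T = 4, -1.0, 2.0
beta = 1.0 / T
N = L * L

def energy(spins):
    e = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i * L + j]
            e -= J * s * spins[((i + 1) % L) * L + j]   # neighbour in the next row
            e -= J * s * spins[i * L + (j + 1) % L]     # neighbour in the next column
    return e

energies = [energy(s) for s in itertools.product((-1, 1), repeat=N)]
weights = [math.exp(-beta * e) for e in energies]
Z = sum(weights)
U = sum(e * w for e, w in zip(energies, weights)) / Z
F = -T * math.log(Z)
S = (U - F) / T
print("free energy per spin:", F / N, "entropy per spin:", S / N)
```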

  12. FIESTA 2: Parallelizeable multiloop numerical calculations

    Smirnov, A. V.; Smirnov, V. A.; Tentyukov, M.

    2011-03-01

    The program FIESTA has been completely rewritten. Now it can be used not only as a tool to evaluate Feynman integrals numerically, but also to expand Feynman integrals automatically in limits of momenta and masses with the use of sector decompositions and Mellin-Barnes representations. Other important improvements to the code are complete parallelization (even to multiple computers), high-precision arithmetic (allowing the calculation of integrals that could not be evaluated before), new integrators, Speer sectors as a strategy, and the possibility of evaluating more general parametric integrals. Program summary: Program title: FIESTA 2. Catalogue identifier: AECP_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECP_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL version 2. No. of lines in distributed program, including test data, etc.: 39 783. No. of bytes in distributed program, including test data, etc.: 6 154 515. Distribution format: tar.gz. Programming language: Wolfram Mathematica 6.0 (or higher) and C. Computer: From a desktop PC to a supercomputer. Operating system: Unix, Linux, Windows, Mac OS X. Has the code been vectorised or parallelized?: Yes, the code has been parallelized for use on multi-kernel computers as well as clusters via Mathlink over the TCP/IP protocol. The program can work successfully with a single processor; however, it is ready to work in a parallel environment, and the use of multi-kernel processors and multi-processor computers significantly speeds up the calculation; on clusters the calculation speed can be improved even further. RAM: Depends on the complexity of the problem. Classification: 4.4, 4.12, 5, 6.5. Catalogue identifier of previous version: AECP_v1_0. Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 735. External routines: QLink [1], Cuba library [2], MPFR [3]. Does the new version supersede the previous version?: Yes. Nature of problem: The sector

  13. Improvement of Neutronics Calculation Methods for Fast Reactors

    Takeda, Toshikazu

    2011-01-01

    To accurately estimate neutronics properties of fast reactors, particularly the Japan Sodium-cooled Fast Reactor of 1,500 MW electric, calculational methods are being improved in Japan. This paper describes the planning and the ongoing development of the neutronics calculation methods in the field of 1) assembly calculations including the calculations of effective cross sections, 2) core calculations and 3) uncertainty evaluation and uncertainty reduction.

  14. Iron diffusion from first principles calculations

    Wann, E.; Ammann, M. W.; Vocadlo, L.; Wood, I. G.; Lord, O. T.; Brodholt, J. P.; Dobson, D. P.

    2013-12-01

    The cores of Earth and other terrestrial planets are made up largely of iron [1], and it is therefore very important to understand iron's physical properties. Chemical diffusion is one such property and is central to many processes, such as crystal growth and viscosity. Debate still surrounds the explanation for the seismologically observed anisotropy of the inner core [2], and hypotheses include convection [3], anisotropic growth [4] and dendritic growth [5], all of which depend on diffusion. In addition to this, the main deformation mechanism at the inner-outer core boundary is believed to be diffusion creep [6]. It is clear, therefore, that to gain a comprehensive understanding of the core, a thorough understanding of diffusion is necessary. The extremely high pressures and temperatures of the Earth's core make experiments at these conditions a challenge. Low-temperature and low-pressure experimental data must be extrapolated across a very wide gap to reach the relevant conditions, resulting in very poorly constrained values for diffusivity and viscosity. In addition to these dangers of extrapolation, preliminary results show that magnetisation plays a major role in the activation energies for diffusion at low pressures, therefore creating a breakdown in homologous scaling to high pressures. First principles calculations provide a means of investigating diffusivity at core conditions, have already been shown to be in very good agreement with experiments [7], and will certainly provide a better estimate for diffusivity than extrapolation. Here, we present first principles simulations of self-diffusion in solid iron for the FCC, BCC and HCP structures at core conditions in addition to low-temperature and low-pressure calculations relevant to experimental data. 1. Birch, F. Density and composition of mantle and core. Journal of Geophysical Research 69, 4377-4388 (1964). 2. Irving, J. C. E. & Deuss, A. Hemispherical structure in inner core velocity anisotropy. Journal of Geophysical
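
    The extrapolation danger mentioned above can be made concrete with a simple Arrhenius-type model, D = D0 exp(-(E + PV)/kT); the activation parameters in the sketch are placeholders, not results from these calculations.

      # Minimal sketch of the Arrhenius-type extrapolation the abstract warns about:
      # D = D0 * exp(-(E + P*V) / (k_B * T)).  The activation parameters below are
      # placeholders for illustration, not values for iron from this study.
      import math

      K_B = 8.617333e-5            # Boltzmann constant, eV/K

      def diffusivity(T, P_GPa, D0=1e-6, E_eV=3.0, V_cm3_mol=0.5):
          # convert activation volume from cm^3/mol to eV/GPa per atom
          V_eV_per_GPa = V_cm3_mol * 1e-6 * 1e9 / (1.602177e-19 * 6.022141e23)
          return D0 * math.exp(-(E_eV + P_GPa * V_eV_per_GPa) / (K_B * T))

      print(diffusivity(T=1000.0, P_GPa=0.0))     # laboratory-like conditions
      print(diffusivity(T=6000.0, P_GPa=330.0))   # core-like conditions (illustrative only)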

  15. CALCULATION OF COMPANY COSTS THROUGH THE DIRECT-COSTING CALCULATION METHOD

    Florin-Constantin DIMA

    2013-06-01

    The cost of production has as its starting point the purchase cost of raw materials and consumables, as well as their processing cost, and the calculation of the production cost involves complex aspects. This article is based on the two major concepts of cost calculation, namely the concept of full costs and the concept of partial costs, and it analyses the direct-costing calculation method. Calculation methods need to be developed that ensure rapid determination of the cost of production and establish a broad spectrum of indicators providing the information needed to make decisions that streamline a business activity; the direct-costing method serves this purpose. The direct-costing method appeared in the U.S. for the first time in 1934 (applied by Jonathan Harris and G. Charter Harrison). Subsequently, this method was applied in European countries (England, France, Germany, etc.). We focused on this method because it is considered a modern costing method. Therefore, we analyzed both the advantages and the limitations of the method in question.

  16. There Is Time for Calculation in Speed Chess, and Calculation Accuracy Increases With Expertise.

    Chang, Yu-Hsuan A; Lane, David M

    2016-01-01

    The recognition-action theory of chess skill holds that expertise in chess is due primarily to the ability to recognize familiar patterns of pieces. Despite its widespread acclaim, empirical evidence for this theory is indirect. One source of indirect evidence is that there is a high correlation between speed chess and standard chess. Assuming that there is little or no time for calculation in speed chess, this high correlation implies that calculation is not the primary factor in standard chess. Two studies were conducted analyzing 100 games of speed chess. In Study 1, we examined the distributions of move times, and the key finding was that players often spent considerable time on a few moves. Moreover, stronger players were more likely than weaker players to do so. Study 2 examined skill differences in calculation by examining poor moves. The stronger players made proportionally fewer blunders (moves that a 2-ply search would have revealed to be errors). Overall, the poor moves made by the weaker players would have required a less extensive search to be revealed as poor moves than the poor moves made by the stronger players. Apparently, the stronger players are searching deeper and more accurately. These results are difficult to reconcile with the view that speed chess does not allow players time to calculate extensively and call into question the assertion that the high correlation between speed chess and standard chess supports recognition-action theory.

  17. Millimeter wave imaging system modeling: spatial frequency domain calculation versus spatial domain calculation.

    Qi, Feng; Tavakol, Vahid; Ocket, Ilja; Xu, Peng; Schreurs, Dominique; Wang, Jinkuan; Nauwelaers, Bart

    2010-01-01

    Active millimeter wave imaging systems have become a promising candidate for indoor security applications and industrial inspection. However, there is a lack of simulation tools at the system level. We introduce and evaluate two modeling approaches that are applied to active millimeter wave imaging systems. The first approach originates in Fourier optics and concerns the calculation in the spatial frequency domain. The second approach is based on wave propagation and corresponds to calculation in the spatial domain. We compare the two approaches in the case of both rough and smooth objects and point out that the spatial frequency domain calculation may suffer from a large error in amplitude of 50% in the case of rough objects. The comparison demonstrates that the concepts of point-spread function and f-number should be applied with careful consideration in coherent millimeter wave imaging systems. In the case of indoor applications, the near-field effect should be considered, and this is included in the spatial domain calculation.
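
    To make the two computation routes concrete, the toy one-dimensional sketch below applies a point-spread function once by multiplication in the spatial-frequency domain and once by direct convolution in the spatial domain; the object and PSF are invented, and for this idealized shift-invariant model the two routes coincide, unlike the rough-object cases discussed in the abstract, where the physics omitted by the frequency-domain shortcut causes the reported amplitude errors.

      # Toy 1-D comparison of the two modeling routes: (a) spatial-frequency-domain
      # calculation and (b) spatial-domain calculation, both implementing the same
      # circular convolution with a point-spread function.  Object and PSF are made up.
      import numpy as np

      n = 256
      x = np.arange(n) - n // 2
      obj = (np.abs(x) < 20).astype(float)        # simple object reflectivity profile (a strip)
      psf = np.exp(-(x / 6.0) ** 2)
      psf /= psf.sum()

      # (a) spatial-frequency-domain calculation
      img_freq = np.fft.ifft(np.fft.fft(obj) * np.fft.fft(psf)).real
      # (b) spatial-domain calculation (circular convolution for a fair comparison)
      img_spat = np.array([np.sum(obj * np.roll(psf[::-1], i + 1)) for i in range(n)])

      print(np.max(np.abs(img_freq - img_spat)))  # agreement to machine precision here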

  18. NEXAFS multiple scattering calculations of KO2

    M.Pedio; Z.Y.Wu; M.Benfatto; A.Mascaraque; E.Michel; C.Crotti; M.Pel

    2001-01-01

    For many years the oxidation of alkali metals has attracted much interest due to the catalytic properties of metal promoters and the simple electronic structure of alkali atoms. The alkali-oxides phase diagram indicates that the interaction of oxygen with alkali metals can lead to the formation of different atomic oxygen ions and molecular O2- and O22- ions. Potassium superoxide has been prepared in situ and high resolution O K-edge absorption NEXAFS spectra have been measured at the VUV beamline at the ELETTRA facility. The experimental data have been analyzed by a multiple scattering approach, deriving many geometrical and electronic parameters, showing that the structure is of the KO2 type with an O-O distance of about 1.35 Å and that the transition involving the single π molecular empty state of the superoxide O2 anion has a fine structure. Multiple scattering self-consistent calculation indicates that the bond between the oxygen anion and the K atom is totally ionic and that the fine structure is essentially due to solid-state effects.

  19. Source apportionment using reconstructed mass calculations.

    Siddique, Naila; Waheed, Shahida

    2014-01-01

    A long-term study was undertaken to investigate the air quality of the Islamabad/Rawalpindi area. In this regard fine and coarse particulate matter were collected from 4 sites in the Islamabad/Rawalpindi region from 1998 to 2010 using Gent samplers and polycarbonate filters and analyzed for their elemental composition using the techniques of Neutron Activation Analysis (NAA), Proton Induced X-ray Emission/Proton Induced Gamma-ray Emission (PIXE/PIGE) and X-ray Fluorescence (XRF) Spectroscopy. The elemental data along with the gravimetric measurements and black carbon (BC) results obtained by reflectance measurement were used to approximate or reconstruct the particulate mass (RCM) by estimation of pseudo sources such as soil, smoke, sea salt, sulfate and black carbon or soot. This simple analysis shows that if the analytical technique used does not measure important major elements then the data will not be representative of the sample composition and cannot be further utilized for source apportionment studies or to perform transboundary analysis. In this regard PIXE/PIGE and XRF techniques that can provide elemental compositional data for most of the major environmentally important elements appear to be more useful as compared to NAA. Therefore %RCM calculations for such datasets can be used as a quality assurance (QA) measure to treat data prior to application of chemometrical tools such as factor analysis (FA) or cluster analysis (CA).
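
    A reconstructed-mass check of the kind described can be sketched as below; the pseudo-source formulas use IMPROVE-type coefficients that are common in the literature, and both they and the sample values are assumptions rather than data from this study.

      # Hedged sketch of a reconstructed-mass (RCM) quality-assurance check.  The
      # pseudo-source formulas use IMPROVE-type coefficients commonly seen in the
      # literature; the exact formulas and element set in this paper may differ.
      def reconstructed_mass(e):
          # e: elemental concentrations in ng/m^3
          soil = 2.20*e["Al"] + 2.49*e["Si"] + 1.63*e["Ca"] + 2.42*e["Fe"] + 1.94*e["Ti"]
          sulfate = 4.125 * e["S"]               # assumes S is present as (NH4)2SO4
          sea_salt = 2.54 * e["Na"]
          return soil + sulfate + sea_salt + e["BC"]

      sample = {"Al": 800.0, "Si": 2100.0, "Ca": 600.0, "Fe": 700.0, "Ti": 60.0,
                "S": 1500.0, "Na": 300.0, "BC": 4000.0}   # illustrative values only
      gravimetric = 25000.0                                # ng/m^3, illustrative
      rcm = reconstructed_mass(sample)
      print(rcm, 100.0 * rcm / gravimetric)                # %RCM as a QA indicator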

  20. New unifying procedure for PC index calculations.

    Stauning, P.

    2012-04-01

    The Polar Cap (PC) index is a controversial topic within the IAGA scientific community. Since 1997, discussions have ensued about whether the index should be endorsed as an official IAGA index. Currently there are three separate PC index versions constructed from the different procedures used at the three institutes: the Arctic and Antarctic Research Institute (AARI), the Danish Meteorological Institute (DMI), and the Danish National Space Institute (DTU Space). It is demonstrated in this presentation that two consistent unifying procedures can be built from the best elements of the three different versions. One procedure uses a set of coefficients aimed at the calculation of final PC index values to be accepted by IAGA. The other procedure uses coefficients aimed at on-line real-time production of preliminary PC index values for Space Weather monitoring applications. For each of the two cases the same procedure is used for the northern (PCN) and the southern (PCS) polar cap indices, and the derived PCN and PCS coefficients are similar.

  1. Recommendations for Insulin Dose Calculator Risk Management

    2014-01-01

    Several studies have shown the usefulness of an automated insulin dose bolus advisor (BA) in achieving improved glycemic control for insulin-using diabetes patients. Although regulatory agencies have approved several BAs over the past decades, these devices are not standardized in their approach to dosage calculation and include many features that may introduce risk to patients. Moreover, there is no single standard of care for diabetes worldwide and no guidance documents for BAs, specifically. Given the emerging and more stringent regulations on software used in medical devices, the approval process is becoming more difficult for manufacturers to navigate, with some manufacturers opting to remove BAs from their products altogether. A comprehensive literature search was performed, including publications discussing: diabetes BA use and benefit, infusion pump safety and regulation, regulatory submissions, novel BAs, and recommendations for regulation and risk management of BAs. Also included were country-specific and international guidance documents for medical device, infusion pump, medical software, and mobile medical application risk management and regulation. No definitive worldwide guidance exists regarding risk management requirements for BAs, specifically. However, local and international guidance documents for medical devices, infusion pumps, and medical device software offer guidance that can be applied to this technology. In addition, risk management exercises that are algorithm-specific can help prepare manufacturers for regulatory submissions. This article discusses key issues relevant to BA use and safety, and recommends risk management activities incorporating current research and guidance. PMID:24876550

  2. Calculations of magnetohydrodynamic swirl combustor flowfields

    Gupta, A.K.; Beer, J.H.; Khan, H.; Lilley, D.G.

    1982-09-01

    The objectives of this paper were to theoretically calculate and experimentally verify the fluid mechanics in the second stage of a model MHD swirl combustor, with special emphasis on avoiding boundary-layer separation as the flow turns into the MHD disk generator, and to find the most suitable seed injection point at the entrance to the second stage that will yield a uniform seed concentration at the combustor exit prior to entry into the disk generator. The model combustor is a multiannular swirl burner placed at the exit of the first-stage swirl combustor, which in turn can be used to vary the turbulent shear that arises between the individual swirling concentric annuli. This design permits ultrahigh swirl in the second stage, with swirl vanes (if any) placed outside the very high temperature regions of the combustor in the clean preheated air. The gas burns completely in the second-stage combustor and turns 90 deg into the disk generator along a trumpet-shaped exit module. In this synoptic, results are presented on the fluid mechanics in the trumpet-shaped second-stage exit module, with water as the working fluid.

  3. Wave function calculations in finite nuclei

    Pieper, S.C.

    1993-07-01

    One of the central problems in nuclear physics is the description of nuclei as systems of nucleons interacting via realistic potentials. There are two main aspects of this problem: (1) specification of the Hamiltonian, and (2) calculation of the ground (or excited) states of nuclei with the given interaction. Realistic interactions must contain both two- and three-nucleon potentials, and these potentials have a complicated non-central operator structure consisting, for example, of spin, isospin and tensor dependencies. This structure results in formidable many-body problems in the computation of the ground states of nuclei. At Argonne and Urbana, the authors have been following a program of developing realistic NN and NNN interactions and the methods necessary to compute nuclear properties from variational wave functions suitable for these interactions. The wave functions are used to compute energies, density distributions, charge form factors, structure functions, momentum distributions, etc. Most recently they have set up a collaboration with S. Boffi and M. Raduci (University of Pavia) to compute (e,e'p) reactions.

  4. Wave function calculations in finite nuclei

    Pieper, S.C.

    1993-01-01

    One of the central problems in nuclear physics is the description of nuclei as systems of nucleons interacting via realistic potentials. There are two main aspects of this problem: (1) specification of the Hamiltonian, and (2) calculation of the ground (or excited) states of nuclei with the given interaction. Realistic interactions must contain both two- and three-nucleon potentials, and these potentials have a complicated non-central operator structure consisting, for example, of spin, isospin and tensor dependencies. This structure results in formidable many-body problems in the computation of the ground states of nuclei. At Argonne and Urbana, the authors have been following a program of developing realistic NN and NNN interactions and the methods necessary to compute nuclear properties from variational wave functions suitable for these interactions. The wave functions are used to compute energies, density distributions, charge form factors, structure functions, momentum distributions, etc. Most recently they have set up a collaboration with S. Boffi and M. Raduci (University of Pavia) to compute (e,e'p) reactions.

  5. Detailed opacity calculations for stellar models

    Pain, Jean-Christophe; Gilleron, Franck

    2016-10-01

    We present a state of the art of precise spectral opacity calculations illustrated by stellar applications. The essential role of laboratory experiments to check the quality of the computed data is underlined. We review some X-ray and XUV laser and Z-pinch photo-absorption measurements as well as X-ray emission spectroscopy experiments of hot dense plasmas produced by ultra-high-intensity laser interaction. The measured spectra are systematically compared with the fine-structure opacity code SCO-RCG. Focus is put on iron, due to its crucial role in the understanding of asteroseismic observations of Beta Cephei-type and Slowly Pulsating B stars, as well as in the Sun. For instance, in Beta Cephei-type stars (which should not be confused with Cepheid variables), the iron-group opacity peak excites acoustic modes through the kappa-mechanism. Particular attention is paid to the higher-than-predicted iron opacity measured on Sandia's Z facility at solar interior conditions (boundary of the convective zone). We discuss some theoretical aspects such as orbital relaxation, electron collisional broadening, ionic Stark effect, oscillator-strength sum rules, photo-ionization, or the "filling-the-gap" effect of highly excited states.

  6. Fast Electron Beam Simulation and Dose Calculation

    Trindade, A; Peralta, L; Lopes, M C; Alves, C; Chaves, A

    2003-01-01

    A flexible multiple source model capable of fast reconstruction of clinical electron beams is presented in this paper. A source model considers multiple virtual sources emulating the effect of accelerator head components. A reference configuration (10 MeV and 10x10 cm2 field size) for a Siemens KD2 linear accelerator was simulated in full detail using GEANT3 Monte Carlo code. Our model allows the reconstruction of other beam energies and field sizes as well as other beam configurations for similar accelerators using only the reference beam data. Electron dose calculations were performed with the reconstructed beams in a water phantom and compared with experimental data. An agreement of 1-2% / 1-2 mm was obtained, equivalent to the accuracy of full Monte Carlo accelerator simulation. The source model reduces accelerator simulation CPU time by a factor of 7500 relative to full Monte Carlo approaches. The developed model was then interfaced with DPM, a fast radiation transport Monte Carlo code for dose calculati...

  7. Integrated burnup calculation code system SWAT

    Suyama, Kenya; Hirakawa, Naohiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Iwasaki, Tomohiko

    1997-11-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. It enables us to analyze burnup problems using a neutron spectrum that depends on the irradiation environment, combining SRAC, the Japanese standard thermal reactor analysis code system, with ORIGEN2, a burnup code widely used all over the world. SWAT builds an effective cross-section library based on SRAC results and performs the burnup analysis with ORIGEN2 using that library. SRAC and ORIGEN2 can be called as external modules. SWAT has an original cross-section library based on JENDL-3.2 and libraries of fission yield and decay data prepared from the JNDC FP Library, second version. Using these libraries, users can employ the latest data in SWAT calculations in addition to the effective cross sections prepared by SRAC. Users can also create an original ORIGEN2 library from the SWAT output file. This report presents the concept and user's manual of SWAT. (author)

  8. Set point calculations for RAPID project

    HICKMAN, G.L.

    1999-08-27

    The Respond and Pump in Days (RAPID) project was initiated to pump part of the contents of tank 241-SY-101 into tank 241-SY-102. This document establishes the basis for all set points and ranges used in the RAPID project. There are 23 instrument and/or control loops utilized by the RAPID project. These range from the simple indication loop with two components to complex indication, control, and alarm loops with up to eight components. Several loops include safety class elements. This document is intended to describe the loops in full and to provide the basis for each of the element setpoints, ranges and accuracies identified in the RAPID project Master Equipment List (MEL). These values are developed in two steps. First, the base value is identified with reference to the supporting document providing that value. Second, a spreadsheet calculation is performed on each element and loop, utilizing a standard methodology described below, that takes into account known and suspected variance in output and establishes the actual setpoint used on a given element. The results of the spreadsheet are reported directly in this document.

  9. AJAC: Atomic data calculation tool in Python

    Amani Tahat; Jordi Marti; Kaher Tahat; Ali Khwaldeh

    2013-01-01

    In this work, new features and extensions of a currently used online atomic database management system are reported. A multiplatform flexible computation package is added to the present system to allow the calculation of various atomic radiative and collisional processes, based on simplifying the use of some existing atomic codes adopted from the literature. The interaction between users and data is facilitated by a rather extensive Python graphical user interface working online, which can be installed on personal computers of different classes. In particular, this study gives an overview of the use of one of the package's models (i.e., the electron impact collisional excitation model). The accuracy of the computation of electron impact collisional excitation in the adopted model, which follows the distorted wave approximation approach, is enhanced by implementing the Dirac R-matrix approximation approach. The validity and utility of this approach are presented through a comparison of the current computed results with earlier available theoretical and experimental results. Finally, the source code is made available under the general public license and distributed freely in the hope that it will be useful to a wide community of laboratory and astrophysical plasma diagnostics.

  10. ALGORITHM FOR THE CALCULATION OF GENERALIZED CURVATURES

    K MEZAGHCHA

    2004-06-01

    It is known that a standard algebraic curve with equation f(x, y) = 0 admits a finite number of branches (a number not exceeding the order of f), whose parametrizations can be obtained in particular from the iterated Goze decomposition. One would like to compute their generalized curvature without determining them explicitly, the notion of generalized curvature having been the subject of earlier work published in the proceedings of the University of Cagliari (Italy) [12]. In this article, we propose to establish for this purpose an algorithm that yields, from the coefficients of f alone, the exhaustive list of the generalized curvatures of all the real branches. The article ends with an example showing the efficiency of the proposed algorithm.

  11. Ab initio calculation of the Hoyle state

    Epelbaum, Evgeny; Lee, Dean; Meißner, Ulf-G

    2011-01-01

    The Hoyle state plays a crucial role in the helium burning of stars heavier than our sun and in the production of carbon and other elements necessary for life. This excited state of the carbon-12 nucleus was postulated by Hoyle [1] as a necessary ingredient for the fusion of three alpha particles to produce carbon at stellar temperatures. Although the Hoyle state was seen experimentally more than a half century ago [2,3], nuclear theorists have not yet uncovered the nature of this state from first principles. In this letter we report the first ab initio calculation of the low-lying states of carbon-12 using supercomputer lattice simulations and a theoretical framework known as effective field theory. In addition to the ground state and excited spin-2 state, we find a resonance at -85(3) MeV with all of the properties of the Hoyle state and in agreement with the experimentally observed energy. These lattice simulations provide insight into the structure of this unique state and new clues as to the amount of fine...

  12. About uncertainties in practical salinity calculations

    M. Le Menn

    2009-10-01

    Salinity is a quantity computed, in the actual state of the art, from conductivity ratio measurements, knowing temperature and pressure at the time of the measurement and using the Practical Salinity Scale algorithm of 1978 (PSS-78), which gives practical salinity values S. The uncertainty expected on PSS-78 values is ±0.002, but nothing has ever been detailed about the method used to work out this uncertainty or the sources of error to include in this calculation. Following a guide edited by the Bureau International des Poids et Mesures (BIPM), this paper assesses, by two independent methods, the uncertainties of salinity values obtained from a laboratory salinometer and from Conductivity-Temperature-Depth (CTD) measurements after laboratory calibration of a conductivity cell. The results show that the part due to the fits of the PSS-78 relations is sometimes as significant as that of the instruments. This is particularly the case with CTD measurements, where correlations between the variables contribute to a large decrease of the uncertainty on S, even when the expanded uncertainties on conductivity cell calibrations are well above 0.002 mS/cm. The relations given in this publication, obtained with the normalized GUM method, allow a real analysis of the uncertainty sources, and they can be used in a more general way with instruments having different specifications.
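
    The GUM-style propagation referred to above amounts to combining sensitivity coefficients with input uncertainties; the sketch below does this numerically for a toy salinity function standing in for PSS-78, with assumed input uncertainties.

      # Hedged sketch of GUM-style uncertainty propagation for the uncorrelated case,
      # u_c^2 = sum_i (dS/dx_i)^2 u(x_i)^2, using numerical partial derivatives.
      # The toy function stands in for the PSS-78 algorithm, which is not reproduced here.
      import math

      def toy_salinity(R, T, p):
          # placeholder model, NOT PSS-78
          return 35.0 * R ** 1.5 * (1.0 - 1e-4 * (T - 15.0)) * (1.0 + 1e-6 * p)

      def combined_uncertainty(f, x, u, h=1e-6):
          """x: central values, u: standard uncertainties of each input."""
          var = 0.0
          for i in range(len(x)):
              xp, xm = list(x), list(x)
              xp[i] += h
              xm[i] -= h
              dfdx = (f(*xp) - f(*xm)) / (2 * h)
              var += (dfdx * u[i]) ** 2
          return math.sqrt(var)

      # conductivity ratio R, temperature T (degC), pressure p (dbar), assumed uncertainties
      print(combined_uncertainty(toy_salinity, [1.0, 15.0, 0.0], [2e-5, 2e-3, 1.0]))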

  13. The matrix method to calculate page rank

    H. Barboucha, M. Nasri

    2014-06-01

    Choosing the right keywords is relatively easy, whereas getting a high PageRank is more complicated. The PageRank index is what defines the position in the result pages of search engines (for Google of course, but the other engines now use more or less the same kind of algorithm). It is therefore very important to understand how this type of algorithm functions in order to hope to appear on the first page of results (the only page read in 95% of cases) or at least to be among the first. We propose in this paper to clarify the operation of this algorithm using a matrix method and a JavaScript program enabling readers to experiment with this type of analysis. It is of course a simplified version, but it can add value to a website, achieve a high ranking in the search results and reach a larger customer base. The aim is to present an algorithm to calculate the relevance of each page. This is in fact a mathematical algorithm based on a web graph. This graph is formed of all the web pages, which are modeled by nodes, and hyperlinks, which are modeled by arcs.
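
    The matrix formulation the authors implement in JavaScript can be sketched in a few lines of Python as a damped power iteration on a column-stochastic link matrix; the four-page graph below is invented for illustration.

      # Minimal sketch of the matrix (power-iteration) form of PageRank described above,
      # written in Python rather than the JavaScript used by the authors.
      import numpy as np

      def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
          n = adj.shape[0]
          # column-stochastic transition matrix; dangling pages link to everyone
          out_deg = adj.sum(axis=0)
          M = np.where(out_deg > 0, adj / np.where(out_deg == 0, 1, out_deg), 1.0 / n)
          r = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              r_new = (1 - damping) / n + damping * M @ r
              if np.abs(r_new - r).sum() < tol:
                  break
              r = r_new
          return r

      # adj[i, j] = 1 if page j links to page i (tiny made-up web graph)
      adj = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
      print(pagerank(adj))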

  14. Calculation of the energetics of chemical reactions

    Dunning, T.H. Jr.; Harding, L.B.; Shepard, R.L.; Harrison, R.J.

    1988-01-01

    To calculate the energetics of chemical reactions we must solve the electronic Schroedinger equation for the molecular conformations of importance for the reactive encounter. Substantial changes occur in the electronic structure of a molecular system as the reaction progresses from reactants through the transition state to products. To describe these changes, our approach includes the following three elements: the use of multiconfiguration self-consistent field wave functions to provide a consistent zero-order description of the electronic structure of the reactants, transition state, and products; the use of configuration interaction techniques to describe electron correlation effects needed to provide quantitative predictions of the reaction energetics; and the use of large, optimized basis sets to provide the flexibility needed to describe the variations in the electronic distributions. With this approach we are able to study reactions involving as many as 5--6 atoms with errors of just a few kcal/mol in the predicted reaction energetics. Predictions to chemical accuracy, i.e., to 1 kcal/mol or less, are not yet feasible, although continuing improvements in both the theoretical methodology and computer technology suggest that this will soon be possible, at least for reactions involving small polyatomic species. 4 figs.

  15. Transmission line sag calculations using interval mathematics

    Shaalan, H. [Institute of Electrical and Electronics Engineers, Washington, DC (United States); US Merchant Marine Academy, Kings Point, NY (United States)]

    2007-07-01

    Electric utilities are facing the need for additional generating capacity, new transmission systems and more efficient use of existing resources. As such, there are several uncertainties associated with utility decisions. These uncertainties include future load growth, construction times and costs, and performance of new resources. Regulatory and economic environments also present uncertainties. Uncertainty can be modeled based on a probabilistic approach where probability distributions for all of the uncertainties are assumed. Another approach to modeling uncertainty is referred to as unknown but bounded. In this approach, the upper and lower bounds on the uncertainties are assumed without probability distributions. Interval mathematics is a tool for the practical use and extension of the unknown but bounded concept. In this study, the calculation of transmission line sag was used as an example to demonstrate the use of interval mathematics. The objective was to determine the change in cable length, based on a fixed span and an interval of cable sag values for a range of temperatures. The resulting change in cable length was an interval corresponding to the interval of cable sag values. It was shown that there is a small change in conductor length due to variation in sag based on the temperature ranges used in this study. 8 refs.
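
    The study's unknown-but-bounded idea can be illustrated with the standard parabolic approximation for conductor length, L ≈ S + 8D²/(3S); the span and sag bounds below are illustrative, not the values used in the paper.

      # Hedged sketch of the unknown-but-bounded (interval) idea applied to the standard
      # parabolic sag approximation L ~ S + 8*D^2/(3*S); the span and sag bounds are
      # illustrative, not the study's values.
      def length_interval(span_m, sag_lo_m, sag_hi_m):
          """Conductor length bounds for a fixed span and an interval of sag values."""
          lo = span_m + 8.0 * sag_lo_m ** 2 / (3.0 * span_m)
          hi = span_m + 8.0 * sag_hi_m ** 2 / (3.0 * span_m)
          return lo, hi

      # sag interval corresponding to a range of conductor temperatures (illustrative)
      lo, hi = length_interval(span_m=300.0, sag_lo_m=6.0, sag_hi_m=7.5)
      print(lo, hi, hi - lo)   # the change in cable length is itself an interval width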

  16. Solution of the transport equation in stationary state, in one and two dimensions, for BWR assemblies using nodal methods; Solucion de la ecuacion de transporte en estado estacionario, en 1 y 2 dimensiones, para ensambles tipo BWR usando metodos nodales

    Xolocostli M, J.V

    2002-07-01

    ... In this geometry, nodal continuous and discontinuous schemes were used. For the continuous schemes, only the Bi Quadratic (BiQ) and the Bi Cubic (BiC) were considered. In the case of the discontinuous ones, two nodal schemes were considered, namely the Discontinuous Bi Linear (DBiL) and the Discontinuous Bi Quadratic (DBiQ). The nodal schemes applied use from 4 up to 16 interpolation parameters per cell. These schemes are defined for a set D_c of interpolation parameters and a polynomial space S_h corresponding to each one of the nodal schemes considered. All four of these nodal hybrid schemes were implemented in a computer program called TNHXY, starting from the computer program TNXY developed in previous thesis works. Several subroutines were added to calculate the average neutron flux for each cell and for each energy group, generating two versions, one for the continuous schemes and one for the discontinuous schemes. For this geometry, two benchmark problems of the ANL-7416 document were analyzed. They are 7x7 BWR fuel assemblies, one without a control rod and the other one with a control rod. The computer program was also applied to a MOX assembly proposed by the Nuclear Energy Agency, which is considered a reference problem. The results obtained for the one-dimensional problems using TNX for the effective multiplication factor were compared with the ones obtained with the code ANISN/PC. The TNX code shows a faster convergence, to within four significant figures for the case with no control rod and three significant figures for the case with a control rod (using double precision). These results suggest TNX is a very useful tool for this kind of calculation. For X-Y geometry, the results obtained with TNHXY were compared with those calculated with the code TWOTRAN. To get these results, several spatial (1x1, 2x2, 4x4 per cell) and angular meshes (S_2, S_4, S_6, and S_8) were used. The results for the problem with no control rod were practically the same

  17. Solution of the transport equation in stationary state, in one and two dimensions, for BWR assemblies using nodal methods; Solucion de la ecuacion de transporte en estado estacionario, en 1 y 2 dimensiones, para ensambles tipo BWR usando metodos nodales

    Xolocostli M, J.V

    2002-07-01

    ... In this geometry, nodal continuous and discontinuous schemes were used. For the continuous schemes, only the Bi Quadratic (BiQ) and the Bi Cubic (BiC) were considered. In the case of the discontinuous ones, two nodal schemes were considered, namely the Discontinuous Bi Linear (DBiL) and the Discontinuous Bi Quadratic (DBiQ). The nodal schemes applied use from 4 up to 16 interpolation parameters per cell. These schemes are defined for a set D_c of interpolation parameters and a polynomial space S_h corresponding to each one of the nodal schemes considered. All four of these nodal hybrid schemes were implemented in a computer program called TNHXY, starting from the computer program TNXY developed in previous thesis works. Several subroutines were added to calculate the average neutron flux for each cell and for each energy group, generating two versions, one for the continuous schemes and one for the discontinuous schemes. For this geometry, two benchmark problems of the ANL-7416 document were analyzed. They are 7x7 BWR fuel assemblies, one without a control rod and the other one with a control rod. The computer program was also applied to a MOX assembly proposed by the Nuclear Energy Agency, which is considered a reference problem. The results obtained for the one-dimensional problems using TNX for the effective multiplication factor were compared with the ones obtained with the code ANISN/PC. The TNX code shows a faster convergence, to within four significant figures for the case with no control rod and three significant figures for the case with a control rod (using double precision). These results suggest TNX is a very useful tool for this kind of calculation. For X-Y geometry, the results obtained with TNHXY were compared with those calculated with the code TWOTRAN. To get these results, several spatial (1x1, 2x2, 4x4 per cell) and angular meshes (S_2, S_4, S_6, and S_8) were used. The results for the problem with no control rod were practically the same

  18. Real time UAV autonomy through offline calculations

    Jung, Sunghun

    Two or three dimensional mission plans for a single or a group of hover or fixed wing UAVs are generated. The mission plans can largely be separated into seven main parts. Firstly, the Region Growing algorithm is used to generate a map from 2D or 3D images. Secondly, the map is analyzed to separate each blocks using vertices of blocks and seven filtering steps. Thirdly, the Trapezoidal map algorithm is used to convert the map into a traversability graph. Fourthly, this process also filters out paths that are not traversable. That is, nodes located inside the blocks and too closely located nodes are filtered out. Fifthly, the Dijkstra algorithm is used to calculate the shortest path from a starting point to a goal point. Sixthly, the 1D Optimal Control algorithm is applied to manipulate the velocity and acceleration of the UAVs efficiently. Basically, the UAVs accelerates at one graph node and maintains a constant velocity and decelerates before reaching the next graph node. Lastly, Traveling Salesman Problem Method (TSP) algorithm is used to calculate the shortest path to search the whole region. After this discretization of space and time, it becomes possible to solve several autonomous mission planning problems. We focus on one of the most difficult problems: coordinated search. This is a multiple Traveling Salesman Problem (mTSP). We solve it by decomposing the search region and solving TSPs for each vehicle searching a sub-region. The mTSP is generally used when there are more than one salesman is used. In addition to the four main parts, there are three minor parts which support the main parts. Firstly, Target Detection algorithm is generated to detect a target located near the UAVs' path. A picture of the desired target is inserted into the algorithm before UAVs launch. Using the Scale-Invariant Transform Feature (SIFT) algorithm, a target with a specific shape can be detected. Secondly, Tracking algorithm is generated to manipulate UAVs to follow targets

  19. Neutronic calculations for a final focus system

    Mainardi, E. E-mail: enrico@nuc.berkeley.edu; Premuda, F.; Lee, E

    2001-05-21

    For heavy-ion fusion and for 'liquid-protected' reactor designs such as HYLIFE-II (Moir et al., Fusion Technol. 25 (1994); HYLIFE-II Progress Report, UCID-21816, 4-82-100), a mixture of molten salts made of F-19, Li-6, Li-7 and Be-9 called flibe allows highly compact target chambers. Smaller chambers will have lower costs and will allow the final-focus magnets to be closer to the target, with a decreased size of the focal spot and of the driver, as well as drastically reduced costs of IFE electricity. Consequently, the superconducting coils of the magnets closer to the chamber will suffer higher radiation damage, though they can stand only a certain amount of deposited energy before quenching. The scope of our calculations is essentially the total energy deposited on the magnetic lens system by fusion neutrons and induced gamma-rays. Such a study is important for the design of the final focus system itself from the neutronic point of view and indicates some guidelines for a design with six magnets in the beam line. The entire chamber consists of 192 beam lines to provide access for the heavy ions that will implode the pellet. A 3-D transport calculation of the radiation penetrating through ducts, which takes into account the complexity of the system, requires Monte Carlo methods. The development of efficient and precise models for geometric representation and nuclear analysis is necessary. The parameters are optimized thanks to an accurate analysis of six geometrical models that are developed starting from the simplest. Different configurations are examined employing TART 98 (D.E. Cullen, Lawrence Livermore National Laboratory, UCRL-ID-126455, Rev. 1, November 1997) and MCNP 4B (Briesmeister (Ed.), Version 4B, La-12625-m, March 1997, Los Alamos National Laboratory), two Monte Carlo codes for neutrons and photons. The quantities analyzed include: energy deposited by neutrons and gamma photons, values of the total fluence integrated on the whole

  20. Glass dissolution rate measurement and calculation revisited

    Fournier, Maxime; Ull, Aurélien; Nicoleau, Elodie; Inagaki, Yaohiro; Odorico, Michaël; Frugier, Pierre; Gin, Stéphane

    2016-08-01

    Aqueous dissolution rate measurements of nuclear glasses are a key step in the long-term behavior study of such waste forms. These rates are routinely normalized to the glass surface area in contact with solution, and experiments are very often carried out using crushed materials. Various methods have been implemented to determine the surface area of such glass powders, leading to differing values, with the notion of the reactive surface area of crushed glass remaining vague. In this study, around forty initial dissolution rate measurements were conducted following static and flow rate (SPFT, MCFT) measurement protocols at 90 °C, pH 10. The international reference glass (ISG), in the form of powders with different particle sizes and polished monoliths, and soda-lime glass beads were examined. Although crushed glass grains clearly cannot be assimilated with spheres, it is when using the sample's geometric surface (Sgeo) that the rates measured on powders are closest to those found for monoliths. Overestimation of the reactive surface when using the BET model (SBET) may be due to small physical features at the atomic scale, which contribute to the BET surface area but not to the AFM surface area. Such features are very small compared with the thickness of water ingress in glass (a few hundred nanometers) and should not be considered in rate calculations. With an SBET/Sgeo ratio of 2.5 ± 0.2 for ISG powders, it is shown here that rates measured on powders and normalized to Sgeo should be divided by 1.3, and rates normalized to SBET should be multiplied by 1.9, in order to be compared with rates measured on a monolith. The use of glass beads indicates that the geometric surface gives a good estimation of the glass reactive surface if the sample geometry can be precisely described. Although the data clearly show the repeatability of measurements, results must be given with a high uncertainty of approximately ±25%.
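
    The normalizations discussed above can be sketched as follows: a geometric surface estimate treating the powder as monodisperse spheres, an SPFT-style rate normalized to that surface, and the paper's reported factor of 1.3 for comparing powder rates with monolith rates; all numerical inputs are placeholders.

      # Hedged sketch of the surface normalizations discussed above.  Input values are
      # illustrative placeholders, not data from the study.
      def geometric_surface(mass_g, density_g_cm3, diameter_cm):
          """Geometric surface (cm^2) of a powder treated as monodisperse spheres: S = 6 m / (rho d)."""
          return 6.0 * mass_g / (density_g_cm3 * diameter_cm)

      def normalized_rate(conc_g_m3, flow_m3_d, surface_m2, mass_fraction):
          """Element release rate normalized to surface area, g m^-2 d^-1 (SPFT-style)."""
          return conc_g_m3 * flow_m3_d / (surface_m2 * mass_fraction)

      S_cm2 = geometric_surface(mass_g=0.5, density_g_cm3=2.5, diameter_cm=50e-4)   # ~50 um grains
      r_geo = normalized_rate(conc_g_m3=1.0, flow_m3_d=2e-4,
                              surface_m2=S_cm2 * 1e-4, mass_fraction=0.23)
      # applying the paper's reported conversion for powder rates normalized to Sgeo
      print(S_cm2, r_geo, r_geo / 1.3)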

  1. CALCULATING ECONOMIC RISK AFTER HANFORD CLEANUP

    Scott, M.J.

    2003-02-27

    Since late 1997, researchers at the Hanford Site have been engaged in the Groundwater Protection Project (formerly, the Groundwater/Vadose Zone Project), developing a suite of integrated physical and environmental models and supporting data to trace the complex path of Hanford legacy contaminants through the environment for the next thousand years, and to estimate corresponding environmental, human health, economic, and cultural risks. The linked set of models and data is called the System Assessment Capability (SAC). The risk mechanism for economics consists of ''impact triggers'' (sequences of physical and human behavior changes in response to, or resulting from, human health or ecological risks), and processes by which particular trigger mechanisms induce impacts. Economic impacts stimulated by the trigger mechanisms may take a variety of forms, including changes in either costs or revenues for economic sectors associated with the affected resource or activity. An existing local economic impact model was adapted to calculate the resulting impacts on output, employment, and labor income in the local economy (the Tri-Cities Economic Risk Model or TCERM). The SAC researchers ran a test suite of 25 realization scenarios for future contamination of the Columbia River after site closure for a small subset of the radionuclides and hazardous chemicals known to be present in the environment at the Hanford Site. These scenarios of potential future river contamination were analyzed in TCERM. Although the TCERM model is sensitive to river contamination under a reasonable set of assumptions concerning reactions of the authorities and the public, the scenarios show low enough future contamination that the impacts on the local economy are small.

  2. The experience of GPU calculations at Lunarc

    Sjöström, Anders; Lindemann, Jonas; Church, Ross

    2011-09-01

    To meet the ever increasing demand for computational speed and the use of ever larger datasets, multi-GPU installations look very tempting. Lunarc and the Theoretical Astrophysics group at Lund Observatory collaborate on a pilot project to evaluate and utilize multi-GPU architectures for scientific calculations. Starting with a small workshop in 2009, continued investigations eventually led to the procurement of the GPU resource Timaeus, which is a four-node eight-GPU cluster with two Nvidia m2050 GPU cards per node. The resource is housed within the larger cluster Platon and shares disk, network and system resources with that cluster. The inauguration of Timaeus coincided with the meeting "Computational Physics with GPUs" in November 2010, hosted by the Theoretical Astrophysics group at Lund Observatory. The meeting comprised a two-day workshop on GPU computing and a two-day science meeting on using GPUs as a tool for computational physics research, with a particular focus on astrophysics and computational biology. Today Timaeus is used by research groups from Lund, Stockholm and Luleå in fields ranging from astrophysics to molecular chemistry. We are investigating the use of GPUs with commercial software packages and user-supplied MPI-enabled codes. Looking ahead, Lunarc will be installing a new cluster during the summer of 2011 which will have a small number of GPU-enabled nodes that will enable us to continue working with the combination of parallel codes and GPU computing. It is clear that the combination of GPUs/CPUs is becoming an important part of high performance computing, and here we will describe what has been done at Lunarc regarding GPU computations and how we will continue to investigate the new and coming multi-GPU servers and how they can be utilized in our environment.

  3. Wear Calculation for Sliding Friction Pairs

    Springis, G.; Rudzitis, J.; Avisane, A.; Leitans, A.

    2014-04-01

    One of the principal objectives of the modern production process is the improvement of the quality level; this also means guaranteeing the required service life of different products and increasing their wear resistance. To perform this task, prediction of the service life of fitted components is of crucial value, since with the development of production technologies and measuring devices it is possible to determine with ever increasing precision the data to be used in analytical calculations. Having studied the prediction theories for the wear process that have been developed over time, which can be classified into definite groups, one can state that each of them has shortcomings that might strongly impair the results, thus undermining the theoretical calculations. The proposed model for wear calculation is based on the application of theories from several branches of science to the description of 3D surface micro-topography, assessing the material's physical and mechanical characteristics, substantiating the regularities in the creation of the material particles separated during the wear process, and taking into consideration definite service conditions of the fittings.

  4. Ab initio calculations to support accurate modelling of the rovibronic spectroscopy calculations of vanadium monoxide (VO)

    McKemmish, Laura K; Tennyson, Jonathan

    2016-01-01

    Accurate knowledge of the rovibronic near-infrared and visible spectra of vanadium monoxide (VO) is very important for studies of cool stellar and hot planetary atmospheres. Here, the required ab initio dipole moment and spin-orbit coupling curves for VO are produced. These data form the basis of a new VO line list considering 13 different electronic states and containing over 277 million transitions. Open-shell transition-metal diatomics are challenging species to model through ab initio quantum mechanics due to the large number of low-lying electronic states, significant spin-orbit coupling, and strong static and dynamic electron correlation. Multi-reference configuration interaction methodologies using orbitals from a complete active space self-consistent field (CASSCF) calculation are the standard technique for these systems. We use different state-specific or minimal-state CASSCF orbitals for each electronic state to maximise the calculation accuracy. The off-diagonal dipole moment controls the intensity...

  5. Algorithms for computer algebra calculations in spacetime; 1, the calculation of curvature

    Pollney, D; Santosuosso, K; Lake, K; Pollney, Denis; Musgrave, Peter; Santosuosso, Kevin; Lake, Kayll

    1996-01-01

    We examine the relative performance of algorithms for the calculation of curvature in spacetime. The classical coordinate component method is compared to two distinct versions of the Newman-Penrose tetrad approach for a variety of spacetimes, and distinct coordinates and tetrads for a given spacetime. Within the system GRTensorII, we find that there is no single preferred approach on the basis of speed. Rather, we find that the fastest algorithm is the one that minimizes the amount of time spent on simplification. This means that arguments concerning the theoretical superiority of an algorithm need not translate into superior performance when applied to a specific spacetime calculation. In all cases it is the global simplification strategy which is of paramount importance. An appropriate simplification strategy can change an intractable problem into one which can be solved essentially instantaneously.

  6. The value of fast low-angle shot 2-dimensional sequence with sliding multi-slice technique in the detection of abdominal metastasis of rectal cancer

    熊斌; Tobias Baumann; 冯敢生; Arnd-Oliver Schaefer; Mathias Langer

    2008-01-01

    Objective: To evaluate the value of a sliding multi-slice (SMS) fast low-angle shot 2-dimensional (FLASH-2D) sequence in the diagnosis of abdominal metastases of rectal cancer. Methods: Data from 15 patients with surgically and pathologically confirmed rectal cancer were retrospectively analyzed by two radiologists. All 15 patients underwent SMS FLASH-2D MRI and multi-slice spiral CT (MSCT) examinations (4 were examined twice), giving 19 paired examinations of the whole abdomen and pelvis. The final consensus of the SMS and MSCT examinations and of all follow-up examinations within 6 months served as the reference standard. Kappa statistics were used to evaluate the agreement between the two methods in diagnosing liver, lymph node and bone metastases of rectal cancer and the agreement between the two observers, and the sensitivities of the two methods for the various metastatic lesions were compared. Results: With SMS, both readers detected 56 of the 60 metastatic lesions, for a sensitivity of 93.33% (56/60); with MSCT, both detected 50 lesions, for a sensitivity of 83.33% (50/60). For liver metastases, the sensitivities of the two readers were 97.44% (38/39) and 100% (39/39) with SMS, and 100% (39/39) for both with MSCT. For lymph node metastases, the sensitivities were 85.71% (12/14) and 71.43% (10/14) with SMS, versus 78.57% (11/14) and 71.43% (10/14) with MSCT. For bone metastases, the sensitivities were 85.71% (6/7) and 100% (7/7) with SMS, versus 0 (0/7) and 14.29% (1/7) with MSCT. Conclusion: The SMS FLASH-2D sequence performs as well as MSCT in the diagnosis of liver and lymph node metastases of rectal cancer, and is clearly more sensitive than MSCT for bone metastases.
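
    The per-lesion sensitivities and kappa statistics quoted above follow standard formulas; the sketch below reproduces the sensitivity arithmetic from the reported counts and shows the Cohen's kappa formula on a purely hypothetical reader-agreement table, since the full table is not given in the record.

      # Sensitivity and Cohen's kappa arithmetic used in reader studies such as this one.
      # The sensitivity uses counts quoted in the abstract (56 of 60 lesions); the 2x2
      # agreement table for kappa is purely hypothetical.
      def sensitivity(true_pos, total_lesions):
          return true_pos / total_lesions

      def cohen_kappa(a, b, c, d):
          """a, b, c, d = counts of (both positive, reader1+/reader2-, reader1-/reader2+, both negative)."""
          n = a + b + c + d
          p_obs = (a + d) / n
          p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
          return (p_obs - p_exp) / (1 - p_exp)

      print(round(sensitivity(56, 60), 4))          # 0.9333, as quoted for SMS
      print(round(cohen_kappa(50, 4, 3, 13), 3))    # hypothetical agreement table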

  7. New method for calculation of integral characteristics of thermal plumes

    Zukowska, Daria; Popiolek, Zbigniew; Melikov, Arsen Krikor

    2008-01-01

    A method for calculation of integral characteristics of thermal plumes is proposed. The method allows for determination of the integral parameters of plumes based on speed measurements performed with omnidirectional low velocity thermoanemometers. The method includes a procedure for calculation...
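
    As a minimal illustration of what an integral plume characteristic is, the sketch below integrates speeds measured on a grid over the plume cross-section to obtain a volume flux; the grid and readings are invented, and the paper's actual procedure (including how omnidirectional speed readings are converted to velocity) is more involved.

      # Toy numerical integration of grid speed measurements over a plume cross-section.
      # Grid spacing and readings are illustrative only.
      import numpy as np

      dx = dy = 0.05                                  # grid spacing, m
      speeds = np.array([[0.05, 0.10, 0.05],
                         [0.10, 0.25, 0.10],
                         [0.05, 0.10, 0.05]])         # omnidirectional speed readings, m/s

      volume_flux = speeds.sum() * dx * dy            # m^3/s, simple midpoint rule
      momentum_like = (speeds ** 2).sum() * dx * dy   # proportional to kinematic momentum flux
                                                      # if speed is taken as vertical velocity
      print(volume_flux, momentum_like)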

  8. Supporting Calculations For Submerged Bed Scrubber Condensate Disposal Preconceptual Study

    Pajunen, A. J.; Tedeschi, A. R.

    2012-09-18

    This document provides supporting calculations for the preparation of the Submerged Bed Scrubber Condensate Disposal Preconceptual Study report. The supporting calculations include equipment sizing, Hazard Category determination, and LAW Melter Decontamination Factor Adjustments.

  9. Calculation base of flooded type evaporators with finned tubes

    Brod, W.; Slipcevic, B.

    1989-03-01

    For the construction of flooded type evaporators with halogen refrigerants, the refrigeration industry uses finned tubes. Equations for the thermodynamic calculation of the apparatus are given and explained with the aid of a calculation example.

  10. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  11. Calculating Minimum Detectable Impacts in Teen Pregnancy Prevention Impact Evaluations

    Lorenzo Moreno; Russell Cole

    2014-01-01

    This brief provides an overview of how researchers can calculate the minimum detectable impacts (MDIs), which are related to power calculations, for Teen Pregnancy Prevention (TPP) evaluations. It describes a tool that evaluators can use for their own MDI calculations, and includes examples that highlight how to use the tool. A technical appendix provides more details on the formulae in the tool that inform MDI calculations.
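
    For the simplest case, an individually randomized two-arm design with no clustering or covariate adjustment, the MDI follows the standard formula (z_alpha + z_power) times the standard error of the impact estimate; the sketch below is a generic illustration, not the brief's tool.

      # Generic minimum-detectable-impact sketch for an individually randomized two-arm
      # design (no clustering or covariates); real TPP designs often need further
      # adjustments.  Uses only the Python standard library.
      from statistics import NormalDist

      def mdi(sd, n_treat, n_control, alpha=0.05, power=0.80, two_sided=True):
          z_alpha = NormalDist().inv_cdf(1 - alpha / 2) if two_sided else NormalDist().inv_cdf(1 - alpha)
          z_power = NormalDist().inv_cdf(power)
          se = sd * (1.0 / n_treat + 1.0 / n_control) ** 0.5
          return (z_alpha + z_power) * se

      # e.g. a binary outcome with 20% prevalence (sd ~ 0.4) and 400 youth per arm
      print(round(mdi(sd=0.4, n_treat=400, n_control=400), 3))   # ~0.079, i.e. ~7.9 percentage points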

  12. Teaching Mental Abacus Calculation to Students with Mental Retardation

    Shen, Hong

    2006-01-01

    The abacus is a calculating tool that has been used in Asia for thousands of years. Mental abacus calculation is a skill in which an abacus image in the mind is used without the actual physical manipulation of the abacus. Using this method, people can perform extremely rapid and accurate mental calculations. Research indicates that abacus training…

  13. Calculation Method for Normal Induced Longitudinal Voltage on Pilot Cable

    Abdelaziz B.M. Kamel,

    2014-09-01

    In this paper a full study and detailed calculations of the induced voltage in pilot cables are carried out. First, an introduction shows the importance of the induced voltage and its effect on pilot cables. The first calculation method covers flat formation and the second covers trefoil formation. The results obtained with both methods are then presented and compared. Finally, conclusions are drawn.
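
    The basic relation behind such calculations is that the longitudinal EMF induced on a pilot cable running parallel to a power conductor is E = 2*pi*f*M*I*L, with M the mutual inductance per unit length; the sketch below uses a placeholder value of M, since the flat- and trefoil-formation formulas developed in the paper are what actually determine M.

      # Minimal sketch of the induced longitudinal voltage relation E = 2*pi*f*M*I*L.
      # The mutual-inductance value is a placeholder; computing M for flat or trefoil
      # formation requires the geometry-specific formulas treated in the paper.
      import math

      def induced_voltage(f_hz, current_a, length_km, M_H_per_km):
          return 2.0 * math.pi * f_hz * M_H_per_km * current_a * length_km

      # illustrative numbers only: 50 Hz, 1 kA fault current, 2 km parallel exposure
      print(induced_voltage(f_hz=50.0, current_a=1000.0, length_km=2.0, M_H_per_km=1e-3))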

  14. Assessing the reliability of calculated catalytic ammonia synthesis rates

    Medford, Andrew James; Wellendorff, Jess; Vojvodic, Aleksandra

    2014-01-01

    We introduce a general method for estimating the uncertainty in calculated materials properties based on density functional theory calculations. We illustrate the approach for a calculation of the catalytic rate of ammonia synthesis over a range of transition-metal catalysts. The correlation...

  15. 40 CFR 1065.675 - CLD quench verification calculations.

    2010-07-01

    ... verification calculations. Perform CLD quench-check calculations as follows: (a) Perform a CLD analyzer quench... water content in combustion air, fuel combustion products, and dilution air (if applicable). If you... the maximum expected CO2 content in fuel combustion products and dilution air. (d) Calculate quench...

  16. Oxidation-Reduction Calculations in the Biochemistry Course

    Feinman, Richard D.

    2004-01-01

    Redox calculations have the potential to reinforce important concepts in bioenergetics. The intermediacy of the NAD[superscript +]/NADH couple in the oxidation of food by oxygen, for example, can be brought out by such calculations. In practice, students have great difficulty and, even when adept at the calculations, frequently do not understand…
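
    A worked example of the kind of calculation meant here is the standard free-energy change for oxidation of NADH by oxygen, Delta_G = -nF Delta_E, using textbook standard reduction potentials (assumed from standard tables, not taken from this article).

      # Worked bioenergetics example: free-energy change for oxidation of NADH by O2,
      # Delta_G = -n * F * Delta_E, with textbook standard reduction potentials (assumed).
      F = 96485.0                        # Faraday constant, C/mol
      E_O2 = 0.82                        # E (1/2 O2 + 2H+ + 2e- -> H2O), V
      E_NAD = -0.32                      # E (NAD+ + H+ + 2e- -> NADH), V

      n = 2
      delta_E = E_O2 - E_NAD             # acceptor couple minus donor couple
      delta_G = -n * F * delta_E         # J/mol
      print(delta_E, delta_G / 1000.0)   # ~1.14 V and ~ -220 kJ/mol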

  17. A new Calculation Procedure for Spatial Impulse Responses in Ultrasound

    Jensen, Jørgen Arendt

    1999-01-01

    A new procedure for the calculation of spatial impulse responses for linear sound fields is introduced. This calculation procedure uses the well known technique of calculating the spatial impulse response from the intersection of a circle emanating from the projected spherical wave with the bound...

  18. 40 CFR 98.383 - Calculating GHG emissions.

    2010-07-01

    ... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Suppliers of Coal-based Liquid Fuels § 98.383 Calculating GHG emissions. You must follow the calculation methodologies of § 98.393 as if they applied to the appropriate... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Calculating GHG emissions....

  19. 40 CFR 98.43 - Calculating GHG emissions.

    2010-07-01

    ... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Electricity Generation § 98.43 Calculating GHG emissions... to 40 CFR part 75, and § 75.64. Calculate CO2, CH4, and N2O emissions as follows: (a) Convert the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Calculating GHG emissions....

  20. 40 CFR 98.283 - Calculating GHG emissions.

    2010-07-01

    ... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Silicon Carbide Production § 98.283 Calculating GHG emissions. You must calculate and report the annual process CO2 emissions from each silicon carbide process unit... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Calculating GHG emissions....