WorldWideScience

Sample records for energy-based temperature computation

  1. Energy based model for temperature dependent behavior of ferromagnetic materials

    International Nuclear Information System (INIS)

    Sah, Sanjay; Atulasimha, Jayasimha

    2017-01-01

    An energy based model for temperature dependent anhysteretic magnetization curves of ferromagnetic materials is proposed and benchmarked against experimental data. This is based on the calculation of macroscopic magnetic properties by performing an energy weighted average over all possible orientations of the magnetization vector. Most prior approaches that employ this method are unable to independently account for the effect of both inhomogeneity and temperature in performing the averaging necessary to model experimental data. Here we propose a way to account for both effects simultaneously and benchmark the model against experimental data from ~5 K to ~300 K for two different materials in both annealed (fewer inhomogeneities) and deformed (more inhomogeneities) samples. This demonstrates that this framework is well suited to simulate temperature dependent experimental magnetic behavior. - Highlights: • Energy based model for temperature dependent ferromagnetic behavior. • Simultaneously accounts for effect of temperature and inhomogeneities. • Benchmarked against experimental data from 5 K to 300 K.

  2. Energy-Based Tetrahedron Sensor for High-Temperature, High-Pressure Environments

    Science.gov (United States)

    Gee, Kent L.; Sommerfeldt, Scott D.; Blotter, Jonathan D.

    2012-01-01

    An acoustic energy-based probe has been developed that incorporates multiple acoustic sensing elements in order to obtain the acoustic pressure and three-dimensional acoustic particle velocity. With these quantities, the user can obtain various energy-based quantities, including acoustic energy density, acoustic intensity, and acoustic impedance. In this specific development, the probe has been designed to operate in an environment characterized by high temperatures and high pressures as is found in the close vicinity of rocket plumes. Given these capabilities, the probe is designed to be used to investigate the acoustic conditions within the plume of a rocket engine or jet engine to facilitate greater understanding of the noise generation mechanisms in those plumes. The probe features sensors mounted inside a solid sphere. The associated electronics for the probe are contained within the sphere and the associated handle for the probe. More importantly, the design of the probe has desirable properties that reduce the bias errors associated with determining the acoustic pressure and velocity using finite sum and difference techniques. The diameter of the probe dictates the lower and upper operating frequencies for the probe, where accurate measurements can be acquired. The current probe design implements a sphere diameter of 1 in. (2.5 cm), which limits the upper operating frequency to about 4.5 kHz. The sensors are operational up to much higher frequencies, and could be used to acquire pressure data at higher frequencies, but the energy-based measurements are limited to that upper frequency. Larger or smaller spherical probes could be designed to cover lower or higher frequency ranges.
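    The relation between probe diameter and usable bandwidth quoted above can be sketched numerically; a minimal sketch in Python, assuming the common rule of thumb that finite-difference probes stay accurate while the sensor spacing (here, the diameter) is below roughly a third of a wavelength. The one-third factor is my assumption, chosen to be consistent with the quoted numbers; it is not stated in the record.

```python
# Estimate the upper operating frequency of a spherical energy-density probe.
# Assumption (not from the record): accurate finite-difference estimates
# require the probe diameter d to stay below about one third of a wavelength.

def upper_frequency_hz(diameter_m: float, sound_speed_m_s: float = 343.0) -> float:
    """Return the approximate upper usable frequency for a probe of given diameter."""
    shortest_wavelength = 3.0 * diameter_m   # shortest wavelength resolved cleanly
    return sound_speed_m_s / shortest_wavelength

f_max = upper_frequency_hz(0.025)  # 1 in. (2.5 cm) sphere, as in the record
```

    For the 2.5 cm sphere this gives roughly 4.6 kHz, consistent with the ~4.5 kHz limit quoted in the record.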

  3. An energy-based body temperature threshold between torpor and normothermia for small mammals.

    Science.gov (United States)

    Willis, Craig K R

    2007-01-01

    Field studies of use of torpor by heterothermic endotherms suffer from the lack of a standardized threshold differentiating torpid body temperatures (T(b)) from normothermic T(b)'s. This threshold can be more readily observed if metabolic rate (MR) is measured in the laboratory. I digitized figures from the literature that depicted simultaneous traces of MR and T(b) from 32 respirometry runs for 14 mammal species. For each graph, I quantified the T(b) measured when MR first began to drop at the onset of torpor (T(b-onset)). I used a general linear model to quantify the effect of ambient temperature (T(a)) and body mass (BM) on T(b-onset). For species lighter than 70 g, the model was highly significant and was described by the equation T(b-onset) = (0.055 ± 0.014)BM + (0.071 ± 0.031)T(a) + (31.823 ± 0.740). To be conservative, I recommend use of these model parameters minus 1 standard error, which modifies the equation to T(b-onset) - 1 SE = 0.041BM + 0.040T(a) + 31.083. This approach provides a standardized threshold for differentiating torpor from normothermia that is based on use of energy, the actual currency of interest for studies of torpor in the wild. Few laboratory studies have presented the time-course data required to quantify T(b-onset), so more data are needed to validate this relationship.
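    The conservative threshold equation can be evaluated directly; a minimal sketch in Python (the example body mass and ambient temperature are illustrative, not from the study, and the equation applies only to species lighter than 70 g):

```python
def torpor_onset_threshold(body_mass_g: float, ambient_temp_c: float) -> float:
    """Conservative torpor-onset body temperature (deg C): model minus 1 SE.

    Valid only for species lighter than 70 g, per the study above.
    """
    return 0.041 * body_mass_g + 0.040 * ambient_temp_c + 31.083

# Illustrative example: a 25 g mammal at an ambient temperature of 5 deg C.
threshold = torpor_onset_threshold(25.0, 5.0)
# Body temperatures recorded below this threshold would be classified as torpor.
```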

  4. Free surface profiles in river flows: Can standard energy-based gradually-varied flow computations be pursued?

    Science.gov (United States)

    Cantero, Francisco; Castro-Orgaz, Oscar; Garcia-Marín, Amanda; Ayuso, José Luis; Dey, Subhasish

    2015-10-01

    Is the energy equation for gradually-varied flow the best approximation for free surface profile computations in river flows? Determination of flood inundation in rivers and natural waterways is based on the hydraulic computation of flow profiles. This is usually done using energy-based gradually-varied flow models, like HEC-RAS, which adopt a vertical division method for discharge prediction in compound channel sections. However, this discharge prediction method has been shown to be inaccurate in light of advances made over the last three decades. This paper first presents a study of the impact of discharge prediction on gradually-varied flow computations by comparing thirteen different methods for compound channels, in which both the energy and momentum equations are applied. The discharge, velocity distribution coefficients, specific energy, momentum and flow profiles are determined. After the study of gradually-varied flow predictions, a new theory is developed to produce higher-order energy and momentum equations for rapidly-varied flow in compound channels. These generalized equations make it possible to describe flow profiles with more generality than the gradually-varied flow computations. As an outcome, the gradually-varied flow results provide realistic conclusions for computations of flow in compound channels, showing that momentum-based models are in general more accurate, whereas the new theory developed for rapidly-varied flow opens a new research direction, so far not investigated in flows through compound channels.

  5. Automatic temperature computation for realistic IR simulation

    Science.gov (United States)

    Le Goff, Alain; Kersaudy, Philippe; Latger, Jean; Cathala, Thierry; Stolte, Nilo; Barillot, Philippe

    2000-07-01

    Polygon temperature computation in 3D virtual scenes is fundamental for IR image simulation. This article describes in detail the temperature calculation software and its current extensions, briefly presented in [1]. This software, called MURET, is used by the simulation workshop CHORALE of the French DGA. MURET is a one-dimensional thermal software package, which accurately takes into account the material thermal attributes of the three-dimensional scene and the variation of the environment characteristics (atmosphere) as a function of time. Concerning the environment, absorbed incident fluxes are computed wavelength by wavelength, at half-hour intervals, during the 24 hours preceding the time of the simulation. For each polygon, incident fluxes are composed of direct solar fluxes and sky illumination (including diffuse solar fluxes). Concerning the materials, classical thermal attributes such as conductivity, absorption, spectral emissivity, density, specific heat, thickness and convection coefficients are associated with each of several layers and taken into account. In the future, MURET will be able to simulate permeable natural materials (water influence) and natural vegetation materials (woods). This model of thermal attributes yields a very accurate polygon temperature computation for the complex 3D databases often found in CHORALE simulations. The kernel of MURET consists of an efficient ray tracer, which computes the history (over 24 hours) of the shadowed parts of the 3D scene, and a library responsible for the thermal computations. The main originality concerns the way the heating fluxes are computed. Using ray tracing, the flux received at each 3D point of the scene accurately takes into account the masking (hidden surfaces) between objects. In this way, the library also supplies other thermal modules, such as a thermal shadow computation tool.

  6. A New Energy-Based Method for 3-D Finite-Element Nonlinear Flux Linkage computation of Electrical Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2011-01-01

    This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique......-perturbation method. The new method proposed is validated using experimental results on two different permanent magnet machines....

  7. Reexamination of a novel determination of density, temperature, and symmetry energy based on a modified Fisher model

    Science.gov (United States)

    Liu, X.; Zheng, H.; Lin, W.; Huang, M.; Yang, Y. Y.; Wang, J. S.; Wada, R.; Bonasera, A.; Natowitz, J. B.

    2018-01-01

    In this article, a mistake in the formulation of the modified Fisher model (MFM) derived in the pioneering works of the Purdue group is addressed and corrected by reversing the sign of the mixing entropy term in the original formulation. The errors in the results of the previous MFM-related studies, such as isotopic yield distribution, isobaric yield ratios, isoscaling, m scaling, self-consistent determination of density, symmetry energy, and temperature, and density and temperature determination related to the intermediate mass fragment (IMF) freezeout, are quantitatively analyzed. It is found that the errors originating from the mistake in sign of the mixing entropy term are generally small and even have no effect in some cases.

  8. Computer code validation by high temperature chemistry

    International Nuclear Information System (INIS)

    Alexander, C.A.; Ogden, J.S.

    1988-01-01

    At least five of the computer codes utilized in the analysis of severe fuel damage-type events are directly dependent upon or can be verified by high temperature chemistry. These codes are ORIGEN, CORSOR, CORCON, VICTORIA, and VANESA. With the exception of CORCON and VANESA, it is necessary that verification experiments be performed on real irradiated fuel. For ORIGEN, the familiar Knudsen effusion cell is the best choice: a small piece of known mass and known burn-up is selected and volatilized completely into the mass spectrometer. The mass spectrometer is used in the integral mode to integrate the entire signal from preselected radionuclides, and from this integrated signal the total mass of the respective nuclides can be determined. For CORSOR and VICTORIA, high pressure hydrogen/steam must flow over the irradiated fuel and then enter the mass spectrometer. For these experiments, a high pressure-high temperature molecular beam inlet must be employed. Finally, in support of VANESA-CORCON, the very highest temperature and molten fuels must be contained and analyzed. Results from all types of experiments will be discussed and their applicability to present and future code development will also be covered

  9. Computing Temperatures in Optically Thick Protoplanetary Disks

    Science.gov (United States)

    Capuder, Lawrence F., Jr.

    2011-01-01

    We worked with a Monte Carlo radiative transfer code to simulate the transfer of energy through protoplanetary disks, where planet formation occurs. The code tracks photons from the star into the disk, through scattering, absorption and re-emission, until they escape to infinity. Regions of high optical depth in the disk interior dominate the computation time because it takes a photon packet many interactions to get out of the region. Such regions also receive few photons and therefore do not have well-estimated temperatures. We applied a modified random walk (MRW) approximation for treating high optical depths and speeding up the Monte Carlo calculations. The MRW is implemented by calculating the average number of interactions the photon packet will undergo in diffusing within a single cell of the spatial grid and then updating the packet position, packet frequencies, and local radiation absorption rate appropriately. The MRW approximation was then tested for accuracy and speed against the original code. We determined that the MRW provides accurate answers to Monte Carlo radiative transfer simulations. The speed gained from using the MRW is shown to be proportional to the disk mass.
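    The cost that the MRW removes can be seen with a toy 1-D random walk, where the number of unit steps needed to escape a slab of optical depth τ grows like τ². This is a deliberately simplified stand-in for the real 3-D radiative transfer; the step count plays the role of the many interactions that the MRW collapses into a single position update.

```python
import random

def steps_to_escape(tau: float, rng: random.Random) -> int:
    """Count unit steps for a 1-D random walker starting mid-slab to exit
    a slab of optical depth tau (each step stands in for one interaction)."""
    position = tau / 2.0
    steps = 0
    while 0.0 < position < tau:
        position += 1.0 if rng.random() < 0.5 else -1.0
        steps += 1
    return steps

rng = random.Random(42)  # fixed seed for reproducibility
tau = 20
trials = [steps_to_escape(tau, rng) for _ in range(200)]
mean_steps = sum(trials) / len(trials)  # expected to scale like (tau / 2) ** 2
```

    For τ = 20 the walker needs on the order of (τ/2)² = 100 interactions to escape, which is exactly the per-packet cost the MRW replaces with one diffusion-based move.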

  10. High temperature estimation through computer vision

    International Nuclear Information System (INIS)

    Segovia de los R, J.A.

    1996-01-01

    Pattern recognition is concerned with the design and analysis of classification algorithms applied to representations of images, sounds or signals of any kind. In a process involving a thermal plasma reactor, conventional devices or methods cannot be employed to measure the very high temperatures. The goal of this work was to determine these temperatures in an indirect way. (Author)

  11. Computational Chemistry of Cyclopentane Low Temperature Oxidation

    KAUST Repository

    El Rachidi, Mariam

    2015-03-30

    Cycloalkanes are significant constituents of conventional fossil fuels, but little is known concerning their combustion chemistry and kinetics, particularly at low temperatures. This study investigates the pressure dependent kinetics of several reactions occurring during low-temperature cyclopentane combustion using theoretical chemical kinetics. The reaction pathways of the cyclopentyl + O2 adduct are traced to alkylhydroperoxide, cyclic ether, β-scission and HO2 elimination products. The calculations are carried out at the UCCSD(T)-F12b/cc-pVTZ-F12//M06-2X/6-311++G(d,p) level of theory. The barrierless entrance channel is treated using variable-reaction-coordinate transition state theory (VRC-TST) at the CASPT2(7e,6o) level of theory, including basis set, geometry relaxation and ZPE corrections. 1-D time-dependent multiwell master equation analysis is used to determine pressure- and temperature-dependent rate parameters of all investigated reactions. Tunneling corrections are included using Eckart barriers. Comparison with cyclohexane is used to elucidate the effect of ring size on the low temperature reactivity of naphthenes. The rate coefficients reported herein are suitable for use in cyclopentane and methylcyclopentane combustion models, even below ~900 K, where ignition is particularly sensitive to these pressure-dependent values.
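    Rate coefficients of this kind are typically reported as fitted modified Arrhenius coefficients for use in combustion models; a minimal evaluation sketch (the coefficient values below are made-up placeholders for illustration, not values from this study):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_coefficient(temp_k: float, a: float, n: float, ea_j_mol: float) -> float:
    """Evaluate a modified Arrhenius expression k(T) = A * T**n * exp(-Ea / (R*T))."""
    return a * temp_k**n * math.exp(-ea_j_mol / (R * temp_k))

# Hypothetical coefficients for illustration only (not from the paper):
k_700 = rate_coefficient(700.0, a=1.0e6, n=1.5, ea_j_mol=80_000.0)
k_900 = rate_coefficient(900.0, a=1.0e6, n=1.5, ea_j_mol=80_000.0)
```

    With a positive activation energy the rate rises steeply with temperature, which is why ignition below ~900 K is so sensitive to these values.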

  12. Computational Chemistry of Cyclopentane Low Temperature Oxidation

    KAUST Repository

    El Rachidi, Mariam; Zádor, Judit; Sarathy, Mani

    2015-01-01

    reactions occurring during low-temperature cyclopentane combustion using theoretical chemical kinetics. The reaction pathways of the cyclopentyl + O2 adduct are traced to alkylhydroperoxide, cyclic ether, β-scission and HO2 elimination products. The calculations are carried out at the UCCSD(T)-F12b/cc-pVTZ-F12//M06-2X/6-311++G(d,p) level of theory. The barrierless entrance channel is treated using variable-reaction-coordinate transition state theory (VRC-TST) at the CASPT2(7e,6o) level of theory, including basis set, geometry relaxation and ZPE corrections. 1-D time-dependent multiwell master equation analysis is used to determine pressure- and temperature-dependent rate parameters of all investigated reactions. Tunneling corrections are included using Eckart barriers. Comparison with cyclohexane is used to elucidate the effect of ring size on the low temperature reactivity of naphthenes. The rate coefficients reported herein are suitable for use in cyclopentane and methylcyclopentane combustion models, even below ~900 K, where ignition is particularly sensitive to these pressure-dependent values.

  13. Computer supervision of the core outlet sodium temperatures of FBTR

    International Nuclear Information System (INIS)

    Boopathy, C.

    1976-01-01

    Safety monitoring of the fast breeder test reactor at Kalpakkam (India) is achieved by a CDPS-on-line dual computer system which is dedicated to plant supervision. The on-line subsystem scans and supervises all the 170 core thermocouple signals every second. Organisation of the reactor core instruments, supervision of mean sodium outlet temperature and mean temperature drop across the core, detection of plugging of a fuel assembly are explained. (A.K.)

  14. Computed temperature profile in materials exposed to gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Ping, Tso Chin; Choong, Yap Siew; Seon, Chan Kam

    1987-06-01

    Computed temperature profiles are presented for the materials of lead, steel, concrete and water in curved shells, when they are exposed to gamma radiation. The results are based on the usual simplified theory of thermal conduction with an exponential heat source.
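    The simplified theory mentioned above reduces, in slab geometry, to steady one-dimensional conduction with an exponential volumetric heat source; a minimal numerical sketch (the material values and boundary temperatures are illustrative assumptions, not taken from the record):

```python
import numpy as np

# Steady 1-D conduction with an exponential heat source from absorbed gammas:
#     -k * d2T/dx2 = q0 * exp(-mu * x),   T(0) = T0,  T(L) = TL
# solved with a standard second-order finite-difference scheme.
# All material values and boundary temperatures are illustrative placeholders.

k, q0, mu = 50.0, 1.0e4, 30.0   # conductivity W/(m K), source W/m^3, attenuation 1/m
L, T0, TL = 0.1, 40.0, 20.0     # slab thickness m, boundary temperatures deg C
n = 101
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[n - 1, n - 1] = 1.0  # Dirichlet boundary rows
b[0], b[n - 1] = T0, TL
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0
    A[i, i] = -2.0
    b[i] = -q0 * np.exp(-mu * x[i]) * dx**2 / k

T = np.linalg.solve(A, b)  # temperature profile; heating lifts it above the linear one
```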

  15. Computations of finite temperature QCD with the pseudofermion method

    International Nuclear Information System (INIS)

    Fucito, F.; Solomon, S.

    1985-01-01

    The authors discuss the phase diagram of finite temperature QCD as it is obtained when the effects of dynamical quarks are included by the pseudofermion method. They compare their results with those obtained by other groups and comment on the current state of the art for this kind of computation

  16. Computational scheme for transient temperature distribution in PWR vessel wall

    International Nuclear Information System (INIS)

    Dedovic, S.; Ristic, P.

    1980-01-01

    The computer code TEMPNES is part of a joint effort made in Gosa Industries to establish techniques for the structural analysis of heavy pressure vessels. The analysis of transient heat conduction problems is based on finite element discretization of the structure, a non-linear transient matrix formulation, and the time integration scheme developed by Wilson (step-by-step procedure). Convection boundary conditions and the effect of heat generation due to radioactive radiation are both considered. The computation of transient temperature distributions in the reactor vessel wall when the water temperature suddenly drops as a consequence of a reactor cooling pump failure is presented. The vessel is treated as an axisymmetric body of revolution. The program has two time-increment options: a) a fixed predetermined increment and b) an automatically optimized time increment for each step, dependent on the rate of change of the nodal temperatures. (author)

  17. Computational investigation of the temperature separation in vortex chamber

    International Nuclear Information System (INIS)

    Anish, S.; Setoguchi, T.; Kim, H. D.

    2014-01-01

    The vortex chamber is a mechanical device, without any moving parts, that separates compressed gas into a high temperature region and a low temperature region. Functionally, the vortex chamber is similar to a Ranque-Hilsch vortex tube (RHVT), but it is a simpler and more compact structure. The objective of the present study is to investigate computationally the physical reasoning behind the energy separation mechanism inside a vortex chamber. A computational analysis has been performed using the three-dimensional compressible Navier-Stokes equations. A fully implicit finite volume scheme was used to solve the governing equations. The commercial software ANSYS CFX was used for this purpose. The computational predictions were validated against existing experimental data. The results obtained show that the vortex chamber contains a large free vortex zone and a comparatively smaller forced vortex region. The physical mechanism that causes the heating towards the periphery of the vortex chamber is identified as the work done by the viscous force. The cooling at the center may be due to expansion of the flow. The extent of temperature separation greatly depends on the outer diameter of the vortex chamber. A small amount of compression is observed towards the periphery of the vortex chamber when the outer diameter is reduced.

  18. A computed room temperature line list for phosphine

    Science.gov (United States)

    Sousa-Silva, Clara; Yurchenko, Sergei N.; Tennyson, Jonathan

    2013-06-01

    An accurate and comprehensive room temperature rotation-vibration transition line list for phosphine (31PH3) is computed using a newly refined potential energy surface and a previously constructed ab initio electric dipole moment surface. Energy levels, Einstein A coefficients and transition intensities are computed using these surfaces and a variational approach to the nuclear motion problem as implemented in the program TROVE. A ro-vibrational spectrum is computed, covering the wavenumber range 0-8000 cm-1. The resulting line list, which is appropriate for temperatures up to 300 K, consists of a total of 137 million transitions between 5.6 million energy levels. Several of the band centres are shifted to better match experimental transition frequencies. The line list is compared to the most recent HITRAN database and other laboratory sources. Transition wavelengths and intensities are generally found to be in good agreement with the existing experimental data, with particularly close agreement for the rotational spectrum. An analysis of the comparison between the theoretical and the existing experimental data is performed, and suggestions for future improvements and assignments to the HITRAN database are made.

  19. Computer Simulation Studies of Ion Channels at High Temperatures

    Science.gov (United States)

    Song, Hyun Deok

    The gramicidin channel is the smallest known biological ion channel, and it exhibits cation selectivity. Recently, Dr. John Cuppoletti's group at the University of Cincinnati showed that the gramicidin channel can function at high temperatures (360-380 K) with significant currents. This finding may have significant implications for fuel cell technology. In this thesis, we have examined the gramicidin channel at 300 K, 330 K, and 360 K by computer simulation. We have investigated how the temperature affects the current and the differences in magnitude of free energy between the two gramicidin forms, the helical dimer (HD) and the double helix (DH). A slight decrease of the free energy barrier inside the gramicidin channel and increased diffusion at high temperatures result in an increase of current. An applied external field of 0.2 V/nm along the membrane normal results in directly observable ion transport across the channels at high temperatures for both HD and DH forms. We found that higher temperatures also affect the probability distribution of hydrogen bonds, the bending angle, the distance between dimers, and the size of the pore radius for the helical dimer structure. These findings may be related to the gating of the gramicidin channel. Methanococcus jannaschii (MJ) is a methane-producing thermophile, which was discovered at a depth of 2600 m in a Pacific Ocean vent in 1983. It has the ability to thrive at high temperatures and high pressures, which are unfavorable for most life forms. There have been some experiments to study its stability under extreme conditions, but the origin of the stability of MJ is still not exactly known. MJ0305 is the chloride channel protein from the thermophile MJ. After generating a structure of MJ0305 by homology modeling based on the E. coli ClC templates, we examined the thermal stability and the network stability from the change of network entropy calculated from the adjacency matrices of the protein.
High temperatures increase the

  20. A Computational Framework for Efficient Low Temperature Plasma Simulations

    Science.gov (United States)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems involving LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is assessed on the basis of numerical results for benchmark problems in microdischarge devices, addressing both accuracy and efficiency. A numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  1. Experimental and computational prediction of glass transition temperature of drugs.

    Science.gov (United States)

    Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S

    2014-12-22

    Glass transition temperature (Tg) is an important inherent property of an amorphous solid material, which is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein, the support vector regression gave the best result, with an RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict the Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, even before compound synthesis.
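    The Tm-based route can be illustrated with ordinary least squares; a minimal sketch on synthetic data (the slope and intercept used to generate the data are illustrative, loosely motivated by the well-known empirical trend Tg ≈ 2/3 to 3/4 of Tm, and are not the fitted values from this study):

```python
import numpy as np

# Fit Tg = a * Tm + b by least squares on synthetic, noise-free data.
# Generating slope/intercept are illustrative placeholders, not study values.
tm = np.array([350.0, 400.0, 450.0, 500.0, 550.0])   # melting temperatures, K
tg = 0.70 * tm + 5.0                                 # synthetic glass transitions, K

a, b = np.polyfit(tm, tg, 1)          # recover slope and intercept
tg_predicted = a * 520.0 + b          # hypothetical new compound with Tm = 520 K
```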

  2. Computation of the temperatures of a fluid flowing through a pipe from temperature measurements on the pipe's outer surface

    International Nuclear Information System (INIS)

    Sauer, G.

    1999-01-01

    A method for computing the temperatures of a fluid flowing through a pipe on the basis of temperatures recorded at the pipe's outer surface is presented. The heat conduction in the pipe wall is described by one-dimensional heat conduction elements. Heat transfer between fluid, pipe and surroundings is allowed for. The equation system resulting from the standard finite element discretization is reformulated to enable the computation of temperature events preceding the recorded temperature in time. It is shown that the method can be used to identify the actual fluid temperature from temperature data obtained only at the outer surface of the pipe. The temperatures in the pipe wall are computed with good accuracy even in the case of a severe thermal shock. (orig.)
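    The forward problem underlying this inverse method is transient conduction across the wall; a minimal explicit finite-difference sketch (curvature neglected, i.e. a slab wall, and all parameter values illustrative rather than taken from the paper, which solves the harder inverse problem of recovering the fluid temperature from outer-surface data):

```python
# Transient 1-D conduction through a pipe wall, explicit (FTCS) scheme.
# Inner surface held at the fluid temperature, outer surface at ambient.
# All parameter values are illustrative placeholders.

alpha = 4.0e-6                  # thermal diffusivity, m^2/s (steel-like)
thickness = 0.02                # wall thickness, m
n = 11                          # nodes across the wall
dx = thickness / (n - 1)
r = 0.4                         # r = alpha*dt/dx^2, below the FTCS stability limit 0.5
dt = r * dx * dx / alpha        # corresponding time step, s
T_fluid, T_ambient = 100.0, 20.0
T = [20.0] * n                  # wall initially at ambient temperature

for _ in range(5000):           # march to (near) steady state
    T[0], T[-1] = T_fluid, T_ambient
    T = [T[0]] + [T[i] + r * (T[i - 1] - 2.0 * T[i] + T[i + 1])
                  for i in range(1, n - 1)] + [T[-1]]

midpoint = T[n // 2]            # approaches the linear steady profile value, 60 deg C
```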

  3. Computation and measurement of air temperature distribution of an industrial melt blowing die

    Directory of Open Access Journals (Sweden)

    Wu Li-Li

    2014-01-01

    The air flow field of the dual slot die of an HDF-6D melt blowing non-woven machine is computed numerically. A temperature measurement system was built to measure air temperatures. The computed results agree with the measured results, confirming the correctness of the computation. The results are of practical value for actual melt blowing production.

  4. Automatic calibration system of the temperature instrument display based on computer vision measuring

    Science.gov (United States)

    Li, Zhihong; Li, Jinze; Bao, Changchun; Hou, Guifeng; Liu, Chunxia; Cheng, Fang; Xiao, Nianxin

    2010-07-01

    With the development of computers and of techniques for image processing and optical measurement, various measuring techniques based on optical image processing are gradually maturing and entering practical use. On this basis, drawing on many years of experience in temperature measurement and computer vision measurement, and on practical needs, we propose a fully automatic calibration method for temperature instrument displays that integrates computer vision measurement techniques. It achieves synchronized acquisition of the displayed reading and the reference temperature value, and improves calibration efficiency. Based on the least-squares fitting principle, and integrating data processing with optimization theory, it rapidly and accurately automates the acquisition and calibration of temperatures.
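    The least-squares step at the heart of such a calibration fits a linear correction mapping displayed values to reference values; a minimal closed-form sketch (the readings below are made up for illustration, not measured data from the paper):

```python
# Calibrate a temperature display against reference values by least squares:
# find a, b minimizing sum((a * displayed + b - reference)**2) in closed form.

displayed = [10.2, 20.5, 30.9, 41.1, 51.4]   # instrument readings (illustrative)
reference = [10.0, 20.0, 31.0, 41.0, 51.0]   # reference thermometer values

n = len(displayed)
sx = sum(displayed)
sy = sum(reference)
sxx = sum(d * d for d in displayed)
sxy = sum(d * r for d, r in zip(displayed, reference))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # gain correction
b = (sy - a * sx) / n                           # offset correction

corrected = [a * d + b for d in displayed]       # calibrated readings
```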

  5. Experimental and computational investigation of temperature effects on soot mechanisms

    Directory of Open Access Journals (Sweden)

    Bi Xiaojie

    2014-01-01

    The effects of initial ambient temperature on the combustion and soot emission characteristics of diesel fuel were investigated through experiments conducted in an optical constant volume chamber and simulations using a phenomenological soot model. Four different initial ambient temperatures were adopted in our research: 1000 K, 900 K, 800 K and 700 K. In order to obtain a better prediction of soot behavior, the phenomenological soot model was revised to take into account the soot oxidation feedback on soot number density, and good agreement was observed between soot measurements and predictions. Results indicated that the ignition delay lengthened with decreasing initial ambient temperature. The heat release rate demonstrated the transition from mixing-controlled combustion at high ambient temperature to a premixed combustion mode at low ambient temperature. At lower ambient temperature, the soot formation and oxidation mechanisms were both suppressed, but the final soot mass concentration decreased with decreasing initial ambient temperature. Although the drop in ambient temperature did not lower the mean in-cylinder temperature during combustion, it did shrink the total area of locally high equivalence ratio, in which soot is usually generated quickly. At 700 K initial ambient temperature, soot emissions were almost negligible, which indicates that sootless combustion might be achieved at very low initial temperature operating conditions.

  6. FLATT - a computer programme for calculating flow and temperature transients in nuclear fuels

    International Nuclear Information System (INIS)

    Venkat Raj, V.; Koranne, S.M.

    1976-01-01

    FLATT is a computer code written in Fortran for the BESM-6 computer. The code calculates the flow transients in the coolant circuit of a nuclear reactor caused by pump failure, and the consequent temperature transients in the fuel, clad and coolant. In addition, any desired flow transient can be fed into the programme and the resulting temperature transients calculated. A case study is also presented. (author)
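    The kind of coupled flow/temperature transient FLATT computes can be sketched with a lumped-parameter model; a minimal sketch (a generic step reduction in flow and illustrative parameter values, not the actual FLATT model or FBTR data):

```python
# Lumped fuel energy balance under reduced coolant flow:
#     m*c * dT/dt = P - h*A * (T - T_cool),  with h scaling as flow**0.8
# (a Dittus-Boelter-like assumption), integrated with explicit Euler.
# All parameter values are illustrative placeholders.

P = 1000.0            # constant heat generation, W
hA0 = 50.0            # convective conductance h*A at full flow, W/K
T_cool = 300.0        # coolant temperature, K (held fixed for simplicity)
mc = 500.0            # lumped heat capacity m*c, J/K
flow_fraction = 0.5   # pump failure leaves half the nominal flow

T = T_cool + P / hA0                # start from steady state at full flow (320 K)
hA = hA0 * flow_fraction**0.8       # degraded conductance at reduced flow
dt = 0.01
for _ in range(20_000):             # 200 s, enough to reach the new steady state
    T += dt * (P - hA * (T - T_cool)) / mc

# T approaches the new steady level T_cool + P / hA (about 334.8 K here).
```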

  7. Computational Study of Nonequilibrium Chemistry in High Temperature Flows

    Science.gov (United States)

    Doraiswamy, Sriram

    Recent experimental measurements in the reflected shock tunnel CUBRC LENS-I facility raise questions about our ability to correctly model the recombination processes in high enthalpy flows. In the carbon dioxide flow, the computed shock standoff distance over the Mars Science Laboratory (MSL) shape was less than half of the experimental result. For the oxygen flows, both pressure and heat transfer data on the double cone geometry were not correctly predicted. The objective of this work is to investigate possible reasons for these discrepancies. This process involves systematically addressing different factors that could possibly explain the differences. These factors include vibrational modeling, the role of electronic states, and chemistry-vibrational coupling in high enthalpy flows. A state-specific vibrational model for the CO2, CO, O2 and O system is devised by taking into account the first few vibrational states of each species. All vibrational states with energies at or below 1 eV are included in the present work. Of the three modes of vibration in CO2, the antisymmetric mode is considered separately from the symmetric stretching mode and the doubly degenerate bending modes. The symmetric and the bending modes are grouped together since the energy transfer rates between the two modes are very large due to Fermi resonance. The symmetric and bending modes are assumed to be in equilibrium with the translational and rotational modes. The kinetic rates for the vibrational-translational energy exchange reactions, and the intermolecular and intramolecular vibrational-vibrational energy exchange reactions, are based on experimental data to the maximum extent possible. Extrapolation methods are employed when necessary. This vibrational model is then coupled with an axisymmetric computational fluid dynamics code to study the expansion of CO2 in a nozzle. The potential role of low lying electronic states is also investigated. Carbon dioxide has a single excited state just below

  8. A class of energy-based ensembles in Tsallis statistics

    International Nuclear Information System (INIS)

    Chandrashekar, R; Naina Mohammed, S S

    2011-01-01

    A comprehensive investigation is carried out on the class of energy-based ensembles. The eight ensembles are divided into two main classes. In the isothermal class of ensembles the individual members are at the same temperature. A unified framework is evolved to describe the four isothermal ensembles using the currently accepted third constraint formalism. The isothermal–isobaric, grand canonical and generalized ensembles are illustrated through a study of the classical nonrelativistic and extreme relativistic ideal gas models. An exact calculation is possible only in the case of the isothermal–isobaric ensemble. The study of the ideal gas models in the grand canonical and the generalized ensembles has been carried out using a perturbative procedure with the nonextensivity parameter (1 − q) as the expansion parameter. Though all the thermodynamic quantities have been computed up to a particular order in (1 − q), the procedure can be extended up to any arbitrary order in the expansion parameter. In the adiabatic class of ensembles the individual members of the ensemble have the same value of the heat function, and a unified formulation to describe all four ensembles is given. The nonrelativistic and the extreme relativistic ideal gases are studied in the isoenthalpic–isobaric ensemble, the adiabatic ensemble with number fluctuations, and the adiabatic ensemble with number and particle fluctuations.

  9. Programmed temperature control of capsule in irradiation test with personal computer at JMTR

    International Nuclear Information System (INIS)

    Saito, H.; Uramoto, T.; Fukushima, M.; Obata, M.; Suzuki, S.; Nakazaki, C.; Tanaka, I.

    1992-01-01

    The capsule irradiation facility is one of the various facilities employed at the Japan Materials Testing Reactor (JMTR). The capsule facility has been used in irradiation tests of both nuclear fuels and materials. The capsule to be irradiated consists of the specimen, an outer tube and an inner tube with an annular space between them. The temperature of the specimen is controlled by varying the pressure (below atmospheric) of He gas in the annular space (vacuum-controlled). Besides this, in another system the temperature of the specimen is controlled with electric heaters mounted around the specimen (heater-controlled). The use of a personal computer in the capsule facility has led to the development of a versatile temperature control system at the JMTR. The features of this newly developed temperature control system are the following: the temperature control mode for an operation period can be preset prior to the operation, and the vacuum-controlled irradiation facility can be used in cooperation with the heater-controlled one. The introduction of the personal computer has made the heat-up and cool-down operations of the capsule automatic, replacing jobs previously carried out by hand by the operators. As a result, the various requirements for higher accuracy and efficiency in irradiation can be met by fully exploiting the capabilities incorporated into the facility, which allow cyclic or delicate changes in temperature. This paper deals with a capsule temperature control system based on a personal computer. (author)

  10. Computational fluid dynamic on the temperature simulation of air preheat effect combustion in propane turbulent flame

    Science.gov (United States)

    Elwina; Yunardi; Bindar, Yazid

    2018-04-01

    This paper presents results obtained from the application of the computational fluid dynamics (CFD) code Fluent 6.3 to the modelling of temperature in propane flames with and without air preheat. The study investigates the effect of air preheat temperature on the temperature of the flame. A standard k-ε model and the Eddy Dissipation model are utilized to represent the flow field and the combustion of the flame being investigated, respectively. The calculated results are compared with experimental data for propane flames taken from the literature. The results of the study show that the combination of the standard k-ε turbulence model and the eddy dissipation model is capable of producing reasonable predictions of temperature, particularly for the axial profiles of all three flames. Both the experimental work and the numerical simulation showed that increasing the temperature of the combustion air significantly increases the flame temperature.

  11. Modeling of temperature profiles in an environmental transmission electron microscope using computational fluid dynamics

    International Nuclear Information System (INIS)

    Mølgaard Mortensen, Peter; Willum Hansen, Thomas; Birkedal Wagner, Jakob; Degn Jensen, Anker

    2015-01-01

    The temperature and velocity field, pressure distribution, and the temperature variation across the sample region inside an environmental transmission electron microscope (ETEM) have been modeled by means of computational fluid dynamics (CFD). Heating the sample area by a furnace type TEM holder gives rise to temperature gradients over the sample area. Three major mechanisms have been identified with respect to heat transfer in the sample area: radiation from the grid, conduction in the grid, and conduction in the gas. A parameter sensitivity analysis showed that the sample temperature was affected by the conductivity of the gas, the emissivity of the sample grid, and the conductivity of the grid. Ideally the grid should be polished and made from a material with good conductivity, e.g. copper. With hydrogen gas, which has the highest conductivity of the gases studied, the temperature difference over the TEM grid is less than 5 °C, at what must be considered typical conditions, and it is concluded that the conditions on the sample grid in the ETEM can be considered as isothermal during general use. - Highlights: • Computational fluid dynamics used for mapping flow and temperature in ETEM setup. • Temperature gradient across TEM grid in furnace based heating holder very small in ETEM. • Conduction from TEM grid and gas in addition to radiation from TEM grid most important. • Pressure drop in ETEM limited to the pressure limiting apertures

  12. Modeling of temperature profiles in an environmental transmission electron microscope using computational fluid dynamics

    DEFF Research Database (Denmark)

    Mortensen, Peter Mølgaard; Jensen, Anker Degn; Hansen, Thomas Willum

    2015-01-01

    The temperature and velocity field, pressure distribution, and the temperature variation across the sample region inside an environmental transmission electron microscope (ETEM) have been modeled by means of computational fluid dynamics (CFD). Heating the sample area by a furnace type TEM holder...... gives rise to temperature gradients over the sample area. Three major mechanisms have been identified with respect to heat transfer in the sample area: radiation from the grid, conduction in the grid, and conduction in the gas. A parameter sensitivity analysis showed that the sample temperature...... was affected by the conductivity of the gas, the emissivity of the sample grid, and the conductivity of the grid. Ideally the grid should be polished and made from a material with good conductivity, e.g. copper. With hydrogen gas, which has the highest conductivity of the gases studied, the temperature...

  13. Energy-based fatigue model for shape memory alloys including thermomechanical coupling

    Science.gov (United States)

    Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong

    2016-03-01

    This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well.
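    The criterion ties crack initiation to the irreversible hysteresis energy accumulated over the stabilized cycle. As a rough illustration (not the authors' code), the per-cycle hysteresis energy can be computed as the area enclosed by one closed stress-strain loop; the loop below is an idealized pseudoelastic cycle with invented numbers.

```python
import numpy as np

def hysteresis_energy(strain, stress):
    """Irreversible energy per cycle (J/m^3): area enclosed by one
    closed stress-strain loop, computed with the shoelace formula."""
    x = np.append(np.asarray(strain), strain[0])  # close the loop
    y = np.append(np.asarray(stress), stress[0])
    return 0.5 * abs(np.sum(x[:-1] * y[1:] - x[1:] * y[:-1]))

# Hypothetical pseudoelastic loop: loading plateau near 400 MPa,
# unloading plateau near 200 MPa, ~4% transformation strain.
strain = np.array([0.00, 0.01, 0.05, 0.06, 0.05, 0.01])
stress = np.array([0.0, 400e6, 400e6, 200e6, 200e6, 0.0])  # Pa

print(hysteresis_energy(strain, stress))  # J/m^3 per cycle
```

The computed value (about 1.5e7 J/m^3 for this made-up loop) would then be accumulated cycle by cycle and compared against a critical stored energy in a criterion of this kind.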

  14. Energy-based fatigue model for shape memory alloys including thermomechanical coupling

    International Nuclear Information System (INIS)

    Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Zhang, Weihong; Van Herpen, Alain

    2016-01-01

    This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well. (paper)

  15. Computer calculation of heat capacity of natural gases over a wide range of pressure and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Dranchuk, P.M. (Alberta Univ., Edmonton, AB (Canada)); Abou-Kassem, J.H. (Pennsylvania State Univ., University Park, PA (USA))

    1992-04-01

    A method is presented whereby specific heats or heat capacities of natural gases, both sweet and sour, at elevated pressures and temperatures may be adapted to modern machine calculation. The method involves developing a correlation for the ideal isobaric heat capacity as a function of gas gravity and pseudo-reduced temperature over the temperature range of 300 to 1500 K, and a mathematical equation for the isobaric heat capacity departure based on accepted thermodynamic principles applied to an equation of state that adequately describes the behavior of gases to which the Standing and Katz Z-factor correlation applies. The heat capacity departure equation is applicable over the range 0.2 ≤ Pr ≤ 15 and 1.05 ≤ Tr ≤ 3, where Pr and Tr refer to the reduced pressure and temperature respectively. The significance of the method presented lies in its utility and adaptability to computer applications. 25 refs., 2 figs., 4 tabs.
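    The structure of such a calculation — an ideal-gas heat capacity correlation plus a reduced-pressure/temperature departure term, with a validity-range check — can be sketched as below. The two correlation functions here are hypothetical placeholders, not the paper's published fits.

```python
def cp_natural_gas(p_r, t_r, gas_gravity, cp_ideal, cp_departure):
    """Isobaric heat capacity of a natural gas: ideal-gas part plus a
    pressure departure, each supplied as a correlation function."""
    if not (0.2 <= p_r <= 15.0 and 1.05 <= t_r <= 3.0):
        raise ValueError("outside the departure correlation's validity range")
    return cp_ideal(gas_gravity, t_r) + cp_departure(p_r, t_r)

# Hypothetical placeholder correlations (illustration only), kJ/(kg*K):
cp_ideal = lambda gravity, t_r: 1.9 + 0.4 * gravity * t_r
cp_departure = lambda p_r, t_r: 0.3 * p_r / t_r ** 2

print(cp_natural_gas(2.0, 1.5, 0.7, cp_ideal, cp_departure))
```

Splitting the calculation this way lets the ideal-gas fit and the departure equation be validated and replaced independently, which is the adaptability to computer use the abstract emphasizes.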

  16. Computer Simulation of Temperature Parameter for Diamond Formation by Using Hot-Filament Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    Chang Weon Song

    2017-12-01

    Full Text Available To optimize the deposition parameters of diamond films, the temperature, pressure, and distance between the filament and the susceptor need to be considered. However, it is difficult to precisely measure and predict the filament and susceptor temperatures in relation to the applied power in a hot-filament chemical vapor deposition (HF-CVD) system. In this study, the temperature distribution inside the system was numerically calculated for applied powers of 12, 14, 16, and 18 kW. The applied power needed to achieve the appropriate temperature at constant pressure and otherwise fixed conditions was deduced and applied to actual experimental depositions. The numerical simulation was conducted using the commercial computational fluid dynamics software ANSYS-FLUENT. To account for radiative heat transfer in the HF-CVD reactor, the discrete ordinates (DO) model was used. The temperatures of the filament surface and the susceptor at the different power levels were predicted to be 2512–2802 K and 1076–1198 K, respectively. Based on the numerical calculations, experiments were performed. The simulated filament surface temperatures were in good agreement with the experimental temperatures measured using a two-color pyrometer. The results showed that the highest deposition rate and the lowest non-diamond deposition were obtained at a power of 16 kW.

  17. Computer simulation of the optical properties of high-temperature cermet solar selective coatings

    Energy Technology Data Exchange (ETDEWEB)

    Nejati, M. Reza [K.N. Toosi Univ. of Technology, Dept. of Mechanical Engineering, Tehran (Iran); Fathollahi, V.; Asadi, M. Khalaji [AEOI, Center for Renewable Energy Research and Applications (CRERA), Tehran (Iran)

    2005-02-01

    A computer simulation is developed to calculate the solar absorptance and thermal emittance of various configurations of cermet solar selective coatings. Special attention has been paid to those material combinations which are commonly used in high-temperature solar thermal applications. Moreover, other configurations such as two-, three- and four-cermet-layer structures have been theoretically analyzed by computer simulation using three distinct physical models: Ping Sheng, Maxwell-Garnett and Bruggeman. The novel case of a two-cermet-layer structure with different cermet components has also been investigated. The results were optimized by allowing the program to manipulate the metal volume fraction and the thickness of each layer, and the resulting configurations were compared to choose the best one. The calculated results lie within the range of 0.91-0.97 for solar absorptance and 0.02-0.07 for thermal emittance at room temperature. (Author)

  18. Computer-based liquid radioactive waste control with plant emergency and generator temperature monitoring

    International Nuclear Information System (INIS)

    Plotnick, R.J.; Schneider, M.I.; Shaffer, C.E.

    1986-01-01

    At the start of the design of the liquid radwaste control system for a nuclear generating station under construction, several serious problems were detected. The solution incorporated a new approach utilizing a computer and a blend of standard and custom software to replace the existing conventionally instrumented benchboard. The computer-based system, in addition to solving the problems associated with the benchboard design, provided other enhancements which significantly improved the operability and reliability of the radwaste system. The functionality of the computer-based radwaste control system also enabled additional applications to be added to an expanded multitask version of the radwaste computer: 1) a Nuclear Regulatory Commission (NRC) requirement that all nuclear power plants have an emergency response facility status monitoring system; and 2) the sophisticated temperature monitoring and trending requested by the electric generator manufacturer to continue its warranty commitments. The addition of these tasks to the radwaste computer saved the cost of one or more computers that would otherwise have been dedicated to these work requirements.

  19. On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.

    Science.gov (United States)

    Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça

    2010-01-01

    This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. Reliable non-invasive temperature estimator models would improve the safety and efficacy of thermal therapies, and would lead to broader acceptance of this kind of therapy. Several approaches based on medical imaging technologies have been proposed, with magnetic resonance imaging (MRI) identified as the only one achieving acceptable temperature resolution for hyperthermia purposes. However, intrinsic characteristics of MRI (e.g., high instrumentation cost) led us to use backscattered ultrasound (BSU) instead. Among the different BSU features, temporal echo-shifts have received the most attention. These shifts are due to changes in the speed of sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself (based on soft-computing methods) and its application to temperature estimation in a three-layer phantom, which is also not reported in the literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder, in the amount of 2% of the water weight, to the above composition. The phantom was developed to have attenuation and speed of sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and past temperature values were then considered as possible estimator inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis-function neural networks (RBFNN) with structure optimized by the multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were
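    A minimal sketch of the RBFNN idea behind such estimators — fixed Gaussian centers with least-squares output weights — is shown below on synthetic data. The inputs and target function are invented stand-ins; the actual estimators use MOGA-optimized structures and measured echo-shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in inputs: a cumulative echo-shift feature and the
# previous temperature sample (both invented for illustration).
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 37.0 + 8.0 * X[:, 0] + 2.0 * X[:, 1] ** 2  # "true" temperature, deg C

def rbf_design(X, centers, width):
    """Gaussian radial-basis design matrix, with a bias column."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    return np.hstack([np.ones((len(X), 1)), phi])

centers = X[rng.choice(len(X), size=10, replace=False)]  # 10 fixed centers
Phi = rbf_design(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output weights

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"training RMSE: {rmse:.3f} deg C")
```

In the paper's setting, the MOGA searches over the number of centers, their placement, and the input lags rather than fixing them by hand as done here.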

  20. An experimental platform for triaxial high-pressure/high-temperature testing of rocks using computed tomography

    Science.gov (United States)

    Glatz, Guenther; Lapene, Alexandre; Castanier, Louis M.; Kovscek, Anthony R.

    2018-04-01

    A conventional high-pressure/high-temperature experimental apparatus for combined geomechanical and flow-through testing of rocks is not X-ray compatible. Additionally, current X-ray transparent systems for computed tomography (CT) of cm-sized samples are limited to design temperatures below 180 °C. We describe a novel, high-temperature (>400 °C), high-pressure (>2000 psi/>13.8 MPa confining, >10 000 psi/>68.9 MPa vertical load) triaxial core holder suitable for X-ray CT scanning. The new triaxial system permits time-lapse imaging to capture the role of effective stress on fluid distribution and porous medium mechanics. System capabilities are demonstrated using ultimate compressive strength (UCS) tests of Castlegate sandstone. In this case, flooding the porous medium with a radio-opaque gas such as krypton before and after the UCS test improves the discrimination of rock features such as fractures. The results of high-temperature tests are also presented. A Uintah Basin sample of immature oil shale is heated from room temperature to 459 °C under uniaxial compression. The sample contains kerogen that pyrolyzes as temperature rises, releasing hydrocarbons. Imaging reveals the formation of stress bands as well as the evolution and connectivity of the fracture network within the sample as a function of time.

  1. Computer simulations of the restricted primitive model at very low temperature and density

    International Nuclear Information System (INIS)

    Valeriani, Chantal; Camp, Philip J; Zwanikken, Jos W; Van Roij, Rene; Dijkstra, Marjolein

    2010-01-01

    The problem of successfully simulating ionic fluids at low temperature and low density states is well known in the simulation literature: using conventional methods, the system is not able to equilibrate rapidly due to the presence of strongly associated cation-anion pairs. In this paper we present a numerical method for speeding up computer simulations of the restricted primitive model (RPM) at low temperatures (around the critical temperature) and at very low densities (down to 10^-10 σ^-3, where σ is the ion diameter). Experimentally, this regime corresponds to typical concentrations of electrolytes in nonaqueous solvents. As far as we are aware, this is the first time that the RPM has been equilibrated at such extremely low concentrations. More generally, this method could be used to equilibrate other systems that form aggregates at low concentrations.

  2. TEMP: a computer code to calculate fuel pin temperatures during a transient

    International Nuclear Information System (INIS)

    Bard, F.E.; Christensen, B.Y.; Gneiting, B.C.

    1980-04-01

    The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axisymmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analysed, along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately reproduces the solutions of two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
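    The Crank-Nicolson (implicit-explicit) scheme such codes use can be illustrated on the simplest case: 1-D transient conduction in a slab with fixed surface temperatures. This is only a sketch of the time scheme, not the TEMP code itself (which works in axisymmetric cylindrical geometry); all numbers here are invented.

```python
import numpy as np

N = 21                    # spatial nodes
alpha = 1e-5              # thermal diffusivity, m^2/s
dx, dt = 0.001, 0.05      # grid spacing (m) and time step (s)
r = alpha * dt / dx**2    # mesh Fourier number (0.5 here)

# Crank-Nicolson: A @ u_new = B @ u_old, averaging the implicit and
# explicit discretizations of u_t = alpha * u_xx. Boundary rows are
# left as identity so the surface temperatures stay fixed.
A = np.eye(N)
B = np.eye(N)
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = -r / 2.0
    A[i, i] = 1.0 + r
    B[i, i - 1] = B[i, i + 1] = r / 2.0
    B[i, i] = 1.0 - r

u = np.full(N, 300.0)     # initial temperature, K
u[0] = u[-1] = 400.0      # step change applied at both surfaces

for _ in range(200):      # advance to t = 10 s
    u = np.linalg.solve(A, B @ u)

print(f"centre temperature after 10 s: {u[N // 2]:.1f} K")
```

For these parameters the centre node approaches the analytical one-term series solution (about 389 K at t = 10 s), while the fully explicit variant of the same loop would be stability-limited in the time step.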

  3. Computational model for a high temperature electrolyzer coupled to a HTTR for efficient nuclear hydrogen production

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Daniel; Rojas, Leorlen; Rosales, Jesus; Castro, Landy; Gamez, Abel; Brayner, Carlos, E-mail: danielgonro@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Garcia, Lazaro; Garcia, Carlos; Torre, Raciel de la, E-mail: lgarcia@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba); Sanchez, Danny [Universidade Estadual de Santa Cruz (UESC), Ilheus, BA (Brazil)

    2015-07-01

    High-temperature electrolysis coupled to a very high temperature reactor (VHTR) is one of the most promising methods for hydrogen production using a nuclear reactor as the primary heat source. However, there are no references in the scientific literature to a test facility that would allow evaluation of the efficiency of the process and of the other physical parameters that have to be taken into consideration for its accurate application in the hydrogen economy as a massive production method. Given this lack of experimental facilities, mathematical models are among the most used tools to study this process and its flowsheets, in which the electrolyzer is the most important component because of its complexity and importance in the process. A computational fluid dynamics (CFD) model for the evaluation and optimization of the electrolyzer of a high-temperature electrolysis hydrogen production flowsheet was developed using ANSYS FLUENT®. The electrolyzer's operational and design parameters will be optimized in order to obtain the maximum hydrogen production and the highest efficiency in the module. This optimized electrolyzer model will be incorporated into a chemical process simulation (CPS) code to study the overall high-temperature flowsheet coupled to a high-temperature accelerator-driven system (ADS), which offers advantages in the transmutation of spent fuel. (author)

  4. Time and temperature dependence of cascade induced defect production in in situ experiments and computer simulation

    International Nuclear Information System (INIS)

    Ishino, Shiori

    1993-01-01

    Understanding the defect production and annihilation processes in a cascade is important for modelling radiation damage and establishing irradiation correlations. In situ observation of heavy-ion radiation damage holds great promise in this respect. The time and temperature dependence of the formation and annihilation of vacancy clusters in a cascade has been studied, with a time resolution of 30 ms, using a facility which comprises a heavy-ion accelerator and an electron microscope. Formation and annihilation rates of defect clusters have been measured separately by this technique. The observed processes have been analysed by simple kinetic equations, taking into account the sink effect of the surface and of the defect clusters themselves, together with the annihilation process due to thermal emission of vacancies from the defect clusters. Another tool to study the time and temperature dependence of defect production in a cascade is computer simulation. Recent results of molecular dynamics calculations on the temperature dependence of cascade evolution are presented, including the directional and temperature dependence of the lengths of replacement collision sequences, the temperature dependence of the process of reaching thermal equilibrium, and so on. These results are discussed within the general time frame of radiation damage evolution, covering from 10^-15 to 10^9 s, and several issues important for a general understanding have been identified. (orig.)
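    A toy version of such kinetic equations — a single cluster population with a constant production rate and thermally activated annihilation — is sketched below. This is far simpler than the paper's analysis (no sink terms for surface or clusters), and all parameter values are invented.

```python
import numpy as np

kB = 8.617e-5                # Boltzmann constant, eV/K
G = 1.0                      # cluster production rate (arbitrary units/s)
k0, Ea = 1.0e6, 1.0          # attempt frequency (1/s), activation energy (eV)

def annihilation_rate(T):
    """Thermally activated annihilation rate constant, Arrhenius form."""
    return k0 * np.exp(-Ea / (kB * T))

def cluster_density(T, t, dt=1e-4):
    """Forward-Euler integration of dN/dt = G - k(T) * N from N(0) = 0."""
    k = annihilation_rate(T)
    N = 0.0
    for _ in range(int(t / dt)):
        N += dt * (G - k * N)
    return N

# Higher temperature -> faster thermal annihilation -> fewer surviving clusters
print(cluster_density(900.0, 1.0), cluster_density(1100.0, 1.0))
```

Even this minimal model reproduces the qualitative trend the experiments probe: the surviving cluster density falls as the irradiation temperature rises.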

  5. Computational model for a high temperature electrolyzer coupled to a HTTR for efficient nuclear hydrogen production

    International Nuclear Information System (INIS)

    Gonzalez, Daniel; Rojas, Leorlen; Rosales, Jesus; Castro, Landy; Gamez, Abel; Brayner, Carlos; Garcia, Lazaro; Garcia, Carlos; Torre, Raciel de la; Sanchez, Danny

    2015-01-01

    High-temperature electrolysis coupled to a very high temperature reactor (VHTR) is one of the most promising methods for hydrogen production using a nuclear reactor as the primary heat source. However, there are no references in the scientific literature to a test facility that would allow evaluation of the efficiency of the process and of the other physical parameters that have to be taken into consideration for its accurate application in the hydrogen economy as a massive production method. Given this lack of experimental facilities, mathematical models are among the most used tools to study this process and its flowsheets, in which the electrolyzer is the most important component because of its complexity and importance in the process. A computational fluid dynamics (CFD) model for the evaluation and optimization of the electrolyzer of a high-temperature electrolysis hydrogen production flowsheet was developed using ANSYS FLUENT®. The electrolyzer's operational and design parameters will be optimized in order to obtain the maximum hydrogen production and the highest efficiency in the module. This optimized electrolyzer model will be incorporated into a chemical process simulation (CPS) code to study the overall high-temperature flowsheet coupled to a high-temperature accelerator-driven system (ADS), which offers advantages in the transmutation of spent fuel. (author)

  6. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    Science.gov (United States)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, considerable interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industry demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  7. STATIC_TEMP: a useful computer code for calculating static formation temperatures in geothermal wells

    Energy Technology Data Exchange (ETDEWEB)

    Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)

    2000-07-01

    The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on five analytical methods which are the most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in times and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Microsoft Fortran-77 for the MS-DOS environment using structured programming techniques. It runs on most IBM-compatible personal computers. The source code and its computational architecture, as well as the input and output files, are described in detail. Validation and application examples of the use of this computer code with wellbore temperature data (obtained from the specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
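    One classic analytical method in this family is the Horner plot: logged bottomhole temperatures are regressed against ln((tc + Δt)/Δt), where tc is the circulation time and Δt the shut-in time, and the intercept at infinite shut-in time is the static formation temperature. The sketch below uses invented log data, not output of STATIC_TEMP.

```python
import numpy as np

tc = 5.0                                      # circulation time, h (hypothetical)
dt = np.array([6.0, 12.0, 18.0, 24.0])        # shut-in times, h
bht = np.array([118.0, 127.0, 131.0, 134.0])  # logged bottomhole temps, deg C

# As dt -> infinity, (tc + dt)/dt -> 1 and ln(...) -> 0, so the straight-line
# intercept of BHT vs ln((tc + dt)/dt) estimates the static temperature.
x = np.log((tc + dt) / dt)
slope, intercept = np.polyfit(x, bht, 1)

print(f"static formation temperature estimate: {intercept:.1f} deg C")
```

For these made-up measurements the extrapolation lands around 140 °C, noticeably above the highest logged value, which is exactly why extrapolation methods of this kind are needed during well completion.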

  8. Computation of temperature-dependent Legendre moments of a double-differential elastic cross section

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Larson, N.M.; Leal, L.C.; Williams, M.L.; Becker, B.; Dagan, R.

    2011-01-01

    A general expression for temperature-dependent Legendre moments of a double-differential elastic scattering cross section was derived by Ouisloumen and Sanchez [Nucl. Sci. Eng. 107, 189-200 (1991)]. Attempts to compute this expression have been hindered by its three-fold nested integral, limiting practical applications to just the zeroth Legendre moment of isotropic scattering. It is shown that the two innermost integrals can be evaluated analytically, to all orders of Legendre moments and for anisotropic scattering, by recursive application of integration by parts. For this method to work, the anisotropic angular distribution in the center of mass is expressed as an expansion in Legendre polynomials. The first several Legendre moments of elastic scattering of neutrons on 238U are computed at T = 1000 K and an incoming energy of 6.5 eV for isotropic scattering in the center-of-mass frame. Legendre moments of the anisotropic angular distribution given via Blatt-Biedenharn coefficients are computed at 1 keV. The results are in agreement with those computed by the Monte Carlo method. (author)
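
    The Legendre moments in question are, at their core, weighted integrals of an angular distribution over mu in [-1, 1]. A minimal numerical sketch (plain Gauss-Legendre quadrature, not the recursive integration-by-parts scheme of the paper) for f_l = integral of f(mu) P_l(mu) dmu:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_moments(f, lmax, npts=64):
    """Moments f_l = integral_{-1}^{1} f(mu) P_l(mu) dmu via Gauss-Legendre quadrature."""
    mu, w = leggauss(npts)
    return [float(np.sum(w * f(mu) * Legendre.basis(l)(mu))) for l in range(lmax + 1)]

# Isotropic distribution f(mu) = 1/2: only the l = 0 moment survives
iso = legendre_moments(lambda mu: np.full_like(mu, 0.5), 3)
# Linearly anisotropic f(mu) = (1 + mu)/2: f_0 = 1, f_1 = 1/3
lin = legendre_moments(lambda mu: 0.5 * (1.0 + mu), 3)
```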

  9. GENGTC-JB: a computer program to calculate temperature distribution for cylindrical geometry capsule

    International Nuclear Information System (INIS)

    Someya, Hiroyuki; Kobayashi, Toshiki; Niimi, Motoji; Hoshiya, Taiji; Harayama, Yasuo

    1987-09-01

    In the design of JMTR irradiation capsules containing specimens, a program named GENGTC has generally been used to evaluate temperature distributions in the capsules. The program was originally written at ORNL (U.S.A.) and consists of very simple calculation methods, which make it easy to use and broadly applicable to capsule design. However, when the program was reviewed against the capabilities of recent computers, it was considered desirable to replace the original computing methods with more advanced ones and to simplify its complicated data input. The program was therefore upgraded with the aim of improving both the calculations and the input method. The present report describes the revised calculation methods and gives an input/output guide for the upgraded program. (author)
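
    The kind of radial heat flow calculation such capsule codes perform can be illustrated with the textbook steady-state result for concentric cylindrical shells, where each shell contributes a thermal resistance ln(r_out/r_in)/(2*pi*k) per unit length (a deliberately simplified sketch, not GENGTC's actual method):

```python
import math

def boundary_temperatures(q_per_len, t_outer, radii, k_shells):
    """Steady-state temperatures at the boundaries of concentric cylindrical shells.

    q_per_len : heat rate per unit length generated inside radii[0] (W/m)
    t_outer   : temperature held at the outermost radius (degC)
    radii     : increasing boundary radii r0 < r1 < ... (m)
    k_shells  : thermal conductivity of each shell between the radii (W/m K)
    Returns temperatures at each boundary radius, innermost first.
    """
    temps = [t_outer]
    shells = list(zip(radii[:-1], radii[1:], k_shells))
    for r_in, r_out, k in reversed(shells):
        temps.append(temps[-1] + q_per_len * math.log(r_out / r_in) / (2.0 * math.pi * k))
    return list(reversed(temps))

# Hypothetical capsule: metal holder (k = 15) inside a stagnant gas gap (k = 0.3)
t = boundary_temperatures(100.0, 300.0, [0.01, 0.02, 0.03], [15.0, 0.3])
```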

  10. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    Energy Technology Data Exchange (ETDEWEB)

    Ghrayeb, S. Z. [Dept. of Mechanical and Nuclear Engineering, Pennsylvania State Univ., 230 Reber Building, Univ. Park, PA 16802 (United States); Ouisloumen, M. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Ougouag, A. M. [Idaho National Laboratory, MS-3860, PO Box 1625, Idaho Falls, ID 83415 (United States); Ivanov, K. N.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)
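
    The Monte Carlo reference tally mentioned above can be sketched as follows. This is a simplified free-gas model: target velocities are drawn from a Maxwell-Boltzmann distribution without the relative-speed flux weighting used in production codes, and scattering is isotropic in the center of mass:

```python
import numpy as np

def outgoing_energies(e_in, awr, kT, n=200_000, seed=1):
    """Tally outgoing neutron energies for elastic scattering on a thermal target.

    Units: neutron mass m/2 = 1, so speed = sqrt(energy).
    awr : target-to-neutron mass ratio; kT in the same energy units as e_in.
    """
    rng = np.random.default_rng(seed)
    v_n = np.zeros((n, 3))
    v_n[:, 2] = np.sqrt(e_in)                      # incident beam along z
    sigma = np.sqrt(kT / (2.0 * awr))              # MB per-component std in these units
    v_t = rng.normal(0.0, sigma, size=(n, 3))      # target velocities
    v_cm = (v_n + awr * v_t) / (1.0 + awr)         # centre-of-mass velocity
    u = np.linalg.norm(v_n - v_cm, axis=1)         # neutron CM speed (preserved)
    mu = rng.uniform(-1.0, 1.0, n)                 # isotropic CM scattering
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    s = np.sqrt(1.0 - mu**2)
    d = np.stack([s * np.cos(phi), s * np.sin(phi), mu], axis=1)
    v_out = v_cm + u[:, None] * d
    return (v_out**2).sum(axis=1)

# Cold heavy target (kT = 0): energies stay in [alpha*E, E], alpha = ((A-1)/(A+1))^2
E = outgoing_energies(6.5, 238.0, 0.0)
```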

  11. FDTD computation of temperature elevation in the elderly for far-field RF exposures

    International Nuclear Information System (INIS)

    Nomura, Tomoki; Laakso, Ilkka; Hirata, Akimasa

    2014-01-01

    Core temperature elevation and perspiration in younger and older adults are investigated for plane-wave exposure at a whole-body averaged specific absorption rate of 0.4 W/kg. A numerical Japanese male model is considered together with a thermoregulatory response formula proposed in the authors' previous study. The frequencies considered were 65 MHz and 2 GHz, at which the total power absorption in humans becomes maximal for the allowable power density prescribed in the international guidelines. From the computational results, the core temperature elevation in the older adult model was larger than that in the younger one at both frequencies. The difference is attributable to differences in sweating, which originate from the threshold activating sweating and from the decline of sweating in the legs. (authors)

  12. FDTD computation of temperature elevation in the elderly for far-field RF exposures.

    Science.gov (United States)

    Nomura, Tomoki; Laakso, Ilkka; Hirata, Akimasa

    2014-03-01

    Core temperature elevation and perspiration in younger and older adults are investigated for plane-wave exposure at a whole-body averaged specific absorption rate of 0.4 W/kg. A numerical Japanese male model is considered together with a thermoregulatory response formula proposed in the authors' previous study. The frequencies considered were 65 MHz and 2 GHz, at which the total power absorption in humans becomes maximal for the allowable power density prescribed in the international guidelines. From the computational results, the core temperature elevation in the older adult model was larger than that in the younger one at both frequencies. The difference is attributable to differences in sweating, which originate from the threshold activating sweating and from the decline of sweating in the legs.

  13. Computational prediction of the effective temperature in the lying area of pig pens

    DEFF Research Database (Denmark)

    Bjerg, Bjarne; Rong, Li; Zhang, Guoqiang

    2018-01-01

    Using solid floor instead of drained or slatted floor in the lying areas of pig pens has distinct advantages in relation to animal welfare, odour abatement and ammonia emission, energy consumption and reduced building costs. However, pig producers often opt out of providing a solid floor due […] individually contribute to the combined effect of the thermal conditions the pigs are exposed to. Computational Fluid Dynamics (CFD) simulations were conducted to estimate the relevant parameters and, finally, the ET. Furthermore, the developed ET equation was implemented in the CFD model as a Custom Field Function to calculate the distribution of ET in the animal-occupied zone. It was assumed that a traditional diffuse ceiling air inlet would deliver the required airflow rate as long as the outdoor temperature was below 10 °C. At higher outdoor temperatures, a ceiling-jet inlet above each pen was opened […]

  14. General method and thermodynamic tables for computation of equilibrium composition and temperature of chemical reactions

    Science.gov (United States)

    Huff, Vearl N; Gordon, Sanford; Morrell, Virginia E

    1951-01-01

    A rapidly convergent successive approximation process is described that simultaneously determines both composition and temperature resulting from a chemical reaction. This method is suitable for use with any set of reactants over the complete range of mixture ratios as long as the products of reaction are ideal gases. An approximate treatment of limited amounts of liquids and solids is also included. This method is particularly suited to problems having a large number of products of reaction and to problems that require determination of such properties as specific heat or velocity of sound of a dissociating mixture. The method presented is applicable to a wide variety of problems that include (1) combustion at constant pressure or volume; and (2) isentropic expansion to an assigned pressure, temperature, or Mach number. Tables of thermodynamic functions needed with this method are included for 42 substances for convenience in numerical computations.
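
    The heart of such a scheme is repeatedly solving for the temperature at which the enthalpy of the products matches a target value. A stripped-down Newton iteration with a hypothetical linear-in-T heat capacity (not the NACA thermodynamic tables) shows the structure:

```python
def solve_temperature(h_target, a, b, t0=1000.0, t_ref=298.15, tol=1e-8, itmax=50):
    """Newton iteration for T such that h(T) = h_target, where the sensible
    enthalpy follows from a hypothetical cp(T) = a + b*T:
        h(T) = a*(T - t_ref) + (b/2)*(T^2 - t_ref^2).
    """
    t = t0
    for _ in range(itmax):
        h = a * (t - t_ref) + 0.5 * b * (t * t - t_ref * t_ref)
        dt = (h_target - h) / (a + b * t)   # Newton step: dh/dT = cp(T)
        t += dt
        if abs(dt) < tol:
            break
    return t

# Target enthalpy constructed from T = 2000 K, so the iteration should recover it
h = 30.0 * (2000.0 - 298.15) + 0.5 * 0.005 * (2000.0**2 - 298.15**2)
t = solve_temperature(h, 30.0, 0.005)
```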

  15. Computer codes used in the calculation of high-temperature thermodynamic properties of sodium

    International Nuclear Information System (INIS)

    Fink, J.K.

    1979-12-01

    Three computer codes - SODIPROP, NAVAPOR, and NASUPER - were written to calculate a self-consistent set of thermodynamic properties for saturated, subcooled, and superheated sodium. The calculations incorporate new critical parameters (temperature, pressure, and density) and recently derived single equations for enthalpy and vapor pressure. The following thermodynamic properties are calculated in these codes: enthalpy, heat capacity, entropy, vapor pressure, heat of vaporization, density, volumetric thermal expansion coefficient, compressibility, and thermal pressure coefficient. In the code SODIPROP, these properties are calculated for saturated and subcooled liquid sodium. Thermodynamic properties of saturated sodium vapor are calculated in the code NAVAPOR. The code NASUPER calculates thermodynamic properties of superheated sodium vapor, but only for low (< 1644 K) temperatures. No calculations were made for the supercritical region.

  16. TRANSENERGY S: computer codes for coolant temperature prediction in LMFBR cores during transient events

    International Nuclear Information System (INIS)

    Glazer, S.; Todreas, N.; Rohsenow, W.; Sonin, A.

    1981-02-01

    This document is intended as a user/programmer manual for the TRANSENERGY-S computer code. The code represents an extension of the steady state ENERGY model, originally developed by E. Khan, to predict coolant and fuel pin temperatures in a single LMFBR core assembly during transient events. Effects which may be modelled in the analysis include temporal variation in gamma heating in the coolant and duct wall, rod power production, coolant inlet temperature, coolant flow rate, and thermal boundary conditions around the single assembly. Numerical formulations of the energy equations in the fuel and coolant are presented, and the solution schemes and stability criteria are discussed. A detailed description of input deck preparation is given, as well as code logic flowcharts and a complete program listing. TRANSENERGY-S code predictions are compared with those of two different versions of COBRA, and partial results of a 61-pin bundle test case are presented.

  17. Temperature-based estimation of global solar radiation using soft computing methodologies

    Science.gov (United States)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak

    2016-07-01

    Precise knowledge of solar radiation is essential in many technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potential of soft computing techniques to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (T_max, T_min, and T_avg) is evaluated for an Iranian city. For this purpose, a comparative evaluation of three methodologies is performed: the adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly). Five combinations of T_max, T_min, and T_avg serve as inputs to develop the ANFIS, SVR-rbf, and SVR-poly models. The results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. Across all techniques, the higher accuracies are achieved by models (5) using T_max - T_min and T_max as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m², 2.0716 MJ/m², and 0.9380, respectively. The results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
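
    In the same spirit as the SVR-rbf models (but using the closely related RBF kernel ridge regression so that the sketch stays self-contained), temperature inputs can be mapped to radiation; the data, kernel width and ridge parameter below are all synthetic/hypothetical:

```python
import numpy as np

def fit_rbf_ridge(X, y, gamma=5.0, lam=1e-6):
    """RBF kernel ridge regression: returns a predictor closed over the data."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    alpha = np.linalg.solve(kernel(X, X) + lam * np.eye(len(X)), y)
    return lambda Xq: kernel(np.atleast_2d(Xq), X) @ alpha

rng = np.random.default_rng(0)
t_max = rng.uniform(15.0, 40.0, 60)
t_min = t_max - rng.uniform(5.0, 15.0, 60)
X = np.column_stack([(t_max - t_min) / 25.0, t_max / 40.0])  # scaled inputs
y = 0.7 * np.sqrt(t_max - t_min) + 0.05 * t_max              # synthetic DHGSR (MJ/m^2)
predict = fit_rbf_ridge(X, y)
```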

  18. Creating Very True Quantum Algorithms for Quantum Energy Based Computing

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Geurdes, Han; Batle, Josep; Abdalla, Soliman; Farouk, Ahmed; Diep, Do Ngoc

    2018-04-01

    An interpretation of quantum mechanics is discussed. It is assumed that quantum is energy. An algorithm by means of the energy interpretation is discussed. An algorithm, based on the energy interpretation, is proposed for fast determination of a homogeneous linear function f(x) := s.x = s_1 x_1 + s_2 x_2 + ... + s_N x_N. Here x = (x_1, ..., x_N), x_j in R, and the coefficients s = (s_1, ..., s_N), s_j in N. Given the interpolation values (f(1), f(2), ..., f(N)) = y, the unknown coefficients s = (s_1(y), ..., s_N(y)) of the linear function shall be determined simultaneously. The speed of determining the values is shown to outperform the classical case by a factor of N. Our method is based on the generalization of the Bernstein-Vazirani algorithm to qudit systems. Next, by using M parallel quantum systems, M homogeneous linear functions are determined simultaneously. The speed of obtaining the set of M homogeneous linear functions is shown to outperform the classical case by a factor of N x M.

  19. Computational and Experimental Study of Thermodynamics of the Reaction of Titania and Water at High Temperatures.

    Science.gov (United States)

    Nguyen, Q N; Bauschlicher, C W; Myers, D L; Jacobson, N S; Opila, E J

    2017-12-14

    Gaseous titanium hydroxide and oxyhydroxide species were studied with quantum chemical methods. The results are used in conjunction with an experimental transpiration study of titanium dioxide (TiO2) in water-vapor-containing environments at elevated temperatures to provide a thermodynamic description of the Ti(OH)4(g) and TiO(OH)2(g) species. The geometries and harmonic vibrational frequencies of these species were computed using the coupled-cluster singles and doubles method with a perturbative correction for connected triple substitutions [CCSD(T)]. For the OH bending and rotation, B3LYP density functional theory was used to compute corrections to the harmonic approximations. These results were combined to determine the enthalpy of formation. Experimentally, the transpiration method was used with water contents from 0 to 76 mol % in oxygen or argon carrier gases for 20-250 h exposure times at 1473-1673 K. Results indicate that oxygen is not a key contributor to volatilization, and the primary volatilization reaction in this temperature range is TiO2(s) + H2O(g) = TiO(OH)2(g). Data were analyzed with both the second and third law methods using the thermal functions derived from the theoretical calculations. The third law enthalpy of formation at 298.15 K for TiO(OH)2(g) was -838.9 ± 6.5 kJ/mol, which compares favorably to the theoretical value of -838.7 ± 25 kJ/mol. We recommend the experimentally derived third law enthalpy of formation at 298.15 K for TiO(OH)2, the computed entropy of 320.67 J/(mol K), and the computed heat capacity [149.192 - 0.02539 T + (8.28697 × 10^-6) T^2 - 15614.05/T - (5.2182 × 10^-11)/T^2] J/(mol K), where T is the temperature in K.
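
    For convenience, the recommended heat-capacity polynomial can be evaluated directly, with the coefficients exactly as reported above:

```python
def cp_tio_oh2(T):
    """Heat capacity of TiO(OH)2(g) in J/(mol K), from the fitted polynomial
    reported in the abstract; T is the temperature in kelvin."""
    return (149.192 - 0.02539 * T + 8.28697e-6 * T**2
            - 15614.05 / T - 5.2182e-11 / T**2)

cp_1573 = cp_tio_oh2(1573.0)  # mid-range of the 1473-1673 K experiments
```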

  20. Computational study of a high-temperature thermal nanoimprint lithographic (TNIL) process

    Science.gov (United States)

    Cleveland, Nicolas Joseph

    As an emerging manufacturing technique, nanoimprint lithography (NIL) can fabricate micro- and nanoscale features of microfluidic devices with very high accuracy and reliability. Process parameters such as pressure, temperature, and material properties play critical roles in the NIL process. In this work, the thermal nanoimprint lithography (TNIL) process is studied computationally, and the developed model can accurately predict nano- and micro-pattern geometry and quality in TNIL processes based on complex mold-resist interaction. Applications of this modeling technique range from micro- and nano-patterns used in micro-channels for biomedical devices to applications such as biological/particle sensors or superhydrophobic surfaces. In the high-temperature TNIL process, a polymer melt such as polymethyl methacrylate (PMMA) is heated beyond its melting temperature so that it behaves predominantly as a fluid during imprinting. The effects of surface tension and shear thinning become significant at or above the melting point, whereas the polymer melt can be modeled as a viscoelastic solid, solved with finite element analysis, when the process temperature remains between the glass transition and melting temperatures. Additionally, the mold used in TNIL can deform, since it is made of a soft, rubbery elastomer such as polydimethylsiloxane (PDMS), and it is of interest to include the effect of this mold deformation. Leakage between channels or significant variation in channel width can occur in microfluidic devices if mold deformation exceeds design tolerances. In the current work, fluid-structure interaction (FSI) technology is leveraged to solve for significant mold deformation and its effect on the polymer melt flow field during the TNIL process. The simulation result is compared to experimental results. The FSI simulation result is also compared to the equivalent case with a rigid mold in place of the flexible material, which shows results of differing mold […]

  1. Computer-Aided Design of Materials for use under High Temperature Operating Condition

    Energy Technology Data Exchange (ETDEWEB)

    Rajagopal, K. R.; Rao, I. J.

    2010-01-31

    The procedures in place for producing materials to optimize their performance with respect to creep characteristics, oxidation resistance, elevation of melting point, thermal and electrical conductivity, and other thermal and electrical properties are essentially trial-and-error experimentation, which tends to be tremendously time consuming and expensive. A computational approach has been developed to replace these trial-and-error procedures so that materials can be efficiently designed and engineered for the application in question, leading to enhanced material performance, significantly decreased costs, and a shorter time to produce such materials. The work is relevant to the design and manufacture of turbine blades operating at high temperature; the development of armor and missile heads; corrosion-resistant tanks and containers; better electrical conductors; and the numerous other applications envisaged for specially structured nanocrystalline solids. A robust thermodynamic framework is developed within which the computational approach is formulated. The procedure takes into account microstructural features such as dislocation density, lattice mismatch, stacking faults, volume fractions of inclusions, interfacial area, etc. A robust model for single-crystal superalloys that accounts for the microstructure of the alloy within the context of a continuum model is developed. Having developed the model, we implement it in a computational scheme using the software ABAQUS/STANDARD. The results of the simulation are compared against experimental data in realistic geometries.

  2. Novel spintronics devices for memory and logic: prospects and challenges for room temperature all spin computing

    Science.gov (United States)

    Wang, Jian-Ping

    An energy-efficient memory and logic device for the post-CMOS era has been the goal of a variety of research fields. The limits of scaling, which we expect to reach by the year 2025, demand that future advances in computational power be realized not from ever-shrinking device sizes but from innovative designs and new materials and physics. Magnetoresistive devices have been promising candidates for future integrated magnetic computation because of their unique non-volatility and functionalities. The application of perpendicular magnetic anisotropy for potential STT-RAM applications was demonstrated and has since been intensively investigated by both academic and industry groups, but there is no clear pathway for how scaling will eventually work for both memory and logic applications. One of the main reasons is that no material stack candidate has been demonstrated that could lead to a scaling scheme down to sub-10 nm. Another challenge for the use of magnetoresistive devices in logic applications is their available switching speed and writing energy. Although good progress has been made in demonstrating fast switching of a thermally stable magnetic tunnel junction (MTJ) down to 165 ps, this is still several times slower than its CMOS counterpart. In this talk, I will review the recent progress by my research group and my C-SPIN colleagues, then discuss the opportunities, challenges, and some potential pathways for magnetoresistive devices for memory and logic applications and their integration into a room-temperature all-spin computing system.

  3. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially building acoustics. These include simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, the resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.
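
    The energy flow principle underlying SEA can be illustrated by the steady-state power balance of two coupled subsystems (generic textbook form, not the EN 12354 formulation; all numbers below are made up):

```python
import numpy as np

def sea_energies(P, eta_int, eta_coup, omega):
    """Steady-state energies E1, E2 of two coupled SEA subsystems.

    P        : (P1, P2) input powers [W]
    eta_int  : (eta1, eta2) internal loss factors
    eta_coup : (eta12, eta21) coupling loss factors
    Power balance: omega*((eta1 + eta12)*E1 - eta21*E2) = P1, and symmetrically.
    """
    eta1, eta2 = eta_int
    eta12, eta21 = eta_coup
    A = omega * np.array([[eta1 + eta12, -eta21],
                          [-eta12, eta2 + eta21]])
    return np.linalg.solve(A, P)

# Subsystem 1 is driven with 1 W; subsystem 2 receives power only via coupling
E1, E2 = sea_energies((1.0, 0.0), (0.01, 0.02), (0.005, 0.003), omega=2 * np.pi * 1000)
```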

  4. CosmoTransitions: Computing cosmological phase transition temperatures and bubble profiles with multiple fields

    Science.gov (United States)

    Wainwright, Carroll L.

    2012-09-01

    I present a numerical package (CosmoTransitions) for analyzing finite-temperature cosmological phase transitions driven by single or multiple scalar fields. The package analyzes the different vacua of a theory to determine their critical temperatures (where the vacuum energy levels are degenerate), their supercooling temperatures, and the bubble wall profiles which separate the phases and describe their tunneling dynamics. I introduce a new method of path deformation to find the profiles of both thin- and thick-walled bubbles. CosmoTransitions is freely available for public use.
    Program summary:
    Program Title: CosmoTransitions
    Catalogue identifier: AEML_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEML_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 8775
    No. of bytes in distributed program, including test data, etc.: 621096
    Distribution format: tar.gz
    Programming language: Python
    Computer: Developed on a 2009 MacBook Pro. No computer-specific optimization was performed.
    Operating system: Designed and tested on Mac OS X 10.6.8. Compatible with any OS with Python installed.
    RAM: Approximately 50 MB, mostly for loading plotting packages.
    Classification: 1.9, 11.1
    External routines: SciPy, NumPy, matplotlib
    Nature of problem: Analyze early-Universe finite-temperature phase transitions with multiple scalar fields: determine the phase structure of an input theory, the amount of supercooling at each phase transition, and the bubble-wall profiles of the nucleated bubbles that drive the transitions.
    Solution method: To find the bubble-wall profile, the program assumes that tunneling happens along a fixed path in field space. This reduces the equations of motion to one dimension, which can then be solved using the overshoot/undershoot method.
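
    The critical-temperature step described above can be illustrated with a standard one-field toy potential, V(phi, T) = D*(T^2 - T0^2)*phi^2 - E*T*phi^3 + (lambda/4)*phi^4, whose minima become degenerate at Tc = T0/sqrt(1 - E^2/(lambda*D)). The sketch below is hand-rolled (it does not use the CosmoTransitions API) and finds Tc by bisection:

```python
import numpy as np

D, Ecub, lam, T0 = 0.1, 0.05, 0.1, 100.0

def V(phi, T):
    """One-field finite-temperature toy potential."""
    return D * (T * T - T0 * T0) * phi**2 - Ecub * T * phi**3 + 0.25 * lam * phi**4

def critical_temperature(t_lo=100.0, t_hi=200.0, tol=1e-8):
    """Bisect for the temperature at which the broken-phase minimum
    becomes degenerate with the symmetric one at phi = 0 (where V = 0)."""
    phi = np.linspace(1e-3, 300.0, 6001)
    for _ in range(200):
        mid = 0.5 * (t_lo + t_hi)
        if V(phi, mid).min() < 0.0:
            t_lo = mid          # broken phase still deeper: below Tc
        else:
            t_hi = mid
        if t_hi - t_lo < tol:
            break
    return 0.5 * (t_lo + t_hi)

tc = critical_temperature()     # analytic value: 100/sqrt(1 - 0.05**2/(0.1*0.1))
```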

  5. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is used to obtain the temperature errors under various environmental conditions, and a temperature error correction equation is then obtained by fitting the CFD results using a genetic algorithm. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform were characterized in the same environment for intercomparison, with the aspirated platform serving as the air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected and measured results are 0.008 °C and 0.01 °C, respectively. The correction equation thus reduces the temperature error of the low measurement error temperature sensor by approximately 93.8%. The proposed sensor may help provide relatively accurate air temperature observations.
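
    The fitting step can be sketched with a toy elitist genetic algorithm; the correction-equation form, the data and all parameters below are hypothetical, not those of the paper:

```python
import numpy as np

def ga_fit(S, v, err, seed=0, pop=60, gens=150):
    """Fit err ~ a + b*S*exp(-c*v) with a toy elitist genetic algorithm
    (Gaussian mutation around the incumbent best, decaying step size)."""
    rng = np.random.default_rng(seed)
    lo = np.array([0.0, 0.0, 0.0])      # search box for (a, b, c)
    hi = np.array([1.0, 0.01, 2.0])

    def loss(p):
        pred = p[0] + p[1] * S * np.exp(-p[2] * v)
        return float(np.mean((pred - err) ** 2))

    best = min(rng.uniform(lo, hi, size=(pop, 3)), key=loss)
    for g in range(gens):
        sigma = (hi - lo) * 0.1 * (1.0 - g / gens)  # decaying mutation width
        children = np.clip(best + rng.normal(0.0, sigma, size=(pop, 3)), lo, hi)
        cand = min(children, key=loss)
        if loss(cand) < loss(best):
            best = cand                             # elitism: keep the best ever
    return best

# Hypothetical data: solar radiation S, airflow speed v, CFD temperature error
S = np.repeat(np.linspace(100.0, 1000.0, 8), 5)
v = np.tile(np.linspace(0.5, 5.0, 5), 8)
err = 0.2 + 0.001 * S * np.exp(-0.5 * v)            # known ground truth
a, b, c = ga_fit(S, v, err)
```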

  6. Analysis of the transformations temperatures of helicoidal Ti-Ni actuators using computational numerical methods

    Directory of Open Access Journals (Sweden)

    Carlos Augusto do N. Oliveira

    2013-01-01

    The development of shape memory actuators has enabled noteworthy applications in mechanical engineering, robotics, aerospace, the oil industry and medicine. These applications target miniaturization and making full use of available space. This article analyses a Ti-Ni shape memory actuator used as part of a flow control system. A Ti-Ni spring actuator was subjected to thermomechanical training, and parameters such as transformation temperature, thermal hysteresis and shape memory effect performance were investigated. These parameters are important for understanding the behavior of the actuator with respect to the martensitic phase transformation during the heating and cooling cycles it undergoes in service. Multiple regression was used as a computational tool for analysing the data in order to simulate and predict the stress and cycle results for which experimental data were not available. The results obtained from the training cycles enable the actuators to be characterized and the numerical simulation to be validated.

  7. A hybrid computation method for determining fluctuations of temperature in branched structures

    International Nuclear Information System (INIS)

    Czomber, L.

    1982-01-01

    A hybrid computation method for determining temperature fluctuations at discrete points of slab-like geometries is developed on the basis of a new formulation of the finite difference method. For this purpose, the new finite difference method is combined with an exact solution of the heat equation in the domain of the Laplace transformation. Whereas the exact solution can be applied to arbitrarily large regions, the finite difference formulation is used for structural regions that need finer discretization. The boundary conditions of the exact solution are replaced by finite difference terms for the boundary residual flow or an internal heat source, depending on the problem. The resulting system of conditional equations contains only the node parameters of the finite difference method. (orig.) [de
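
    The finite-difference side of such a hybrid can be sketched with a single explicit FTCS step for the 1-D heat equation (plain finite differences only; the Laplace-domain exact solution of the paper is not reproduced here):

```python
def ftcs_step(u, r):
    """One explicit (forward-time, centred-space) step of u_t = alpha * u_xx.

    r = alpha * dt / dx**2 must satisfy r <= 0.5 for stability;
    the end temperatures are held fixed (Dirichlet boundaries).
    """
    if r > 0.5:
        raise ValueError("unstable: require r <= 0.5")
    inner = [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
             for i in range(1, len(u) - 1)]
    return [u[0]] + inner + [u[-1]]

profile = [0.0, 0.0, 1.0, 0.0, 0.0]   # isolated hot node
after = ftcs_step(profile, 0.25)      # heat spreads to the neighbours
```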

  8. Analysis of a computational benchmark for a high-temperature reactor using SCALE

    International Nuclear Information System (INIS)

    Goluoglu, S.

    2006-01-01

    Several proposed advanced reactor concepts require methods to address effects of double heterogeneity. In doubly heterogeneous systems, heterogeneous fuel particles in a moderator matrix form the fuel region of the fuel element and thus constitute the first level of heterogeneity. Fuel elements themselves are also heterogeneous with fuel and moderator or reflector regions, forming the second level of heterogeneity. The fuel elements may also form regular or irregular lattices. A five-phase computational benchmark for a high-temperature reactor (HTR) fuelled with uranium or reactor-grade plutonium has been defined by the Organization for Economic Cooperation and Development, Nuclear Energy Agency (OECD NEA), Nuclear Science Committee, Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles. This paper summarizes the analysis results using the latest SCALE code system (to be released in CY 2006 as SCALE 5.1). (authors)

  9. A computer model to predict temperatures and gas flows during AGR fuel handling

    International Nuclear Information System (INIS)

    Bishop, D.C.; Bowler, P.G.

    1986-01-01

    The paper describes the development of a comprehensive computer model (HOSTAGE) for the Heysham II/Torness AGRs to predict temperature transients for all the important components during normal and fault conditions. It models not only the charge and discharge of fuel from an on-load reactor but also follows the fuel through the rest of the fuel route until it is dismantled. The main features of the physical model of gas and heat flow are described. Experimental results are used where appropriate, and an indication is given of how HOSTAGE's predictions correlate with operating AGR reactors. The role of HOSTAGE in the Heysham II/Torness safety case is briefly discussed. (author)

  10. Thermal models of buildings. Determination of temperatures, heating and cooling loads. Theories, models and computer programs

    Energy Technology Data Exchange (ETDEWEB)

    Kaellblad, K

    1998-05-01

    The need to estimate indoor temperatures, heating or cooling loads and energy requirements for buildings arises at many stages of a building's life cycle, e.g. at the early layout stage, during the design of the building and in planning energy retrofits. Another purpose is to meet the authorities' requirements given in building codes. All these situations require good calculation methods. The main purpose of this report is to present the author's work on problems related to thermal models and calculation methods for the determination of temperatures and heating or cooling loads in buildings. Thus the major part of the report deals with the treatment of solar radiation in glazing systems, the shading of solar and sky radiation, and the computer program JULOTTA used to simulate the thermal behavior of rooms and buildings. Other parts of thermal models of buildings are discussed more briefly and are included in order to give an overview of existing problems and available solutions. A brief presentation of how thermal models can be built up is also given, and it is hoped that the report can be useful as an introduction to this part of building physics, as well as during the development of calculation methods and computer programs. The report may also serve as a help for users of energy-related programs: independent of which method or program a user chooses to work with, it is his or her own responsibility to understand the limits of the tool, else wrong conclusions may be drawn from the results. 52 refs, 22 figs, 4 tabs
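
    The simplest indoor temperature model, a single thermal resistance and capacitance (1R1C), already shows the structure such programs build on (a generic sketch, unrelated to JULOTTA's actual model):

```python
def simulate_room(t_init, t_out, R, C, Q, dt, steps):
    """Forward-Euler integration of C * dT/dt = (t_out - T)/R + Q.

    R : envelope thermal resistance [K/W]
    C : lumped heat capacity of the room [J/K]
    Q : internal + heating gains [W]
    Returns the indoor temperature trace over the simulation.
    """
    T, trace = t_init, [t_init]
    for _ in range(steps):
        T += dt * ((t_out - T) / R + Q) / C
        trace.append(T)
    return trace

# With constant inputs the temperature settles at t_out + Q*R = 0 + 2000*0.005 = 10
trace = simulate_room(20.0, 0.0, R=0.005, C=1.0e7, Q=2000.0, dt=60.0, steps=5000)
```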

  11. Computational characterization of ignition regimes in a syngas/air mixture with temperature fluctuations

    KAUST Repository

    Pal, Pinaki

    2016-07-27

Auto-ignition characteristics of compositionally homogeneous reactant mixtures in the presence of thermal non-uniformities and turbulent velocity fluctuations were computationally investigated. The main objectives were to quantify the observed ignition characteristics and numerically validate the theory of the turbulent ignition regime diagram recently proposed by Im et al. 2015 [29], which provides a framework to predict ignition behavior a priori based on the thermo-chemical properties of the reactant mixture and the initial flow and scalar field conditions. Ignition regimes were classified into three categories: weak (where deflagration is the dominant mode of fuel consumption), reaction-dominant strong, and mixing-dominant strong (where volumetric ignition is the dominant mode of fuel consumption). Two-dimensional (2D) direct numerical simulations (DNS) of auto-ignition in a lean syngas/air mixture with uniform mixture composition at high-pressure, low-temperature conditions were performed in a fixed volume. The initial conditions considered two-dimensional isotropic velocity spectra, temperature fluctuations and localized thermal hot spots. A number of parametric test cases, obtained by varying the characteristic turbulent Damköhler and Reynolds numbers, were investigated. The evolution of the auto-ignition phenomena, pressure rise, and heat release rate were analyzed. In addition, combustion mode analysis based on front propagation speed and computational singular perturbation (CSP) was applied to characterize the auto-ignition phenomena. All results supported that the observed ignition behaviors were consistent with the expected ignition regimes predicted by the theory of the regime diagram. This work provides new high-fidelity data on syngas ignition characteristics over a broad range of conditions and demonstrates that the regime diagram serves as predictive guidance in understanding the various physical and chemical mechanisms controlling auto-ignition.
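The regime classification rests on comparing a turbulence time scale with the homogeneous ignition delay. The sketch below computes a turbulent Damköhler number of this general kind; the threshold values and the input numbers are hypothetical placeholders that merely illustrate the idea, not the actual criteria or conditions of Im et al.

```python
# Illustrative turbulent Damkohler number: Da_t = tau_turb / tau_ign.
# Small Da_t means mixing is fast relative to chemistry; large Da_t
# means chemistry outruns mixing.  Thresholds here are made up.

def turbulent_damkohler(l_t, u_rms, tau_ign):
    """l_t: integral length [m]; u_rms: velocity fluctuation [m/s];
    tau_ign: homogeneous ignition delay [s]."""
    tau_turb = l_t / u_rms        # eddy-turnover time
    return tau_turb / tau_ign

def classify(da_t, weak_below=0.5, strong_above=2.0):
    """Toy classification inspired by (but not identical to) the
    regime-diagram categories."""
    if da_t < weak_below:
        return "mixing-dominant"
    if da_t > strong_above:
        return "reaction-dominant"
    return "intermediate"

da = turbulent_damkohler(l_t=2e-3, u_rms=5.0, tau_ign=4e-3)
print(da, classify(da))
```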

  12. Computation of classical triton burnup with high plasma temperature and current

    International Nuclear Information System (INIS)

    Batistoni, P.

    1990-09-01

For comparison with experiment, the expected production of 14-MeV neutrons from the burnup of tritons produced in the d(d,t)p reaction must be computed. An effort was undertaken to compare in detail the computer codes used for this purpose at TFTR and JET. The calculation of the confined fraction of tritons by the different codes agrees to within a few percent. The high electron temperature in the experiments has raised the critical energy of the tritons that are slowing down to near or above the peak of the D-T reactivity, making the ion drag terms more important. When the different codes use the same slowing-down formulas, the calculated burnup agreed to within 6% for a case where orbit effects are expected to be small. Results from codes with and without the effects of finite radial orbit excursions were then compared for two test cases. For medium- to high-current discharges the finite-radius effects are only of order 10%. A new version of the TFTR burnup code using an implicit Fokker-Planck solution was written to include the effects of energy diffusion and charge exchange. These effects change the time-integrated yields by only a few percent, but can significantly affect the instantaneous rates in time. Significant populations of hot ions can affect the fusion reactivity, and this effect was also studied. In particular, the d(d,p)t rate can be 10%-15% less than the d(d,3He)n rate, which is usually used as a direct monitor of the triton source. Finally, a finite particle confinement time for the thermalized tritons can increase the apparent ''burnup'' either if there is a high thermal deuteron temperature or if there exists a significant beam deuteron density

  13. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

As is well known, the passive THz camera is a very promising tool for security applications: it allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. The efficiency of a passive THz camera depends on its temperature resolution. This characteristic determines what can be detected: the minimal size of a concealed object, the maximal detection distance, and the image detail. One possible way to enhance image quality is computer processing of the image. By computer processing of THz images of objects concealed on the human body, the image can be improved many times over; consequently, the instrumental resolution of such a device may be increased without any additional engineering effort. We demonstrate new possibilities for seeing clothing details that the raw images produced by the THz cameras do not reveal. We achieve good image quality by applying various spatial filters, with the aim of demonstrating that the processed images do not depend on the particular mathematical operations used. This result demonstrates the feasibility of detecting such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).
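As a toy example of the kind of spatial filtering described, a 3x3 median filter removes isolated hot pixels from a frame. The "image" here is a small invented array of intensities, not real THz data.

```python
# 3x3 median filter, a simple spatial filter of the sort applied to
# noisy passive-THz frames.  Pure-Python sketch on a synthetic image
# represented as a list of lists of intensities.

from statistics import median

def median_filter3(img):
    """Return a copy of `img` with each interior pixel replaced by the
    median of its 3x3 neighbourhood (borders are left unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# A single hot-pixel outlier in a flat background is suppressed.
frame = [[10] * 5 for _ in range(5)]
frame[2][2] = 200
print(median_filter3(frame)[2][2])
```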

  14. Kinetics of the high-temperature combustion reactions of dibutylether using composite computational methods

    KAUST Repository

    Rachidi, Mariam El

    2015-01-01

This paper investigates the high-temperature combustion kinetics of n-dibutyl ether (n-DBE), including unimolecular decomposition, H-abstraction by H, H-migration, and C–C/C–O β-scission reactions of the DBE radicals. The energetics of H-abstraction by OH radicals is also studied. All rates are determined computationally using the CBS-QB3 and G4 composite methods in conjunction with conventional transition state theory. The B3LYP/6-311++G(2df,2pd) method is used to optimize the geometries and calculate the frequencies of all reactive species and transition states for use in ChemRate. Some of the rates calculated in this study vary markedly from those obtained for similar reactions of alcohols or alkanes, particularly those pertaining to unimolecular decomposition and β-scission at the α–β C–C bond. These variations show that analogies to alkanes and alcohols are, in some cases, inappropriate means of estimating the reaction rates of ethers. This emphasizes the need to establish valid rates through computation or experimentation. Such studies are especially important given that ethers exhibit promising biofuel and fuel additive characteristics. © 2014.
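The conventional transition-state-theory expression evaluated from such composite-method barriers has the familiar Eyring form. The sketch below uses a made-up barrier height to show the steep temperature dependence; it is not one of the paper's computed rates.

```python
# Conventional (Eyring) transition-state-theory rate constant for a
# unimolecular step: k = (kB*T/h) * exp(-DeltaG_act / (R*T)).
# The 120 kJ/mol barrier below is an illustrative placeholder.

import math

KB = 1.380649e-23    # Boltzmann constant [J/K]
H  = 6.62607015e-34  # Planck constant [J*s]
R  = 8.314462618     # gas constant [J/(mol*K)]

def tst_rate(dG_act_kjmol, T):
    """Unimolecular TST rate constant in 1/s for an activation free
    energy in kJ/mol at temperature T in K."""
    return (KB * T / H) * math.exp(-dG_act_kjmol * 1e3 / (R * T))

# For a fixed barrier, the rate rises by orders of magnitude between
# 800 K and 1200 K -- the regime relevant to high-temperature combustion.
print(f"{tst_rate(120.0, 800.0):.3e}")
print(f"{tst_rate(120.0, 1200.0):.3e}")
```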

  15. Room temperature linelists for CO2 asymmetric isotopologues with ab initio computed intensities

    Science.gov (United States)

    Zak, Emil J.; Tennyson, Jonathan; Polyansky, Oleg L.; Lodi, Lorenzo; Zobov, Nikolay F.; Tashkun, Sergei A.; Perevalov, Valery I.

    2017-12-01

The present paper reports room temperature line lists for six asymmetric isotopologues of carbon dioxide: 16O12C18O (628), 16O12C17O (627), 16O13C18O (638), 16O13C17O (637), 17O12C18O (728) and 17O13C18O (738), covering the range 0-8000 cm-1. Variational rotation-vibration wavefunctions and energy levels are computed using the DVR3D software suite and a high quality semi-empirical potential energy surface (PES), followed by computation of intensities using an ab initio dipole moment surface (DMS). A theoretical procedure for quantifying the sensitivity of line intensities to minor distortions of the PES/DMS allows our theoretical model to be critically evaluated. Several recent high quality measurements and theoretical approaches are discussed to benchmark our results against the most accurate available data. The thesis that accuracy is transferable among different isotopologues when a mass-independent PES is used is supported by several examples. We therefore conclude that the majority of line intensities for strong bands are predicted with sub-percent accuracy. Accurate line positions are generated using an effective Hamiltonian constructed from the latest experiments. This study completes the list of relevant isotopologues of carbon dioxide; these line lists are available for remote sensing studies and inclusion in databases.

  16. Computing the distribution of return levels of extreme warm temperatures for future climate projections

    Energy Technology Data Exchange (ETDEWEB)

    Pausader, M.; Parey, S.; Nogaj, M. [EDF/R and D, Chatou Cedex (France); Bernie, D. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-03-15

In order to take into account uncertainties in future climate projections there is a growing demand for probabilistic projections of climate change. This paper presents a methodology for producing such a probabilistic analysis of future temperature extremes. The 20- and 100-year return levels are obtained from those of the normalized variable and the changes in mean and standard deviation given by climate models for the desired future periods. Uncertainty in the future change of these extremes is quantified using a multi-model ensemble and a perturbed physics ensemble. The probability density functions of future return levels are computed at a representative location from the joint probability distribution of mean and standard deviation changes given by the two combined ensembles of models. For the studied location, the 100-year return level at the end of the century is lower than 41 °C with 80% confidence. Then, as the number of model simulations is too low to compute a reliable distribution, two techniques proposed in the literature (local pattern scaling and ANOVA) have been used to infer the changes in mean and standard deviation for the combinations of RCM and GCM which have not been run. The ANOVA technique leads to better results for the reconstruction of the mean changes, whereas both methods fail to correctly infer the changes in standard deviation. As the standard deviation change has a major impact on the return level change, there is a need to improve the models and the different techniques regarding variance changes. (orig.)
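The "normalize, then re-dimensionalize with projected mean and standard deviation changes" idea can be sketched with a Gumbel (extreme value type I) return level. The location, scale and projected changes below are invented numbers for illustration, not the paper's fitted values.

```python
# Gumbel return level: x_T = mu - sigma * ln(-ln(1 - 1/T)).
# A future return level is obtained by shifting the location (mean
# change) and rescaling the spread (standard deviation change).
# All numbers below are hypothetical.

import math

def gumbel_return_level(T_years, mu, sigma):
    """T-year return level of a Gumbel distribution with location mu
    and scale sigma (same units as the variable, e.g. deg C)."""
    return mu - sigma * math.log(-math.log(1.0 - 1.0 / T_years))

present = gumbel_return_level(100, mu=33.0, sigma=1.8)
future  = gumbel_return_level(100, mu=33.0 + 2.5, sigma=1.8 * 1.2)
print(round(present, 1), round(future, 1))
```

Note that the return level grows with both the location and the scale, which is why the paper stresses that errors in the inferred standard deviation change dominate the return-level uncertainty.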

  17. Natural Carbonized Sugar as a Low-Temperature Ammonia Sensor Material: Experimental, Theoretical, and Computational Studies.

    Science.gov (United States)

    Ghule, Balaji G; Shaikh, Shoyebmohamad; Ekar, Satish U; Nakate, Umesh T; Gunturu, Krishna Chaitanya; Shinde, Nanasaheb M; Naushad, Mu; Kim, Kwang Ho; O'Dwyer, Colm; Mane, Rajaram S

    2017-12-13

Carbonized sugar (CS) has been synthesized via microwave-assisted carbonization of market-quality tabletop sugar, bearing in mind the advantages of this synthesis method: it is simple, cost-effective, and eco-friendly. The as-prepared CS has been characterized for its morphology, phase purity, type of porosity, pore-size distribution, and so on. The gas-sensing properties of CS for various oxidizing and reducing gases are demonstrated at ambient temperature, where we observe good selectivity toward liquid ammonia among other gases. The highest ammonia response (50%) of a CS-based sensor was noted at 80 °C for a 100 ppm concentration. The response and recovery times of the CS sensor are 180 and 216 s, respectively. The ammonia-sensing behavior is explained through a plausible theoretical mechanism, which is further supported by computational modeling performed using density functional theory. The effect of relative humidity on the CS sensor has also been studied at ambient temperature, showing responses of 16% and 62% at the minimum (20%) and maximum (100%) relative humidity, respectively.

  18. PUMP DESIGN AND COMPUTATIONAL FLUID DYNAMIC ANALYSIS FOR HIGH TEMPERATURE SULFURIC ACID TRANSFER SYSTEM

    Directory of Open Access Journals (Sweden)

    JUNG-SIK CHOI

    2014-06-01

In this study, we proposed a newly designed sulfuric acid transfer system for the sulfur-iodine (SI) thermochemical cycle. The proposed sulfuric acid transfer system was evaluated using a computational fluid dynamics (CFD) analysis to investigate its thermodynamic/hydrodynamic characteristics and material properties. This analysis was conducted to obtain reliable continuous operation parameters; in particular, a thermal analysis was performed on the bellows box and bellows at various amplitudes and frequencies (0.1, 0.5, and 1.0 Hz). However, the high temperatures and strongly corrosive operating conditions of the current sulfuric acid system present challenges with respect to the structural materials of the transfer system. To resolve this issue, we designed a novel transfer system using polytetrafluoroethylene (PTFE, Teflon®) as a bellows material for the transfer of sulfuric acid, and carried out a CFD analysis of the design. The CFD results indicated that the maximum applicable temperature of PTFE is about 533 K (260 °C), even though its melting point is around 600 K. This result implies that PTFE is a suitable material for the sulfuric acid transfer system. The CFD simulations also confirmed that the sulfuric acid transfer system was designed properly for this particular investigation.

  19. Computational design and characterization of a temperature-sensitive plasmid replicon for gram positive thermophiles

    Directory of Open Access Journals (Sweden)

    Olson Daniel G

    2012-05-01

Background: Temperature-sensitive (Ts) plasmids are useful tools for genetic engineering, but there are currently none compatible with the gram positive, thermophilic, obligate anaerobe Clostridium thermocellum. Traditional mutagenesis techniques yield Ts mutants at a low frequency and therefore require the development of high-throughput screening protocols, which are also not available for this organism. Recently there has been progress in the development of computer algorithms which can predict Ts mutations. Most plasmids currently used for genetic modification of C. thermocellum are based on the replicon of plasmid pNW33N, which replicates using the RepB replication protein. To address this problem, we set out to create a Ts plasmid by mutating the gene coding for the RepB replication protein using an algorithm designed by Varadarajan et al. (1996) for predicting Ts mutants based on the amino-acid sequence of the protein. Results: A library of 34 mutant plasmids was designed, synthesized and screened, resulting in 6 mutants which exhibited a Ts phenotype. Of these 6, the one with the most temperature-sensitive phenotype (M166A) was compared with the original plasmid: it exhibited lower stability at 48°C and was completely unable to replicate at 55°C. Conclusions: The plasmid described in this work could be useful in future efforts to genetically engineer C. thermocellum, and the method used to generate this plasmid may be useful for others trying to make Ts plasmids.

  20. Computational estimation of decline in sweating in the elderly from measured body temperatures and sweating for passive heat exposure

    International Nuclear Information System (INIS)

    Hirata, Akimasa; Nomura, Tomoki; Laakso, Ilkka

    2012-01-01

Several studies have reported differences in heat tolerance between younger and older adults, which may be attributable to a decline in the sweating rate. One study suggested the hypothesis that the dominant factor causing the decline in sweating was a decline in thermal sensitivity due to a weaker signal from the periphery to the regulatory centres. However, no quantitative investigation of the skin temperature threshold for activating sweating has been conducted in previous studies. In this study, we developed a computational code to simulate the time evolution of the temperature variation and sweating in realistic human models under heat exposure, in part by comparing the computational results with measured data from younger and older adults. Based on our computational results, the difference in the threshold temperatures for activating the thermophysiological response, especially sweating, is examined between older and younger adults. The threshold for activating sweating in older individuals was found to be about 1.5 °C higher than that in younger individuals. However, our computation did not suggest that it was possible to evaluate central alteration with ageing by comparing the computation with the measurements for passive heat exposure, since the sweating rate is marginally affected by core temperature elevation, at least for the scenarios considered here. The computational technique developed herein is useful for understanding the thermophysiological response of older individuals from measured data. (note)
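A threshold-and-gain sweating law of the kind used in thermophysiological codes makes the study's finding concrete: raising the activation threshold by about 1.5 °C reduces sweating at the same skin temperature. The gain and threshold values below are illustrative assumptions, not the paper's fitted parameters.

```python
# Toy threshold-gain sweating model: zero below a skin-temperature
# threshold, linear above it.  Gains/thresholds are hypothetical.

def sweat_rate(t_skin, threshold, gain=80.0):
    """Sweat rate [g/(m^2*h)] for skin temperature t_skin [C]."""
    return max(0.0, gain * (t_skin - threshold))

# At the same skin temperature, the higher threshold attributed to an
# older adult yields less (here, no) sweating.
younger = sweat_rate(35.0, threshold=34.0)
older   = sweat_rate(35.0, threshold=35.5)
print(younger, older)
```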

  1. Computer simulation of the effect of temperature on pH.

    Science.gov (United States)

    Kipp, J E; Schuck, D F

    1995-11-01

    The effect of temperature on solution pH was simulated by computer (program PHTEMP). We have determined that the change in pH due to shifts in acid-base equilibria [delta pH = pH(60 degrees C) - pH(25 degrees C)] can be substantial for compounds such as aliphatic amines that have high enthalpies for acid dissociation. This is of particular significance during elevated temperature experiments in which changes in the pKa values of formulation components, and hence the solution pH, can accelerate decomposition as compared to those formulations where sensitive functionality is absent. PHTEMP afforded the following results at initial pH = 7 (25 degrees C): (a) 0.1 M triethylamine (delta H zero = 10.4 kcal/mol) delta pH approximately -0.8; (b) 0.1 M acetic acid (delta H zero = -0.1 kcal/mol) delta pH approximately 0; (c) 0.1 M sulfuric acid (delta H zero 1 = -12 kcal/mol; delta H zero 2 = -5.4 kcal/mol) delta pH approximately -0.4. Solutions of general pharmaceutical interest were also studied and included a 12-component amino acid mixture, 0.1 M glycine, and 0.1 M triethylamine in either 0.02 M citric acid or 0.05 M TRIS buffer. In each case the pH change with temperature was dependent on the concentrations of components, the enthalpies for each acid dissociation, and the starting pH. At lower pH ( 9). These results are interpreted as the effect of a relative change in hydronium ion activity, delta H+/H+(initial), due to temperature-induced shifts in equilibria (acid dissociation, water autoprotolysis). This relative change must become larger as H+ decreases (pH increases). The output of PHTEMP was experimentally verified with 0.1 M glycine and with a multiple component amino acid solution. In both cases, agreement with prediction was excellent. The results of this investigation underscore the need to critically review formulation choices for both thermodynamic and traditional kinetic effects on the resulting product stability.
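The equilibrium shift underlying a program like PHTEMP follows the integrated van't Hoff relation. The sketch below reproduces the roughly -0.8 unit pKa shift for triethylamine between 25 °C and 60 °C using the enthalpy quoted in the abstract; the reference pKa of 10.75 is an assumed textbook value, and the simple integrated form ignores the temperature dependence of delta H itself.

```python
# Van't Hoff shift of a pKa with temperature:
# ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), so pKa2 = pKa1 - ln(K2/K1)/ln(10).
# Triethylamine dH = 10.4 kcal/mol is taken from the abstract; the
# reference pKa is an assumed literature value.

import math

R_KCAL = 1.987204e-3   # gas constant [kcal/(mol*K)]

def pKa_at(pKa_ref, dH_kcal, T_ref, T):
    """pKa at temperature T [K] given its value at T_ref [K] and the
    acid-dissociation enthalpy dH [kcal/mol] (assumed constant)."""
    dlnK = -(dH_kcal / R_KCAL) * (1.0 / T - 1.0 / T_ref)
    return pKa_ref - dlnK / math.log(10.0)

# A positive dH lowers the pKa on heating: ~10.75 -> ~9.95 at 60 C,
# i.e. a shift of about -0.8, consistent with the abstract.
print(round(pKa_at(10.75, 10.4, 298.15, 333.15), 2))
```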

  2. A thermal analysis computer programme package for the estimation of KANUPP coolant channel flows and outlet header temperature distribution

    International Nuclear Information System (INIS)

    Siddiqui, M.S.

    1992-06-01

COFTAN is a computer code for the actual estimation of flows and temperatures in the coolant channels of a pressure-tube heavy water reactor. The code is being used for a Candu-type reactor with coolant flowing through 208 channels. The simulation model first performs a detailed calculation of the flux and power distribution based on a two-group diffusion theory treatment on a three-dimensional mesh; the channel powers, resulting from the summation of the eleven bundle powers in each of the 208 channels, are then employed together with the channel outlet temperatures monitored by digital computers to make an actual estimation of the coolant flows. By using the design flows in individual channels and applying a correction factor based on control-room-monitored flows in eight selected channels, the code can also provide a reserve computational tool for estimating individual channel outlet temperatures, thus providing an alternate arrangement for checking Rads performance. 42 figs. (Orig./A.B.)
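Backing a channel flow out of its power and temperature rise is a steady-state energy balance. The sketch below shows the relation with invented numbers (the power, temperatures and heavy-water heat capacity are illustrative assumptions, not KANUPP data).

```python
# Steady-state channel energy balance: Q = m_dot * cp * (T_out - T_in),
# rearranged to estimate the mass flow.  All values are hypothetical.

def channel_flow(power_w, t_in, t_out, cp=4400.0):
    """Coolant mass flow [kg/s] from channel power [W] and inlet/outlet
    temperatures [C]; cp ~ 4.4 kJ/(kg*K) assumed for heavy water."""
    return power_w / (cp * (t_out - t_in))

# A 3 MW channel heating coolant from 250 C to 293 C:
print(round(channel_flow(3.0e6, 250.0, 293.0), 2))
```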

  3. Computational study of chain transfer to monomer reactions in high-temperature polymerization of alkyl acrylates.

    Science.gov (United States)

    Moghadam, Nazanin; Liu, Shi; Srinivasan, Sriraj; Grady, Michael C; Soroush, Masoud; Rappe, Andrew M

    2013-03-28

This article presents a computational study of chain transfer to monomer (CTM) reactions in self-initiated high-temperature homopolymerization of alkyl acrylates (methyl, ethyl, and n-butyl acrylate). Several mechanisms of CTM are studied. The effects of the length of live polymer chains and the type of monoradical that initiated the live polymer chains on the energy barriers and rate constants of the involved reaction steps are investigated theoretically. All calculations are carried out using density functional theory. Three types of hybrid functionals (B3LYP, X3LYP, and M06-2X) and four basis sets (6-31G(d), 6-31G(d,p), 6-311G(d), and 6-311G(d,p)) are applied to predict the molecular geometries of the reactants, products and transition states, and energy barriers. Transition state theory is used to estimate rate constants. The results indicate that abstraction of a hydrogen atom (by live polymer chains) from the methyl group in methyl acrylate, the methylene group in ethyl acrylate, and methylene groups in n-butyl acrylate are the most likely mechanisms of CTM. Also, the rate constants of CTM reactions calculated using M06-2X are in good agreement with those estimated from polymer sample measurements using macroscopic mechanistic models. The rate constant values do not change significantly with the length of live polymer chains. Abstraction of a hydrogen atom by a tertiary radical has a higher energy barrier than abstraction by a secondary radical, which agrees with experimental findings. The calculated and experimental NMR spectra of dead polymer chains produced by CTM reactions are comparable. This theoretical/computational study reveals that CTM occurs most likely via hydrogen abstraction by live polymer chains from the methyl group of methyl acrylate and methylene group(s) of ethyl (n-butyl) acrylate.

  4. APPLICATION OF SOFT COMPUTING TECHNIQUES FOR PREDICTING COOLING TIME REQUIRED DROPPING INITIAL TEMPERATURE OF MASS CONCRETE

    Directory of Open Access Journals (Sweden)

    Santosh Bhattarai

    2017-07-01

Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making effective and efficient decisions on the temperature control plan in advance. The thermal properties of the concrete, the water cooling parameters and the construction parameter are the most influential factors involved in the process, and the relationships between these parameters are non-linear, complicated and not well understood. Some attempts have been made to understand and formulate the relationship taking account of the thermal properties of concrete and the cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, the water cooling parameters and the construction parameter, with the help of two soft computing techniques, namely the genetic programming (GP) software "Eureqa" and an artificial neural network (ANN). Relationships were developed from data available from a recently constructed high concrete double-curvature arch dam. The values of R for the relationship between the predicted and real cooling times from the GP and ANN models are 0.8822 and 0.9146, respectively. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory as compared to the measured data.

  5. Experiences in control system design aided by interactive computer programs: temperature control of the laser isotope separation vessel

    International Nuclear Information System (INIS)

    Gavel, D.T.; Pittenger, L.C.; McDonald, J.S.; Cramer, P.G.; Herget, C.J.

    1985-01-01

A robust control system has been designed to regulate temperature in a vacuum vessel. The thermodynamic process is modeled by a set of nonlinear, implicit differential equations. The control design and analysis task exercised many of the computer-aided control systems design software packages, including MATLAB, DELIGHT, and LSAP. The working environment is a VAX computer. Advantages and limitations of the software and environment, and the impact on the final controller design, are discussed

  6. Experiences in control system design aided by interactive computer programs: Temperature control of the laser isotope separation vessel

    Science.gov (United States)

    Gavel, D. T.; Pittenger, L. C.; McDonald, J. S.; Cramer, P. G.; Herget, C. J.

A robust control system has been designed to regulate temperature in a vacuum vessel. The thermodynamic process is modeled by a set of nonlinear, implicit differential equations. The control design and analysis task exercised many of the computer-aided control systems design software packages, including MATLAB, DELIGHT, and LSAP. The working environment is a VAX computer. Advantages and limitations of the software and environment, and the impact on the final controller design, are discussed.

  7. Evaluation of pore structures and cracking in cement paste exposed to elevated temperatures by X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kwang Yeom, E-mail: kimky@kict.re.kr [Korea Institute of Construction Technology, 283 Goyangdae-ro, Ilsanseo-gu, Goyang 411-712 (Korea, Republic of); Yun, Tae Sup, E-mail: taesup@yonsei.ac.kr [School of Civil and Environmental Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749 (Korea, Republic of); Park, Kwang Pil, E-mail: bamtol97@kict.re.kr [Korea Institute of Construction Technology, 283 Goyangdae-ro, Ilsanseo-gu, Goyang 411-712 (Korea, Republic of)

    2013-08-15

    When cement-based materials are exposed to the high temperatures induced by fire, which can rapidly cause temperatures of over 1000 °C, the changes in pore structure and density prevail. In the present study, mortar specimens were subjected to a series of increasing temperatures to explore the temperature-dependent evolution of internal pore structure. High-performance X-ray computed tomography (CT) was used to observe the evolution of temperature-induced discontinuities at the sub-millimeter level. X-ray diffraction (XRD) and scanning electron microscopy (SEM) were employed to investigate the cause of physical changes in the heated mortar specimens. Results exhibit the changes in pore structure caused by elevated temperatures, and thermally induced fractures. We discuss the progressive formation of thermally induced fracture networks, which is a prerequisite for spalling failure of cement-based materials by fire, based on visual observations of the 3D internal structures revealed by X-ray CT.

  8. Evaluation of pore structures and cracking in cement paste exposed to elevated temperatures by X-ray computed tomography

    International Nuclear Information System (INIS)

    Kim, Kwang Yeom; Yun, Tae Sup; Park, Kwang Pil

    2013-01-01

    When cement-based materials are exposed to the high temperatures induced by fire, which can rapidly cause temperatures of over 1000 °C, the changes in pore structure and density prevail. In the present study, mortar specimens were subjected to a series of increasing temperatures to explore the temperature-dependent evolution of internal pore structure. High-performance X-ray computed tomography (CT) was used to observe the evolution of temperature-induced discontinuities at the sub-millimeter level. X-ray diffraction (XRD) and scanning electron microscopy (SEM) were employed to investigate the cause of physical changes in the heated mortar specimens. Results exhibit the changes in pore structure caused by elevated temperatures, and thermally induced fractures. We discuss the progressive formation of thermally induced fracture networks, which is a prerequisite for spalling failure of cement-based materials by fire, based on visual observations of the 3D internal structures revealed by X-ray CT

  9. Computer modelling of high-temperature superconductors using an A-V formulation

    International Nuclear Information System (INIS)

    Ruiz-Alonso, D; Coombs, T; Campbell, A M

    2004-01-01

    Numerical methods for calculating the current and field distribution in high-temperature superconductors under non-uniform time-varying fields are being investigated. The highly non-linear behaviour of superconductors makes them difficult to analyse and computationally expensive. This non-linear behaviour is often accounted for through a non-linear E-J constitutive law. This paper proposes a fast method based on the finite element method to solve 2D and axially symmetric problems that contain superconducting materials. An E-J power law together with an A-V formulation is used to calculate the induction of currents in the superconductor due to time-varying external magnetic fields or forced transport current. Experimental data of a magnet-above-superconductor system is obtained in order to validate the model. In the experimental set-up a magnet is brought towards a superconducting puck at different speed rates and is also vibrated on top of it. The force between the magnet and the superconductor is measured and is found to vary with both time and frequency of excitation
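The non-linear constitutive relation at the heart of such models, the E-J power law, can be shown on its own; the parameter values below are typical illustrative choices, not the paper's.

```python
# E-J power law used as the superconductor's constitutive relation:
# E = E0 * (J/Jc)^n.  Jc, E0 and n below are illustrative values.

def e_field(J, Jc=1.0e8, E0=1.0e-4, n=25):
    """Electric field [V/m] for current density J [A/m^2], with the
    usual convention E(Jc) = E0 (the critical-current criterion)."""
    return E0 * (J / Jc) ** n

# The steep exponent makes E negligible just below Jc and very large
# just above it, approximating the critical-state model -- and it is
# this steepness that makes the finite element problem so stiff.
print(f"{e_field(0.9e8):.2e}  {e_field(1.1e8):.2e}")
```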

  10. Room temperature ionic liquids interacting with bio-molecules: an overview of experimental and computational studies

    Science.gov (United States)

    Benedetto, Antonio; Ballone, Pietro

    2016-03-01

We briefly review experimental and computational studies of room temperature ionic liquids (RTILs) interacting with important classes of biomolecules, including phospholipids, peptides and proteins, nucleic acids and carbohydrates. Most of these studies have been driven by interest in RTIL applications as solvents. Thus, the available experimental data cover primarily thermodynamic properties such as the reciprocal solubility of RTILs and biomolecules, as well as phase boundaries. Less extensive data are also available on transport properties such as diffusion and viscosity of homogeneous binary (RTIL/biomolecule) and ternary (RTIL/biomolecule/water) solutions. Most of the structural information at the atomistic level, of interest especially for biochemical, pharmaceutical and nanotechnology applications, has been made available by molecular dynamics simulations. The major exceptions are results from NMR and circular dichroism spectroscopy, selected neutron and X-ray scattering data, and recent neutron reflectometry measurements on lipid bilayers on surfaces, hydrated by water-RTIL solutions. A final section of our paper summarizes new developments in the field of RTILs based on amino acids, which combine the two main aspects of our discussion, i.e. ionic liquids and biomolecules.

  11. Computer modelling of high-temperature superconductors using an A-V formulation

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Alonso, D; Coombs, T; Campbell, A M [Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ (United Kingdom)

    2004-05-01

    Numerical methods for calculating the current and field distribution in high-temperature superconductors under non-uniform time-varying fields are being investigated. The highly non-linear behaviour of superconductors makes them difficult to analyse and computationally expensive. This non-linear behaviour is often accounted for through a non-linear E-J constitutive law. This paper proposes a fast method based on the finite element method to solve 2D and axially symmetric problems that contain superconducting materials. An E-J power law together with an A-V formulation is used to calculate the induction of currents in the superconductor due to time-varying external magnetic fields or forced transport current. Experimental data of a magnet-above-superconductor system is obtained in order to validate the model. In the experimental set-up a magnet is brought towards a superconducting puck at different speed rates and is also vibrated on top of it. The force between the magnet and the superconductor is measured and is found to vary with both time and frequency of excitation.
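The E-J power law named in this abstract can be sketched numerically; this is a minimal illustration in which E_c, J_c and the exponent n are typical assumed values, not the paper's fitted parameters.

```python
import numpy as np

# Hedged sketch of the E-J power law commonly used for HTS modelling:
#   E = E_c * (J / J_c)^n
# E_C, J_C and N below are illustrative, not taken from the paper.
E_C = 1e-4   # V/m, conventional electric-field criterion defining J_c
J_C = 1e8    # A/m^2, assumed critical current density
N = 25       # assumed power-law exponent (large N -> near critical-state)

def electric_field(j):
    """Electric field magnitude (V/m) for current density magnitude j (A/m^2)."""
    return E_C * (np.abs(j) / J_C) ** N

# At J = J_c the field equals the criterion E_c by construction.
assert np.isclose(electric_field(J_C), E_C)
# The steep nonlinearity: 10% over-current raises E by (1.1)^25, about 10.8x.
assert electric_field(1.1 * J_C) / E_C > 10.0
```

The large exponent is what makes these problems "highly non-linear and computationally expensive" as the abstract notes: a small over-current produces an order-of-magnitude jump in the electric field.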

  12. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final report

    International Nuclear Information System (INIS)

    Du, Q.

    1997-01-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. The work so far has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models they have considered include a time dependent Ginzburg-Landau model, a variable thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations.

  13. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final technical report

    International Nuclear Information System (INIS)

    Gunzburger, M.D.; Peterson, J.S.

    1998-01-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. Their work has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models the authors have considered include a time dependent Ginzburg-Landau model, a variable thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations.
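The Ginzburg-Landau framework named in this record starts from a free-energy functional; in schematic textbook form (Gaussian units, not necessarily the report's exact notation) it reads:

```latex
F[\psi,\mathbf{A}] \;=\; \int \Big[\, \alpha\,|\psi|^2
  \;+\; \tfrac{\beta}{2}\,|\psi|^4
  \;+\; \tfrac{1}{2m^*}\Big|\Big(-i\hbar\nabla - \tfrac{e^*}{c}\,\mathbf{A}\Big)\psi\Big|^2
  \;+\; \tfrac{1}{8\pi}\,|\nabla\times\mathbf{A}|^2 \,\Big]\, dV ,
```

where \(\psi\) is the complex order parameter and \(\mathbf{A}\) the vector potential; the time-dependent model mentioned in the abstract follows from gradient-flow dynamics of this functional, schematically \(\gamma\,\partial_t\psi = -\,\delta F/\delta\psi^*\).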

  14. A Computational-Experimental Study of Plasma Processing of Carbides at High Temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Bronson, Arturo [Univ. of Texas, El Paso, TX (United States); Kumar, Vinod [Univ. of Texas, El Paso, TX (United States)

    2016-02-01

    The effects of plasma on carbides were computationally investigated for the ultimate development of adherent, dense scales such as Al2O3-TiO2 systems toward oxidation resistance at 1873 K. An alumina scale forms on the carbide because of the significant stability of Al2O3 as the outer scale adjacent to the gas phase compared to rutile, though TiO and Ti2O3 may form as components of an inner layer of a complicated scale. A sequence of surface reactions starting with the adsorption of oxygen on the surface was applied to experimental data from Donnelly’s research group, who reported the adsorption of O2 in a plasma atmosphere as a function of power. The adsorbed oxygen (Oad), as the rate-determining step, controlled the cascading reaction sequence of the adsorbed species AlO, AlO2 and AlO3, as indicated in the present study. The rate of oxygen adsorption also depends on the ratio of the final to initial adsorbed oxygen as a function of the oxygen potential. In a secondary research thrust, Ti3AlC was synthesized and subsequently oxidized. A 39Ti-14Al-47TiC (in wt%) mixture was equilibrated by using a pseudo-isopiestic technique to ultimately form an aggregate of Ti3AlC, Ti2AlC and TiC phases. The aggregate was primarily composed of Ti3AlC with minor amounts of Ti2AlC and TiC, as determined by X-ray diffraction analysis. The Ti3AlC/Ti2AlC/TiC aggregate was subsequently oxidized at 1873 K to form a scale composed of an outer layer of Al2O3-TiO2-Al2TiO5 with an inner layer consisting of TiO-Al2O3-Al4CO3. The measured scale thickness grew according to Wagner’s parabolic growth rate, which gives an estimated effective diffusion coefficient of 6 × 10^-8 cm2/s. The scale
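Wagner's parabolic growth rate cited in this abstract can be sketched as follows, assuming the common convention x^2 = 2*k_p*t; the thickness and time values below are hypothetical, not the study's measurements.

```python
import math

# Hedged sketch of Wagner's parabolic oxidation law, x^2 = 2 * k_p * t.
# All numbers here are illustrative; the paper itself reports an effective
# diffusion coefficient of ~6e-8 cm^2/s from its measured scale growth.
def parabolic_rate_constant(thickness_cm, time_s):
    """k_p (cm^2/s) inferred from a measured scale thickness at a given time."""
    return thickness_cm ** 2 / (2.0 * time_s)

def thickness_at(k_p, time_s):
    """Scale thickness (cm) predicted by the parabolic law."""
    return math.sqrt(2.0 * k_p * time_s)

# Assumed observation: a 10 um (1e-3 cm) scale after one hour.
k_p = parabolic_rate_constant(1e-3, 3600.0)

# Round-trip check: the law reproduces the input thickness...
assert math.isclose(thickness_at(k_p, 3600.0), 1e-3)
# ...and quadrupling the time only doubles the thickness - the parabolic signature.
assert math.isclose(thickness_at(k_p, 4 * 3600.0), 2e-3)
```

The square-root time dependence is the signature of diffusion-controlled scale growth, which is why a parabolic fit yields an effective diffusion coefficient.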

  15. Computational simulation of reactive species production by methane-air DBD at high pressure and high temperature

    Science.gov (United States)

    Takana, H.; Tanaka, Y.; Nishiyama, H.

    2012-01-01

    Computational simulations of a single streamer in a DBD in a lean methane-air mixture at pressures of 1 and 3 atm and temperatures of 300 and 500 K were conducted for plasma-enhanced chemical reactions in a closed system. The effects of surrounding pressure and temperature on reactive species production by a DBD discharge are characterized. The results show that the production characteristics of reactive species are strongly influenced by the total gas number density, and that higher concentrations of reactive species are produced at higher pressure and lower gas temperature for a given initial reduced electric field.

  16. International Conference: Computer-Aided Design of High-Temperature Materials

    National Research Council Canada - National Science Library

    Kalia, Rajiv

    1998-01-01

    .... The conference was attended by experimental and computational materials scientists, and experts in high performance computing and communications from universities, government laboratories, and industries in the U.S., Europe, and Japan...

  17. Vaginal rejuvenation using energy-based devices

    Directory of Open Access Journals (Sweden)

    Cheryl Karcher, MD

    2016-09-01

    Physiologic changes in a woman’s life, such as childbirth, weight fluctuations, and hormonal changes due to aging and menopause, may alter the laxity of the vaginal canal, damage the pelvic floor, and devitalize the mucosal tone of the vaginal wall. These events often lead to the development of genitourinary conditions such as stress urinary incontinence; vaginal atrophy; dryness; and physiologic distress affecting a woman’s quality of life, self-confidence, and sexuality. Various treatment modalities are currently available to manage these indications, ranging from invasive vaginal surgery to more benign treatments such as topical vaginal hormonal gels or hormone-replacement therapy. A new trend gaining momentum is the advent of energy-based devices for vaginal rejuvenation that apply thermal or nonthermal energy to the various layers of the vaginal tissue, stimulating collagen regeneration, contraction of elastin fibers, neovascularization, and improved vaginal lubrication. This review aims to present the available technologies offering vaginal rejuvenation and the scientific evidence that underlines their safety and efficacy for this indication.

  18. Computation of fission product distribution in core and primary circuit of a high temperature reactor during normal operation

    International Nuclear Information System (INIS)

    Mattke, U.H.

    1991-08-01

    The fission product release during normal operation from the core of a high temperature reactor is well known to be very low. A HTR-Modul reactor with a reduced power of 170 MW(th) is examined under the aspect of whether the contamination with Cs-137, as the most important nuclide, will be low enough that a helium turbine in the primary circuit is possible. The program SPTRAN is the tool for the computations and simulations of fission product transport in HTRs. The program, initially developed for computations of accident events, has been enlarged for computing fission product transport under the conditions of normal operation. The theoretical basis, the programs used and the data basis are presented, followed by the results of the computations. These results are explained and discussed; moreover, the consequences and future possibilities of development are shown. (orig./HP) [de

  19. Fast, Computer Supported Experimental Determination of Absolute Zero Temperature at School

    Science.gov (United States)

    Bogacz, Bogdan F.; Pedziwiatr, Antoni T.

    2014-01-01

    A simple and fast experimental method of determining absolute zero temperature is presented. An air gas thermometer coupled with a pressure sensor and the data acquisition system COACH is applied over a wide range of temperatures. By constructing a pressure vs temperature plot for air under constant volume it is possible to obtain--by extrapolation to zero…
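The extrapolation described above amounts to a linear fit of pressure against temperature and solving for the zero-pressure intercept; a minimal sketch using synthetic ideal-gas readings (not the article's measurements):

```python
# Hedged sketch of the constant-volume extrapolation to absolute zero:
# fit p = a*T + b (T in deg C) and solve p = 0 for T. The readings below
# are synthetic ideal-gas values, not the article's data.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

temps_c = [0.0, 20.0, 40.0, 60.0, 80.0]
# Ideal gas at constant volume: p proportional to absolute temperature.
pressures_kpa = [101.325 * (t + 273.15) / 273.15 for t in temps_c]

a, b = fit_line(temps_c, pressures_kpa)
absolute_zero_c = -b / a
assert abs(absolute_zero_c + 273.15) < 1e-6   # extrapolates to -273.15 deg C
```

With real sensor data the fit is noisy, so a classroom result within a few degrees of -273 °C is typical.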

  20. PATE - a computer code for the calculation of temperature distribution in cylindrical fuel rods

    International Nuclear Information System (INIS)

    Silva Neto, A.J. da; Roberty, N.C.; Carmo, E.G.D. do.

    1983-08-01

    An analytical solution for the temperature profile in the fuel cladding is presented, having the coolant temperature as the boundary condition and using a first-order polynomial for the Zircaloy thermal conductivity. The temperature profile in the fuel pellet is determined by solving an algebraic equation with iterative methods. (E.G.) [pt
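For constant conductivity, the cladding solution described above reduces to the textbook logarithmic conduction profile; this sketch uses a constant k rather than PATE's first-order polynomial k(T), and all numbers are illustrative.

```python
import math

# Hedged sketch: steady-state temperature across a cylindrical cladding with
# constant thermal conductivity k (PATE uses k(T) = k0 + k1*T for Zircaloy).
# All parameter values below are illustrative, not PATE inputs.
def cladding_temperature(r, r_out, t_out, q_lin, k):
    """T(r) in K for a linear heat rate q_lin (W/m) conducted radially
    outward through the cladding to the coolant-side surface at r_out."""
    return t_out + q_lin / (2.0 * math.pi * k) * math.log(r_out / r)

R_IN, R_OUT = 4.2e-3, 4.75e-3   # m, assumed inner/outer cladding radii
T_OUT = 600.0                    # K, coolant-side surface temperature
Q_LIN = 20e3                     # W/m, assumed linear heat rate
K = 15.0                         # W/(m K), assumed constant conductivity

t_in = cladding_temperature(R_IN, R_OUT, T_OUT, Q_LIN, K)
assert t_in > T_OUT                                               # inner face hotter
assert cladding_temperature(R_OUT, R_OUT, T_OUT, Q_LIN, K) == T_OUT
```

With a temperature-dependent k(T), the same balance is solved iteratively, which is essentially what the code does for the pellet as well.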

  1. Synergistic Computational and Microstructural Design of Next- Generation High-Temperature Austenitic Stainless Steels

    Energy Technology Data Exchange (ETDEWEB)

    Karaman, Ibrahim [Texas A& M Engineering Experiment Station, College Station, TX (United States); Arroyave, Raymundo [Texas A& M Engineering Experiment Station, College Station, TX (United States)

    2015-07-31

    The purpose of this project was to: 1) study deformation twinning, its evolution, thermal stability, and its contribution to the mechanical response of the new advanced stainless steels, especially at elevated temperatures; 2) study alumina-scale formation on the surface, as an alternative to conventional chromium oxide, which shows better oxidation resistance, through alloy design; and 3) design a new generation of high temperature stainless steels that form an alumina scale and have thermally stable nano-twins. The work involved a few baseline alloys for investigating twin formation under tensile loading, the thermal stability of these twins, and the role of deformation twins in the mechanical response of the alloys. These baseline alloys included Hadfield steel (Fe-13Mn-1C) and 316, 316L and 316N stainless steels. Another baseline alloy was studied for the alumina-scale formation investigations. Hadfield steel showed twinning, but undesired second phases formed at higher temperatures. 316N stainless steel did not show signs of deformation twinning. Conventional 316 stainless steel demonstrated extensive deformation twinning at room temperature. Investigations of this alloy, in both single crystalline and polycrystalline forms, showed that deformation twins evolve in a hierarchical manner, consisting of micron-sized bundles of nano-twins. The width of the nano-twins stays almost constant as the strain increases, but the width and number of the bundles increase with increasing strain. A systematic thermomechanical cycling study showed that the twins were stable at temperatures as high as 900°C, after the dislocations are annealed out. Using such cycles, the volume fraction of the thermally stable deformation twins was increased up to 40% in 316 stainless steel. Using computational thermodynamics and kinetics calculations, we designed two generations of advanced austenitic stainless steels. In the first generation, Alloy 1, which had been proposed as an alumina

  2. Data Acquisition Controllers and Computers that can Endure, Operate and Survive Cryogenic Temperatures, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Current and future NASA exploration flight missions require Avionics systems, Computers, Controllers and Data processing units that are capable of enduring extreme...

  3. High pressure-elevated temperature x-ray micro-computed tomography for subsurface applications.

    Science.gov (United States)

    Iglauer, Stefan; Lebedev, Maxim

    2018-06-01

    Physical, chemical and mechanical pore-scale (i.e. micrometer-scale) mechanisms in rock are of key importance in many, if not all, subsurface processes. These processes are highly relevant in various applications, e.g. hydrocarbon recovery, CO2 geo-sequestration, geophysical exploration, water production, geothermal energy production, or the prediction of the location of valuable hydrothermal deposits. Typical examples are multi-phase flow (e.g. oil and water) displacements driven by buoyancy, viscous or capillary forces, mineral-fluid interactions (e.g. mineral dissolution and/or precipitation over geological times), geo-mechanical rock behaviour (e.g. rock compaction during diagenesis) or fines migration during water production, which can dramatically reduce reservoir permeability (and thus reservoir performance). All the above examples are 3D processes, and 2D experiments (as traditionally done for micro-scale investigations) will thus only provide qualitative information; for instance, the percolation threshold is much lower in 3D than in 2D. However, with the advent of x-ray micro-computed tomography (μCT) - which is now routinely used - this limitation has been overcome, and such pore-scale processes can be observed in 3D at micrometer-scale. A serious complication is, however, the fact that in the subsurface high pressures and elevated temperatures (HPET) prevail, due to the hydrostatic and geothermal gradients imposed upon it. Such HPET-reservoir conditions significantly change the above mentioned physical and chemical processes, e.g. gas density is much higher at high pressure, which strongly affects buoyancy and wettability and thus gas distributions in the subsurface; or chemical reactions are significantly accelerated at increased temperature, strongly affecting fluid-rock interactions and thus diagenesis and deposition of valuable minerals. It is thus necessary to apply HPET conditions to the aforementioned μCT experiments, to be able to mimic subsurface

  4. The effect of influent temperature variations in a sedimentation tank for potable water treatment--a computational fluid dynamics study.

    Science.gov (United States)

    Goula, Athanasia M; Kostoglou, Margaritis; Karapantsios, Thodoris D; Zouboulis, Anastasios I

    2008-07-01

    A computational fluid dynamics (CFD) model is used to assess the effect of influent temperature variation on solids settling in a sedimentation tank for potable water treatment. The model is based on the CFD code Fluent and exploits several specific aspects of the potable water application to derive a computational tool much more efficient than the corresponding tools employed to simulate primary and secondary wastewater settling tanks. The linearity of the particle conservation equations allows separate calculations for each particle size class, leading to the uncoupling of the CFD problem from a particular inlet particle size distribution. The usually unknown and difficult to measure particle density is determined by matching the theoretical to the easily measured experimental total settling efficiency. The present model is adjusted against data from a real sedimentation tank and then used to assess the significance of influent temperature variation. It is found that a temperature difference of only 1 °C between influent and tank content is enough to induce a density current. When the influent temperature rises, the tank exhibits a rising buoyant plume that changes the direction of the main circular current. This process keeps the particles in suspension and leads to a higher effluent suspended solids concentration, thus worse settling. As the warmer water keeps coming in, the temperature differential decreases and the current starts going back to its original position; thus, the suspended solids concentration decreases.
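The buoyancy mechanism behind the density current can be checked with a standard empirical density fit for water; the fit below is an approximate handbook formula and is independent of the paper's CFD model.

```python
# Hedged check of the buoyancy driving force described above, using a widely
# quoted empirical fit for the density of water as a function of temperature.
# The fit is approximate (normalized to 1000 kg/m^3 at ~4 deg C).
def water_density(t_c):
    """Approximate density of water (kg/m^3) at temperature t_c (deg C)."""
    return 1000.0 * (1.0 - (t_c + 288.9414) /
                     (508929.2 * (t_c + 68.12963)) * (t_c - 3.9863) ** 2)

# A 1 deg C warmer inflow into a 20 deg C tank is ~0.2 kg/m^3 lighter -
# a small difference, but enough to drive the buoyant plume reported above.
d_rho = water_density(20.0) - water_density(21.0)
assert 0.15 < d_rho < 0.3
```

Gravity acting on this fractional density difference (about 2×10⁻⁴) is what stratifies the tank and redirects the main circulation.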

  5. Magic angle spinning NMR below 6 K with a computational fluid dynamics analysis of fluid flow and temperature gradients

    Science.gov (United States)

    Sesti, Erika L.; Alaniva, Nicholas; Rand, Peter W.; Choi, Eric J.; Albert, Brice J.; Saliba, Edward P.; Scott, Faith J.; Barnes, Alexander B.

    2018-01-01

    We report magic angle spinning (MAS) up to 8.5 kHz with a sample temperature below 6 K using liquid helium as a variable temperature fluid. Cross polarization 13C NMR spectra exhibit exquisite sensitivity with a single transient. Remarkably, 1H saturation recovery experiments show a 1H T1 of 21 s with MAS below 6 K in the presence of trityl radicals in a glassy matrix. Leveraging the thermal spin polarization available at 4.2 K versus 298 K should result in 71 times higher signal intensity. Taking the 1H longitudinal relaxation into account, signal averaging times are therefore predicted to be expedited by a factor of >500. Computer assisted design (CAD) and finite element analysis were employed in both the design and diagnostic stages of this cryogenic MAS technology development. Computational fluid dynamics (CFD) models describing temperature gradients and fluid flow are presented. In the CFD models, the bearing and drive gas are maintained at 100 K, while a colder helium variable temperature fluid stream cools the center of a zirconia rotor. Results from the CFD were used to optimize the helium exhaust path and determine the sample temperature. This novel cryogenic experimental platform will be integrated with pulsed dynamic nuclear polarization and electron decoupling to interrogate biomolecular structure within intact human cells.
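The sensitivity figures quoted above follow from Curie-law polarization scaling; a quick hedged arithmetic check:

```python
# Hedged back-of-envelope check of the abstract's sensitivity claims:
# in the high-temperature limit, thermal spin polarization scales as 1/T.
pol_gain = 298.0 / 4.2        # polarization at 4.2 K relative to 298 K
assert round(pol_gain) == 71  # the quoted "71 times higher signal intensity"

# Averaging time to reach a fixed SNR scales as 1/signal^2, so the naive
# speed-up is ~5000x; the paper's ">500" figure follows after folding in
# the recycle-delay penalty of the longer 21 s 1H T1 at 6 K.
time_gain_naive = pol_gain ** 2
assert time_gain_naive > 5000
```

So even after paying the longitudinal-relaxation penalty, cooling from room temperature to liquid-helium temperatures buys well over two orders of magnitude in averaging time.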

  6. SARTEMP2 - A computer program to calculate power and temperatures in a transport flask during a criticality accident

    International Nuclear Information System (INIS)

    Shaw, P.M.

    1983-04-01

    The computer code SARTEMP2, an extended version of the original SARTEMP program, which calculates the power and temperatures in a transport flask during a hypothetical criticality accident is described. The accident arises, it is assumed, during the refilling of the flask with water, bringing the system to delayed critical. As the water level continues to rise, reactivity is added causing the power to rise, and thus temperatures in the fuel, clad and water to increase. The point kinetics equations are coupled to the one-dimensional heat conduction equation. The model used, the method of solution of the equations and the input data required are given. (author)
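The point kinetics equations mentioned above can be sketched for a single delayed-neutron group; the parameter values are typical textbook numbers, not the report's input data.

```python
# Hedged sketch of one-delayed-group point kinetics, the neutronics half of a
# SARTEMP2-style coupled solve:
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C
# Parameters below are typical illustrative values, not the report's data.
def point_kinetics(rho, beta=0.0065, lam=0.08, gen_time=1e-4,
                   n0=1.0, t_end=1.0, dt=1e-5):
    """Integrate power n(t) and precursors C(t) with forward Euler;
    returns the power at t_end for a constant reactivity rho."""
    n = n0
    c = beta * n0 / (lam * gen_time)   # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / gen_time * n + lam * c) * dt
        dc = (beta / gen_time * n - lam * c) * dt
        n += dn
        c += dc
    return n

# Zero reactivity leaves the power at its initial steady level...
assert abs(point_kinetics(0.0) - 1.0) < 1e-6
# ...while positive reactivity (the rising water level) makes the power grow.
assert point_kinetics(0.001) > 1.0
```

In the actual code this power history drives the one-dimensional heat conduction equation for the fuel, clad and water temperatures.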

  7. Online monitoring of the two-dimensional temperature field in a boiler furnace based on acoustic computed tomography

    International Nuclear Information System (INIS)

    Zhang, Shiping; Shen, Guoqing; An, Liansuo; Niu, Yuguang

    2015-01-01

    Online monitoring of the temperature field is crucial to optimally adjust combustion within a boiler. In this paper, acoustic computed tomography (CT) technology was used to obtain the temperature profile of a furnace cross-section. The physical principles behind acoustic CT, acoustic signals and time delay estimation were studied. Then, the technique was applied to a domestic 600-MW coal-fired boiler. Acoustic CT technology was used to monitor the temperature field of the cross-section in the boiler furnace, and the temperature profile was reconstructed through ART iteration. A linear swept-frequency signal was adopted as the sound source, with a sweep from 500 to 3000 Hz and a sweep cycle of 0.1 s. Generalized cross-correlation techniques with PHAT and ML were used for time delay estimation with the boiler in different states. Actual operation indicated that the monitored images accurately represented the combustion state of the boiler, and the acoustic CT system was determined to be accurate and reliable. - Highlights: • An online approach for monitoring the temperature field in a boiler furnace. • The paper provides acoustic CT technology to obtain the temperature profile of a furnace cross-section. • The temperature profile was reconstructed through ART iteration. • The technique is applied to a domestic 600-MW coal-fired boiler. • The monitored images accurately represent the combustion state of the boiler.
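The ART reconstruction step named above can be sketched with the classic Kaczmarz iteration; the toy 2x2 grid of unknowns, the ray geometry and the cell values below are illustrative assumptions, not the paper's furnace setup.

```python
import numpy as np

# Hedged sketch of ART (algebraic reconstruction technique): recover cell
# values from ray-sum measurements b = A @ x by cyclically projecting the
# estimate onto each ray equation (Kaczmarz sweeps).
def art(A, b, n_iter=2000, relax=1.0):
    """Return the ART estimate of x for the consistent system A @ x = b."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# Toy geometry: rays along the two rows, two columns and one diagonal of a
# 2x2 grid; each ray "measures" the sum of the cells it crosses.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
truth = np.array([1200.0, 1150.0, 1100.0, 1050.0])   # assumed cell values

x = art(A, A @ truth)
assert np.allclose(x, truth, atol=1e-2)
```

In the acoustic case the "ray sums" are path-averaged travel times, from which the sound speed, and hence the gas temperature, in each cell is inferred.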

  8. DATING: A computer code for determining allowable temperatures for dry storage of spent fuel in inert and nitrogen gases

    International Nuclear Information System (INIS)

    Simonen, E.P.; Gilbert, E.R.

    1988-12-01

    The DATING (Determining Allowable Temperatures in Inert and Nitrogen Gases) code can be used to calculate allowable initial temperatures for dry storage of light-water-reactor spent fuel. The calculations are based on the life fraction rule using both measured data and mechanistic equations as reported by Chin et al. (1986). The code is written in FORTRAN and utilizes an efficient numerical integration method for rapid calculations on IBM-compatible personal computers. This report documents the technical basis for the DATING calculations, describes the computational method and code statements, and includes a user's guide with examples. The software for the DATING code is available through the National Energy Software Center operated by Argonne National Laboratory, Argonne, Illinois 60439. 5 refs., 8 figs., 5 tabs
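The life fraction rule underlying DATING-style calculations can be sketched as follows; the Arrhenius rupture-time parameters and the storage histories below are hypothetical illustrations, not the code's correlations (which follow Chin et al. 1986).

```python
import math

# Hedged sketch of the life fraction rule: creep damage accumulates as
# sum(dt / t_rupture(T)) over the storage history, and the history is
# acceptable while the sum stays below 1. The Arrhenius parameters below
# are hypothetical, chosen only to make the trend visible.
def rupture_time_h(temp_k, a=1e-10, q_over_r=20000.0):
    """Assumed Arrhenius rupture life (hours): shorter at higher temperature."""
    return a * math.exp(q_over_r / temp_k)

def life_fraction(history):
    """history: list of (duration_h, temperature_K) storage segments."""
    return sum(dt / rupture_time_h(t) for dt, t in history)

FORTY_YEARS_H = 40 * 8760
cool = life_fraction([(FORTY_YEARS_H, 550.0)])   # 40 years at 550 K
hot = life_fraction([(FORTY_YEARS_H, 650.0)])    # 40 years at 650 K

# With these assumed parameters, 550 K storage stays within its life
# budget while 650 K storage would exhaust it many times over.
assert cool < 1.0 < hot
```

DATING inverts this relation: given the decaying temperature history of stored fuel, it finds the highest initial temperature for which the accumulated life fraction stays below one.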

  9. Computer program MCAP-TOSS calculates steady-state fluid dynamics of coolant in parallel channels and temperature distribution in surrounding heat-generating solid

    Science.gov (United States)

    Lee, A. Y.

    1967-01-01

    Computer program calculates the steady state fluid distribution, temperature rise, and pressure drop of a coolant, the material temperature distribution of a heat generating solid, and the heat flux distributions at the fluid-solid interfaces. It performs the necessary iterations automatically within the computer, in one machine run.

  10. Hydrogen energy based on nuclear energy

    International Nuclear Information System (INIS)

    2002-06-01

    A concept to produce hydrogen as an energy carrier using nuclear energy has been proposed since the 1970s, and a number of processes based on thermochemical methods have been investigated since the oil shock. As these methods require the high temperatures provided by nuclear reactors, this research has mainly been carried out as part of high temperature reactor applications, which have been pursued at the Japan Atomic Energy Research Institute. In October 2000, the 'First International Conference for Information Exchange on Hydrogen Production based on Nuclear Energy' was held under the auspices of OECD/NEA, where hydrogen energy in the energy outlook of the 21st century, technology for hydrogen production using nuclear energy, and related topics were presented. This commentary summarizes the surveys and research on hydrogen production using nuclear energy carried out over one year by the Nuclear Hydrogen Research Group established in January 2001. It contains: views on energy and hydrogen/nuclear energy, hydrogen production using nuclear energy and completed research, methods of hydrogen production using nuclear energy and their present status, concepts for nuclear hydrogen production plants, resources for nuclear hydrogen production and effects on the global environment, requests from the market and acceptability to society, and future directions. (G.K.)

  11. Computer program for afterheat temperature distribution for mobile nuclear power plant

    Science.gov (United States)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.

  12. About possibility of temperature trace observing on a human skin through clothes by using computer processing of IR image

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.

    2017-05-01

    One of the urgent security problems is the detection of objects placed inside the human body. Obviously, for safety reasons one cannot use X-rays widely and often for such object detection. For this purpose, we propose to use a THz camera and an IR camera. Below we continue to investigate the possibility of using an IR camera for the detection of a temperature trace on a human body. In contrast to a passive THz camera, the IR camera does not allow an object under clothing to be seen distinctly. Of course, this is a big disadvantage for a security solution based on the IR camera. To find possible ways of overcoming this disadvantage, we carry out experiments with an IR camera produced by FLIR Company and develop a novel approach for computer processing of the images it captures. This allows us to increase the effective temperature resolution of the IR camera and, correspondingly, the sensitivity available to the human eye. As a consequence, it becomes possible to see changes of human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, following the temperature trace on the skin caused by temperature changes inside the body. Some experiments observe the temperature trace from objects placed behind a thick overall. The demonstrated results are very important for the detection of forbidden objects concealed inside the human body using non-destructive control without X-rays.

  13. Computing the temperature dependence of effective CP violation in the standard model

    DEFF Research Database (Denmark)

    Brauner, Tomas; Taanila, Olli; Tranberg, Anders

    2012-01-01

    We compute the effective action to the leading nontrivial, sixth order in the covariant gradient expansion as a function of temperature. In the limit of zero temperature, our result addresses the discrepancy between two independent calculations existing in the literature [1, 2]. We find that CP violation in the standard model is strongly suppressed at high temperature, but that at T ≲ 1 GeV it may be relevant for certain scenarios of baryogenesis. We also identify a selected class of operators at the next, eighth order and discuss the convergence of the covariant gradient expansion.

  14. Computer simulation of temperature-dependent growth of fractal and compact domains in diluted Ising models

    DEFF Research Database (Denmark)

    Sørensen, Erik Schwartz; Fogedby, Hans C.; Mouritsen, Ole G.

    1989-01-01

    temperature are studied as functions of temperature, time, and concentration. At zero temperature and high dilution, the growing solid is found to have a fractal morphology, and the effective fractal exponent D varies with concentration and the ratio of time scales of the two dynamical processes. The mechanism responsible for forming the fractal solid is shown to be a buildup of a locally high vacancy concentration in the active growth zone. The growth-probability measure of the fractals is analyzed in terms of multifractality by calculating the f(α) spectrum. It is shown that the basic ideas of relating probability measures of static fractal objects to the growth-probability distribution during formation of the fractal apply to the present model. The f(α) spectrum is found to be in the universality class of diffusion-limited aggregation. At finite temperatures, the fractal solid domains become metastable...

  15. Numerical analysis for temperature profile of the closed house using computational fluid dynamics

    Science.gov (United States)

    Setiadi, Rizki; Munadi; Tauviqirrahman, Mohammad

    2018-03-01

    This study aims to analyze the air temperature distribution in a closed house system for broilers using an ABAQUS CFD model. The obtained data are used for placing the temperature sensors before building the control system for the closed house. The dimension of the experimental house was 30 m × 12 m × 2 m (length × width × height), which could be occupied by 7,500 broilers. The wall was made from exposed brick and curtain, the ventilation system used 7 exhaust fans with a diameter of 1 m and 2 cooling units, the roof was made from wood, and the lighting system used 45 lamps of 7 W. The results of the analysis show that the temperature distribution spans 21-33.5°C, which is still relatively comfortable for broilers at the age of 1-21 days. The air temperature near the cooling units is lower and increases toward the exhaust fans. In addition, the air temperature in the area near the roof is higher than elsewhere.

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  17. Computation and measurement of a plasma temperature using the emission lines of copper

    International Nuclear Information System (INIS)

    Sassi, M.; Pierre, L.; Benard, J.; Cahen, C.

    1994-01-01

    The use of copper emission lines for temperature measurement in plasma environments is presented. The article features the problems encountered with such a measurement method when used in stationary and non-stationary plasmas. In the first case, we were concerned with temperature measurements in the jet of a 2 MW industrial plasma torch. The plasma was stationary and the measurement geometry allowed the use of the Abel inversion method to recover the temperature profiles in the plasma jet. The limitations of the measurement method at the jet boundaries, as well as the cooling of the plasma by entrainment of cold air, are discussed. In the second case, a non-stationary plasma in a 1.4 MW electric furnace was studied. This study highlighted the role of the detector dynamics in relation to the dynamics of the observed medium. The obtained measurements show only a probable temperature in the immediate proximity of the arc. Finally, we conclude on the validity of copper as a tracer and on the use of spontaneous emission spectroscopy, an easy method to implement compared with other, more sophisticated temperature measurement methods. (orig.)

  18. Computation of temperature elevation in rabbit eye irradiated by 2.45-GHz microwaves with different field configurations.

    Science.gov (United States)

    Hirata, Akimasa; Watanabe, Soichi; Taki, Masao; Fujiwara, Osamu; Kojima, Masami; Sasaki, Kazuyuki

    2008-02-01

    This study calculated the temperature elevation in the rabbit eye caused by 2.45-GHz near-field exposure systems. First, we calculated specific absorption rate (SAR) distributions in the eye for different antennas and then compared them with those observed in previous studies. Next, we re-examined the temperature elevation in the rabbit eye due to a horizontally-polarized dipole antenna with a C-shaped director, which was used in a previous study. From our computational results, we found that the decisive factors for the SAR distribution in the rabbit eye were the polarization of the electromagnetic wave and the antenna aperture. Next, we quantified the eye-averaged SAR as 67 W kg(-1) for the dipole antenna with an input power density at the eye surface of 150 mW cm(-2), which was specified in the previous work as the minimum cataractogenic power density. The effect of administering anesthesia on the temperature elevation was approximately 30% in this case. Additionally, the position where the maximum temperature in the lens appears is discussed for different 2.45-GHz microwave systems. That position was found to be around the posterior of the lens regardless of the exposure condition, which indicates that the original temperature distribution in the eye was the dominant factor.
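
    The link between SAR and temperature elevation can be illustrated with the short-time limit of the bioheat equation, dT/dt = SAR/c, which holds before conduction and perfusion become significant. The specific heat value below is an assumed figure for eye tissue, not one taken from the paper.

```python
def initial_heating_rate(sar_w_per_kg, specific_heat=3600.0):
    """Short-time limit of the bioheat equation: before heat conduction and
    blood perfusion matter, dT/dt = SAR / c.
    specific_heat (J/(kg*K)) is an assumed value for eye tissue."""
    return sar_w_per_kg / specific_heat

# With the eye-averaged SAR of 67 W/kg quoted in the abstract:
rate = initial_heating_rate(67.0)   # in K/s; roughly 1 K per minute
```

    The steady-state elevation is lower than this rate would suggest, because conduction and perfusion remove heat; that balance is what the full thermal model in the paper computes.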

  19. Feasibility of Noninvasive Temperature Assessment During Radiofrequency Liver Ablation on Computed Tomography

    NARCIS (Netherlands)

    Pandeya, Ganga D.; Greuter, Marcel J. W.; de Jong, Koert P.; Schmidt, Bernhard; Flohr, Thomas; Oudkerk, Matthijs

    2011-01-01

    Purpose: The purpose of this study was to assess the feasibility of noninvasive thermometry using high-resolution computer tomography (CT) for the monitoring of bovine liver during radiofrequency (RF) ablation. Methods: Radiofrequency probes were used to ablate bovine livers from 20 degrees C to 98

  20. Kinetics of the high-temperature combustion reactions of dibutylether using composite computational methods

    KAUST Repository

    Rachidi, Mariam El; Davis, Alexander C.; Sarathy, Mani

    2015-01-01

    The energetics of H-abstraction by OH radicals is also studied. All rates are determined computationally using the CBS-QB3 and G4 composite methods in conjunction with conventional transition state theory. The B3LYP/6-311++G(2df,2pd) method is used to optimize
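
    Conventional transition state theory, as used here on top of the CBS-QB3 and G4 energetics, reduces to the Eyring expression once an activation enthalpy and entropy are in hand. A minimal sketch with hypothetical barrier values; the paper's actual rates come from its computed partition functions, tunneling corrections, and so on.

```python
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
H  = 6.62607015e-34   # Planck constant, J*s
R  = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(T, dH_act, dS_act, sigma=1):
    """Conventional transition-state-theory rate constant (Eyring form):
        k(T) = sigma * (kB*T/h) * exp(dS_act/R) * exp(-dH_act/(R*T))
    dH_act in J/mol, dS_act in J/(mol*K), sigma = reaction path degeneracy.
    Barrier values passed in are illustrative, not from the paper."""
    return sigma * (KB * T / H) * math.exp(dS_act / R) * math.exp(-dH_act / (R * T))
```

    With a zero barrier the expression collapses to the universal frequency factor kB*T/h; with a positive barrier the rate grows with temperature, the qualitative behavior relevant to high-temperature combustion.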

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  2. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge (CSA06) last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping up to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  4. Computer simulations of the restricted primitive model at very low temperature and density

    NARCIS (Netherlands)

    Valeriani, C.; Camp, P. J.; Zwanikken, J.W.; Van Roij, R.; Dijkstra, M.

    2010-01-01

    The problem of successfully simulating ionic fluids at low temperature and low density states is well known in the simulation literature: using conventional methods, the system is not able to equilibrate rapidly due to the presence of strongly associated cation–anion pairs. In this paper we present

  5. Postmortem time estimation using body temperature and a finite-element computer model

    NARCIS (Netherlands)

    Hartog, E.A. den; Lotens, W.A.

    2004-01-01

    In the Netherlands most murder victims are found 2-24 h after the crime. During this period, body temperature decrease is the most reliable method to estimate the postmortem time (PMT). Recently, two murder cases were analysed in which currently available methods did not provide a sufficiently
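
    The principle behind temperature-based PMT estimation can be shown with the simplest single-exponential (Newton) cooling model; the finite-element model of the paper exists precisely because this simplification is often inadequate in real cases. The cooling constant below is a hypothetical value, not one from the paper.

```python
import math

def postmortem_time(T_body, T_ambient, T_initial=37.0, k=0.12):
    """Estimate the postmortem interval (hours) from body temperature using a
    single-exponential Newton-cooling model:
        T(t) = T_amb + (T0 - T_amb) * exp(-k * t)
    k (1/h) is an assumed cooling constant; real casework uses richer models
    (e.g. Henssge's nomogram, or the finite-element model of this paper)."""
    ratio = (T_body - T_ambient) / (T_initial - T_ambient)
    if not 0.0 < ratio <= 1.0:
        raise ValueError("temperatures inconsistent with monotonic cooling")
    return -math.log(ratio) / k
```

    Inverting the forward model recovers the elapsed time exactly in this idealized setting; the paper's contribution is handling the non-ideal conditions (clothing, posture, changing ambient temperature) where no closed form applies.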

  6. Solubility of magnetite in high temperature water and an approach to generalized solubility computations

    International Nuclear Information System (INIS)

    Dinov, K.; Ishigure, K.; Matsuura, C.; Hiroishi, D.

    1993-01-01

    Magnetite solubility in pure water was measured at 423 K in a fully teflon-covered autoclave system. A fairly good agreement was found to exist between the experimental data and calculation results obtained from the thermodynamical model, based on the assumption of Fe3O4 dissolution and Fe2O3 deposition reactions. A generalized thermodynamical approach to the solubility computations under complex conditions, on the basis of minimization of the total system Gibbs free energy, was proposed. The forms of the chemical equilibria were obtained for various systems initially defined and successfully justified by the subsequent computations. A [Fe3+]T-[Fe2+]T phase diagram was introduced as a tool for systematic understanding of the magnetite dissolution phenomena in pure water and under oxidizing and reducing conditions. (orig.)

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  8. A Computational Method for Determining the Equilibrium Composition and Product Temperature in a LH2/LOX Combustor

    Science.gov (United States)

    Sozen, Mehmet

    2003-01-01

    In what follows, the model used for the combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics, will be described. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
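
    The first-law part of such a method can be sketched as an enthalpy balance solved iteratively for the product temperature. This toy version assumes complete combustion to H2O and a constant mean heat capacity, assumptions the real Gibbs-minimization code does not make, which is why its answer is unrealistically high (dissociation caps real LH2/LOX flame temperatures near 3500 K).

```python
def adiabatic_flame_T(T_ref=298.15):
    """Toy enthalpy balance for H2 + 1/2 O2 -> H2O (complete combustion,
    no dissociation, constant mean cp), solved by bisection on
        residual(T) = dH_rxn + cp * (T - T_ref) = 0.
    The code described in the abstract instead minimizes Gibbs free energy
    over an equilibrium product mixture; this only shows the iteration idea."""
    dH_rxn = -241_800.0   # J per mol H2O formed (gas-phase heat of reaction)
    cp_H2O = 45.0         # J/(mol*K), assumed mean value over the range
    lo, hi = T_ref, 8000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        residual = dH_rxn + cp_H2O * (mid - T_ref)
        if residual < 0.0:   # products still hold less enthalpy than released
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```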

  9. Using soft-X-ray energy spectrum to measure electronic temperature Te and primary research with computer data processing

    International Nuclear Information System (INIS)

    Wang Jingyao; Zhang Guangyang

    1993-01-01

    The authors report the application of the SCORPIO-2000 computer detection system on a nuclear fusion device to measure the energy spectrum of soft X-rays, from which the plasma electron temperature was calculated. The data in the 1-4 keV soft X-ray energy range were processed systematically. The program was written mostly in FORTRAN, with a single subroutine (SUBSB) in assembly language. The program worked normally, with reliable operation and easy correction of the data. The result obtained from the calculation, as well as the resulting diagram, agrees with expectations.
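
    The standard way to extract Te from such a pulse-height spectrum is to fit the exponential slope of the bremsstrahlung continuum: I(E) is proportional to exp(-E/Te), so ln I versus photon energy E is a straight line of slope -1/Te. A minimal sketch; the detector response and line-radiation corrections of a real analysis are omitted.

```python
import numpy as np

def electron_temperature(E_kev, counts):
    """Estimate the electron temperature Te (keV) from a soft-X-ray
    pulse-height spectrum.  For thermal bremsstrahlung the continuum falls
    as exp(-E/Te), so a linear fit of ln(counts) vs E gives slope -1/Te."""
    slope, _intercept = np.polyfit(E_kev, np.log(counts), 1)
    return -1.0 / slope
```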

  10. Natural carbonized sugar as a low-temperature ammonia sensor material: experimental, theoretical and computational studies

    OpenAIRE

    Ghule, Balaji; Shaikh, Shoyebmohamad F.; Ekar, Satish; Nakate, Umesh Tukaram; Gunturu, Krishna Chaitanya; Shinde, Nanasaheb; Naushad, Mu; Kim, Kwang Ho; O'Dwyer, Colm; Mane, Rajaram

    2017-01-01

    Carbonized sugar (CS) has been synthesized via microwave-assisted carbonization of market-quality tabletop sugar, bearing in mind the advantages of this synthesis method, such as being simple, cost-effective, and eco-friendly. The as-prepared CS has been characterized for its morphology, phase purity, type of porosity, pore-size distribution, and so on. The gas-sensing properties of CS for various oxidizing and reducing gases are demonstrated at ambient temperature, where we observe good selec...

  11. Contempt-LT: a computer program for predicting containment pressure-temperature response to a loss-of-coolant accident

    International Nuclear Information System (INIS)

    Wheat, L.L.; Wagner, R.J.; Niederauer, G.F.; Obenchain, C.F.

    1975-06-01

    CONTEMPT-LT is a digital computer program, written in FORTRAN IV, developed to describe the long-term behavior of water-cooled nuclear reactor containment systems subjected to postulated loss-of-coolant accident (LOCA) conditions. The program calculates the time variation of compartment pressures, temperatures, mass and energy inventories, heat structure temperature distributions, and energy exchange with adjacent compartments. The program is capable of describing the effects of leakage on containment response. Models are provided to describe fan cooler and cooling spray engineered safety systems. Up to four compartments can be modeled with CONTEMPT-LT, and any compartment except the reactor system may have both a liquid pool region and an air-vapor atmosphere region above the pool. Each region is assumed to have a uniform temperature, but the temperatures of the two regions may be different. CONTEMPT-LT can be used to model all current boiling water reactor pressure suppression systems, including containments with either vertical or horizontal vent systems. CONTEMPT-LT can also be used to model pressurized water reactor dry containments, subatmospheric containments, and dual volume containments with an annulus region, and can be used to describe containment responses in experimental containment systems. The program user defines which compartments are used, specifies input mass and energy additions, defines heat structure and leakage systems, and describes the time advancement and output control. CONTEMPT-LT source decks are available in double precision extended-binary-coded-decimal-interchange-code (EBCDIC) versions. Sample problems have been run on the IBM360/75 computer. (U.S.)

  12. Computer based systems for fast reactor core temperature monitoring and protection

    International Nuclear Information System (INIS)

    Wall, D.N.

    1991-01-01

    Self-testing, fail-safe trip systems and guardlines have been developed in the UK using dynamic logic as a basis for temperature monitoring and temperature protection. The guardline and trip system have been tested in passive operation on a number of reactors, and a pulse-coded logic guardline is currently in use on the DIDO test reactor. Acoustic boiling noise and ultrasonic systems have been developed in the UK as diverse alternatives to thermocouples for temperature monitoring and measurement. These systems have the advantage that they make remote monitoring possible, but they rely on complex signal processing to achieve their output. The means of incorporating such systems within the self-testing trip system architecture are explored, and it is apparent that such systems, particularly the one based on ultrasonics, have great potential for development. There remain a number of problems requiring detailed investigation, in particular the verification of the signal processing electronics and trip software. It is considered that these problems, while difficult, are far from insurmountable, and this work should result in the production of protection and monitoring systems suitable for deployment on the fast reactor. 6 figs

  13. A soft-computing methodology for noninvasive time-spatial temperature estimation.

    Science.gov (United States)

    Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A

    2008-02-01

    The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and assigned, together with past temperature values, as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were assigned to estimate the temperature at different intensities and points arranged along the therapeutic transducer radial line (60 mm apart from the transducer face). Model inputs, as well as the number of neurons, were selected using the multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not yet assessed, and some of them presented a maximum absolute error inferior to 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models present low implementational complexity, as desired for real-time applications.
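
    The estimator family in the paper, radial basis function networks mapping echo-shift and temperature histories to temperature, can be sketched as Gaussian RBFs whose output weights are fitted by linear least squares. The centers, width, and the one-dimensional toy target below are assumptions for illustration; the paper selects inputs and neuron counts with MOGA rather than by hand.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial-basis design matrix Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w^2))."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d**2 / (2.0 * width**2))

def rbf_fit(X, y, centers, width):
    """Output weights by linear least squares (the network is linear in them)."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w
```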

  14. Experimental and Computational Investigation of High Entropy Alloys for Elevated-Temperature Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liaw, Peter [Univ. of Tennessee, Knoxville, TN (United States); Zhang, Fan [CompuTherm LLC, Madison, WI (United States); Zhang, Chuan [CompuTherm LLC, Madison, WI (United States); Wang, Gongyao [Univ. of Tennessee, Knoxville, TN (United States); Xie, Xie [Univ. of Tennessee, Knoxville, TN (United States); Diao, Haoyan [Univ. of Tennessee, Knoxville, TN (United States); Kuo, Chih-Hsiang [Univ. of Tennessee, Knoxville, TN (United States); An, Zhinan [Univ. of Tennessee, Knoxville, TN (United States); Hemphill, Michael [Univ. of Tennessee, Knoxville, TN (United States)

    2016-07-30

    To create and design novel structural materials with enhanced creep-resistance, fundamental studies have been conducted on high-entropy alloys (HEAs), using (1) thermodynamic calculations, (2) mechanical tests, (3) neutron diffraction, (4) characterization techniques, and (5) crystal-plasticity finite-element modeling (CPFEM), to explore future candidates for next-generation power plants. All the constituent binary and ternary systems of the Al-Cr-Cu-Fe-Mn-Ni and Al-Co-Cr-Fe-Ni systems were thermodynamically modeled within the whole composition range. Comparisons between the calculated phase diagrams and literature data show good agreement. Seven types of HEAs were fabricated from the Al-Cr-Cu-Fe-Mn-Ni and Al-Co-Cr-Fe-Ni systems. The AlxCrCuFeMnNi HEAs have disordered [face-centered cubic (FCC) + body-centered cubic (BCC)] crystal structures, rather than a single FCC or BCC structure. Excessive alloying of the Al element results in changes of both the microstructural and mechanical properties of AlxCoCrFeNi HEAs. There are mainly three structural features in AlxCoCrFeNi: (1) the morphology, (2) the volume fractions of the constituent phases, and (3) the temperature ranges over which all six phases exist. After homogenization, the Al0.3CoCrFeNi material is a pure FCC solid solution. After aging at 700 °C for 500 hours, the optimal microstructure combination, an FCC matrix with needle-like B2 phase within grains and granular σ phase along grain boundaries, is achieved for Al0.3CoCrFeNi. The cold-rolling process is utilized to reduce the grain size of Al0.1CoCrFeNi and Al0.3CoCrFeNi. The chemical elemental partitioning of the FCC, BCC, B2, and σ phases at different temperatures, before and after mechanical tests, in the Al-Cr-Cu-Fe-Mn-Ni and Al-Co-Cr-Fe-Ni systems is quantitatively characterized by both synchrotron X-ray diffraction, neutron diffraction with levitation, scanning electron microscopy (SEM), advanced atom probe

  15. TAOI B- Computational Microstructural Optimization Design Tool for High Temperature Structural Materials

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Rajiv [Univ. Of North Texas, Denton, TX (United States); Charit, Indrajit [Univ. of Idaho, Moscow, ID (United States)

    2015-02-28

    The objectives of this research were two-fold: (a) develop a methodology for microstructural optimization of alloys - a genetic algorithm approach for alloy microstructural optimization using theoretical models based on fundamental micro-mechanisms, and (b) develop a new computationally designed Ni-Cr alloy for coal-fired power plant applications. The broader outcome of these objectives is expected to be the creation of an integrated approach for ‘structural materials by microstructural design’. Three alloy systems were considered for computational optimization and validation: (i) a Ni-20Cr (wt.%) base alloy using only solid solution strengthening, (ii) a nano-Y2O3-containing Ni-20Cr-1.2Y2O3 (wt.%) alloy for dispersion strengthening, and (iii) a sub-micron-Al2O3 composite-strengthened alloy, Ni-20Cr-1.2Y2O3-5.0Al2O3 (wt.%). The specimens were synthesized by mechanical alloying and consolidated using spark plasma sintering. Detailed microstructural characterization was done along with initial mechanical properties to validate the computational prediction. A key target property is a creep rate of 1x10^-9 s^-1 at 100 MPa and 800 °C. The initial results were quite promising and require additional quantification of strengthening contributions from dislocation-particle attractive interaction and load transfer. The observed creep rate was on the order of 10^-9 s^-1 for a longer-time creep test of Ni-20Cr-1.2Y2O3-5Al2O3, lending support to the overall approach pursued in this project.

  16. Computer simulation of current percolation in polycrystalline high-temperature superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Zeimetz, B [Department of Materials Science and Interdisciplinary Research Centre in Superconductivity, Cambridge University, Pembroke Street, Cambridge (United Kingdom); Rutter, N A; Glowacki, B A; Evetts, J E [Department of Materials Science and Interdisciplinary Research Centre in Superconductivity, Cambridge University, Pembroke Street, Cambridge (United Kingdom)

    2001-09-01

    YBCO-coated conductors were modelled in a computer simulation using a resistor network concept, with the resistors representing the grain boundaries. Dissipation above the critical current, accompanied by flux penetration into the grain boundaries, was described by a linear (flux-flow) resistivity. The model allowed calculation of the combined percolation of current and magnetic flux. Current-voltage data showed scaling in agreement with percolation theory for two-dimensional systems. The influence of grain alignment and electromagnetic parameters on conductor performance was investigated. (author)
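
    The resistor-network idea can be sketched on a uniform square lattice: assemble the conductance (weighted Laplacian) matrix over the grain-boundary bonds, impose electrode potentials on two opposite columns, and solve Kirchhoff's equations for the interior node voltages. All bonds here share one linear conductance, so the result can be checked against the series/parallel value; the published model instead assigns each boundary its own critical current and flux-flow resistivity.

```python
import numpy as np

def network_current(n, g, V=1.0):
    """Total current through an n x n node square resistor network with bond
    conductance g (each bond standing in for a grain boundary), left column
    held at potential V and right column grounded."""
    idx = lambda i, j: i * n + j
    N = n * n
    G = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            for a, b in ((i, j + 1), (i + 1, j)):   # right and down neighbours
                if a < n and b < n:
                    p, q = idx(i, j), idx(a, b)
                    G[p, p] += g; G[q, q] += g
                    G[p, q] -= g; G[q, p] -= g
    fixed = {}                                       # Dirichlet electrode nodes
    for i in range(n):
        fixed[idx(i, 0)] = V
        fixed[idx(i, n - 1)] = 0.0
    free = [k for k in range(N) if k not in fixed]
    vals = np.array(list(fixed.values()))
    v = np.zeros(N)
    v[free] = np.linalg.solve(G[np.ix_(free, free)],
                              -G[np.ix_(free, list(fixed))] @ vals)
    for k, val in fixed.items():
        v[k] = val
    # total current leaving the left electrode
    return sum(g * (v[idx(i, 0)] - v[idx(i, 1)]) for i in range(n))
```

    With equal bonds the columns are equipotentials, so the network reduces to n-1 column gaps in series, each with n bonds in parallel: I = V * n * g / (n - 1).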

  17. Energy-Based Devices in Treatment of Acne Vulgaris.

    Science.gov (United States)

    Handler, Marc Z; Bloom, Bradley S; Goldberg, David J

    2016-05-01

    Acne vulgaris is a chronic dermatologic complaint with a multifactorial cause. Traditionally, antibiotics and retinoids have been used to manage the condition; patient compliance has been an ongoing issue. A variety of energy-based devices have been reported to be effective in the treatment of acne vulgaris. To review and summarize the current literature specific to treatment of acne vulgaris with energy-based devices. A review of the current literature of energy-based devices used for the treatment of acne vulgaris. Although limited randomized controlled trials for the treatment of acne have been performed, significant clinical improvement of acne vulgaris, especially of inflammatory lesions, has been demonstrated with a variety of energy-based devices. Newer approaches may lead to even better results.

  18. Energy-Based Acoustic Source Localization Methods: A Survey

    Directory of Open Access Journals (Sweden)

    Wei Meng

    2017-02-01

    Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for WSNs due to their limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits, and to point out possible future research directions.
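
    A minimal nonlinear-least-squares version of the energy-based problem: each sensor reports acoustic energy decaying as the inverse square of distance from the source, the unknown source gain is eliminated in closed form, and the residual is minimized over a coarse grid. The decay exponent, sensor layout, and noise-free readings are assumptions for illustration, not the survey's models.

```python
import numpy as np

def localize(sensors, energies, grid_lim=10.0, step=0.1):
    """Energy-based single-source localization by NLS over a grid.
    Assumes the common inverse-square decay model y_i = S / ||x - s_i||^2;
    for each candidate x the gain S is fitted by linear least squares."""
    xs = np.arange(0.0, grid_lim + step, step)
    best, best_cost = None, np.inf
    for gx in xs:
        for gy in xs:
            d2 = np.sum((sensors - [gx, gy])**2, axis=1)
            if np.any(d2 < 1e-9):          # candidate on top of a sensor
                continue
            h = 1.0 / d2
            S = (energies @ h) / (h @ h)   # least-squares source gain
            cost = np.sum((energies - S * h)**2)
            if cost < best_cost:
                best, best_cost = (gx, gy), cost
    return np.array(best)
```

    The grid search stands in for the iterative NLS solvers the survey reviews; with noisy readings one would refine the grid minimum with a local optimizer.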

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride. Edited by M-C. Sawley, with contributions from: P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini, M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  1. Computer Modeling Of High-Level Waste Glass Temperatures Within DWPF Canisters During Pouring And Cool Down

    International Nuclear Information System (INIS)

    Amoroso, J.

    2011-01-01

    This report describes the results of a computer simulation study to predict the temperature of the glass at any location inside a DWPF canister during pouring and subsequent cooling. These simulations are an integral part of a larger research focus aimed at developing methods to predict, evaluate, and ultimately suppress nepheline formation in HLW glasses. That larger research focus is centered on holistically understanding nepheline formation in HLW glass by exploring the fundamental thermal and chemical driving forces for nepheline crystallization with respect to realistic processing conditions. Through experimental work, the goal is to integrate nepheline crystallization potential in HLW glass with processing capability to ultimately optimize waste loading and throughput while maintaining an acceptable product with respect to durability. The results of this study indicated that severe temperature gradients and prolonged temperature dwell times exist throughout different locations in the canister and that the time and temperatures that HLW glass is subjected to during processing are a function of pour rate. The simulations indicate that crystallization driving forces are not uniform throughout the glass volume in a DWPF (or DWPF-like) canister and illustrate the importance of considering the overall kinetics (chemical and thermal driving forces) of nepheline formation when developing methods to predict and suppress its formation in HLW glasses. The intended path forward is to use the simulation data both as a driver for future experimental work and as an investigative tool for evaluating the impact of experimental results. Simulation data will be used to develop laboratory experiments to more acutely evaluate nepheline formation in HLW glass by incorporating the simulated temperatures throughout the canister into the laboratory experiments.
Concurrently, laboratory experiments will be performed to identify nepheline crystallization potential in HLW glass as a function of

  2. Computational fluid dynamics model of avian tracheal temperature control as a model for extant and extinct animals.

    Science.gov (United States)

    Sverdlova, N S; Arkali, F; Witzel, U; Perry, S F

    2013-10-01

    Respiratory evaporative cooling is an important mechanism of temperature control in birds. A computational simulation of the breathing cycle and of heat and water loss in an anatomical avian trachea/air-sac model has not previously been conducted. We report a first attempt to simulate a breathing cycle in a three-dimensional model of the avian trachea and air sacs (domestic fowl) using transient computational fluid dynamics. The airflow in the trachea of the model is driven by changing the volume of the air sacs based on the measured tidal volume and inspiratory/expiratory times for the domestic fowl. We compare flow parameters and heat transfer results with in vivo data and with our previously reported results for a two-dimensional model. The total respiratory heat loss corresponds to about 13-19% of the starvation metabolic rate of the domestic fowl. The present study can lend insight into a possible thermoregulatory function in species with long necks and/or a very long trachea, as found in swans and birds of paradise. Assuming the structure of the sauropod dinosaur respiratory system was close to avian, the simulation of respiratory temperature control (using convective and evaporative cooling) in the extensively studied domestic fowl may also help in making simulations of respiratory heat control in these extinct animals. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures

    Science.gov (United States)

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; Tohyama, Takami

    2018-04-01

    We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003), 10.1103/PhysRevB.68.235106] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013), 10.1103/PhysRevLett.111.010401] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.
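
    The Lanczos-plus-continued-fraction machinery the abstract refers to can be sketched for a small dense Hamiltonian. When the number of iterations equals the matrix dimension, the continued fraction reproduces the exact resolvent matrix element <v0|(z - H)^{-1}|v0>; the test matrix and starting vector below are arbitrary, and none of the thermal-state selection of the paper is included.

```python
import numpy as np

def lanczos(H, v0, m):
    """Lanczos tridiagonalization: returns the diagonal a and off-diagonal b
    of H projected onto the m-dimensional Krylov space of v0."""
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    beta = 0.0
    a, b = [], []
    for _ in range(m):
        w = H @ v - beta * v_prev
        alpha = v @ w
        w = w - alpha * v
        a.append(alpha)
        beta = np.linalg.norm(w)
        b.append(beta)
        v_prev, v = v, (w / beta if beta > 1e-12 else w)
    return np.array(a), np.array(b[:-1])

def greens_function(z, a, b):
    """Continued fraction for <v0|(z - H)^{-1}|v0>:
    G(z) = 1 / (z - a0 - b0^2 / (z - a1 - b1^2 / ...)), evaluated bottom-up."""
    g = 0.0
    for ai, bi in zip(a[:0:-1], b[::-1]):
        g = bi**2 / (z - ai - g)
    return 1.0 / (z - a[0] - g)
```

    Evaluating G(z) just above the real axis, A(w) = -Im G(w + i*eta)/pi, gives the spectral function; in the MCLM, v0 would be the filtered microcanonical state rather than a random vector.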

  5. Fabrication of computationally designed scaffolds by low temperature 3D printing

    International Nuclear Information System (INIS)

    Castilho, Miguel; Dias, Marta; Fernandes, Paulo; Pires, Inês; Gouveia, Barbara; Rodrigues, Jorge; Gbureck, Uwe; Groll, Jürgen; Vorndran, Elke

    2013-01-01

    The development of artificial bone substitutes that mimic the properties of bone and simultaneously promote the desired tissue regeneration is a current issue in bone tissue engineering research. One approach to creating scaffolds with such characteristics is based on the combination of novel design and additive manufacturing processes. The objective of this work is to characterize the microstructural and mechanical properties of scaffolds developed by coupling topology optimization with a low-temperature 3D printing process. The scaffold design was obtained using a topology optimization approach that maximizes permeability with constraints on the mechanical properties. This procedure was shown to be suitable for the fabrication of a cage prototype for tibial tuberosity advancement, one of the most recent and promising techniques for treating cruciate ligament rupture in dogs. The microstructural and mechanical properties of the scaffolds, manufactured by reacting α/β-tricalcium phosphate with diluted phosphoric acid, were then assessed experimentally, and the scaffolds' strength reliability was determined. The results demonstrate that the low-temperature 3D printing process is a reliable option for creating synthetic scaffolds with tailored properties and, when coupled with topology optimization design, can be a powerful tool for the fabrication of patient-specific bone implants. (paper)

  6. Groebner bases for finite-temperature quantum computing and their complexity

    International Nuclear Information System (INIS)

    Crompton, P. R.

    2011-01-01

    Following the recent approach of using order domains to construct Groebner bases from general projective varieties, we examine the parity and time-reversal arguments relating to the Wightman axioms of quantum field theory and propose that the definition of associativity in these axioms should be introduced a posteriori to the cluster property in order to generalize the anyon conjecture for quantum computing to indefinite metrics. We then show that this modification, which we define via ideal quotients, does not admit a faithful representation of the Braid group, because the generalized twisted inner automorphisms that we use to reintroduce associativity are only parity invariant for the prime spectra of the exterior algebra. We then use a coordinate prescription for the quantum deformations of toric varieties to show how a faithful representation of the Braid group can be reconstructed and argue that for a degree reverse lexicographic (monomial) ordered Groebner basis, the complexity class of this problem is bounded quantum polynomial.

  7. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  14. Computational and Experimental Design of Fe-Based Superalloys for Elevated-Temperature Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liaw, Peter K. [Univ. of Tennessee, Knoxville, TN (United States); Fine, Morris E. [Northwestern Univ., Evanston, IL (United States); Ghosh, Gautam [Northwestern Univ., Evanston, IL (United States); Asta, Mark D. [Univ. of California, Berkeley, CA (United States); Liu, Chain T. [Auburn Univ., AL (United States); Sun, Zhiqian [Univ. of Tennessee, Knoxville, TN (United States); Huang, Shenyan [Univ. of Tennessee, Knoxville, TN (United States); Teng, Zhenke [Univ. of Tennessee, Knoxville, TN (United States); Wang, Gongyao [Univ. of Tennessee, Knoxville, TN (United States)

    2012-04-13

    Analogous to nickel-based superalloys, Fe-based superalloys, which are strengthened by coherent B2-type precipitates, are proposed for elevated-temperature applications. During the period of this project, a series of ferritic superalloys have been designed and fabricated by vacuum-arc melting and vacuum-induction melting. Nano-scale precipitates were characterized by atom-probe tomography, ultrasmall-angle X-ray scattering, and transmission-electron microscopy. A duplex distribution of precipitates was found. Ferritic superalloys appear to be susceptible to brittle fracture, and systematic efforts have been devoted to understanding and resolving the problem. Factors such as hot rolling, precipitate volume fractions, alloy compositions, precipitate sizes and inter-particle spacings, and hyperfine cooling precipitates have been investigated. In order to understand the underlying relationship between the microstructure and the creep behavior of ferritic alloys at elevated temperatures, in-situ neutron studies have been carried out. Based on the current results, it appears that the major role of the β' phase, at a 16% volume fraction, in strengthening ferritic alloys is not load sharing but interactions with dislocations. The oxidation behavior of one ferritic alloy, FBB8 (Fe-6.5Al-10Ni-10Cr-3.4Mo-0.25Zr-0.005B, weight percent), was studied in dry air. It possesses superior oxidation resistance at 1,023 and 1,123 K, compared with other creep-resistant ferritic steels [T91 (modified 9Cr-1Mo, weight percent) and P92 (9Cr-1.8W-0.5Mo, weight percent)]. At the same time, the interfacial energies between α-iron and B2-type intermetallics (CoAl, FeAl, and NiAl) have been calculated.

  15. Temperature control system with computer mapping for engine cooling circuits; Kennfeldgesteuertes Temperaturregelsystem fuer Motorkuehlkreislaeufe

    Energy Technology Data Exchange (ETDEWEB)

    Saur, R.; Leu, P.; Lemberger, H.; Heumer, G.

    1996-07-01

    Thermomanagement of vehicles powered by internal combustion engines is one of the prerequisites for fulfilling the German automobile industry's commitment to reduce fuel consumption by 25% before 2005, relative to 1990. Thermomanagement improves comfort and reduces fuel consumption and pollutant emissions. BMW and Behr Thermot-Tronik have jointly developed the first component of such a thermomanagement system: an engine cooling system with computer mapping. BMW is the first manufacturer worldwide to install this system as standard equipment, as it has done since January 1996 in its refined eight-cylinder engine series (M62). (orig.)

  16. Using Combined Computational Techniques to Predict the Glass Transition Temperatures of Aromatic Polybenzoxazines

    Science.gov (United States)

    Mhlanga, Phumzile; Wan Hassan, Wan Aminah; Hamerton, Ian; Howlin, Brendan J.

    2013-01-01

    The Molecular Operating Environment software (MOE) is used to construct a series of benzoxazine monomers for which a variety of parameters relating to the structures (e.g. water accessible surface area, negative van der Waals surface area, hydrophobic volume and the sum of atomic polarizabilities, etc.) are obtained and quantitative structure property relationships (QSPR) models are formulated. Three QSPR models (formulated using up to 5 descriptors) are first used to make predictions for the initiator data set (n = 9) and compared to published thermal data; in all of the QSPR models there is a high level of agreement between the actual data and the predicted data (within 0.63–1.86 K of the entire dataset). The water accessible surface area is found to be the most important descriptor in the prediction of Tg. Molecular modelling simulations of the benzoxazine polymer (minus initiator) carried out at the same time using the Materials Studio software suite provide an independent prediction of Tg. Predicted Tg values from molecular modelling fall in the middle of the range of the experimentally determined Tg values, indicating that the structure of the network is influenced by the nature of the initiator used. Hence both techniques can provide predictions of glass transition temperatures and provide complementary data for polymer design. PMID:23326419
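    At its core, a QSPR model of this kind is a multivariate regression from molecular descriptors to the target property. A minimal sketch with purely illustrative descriptor values and Tg data (not the paper's n = 9 dataset), using ordinary least squares:

```python
import numpy as np

# Illustrative descriptor matrix: one row per monomer, with columns standing
# in for water-accessible surface area, vdW surface area, and the sum of
# atomic polarizabilities (hypothetical values, not the paper's data).
X = np.array([
    [412.0, 105.0, 38.2],
    [398.5, 112.3, 36.9],
    [455.1,  98.7, 41.0],
    [430.2, 120.5, 39.5],
    [441.8, 101.2, 40.1],
    [405.3, 118.0, 37.4],
])
tg = np.array([423.0, 415.0, 448.0, 431.0, 440.0, 419.0])  # Tg in K (illustrative)

# Fit Tg ~ intercept + linear combination of descriptors (ordinary least squares).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, tg, rcond=None)

def predict_tg(descriptors):
    """Predict a glass transition temperature from a descriptor vector."""
    return coef[0] + np.dot(coef[1:], np.asarray(descriptors, dtype=float))
```

    A real QSPR workflow adds descriptor selection (the paper uses up to 5 descriptors) and validation against held-out thermal data.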

  17. Using combined computational techniques to predict the glass transition temperatures of aromatic polybenzoxazines.

    Directory of Open Access Journals (Sweden)

    Phumzile Mhlanga

    Full Text Available The Molecular Operating Environment software (MOE) is used to construct a series of benzoxazine monomers for which a variety of parameters relating to the structures (e.g. water accessible surface area, negative van der Waals surface area, hydrophobic volume and the sum of atomic polarizabilities, etc.) are obtained and quantitative structure property relationship (QSPR) models are formulated. Three QSPR models (formulated using up to 5 descriptors) are first used to make predictions for the initiator data set (n = 9) and compared to published thermal data; in all of the QSPR models there is a high level of agreement between the actual data and the predicted data (within 0.63-1.86 K for the entire dataset). The water accessible surface area is found to be the most important descriptor in the prediction of Tg. Molecular modelling simulations of the benzoxazine polymer (minus initiator) carried out at the same time using the Materials Studio software suite provide an independent prediction of Tg. Predicted Tg values from molecular modelling fall in the middle of the range of the experimentally determined Tg values, indicating that the structure of the network is influenced by the nature of the initiator used. Hence both techniques can provide predictions of glass transition temperatures and provide complementary data for polymer design.

  18. Computed estimates of maximum temperature elevations in fetal tissues during transabdominal pulsed Doppler examinations.

    Science.gov (United States)

    Bly, S H; Vlahovich, S; Mabee, P R; Hussey, R G

    1992-01-01

    Measured characteristics of ultrasonic fields were obtained in submissions from manufacturers of diagnostic ultrasound equipment for devices operating in pulsed Doppler mode. Simple formulae were used with these data to generate upper limits to fetal temperature elevations, ΔTlim, during a transabdominal pulsed Doppler examination. A total of 236 items were analyzed, each item being a console/transducer/operating-mode/intended-use combination for which the spatial-peak temporal-average intensity, ISPTA, was greater than 500 mW cm⁻². The largest calculated ΔTlim values were approximately 1.5, 7.1 and 8.7 °C for first-, second- and third-trimester examinations, respectively. The vast majority of items yielded ΔTlim values of less than 1 °C in the first trimester. For second- and third-trimester examinations, where heating of fetal bone determines ΔTlim, most ΔTlim values were less than 4 °C. The clinical significance of the results is discussed.

  19. Computer simulation of trails on a square lattice. I. Trails at infinite temperature

    International Nuclear Information System (INIS)

    Lim, H.A.; Meirovitch, H.

    1989-01-01

    A trail is a random walk on a lattice for which two bonds are not allowed to overlap. However, the chain may cross itself, and one may associate with each such intersection an attractive energy ε_c. We study trails at infinite temperature T = ∞ (i.e., trails without attractions) on a square lattice using the scanning simulation method. Our results for the radius of gyration and the end-to-end distance strongly suggest (as do previous studies) that the shape exponent is ν = 0.75, similar to that for self-avoiding walks (SAWs). We obtain significantly more accurate estimates than have been obtained before for the entropy exponent γ = 1.350 ± 0.012 and for the effective growth parameter μ = 2.72058 ± 0.00020 (95% confidence limit). The persistence length is found to increase with increasing chain length N, and the data fit a function N^w, where w = 0.047 ± 0.009, slightly better than a logarithmic one. Guttmann [J. Phys. A 18, 567 (1985)] has shown exactly that trails and SAWs on the hexagonal lattice at T = ∞ have the same exponents. Our results suggest that this is true also for the square lattice.
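    The defining constraint of a trail (bonds, not sites, must be distinct) is easy to state in code. A minimal sketch using naive growth with rejection, rather than the scanning method of the paper, so the sampling is attrition-biased for long chains and is meant only to illustrate the constraint:

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_trail(n, rng):
    """Grow an n-step trail on the square lattice: sites may be revisited,
    but no lattice bond may be traversed twice. Returns the end point, or
    None if the walk gets trapped (every incident bond already used)."""
    x, y = 0, 0
    used_bonds = set()
    for _ in range(n):
        choices = []
        for dx, dy in STEPS:
            bond = frozenset({(x, y), (x + dx, y + dy)})
            if bond not in used_bonds:
                choices.append((dx, dy, bond))
        if not choices:
            return None
        dx, dy, bond = rng.choice(choices)
        used_bonds.add(bond)
        x, y = x + dx, y + dy
    return x, y

def mean_square_end_to_end(n, samples, seed=0):
    """Estimate <R^2(N)> over surviving n-step trails."""
    rng = random.Random(seed)
    total, kept = 0.0, 0
    while kept < samples:
        end = grow_trail(n, rng)
        if end is not None:
            total += end[0] ** 2 + end[1] ** 2
            kept += 1
    return total / kept
```

    The scanning method corrects for the growth bias by importance-sampling over future continuations; this sketch omits that machinery.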

  20. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  1. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives, as well as network links, are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  3. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure, as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  4. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been running at a lower level as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  7. New analytic and computational techniques for finite temperature condensed matter systems

    International Nuclear Information System (INIS)

    Arias, T.A.

    1992-01-01

    By employing a special summation technique, we find that the breakdown of the Meissner-Ochsenfeld effect in the three-dimensional Bose gas, as the applied field passes through its critical value, is an entropy-driven, weakly first-order transition, rather than the second-order transition usually ascribed to the system. The transition is second order at the usual Bose condensation temperature Tc as well as at T = 0, with a line of first-order transitions connecting these critical points. The first-order transitions make the Bose gas resemble familiar superconductors, and a Landau-Ginzburg analysis indicates that the Bose gas is always a type I superconductor. We employ the recently introduced conjugate-gradient methods for minimization of the electronic energy functional to perform an extensive ab initio study of the Σ = 5 tilt [310] grain boundary in germanium. We find that the boundary reliably reconstructs to the tetrahedrally bonded network observed in HREM experiments, without the proliferation of false local minima observed in similar twist boundaries. The reduced density of bonds crossing the grain boundary plane leads us to conjecture that the boundary may be a preferred fracture interface. We then combine these conjugate-gradient methods with a new technique for generating trial wavefunctions to produce an efficient ab initio molecular dynamics scheme that is at least two orders of magnitude more accurate than previous schemes and thus allows accurate calculation of dynamic correlation functions while maintaining tolerable energy conservation for microcanonical averages of those correlation functions over picosecond time scales. We present two advances which greatly enhance the efficiency of our new ab initio molecular dynamics technique. We introduce a class of generalizations of traditional fermionic energy functionals which allow us to lift the orthonormality constraints on the single-particle orbitals and thus speed convergence.

  8. Computational design for a wide-angle cermet-based solar selective absorber for high temperature applications

    International Nuclear Information System (INIS)

    Sakurai, Atsushi; Tanikawa, Hiroya; Yamada, Makoto

    2014-01-01

    The purpose of this study is to computationally design a wide-angle cermet-based solar selective absorber for high-temperature applications using a characteristic matrix method and a genetic algorithm. The present study investigates a solar selective absorber with a tungsten–silica (W–SiO2) cermet. Multilayer structures of 1, 2, 3, and 4 layers and a wide range of metal volume fractions are optimized. The predicted radiative properties show good solar performance: thermal emittance, especially beyond 2 μm, is quite low, whereas solar absorptance is high over a wide angular range, so that solar photons are effectively absorbed and infrared radiative heat loss is reduced. -- Highlights: • Electromagnetic simulation of radiative properties by the characteristic matrix method. • Optimization of a multilayered W–SiO2 cermet-based absorber by a genetic algorithm. • A solar selective absorber with high solar performance over a wide angular range is proposed.
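    The characteristic (transfer) matrix method referred to here is standard thin-film optics. A minimal normal-incidence sketch for the reflectance of a lossless multilayer stack (illustrative only; the paper's calculation covers oblique angles and complex, wavelength-dependent cermet indices):

```python
import numpy as np

def reflectance(n_list, d_list, lam):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic matrix method. n_list holds the refractive indices
    [ambient, layer 1, ..., layer k, substrate]; d_list holds the k layer
    thicknesses (same length units as the wavelength lam)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_list[1:-1], d_list):
        delta = 2.0 * np.pi * n * d / lam  # phase thickness of the layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    n0, ns = n_list[0], n_list[-1]
    B, C = M @ np.array([1.0, ns], dtype=complex)
    r = (n0 * B - C) / (n0 * B + C)  # amplitude reflection coefficient
    return abs(r) ** 2
```

    As a sanity check, a bare air–glass interface gives the Fresnel value ((1 − 1.5)/(1 + 1.5))² = 0.04, and a quarter-wave layer of index √(n₀·nₛ) drives the reflectance to zero.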

  9. REMIX: a computer program for temperature transients due to high pressure injection after interruption of natural circulation

    International Nuclear Information System (INIS)

    Iyer, K.; Nourbakhsh, H.P.; Theofanous, T.G.

    1986-05-01

    This report describes the features and use of several computer programs developed on the basis of the Regional Mixing Model (RMM). This model provides a phenomenologically-based analytical description of the stratified flow and temperature fields resulting from High Pressure Safety Injection (HPI) in the stagnated loops of a Pressurized Water Reactor (PWR). The basic program is called REMIX and is intended for thermally-induced stratification at low Froude number injections. The REMIX-S version is intended for solute-induced stratification with or without thermal effects as found in several experimental simulations. The NEWMIX program is a derivative of REMIX representing the limit of maximum possible mixing within the cold leg and is intended for high Froude number injections. The NEWMIX-S version accounts for solute effects. Listings of all programs and sample problem input and output files are included. 10 refs

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  11. Calculation of temperature fields formed in induction annealing of closing welded joint of jacket of steam generator for WWER 440 type nuclear power plant using ICL 2960 computer

    International Nuclear Information System (INIS)

    Sajnar, P.; Fiala, J.

    1983-01-01

    The problems of the mathematical description and simulation of temperature fields in annealing the closing weld of the steam generator jacket of the WWER 440 nuclear power plant are discussed. The basic principles of induction annealing are given, the method of calculating temperature fields is indicated, and the mathematical description is given of the boundary conditions on the outer and inner surfaces of the steam generator jacket for the computation of the temperature fields arising during annealing. Also described are the methods of determining the temperature of exposed parts of heat exchange tubes inside the steam generator, and the technical possibilities of the annealing equipment are assessed from the point of view of its computer simulation. Five alternatives are given for the computation of temperature fields in the area around the weld for different boundary conditions. The values are given of the maximum differences in the temperatures of the metal in the annealed part of the steam generator jacket, which allow the assessment of the individual computation variants, mainly with regard to maintaining the annealing temperature over the required width of the annealed jacket on both sides of the closing weld. (B.S.)

  12. Heat transfer, velocity-temperature correlation, and turbulent shear stress from Navier-Stokes computations of shock wave/turbulent boundary layer interaction flows

    Science.gov (United States)

    Wang, C. R.; Hingst, W. R.; Porro, A. R.

    1991-01-01

    The properties of 2-D shock wave/turbulent boundary layer interaction flows were calculated by using a compressible turbulent Navier-Stokes numerical computational code. Interaction flows caused by oblique shock wave impingement on the turbulent boundary layer flow were considered. The oblique shock waves were induced with shock generators at angles of attack less than 10 degs in supersonic flows. The surface temperatures were kept at near-adiabatic (ratio of wall static temperature to free stream total temperature) and cold wall (ratio of wall static temperature to free stream total temperature) conditions. The computational results were studied for the surface heat transfer, velocity temperature correlation, and turbulent shear stress in the interaction flow fields. Comparisons of the computational results with existing measurements indicated that (1) the surface heat transfer rates and surface pressures could be correlated with Holden's relationship, (2) the mean flow streamwise velocity components and static temperatures could be correlated with Crocco's relationship if flow separation did not occur, and (3) the Baldwin-Lomax turbulence model should be modified for turbulent shear stress computations in the interaction flows.
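Crocco's relationship mentioned in point (2) ties the mean static temperature to the streamwise velocity across a compressible boundary layer. A minimal sketch for unit Prandtl number follows; the wall and edge conditions are illustrative, not the paper's test cases.

```python
def crocco_temperature(u_ratio, T_w, T_e, M_e, gamma=1.4, r=1.0):
    """Crocco(-Busemann) relation for a compressible boundary layer (Pr ~ 1):
    static temperature as a quadratic function of u/u_e.
    u_ratio = u/u_e, T_w = wall static temperature, T_e = edge static
    temperature, M_e = edge Mach number, r = recovery factor."""
    T_aw = T_e * (1.0 + r * (gamma - 1.0) / 2.0 * M_e ** 2)  # adiabatic wall temp
    return (T_w + (T_aw - T_w) * u_ratio
            - T_e * (gamma - 1.0) / 2.0 * M_e ** 2 * u_ratio ** 2)

# Cold-wall example: Mach 3 edge flow with the wall held below recovery temperature
T_w, T_e, M_e = 300.0, 220.0, 3.0
print(crocco_temperature(0.0, T_w, T_e, M_e))   # recovers T_w at the wall
print(crocco_temperature(1.0, T_w, T_e, M_e))   # ~T_e at the boundary layer edge
```

The quadratic term is what breaks the correlation once the flow separates, consistent with the paper's observation that Crocco's relationship holds only for attached flow.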

  13. Energy-based ferromagnetic material model with magnetic anisotropy

    Energy Technology Data Exchange (ETDEWEB)

    Steentjes, Simon, E-mail: simon.steentjes@iem.rwth-aachen.de [Institute of Electrical Machines - RWTH Aachen University, Schinkelstr. 4, D-52056 Aachen (Germany); Henrotte, François, E-mail: francois.henrotte@uclouvain.be [Institute of Mechanics Materials and Civil Engineering - UCL, Av. G. Lemaître 4-6, B-1348 Louvain-la-Neuve (Belgium); Hameyer, Kay [Institute of Electrical Machines - RWTH Aachen University, Schinkelstr. 4, D-52056 Aachen (Germany)

    2017-03-01

    Non-oriented soft magnetic materials are commonly assumed to be magnetically isotropic. However, due to the rolling process a preferred direction exists along the rolling direction. This uniaxial magnetic anisotropy, and the related magnetostriction effect, are critical to the accurate calculation of iron losses and magnetic forces in rotating electrical machines. This paper proposes an extension of an isotropic energy-based vector hysteresis model to account for these two effects. - Highlights: • Energy-based vector hysteresis model with magnetic anisotropy. • Two-scale model to account for pinning field distribution. • Pinning force and reluctivity are extended to anisotropic case.

  14. Influence of contrast media viscosity and temperature on injection pressure in computed tomographic angiography: a phantom study.

    Science.gov (United States)

    Kok, Madeleine; Mihl, Casper; Mingels, Alma A; Kietselaer, Bas L; Mühlenbruch, Georg; Seehofnerova, Anna; Wildberger, Joachim E; Das, Marco

    2014-04-01

    Iodinated contrast media (CM) in computed tomographic angiography is characterized by its concentration and, consecutively, by its viscosity. Viscosity itself is directly influenced by temperature, which will furthermore affect injection pressure. Therefore, the purposes of this study were to systematically evaluate the viscosity of different CM at different temperatures and to assess their impact on injection pressure in a circulation phantom. Initially, viscosity of different contrast media concentrations (240, 300, 370, and 400 mg I/mL) was measured at different temperatures (20°C-40°C) with a commercially available viscosimeter. In the next step, a circulation phantom with physical conditions was used. Contrast media were prepared at different temperatures (20°C, 30°C, 37°C) and injected through a standard 18-gauge needle. All other relevant parameters were kept constant (iodine delivery rate, 1.9 g I/s; total amount of iodine, 15 g I). Peak flow rate (in milliliters per second) and injection pressure (psi) were monitored. Significance of differences was tested using the Kruskal-Wallis test (Statistical Package for the Social Sciences). Viscosities for iodinated CM of 240, 300, 370, and 400 mg I/mL at 20°C were 5.1, 9.1, 21.2, and 28.8 mPa·s, respectively, whereas, at 40°C, these were substantially lower (2.8, 4.4, 8.7, and 11.2 mPa·s). In the circulation phantom, mean (SD) peak pressures for CM of 240 mg I/mL at 20°C, 30°C, and 37°C were 107 (1.5), 95 (0.6), and 92 (2.1) psi; for CM of 300 mg I/mL, 119 (1.5), 104 (0.6), and 100 (3.6) psi; for CM of 370 mg I/mL, 150 (0.6), 133 (4.4), and 120 (3.5) psi; and for CM of 400 mg I/mL, 169 (1.0), 140 (2.1), and 135 (2.9) psi, respectively, with all P values less than 0.05. Low concentration, low viscosity, and high temperature of CM are beneficial in terms of injection pressure. This should also be considered for individually tailored contrast protocols in daily routine scanning.
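The strong temperature dependence reported above can be captured by a two-point Andrade/Arrhenius-type interpolation between the measured endpoints; the exponential form is an assumption of this sketch, not a fit performed in the study.

```python
import math

def arrhenius_viscosity(T1, eta1, T2, eta2):
    """Fit eta = A * exp(B / T) (an Andrade-type law) through two measured
    (temperature, viscosity) points; T in kelvin, eta in mPa·s.
    The exponential temperature dependence is an assumed model here."""
    B = math.log(eta1 / eta2) / (1.0 / T1 - 1.0 / T2)
    A = eta1 / math.exp(B / T1)
    return lambda T: A * math.exp(B / T)

# Measured endpoints for the 370 mg I/mL contrast medium (from the abstract):
eta = arrhenius_viscosity(293.15, 21.2, 313.15, 8.7)
print(round(eta(310.15), 1))   # estimated viscosity at body temperature (37 C)
```

Such an interpolation makes the clinical point quantitative: pre-warming CM toward 37°C roughly halves its viscosity relative to room temperature, which is why injection pressures drop.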

  15. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep all data transfer routes exercised in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  16. Computational study of collisions between O(3P) and NO(2Π) at temperatures relevant to the hypersonic flight regime

    International Nuclear Information System (INIS)

    Castro-Palacio, Juan Carlos; Nagy, Tibor; Meuwly, Markus; Bemish, Raymond J.

    2014-01-01

    Reactions involving N and O atoms dominate the energetics of the reactive air flow around spacecraft when reentering the atmosphere in the hypersonic flight regime. For this reason, the thermal rate coefficients for reactive processes involving O(3P) and NO(2Π) are relevant over a wide range of temperatures. For this purpose, a potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations. These ab initio energies are represented using the reproducing kernel Hilbert space method and Legendre polynomials. The global PES of NO2 in the ground state is constructed by smoothly connecting the surfaces of the grids of various channels around the equilibrium NO2 geometry by a distance-dependent weighting function. The rate coefficients were calculated using Monte Carlo integration. The results indicate that at high temperatures only the lowest A-symmetry PES is relevant. At the highest temperatures investigated (20 000 K), the rate coefficient for the “O1O2+N” channel becomes comparable (to within a factor of around three) to the rate coefficient of the oxygen exchange reaction. A state resolved analysis shows that the smaller the vibrational quantum number of NO in the reactants, the higher the relative translational energy required to open the channel; conversely, with a higher vibrational quantum number, less translational energy is required. This is in accordance with Polanyi's rules. However, the oxygen exchange channel (NO2+O1) is accessible at any collision energy. Finally, this work introduces an efficient computational protocol for the investigation of three-atom collisions in general
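The Monte Carlo averaging of a thermal rate coefficient can be sketched for a hypothetical line-of-centers cross section; the real calculation samples trajectories on the NO2 PES, and the values of σ0, E0 and the reduced mass μ below are invented so that the estimate can be checked against the closed-form Arrhenius result.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def rate_coefficient_mc(sigma, mu, T, n_samples=200_000, rng=None):
    """Thermal rate coefficient k(T) = <sigma(E) v_rel> over a Boltzmann
    distribution of collision energies, evaluated by Monte Carlo.
    The relative translational energy is Gamma(shape=2, scale=kB*T)
    distributed, so k(T) = <v_rel> * E[sigma(E)] under that distribution."""
    rng = np.random.default_rng(rng)
    E = rng.gamma(shape=2.0, scale=kB * T, size=n_samples)
    v_mean = np.sqrt(8 * kB * T / (np.pi * mu))       # mean relative speed
    return v_mean * sigma(E).mean()

# Hypothetical line-of-centers cross section (NOT the paper's NO2 surface):
sigma0, E0 = 1e-19, 0.5 * 1.602e-19                   # m^2, 0.5 eV barrier
sigma = lambda E: np.where(E > E0, sigma0 * (1 - E0 / E), 0.0)

mu = 1.2e-26                                          # reduced mass, kg (toy value)
k = rate_coefficient_mc(sigma, mu, 5000.0, rng=0)
k_analytic = (sigma0 * np.sqrt(8 * kB * 5000 / (np.pi * mu))
              * np.exp(-E0 / (kB * 5000)))
print(k, k_analytic)   # Monte Carlo estimate vs closed-form Arrhenius result
```

The line-of-centers model is chosen because its Boltzmann average is known analytically, which makes the Monte Carlo machinery easy to verify before swapping in a realistic, PES-derived cross section.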

  17. Computational study of collisions between O(3P) and NO(2Π) at temperatures relevant to the hypersonic flight regime.

    Science.gov (United States)

    Castro-Palacio, Juan Carlos; Nagy, Tibor; Bemish, Raymond J; Meuwly, Markus

    2014-10-28

    Reactions involving N and O atoms dominate the energetics of the reactive air flow around spacecraft when reentering the atmosphere in the hypersonic flight regime. For this reason, the thermal rate coefficients for reactive processes involving O((3)P) and NO((2)Π) are relevant over a wide range of temperatures. For this purpose, a potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations. These ab initio energies are represented using the reproducing kernel Hilbert space method and Legendre polynomials. The global PES of NO2 in the ground state is constructed by smoothly connecting the surfaces of the grids of various channels around the equilibrium NO2 geometry by a distance-dependent weighting function. The rate coefficients were calculated using Monte Carlo integration. The results indicate that at high temperatures only the lowest A-symmetry PES is relevant. At the highest temperatures investigated (20,000 K), the rate coefficient for the "O1O2+N" channel becomes comparable (to within a factor of around three) to the rate coefficient of the oxygen exchange reaction. A state resolved analysis shows that the smaller the vibrational quantum number of NO in the reactants, the higher the relative translational energy required to open the channel; conversely, with a higher vibrational quantum number, less translational energy is required. This is in accordance with Polanyi's rules. However, the oxygen exchange channel (NO2+O1) is accessible at any collision energy. Finally, this work introduces an efficient computational protocol for the investigation of three-atom collisions in general.

  18. Computer modeling of the sensitivity of a laser water vapor sensor to variations in temperature and air speed

    Science.gov (United States)

    Tucker, George F.

    1994-01-01

    Currently, there is disagreement among existing methods of determining atmospheric water vapor concentration at dew-points below -40 C. A major source of error is wall effects, which result from the necessity of bringing samples into the instruments. All of these instruments also have response times on the order of seconds. NASA Langley is developing a water vapor sensor which utilizes the absorption of the infrared radiation produced by a diode laser to estimate water vapor concentration. The laser beam is directed through an aircraft window to a retroreflector located on an engine. The reflected beam is detected by an infrared detector located near the laser. To maximize the signal-to-noise ratio, derivative signals are analyzed. By measuring the 2f/DC signal and correcting for ambient temperature, atmospheric pressure and air speed (which results in a Doppler shifting of the laser beam), the water vapor concentration can be retrieved. Since this is an in situ measurement there are no wall effects and measurements can be made at a rate of more than 20 per second. This allows small spatial variations of water vapor to be studied. In order to study the sensitivity of the instrument to variations in temperature and air speed, a computer program which generated the 2f, 3f, 4f, DC and 2f/DC signals of the instrument as a function of temperature, pressure and air speed was written. This model was used to determine the effect of errors in measurement of the temperature and air speed on the measured water vapor concentration. Future studies will quantify the effect of pressure measurement errors, which are expected to be very small. As a result of these studies, a retrieval algorithm has been formulated, and will be applied to data taken during the PEM-West atmospheric science field mission. Spectroscopic studies of the water vapor line used by the instrument will be used to refine this algorithm. To prepare for these studies, several lasers have been studied to determine their
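The 2f signal at the heart of the retrieval can be illustrated by an idealized lock-in demodulation of a wavelength-modulated Lorentzian absorption line. The line shape, modulation depth and units here are illustrative; the real instrument also normalizes by the DC signal and corrects for pressure, temperature and Doppler shift.

```python
import numpy as np

def harmonic_signal(center_offset, mod_depth, harmonic, gamma=1.0, n=4096):
    """Lock-in n-th harmonic of a Lorentzian absorption feature under
    sinusoidal wavelength modulation (idealized WMS, arbitrary units).
    center_offset and mod_depth are in units of the half-width gamma."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    nu = center_offset + mod_depth * np.cos(theta)        # instantaneous detuning
    absorbance = 1.0 / (1.0 + (nu / gamma) ** 2)          # Lorentzian line shape
    return 2.0 * np.mean(absorbance * np.cos(harmonic * theta))

# The 2f signal has its extremum at line center, where the 1f signal
# passes through zero -- the property exploited by 2f/DC detection:
s2f_center = harmonic_signal(0.0, 1.0, 2)
s1f_center = harmonic_signal(0.0, 1.0, 1)
print(s2f_center, s1f_center)
```

Sweeping `center_offset` in such a model reproduces the characteristic second-derivative-like 2f line shape whose amplitude, after the DC normalization, scales with water vapor concentration.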

  19. Knee temperatures measured in vivo after arthroscopic ACL reconstruction followed by cryotherapy with gel-packs or computer controlled heat extraction.

    Science.gov (United States)

    Rashkovska, Aleksandra; Trobec, Roman; Avbelj, Viktor; Veselko, Matjaž

    2014-09-01

    To obtain in vivo data about intra- and extra-articular knee temperatures to assess the effectiveness of two cryotherapeutic methods (conventional cooling with gel-packs and computer controlled cryotherapy) following anterior cruciate ligament (ACL) reconstructive surgery. Twenty patients were arbitrarily assigned for cryotherapy after ACL reconstruction: 8 patients with frozen gel-packs and 12 patients with computer controlled cryotherapy with constant temperatures of the cooling liquid in the knee pads. The treatment was performed for 12 h. Temperatures were measured every second for 16 h after surgery with two thermo sensors in catheters placed intraarticularly and subcutaneously, four sensors on the skin and one sensor under the protective bandage. In the first 2 h of treatment, there were no significant differences (n.s.) between the groups in temperatures in the intracondylar notch. After 4 h of cryotherapy, the skin temperatures were significantly lower (24.6 ± 2.8 and 31.4 ± 1.3 °C) in the computer controlled cryotherapy group compared to the gel-pack group. The cooling effect of the arthroscopy irrigation fluid on the knee temperature is evident in the first 2 h of treatment. The energy extraction is significantly more effective and controllable with computer controlled cryotherapy than with frozen gel-packs. Prospective comparative study, Level II.

  20. Computational and variable-temperature infrared spectroscopic studies on carbon monoxide adsorption on zeolite Ca-A.

    Science.gov (United States)

    Pulido, Angeles; Nachtigall, Petr; Rodríguez Delgado, Montserrat; Otero Areán, Carlos

    2009-05-11

    Carbon monoxide adsorption on LTA (Linde type 5A) zeolite Ca-A is studied by using a combination of variable-temperature infrared spectroscopy and computational methods involving periodic density functional calculations and the correlation between stretching frequency and bond length of adsorbed CO species (nu(CO)/r(CO) correlation). Based on the agreement between calculated and experimental results, the main adsorption species can be identified as bridged Ca(2+)...CO...Ca(2+) complexes formed on dual-cation sites constituted by a pair of nearby Ca(2+) cations. Two types of such species can be formed: One of them has the two Ca(2+) ions located on six-membered rings of the zeolite framework and is characterized by a C-O stretching frequency in the range of 2174-2179 cm(-1) and an adsorption enthalpy of -31 to -33 kJ mol(-1), whereas the other bridged CO species is formed between a Ca(2+) ion located on an eight-membered ring and another one on a nearby six-membered ring and is characterized by nu(CO) in the range 2183-2188 cm(-1) and an adsorption enthalpy of -46 to -50 kJ mol(-1). Ca(2+)...CO monocarbonyl complexes are also identified, and at a relatively high CO equilibrium pressure, dicarbonyl species can also be formed.

  1. Application of computational thermodynamics to the determination of thermophysical properties as a function of temperature for multicomponent Al-based alloys

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, Fabiana C. [Fluminense Federal University, Graduate Program in Metallurgical Engineering, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Paresque, Mara C.C. [Fluminense Federal University, Graduate Program in Mechanical Engineering, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Castro, José A. de [Fluminense Federal University, Graduate Program in Metallurgical Engineering, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Jácome, Paulo A.D. [Fluminense Federal University, Graduate Program in Mechanical Engineering, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Garcia, Amauri, E-mail: amaurig@fem.unicamp.br [University of Campinas – UNICAMP, Department of Manufacturing and Materials Engineering, 13083-860 Campinas, SP (Brazil); Ferreira, Ivaldo L. [Fluminense Federal University, Graduate Program in Mechanical Engineering, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil)

    2015-11-10

    Highlights: • A model coupled to a computational thermodynamics software is proposed to compute thermophysical properties. • The model applies to multicomponent alloys and has been validated against experimental results. • Density and specific heat as a function of temperature are computed for Al–Si–Cu alloys. - Abstract: Despite the technological importance of Al–Si–Cu alloys in manufacturing processes involving heat transfer, such as welding, casting and heat treatment, thermophysical properties of this system of alloys are very scarce in the literature. In this paper, a model connected to a computational thermodynamics software is proposed permitting density and specific heats as a function of temperature and enthalpy of transformations to be numerically determined. The model is pre-validated against experimental density as a function of temperature for liquid and solid phases of A319 and 7075 alloys found in the literature and validated against experimental density values for the solid phase of an Al-6 wt%Cu-1 wt%Si alloy determined in the present study. In both cases the numerical predictions are in good agreement with the experimental results. Specific heat and temperatures and heats of transformation are also numerically determined for this ternary Al-based alloy.

  2. Application of computational thermodynamics to the determination of thermophysical properties as a function of temperature for multicomponent Al-based alloys

    International Nuclear Information System (INIS)

    Nascimento, Fabiana C.; Paresque, Mara C.C.; Castro, José A. de; Jácome, Paulo A.D.; Garcia, Amauri; Ferreira, Ivaldo L.

    2015-01-01

    Highlights: • A model coupled to a computational thermodynamics software is proposed to compute thermophysical properties. • The model applies to multicomponent alloys and has been validated against experimental results. • Density and specific heat as a function of temperature are computed for Al–Si–Cu alloys. - Abstract: Despite the technological importance of Al–Si–Cu alloys in manufacturing processes involving heat transfer, such as welding, casting and heat treatment, thermophysical properties of this system of alloys are very scarce in the literature. In this paper, a model connected to a computational thermodynamics software is proposed permitting density and specific heats as a function of temperature and enthalpy of transformations to be numerically determined. The model is pre-validated against experimental density as a function of temperature for liquid and solid phases of A319 and 7075 alloys found in the literature and validated against experimental density values for the solid phase of an Al-6 wt%Cu-1 wt%Si alloy determined in the present study. In both cases the numerical predictions are in good agreement with the experimental results. Specific heat and temperatures and heats of transformation are also numerically determined for this ternary Al-based alloy.

  3. High resolution vertical profiles of wind, temperature and humidity obtained by computer processing and digital filtering of radiosonde and radar tracking data from the ITCZ experiment of 1977

    Science.gov (United States)

    Danielson, E. F.; Hipskind, R. S.; Gaines, S. E.

    1980-01-01

    Results are presented from computer processing and digital filtering of radiosonde and radar tracking data obtained during the ITCZ experiment, when coordinated measurements were taken daily over a 16-day period across the Panama Canal Zone. The temperature, relative humidity and wind velocity profiles are discussed.

  4. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    Directory of Open Access Journals (Sweden)

    Michael A. Guthrie

    2013-01-01

    Full Text Available A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
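For a linear limit state with independent normal variables, the Hasofer-Lind index reduces to the mean of g over its standard deviation, and the implied failure probability can be cross-checked against Monte Carlo as the paper does. The capacity/demand numbers below are illustrative, not the paper's modal strain energies.

```python
import numpy as np
from math import erf, sqrt

def hasofer_lind_linear(a, b, mu, sigma):
    """Hasofer-Lind reliability index for a linear limit state
    g(x) = a.x + b with independent normal variables (means mu, stds sigma).
    For linear g the index is exact: beta = E[g] / std[g]."""
    a, mu, sigma = map(np.asarray, (a, mu, sigma))
    return (a @ mu + b) / np.sqrt(((a * sigma) ** 2).sum())

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Toy example: capacity C minus demand D, failure when g = C - D < 0
beta = hasofer_lind_linear([1.0, -1.0], 0.0, mu=[10.0, 6.0], sigma=[1.0, 1.5])
p_f = phi(-beta)                      # failure probability implied by beta

# Cross-check against Monte Carlo sampling:
rng = np.random.default_rng(0)
x = rng.normal([10.0, 6.0], [1.0, 1.5], size=(1_000_000, 2))
p_mc = np.mean(x[:, 0] - x[:, 1] < 0)
print(beta, p_f, p_mc)
```

For nonlinear limit states (as in the energy-based function of the paper) the index instead comes from an iterative search for the most probable failure point in standard normal space; the linear case above is the closed-form special case.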

  5. RPM-WEBBSYS: A web-based computer system to apply the rational polynomial method for estimating static formation temperatures of petroleum and geothermal wells

    Science.gov (United States)

    Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.

    2015-12-01

    A Web-Based Computer System (RPM-WEBBSYS) has been developed for the application of the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery processes that occur during well completion. RPM-WEBBSYS has been programmed using advances in information technology to perform computations of SFT more efficiently. RPM-WEBBSYS may be easily and rapidly executed by using any computing device (e.g., personal computers and portable computing devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where a good match between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected from well drilling and shut-in operations, where the typical problems of under- and over-estimation of the SFT (exhibited by most of the existing analytical methods) were effectively corrected.
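The idea of rational polynomial extrapolation can be sketched with a low-order fit to synthetic build-up data. The published RPM uses its own functional form and regression procedure; this (1,1) rational model and the synthetic BHT log are assumptions of the sketch.

```python
import numpy as np

def rpm_sft(t, T):
    """Fit a (1,1) rational polynomial T(t) = (p0 + p1*t) / (1 + q1*t) to
    bottomhole temperature build-up data and extrapolate t -> infinity,
    where the static formation temperature is p1/q1.
    The fit is linearized as T = p0 + p1*t - q1*T*t and solved by
    least squares (an illustrative simplification of the published RPM)."""
    A = np.column_stack([np.ones_like(t), t, -T * t])
    p0, p1, q1 = np.linalg.lstsq(A, T, rcond=None)[0]
    return p1 / q1

# Synthetic shut-in build-up toward a true SFT of 120 C:
t = np.array([2.0, 4.0, 8.0, 12.0, 18.0, 24.0])     # hours after circulation stops
T = 120.0 - 400.0 / (t + 10.0)                      # synthetic BHT log
print(rpm_sft(t, T))   # ~120
```

Because the extrapolation is built into the functional form, the estimate does not share the systematic under-/over-estimation of methods that extrapolate a straight line in transformed time.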

  6. Study on computer-aided control system design platform of 10MW high temperature gas-cooled test reactor

    International Nuclear Information System (INIS)

    Feng Yan; Shi Lei; Sun Yuliang; Luo Shaojie

    2004-01-01

    The 10 MW high temperature gas-cooled test reactor (HTR-10) is the first modular pebble bed reactor built in China, which needs to be researched with respect to engineering design, control study, safety analysis and operator training. An integrated system for simulation, control design and online assistance of the HTR-10 (HTRSIMU) has been developed by the Institute of Nuclear Energy Technology (INET) of Tsinghua University. The HTRSIMU system is based on a high-speed local area network, on which a computer-aided control system design platform (CDP) is developed and combined with the simulating subsystem in order to provide a visualized and convenient tool for HTR-10 control system design. The CDP has a friendly man-machine interface and good expansibility, and integrates eighteen types of control items. These control items are divided into two types: linear and non-linear control items. The linear control items include Proportion, Integral, Differential, Inertial, Lead-lag, Oscillation, Pure-lag, Common, PID and Fuzzy, while the non-linear control items include Saturation, Subsection, Insensitive, Backlash, Relay, Insensi-Relay, Sluggish-Relay and Insens-Slug. The CDP provides a visualized platform for control system modeling, and the control loop system can be automatically generated and graphically simulated. Users can conveniently design control loops, modify control parameters, study control methods, and analyze control results just by clicking mouse buttons. This kind of control system design method can provide a powerful tool and good reference for actual system operation of the HTR-10. A control scheme is also given and studied to demonstrate the functions of the CDP in this article. (author)
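A PID control item of the kind listed above can be sketched in a few lines; the discrete form, gains and first-order plant below are illustrative, not INET's implementation.

```python
class PID:
    """Discrete PID control item, one of the linear blocks a platform like
    the CDP composes into control loops (illustrative sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # rectangle integration
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Close the loop around a first-order plant dy/dt = (u - y) / tau:
pid, y, tau, dt = PID(2.0, 1.0, 0.1, 0.01), 0.0, 0.5, 0.01
for _ in range(2000):                 # 20 s of simulated time
    u = pid.step(1.0, y)
    y += dt * (u - y) / tau
print(round(y, 3))                    # settles near the setpoint 1.0
```

In a platform like the CDP, such blocks would be wired together graphically and simulated against the reactor model rather than the toy plant used here.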

  7. Thermodynamic performance assessment of a novel environmentally-benign solar energy based integrated system

    International Nuclear Information System (INIS)

    Yuksel, Yunus Emre; Ozturk, Murat; Dincer, Ibrahim

    2016-01-01

    Highlights: • Development of a novel solar energy based system for multigeneration applications. • Evaluation of the exergy efficiency and destruction rate in each system component. • Investigation of varying operating conditions on the system performance. • Evaluation of complete parametric studies and performance analysis of the system. - Abstract: In this paper, a novel solar energy based multigeneration system for producing electricity, hydrogen, hot water, heating and cooling is presented and analyzed thermodynamically for potential applications. The energy and exergy analyses are conducted for the entire system and its sub-systems, which are a parabolic trough collector system, a double-stage organic Rankine cycle, a proton exchange membrane (PEM) electrolyzer, a PEM fuel cell and a quadruple effect absorption cooling system. The parametric studies are performed in order to indicate the impacts of some key indicators on the integrated system performance. These analyses are simulated using the Engineering Equation Solver software. The results show that an increase in ambient temperature increases the exergetic coefficient of performance of the quadruple effect absorption cooling system. In addition, increases in solar intensity, in the temperature of the absorber pipes' inner surface and in the concentration of ammonia in the working fluid mixture have a positive effect on the electricity produced from the expanders and turbine and on the hydrogen produced from the PEM electrolyzer. According to the exergy analyses, the largest exergy destruction rates are obtained in the parabolic trough collector, the PEM fuel cell and the turbine. Therefore, any improvements in these components would lead to a better efficiency of the integrated system.

  8. The calculation of surface free energy based on embedded atom method for solid nickel

    International Nuclear Information System (INIS)

    Luo Wenhua; Hu Wangyu; Su Kalin; Liu Fusheng

    2013-01-01

    Highlights: ► A new solution for accurate prediction of surface free energy based on the embedded atom method was proposed. ► The temperature dependent anisotropic surface energy of solid nickel was obtained. ► In an isotropic environment, the approach does not change most predictions of bulk material properties. - Abstract: Accurate prediction of the surface free energy of crystalline metals is a challenging task. Theoretical calculations based on embedded atom method potentials often underestimate the surface free energy of metals. With an analytical charge density correction to the argument of the embedding energy of the embedded atom method, an approach to improve the prediction of surface free energy is presented. This approach is applied to calculate the temperature dependent anisotropic surface energy of bulk nickel and the surface energies of nickel nanoparticles, and the obtained results are in good agreement with available experimental data.
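The slab-based evaluation of surface energy that such EAM studies rely on reduces to a simple bookkeeping formula; the slab energy, atom count and cross-sectional area below are invented for illustration and are not the paper's EAM values.

```python
EV = 1.602176634e-19   # J per eV

def surface_energy(e_slab_ev, n_atoms, e_bulk_ev_per_atom, area_m2):
    """gamma = (E_slab - N * e_bulk) / (2 * A): the excess energy of a slab
    over the same number of bulk atoms, divided over its two free surfaces.
    Energies in eV, cross-sectional area in m^2, result in J/m^2."""
    excess = (e_slab_ev - n_atoms * e_bulk_ev_per_atom) * EV
    return excess / (2.0 * area_m2)

# Hypothetical EAM-style numbers for a Ni slab (illustrative only):
gamma = surface_energy(-408.6, 96, -4.45, 6.2e-19)
print(round(gamma, 2))   # J/m^2
```

The charge density correction discussed in the paper enters upstream of this formula, by changing the slab energy `E_slab` that the EAM potential predicts; the bookkeeping itself is unchanged.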

  9. Energy-Based Metrics for Arthroscopic Skills Assessment.

    Science.gov (United States)

    Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa

    2017-08-05

    Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
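The normalization idea above (dividing each energy metric by its value for an ideal performance) can be sketched as follows. The trajectories and unit mass are invented for illustration, and the paper pairs such metrics with SVM/NN classifiers rather than the simple comparison shown here:

```python
def kinetic_energy_metric(positions, dt, mass=1.0):
    """Sum of kinetic energy over a sampled tool-tip trajectory."""
    total = 0.0
    for p0, p1 in zip(positions, positions[1:]):
        v = [(b - a) / dt for a, b in zip(p0, p1)]
        total += 0.5 * mass * sum(c * c for c in v)
    return total

def normalized_metric(measured, ideal):
    """Normalize by the value an ideal performance would achieve."""
    return measured / ideal

# A jerky 'novice' path spends more energy than a straight 'expert' path
# covering the same endpoints (coordinates in arbitrary units).
e_novice = kinetic_energy_metric([(0, 0), (1, 1), (2, 0)], dt=1.0)
e_expert = kinetic_energy_metric([(0, 0), (1, 0), (2, 0)], dt=1.0)
```

A normalized value near 1 indicates near-ideal economy of motion; larger values flag wasted energy, which is the feature the classifiers exploit.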

  10. Energy-Based Facial Rejuvenation: Advances in Diagnosis and Treatment.

    Science.gov (United States)

    Britt, Christopher J; Marcus, Benjamin

    2017-01-01

    The market for nonsurgical, energy-based facial rejuvenation techniques has increased exponentially since lasers were first used for skin rejuvenation in 1983. Advances in this area have led to a wide range of products that require the modern facial plastic surgeon to have a large repertoire of knowledge. This review serves as a guide to current trends in the development of technology, applications, and outcomes of laser and laser-related technology over the past 5 years. We performed a review of PubMed from January 1, 2011, to March 1, 2016, focused on randomized clinical trials, meta-analyses, systematic reviews, and clinical practice guidelines, including case-control studies, case studies, and case reports when necessary, and included 14 articles we deemed landmark articles published before 2011. Three broad categories of technology lead nonsurgical energy-based rejuvenation: lasers, light therapy, and non-laser-based thermal tightening devices. Laser light therapy has continued to diversify with the use of ablative and nonablative resurfacing technologies, fractionated lasers, and their combined use. Light therapy has been developed for use alone or in combination with other technologies. Finally, thermally based nonlaser skin-tightening devices, such as radiofrequency (RF) and intense focused ultrasonography (IFUS), are evolving technologies that have changed rapidly over the past 5 years. Improvements in the safety and efficacy of energy-based treatment have expanded the patient base that considers these therapies viable options. With a wide variety of options, the modern facial plastic surgeon can have a frank discussion with the patient regarding nonsurgical techniques that were never before available. Many of these patients can now derive benefit from treatments requiring significantly less downtime than before, while the clinician can tailor the treatment to maximize benefit and fit the patient's schedule.

  11. Energy Based Clutter Filtering for Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Jensen, Jonas; Ewertsen, Caroline

    2017-01-01

    for obtaining vector flow measurements, since the spectra overlap at high beam-to-flow angles. In this work a distinct approach is proposed, where the energy of the velocity spectrum is used to differentiate between the two signals. The energy based method is applied by limiting the amplitude of the velocity...... spectrum function to a predetermined threshold. The effect of the clutter filtering is evaluated on a plane wave (PW) scan sequence in combination with transverse oscillation (TO) and directional beamforming (DB) for velocity estimation. The performance of the filter is assessed by comparison......
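A minimal sketch of the thresholding step this record describes, assuming the velocity spectrum is available as a complex array: amplitudes above a fixed threshold (the high-energy clutter) are clipped, while weaker flow components pass unchanged.

```python
import numpy as np

def energy_clutter_filter(velocity_spectrum, threshold):
    """Limit the amplitude of the velocity spectrum to a fixed threshold.

    Tissue clutter carries far more energy than blood flow, so clipping
    the spectrum amplitude suppresses clutter while leaving the weaker
    flow signal untouched. Phase is preserved.
    """
    mag = np.abs(velocity_spectrum)
    scale = np.where(mag > threshold, threshold / np.maximum(mag, 1e-30), 1.0)
    return velocity_spectrum * scale

# Strong stationary-tissue component at zero velocity, weak flow components:
spectrum = np.array([10.0 + 0j, 0.5 + 0j, 0.2 + 0j])
filtered = energy_clutter_filter(spectrum, threshold=1.0)
```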

  12. A renewable energy based hydrogen demonstration park in Turkey. HYDEPARK

    Energy Technology Data Exchange (ETDEWEB)

    Ilhan, Niluefer; Ersoez, Atilla [TUEBITAK Marmara Research Center Energy Institute, Gebze Kocaeli (Turkey); Cubukcu, Mete [Ege Univ., Bornova, Izmir (Turkey). Solar Energy Inst.

    2010-07-01

    The main goal of this national project is to research hydrogen technologies and renewable energy applications. Solar and wind energy are utilized to obtain hydrogen via electrolysis, which can either be used in the fuel cell or stored in cylinders for further use. The management of all project work packages was carried out by the TUeBITAK Marmara Research Center (MRC) Energy Institute (EI) with the support of the collaborators. The aim of this paper is to present the units of the renewable energy based hydrogen demonstration park, which is now in the demonstration phase, and to share the experimental results. (orig.)

  13. CONTEMPT-LT/028: a computer program for predicting containment pressure-temperature response to a loss-of-coolant accident

    International Nuclear Information System (INIS)

    Hargroves, D.W.; Metcalfe, L.J.; Wheat, L.L.; Niederauer, G.F.; Obenchain, C.F.

    1979-03-01

    CONTEMPT-LT is a digital computer program, written in FORTRAN IV, developed to describe the long-term behavior of water-cooled nuclear reactor containment systems subjected to postulated loss-of-coolant accident (LOCA) conditions. The program calculates the time variation of compartment pressures, temperatures, mass and energy inventories, heat structure temperature distributions, and energy exchange with adjacent compartments. The program is capable of describing the effects of leakage on containment response. Models are provided to describe fan cooler and cooling spray engineered safety systems. An annular fan model is also provided to model pressure control in the annular region of dual containment systems. Up to four compartments can be modeled with CONTEMPT-LT, and any compartment except the reactor system may have both a liquid pool region and an air-vapor atmosphere region above the pool. Each region is assumed to have a uniform temperature, but the temperatures of the two regions may be different.
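The single-region energy bookkeeping underlying such containment codes can be illustrated with a deliberately minimal constant-volume step; CONTEMPT-LT itself tracks steam/air mixtures, leakage and safety systems, none of which appear in this ideal-gas sketch with invented numbers.

```python
def compartment_step(P, T, m, cv, Q_in):
    """One constant-volume energy-balance step for a uniform region.

    Adding Q_in (J) to a fixed mass m (kg) of ideal gas with specific
    heat cv (J/(kg K)) raises its temperature; at constant volume the
    pressure then scales with absolute temperature.
    """
    T_new = T + Q_in / (m * cv)
    P_new = P * T_new / T  # ideal gas, fixed mass and volume: P/T constant
    return P_new, T_new

# Illustrative numbers: 1000 kg of air at 300 K receiving 7.18 MJ.
P_new, T_new = compartment_step(P=101325.0, T=300.0, m=1000.0,
                                cv=718.0, Q_in=7.18e6)
```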

  14. AMPTRACT: an algebraic model for computing pressure tube circumferential and steam temperature transients under stratified channel coolant conditions

    International Nuclear Information System (INIS)

    Gulshani, P.; So, C.B.

    1986-10-01

    In a number of postulated accident scenarios in a CANDU reactor, some of the horizontal fuel channels are predicted to experience periods of stratified channel coolant conditions, which can lead to a circumferential temperature gradient around the pressure tube. To study pressure tube strain and integrity under stratified flow channel conditions, it is necessary to determine the pressure tube circumferential temperature distribution. This paper presents an algebraic model, called AMPTRACT (Algebraic Model for Pressure Tube TRAnsient Circumferential Temperature), developed to give the transient temperature distribution in closed form. AMPTRACT models the following modes of heat transfer: radiation from the outermost fuel elements to the pressure tube and from the pressure tube to the calandria tube, convection between the superheated steam and both the fuel elements and the pressure tube, and circumferential conduction from the exposed to the submerged part of the pressure tube. An iterative procedure is used to solve the mass and energy equations in closed form for the axial steam and fuel-sheath transient temperature distributions. The one-dimensional conduction equation is then solved to obtain the pressure tube circumferential transient temperature distribution as a cosine series expansion. In the limit of large times, and in the absence of convection and radiation to the calandria tube, the predicted pressure tube temperature distribution reduces identically to a parabolic profile. In this limit, however, radiation cannot be ignored because the temperatures are generally high. Convection and radiation tend to flatten the parabolic distribution.
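The cosine-series structure of the solution can be reproduced with a toy version of the circumferential conduction problem: a steady ring heated on the exposed half and cooled on the submerged half, solved harmonic by harmonic in Fourier space. The source shape and resolution are invented for illustration.

```python
import numpy as np

def ring_temperature(source):
    """Solve d2T/dtheta2 = -s(theta) on a periodic ring via Fourier series.

    For each harmonic n >= 1, T_n = s_n / n^2 (the n = 0 mode is fixed by
    requiring zero mean); this mirrors the cosine-series expansion used
    for the pressure tube temperature distribution.
    """
    n = len(source)
    s_hat = np.fft.rfft(source)
    k = np.arange(len(s_hat), dtype=float)
    T_hat = np.zeros_like(s_hat)
    T_hat[1:] = s_hat[1:] / k[1:] ** 2
    return np.fft.irfft(T_hat, n)

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
# Heated on the exposed (steam) half, cooled on the submerged half:
s = np.where(np.cos(theta) > 0, 1.0, -1.0)
T = ring_temperature(s - s.mean())
```

The profile peaks on the exposed side and bottoms out on the submerged side, the gradient that drives pressure tube strain in the record above.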

  15. Nuclear Engineering Computer Modules, Thermal-Hydraulics, TH-3: High Temperature Gas Cooled Reactor Thermal-Hydraulics.

    Science.gov (United States)

    Reihman, Thomas C.

    This learning module is concerned with the temperature field, the heat transfer rates, and the coolant pressure drop in typical high temperature gas-cooled reactor (HTGR) fuel assemblies. As in all of the modules of this series, emphasis is placed on developing the theory and demonstrating its use with a simplified model. The heart of the module…

  16. The inverse Numerical Computer Program FLUX-BOT for estimating Vertical Water Fluxes from Temperature Time-Series.

    Science.gov (United States)

    Trauth, N.; Schmidt, C.; Munz, M.

    2016-12-01

    Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and are subsequently evaluated using a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions on the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event induced temperature changes, which cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as the extraction of the diurnal temperature variation. We developed software to estimate water FLUXes Based On Temperatures: FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB that calculates vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
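The forward model at the heart of such codes is a Crank-Nicolson discretization of the 1D advection-conduction equation; inverting it means finding the flux (here, the velocity v) that best reproduces an observed series. The toy below uses a single mid-depth "sensor" and a brute-force scan instead of FLUX-BOT's actual multi-depth inversion routines, and all parameter values are invented:

```python
import numpy as np

def cn_step(T, v, D, dz, dt):
    """One Crank-Nicolson step of dT/dt = D*d2T/dz2 - v*dT/dz (Dirichlet ends)."""
    n = len(T)
    r = D * dt / dz ** 2          # diffusion number
    c = v * dt / (2.0 * dz)       # advection number (centered differences)
    A = np.eye(n)
    b = T.copy()
    for i in range(1, n - 1):
        A[i, i - 1] = -0.5 * (r + c)
        A[i, i] = 1.0 + r
        A[i, i + 1] = -0.5 * (r - c)
        b[i] = T[i] + 0.5 * ((r + c) * T[i - 1] - 2.0 * r * T[i]
                             + (r - c) * T[i + 1])
    return np.linalg.solve(A, b)

def simulate(v, D=1e-6, dz=0.02, dt=600.0, n=11, steps=144,
             T_top=15.0, T0=10.0):
    """Step warming at the sediment surface; record a mid-depth 'sensor'."""
    T = np.full(n, T0)
    T[0] = T_top
    series = []
    for _ in range(steps):
        T = cn_step(T, v, D, dz, dt)
        series.append(T[n // 2])
    return np.array(series)

def invert_flux(obs, candidates):
    """Toy inversion: return the candidate v with the smallest misfit."""
    errs = [np.sqrt(np.mean((simulate(v) - obs) ** 2)) for v in candidates]
    return candidates[int(np.argmin(errs))]
```

Downward flow (v > 0) drags the surface warming deeper, so the mid-depth series is sensitive to the flux, which is what makes the inversion possible.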

  17. Ab initio computation of the transition temperature of the charge density wave transition in TiSe2

    Science.gov (United States)

    Duong, Dinh Loc; Burghard, Marko; Schön, J. Christian

    2015-12-01

    We present a density functional perturbation theory approach to estimate the transition temperature of the charge density wave transition of TiSe2. The softening of the phonon mode at the L point, where in TiSe2 a giant Kohn anomaly occurs, and the energy difference between the normal and distorted phases are analyzed. Both features are studied as functions of the electronic temperature, which corresponds to the Fermi-Dirac distribution smearing value in the calculation. The transition temperature is found to be 500 and 600 K by the phonon and energy analyses, respectively, in reasonable agreement with the experimental value of 200 K.
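The phonon-side estimate in such studies amounts to tracking where the soft mode's squared frequency extrapolates to zero as the electronic temperature is lowered. A minimal fit illustrating that logic, with made-up data rather than the paper's:

```python
def soft_mode_tc(temps, omega_sq):
    """Estimate the transition temperature from phonon softening.

    Assumes the soft mode follows omega^2 = a*(T - Tc) near the transition
    and returns the Tc where the least-squares line crosses zero.
    """
    n = len(temps)
    mt = sum(temps) / n
    mw = sum(omega_sq) / n
    a = (sum((t - mt) * (w - mw) for t, w in zip(temps, omega_sq))
         / sum((t - mt) ** 2 for t in temps))
    return mt - mw / a

# Synthetic softening data generated from omega^2 = 0.04*(T - 500):
tc = soft_mode_tc([600.0, 700.0, 800.0, 900.0], [4.0, 8.0, 12.0, 16.0])
```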

  18. Babcock and Wilcox revisions to CONTEMPT, computer program for predicting containment pressure-temperature response to a loss-of-coolant accident

    International Nuclear Information System (INIS)

    Hsii, Y.H.

    1975-01-01

    The CONTEMPT computer program predicts the pressure-temperature response of a single-volume reactor building to a loss-of-coolant accident. The analytical model used for the program is described. CONTEMPT assumes that the loss-of-coolant accident can be separated into two phases; the primary system blowdown and reactor building pressurization. The results of the blowdown analysis serve as the boundary conditions and are input to the CONTEMPT program. Thus, the containment model is only concerned with the pressure and temperature in the reactor building and the temperature distribution through the reactor building structures. The program also calculates building leakage and the effects of engineered safety features such as reactor building sprays, decay heat coolers, sump coolers, etc. 11 references. (U.S.)

  19. RAGRAF: a computer code for calculating temperature distributions in multi-pin fuel assemblies in a stagnant gas atmosphere

    International Nuclear Information System (INIS)

    Eastham, A.

    1979-02-01

    A method of calculating the temperature distribution in a cross-section of a multi-pin nuclear reactor fuel assembly has been computerised. It utilises the thermal radiation interchange between individual fuel pins in either a square or triangular pitched lattice. A stagnant gas atmosphere within the fuel assembly is assumed, which inhibits natural convection but permits thermal conduction between adjacent fuel pins. No restriction is placed upon the shape of wrapper used, but its temperature must always be uniform. RAGRAF has great flexibility because of the many options it provides. Although it is essentially a transient code, steady state solutions may be readily identified from successive temperature prints. An enclosure for the assembly wrapper is available, to be included or discarded at will during transient calculations. Outside the limit of the assembly wrapper, any type or combination of heat transfer modes may be included. Transient variations in boundary temperature may be included if required. (author)
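The core of such a code is a radiation-interchange update between pin surfaces. The sketch below does a gray-body exchange through prescribed view factors with lumped pin heat capacities; the view factors, emissivity and material numbers are invented, and the gas-conduction path RAGRAF also models is omitted.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def pin_step(T, F, eps, area, heat_gen, C, dt):
    """Explicit transient step for lumped fuel-pin temperatures.

    Each pin i exchanges thermal radiation with pin j through view factor
    F[i][j] (gray-body sketch) and carries internal heat generation (W);
    C is the lumped heat capacity of a pin (J/K).
    """
    n = len(T)
    T_new = list(T)
    for i in range(n):
        q = heat_gen[i]
        for j in range(n):
            if j != i:
                q += eps * SIGMA * area * F[i][j] * (T[j] ** 4 - T[i] ** 4)
        T_new[i] = T[i] + dt * q / C
    return T_new

# Two pins facing each other (F = 1 both ways), no internal heating:
T_next = pin_step([600.0, 500.0], [[0.0, 1.0], [1.0, 0.0]],
                  eps=0.8, area=0.01, heat_gen=[0.0, 0.0], C=500.0, dt=1.0)
```

With symmetric view factors and equal heat capacities the exchange conserves energy, so the pin temperatures relax toward each other.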

  20. USE OF COMPUTATIONAL FLUID DYNAMICS TO SIMULATE TEMPERATURE DISTRIBUTION IN BROILER HOUSES WITH NEGATIVE AND POSITIVE TUNNEL TYPE VENTILATION SYSTEMS

    OpenAIRE

    Osorio Saraz, Jairo Alexander; Arêdes Martins, Marcio; Oliveira Rocha, Keller Sullivan; Silva Machado, Neiton; Ciro Velasques, Hector José

    2013-01-01

    In tropical and subtropical countries such as Brazil, temperatures inside agricultural facilities reach values greater than those considered suitable for thermal comfort for both animals and people. To predict the behavior and distribution of temperature in all directions of the structure and make correct decisions about the inside environment, the classic research method would require a significant experimental arrangement. Consequently, classic methodologies do not offer immediate informati...

  1. Proposed Empirical Entropy and Gibbs Energy Based on Observations of Scale Invariance in Open Nonequilibrium Systems.

    Science.gov (United States)

    Tuck, Adrian F

    2017-09-07

    There is no widely agreed definition of entropy, and consequently Gibbs energy, in open systems far from equilibrium. One recent approach has sought to formulate an entropy and Gibbs energy based on observed scale invariances in geophysical variables, particularly in atmospheric quantities, including the molecules constituting stratospheric chemistry. The Hamiltonian flux dynamics of energy in macroscopic open nonequilibrium systems maps to energy in equilibrium statistical thermodynamics, and the corresponding equivalences of scale invariant variables with other relevant statistical mechanical variables, such as entropy, Gibbs energy, and 1/(k_B T) with k_B the Boltzmann constant, are not just formally analogous but are also mappings. Three proof-of-concept representative examples from available adequate stratospheric chemistry observations (temperature, wind speed and ozone) are calculated, with the aim of applying these mappings and equivalences. Potential applications of the approach to scale invariant observations from the literature, involving scales from molecular through laboratory to astronomical, are considered. Theoretical support for the approach from the literature is discussed.

  2. A computational study of syngas auto-ignition characteristics at high-pressure and low-temperature conditions with thermal inhomogeneities

    KAUST Repository

    Pal, Pinaki

    2015-07-30

    A computational study was conducted to investigate the characteristics of auto-ignition in a syngas mixture at high-pressure and low-temperature conditions in the presence of thermal inhomogeneities. Highly resolved one-dimensional numerical simulations incorporating detailed chemistry and transport were performed. The temperature inhomogeneities were represented by a global sinusoidal temperature profile and a local Gaussian temperature spike (hot spot). Reaction front speed and front Damköhler number analyses were employed to characterise the propagating ignition front. In the presence of a global temperature gradient, the ignition behaviour shifted from spontaneous propagation (strong) to deflagrative (weak), as the initial mean temperature of the reactant mixture was lowered. A predictive Zel'dovich–Sankaran criterion to determine the transition from strong to weak ignition was validated for different parametric sets. At sufficiently low temperatures, the strong ignition regime was recovered due to faster passive scalar dissipation of the imposed thermal fluctuations relative to the reaction timescale, which was quantified by the mixing Damköhler number. In the presence of local hot spots, only deflagrative fronts were observed. However, the fraction of the reactant mixture consumed by the propagating front was found to increase as the initial mean temperature was lowered, thereby leading to more enhanced compression-heating of the end-gas. Passive scalar mixing was not found to be important for the hot spot cases considered. The parametric study confirmed that the relative magnitude of the Sankaran number translates accurately to the quantitative strength of the deflagration front in the overall ignition advancement. © 2015 Taylor & Francis
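The strong/weak distinction above is usually quantified by comparing the spontaneous ignition-front speed, set by the gradient of ignition delay, against the deflagration speed. A back-of-envelope sketch with hypothetical magnitudes (the criterion in the paper is evaluated from detailed-chemistry simulations):

```python
def spontaneous_front_speed(dtau_dT, dT_dx):
    """Spontaneous front speed S_sp = (d tau_ig / dx)^-1.

    dtau_dT: magnitude of ignition-delay sensitivity to temperature, s/K.
    dT_dx:   magnitude of the imposed temperature gradient, K/m.
    """
    return 1.0 / (dtau_dT * dT_dx)

def sankaran_number(S_L, S_sp, beta=0.5):
    """Sa = beta*S_L/S_sp; Sa > 1 marks deflagrative (weak) ignition."""
    return beta * S_L / S_sp

# Hypothetical mixture: shallow delay sensitivity -> fast spontaneous front.
s_sp = spontaneous_front_speed(dtau_dT=8e-3, dT_dx=10.0)
sa = sankaran_number(S_L=0.4, S_sp=s_sp)  # well below 1: strong ignition
```

Lowering the mean temperature steepens the delay sensitivity, slowing the spontaneous front until the laminar flame overtakes it, which is the shift to weak ignition described above.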

  3. A computational study of syngas auto-ignition characteristics at high-pressure and low-temperature conditions with thermal inhomogeneities

    Science.gov (United States)

    Pal, Pinaki; Mansfield, Andrew B.; Arias, Paul G.; Wooldridge, Margaret S.; Im, Hong G.

    2015-09-01

    A computational study was conducted to investigate the characteristics of auto-ignition in a syngas mixture at high-pressure and low-temperature conditions in the presence of thermal inhomogeneities. Highly resolved one-dimensional numerical simulations incorporating detailed chemistry and transport were performed. The temperature inhomogeneities were represented by a global sinusoidal temperature profile and a local Gaussian temperature spike (hot spot). Reaction front speed and front Damköhler number analyses were employed to characterise the propagating ignition front. In the presence of a global temperature gradient, the ignition behaviour shifted from spontaneous propagation (strong) to deflagrative (weak), as the initial mean temperature of the reactant mixture was lowered. A predictive Zel'dovich-Sankaran criterion to determine the transition from strong to weak ignition was validated for different parametric sets. At sufficiently low temperatures, the strong ignition regime was recovered due to faster passive scalar dissipation of the imposed thermal fluctuations relative to the reaction timescale, which was quantified by the mixing Damköhler number. In the presence of local hot spots, only deflagrative fronts were observed. However, the fraction of the reactant mixture consumed by the propagating front was found to increase as the initial mean temperature was lowered, thereby leading to more enhanced compression-heating of the end-gas. Passive scalar mixing was not found to be important for the hot spot cases considered. The parametric study confirmed that the relative magnitude of the Sankaran number translates accurately to the quantitative strength of the deflagration front in the overall ignition advancement.

  4. Numerical simulation of temperature and thermal stress for nuclear piping by using computational fluid dynamics analysis and Green’s function

    Energy Technology Data Exchange (ETDEWEB)

    Boo, Myung-Hwan [Korea Hydro and Nuclear Power Company, Daejeon (Korea, Republic of); Oh, Chang-Kyun; Kim, Hyun-Su [KEPCO Engineering and Construction Company, Gimcheon (Korea, Republic of); Choi, Choeng-Ryul [ELSOLTEC, Inc., Yongin (Korea, Republic of)

    2017-05-15

    Because thermal fatigue is a well-known damage mechanism in nuclear power plants, accurate stress and fatigue evaluation is highly important. Operating experience shows that the design condition is conservative compared to the actual one. Therefore, various fatigue monitoring methods have been extensively utilized to take the actual operating data into account. However, defining the local temperature in the piping is difficult because temperature-measuring instruments are limited. The purpose of this paper is to define an accurate local temperature in the piping and to evaluate the thermal stress using Green’s function (GF), by performing a series of computational fluid dynamics analyses considering the complex fluid conditions. The thermal stress determined by adopting GF is also compared with that of the design condition. The fluid dynamics analysis results indicate that the fluid temperature varies more slowly than in the design condition, even when the flow rate changes abruptly. In addition, the resulting thermal stress can decrease significantly when the actual temperature is reflected.
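The Green's function step reduces to Duhamel superposition: convolve the temperature increments with the stress response to a unit step change. The sketch below accepts an arbitrary response function; in practice G would come from a finite element unit-step analysis, and the numbers here are invented.

```python
def thermal_stress_history(temps, dt, G):
    """Duhamel superposition: sigma(t_n) = sum_k G(t_n - t_k) * dT_k.

    temps: sampled temperature history at the location of interest.
    G:     stress response (e.g. MPa/K) to a unit step temperature change.
    """
    n = len(temps)
    sigma = [0.0] * n
    for k in range(1, n):
        dT = temps[k] - temps[k - 1]
        for i in range(k, n):
            sigma[i] += G((i - k) * dt) * dT
    return sigma

# With a constant (non-decaying) response, stress simply tracks T - T(0):
sigma = thermal_stress_history([20.0, 30.0, 50.0, 50.0], dt=60.0,
                               G=lambda t: 2.0)
```

Because the CFD-derived temperature varies more slowly than the design transient, its increments are smaller and the superposed stress is correspondingly lower, as the record concludes.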

  5. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

    Directory of Open Access Journals (Sweden)

    Benjamin Scellier

    2017-05-01

    We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST
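The two-phase recipe can be made concrete on a deliberately tiny energy-based model: one input, one free unit, quadratic energy. This reduction is mine, not the paper's experimental setup, but it exhibits the defining property that the free/nudged difference recovers the backpropagation gradient as the nudging strength beta goes to 0.

```python
def eqprop_gradient(W, x, y, beta):
    """Equilibrium Propagation on a one-unit toy network.

    Energy E(s) = 0.5*s^2 - W*x*s, cost C(s) = 0.5*(s - y)^2.
    The free phase relaxes to argmin E; the nudged phase to
    argmin E + beta*C. The weight gradient is estimated purely from
    dE/dW evaluated at the two fixed points.
    """
    s_free = W * x                                # free fixed point
    s_nudged = (W * x + beta * y) / (1.0 + beta)  # weakly clamped fixed point
    dEdW_free = -x * s_free
    dEdW_nudged = -x * s_nudged
    return (dEdW_nudged - dEdW_free) / beta

def backprop_gradient(W, x, y):
    """Exact gradient of the loss 0.5*(W*x - y)^2 for comparison."""
    return (W * x - y) * x
```

For this model the estimate equals x*(W*x - y)/(1 + beta), so the bias vanishes linearly in beta, matching the small-perturbation argument in the abstract.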

  6. Cooling of Gas Turbines. 6; Computed Temperature Distribution Through Cross Section of Water-Cooled Turbine Blade

    Science.gov (United States)

    Livingood, John N. B.; Sams, Eldon W.

    1947-01-01

    A theoretical analysis of the cross-sectional temperature distribution of a water-cooled turbine blade was made using the relaxation method to solve the differential equation derived from the analysis. The analysis was applied to specific turbine blade and the studies icluded investigations of the accuracy of simple methods to determine the temperature distribution along the mean line of the rear part of the blade, of the possible effect of varying the perimetric distribution of the hot gas-to -metal heat transfer coefficient, and of the effect of changing the thermal conductivity of the blade metal for a constant cross sectional area blade with two quarter inch diameter coolant passages.

  7. Assessment of the computational uncertainty of temperature rise and SAR in the eyes and brain under far-field exposure from 1 to 10 GHz

    International Nuclear Information System (INIS)

    Laakso, Ilkka

    2009-01-01

    This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m-2 was sufficient for ensuring that the temperature rise in the eyes and brain was less than 1 deg. C in the whole frequency range.
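The order of magnitude behind the final statement can be checked with a perfusion-only reduction of the Pennes bioheat equation (dropping conduction, which the paper's multigrid solver of course retains). The tissue parameters are typical literature-style values chosen for illustration, not the paper's:

```python
import math

def bioheat_rise(sar, t, rho=1050.0, c=3600.0, rho_b=1050.0,
                 c_b=3617.0, w=0.009):
    """Temperature rise from a perfusion-only Pennes bioheat sketch.

    rho*c*d(dT)/dt = rho*sar - rho_b*c_b*w*dT  gives an exponential
    approach dT(t) = dT_ss*(1 - exp(-t/tau)), with w the blood perfusion
    per unit tissue volume (1/s). Conduction is neglected.
    """
    dT_ss = rho * sar / (rho_b * c_b * w)
    tau = rho * c / (rho_b * c_b * w)
    return dT_ss * (1.0 - math.exp(-t / tau))

# Long-time rise for a SAR of 2 W/kg in brain-like, well-perfused tissue:
rise = bioheat_rise(sar=2.0, t=1e6)  # a few hundredths of a kelvin
```

Even at localized SARs tens of times higher, this scaling keeps the steady rise below 1 K, consistent with the paper's conclusion for a 100 W m-2 incident power density.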

  8. Assessment of the computational uncertainty of temperature rise and SAR in the eyes and brain under far-field exposure from 1 to 10 GHz

    Science.gov (United States)

    Laakso, Ilkka

    2009-06-01

    This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m-2 was sufficient for ensuring that the temperature rise in the eyes and brain was less than 1 °C in the whole frequency range.

  9. Assessment of the computational uncertainty of temperature rise and SAR in the eyes and brain under far-field exposure from 1 to 10 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Laakso, Ilkka [Department of Radio Science and Engineering, Helsinki University of Technology, Otakaari 5 A, 02150 Espoo (Finland)], E-mail: ilkka.laakso@tkk.fi

    2009-06-07

    This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m-2 was sufficient for ensuring that the temperature rise in the eyes and brain was less than 1 deg. C in the whole frequency range.

  10. Demand Side Management Using the Internet of Energy based on Fog and Cloud Computing

    DEFF Research Database (Denmark)

    Shahryari, Kolsoom; Anvari-Moghaddam, Amjad

    2017-01-01

    The smart grid, as a communication network, allows numerous connected devices such as sensors, relays and actuators to interact and cooperate with each other. An Internet-based solution for electricity that provides bidirectional flow of information and power is internet of energy (IoE) which...

  11. Simulation of uranium transport with variable temperature and oxidation potential: The computer program THCC [Thermo-Hydro-Chemical Coupling

    International Nuclear Information System (INIS)

    Carnahan, C.L.

    1986-12-01

    A simulator of reactive chemical transport has been constructed with the capabilities of treating variable temperatures and variable oxidation potentials within a single simulation. Homogeneous and heterogeneous chemical reactions are simulated at temperature-dependent equilibrium, and changes of oxidation states of multivalent elements can be simulated during transport. Chemical mass action relations for formation of complexes in the fluid phase are included explicitly within the partial differential equations of transport, and a special algorithm greatly simplifies treatment of reversible precipitation of solid phases. This approach allows direct solution of the complete set of governing equations for concentrations of all aqueous species and solids affected simultaneously by chemical and physical processes. Results of example simulations of transport, along a temperature gradient, of uranium solution species under conditions of varying pH and oxidation potential and with reversible precipitation of uraninite and coffinite are presented. The examples illustrate how inclusion of variable temperature and oxidation potential in numerical simulators can enhance understanding of the chemical mechanisms affecting migration of multivalent waste elements

  12. Analysis of inner structure changes of concretes exposed to high temperatures using micro X-ray computed tomography

    OpenAIRE

    Sitek, L. (Libor); Bodnárová, L.; Souček, K. (Kamil); Staš, L. (Lubomír); Gurková, L. (Lucie)

    2015-01-01

    X-ray Computed Tomography (X-ray CT) represents a progressive non-destructive method of analysing the inner structure of materials. The method was used for monitoring changes in the inner structure of concrete samples of different composition before and after their exposure to various thermal loads. Eight types of concrete samples were prepared, which differed by cement and aggregate types. We intentionally used such composition of the concrete mixtures which increased their resist...

  13. Using the PSCPCSP computer software for optimization of the composition of industrial alloys and development of new high-temperature nickel-base alloys

    Science.gov (United States)

    Rtishchev, V. V.

    1995-11-01

    Using computer programs, some foreign firms have developed new deformable and castable high-temperature nickel-base alloys such as IN, Rene, Mar-M, Udimet, TRW, TM, TMS, TUT, with equiaxial, columnar, and single-crystal structures for manufacturing functional and nozzle blades and other parts of the hot duct of transport and stationary gas-turbine installations (GTI). Similar investigations have been carried out in Russia. This paper presents examples of the use of the PSCPCSP computer software for a quantitative analysis of structural and phase characteristics and properties of industrial alloys with change (within the grade range) in the concentrations of the alloying elements, for optimizing the composition of the alloys and the regimes of their heat treatment.

  14. Computational fluid dynamics analysis of an innovative start-up method of high temperature fuel cells using dynamic 3d model

    Directory of Open Access Journals (Sweden)

    Kupecki Jakub

    2017-03-01

    Full Text Available The article presents a numerical analysis of an innovative method for starting systems based on high temperature fuel cells. The possibility of preheating the fuel cell stacks from the cold state to the nominal working conditions encounters several limitations related to heat transfer and the stability of materials. The lack of rapid and safe start-up methods limits the proliferation of MCFCs and SOFCs. For that reason, an innovative method was developed and verified using the numerical analysis presented in the paper. A dynamic 3D model was developed that enables thermo-fluidic investigations and the determination of measures for shortening the preheating time of high temperature fuel cell stacks. The model was implemented in the ANSYS Fluent computational fluid dynamics (CFD) software and was used for verification of the proposed start-up method. The SOFC was chosen as the reference fuel cell technology for the study. Results obtained from the study are presented and discussed.

  15. MORECA: A computer code for simulating modular high-temperature gas-cooled reactor core heatup accidents

    International Nuclear Information System (INIS)

    Ball, S.J.

    1991-10-01

    The design features of the modular high-temperature gas-cooled reactor (MHTGR) have the potential to make it essentially invulnerable to damage from postulated core heatup accidents. This report describes the ORNL MORECA code, which was developed for analyzing postulated long-term core heatup scenarios in which the active cooling systems used to remove afterheat following the accidents can be assumed to be unavailable. Simulations of long-term loss-of-forced-convection accidents, both with and without depressurization of the primary coolant, have shown that maximum core temperatures stay below the point at which any significant fuel failures and fission product releases are expected. Sensitivity studies also have been done to determine the effects of errors in the predictions due both to uncertainties in the modeling and to the assumptions about operational parameters. MORECA models the US Department of Energy reference design of a standard MHTGR

  16. Computer modeling of the process of self-propagating high-temperature synthesis in thin system Ni-Al

    International Nuclear Information System (INIS)

    Poletayev, G.M.; Starostenkov, M.D.; Denisova, N.F.; Skakov, M.K.

    2004-01-01

    Full text: The process of synthesis of thermal phases in the Ni-Al system is studied by the method of molecular dynamics. A two-dimensional crystal, corresponding to the atomic packing of a plane of a volumetric fcc crystal, was chosen as the object of investigation. Pure Ni was taken as the matrix crystal, and a particle of pure Al was placed in the center of the matrix block. Beyond the bounds of the calculated block, the crystal packing is repeated with the help of periodic boundary conditions. The interaction between different pairs of atoms is set by a Morse pair potential function, considering interatomic bonding up to the sixth coordination sphere. The distribution of atomic velocities in the system is set through the Boltzmann factor, depending on the temperature. When the bicrystal is represented by ideal atomic packing and there are no vacancies, structural rearrangement is only observed at temperatures above the melting point. This rearrangement proceeds by a circular mechanism of atom relocation, also across the border between the phases of pure Ni and Al. As a result, the Al particle is transformed and, at the border between the metals, fields of positional disorder and embryos of the intermetallide phases NiAl2, Ni2Al and Ni3Al appear. The introduction of free volume through the creation of vacancies significantly lowers the temperature at which the synthesis of the intermetallide phases begins. The greatest decrease in temperature, down to 300 K, occurs when the vacancies are located in the Ni field of the bicrystal; the onset of thermo-activation is directly connected with the distance from the interphase border. As thermo-activation continues, vacancies located in the Ni matrix up to the seventh neighborhood relative to the bicrystal border enter the Al particle field and activate the synthesis process

  17. Analysis of inner structure changes of concretes exposed to high temperatures using micro X-ray computed tomography

    Czech Academy of Sciences Publication Activity Database

    Sitek, Libor; Bodnárová, L.; Souček, Kamil; Staš, Lubomír; Gurková, Lucie

    2015-01-01

    Roč. 12, č. 1 (2015), s. 78-89 ISSN 1214-9705 R&D Projects: GA ČR GAP104/12/1988 Institutional support: RVO:68145535 Keywords : concrete * cement paste * aggregate * high temperature Subject RIV: JQ - Machines ; Tools Impact factor: 0.561, year: 2015 https://www.irsm.cas.cz/materialy/acta_content/2015_doi/Sitek_AGG_2015_0009.pdf

  18. Babcock and Wilcox revisions to CONTEMPT, computer program for predicting containment pressure-temperature response to a loss-of-coolant accident

    International Nuclear Information System (INIS)

    Hsii, Y.H.

    1976-06-01

    The CONTEMPT computer program predicts the pressure-temperature response of a single-volume reactor building to a loss-of-coolant accident. The report describes the analytical model used for the program. CONTEMPT assumes that the loss-of-coolant accident can be separated into two phases: the primary system blowdown and reactor building pressurization. The results of the blowdown analysis serve as the boundary conditions and are input to the CONTEMPT program. Thus, the containment model is only concerned with the pressure and temperature in the reactor building and the temperature distribution through the reactor building structures. The user is required to input the description of the discharge of coolant, the boiling of residual water by reactor decay heat, the superheating of steam passing through the core, and metal-water reactions. The reactor building is separated into liquid and vapor regions. Each region is internally in thermal equilibrium, but the two regions may not be in thermal equilibrium with each other; the liquid and gaseous regions may have different temperatures. The reactor building is represented as consisting of several heat-conducting structures whose thermal behavior can be described by the one-dimensional multi-region heat conduction equation. The program also calculates building leakage and the effects of engineered safety features such as reactor building sprays, decay heat coolers, sump coolers, etc

  19. Temperature based daily incoming solar radiation modeling based on gene expression programming, neuro-fuzzy and neural network computing techniques.

    Science.gov (United States)

    Landeras, G.; López, J. J.; Kisi, O.; Shiri, J.

    2012-04-01

    The correct observation/estimation of surface incoming solar radiation (RS) is very important for many agricultural, meteorological and hydrological applications. While most weather stations are equipped with sensors for air temperature, sensors for the detection of solar radiation are less common and the data quality they provide is sometimes poor. In these cases it is necessary to estimate this variable. Temperature-based modeling procedures are reported in this study for estimating daily incoming solar radiation, using Gene Expression Programming (GEP) for the first time, along with other artificial intelligence models such as Artificial Neural Networks (ANNs) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). Traditional temperature-based solar radiation equations were also included in this study and compared with the artificial intelligence based approaches. Root mean square error (RMSE), mean absolute error (MAE), the RMSE-based skill score (SSRMSE), the MAE-based skill score (SSMAE) and the Nash-Sutcliffe r2 criterion were used to assess the models' performances. An ANN (a four-input multilayer perceptron with ten neurons in the hidden layer) presented the best performance among the studied models (RMSE of 2.93 MJ m-2 d-1). A four-input ANFIS model proved to be an interesting alternative to ANNs (RMSE of 3.14 MJ m-2 d-1). A very limited number of studies have been done on the estimation of solar radiation based on ANFIS, and the present one demonstrates the ability of ANFIS to model solar radiation based on temperatures and extraterrestrial radiation. This study also demonstrated, for the first time, the ability of GEP models to model solar radiation based on daily atmospheric variables. Although the accuracy of the GEP models was slightly lower than that of the ANFIS and ANN models, genetic programming models (i.e., GEP) are superior to other artificial intelligence models in giving a simple explicit equation for the
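The abstract does not name which traditional temperature-based equations were compared; the Hargreaves-Samani form is a standard example of that family. A minimal sketch of it, together with the RMSE score used to rank the models (the coefficient value is a common default, not taken from the paper):

```python
import math

def hargreaves_samani_rs(tmax, tmin, ra, krs=0.17):
    """Estimate daily incoming solar radiation Rs (MJ m-2 d-1).

    Hargreaves-Samani form: Rs = krs * sqrt(Tmax - Tmin) * Ra, where Ra is
    the extraterrestrial radiation (MJ m-2 d-1) and krs an empirical
    coefficient (roughly 0.16 for interior, 0.19 for coastal sites).
    """
    if tmax < tmin:
        raise ValueError("tmax must be >= tmin")
    return krs * math.sqrt(tmax - tmin) * ra

def rmse(predicted, observed):
    """Root mean square error, the main criterion used to score the models."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
```

A model comparison like the one in the study would compute `rmse` between each model's daily estimates and the pyranometer observations.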

  20. Temperature control of CMS Barrel ECAL (EB) : computational thermo-hydraulic model for dynamic behaviour, control aspects

    CERN Document Server

    Wertelaers, P

    2010-01-01

    The current design foresees a central heat exchanger followed by a controlled post-heater for the whole ECAL. We discuss the scheme and try to assess its performance from a Barrel viewpoint. This is based on computational work. The coolant transfer pipes play an essential role in building a dynamical model. After some studies of the behaviour of the cooling circuit itself, a robust yet simple controller is proposed. The system with feedback control is then scrutinized, with emphasis on disturbance rejection. The most relevant disturbances are cooling ripple, heat attack on the pipes, and the electronics' switching.

  1. Comparing between predicted output temperature of flat-plate solar collector and experimental results: computational fluid dynamics and artificial neural network

    Directory of Open Access Journals (Sweden)

    F Nadi

    2017-05-01

    Full Text Available Introduction The significance of solar energy as a renewable energy source, clean and without damage to the environment, for the production of electricity and heat is of great importance. Furthermore, due to the oil crisis, as well as the potential for reducing home heating costs by 70%, solar energy has been a favorite of many researchers over the past two decades. Solar collectors are devices for collecting solar radiant energy, through which this energy is converted into heat that is then transferred to a fluid (usually air or water). Therefore, a key component in the performance improvement of a solar heating system is optimization of the solar collector under different testing conditions. However, estimation of output parameters under different testing conditions is costly, time consuming and often impossible. As a result, the smart use of neural networks as well as CFD (computational fluid dynamics) to predict the properties from which the desired output would be obtained is valuable. To the best of our knowledge, there are no studies that compare experimental results with both CFD and ANN. Materials and Methods A corrugated galvanized iron sheet of 2 m length, 1 m width and 0.5 mm thickness was used as an absorber plate for absorbing the incident solar radiation (Fig. 1 and 2). Corrugations in the absorber induced turbulent air flow and improved the heat transfer coefficient. The K-ε turbulence model was used for the computational fluid dynamics simulation. The following assumptions are made in the analysis: (1) Air is a continuous medium and incompressible. (2) The flow is steady and possesses turbulent flow characteristics, due to the high velocity of flow. (3) The thermo-physical properties of the absorber sheet and the absorber tube are constant with respect to the operating temperature. (4) The bottom side of the absorber tube and the absorber plate are assumed to be adiabatic.
    Artificial neural network In this research a one-hidden-layer feed-forward network based on the

  2. High temperature breakdown of the Stokes-Einstein relation in a computer simulated Cu-Zr melt

    Energy Technology Data Exchange (ETDEWEB)

    Han, X. J., E-mail: xjhan@sjtu.edu.cn; Li, J. G., E-mail: lijg@sjtu.edu.cn [School of Materials Science and Engineering, Shanghai Jiao Tong University, Dongchuan Rd. 800, 200240 Shanghai (China); Schober, H. R., E-mail: h.schober@fz-juelich.de [Peter Grünberg Institut, Forschungszentrum Jülich, D-52425 Jülich (Germany)

    2016-03-28

    Transport properties and the Stokes-Einstein (SE) relation in liquid Cu8Zr3 are studied by molecular dynamics simulation with a modified embedded atom potential. The critical temperature Tc of mode coupling theory (MCT) is derived as 930 K from the self-diffusion coefficient D and viscosity η. The SE relation breaks down around TSE = 1900 K, which is far above Tc. At temperatures below TSE, the product of D and η fluctuates around a constant value, similar to the prediction of MCT near Tc. The influence of the microscopic atomic motion on macroscopic properties is investigated by analyzing the time-dependent liquid structure and the self-hole filling process. The self-holes for the two components are preferentially filled by atoms of the same component. The self-hole filling dynamics explains the different breakdown behaviors of the SE relation in Zr-rich liquid CuZr2 compared to Cu-rich Cu8Zr3. At TSE, a kink is found in the temperature dependence of both partial and total coordination numbers for the three atomic pair combinations and of the typical time of self-hole filling. This indicates a strong correlation between liquid structure, atomic dynamics, and the breakdown of the SE relation. The previously suggested usefulness of the parameter d(D1/D2)/dT to predict TSE is confirmed. Additionally we propose a viscosity criterion to predict TSE in the absence of diffusion data.
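The SE relation itself, and the constancy check behind the observation that the product of D and η fluctuates around a constant value, can be sketched as follows (a generic SI-unit illustration; the hydrodynamic radius and boundary-condition factor are assumptions, not values from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_d(temperature, viscosity, radius, slip=False):
    """Stokes-Einstein diffusion coefficient D = kB*T / (c*pi*eta*r),
    with c = 6 for the stick and c = 4 for the slip boundary condition."""
    c = 4.0 if slip else 6.0
    return K_B * temperature / (c * math.pi * viscosity * radius)

def se_product(d_coeff, viscosity, temperature):
    """D*eta/T: constant while the SE relation holds; drift of this
    quantity with temperature signals the breakdown discussed above."""
    return d_coeff * viscosity / temperature
```

Plotting `se_product` against temperature for simulated D and η pairs is one way to locate the kink that marks TSE.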

  3. CONTEMPT: computer program for predicting containment pressure-temperature response to a loss-of-coolant accident

    International Nuclear Information System (INIS)

    Hsii, Y.H.

    1978-04-01

    The CONTEMPT code is used by Babcock and Wilcox for containment analysis following a postulated loss-of-coolant accident. An additional model is described which is used for calculating the long-term post-reflood mass and energy releases to the containment, used in the containment design-basis LOCA calculations. These calculations maximize the rate of energy flow to the containment. The mass and energy data are given to the containment designer for use in calculating the containment building design pressure and temperature and in sizing containment heat removal equipment

  4. Analysis of Inner Structure Changes of Concretes Exposed to High Temperatures Using Micro X-ray Computed Tomography

    Czech Academy of Sciences Publication Activity Database

    Sitek, Libor; Bodnárová, L.; Souček, Kamil; Staš, Lubomír; Gurková, Lucie

    2015-01-01

    Roč. 12, č. 1 (2015), s. 79-89 ISSN 1214-9705 R&D Projects: GA MŠk(CZ) LO1406; GA ČR GAP104/12/1988; GA MŠk ED2.1.00/03.0082 Institutional support: RVO:68145535 Keywords : concrete * cement paste * aggregate * high temperature Subject RIV: JQ - Machines ; Tools Impact factor: 0.561, year: 2015 https://www.irsm.cas.cz/materialy/acta_content/2015_doi/Sitek_AGG_2015_0009.pdf

  5. Development of THYDE-HTGR: computer code for transient thermal-hydraulics of high-temperature gas-cooled reactor

    International Nuclear Information System (INIS)

    Hirano, Masashi; Hada, Kazuhiko

    1990-04-01

    The THYDE-HTGR code has been developed for transient thermal-hydraulic analyses of high-temperature gas-cooled reactors, based on the THYDE-W code. THYDE-W is a code developed at JAERI for the simulation of Light Water Reactor plant dynamics during various types of transients including loss-of-coolant accidents. THYDE-HTGR solves the conservation equations of mass, momentum and energy for compressible gas, or single-phase or two-phase flow. The major code modification from THYDE-W is to treat helium loops as well as water loops. In parallel to this, modification has been made for the neutron kinetics to be applicable to helium-cooled graphite-moderated reactors, for the heat transfer models to be applicable to various types of heat exchangers, and so forth. In order to assess the validity of the modifications, analyses of some of the experiments conducted at the High Temperature Test Loop of ERANS have been performed. In this report, the models applied in THYDE-HTGR are described focusing on the present modifications and the results from the assessment calculations are presented. (author)

  6. Compact fusion energy based on the spherical tokamak

    Science.gov (United States)

    Sykes, A.; Costley, A. E.; Windsor, C. G.; Asunta, O.; Brittles, G.; Buxton, P.; Chuyanov, V.; Connor, J. W.; Gryaznevich, M. P.; Huang, B.; Hugill, J.; Kukushkin, A.; Kingham, D.; Langtry, A. V.; McNamara, S.; Morgan, J. G.; Noonan, P.; Ross, J. S. H.; Shevchenko, V.; Slade, R.; Smith, G.

    2018-01-01

    Tokamak Energy Ltd, UK, is developing spherical tokamaks using high temperature superconductor magnets as a possible route to fusion power using relatively small devices. We present an overview of the development programme including details of the enabling technologies, the key modelling methods and results, and the remaining challenges on the path to compact fusion.

  7. Energy based methods for determining elastic plastic fracture

    International Nuclear Information System (INIS)

    Witt, F.J.

    1979-01-01

    Several methods are currently in use or under study for calculating various conditions of fracturing for varying degrees of plasticity. Among these are innovations on the J-integral concept, crack opening displacement or angle, the two-parameter concept and the equivalent energy method. Methods involving crack arrest and ductile tearing also fall in this category. Each of these methods has many salient points, and some efforts are underway to establish the underlying relationships between them. In this paper, the current research directions of the J-integral and equivalent energy methodologies are reviewed, with a broader discussion presented for the equivalent energy methodology. The fundamental basis of the equivalent energy methodology rests with the volumetric energy ratio. For fractures governed by linear elastic fracture mechanics, the volumetric energy ratio is independent of flaw size and geometry and depends only on the scale factor between model and prototype and on temperature. The behavioral aspects of the volumetric energy ratios have been investigated throughout the temperature range from brittle fracture to fully ductile fracture. For five different specimen and structural configurations it has been shown experimentally that the volumetric energy ratio retains its basic properties. That is, the volumetric energy ratio, while changing in actual value, maintains its independence of geometry and flaw size while retaining a unique dependence on scale factor and temperature. This property, interpreted in terms of fracture mechanics, leads to the equivalent energy method. (orig.)

  8. Computational Search for Two-Dimensional MX2 Semiconductors with Possible High Electron Mobility at Room Temperature

    Directory of Open Access Journals (Sweden)

    Zhishuo Huang

    2016-08-01

    Full Text Available Neither of the two typical two-dimensional materials, graphene and single-layer MoS2, is good enough for developing semiconductor logic devices. We calculated the electron mobility of 14 two-dimensional semiconductors with composition MX2, where M (= Mo, W, Sn, Hf, Zr and Pt) is a transition metal and X is S, Se or Te. We approximated the electron-phonon scattering matrix by deformation potentials, within which long-wave longitudinal acoustic and optical phonon scatterings were included. Piezoelectric scattering in the compounds without inversion symmetry is also taken into account. We found that, out of the 14 compounds, WS2, PtS2 and PtSe2 are promising for logic devices regarding the possible high electron mobility and finite band gap. In particular, the phonon-limited electron mobility in PtSe2 reaches about 4000 cm²·V⁻¹·s⁻¹ at room temperature, which is the highest among the compounds with an indirect bandgap of about 1.25 eV under the local density approximation. Our results can be a first guide for experiments to synthesize better two-dimensional materials for future semiconductor devices.
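The acoustic deformation-potential part of such a mobility calculation is often written in the Bardeen-Shockley/Takagi 2D form. A sketch of that single contribution with illustrative, not paper-derived, parameter values (the full model in the abstract also includes optical-phonon and piezoelectric scattering, which are not reproduced here):

```python
import math

E = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_E = 9.1093837015e-31   # electron rest mass, kg

def mobility_2d_adp(c_2d, m_eff, e1_ev, temperature=300.0):
    """Acoustic deformation-potential-limited 2D electron mobility (m^2/V/s)
    for an isotropic band: mu = e*hbar^3*C2D / (kB*T*m*^2*E1^2).

    c_2d: 2D elastic modulus (N/m); m_eff: effective mass in units of m_e;
    e1_ev: deformation potential constant in eV.
    """
    e1 = e1_ev * E
    m = m_eff * M_E
    return E * HBAR ** 3 * c_2d / (K_B * temperature * m ** 2 * e1 ** 2)
```

The characteristic 1/T dependence of this term is one reason room-temperature mobilities are quoted as the benchmark.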

  9. Free Energy-Based Virtual Screening and Optimization of RNase H Inhibitors of HIV-1 Reverse Transcriptase.

    Science.gov (United States)

    Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio

    2016-09-30

    We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.

  10. A review of solar energy based heat and power generation systems

    DEFF Research Database (Denmark)

    Modi, Anish; Bühler, Fabian; Andreasen, Jesper Graa

    2017-01-01

    The utilization of solar energy based technologies has attracted increased interest in recent times in order to satisfy the various energy demands of our society. This paper presents a thorough review of the open literature on solar energy based heat and power plants. In order to limit the scope ...

  11. An energy-based beam hardening model in tomography

    International Nuclear Information System (INIS)

    Casteele, E van de; Dyck, D van; Sijbers, J; Raman, E

    2002-01-01

    As a consequence of the polychromatic X-ray source used in micro-computed tomography (μCT) and in medical CT, the attenuation is no longer a linear function of absorber thickness. If this nonlinear beam hardening effect is not compensated, the reconstructed images will be corrupted by cupping artefacts. In this paper, a bimodal energy model for the detected energy spectrum is presented, which can be used for the reduction of artefacts caused by beam hardening under well-specified conditions. Based on the combination of the spectrum of the source and the detector efficiency, the assumption is made that there are two dominant energies which describe the system. The validity of the proposed model is examined by fitting the model to experimental datapoints obtained on a microtomograph for different materials and source voltages
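The nonlinearity the bimodal model captures can be sketched directly: with two dominant energies, the measured attenuation -ln(I/I0) grows sub-linearly with absorber thickness, which is the origin of the cupping artefact. The weights and attenuation coefficients below are illustrative, not fitted values from the paper:

```python
import math

def bimodal_attenuation(x, w1, mu1, mu2):
    """Measured attenuation -ln(I/I0) of a two-energy beam after thickness x.

    w1 and (1 - w1) are the detected fractions of the two dominant energies;
    mu1 and mu2 are their linear attenuation coefficients (1/length).
    For small x this approximates (w1*mu1 + (1-w1)*mu2)*x; for large x the
    slope tends to min(mu1, mu2) as the beam "hardens".
    """
    w2 = 1.0 - w1
    return -math.log(w1 * math.exp(-mu1 * x) + w2 * math.exp(-mu2 * x))
```

Fitting `w1`, `mu1` and `mu2` to measured attenuation-versus-thickness data is the calibration step the paper performs per material and source voltage.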

  12. Computational thermal-fluid dynamics analysis of the laminar flow regime in the meander flow geometry characterizing the heat exchanger used in high temperature superconducting current leads

    International Nuclear Information System (INIS)

    Rizzo, Enrico; Heller, Reinhard; Richard, Laura Savoldi; Zanino, Roberto

    2013-01-01

    Highlights: • The laminar regime in the meander flow geometry has been analysed with a previously validated computational strategy. • Several meander flow geometries as well as flow conditions have been analysed. • A range for the Reynolds number has been defined in which the flow can be considered laminar. • Correlations for the pressure drop and the heat transfer coefficients in the laminar regime have been derived. • A comparison between the computed the experimental pressure drop of the W7-X HTS current lead prototype is presented. -- Abstract: The Karlsruhe Institute of Technology and the Politecnico di Torino have developed and validated a computational thermal-fluid dynamics (CtFD) strategy for the systematic analysis of the thermal-hydraulics inside the meander flow heat exchanger used in high-temperature superconducting current leads for fusion applications. In the recent past, the application of this CtFD technique has shown that some operating conditions occurring in these devices may not reach the turbulent regime region. With that motivation, the CtFD analysis of the helium thermal-fluid dynamics inside different meander flow geometries is extended here to the laminar flow regime. Our first aim is to clarify under which operative conditions the flow regime can be considered laminar and how the pressure drop as well as the heat transfer are related to the geometrical parameters and to the flow conditions. From the results of this analysis, correlations for the pressure drop and for the heat transfer coefficient in the meander flow geometry have been derived, which are applicable with good accuracy to the design of meander flow heat exchangers over a broad range of geometrical parameters

  13. Computational thermal-fluid dynamics analysis of the laminar flow regime in the meander flow geometry characterizing the heat exchanger used in high temperature superconducting current leads

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, Enrico, E-mail: enrico.rizzo@kit.edu [Institute for Technical Physics, Karlsruhe Institute of Technology, 76021 Karlsruhe (Germany); Heller, Reinhard [Institute for Technical Physics, Karlsruhe Institute of Technology, 76021 Karlsruhe (Germany); Richard, Laura Savoldi; Zanino, Roberto [Dipartimento Energia, Politecnico di Torino, 10129 Torino (Italy)

    2013-11-15

    Highlights: • The laminar regime in the meander flow geometry has been analysed with a previously validated computational strategy. • Several meander flow geometries as well as flow conditions have been analysed. • A range for the Reynolds number has been defined in which the flow can be considered laminar. • Correlations for the pressure drop and the heat transfer coefficients in the laminar regime have been derived. • A comparison between the computed the experimental pressure drop of the W7-X HTS current lead prototype is presented. -- Abstract: The Karlsruhe Institute of Technology and the Politecnico di Torino have developed and validated a computational thermal-fluid dynamics (CtFD) strategy for the systematic analysis of the thermal-hydraulics inside the meander flow heat exchanger used in high-temperature superconducting current leads for fusion applications. In the recent past, the application of this CtFD technique has shown that some operating conditions occurring in these devices may not reach the turbulent regime region. With that motivation, the CtFD analysis of the helium thermal-fluid dynamics inside different meander flow geometries is extended here to the laminar flow regime. Our first aim is to clarify under which operative conditions the flow regime can be considered laminar and how the pressure drop as well as the heat transfer are related to the geometrical parameters and to the flow conditions. From the results of this analysis, correlations for the pressure drop and for the heat transfer coefficient in the meander flow geometry have been derived, which are applicable with good accuracy to the design of meander flow heat exchangers over a broad range of geometrical parameters.

  14. Characterizing the effects of elevated temperature on the air void pore structure of advanced gas-cooled reactor pressure vessel concrete using x-ray computed tomography

    Directory of Open Access Journals (Sweden)

    Withers P.J.

    2013-07-01

    Full Text Available X-ray computed tomography (X-ray CT) has been applied to nondestructively characterise changes in the microstructure of a concrete used in the pressure vessel structure of Advanced Gas-cooled Reactors (AGR) in the UK. Concrete specimens were conditioned at temperatures of 105 °C and 250 °C, to simulate the maximum thermal load expected to occur during a loss of coolant accident (LOCA). Following thermal treatment, these specimens along with an unconditioned control sample were characterised using micro-focus X-ray CT with a spatial resolution of 14.6 microns. The results indicate that the air void pore structure of the specimens experienced significant volume changes as a result of the increasing temperature. The increase in the porous volume was more prevalent at 250 °C. Alterations in air void size distributions were characterized with respect to the unconditioned control specimen. These findings appear to correlate with changes in the uni-axial compressive strength of the conditioned concrete.
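A minimal sketch of the kind of air-void quantification such a CT analysis performs, applied to an already-segmented voxel volume (the segmentation step itself is upstream and not shown; the label convention is an assumption):

```python
def void_volume_fraction(volume, void_value=0):
    """Air-void volume fraction of a segmented (binarized) CT volume.

    `volume` is a nested list [z][y][x] of voxel labels, with `void_value`
    marking air voids; at the resolution quoted in the abstract each voxel
    would represent roughly 14.6 microns per side.
    """
    flat = [vox for plane in volume for row in plane for vox in row]
    return flat.count(void_value) / len(flat)
```

Comparing this fraction (and the void size distribution) between the 105 °C, 250 °C and control specimens is the core of the reported analysis.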

  15. Free radical formation in deoxyguanosine-5'-monophosphate γ-irradiated in frozen solution. A computer-assisted analysis of temperature-dependent ESR spectra

    International Nuclear Information System (INIS)

    Gregoli, S.; Olast, M.; Bertinchamps, A.

    1977-01-01

    Deoxyguanosine-5'-monophosphate (dGMP) was γ-irradiated at 77 K in frozen aqueous solution and then annealed in a stepwise fashion up to the melting point. During this process, the primary radicals formed in dGMP at 77 K are progressively converted into secondary radical species. This is observed as changes in the spectrum intensity and conformation. Computer-assisted analysis of these temperature-dependent spectra permitted us to identify the transient radical species involved and to draw up single-radical concentration kinetics vs temperature. The radiation chemical behavior of dGMP was found to be quite similar to that of dAMP, investigated previously. In both of these purine derivatives, radical anions are converted into radicals of H-addition to C-8, and radical cations are converted into radicals of OH-addition to the same position. In dGMP, however, the cationic channel is only induced under certain experimental conditions (alkaline pH, presence of electron scavengers). At neutral pH, G+ radicals are quite stable and finally become deactivated without being converted into secondary GOH radicals. Specific deuterium substitution at carbon C-8, and irradiation in H2O or in D2O, confirmed that both H+ and OH− attachments do occur at C-8, and that both the H+ and OH− groups come from the aqueous medium
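The computer-assisted step — recovering single-radical concentrations from a composite ESR spectrum at each annealing temperature — can be sketched as a least-squares decomposition onto reference spectra of the individual species. This is a generic two-component illustration, not the authors' actual procedure:

```python
def dot(u, v):
    """Inner product of two equal-length spectra sampled on the same field axis."""
    return sum(a * b for a, b in zip(u, v))

def decompose_spectrum(observed, b1, b2):
    """Fit the observed spectrum as c1*b1 + c2*b2, where b1 and b2 are
    reference spectra of two radical species, by solving the 2x2 normal
    equations of the least-squares problem with Cramer's rule.
    Returns (c1, c2), proportional to the species concentrations."""
    g11, g12, g22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
    r1, r2 = dot(b1, observed), dot(b2, observed)
    det = g11 * g22 - g12 * g12
    c1 = (r1 * g22 - r2 * g12) / det
    c2 = (g11 * r2 - g12 * r1) / det
    return c1, c2
```

Repeating the fit at each temperature step yields the concentration-versus-temperature kinetics described in the abstract.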

  16. Adaptive control for solar energy based DC microgrid system development

    Science.gov (United States)

    Zhang, Qinhao

    During the upgrading of the current electric power grid, it is expected that smarter, more robust and more reliable power systems integrated with distributed generation will be developed. To realize these objectives, traditional control techniques are no longer effective in either stabilizing systems or delivering optimal and robust performance. Therefore, the development of advanced control methods has received increasing attention in power engineering. This work addresses two specific problems in the control of solar panel based microgrid systems. First, a new control scheme is proposed for the microgrid systems to achieve an optimal energy conversion ratio in the solar panels. The control system can optimize the efficiency of the maximum power point tracking (MPPT) algorithm by implementing two layers of adaptive control. Such a hierarchical control architecture has greatly improved the system performance, which is validated through both mathematical analysis and computer simulation. Second, in the development of the microgrid transmission system, the issues related to the tele-communication delay and the constant power load (CPL)'s negative incremental impedance are investigated. A reference model based method is proposed for pole and zero placements that addresses the challenges of the time delay and CPL in closed-loop control. The effectiveness of the proposed modeling and control design methods is demonstrated in a simulation testbed. Practical aspects of the proposed methods for general microgrid systems are also discussed.
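
    The maximum power point tracking (MPPT) layer described above is, at its core, a hill-climbing loop on the panel's power-voltage curve. The sketch below shows only the basic perturb-and-observe step that adaptive MPPT schemes typically refine; the PV curve model, step size and all parameter values are illustrative assumptions, not the author's two-layer controller.

    ```python
    # Basic perturb-and-observe MPPT loop: nudge the panel operating voltage,
    # keep the direction while output power rises, reverse when it falls.
    # The PV curve below is a toy model (all values are illustrative).

    def pv_power(v, v_oc=40.0, i_sc=8.0):
        """Toy PV panel: current collapses sharply near open-circuit voltage."""
        if v <= 0.0 or v >= v_oc:
            return 0.0
        return v * i_sc * (1.0 - (v / v_oc) ** 12)

    def mppt_perturb_observe(v0=20.0, step=0.5, iters=200):
        v = v0
        p_prev = pv_power(v)
        direction = 1.0
        for _ in range(iters):
            v += direction * step
            p = pv_power(v)
            if p < p_prev:            # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
        return v, p_prev
    ```

    For this toy curve the loop settles into a small oscillation around the maximum power point near 32 V; adaptive schemes of the kind described above mainly tune the perturbation step on-line.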

  17. Computer Aided Multi-scale Design of SiC-Si3N4 Nanoceramic Composites for High-Temperature Structural Applications

    Energy Technology Data Exchange (ETDEWEB)

    Vikas Tomer; John Renaud

    2010-08-31

    temperature dependent strength and microstructural stability also depended significantly upon the dispersion of new phases at grain boundaries. The material design framework incorporates high temperature creep and mechanical strength data in order to develop a collaborative multiscale framework of morphology optimization. The work also incorporates a computer aided material design dataset development procedure where a systematic dataset on material properties and morphology correlation could be obtained depending upon a material processing scientist's requirements. Two different aspects covered under this requirement are: (1) performing morphology related analyses at the nanoscale and at the microscale to develop a multiscale material design and analyses capability; (2) linking material behavior analyses with the developed design tool to form a set of material design problems that illustrate the range of material design dataset development that could be performed. Overall, a software based methodology to design the microstructure of particle based ceramic nanocomposites has been developed. This methodology has been shown to predict changes in phase morphologies required for achieving an optimal balance of conflicting properties such as minimal creep strain rate and high fracture strength at high temperatures. The methodology incorporates complex material models including atomistic approaches. The methodology will be useful to design materials for high temperature applications including those of interest to DoE while significantly reducing the cost of expensive experiments.

  18. Insights into the fold organization of TIM barrel from interaction energy based structure networks.

    Science.gov (United States)

    Vijayabaskar, M S; Vishveshwara, Saraswathi

    2012-01-01

    There are many well-known examples of proteins with low sequence similarity, adopting the same structural fold. This aspect of sequence-structure relationship has been extensively studied both experimentally and theoretically, however with limited success. Most of the studies consider remote homology or "sequence conservation" as the basis for their understanding. Recently "interaction energy" based network formalism (Protein Energy Networks (PENs)) was developed to understand the determinants of protein structures. In this paper we have used these PENs to investigate the common non-covalent interactions and their collective features which stabilize the TIM barrel fold. We have also developed a method of aligning PENs in order to understand the spatial conservation of interactions in the fold. We have identified key common interactions responsible for the conservation of the TIM fold, despite high sequence dissimilarity. For instance, the central beta barrel of the TIM fold is stabilized by long-range high energy electrostatic interactions and low-energy contiguous vdW interactions in certain families. The other interfaces like the helix-sheet or the helix-helix seem to be devoid of any high energy conserved interactions. Conserved interactions in the loop regions around the catalytic site of the TIM fold have also been identified, pointing out their significance in both structural and functional evolution. Based on these investigations, we have developed a novel network based phylogenetic analysis for remote homologues, which can perform better than sequence based phylogeny. Such an analysis is more meaningful from both structural and functional evolutionary perspective. We believe that the information obtained through the "interaction conservation" viewpoint and the subsequently developed method of structure network alignment, can shed new light in the fields of fold organization and de novo computational protein design.

  19. Computer simulation of scattered ion and sputtered species effects in ion beam sputter-deposition of high temperature superconducting thin films

    International Nuclear Information System (INIS)

    Krauss, A.R.; Auciello, O.

    1992-01-01

    Ion beam sputter-deposition is a technique currently used by many groups to produce single and multicomponent thin films. This technique provides several advantages over other deposition methods, which include the capability for yielding higher film density, accurate stoichiometry control, and smooth surfaces. However, the relatively high kinetic energies associated with ion beam sputtering also lead to difficulties if the process is not properly controlled. Computer simulations have been performed to determine net deposition rates, as well as the secondary erosion, lattice damage, and gas implantation in the films, associated with primary ions scattered from elemental Y, Ba and Cu targets used to produce high temperature superconducting Y-Ba-Cu-O films. The simulations were performed using the TRIM code for different ion masses and kinetic energies, and different deposition geometries. Results are presented for primary beams of Ar+, Kr+ and Xe+ incident on Ba and Cu targets at 0 degrees and 45 degrees with respect to the surface normal, with the substrate positioned at 0 degrees and 45 degrees. The calculations indicate that the target composition, mass and kinetic energy of the primary beam, angle of incidence on the target, and position and orientation of the substrate affect the film damage and trapped primary beam gas by up to 5 orders of magnitude

  20. A computational study of syngas auto-ignition characteristics at high-pressure and low-temperature conditions with thermal inhomogeneities

    KAUST Repository

    Pal, Pinaki; Mansfield, Andrew B.; Arias, Paul G.; Wooldridge, Margaret S.; Im, Hong G.

    2015-01-01

    number analyses were employed to characterise the propagating ignition front. In the presence of a global temperature gradient, the ignition behaviour shifted from spontaneous propagation (strong) to deflagrative (weak), as the initial mean temperature

  1. An energy-based equilibrium contact angle boundary condition on jagged surfaces for phase-field methods.

    Science.gov (United States)

    Frank, Florian; Liu, Chen; Scanziani, Alessio; Alpak, Faruk O; Riviere, Beatrice

    2018-08-01

    We consider an energy-based boundary condition to impose an equilibrium wetting angle for the Cahn-Hilliard-Navier-Stokes phase-field model on voxel-set-type computational domains. These domains typically stem from μCT (micro computed tomography) imaging of porous rock and approximate a smooth (on the μm scale) domain with a certain resolution. Planar surfaces that are perpendicular to the main axes are naturally approximated by a layer of voxels. However, planar surfaces in any other direction, as well as curved surfaces, yield a jagged/topologically rough surface approximation by voxels. For the standard Cahn-Hilliard formulation, where the contact angle between the diffuse interface and the domain boundary (fluid-solid interface/wall) is 90°, jagged surfaces have no impact on the contact angle. However, the effect of a prescribed contact angle smaller or larger than 90° is amplified on jagged voxel surfaces. As a remedy, we propose the introduction of surface energy correction factors for each fluid-solid voxel face that counterbalance the difference between the voxel-set surface area and the underlying smooth one. The discretization of the model equations is performed with the discontinuous Galerkin method. However, the presented semi-analytical approach of correcting the surface energy is equally applicable to other direct numerical methods such as finite elements, finite volumes, or finite differences, since the correction factors appear in the strong formulation of the model. Copyright © 2018 Elsevier Inc. All rights reserved.
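
    The correction-factor idea can be illustrated in closed form for a planar surface tilted against the voxel grid: a staircase of axis-aligned voxel faces overestimates the smooth area, and the factor restores the correct total surface energy. The 2-D cross-section sketch below is an illustration of the geometric principle only; the paper computes per-face factors for general μCT-imaged geometries.

    ```python
    import math

    # Surface-energy correction for jagged voxel surfaces, 2-D cross-section:
    # a plane tilted by `theta` against a voxel axis is approximated by a
    # staircase of axis-aligned faces whose total area exceeds the smooth
    # area; scaling each face's surface energy by the inverse ratio restores
    # the correct total.

    def voxel_area_factor(theta):
        """(staircase area) / (smooth area) for a plane tilted by theta:
        a unit of smooth length projects onto cos(theta) of horizontal and
        sin(theta) of vertical voxel faces."""
        return math.cos(theta) + math.sin(theta)

    def surface_energy_correction(theta):
        """Multiplier applied to each voxel face's surface energy."""
        return 1.0 / voxel_area_factor(theta)
    ```

    An axis-aligned plane (theta = 0) needs no correction, while a 45° plane is over-counted by a factor of √2 and gets the correction 1/√2 ≈ 0.707.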

  2. An Energy-Based State Observer for Dynamical Subsystems with Inaccessible State Variables

    NARCIS (Netherlands)

    Khalil, I.S.M.; Sabanovic, Asif; Misra, Sarthak

    2012-01-01

    This work presents an energy-based state estimation formalism for a class of dynamical systems with inaccessible/unknown outputs, and systems at which sensor utilization is impractical, or when measurements can not be taken. The power-conserving physical interconnections among most of the dynamical

  3. A unifying energy-based approach to stability of power grids with market dynamics

    NARCIS (Netherlands)

    Stegink, Tjerk; De Persis, Claudio; van der Schaft, Arjan

    2017-01-01

    In this paper a unifying energy-based approach is provided to the modeling and stability analysis of power systems coupled with market dynamics. We consider a standard model of the power network with a third-order model for the synchronous generators involving voltage dynamics. By applying the

  4. Modeling the effect of temperature on thorium and nitric acid extraction and the formation of third phase for modification of the SEPHIS: Thorex computer program

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.B.; Rainey, R.H.

    1979-05-01

    Mathematical models have been developed for use with the SEPHIS - Thorex code which correct for the effect of temperature on thorium and nitric acid extraction and which provide a signal to indicate the formation of a second organic phase. Experimental programs have provided data which have been analyzed and indicate that increasing temperature decreases the distribution coefficient of thorium. Also increasing the temperature increases the concentration of thorium in the organic phase at which the third phase is formed. 14 figures, 3 tables.

  5. Computations of the Shock Waves at Hypersonic Velocities Taken into Account the Chemical Reactions that Appear in the Air at High Temperatures

    Directory of Open Access Journals (Sweden)

    Mihai Leonida NICULESCU

    2015-09-01

    Full Text Available The temperature in the nose region of a hypersonic vehicle can be extremely high, for example, reaching approximately 11 000 K at a Mach number of 36 (Apollo reentry). The bow shock wave is normal, or nearly normal, in the nose region of a blunt body, and the gas temperature behind this shock wave can be enormous at hypersonic speeds. In this case, the assumption of a calorically perfect nonreacting gas with the ratio of specific heats γ of 1.4 gives an unrealistically high value of temperature. Therefore, the proper inclusion of chemically reacting effects is vital to the calculation of an accurate normal shock wave temperature.
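
    The calorically perfect-gas overestimate can be made concrete with the normal-shock static temperature ratio from the Rankine-Hugoniot relations. The sketch below evaluates it at Mach 36; the freestream temperature of 230 K is an assumed reentry-altitude value, not a figure from the paper.

    ```python
    # Normal-shock static temperature ratio for a calorically perfect gas
    # (Rankine-Hugoniot relations), evaluated at the Mach 36 Apollo-reentry
    # condition quoted above.

    def shock_temperature_ratio(M1, gamma=1.4):
        """T2/T1 across a normal shock for a calorically perfect gas."""
        num = (2.0 * gamma * M1**2 - (gamma - 1.0)) * ((gamma - 1.0) * M1**2 + 2.0)
        den = (gamma + 1.0) ** 2 * M1**2
        return num / den

    T1 = 230.0                               # assumed freestream temperature, K
    T2 = T1 * shock_temperature_ratio(36.0)  # on the order of 58 000 K
    ```

    The perfect-gas answer of roughly 58 000 K, versus the ~11 000 K obtained when dissociation and ionization of air are included, is precisely the "unrealistically high value" the abstract refers to.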

  6. Modelling and optimal operation of a small-scale integrated energy based district heating and cooling system

    International Nuclear Information System (INIS)

    Jing, Z.X.; Jiang, X.S.; Wu, Q.H.; Tang, W.H.; Hua, B.

    2014-01-01

    This paper presents a comprehensive model of a small-scale integrated energy based district heating and cooling (DHC) system located in a residential area of hot-summer and cold-winter zone, which makes joint use of wind energy, solar energy, natural gas and electric energy. The model includes an off-grid wind turbine generator, heat producers, chillers, a water supply network and terminal loads. This research also investigates an optimal operating strategy based on Group Search Optimizer (GSO), through which the daily running cost of the system is optimized in both the heating and cooling modes. The strategy can be used to find the optimal number of operating chillers, optimal outlet water temperature set points of boilers and optimal water flow set points of pumps, taking into account cost functions and various operating constraints. In order to verify the model and the optimal operating strategy, performance tests have been undertaken using MATLAB. The simulation results prove the validity of the model and show that the strategy is able to minimize the system operation cost. The proposed system is evaluated in comparison with a conventional separation production (SP) system. The feasibility of investment for the DHC system is also discussed. The comparative results demonstrate the investment feasibility, the significant energy saving and the cost reduction, achieved in daily operation in an environment, where there are varying heating loads, cooling loads, wind speeds, solar radiations and electricity prices. - Highlights: • A model of a small-scale integrated energy based DHC system is presented. • An off-grid wind generator used for water heating is embedded in the model. • An optimal control strategy is studied to optimize the running cost of the system. • The designed system is proved to be energy efficient and cost effective in operation

  7. Heat transfer study on convective–radiative semi-spherical fins with temperature-dependent properties and heat generation using efficient computational methods

    International Nuclear Information System (INIS)

    Atouei, S.A.; Hosseinzadeh, Kh.; Hatami, M.; Ghasemi, Seiyed E.; Sahebi, S.A.R.; Ganji, D.D.

    2015-01-01

    In this study, heat transfer and temperature distribution equations for semi-spherical convective–radiative porous fins are presented. Temperature-dependent heat generation, convection and radiation effects are considered and, after deriving the governing equation, the Least Square Method (LSM), the Collocation Method (CM) and the fourth order Runge-Kutta method (NUM) are applied for predicting the temperature distribution in the described fins. Results reveal that LSM has excellent agreement with the numerical method, so it can be a suitable analytical method for solving the problem. Also, the effect of some physical parameters which appear in the mathematical formulation on fin surface temperature is investigated to show the effect of radiation and heat generation on a solid fin temperature. - Highlights: • Thermal analysis of a semi-spherical fin is investigated. • Collocation and Least Square Methods are applied on the problem. • Convection, radiation and heat generation are considered. • Physical results are compared to numerical outcomes.
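
    As a minimal illustration of the Collocation Method named above, the sketch below applies one-point collocation to the classical linear straight-fin equation θ'' = m²θ with θ(0) = 1 and an insulated tip, rather than the paper's nonlinear semi-spherical porous-fin model, and compares it with the exact cosh solution.

    ```python
    import math

    # One-point collocation on the classical straight-fin equation
    #   theta'' = m^2 * theta,  theta(0) = 1,  theta'(1) = 0 (insulated tip):
    # a much simpler problem than the paper's, but it shows how CM turns the
    # ODE into an algebraic equation for a trial-function coefficient.

    def collocation_tip_temperature(m):
        # Trial function theta = 1 + a*(x^2 - 2x) satisfies both BCs exactly.
        # Residual R(x) = 2a - m^2*(1 + a*(x^2 - 2x)); enforcing R(0.5) = 0:
        a = m**2 / (2.0 + 0.75 * m**2)
        return 1.0 - a                   # theta(1) = 1 + a*(1 - 2)

    def exact_tip_temperature(m):
        """Exact theta(x) = cosh(m*(1-x))/cosh(m), evaluated at x = 1."""
        return 1.0 / math.cosh(m)
    ```

    For m = 1 the one-term collocation tip temperature (≈0.636) is within about 2% of the exact value (≈0.648); LSM differs only in how the residual is minimized over the domain.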

  8. Analysis of the Slab Temperature, Thermal Stresses and Fractures Computed with the Implementation of Local and Average Boundary Conditions in the Secondary Cooling Zones

    Directory of Open Access Journals (Sweden)

    Hadała B.

    2016-12-01

    Full Text Available The numerical simulations of the temperature fields have been accomplished for the casting of a slab made of a low carbon steel. The casting process of a slab of 1500 mm in width and 225 mm in height has been modeled. Two types of boundary condition models of heat transfer have been employed in the numerical simulations. The heat transfer coefficient in the first boundary condition model was calculated from a formula which takes into account the slab surface temperature and the water flow rate in each secondary cooling zone. The second boundary condition model defines the heat transfer coefficient around each water spray nozzle. The temperature fields resulting from the zone-averaged water flow rate and from the nozzle arrangement have been compared. The thermal stresses and deformations resulting from this temperature field have given higher values of the fracture criterion at the slab corners.

  9. PERFORMANCE ANALYSIS OF DSR ROUTING PROTOCOL UNDER ENERGY BASED SELFISH ATTACK IN MOBILE AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    T.V.P.Sundararajan

    2010-06-01

    Full Text Available Mobile Ad hoc Networks (MANETs) rely on the cooperation of all participating nodes to provide fundamental operations such as routing and data forwarding. However, due to the open structure and scarcely available battery-based energy, node misbehaviors may exist [1]. One such routing misbehavior is that some selfish nodes will participate in the route discovery and maintenance processes but refuse to forward data packets. This paper points out energy based selfish nodes (EBSN), where these selfish nodes tend to use the network but do not cooperate, saving battery life for their own communications [2],[3]. We present a simulation study of the effects of energy based selfish nodes (EBSN) on the DSR routing protocol and their impact on network performance in terms of throughput and delay of a mobile ad hoc network where a defined percentage of nodes were misbehaving.
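
    The throughput degradation caused by EBSN-type nodes can be caricatured with a toy Monte Carlo model: if each intermediate relay is selfish with probability p and silently drops data, an h-hop route delivers with probability (1−p)^h. This is only a back-of-envelope sketch, not the paper's DSR protocol simulation; the hop count and probabilities are assumptions.

    ```python
    import random

    # Toy model of EBSN impact on throughput: a data packet crosses `hops`
    # intermediate relays, each selfish (drops the packet) with probability
    # p_selfish, so the delivery ratio approaches (1 - p_selfish) ** hops.

    def delivery_ratio(p_selfish, hops=4, packets=20000, seed=1):
        rng = random.Random(seed)
        delivered = sum(
            1 for _ in range(packets)
            if all(rng.random() >= p_selfish for _ in range(hops))
        )
        return delivered / packets
    ```

    With 20% selfish nodes on a 4-hop route, roughly 0.8⁴ ≈ 41% of data packets survive, the kind of throughput collapse that a full DSR simulation quantifies along with the accompanying delay increase.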

  10. Full Waveform Inversion Using an Energy-Based Objective Function with Efficient Calculation of the Gradient

    KAUST Repository

    Choi, Yun Seok

    2017-05-26

    Full waveform inversion (FWI) using an energy-based objective function has the potential to provide long wavelength model information even without low frequencies in the data. However, without the back-propagation method (adjoint-state method), its implementation is impractical for the model sizes of typical seismic surveys. We derive the gradient of the energy-based objective function using the back-propagation method to make its FWI feasible. We also raise the energy signal to the power of a small positive number to properly handle the energy signal imbalance as a function of offset. Examples demonstrate that the proposed FWI algorithm provides a convergent long wavelength structure model even without low-frequency information, which can be used as a good starting model for the subsequent conventional FWI.

  11. Techno-economic studies on hybrid energy based cooling system for milk preservation in isolated regions

    International Nuclear Information System (INIS)

    Edwin, M.; Joseph Sekhar, S.

    2014-01-01

    Highlights: • Performance studies on biomass and biogas based milk cooling systems in remote areas. • Economic analysis of milk cooling system operated with locally available renewable energy sources. • Payback period for replacing conventional milk cooling systems with renewable energy based cooling system. • Identification of the suitable combination of locally available renewable energy sources for milk cooling. • Hybrid energy based milk cooling system for regions that have rubber and paddy cultivation, in India. - Abstract: In developing countries like India, about 70% of the population is engaged in the production of milk, fruits and vegetables. Due to the lack of proper storage and transit facilities, the agricultural produce, in remote areas loses its value. This spoilage could be prevented at the local village level, by providing cooling units for short term preservation. In this paper, the possibility of a hybrid energy based thermally operated cold storage has been considered to meet the cooling needs of the villages in the southern parts of India, where biomass, biogas and gobar gas are available in abundance. A milk cooling system that uses various combinations of locally available renewable energy sources to operate an aqua ammonia vapour absorption cooling system has been analysed using the Matlab software. The impact of various combinations of renewable energy sources on the Coefficient of Performance (COP), Net Present Value (NPV) and payback period of the total cooling system has been studied. The analysis shows that the COP and payback period of the proposed hybrid renewable energy based milk cooling system are 0.16–0.23 and 4–6 years respectively

  12. Short-term forecasts of district heating load and outdoor temperature by use of on-line connected computers; Korttidsprognoser foer fjaerrvaermelast och utetemperatur med on-linekopplade datorer

    Energy Technology Data Exchange (ETDEWEB)

    Malmstroem, B; Ernfors, P; Nilsson, Daniel; Vallgren, H [Chalmers Tekniska Hoegskola, Goeteborg (Sweden). Institutionen foer Energiteknik

    1996-10-01

    In this report the available methods for forecasting weather and district heating load have been studied. A forecast method based on neural networks has been tested against the more common statistical methods. The accuracy of the weather forecasts from the SMHI (Swedish Meteorological and Hydrological Institute) has been estimated. In connection with these tests, the possibilities of improving the forecasts by using on-line connected computers have been analysed. The most important results from the study are: Energy company staff generally look upon the forecasting of district heating load as a problem of such a magnitude that computer support is needed. At the companies where computer calculated forecasts are in use, their accuracy is regarded as quite satisfactory; The interest in computer produced load forecasts among energy company staff is increasing; At present, a sufficient number of commercial suppliers of weather forecasts as well as load forecasts is available to fulfill the needs of energy companies; Forecasts based on neural networks did not attain any precision improvement in comparison to more traditional statistical methods, though there may be other types of neural networks, not tested in this study, that are capable of improving the forecast precision; Forecasts of outdoor temperature and district heating load can be significantly improved through the use of on-line connected computers supplied with instantaneous measurements of temperature and load. This study shows that a general reduction of the load prediction errors by approximately 15% is attainable. For short time horizons (less than 5 hours), more extensive load prediction error reductions can be reached. For the 1-hour time horizon, the possible reduction amounts to up to 50%. 21 refs, 4 figs, 7 appendices
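
    The reported gain from on-line measurements can be illustrated with a toy experiment: model errors are often dominated by a slowly drifting bias, so subtracting the most recently observed forecast error (available only with on-line connected computers) removes much of it at short horizons. The synthetic load series and noise levels below are assumptions, not the SMHI data or the report's methods.

    ```python
    import math, random

    # Why on-line measurements improve short-horizon load forecasts: if the
    # model error is dominated by a slowly drifting bias, subtracting the
    # most recently *observed* error largely cancels it.  Synthetic series;
    # all numbers are illustrative.

    def simulate(hours=2000, seed=7):
        rng = random.Random(seed)
        bias = 0.0                          # slowly drifting model bias
        prev_err = 0.0                      # last observed forecast error
        raw_err, corr_err = [], []
        for t in range(hours):
            truth = 100.0 + 20.0 * math.sin(2 * math.pi * t / 24)
            bias += rng.gauss(0.0, 0.3)
            forecast = truth + bias + rng.gauss(0.0, 1.0)
            corrected = forecast - prev_err  # on-line persistence correction
            raw_err.append(abs(forecast - truth))
            corr_err.append(abs(corrected - truth))
            prev_err = forecast - truth      # measured after hour t
        mae = lambda e: sum(e) / len(e)
        return mae(raw_err), mae(corr_err)
    ```

    With the seed above the corrected mean absolute error is a small fraction of the raw one, echoing the 15-50% improvements the study reports for short horizons.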

  13. Prediction of temperature and damage in an irradiated human eye-Utilization of a detailed computer model which includes a vectorial blood stream in the choroid.

    Science.gov (United States)

    Heussner, Nico; Holl, Lukas; Nowak, Timo; Beuth, Thorsten; Spitzer, Martin S; Stork, Wilhelm

    2014-08-01

    The work presented here describes the development and use of a three-dimensional thermodynamic model of the human eye for the prediction of temperatures and damage thresholds under irradiation. This model takes into account the blood flow by the implementation of a vectorial blood stream in the choroid and also uses the actual physiological extensions and tissue parameters of the eye. Furthermore it considers evaporation, radiation and convection at the cornea as well as the eye lid. The predicted temperatures were successfully validated against existing eye models in terms of corneal and global thermal behaviour. The model's predictions were additionally checked for consistency with in-vivo temperature measurements of the cornea, the irradiated retina and its damage thresholds. These thresholds were calculated from the retinal temperatures using the Arrhenius integral. Hence the model can be used to predict the temperature increase and irradiation hazard within the human eye as long as the absorption values and the Arrhenius coefficients are known and the damage mechanism is in the thermal regime. Copyright © 2014 Elsevier Ltd. All rights reserved.
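
    The Arrhenius integral mentioned above accumulates damage as Ω(t) = ∫ A·exp(−Ea/(R·T(t'))) dt', with Ω = 1 usually taken as the threshold. The sketch below uses often-quoted Henriques-type coefficients for thermal tissue damage as an illustration; the retinal coefficients used in the paper may differ.

    ```python
    import math

    # Arrhenius damage integral:  Omega(t) = ∫ A * exp(-Ea / (R * T(t'))) dt',
    # with Omega = 1 taken as the damage threshold.  A and Ea are the
    # often-quoted Henriques-type thermal-damage coefficients (illustrative;
    # tissue-specific values vary).

    R_GAS = 8.314      # J/(mol K)
    A_FREQ = 3.1e98    # frequency factor, 1/s
    E_A = 6.28e5       # activation energy, J/mol

    def arrhenius_omega(temps_K, dt):
        """Accumulate the damage integral over a sampled temperature history."""
        return sum(A_FREQ * math.exp(-E_A / (R_GAS * T)) * dt for T in temps_K)
    ```

    The steep exponential is the point: with these coefficients a 60 s hold at 50 °C accumulates orders of magnitude more damage than the same hold at 43 °C, which is why small predicted temperature rises can still matter for prolonged exposures.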

  14. Energy-based method for near-real time modeling of sound field in complex urban environments.

    Science.gov (United States)

    Pasareanu, Stephanie M; Remillieux, Marcel C; Burdisso, Ricardo A

    2012-12-01

    Prediction of the sound field in large urban environments has been limited thus far by the heavy computational requirements of conventional numerical methods such as boundary element (BE) or finite-difference time-domain (FDTD) methods. Recently, a considerable amount of work has been devoted to developing energy-based methods for this application, and results have shown the potential to compete with conventional methods. However, these developments have been limited to two-dimensional (2-D) studies (along street axes), and no real description of the phenomena at issue has been exposed. Here the mathematical theory of diffusion is used to predict the sound field in 3-D complex urban environments. A 3-D diffusion equation is implemented by means of a simple finite-difference scheme and applied to two different types of urban configurations. This modeling approach is validated against FDTD and geometrical acoustic (GA) solutions, showing a good overall agreement. The role played by diffraction near building edges close to the source is discussed, and suggestions are made on the possibility of predicting the sound field accurately in complex urban environments in near real time simulations.
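
    A 1-D analogue of the paper's finite-difference treatment of the acoustic diffusion equation, ∂w/∂t = D∇²w − σw + q, is easy to sketch; the domain, coefficients and source placement below are illustrative, not those of the paper's 3-D urban runs.

    ```python
    # 1-D analogue of the acoustic diffusion model for urban sound fields,
    #   dw/dt = D * d2w/dx2 - sigma * w + q(x),
    # advanced with an explicit finite-difference (FTCS) scheme.
    # All coefficients are illustrative.

    def diffuse_sound_energy(n=101, L=100.0, D=50.0, sigma=0.02, steps=5000):
        dx = L / (n - 1)
        dt = 0.4 * dx * dx / (2.0 * D)      # safely under the FTCS limit
        w = [0.0] * n
        src = n // 2                        # point source mid-domain
        for _ in range(steps):
            new = w[:]
            for i in range(1, n - 1):
                lap = (w[i - 1] - 2.0 * w[i] + w[i + 1]) / (dx * dx)
                new[i] = w[i] + dt * (D * lap - sigma * w[i])
            new[src] += dt * 1.0            # steady energy injection
            new[0] = new[-1] = 0.0          # absorbing ends
            w = new
        return w
    ```

    The energy density peaks at the source and decays monotonically toward the absorbing ends; this is the cheapness that lets diffusion-based models approach near-real-time prediction where BE or FDTD cannot.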

  15. Computational modelling of temperature rises in the eye in the near field of radiofrequency sources at 380, 900 and 1800 MHz

    Science.gov (United States)

    Wainwright, P. R.

    2007-07-01

    This paper reports calculations of the temperature rises induced in the eye and lens by near-field exposure to radiation from communication handsets, using the finite difference time domain method and classical bioheat equation. Various models are compared, including the analytic solution for a sphere, a finite element model of an isolated eye and a modern model of the whole head. The role of the blood supply to the choroid in moderating temperature is discussed. Three different frequencies are considered, namely 380 MHz (used by TETRA), and 900 and 1800 MHz (used by GSM mobile phones). At 380 MHz, monopole and helical antennas are compared. An 'equivalent blood flow' is derived for the choroid in order to facilitate comparison of the whole head and isolated eye models. In the whole head model, the heating of the lens receives a significant contribution from energy absorbed outside the eye. The temperature rise in the lens is compared to the ICNIRP-recommended average specific energy absorption rate (SAR) and the SAR averaged over the eye alone. The temperature rise may reach 1.4 °C at the ICNIRP occupational exposure limit if an antenna is placed less than 24 mm from the eye and the exposure is sufficiently prolonged.
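
    The moderating role of blood flow discussed above can be caricatured with a zeroth-order, conduction-free bioheat balance: a lumped perfusion coefficient pulls the temperature rise back toward baseline, so the rise saturates at SAR divided by that coefficient. All parameter values below are illustrative assumptions, not fitted eye-tissue data from the paper.

    ```python
    # Zeroth-order, conduction-free Pennes-type balance for perfused tissue:
    #   c * dT'/dt = SAR - h_b * T',
    # where T' is the rise above baseline, c the tissue specific heat and
    # h_b a lumped blood-perfusion coefficient, so T' -> SAR / h_b.
    # Parameter values are illustrative only.

    def temperature_rise(sar, c=3600.0, h_b=8.0, t_end=3600.0, dt=1.0):
        """Explicit time-march of the lumped bioheat balance (SI units)."""
        T = 0.0
        for _ in range(int(t_end / dt)):
            T += dt * (sar - h_b * T) / c
        return T
    ```

    The rise approaches its limit exponentially with time constant c/h_b, which is why the abstract stresses that the quoted 1.4 °C is only reached if "the exposure is sufficiently prolonged".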

  17. Computation of the flash-temperature at the wheel-rail contact using a 3D finite element model and its comparison with analytical methods

    NARCIS (Netherlands)

    Naeimi, M.; Li, Z.; Dollevoet, R.P.B.J.; Wu, J.; Petrov, R.H.; Sietsma, J.

    2015-01-01

    The coupled mechanical–thermal behaviour of wheel and rail materials under rolling contact is studied to determine the temperature rise due to the frictional heat. The wheel–rail frictional rolling contact problem is solved using the three–dimensional finite element (FE) method. The FE model

  18. CORTAP: a coupled neutron kinetics-heat transfer digital computer program for the dynamic simulation of the high temperature gas cooled reactor core

    International Nuclear Information System (INIS)

    Cleveland, J.C.

    1977-01-01

    CORTAP (Core Transient Analysis Program) was developed to predict the dynamic behavior of the High Temperature Gas Cooled Reactor (HTGR) core under normal operational transients and postulated accident conditions. CORTAP is used both as a stand-alone component simulation and as part of the HTGR nuclear steam supply (NSS) system simulation code ORTAP. The core thermal neutronic response is determined by solving the heat transfer equations for the fuel, moderator and coolant in an average powered region of the reactor core. The space independent neutron kinetics equations are coupled to the heat transfer equations through a rapidly converging iterative technique. The code has the capability to determine conservative fuel, moderator, and coolant temperatures in the ''hot'' fuel region. For transients involving a reactor trip, the core heat generation rate is determined from an expression for decay heat following a scram. Nonlinear effects introduced by temperature dependent fuel, moderator, and coolant properties are included in the model. CORTAP predictions will be compared with dynamic test results obtained from the Fort St. Vrain reactor owned by Public Service of Colorado, and, based on these comparisons, appropriate improvements will be made in CORTAP
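
    The space-independent neutron kinetics equations that CORTAP couples to the heat transfer solution can be sketched in their simplest form, with a single delayed-neutron group and no temperature feedback; the kinetics parameters below are illustrative, not HTGR design values.

    ```python
    # Space-independent (point) neutron kinetics, one delayed-neutron group:
    #   dn/dt = ((rho - beta) / Lambda) * n + lam * C
    #   dC/dt = (beta / Lambda) * n - lam * C
    # integrated with explicit Euler.  A CORTAP-style model would further
    # couple rho to fuel/moderator temperatures; parameters are illustrative.

    def point_kinetics(rho, beta=0.0065, Lambda=1e-3, lam=0.08,
                       t_end=10.0, dt=1e-4):
        n = 1.0
        C = beta * n / (Lambda * lam)    # equilibrium precursor density
        for _ in range(int(t_end / dt)):
            dn = ((rho - beta) / Lambda) * n + lam * C
            dC = (beta / Lambda) * n - lam * C
            n += dt * dn
            C += dt * dC
        return n
    ```

    At rho = 0 the population stays critical; a small positive step insertion below beta produces the familiar prompt jump followed by a slow rise on the stable period, the behaviour the coupled thermal feedback in CORTAP then moderates.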

  19. On the failure analysis of bondlines: Stress or energy based fracture criteria?

    DEFF Research Database (Denmark)

    Anyfantis, Konstantinos

    2014-01-01

    that characterizes a given bondline, both its cohesive strength and fracture toughness material parameters must be experimentally defined. Based on these properties, failure analysis of the bondline can be done either through stress- or energy-based criteria. The aim of this work is to investigate the effectiveness...... to classify the wide range of bondlines with respect to the failure theory that best describes the debonding process. Cohesive length scale effects are first demonstrated by modeling end notch flexure geometries and later by modeling double strap joint geometries within the framework of a wide numerical...

  20. A computational study of radiation and gravity effect on temperature and soot formation in a methane air co-flow diffusion flame

    Energy Technology Data Exchange (ETDEWEB)

    Bhowal, Arup Jyoti, E-mail: arupjyoti.bhowal@heritageit.edu [Department of Mechanical Engineering, Heritage Institute of Technology, Chowbaga Road, Anandapur, Kolkata-700 107, West Bengal (India); Mandal, Bijan Kumar, E-mail: bkm375@yahoo.co.in [Department of Mechanical Engineering, Indian Institute of Engineering Science and Technology, Shibpur, Howrah – 711103, West Bengal (India)

    2016-07-12

    An effort has been made to quantitatively assess the soot formed under steady state in a methane-air co-flow diffusion flame by numerical simulation at normal gravity and at lower gravity levels of 0.5 G, 0.1 G and 0.0001 G (microgravity). The peak temperature at microgravity is about 50 K lower than that at normal gravity. Soot formation is augmented at lower gravity levels: the peak soot value at microgravity is roughly 7 times that at normal gravity. However, if radiation is not considered, the predicted soot formation is much higher.

  1. Extension of ANISN and DOT 3.5 transport computer codes to calculate heat generation by radiation and temperature distribution in nuclear reactors

    International Nuclear Information System (INIS)

    Torres, L.M.R.; Gomes, I.C.; Maiorino, J.R.

    1986-01-01

    The ANISN and DOT 3.5 codes solve the transport equation using the discrete ordinate method, in one and two dimensions, respectively. The objective of the study was to modify these two codes, frequently used in reactor shielding problems, to include nuclear heating calculations due to the interaction of neutrons and gamma-rays with matter. In order to determine the temperature distribution, a numerical algorithm was developed using the finite difference method to solve the heat conduction equation, in one and two dimensions, considering the nuclear heating from neutrons and gamma-rays as the source term. (Author) [pt
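A rough illustration of the approach described in this record: a one-dimensional heat conduction equation with a volumetric (nuclear-heating) source term, discretized by finite differences and solved with the Thomas algorithm. All names and values are illustrative, not taken from the ANISN/DOT work.

```python
# Sketch: steady 1-D heat conduction with a volumetric heating source,
# -k * d2T/dx2 = q_vol, solved by central finite differences (Thomas
# algorithm). Values are illustrative only.

def solve_heat_conduction(n, length, k, q_vol, t_left, t_right):
    """Solve -k * d2T/dx2 = q_vol with fixed wall temperatures."""
    dx = length / (n + 1)
    # Tridiagonal system: T[i-1] - 2*T[i] + T[i+1] = -q_vol*dx^2/k
    a = [1.0] * n
    b = [-2.0] * n
    c = [1.0] * n
    d = [-q_vol * dx * dx / k] * n
    d[0] -= t_left       # known boundary value moved to the RHS
    d[-1] -= t_right
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    t = [0.0] * n
    t[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        t[i] = (d[i] - c[i] * t[i + 1]) / b[i]
    return t

# Uniform heating between two walls held at 300 K: the analytic solution
# is a parabola, peaking at T_wall + q_vol*L^2/(8k) = 716.7 K here.
temps = solve_heat_conduction(n=99, length=0.1, k=30.0, q_vol=1e7,
                              t_left=300.0, t_right=300.0)
peak = max(temps)
print(round(peak, 1))  # → 716.7
```

Because the central difference is exact for a quadratic profile, the numerical peak matches the analytic value to round-off.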

  2. Computational study of heat transfer from the inner surface of a circular tube to force high temperature liquid metal flow in laminar and transition regions

    Science.gov (United States)

    Hata, K.; Fukuda, K.; Masuzaki, S.

    2018-03-01

    Heat transfer through forced convection from the inner surface of a circular tube to a forced flow of high temperature liquid sodium in the laminar and transition regions was numerically analysed for two types of tube geometries (concentric annular and circular tubes) and two types of equivalent diameters (hydraulic and thermal). The unsteady laminar three-dimensional basic equations for forced convection heat transfer caused by a step heat flux were numerically solved until a steady state was attained. The parabolic hyperbolic or elliptic numerical integration code series (PHOENICS) was used for the calculations, taking the relevant temperature-dependent thermophysical properties into account. The concentric annular test tube has inner and outer diameters of 7.6 and 14.3 mm, respectively, a heated length of 52 mm, and an L/d of 6.84. The two circular tubes have inner diameters of 6.7 and 19.3 mm with L/d of 7.76 and 2.69, respectively, and a heated length of 52 mm. The inlet liquid temperature, inlet liquid velocity, and surface heat flux were set equally for each test tube as T_in ≅ 573 to 585 K, u_in = 0.0852 to 1 m/s, and q = 2×10^5 to 2.5×10^6 W/m^2, respectively. The increase in temperature from the leading edge of the heated section to the outlet of the circular tubes (with a hydraulic equivalent diameter of d_H = 6.7 mm and a thermal equivalent diameter of d_te = 19.3 mm) was approximately 2.70 and 1.21 times as large as the corresponding values of the concentric annular tube with an inner diameter of 7.6 mm and an outer diameter of 14.3 mm, respectively. A quantity in the laminar and transition regions was suggested as the dominant variable involved in forced convection heat transfer in the circular tube. The values of the local and average Nusselt numbers, Nu_z and Nu_av, respectively, for a concentric annular tube with d_H = 6.7 mm and for a circular tube with d_H = 6.7 mm were calculated to examine the effects of q, T_in, and Pe on heat

  3. Prediction of strain energy-based liquefaction resistance of sand-silt mixtures: An evolutionary approach

    Science.gov (United States)

    Baziar, Mohammad H.; Jafarian, Yaser; Shahnazari, Habib; Movahed, Vahid; Amin Tutunchian, Mohammad

    2011-11-01

    Liquefaction is a catastrophic type of ground failure, which usually occurs in loose saturated soil deposits under earthquake excitations. A new predictive model is presented in this study to estimate the amount of strain energy density, which is required for the liquefaction triggering of sand-silt mixtures. A wide-ranging database containing the results of cyclic tests on sand-silt mixtures was first gathered from previously published studies. Input variables of the model were chosen from the available understandings evolved from the previous studies on the strain energy-based liquefaction potential assessment. In order to avoid overtraining, two sets of validation data were employed and a particular monitoring was made on the behavior of the evolved models. Results of a comprehensive parametric study on the proposed model are in accord with the previously published experimental observations. Accordingly, the amount of strain energy required for liquefaction onset increases with increase in initial effective overburden pressure, relative density, and mean grain size. The effect of nonplastic fines on strain energy-based liquefaction resistance shows a more complicated behavior. Accordingly, liquefaction resistance increases with increase in fines up to about 10-15% and then starts to decline for a higher increase in fines content. Further verifications of the model were carried out using the valuable results of some downhole array data as well as centrifuge model tests. These verifications confirm that the proposed model, which was derived from laboratory data, can be successfully utilized under field conditions.

  4. The solar energy based global economy. A policy leading to the ecological era

    International Nuclear Information System (INIS)

    Scheer, H.

    1999-01-01

    Bound in its fossil energy and raw materials supply chains, the global economy is heading for a global ecological crisis and dramatically aggravating conflicts. Moreover, this exclusive dependence on fossil energy and materials resources forces a global concentration process increasingly undermining democratic and free market systems. But the will to survive is not the only reason to consider a new industrial revolution to be imperative. Such a sweeping change, from a fossil energy based regime to a system relying exclusively on renewable energy sources and raw materials, would open up unique opportunities for the evolution of a peaceful and democratic global economy fostering the development of superior technologies and sustainable regional economic systems. The author of the book elaborates the scenario permitting such a radical change, and explains the necessary basic approaches and appropriate policies relating to technology, the economy, ecology, and the social system. The ultimate goal is that the evolution of the solar energy based global economy will be accompanied by an intrinsic economic driving force eventually leading to an ecological era. (orig./CB) [de

  5. Energy, exergy and sustainability analyses of hybrid renewable energy based hydrogen and electricity production and storage systems: Modeling and case study

    International Nuclear Information System (INIS)

    Caliskan, Hakan; Dincer, Ibrahim; Hepbasli, Arif

    2013-01-01

    In this study, hybrid renewable energy based hydrogen and electricity production and storage systems are conceptually modeled and analyzed in detail through energy, exergy and sustainability approaches. Several subsystems, namely a hybrid geothermal energy-wind turbine-solar photovoltaic (PV) panel, inverter, electrolyzer, hydrogen storage system, Proton Exchange Membrane Fuel Cell (PEMFC), battery and loading system, are considered. Also, a case study based on a hybrid wind–solar renewable energy system is conducted and its results are presented. In addition, the dead state temperatures are considered as 0 °C, 10 °C, 20 °C and 30 °C, while the environment temperature is 30 °C. The maximum efficiencies of the wind turbine, solar PV panel, electrolyzer, and PEMFC are calculated as 26.15%, 9.06%, 53.55%, and 33.06% through energy analysis, and 71.70%, 9.74%, 53.60%, and 33.02% through exergy analysis, respectively. Also, the overall exergy efficiency, ranging from 5.838% to 5.865%, is directly proportional to the dead state temperature and is higher than the corresponding energy efficiency of 3.44% for the entire system. -- Highlights: ► Developing a three-hybrid renewable energy (geothermal–wind–solar)-based system. ► Undertaking a parametric study at various dead state temperatures. ► Investigating the effect of dead state temperatures on exergy efficiency
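To make the role of the dead-state temperature concrete: the exergy of a heat stream Q available at temperature T is Q·(1 − T0/T), so the choice of T0 enters every exergy efficiency directly. The sketch below uses this standard Carnot weighting with purely illustrative numbers, not the record's actual system model.

```python
# Sketch: how the dead-state temperature T0 enters an exergy efficiency.
# Work is pure exergy; heat streams are weighted by the Carnot factor
# (1 - T0/T). All values below are illustrative only.

def exergy_of_heat(q, t_source, t0):
    """Exergy (W) of a heat flow q (W) available at t_source (K)."""
    return q * (1.0 - t0 / t_source)

def exergy_efficiency(w_out, q_heat_out, t_heat, q_solar_in, t_sun, t0):
    """Ratio of exergy delivered to solar exergy supplied."""
    exergy_out = w_out + exergy_of_heat(q_heat_out, t_heat, t0)
    exergy_in = exergy_of_heat(q_solar_in, t_sun, t0)
    return exergy_out / exergy_in

t0 = 303.15  # dead state at 30 C, as in the case study above
eta = exergy_efficiency(w_out=5e3, q_heat_out=20e3, t_heat=350.0,
                        q_solar_in=100e3, t_sun=5778.0, t0=t0)
print(f"{eta:.3f}")  # → 0.081
```

Because the solar input is evaluated at a very high source temperature, its Carnot factor is near one, which is why exergy efficiencies of solar systems are typically much lower than their energy efficiencies.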

  6. Microcanonical-ensemble computer simulation of the high-temperature expansion coefficients of the Helmholtz free energy of a square-well fluid

    Science.gov (United States)

    Sastre, Francisco; Moreno-Hilario, Elizabeth; Sotelo-Serna, Maria Guadalupe; Gil-Villegas, Alejandro

    2018-02-01

    The microcanonical-ensemble computer simulation method (MCE) is used to evaluate the perturbation terms A_i of the Helmholtz free energy of a square-well (SW) fluid. The MCE method offers a very efficient and accurate procedure for the determination of perturbation terms of discrete-potential systems such as the SW fluid and surpasses the standard NVT canonical ensemble Monte Carlo method, allowing the calculation of the first six expansion terms. Results are presented for the case of a SW potential with attractive ranges 1.1 ≤ λ ≤ 1.8. Using a semi-empirical representation of the MCE values for A_i, we also discuss the accuracy of the determination of the phase diagram of this system.

  7. Structural transformations of 3-fluoro and 3-fluoro-4-methoxy benzaldehydes under cryogenic conditions: A computational and low temperature infrared spectroscopy investigation

    Science.gov (United States)

    Ogruc Ildiz, G.; Konarska, J.; Fausto, R.

    2018-05-01

    Structural transformations of 3-fluorobenzaldehyde (C7H5FO; 3FBA) and 3-fluoro-4-methoxybenzaldehyde (C8H7FO2; 3F4MBA), taking place in different solid phase environments and at low temperature, were investigated by infrared spectroscopy, complemented by quantum chemistry calculations at the DFT(B3LYP)/6-311++G(d,p) level of approximation. The studied compounds were isolated from the gas phase into cryogenic inert matrices (Ar, Xe), allowing their equilibrium conformational composition in the gas phase at room temperature to be characterized. In both cases, two conformers differing in the orientation of the aldehyde moiety (with the carbonyl aldehyde bond cis or trans in relation to the aromatic ring fluorine substituent) were found to coexist, with the cis conformer slightly more populated than the trans form. In situ narrowband UV irradiation of the as-deposited matrices led either to preferential isomerization of the cis conformer into the trans form or to decarbonylation of both conformers, depending on the excitation wavelength used. Deposition of the vapours of 3F4MBA alone onto the cold (15 K) substrate produced an amorphous solid also containing both the cis and trans conformers of the compound. Subsequent heating of the amorphous phase up to 268 K led to crystallization of the compound, accompanied by conformational selection, the cis form being the single species present in the crystal. The experimentally observed transformations of the studied compounds, together with the structural and vibrational results obtained from the quantum chemical calculations, allowed a detailed structural and vibrational characterization of the individual conformers.

  8. Renewable energy-based electricity for rural social and economic development in Ghana

    Energy Technology Data Exchange (ETDEWEB)

    Weingart, J.

    1997-12-01

    This paper describes a project whose goals include the establishment of a pilot renewable energy-based rural energy services enterprise to serve communities in the Mamprusi East District, focused on: economically productive activities; community services; household non-thermal energy. The program also seeks to establish the technical, economic, financial, institutional, and socio-cultural requirements for sustainability, to demonstrate bankability and financial sustainability, as a pre-investment prelude to commercial growth of such projects, and to establish technical, financial, and service performance standards for private sector rural energy service companies. This project is being implemented now because the government is undergoing structural reform, including privatization of the power sector, there is active foreign capital available for international development, and the government and people are committed to and able to pay for renewable energy services.

  9. Light and Energy Based Therapeutics for Genitourinary Syndrome of Menopause: Consensus and Controversies

    Science.gov (United States)

    Tadir, Yona; Gaspar, Adrian; Lev-Sagie, Ahinoam; Alexiades, Macrene; Alinsod, Red; Bader, Alex; Calligaro, Alberto; Elias, Jorge A.; Gambaciani, Marco; Gaviria, Jorge E.; Iglesia, Cheryl B.; Selih-Martinec, Ksenija; Mwesigwa, Patricia L.; Ogrinc, Urska B.; Salvatore, Stefano; Scollo, Paolo; Zerbinati, Nicola; Nelson, John Stuart

    2018-01-01

    Gynecologists and plastic surgeons pioneered the application of lasers in medicine and surgery almost five decades ago, initially using them to treat cervical and vaginal pathologies. Ever since, energy-based devices have been deployed to treat pelvic pathologies and improve fertility. Recent technological developments have triggered an unprecedented wave of publications assessing the efficacy of fractional laser and radiofrequency treatment of the vaginal wall in reversing natural aging processes. Studies have shown that a certain degree of thermal energy deposited on the vaginal wall stimulates proliferation of the glycogen-enriched epithelium, neovascularization, and collagen formation in the lamina propria, and improves natural lubrication and control of urination. This review aimed to assess these data and to guide future research. A unique assembly of experts from around the globe compiled and edited this manuscript based on a thorough literature review and personal experience. Lasers Surg. Med. 49:137–159, 2017. PMID:28220946

  10. Detection of Human Impacts by an Adaptive Energy-Based Anisotropic Algorithm

    Directory of Open Access Journals (Sweden)

    Manuel Prado-Velasco

    2013-10-01

    Full Text Available Boosted by the health consequences and cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers, unobtrusiveness and reliability, defined the objectives of the research. We have demonstrated that a very agile, adaptive, energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that provides the adaptive capability, which is also presented. The work demonstrates the robustness and reliability of the new algorithm, which will form the basis of a smart fall monitor.
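The fixed-threshold core of such an energy-based detector can be sketched as follows. This is only the skeleton of the idea: the record's adaptive thresholding and on-line anisotropic weighting are not reproduced, and the window length, weights, and threshold below are hypothetical.

```python
# Sketch: detecting impacts from a tri-axial accelerometer by thresholding
# short-window signal energy. The per-axis weights make the energy measure
# anisotropic; here they are fixed rather than learned on-line.

def window_energy(ax, ay, az, wx=1.0, wy=1.0, wz=1.0):
    """Mean weighted squared acceleration over a window (units of g^2)."""
    return sum(wx * x * x + wy * y * y + wz * z * z
               for x, y, z in zip(ax, ay, az)) / len(ax)

def detect_impacts(samples, win=5, threshold=4.0):
    """Return start indices of windows whose energy exceeds the threshold."""
    hits = []
    for i in range(0, len(samples) - win + 1, win):
        ax, ay, az = zip(*samples[i:i + win])
        if window_energy(ax, ay, az) > threshold:
            hits.append(i)
    return hits

# Quiet signal (~1 g gravity on z) with a short impact burst injected.
signal = [(0.0, 0.0, 1.0)] * 50
signal[20:25] = [(3.0, 2.0, 4.0)] * 5   # impact: energy = 9+4+16 = 29 g^2
events = detect_impacts(signal)
print(events)  # → [20]
```

A real monitor would adapt `threshold` and the axis weights from recent activity, which is what the paper's unsupervised learning stage provides.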

  11. Simultaneous reconstruction of 3D refractive index, temperature, and intensity distribution of combustion flame by double computed tomography technologies based on spatial phase-shifting method

    Science.gov (United States)

    Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei

    2017-06-01

    In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifted technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras triggered by the internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. By using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed based on the radiation projections from the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.
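The last reconstruction step mentioned above, recovering temperature from refractive index via the Gladstone-Dale equation, reduces to a simple inversion once the ideal gas law is assumed. The sketch below uses standard constants for air at atmospheric pressure; it is a single-point illustration, not the paper's full tomographic pipeline.

```python
# Sketch: temperature from refractive index via the Gladstone-Dale
# relation n - 1 = K * rho, combined with the ideal gas law
# rho = p / (R_s * T), giving T = K * p / (R_s * (n - 1)).

K_GD = 2.26e-4      # Gladstone-Dale constant for air, m^3/kg (visible light)
R_SPECIFIC = 287.0  # specific gas constant of air, J/(kg K)
P_ATM = 101325.0    # ambient pressure, Pa

def temperature_from_index(n, k=K_GD, p=P_ATM, r_s=R_SPECIFIC):
    """Invert Gladstone-Dale + ideal gas for the local temperature (K)."""
    return k * p / (r_s * (n - 1.0))

# Ambient air at ~293 K has n ~ 1.000272; a hot flame region has a much
# smaller refractivity and hence a higher inferred temperature.
t_ambient = temperature_from_index(1.000272)
t_flame = temperature_from_index(1.000044)
print(round(t_ambient), round(t_flame))
```

In the paper this inversion is applied voxel by voxel to the refractive index field reconstructed from the moiré projections.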

  12. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

    Directory of Open Access Journals (Sweden)

    Pontil Massimiliano

    2009-10-01

    Full Text Available Abstract Background Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino acids are systematically mutated to alanine and changes in free energy of binding (ΔΔG) measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. Results We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and that it compares favourably to other available methods. In particular we find the best performance using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision and a recall of 56% and 65%, respectively. Conclusion We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been
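The shape of the hybrid scheme, energy terms in, learned classifier out, can be illustrated with a toy example. The paper uses SVMs and Gaussian Processes on real alanine-mutation data; here a tiny logistic regression on synthetic features stands in for both, and the "hidden rule" generating the labels is entirely made up.

```python
# Sketch of the hybrid scheme: energy terms (vdW, solvation, H-bond,
# Coulomb) are input features to a learned classifier labelling a residue
# as a hot spot (ddG >= 2 kcal/mol). Synthetic data, toy classifier.

import math
import random

def train_logistic(data, labels, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression; returns (weights, bias)."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = y - 1.0 / (1.0 + math.exp(-z))
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# Synthetic residues: features = (vdW, solvation, H-bond, Coulomb) scores;
# label 1 if the synthetic ddG of the alanine mutation is >= 2 kcal/mol.
rng = random.Random(1)
features, labels = [], []
for _ in range(200):
    x = [rng.uniform(0, 1) for _ in range(4)]
    ddg = 3.0 * x[0] + 1.5 * x[2] + rng.gauss(0.0, 0.2)  # invented rule
    features.append(x)
    labels.append(1 if ddg >= 2.0 else 0)

w, b = train_logistic(features, labels)
accuracy = sum(predict(x, w, b) == bool(y)
               for x, y in zip(features, labels)) / len(labels)
print(f"training accuracy: {accuracy:.2f}")
```

The point is structural: the classifier never sees ΔΔG directly, only the energy-term features, mirroring how the paper's models combine physical terms.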

  13. Dynamical calculation of nuclear temperature

    International Nuclear Information System (INIS)

    Zheng Yuming

    1998-01-01

    A new dynamical approach for measuring the temperature of a Hamiltonian dynamical system in the microcanonical ensemble of thermodynamics is presented. It shows that under the hypothesis of ergodicity the temperature can be computed as a time average of a function on the energy surface. This method not only yields an efficient computational approach for determining the temperature, but also provides an intrinsic link between dynamical system theory and the statistical mechanics of Hamiltonian systems.
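The simplest instance of "temperature as a time average along a trajectory" is the familiar kinetic estimator from equipartition, k_B·T = ⟨p²/m⟩ per degree of freedom; the record's specific energy-surface function may differ. A minimal sketch with a harmonic oscillator, in units where k_B = m = 1:

```python
# Sketch: estimating temperature as a time average along a Hamiltonian
# trajectory, using the kinetic estimator <p^2/m> = k_B * T.

def velocity_verlet(q, p, force, dt, steps):
    """Integrate dq/dt = p, dp/dt = force(q); yield (q, p) each step."""
    f = force(q)
    for _ in range(steps):
        p += 0.5 * dt * f
        q += dt * p
        f = force(q)
        p += 0.5 * dt * f
        yield q, p

# Harmonic oscillator H = p^2/2 + q^2/2 started with total energy E = 2.
# The time-averaged kinetic energy is E/2, so the kinetic temperature
# T = 2 * <KE> should converge to E = 2.
ke_sum = 0.0
n = 0
for q, p in velocity_verlet(q=2.0, p=0.0, force=lambda q: -q,
                            dt=0.01, steps=200_000):
    ke_sum += 0.5 * p * p
    n += 1
temperature = 2.0 * ke_sum / n
print(round(temperature, 2))  # → 2.0
```

For a single harmonic degree of freedom the microcanonical time average is exactly E/2, so the estimator converges to E; for generic many-body systems the same time average gives the instantaneous kinetic temperature.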

  14. Energy-Based Design Criterion of Dissipative Bracing Systems for the Seismic Retrofit of Frame Structures

    Directory of Open Access Journals (Sweden)

    Gloria Terenzi

    2018-02-01

    Full Text Available Direct sizing criteria represent useful tools in the design of dissipative bracing systems for the advanced seismic protection of existing frame structures, especially when the incorporated dampers feature markedly non-linear behaviour. An energy-based procedure is proposed herein to this aim, focusing on systems including fluid viscous devices. The procedure starts by assuming prefixed reduction factors of the most critical response parameters in current conditions, which are evaluated by means of a conventional elastic finite element analysis. Simple formulas relating the reduction factors to the equivalent viscous damping ratio of the dampers, ξeq, are proposed. These formulas allow calculating the ξeq values that guarantee the achievement of the target factors. Finally, the energy dissipation capacity of the devices is deduced from ξeq, finalizing their sizing process. A detailed description of the procedure is presented in the article, distinguishing cases where the prevailing structural deficiency is poor strength of the constituent members from cases with excessive horizontal displacements. A demonstrative application to the retrofit design of a reinforced concrete gym building is then offered to illustrate the steps of the sizing criterion in practice, as well as to evaluate the enhancement of the seismic response capacities generated by the installation of the dissipative system.

  15. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    Science.gov (United States)

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only works well with binary, or close to binary, state input, where the number of active states is smaller than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as by handling continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
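The FERL value function mentioned above has a closed form: for an RBM with visible biases a, hidden biases b and weights W, the negative free energy of a binary state s is Σᵢ aᵢsᵢ + Σⱼ log(1 + exp(bⱼ + Σᵢ Wᵢⱼsᵢ)). The sketch below evaluates it for a tiny hand-set network; in FERL the parameters are learned from TD errors, which is not shown here.

```python
# Sketch: the negative free energy of a restricted Boltzmann machine used
# as a value-function approximator, as in FERL. Weights are hand-set and
# purely illustrative.

import math

def rbm_negative_free_energy(state, w, visible_bias, hidden_bias):
    """V(s) = sum_i a_i s_i + sum_j log(1 + exp(b_j + w[:, j] . s))."""
    value = sum(a * s for a, s in zip(visible_bias, state))
    for j, b_j in enumerate(hidden_bias):
        activation = b_j + sum(w[i][j] * s for i, s in enumerate(state))
        value += math.log1p(math.exp(activation))
    return value

# 3 visible units, 2 hidden units.
w = [[0.5, -0.2],
     [0.1, 0.4],
     [-0.3, 0.2]]
a = [0.1, 0.0, -0.1]
b = [0.0, 0.1]
v_on = rbm_negative_free_energy([1, 1, 0], w, a, b)
v_off = rbm_negative_free_energy([0, 0, 0], w, a, b)
print(round(v_on, 3), round(v_off, 3))  # → 1.992 1.438
```

The EERL variant proposed in the record replaces this free energy with the negative expected energy, dropping the hidden-unit entropy contribution.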

  16. DC dynamic pull-in instability of a dielectric elastomer balloon: an energy-based approach

    Science.gov (United States)

    Sharma, Atul Kumar; Arora, Nitesh; Joglekar, M. M.

    2018-03-01

    This paper reports an energy-based method for the dynamic pull-in instability analysis of a spherical dielectric elastomer (DE) balloon subjected to a quasi-statically applied inflation pressure and a Heaviside step voltage across the balloon wall. The proposed technique relies on establishing the energy balance at the point of maximum stretch in an oscillation cycle, followed by the imposition of an instability condition for extracting the threshold parameters. The material models of the Ogden family are employed for describing the hyperelasticity of the balloon. The accuracy of the critical dynamic pull-in parameters is established by examining the saddle-node bifurcation in the transient response of the balloon obtained by integrating numerically the equation of motion, derived using the Euler-Lagrange equation. The parametric study brings out the effect of inflation pressure on the onset of the pull-in instability in the DE balloon. A quantitative comparison between the static and dynamic pull-in parameters at four different levels of the inflation pressure is presented. The results indicate that the dynamic pull-in instability gets triggered at electric fields that are lower than those corresponding to the static instability. The results of the present investigation can find potential use in the design and development of the balloon actuators subjected to transient loading. The method developed is versatile and can be used in the dynamic instability analysis of other conservative systems of interest.

  17. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    Science.gov (United States)

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task because of the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of a series with the background energy distribution, which is established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the former is easier to operate. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD, but such a series shows purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533
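The core idea, comparing per-level wavelet energy against a Monte-Carlo noise background, can be illustrated with a Haar decomposition. This is a simplified single-wavelet sketch with invented parameters, not the paper's method (no confidence-interval reconstruction, fixed 95th-percentile cutoff).

```python
# Sketch of energy-based wavelet de-noising: flag Haar detail levels whose
# mean energy exceeds the 95th percentile of white-noise surrogates.

import math
import random

def haar_detail_energies(x, levels):
    """Mean squared detail coefficient at each Haar decomposition level."""
    energies = []
    approx = list(x)
    for _ in range(levels):
        detail = [(approx[2*i] - approx[2*i+1]) / math.sqrt(2)
                  for i in range(len(approx) // 2)]
        approx = [(approx[2*i] + approx[2*i+1]) / math.sqrt(2)
                  for i in range(len(approx) // 2)]
        energies.append(sum(d * d for d in detail) / len(detail))
    return energies

def noise_background(n, levels, trials, sigma, rng):
    """95th percentile of per-level energy over white-noise surrogates."""
    samples = [[] for _ in range(levels)]
    for _ in range(trials):
        noise = [rng.gauss(0.0, sigma) for _ in range(n)]
        for lvl, e in enumerate(haar_detail_energies(noise, levels)):
            samples[lvl].append(e)
    return [sorted(s)[int(0.95 * trials)] for s in samples]

rng = random.Random(0)
n, levels = 256, 4
# Slow sinusoid (deterministic component) plus white noise.
series = [math.sin(2 * math.pi * i / 64) + rng.gauss(0.0, 0.3)
          for i in range(n)]
observed = haar_detail_energies(series, levels)
background = noise_background(n, levels, trials=200, sigma=0.3, rng=rng)
signal_levels = [lvl + 1 for lvl in range(levels)
                 if observed[lvl] > background[lvl]]
print(signal_levels)
```

Coarse levels, whose scale matches the sinusoid, stand out above the noise background, while the finest level's energy stays near it, which is exactly the separation the method exploits.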

  18. Energy-Based Adaptive Sliding Mode Speed Control for Switched Reluctance Motor Drive

    Directory of Open Access Journals (Sweden)

    M. M. Namazi Isfahani

    2012-03-01

    Full Text Available Torque ripple minimization of switched reluctance motor drives is a major subject given these drives' extensive use in industry. In this paper, using a well-known cascaded torque control structure and taking the machine's physical structure characteristics into account, the proposed energy-based (passivity-based) adaptive sliding algorithm is derived from the viewpoint of energy dissipation, control stability and algorithm robustness. First, a nonlinear dynamic model is developed and decomposed into separate slow and fast passive subsystems interconnected by negative feedbacks. Then, an outer-loop speed control is employed by an adaptive sliding controller to determine the appropriate torque command. Finally, to reduce torque ripple in the switched reluctance motor, a high-performance passivity-based current controller is proposed. It can overcome the inherent nonlinear characteristics of the system and make the whole system robust to uncertainties and bounded disturbances. The performance of the proposed control algorithm has been demonstrated in simulation and experimentally using a 4 kW, four-phase, 8/6-pole SRM DSP-based drive system.

  19. A low-cost approach to electronic excitation energies based on the driven similarity renormalization group

    Science.gov (United States)

    Li, Chenyang; Verma, Prakash; Hannon, Kevin P.; Evangelista, Francesco A.

    2017-08-01

    We propose an economical state-specific approach to evaluate electronic excitation energies based on the driven similarity renormalization group truncated to second order (DSRG-PT2). Starting from a closed-shell Hartree-Fock wave function, a model space is constructed that includes all single or single and double excitations within a given set of active orbitals. The resulting VCIS-DSRG-PT2 and VCISD-DSRG-PT2 methods are introduced and benchmarked on a set of 28 organic molecules [M. Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. Taking CC3 results as reference values, mean absolute deviations of 0.32 and 0.22 eV are observed for VCIS-DSRG-PT2 and VCISD-DSRG-PT2 excitation energies, respectively. Overall, VCIS-DSRG-PT2 yields results with accuracy comparable to those from time-dependent density functional theory using the B3LYP functional, while VCISD-DSRG-PT2 gives excitation energies comparable to those from equation-of-motion coupled cluster with singles and doubles.

  20. Optical Computing

    OpenAIRE

    Woods, Damien; Naughton, Thomas J.

    2008-01-01

    We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...

  1. Economic performance indicators of wind energy based on wind speed stochastic modeling

    International Nuclear Information System (INIS)

    D’Amico, Guglielmo; Petroni, Filippo; Prattico, Flavio

    2015-01-01

    Highlights: • We propose a new and different wind energy production indicator. • We compute financial profitability of potential wind power sites. • The wind speed process is modeled as an indexed semi-Markov chain. • We check if the wind energy is a good investment with and without incentives. - Abstract: We propose the computation of different wind energy production indicators and financial profitability of potential wind power sites. The computation is performed by modeling the wind speed process as an indexed semi-Markov chain to predict and simulate the wind speed dynamics. We demonstrate that the indexed semi-Markov chain approach enables reproducing the indicators calculated on real data. Two different time horizons of 15 and 30 years are analyzed. In the first case we consider the government incentives on the energy price now present in Italy, while in the second case the incentives have not been taken into account
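The pipeline described above, stochastic wind-speed model → turbine power curve → production indicator, can be sketched end to end. A plain first-order Markov chain over discrete speed states stands in for the paper's indexed semi-Markov model, and the transition matrix and power-curve parameters are hypothetical.

```python
# Sketch: simulate wind speed as a Markov chain over speed states, map it
# through a simplified turbine power curve, and compute a capacity factor.

import random

STATES = [2.0, 5.0, 8.0, 11.0, 14.0]   # wind speed states, m/s
TRANSITIONS = [                         # hypothetical row-stochastic matrix
    [0.6, 0.3, 0.1, 0.0, 0.0],
    [0.2, 0.5, 0.2, 0.1, 0.0],
    [0.1, 0.2, 0.4, 0.2, 0.1],
    [0.0, 0.1, 0.3, 0.4, 0.2],
    [0.0, 0.0, 0.2, 0.3, 0.5],
]

def turbine_power(v, rated_kw=2000.0, cut_in=3.0, rated_v=12.0, cut_out=25.0):
    """Simplified power curve: cubic between cut-in and rated speed."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3

def simulate_capacity_factor(hours, rng):
    state = 0
    energy = 0.0
    for _ in range(hours):
        state = rng.choices(range(len(STATES)), weights=TRANSITIONS[state])[0]
        energy += turbine_power(STATES[state])  # kWh over one hour
    return energy / (hours * 2000.0)

cf = simulate_capacity_factor(hours=50_000, rng=random.Random(42))
print(f"capacity factor: {cf:.2f}")
```

The indexed semi-Markov model of the paper improves on this by letting sojourn times and an index process capture the long memory of real wind-speed records; the downstream indicator computation is unchanged.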

  2. Performance assessment of a new solar energy-based multigeneration system

    International Nuclear Information System (INIS)

    Ozlu, Sinan; Dincer, Ibrahim

    2016-01-01

    In this study, a thermodynamic analysis is conducted on a multigeneration energy system based on renewable energy sources. The proposed system is developed for residential applications, including individual- and multi-building complexes, utilizing solar energy to produce useful outputs, namely electricity, heat, fresh water and hydrogen. Hydrogen is used to store energy to offset the mismatch between demand and supply that arises with renewables such as solar energy. The system is modeled thermodynamically to obtain the optimal energy and exergy efficiencies, and the heat and work outputs for the overall system. Moreover, greenhouse gas emissions caused by conventional energy systems providing the same outputs are calculated and compared with the studied system. A solar collector area of 24 m² is considered for the present system and its analysis. The maximum energy efficiency is 36% and the maximum exergy efficiency is 44%. The total work output for electricity is 116 kW, and hence the CO₂ reduction achieved by this system is 476 tons per year. It can produce 0.04 kg/s of desalinated water. The optimum number of suites, as an application for a building complex, that can be sustained with the proposed system is determined to be 106. - Highlights: • A solar energy based multigeneration system is proposed. • Energy and exergy efficiencies, heat and work outputs of the system are determined. • Optimization is done on efficiencies and work output. • Domestic heating, desalination, hydrogen and electricity are obtained. • Overall system energy and exergy efficiencies are found to be 36% and 44%.

  3. Mathematics, Physics and Computer Sciences Temperature ...

    African Journals Online (AJOL)

    Regional geothermal gradients range from 1.5 – 2.5 ºC/100 m in the Coastal depobelt and Offshore regions and increase northward to 4.5 ºC/100 m. Vertical geothermal gradients show a continuous but nonlinear variation with depth and increase with diminishing sand percentage from less than 1.0 ºC/100 m in the ...

  4. Computer group

    International Nuclear Information System (INIS)

    Bauer, H.; Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schati, C.; Schmidt, A.; Schwind, D.; Weber, G.

    1983-01-01

    The computer group has been reorganized to take charge of the general purpose computers DEC10 and VAX and the computer network (Dataswitch, DECnet, IBM - connections to GSI and IPP, preparation for Datex-P). (orig.)

  5. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  6. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  7. Numerical model and analysis of an energy-based system using microwaves for vision correction

    Science.gov (United States)

    Pertaub, Radha; Ryan, Thomas P.

    2009-02-01

    A treatment system was developed utilizing a microwave-based procedure capable of treating myopia and offering a less invasive alternative to laser vision correction without cutting the eye. Microwave thermal treatment elevates the temperature of the paracentral stroma of the cornea to create a predictable refractive change while preserving the epithelium and deeper structures of the eye. A pattern of shrinkage outside of the optical zone may be sufficient to flatten the central cornea. A numerical model was set up to investigate both the electromagnetic field and the resultant transient temperature distribution. A finite element model of the eye was created and the axisymmetric distribution of temperature calculated to characterize the combination of controlled power deposition combined with surface cooling to spare the epithelium, yet shrink the cornea, in a circularly symmetric fashion. The model variables included microwave power levels and pulse width, cooling timing, dielectric material and thickness, and electrode configuration and gap. Results showed that power is totally contained within the cornea and no significant temperature rise was found outside the anterior cornea, due to the near-field design of the applicator and limited thermal conduction with the short on-time. Target isothermal regions were plotted as a result of common energy parameters along with a variety of electrode shapes and sizes, which were compared. Dose plots showed the relationship between energy and target isothermic regions.

  8. Photon propagators at finite temperature

    International Nuclear Information System (INIS)

    Yee, J.H.

    1982-07-01

    We have used the real-time formalism to compute the one-loop finite temperature corrections to the photon self-energies in spinor and scalar QED. We show that, for a real photon, only the transverse components develop temperature-dependent masses, while, for an external static electromagnetic field applied to the finite temperature system, only the static electric field is screened by thermal fluctuations. After showing how to compute systematically the imaginary parts of the finite temperature Green functions, we have attempted to give a microscopic interpretation of the imaginary parts of the self-energies. (author)

  9. Computationally efficient prediction of area per lipid

    DEFF Research Database (Denmark)

    Chaban, Vitaly V.

    2014-01-01

    dynamics increases exponentially with respect to temperature. The dependence of APL on temperature is linear over the entire temperature range. I provide numerical evidence that the thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest...
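
The extrapolation idea in this record amounts to a linear fit: compute APL at elevated temperatures where the dynamics are fast, fit a line, and read off the value (and the expansion coefficient) at the physiological temperature. The APL values below are synthetic stand-ins, not simulation results:

```python
import numpy as np

T_high = np.array([330.0, 350.0, 370.0, 390.0])    # elevated temperatures, K
apl_high = np.array([0.655, 0.671, 0.688, 0.703])  # area per lipid, nm^2 (synthetic)

slope, intercept = np.polyfit(T_high, apl_high, 1)  # APL assumed linear in T
T_target = 310.0                                    # temperature of interest, K
apl_target = slope * T_target + intercept           # extrapolated APL
expansion = slope / apl_target                      # thermal expansion coefficient

print(f"APL({T_target:.0f} K) ~ {apl_target:.3f} nm^2, "
      f"alpha ~ {expansion:.2e} 1/K")
```

The linearity claim in the abstract is exactly what licenses this extrapolation; if APL were nonlinear in temperature, fitting only high-temperature points would bias the low-temperature estimate.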

  10. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  11. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design...... and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view......, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture....

  12. Quantum Computing

    OpenAIRE

    Scarani, Valerio

    1998-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  13. Application of the strain energy for fatigue life prediction (LCF) of metals by the energy-based criterion

    International Nuclear Information System (INIS)

    Shahram Shahrooi; Ibrahim Henk Metselaar; Zainul Huda; Ghezavati, H.R.

    2009-01-01

    Full text: In this study, the plastic strain energy under multiaxial fatigue conditions has been calculated with cyclic plasticity models from the stress-strain hysteresis loops. Then, using the results of these models, fatigue lives are predicted with an energy-based fatigue model and compared to experimental data. Moreover, a weighting factor on the shear plastic work is presented to decrease the life factors in the fatigue model. (author)

  14. Energy based source location by using acoustic emission for damage detection in steel and composite CNG tank

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Il Sik; Han, Byeong Hee; Park, Choon Su; Yoon, Dong Jin [Center for Safety Measurements, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2015-10-15

    Acoustic emission (AE) is an effective nondestructive test that uses the transient elastic waves generated by the rapid release of energy within a material to detect any further growth or expansion of existing defects. Over the past decades, the use of compressed natural gas (CNG) as an alternative fuel for vehicles has been increasing because of environmental issues. For this reason, the importance and necessity of detecting defects on a CNG fuel tank has also come to the fore. The conventional AE method used for source location is highly affected by the wave speed in the structure, which creates problems in inspecting a composite CNG fuel tank, because the speed and dispersion characteristics of the wave differ according to the direction of the structure and the laminated layers. In this study, both the conventional AE method and the energy-based contour map method were used for source location. The new method, based on a pre-acquired database, was used to overcome the limitation of damage localization in a composite CNG fuel tank specimen consisting of a steel liner cylinder overwrapped by GFRP. From the experimental results, it is observed that the damage localization is determined with a small error at all tested points using the energy-based contour map method, whereas the conventional AE method produced mis-locations or large errors at many tested points. Therefore, the energy-based contour map method used in this work is a more suitable technology for inspecting composite structures.
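
A map-based locator like the one described can be sketched as a nearest-match lookup: a pre-acquired database stores, for each calibration point on the tank surface, the fraction of event energy received at each sensor, and a new event is located by finding the best-matching entry, with no wave-speed assumption. The attenuation model below is a toy stand-in for measured calibration data, and the geometry is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.array([(x, y) for x in range(10) for y in range(10)], float)  # calibration points
sensors = np.array([[0.0, 0.0], [9.0, 0.0], [0.0, 9.0], [9.0, 9.0]])   # AE sensor positions

def energy_signature(pt):
    """Toy attenuation model standing in for measured calibration data."""
    d = np.linalg.norm(sensors - pt, axis=1)
    e = 1.0 / (1.0 + d**2)          # received energy decays with distance
    return e / e.sum()              # normalized energy distribution over sensors

database = np.array([energy_signature(p) for p in grid])  # pre-acquired D/B

def locate(measured):
    """Return the calibration point whose stored signature best matches."""
    errors = np.linalg.norm(database - measured, axis=1)
    return grid[errors.argmin()]

true_source = np.array([3.0, 7.0])
measured = energy_signature(true_source) * rng.normal(1.0, 0.02, 4)  # noisy event
measured /= measured.sum()
print(locate(measured))  # close to the true source
```

Because the lookup compares energy ratios rather than arrival times, direction-dependent wave speeds in the laminate drop out of the problem, which is the advantage the abstract claims for the contour map method.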

  15. A COMPUTATIONAL STUDY ON THE HYDROGEN-BONDED ...

    African Journals Online (AJOL)

    the complexes were optimized employing the AM1 method [14] in the GAUSSIAN 03 program [15] to obtain the binding energies. Based on the optimized geometries, the electronic spectra of the complexes were computed using the INDO/CIS method [16-22] to obtain the absorption wavelengths λ and oscillator strengths f.

  16. An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet

    Science.gov (United States)

    Jin, Weixia; Han, Jun

    2018-01-01

    Energy internet is a new way of using energy. The energy internet achieves energy efficiency and low cost by scheduling a variety of different forms of energy. Particle Swarm Optimization (PSO) is an advanced algorithm with few parameters, high computational precision and fast convergence. By tuning the parameters ω, c1 and c2, PSO can further improve its convergence speed and calculation accuracy. The objective of the optimization model is the lowest fuel cost that can meet the electricity, heat and cooling loads after all the renewable energy has been absorbed. Because the energy structure and prices differ between regions, the optimization strategy needs to be determined according to the algorithm and the model.
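
A minimal PSO sketch with the inertia weight ω (here `w`) and acceleration coefficients c1, c2 mentioned above; the fuel-cost objective is a toy two-generator dispatch with a soft load-balance penalty, not the paper's multi-energy model:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm minimizer over box-bounded variables."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))      # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()                 # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy dispatch: two generator outputs meeting a 100-unit load at least cost.
def fuel_cost(p):
    penalty = 1e3 * (p.sum() - 100.0) ** 2   # soft load-balance constraint
    return 0.02 * p[0]**2 + 0.05 * p[1]**2 + penalty

best, cost = pso(fuel_cost, (np.zeros(2), np.full(2, 100.0)))
print(best, cost)
```

With quadratic costs the analytic optimum splits the load so marginal costs match (here roughly 71/29), and the swarm converges to it; the inertia weight trades exploration against convergence speed, which is the tuning the abstract alludes to.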

  17. Demand Side Management Using the Internet of Energy Based on LoRaWAN Technology

    DEFF Research Database (Denmark)

    Shahryari, Kolsoom; Anvari-Moghaddam, Amjad; Shahryari, Shadi

    2017-01-01

    The smart grid, as a communication network, allows numerous connected devices such as sensors, relays and actuators to interact and cooperate with each other. An Internet-based solution for electricity that provides bidirectional flow of information and power is the internet of energy (IoE), which ... is an extension of the smart grid concept. A large number of connected devices and the huge amount of data generated by IoE, and issues related to data transmission, processing and storage, force IoE to be integrated with cloud computing. Furthermore, in order to enhance the performance and reduce the volume of transmitted ... into cloud because of a long range, low power, wide area and low bit rate wireless telecommunication system which is called LoRaWAN. All devices in the fog domain are connected by long range wide area network (LoRa) into a smart gateway. The gateway, which bridges the fog domain and the cloud, is introduced...

  18. Energy-based control for a biologically inspired hexapod robot with rolling locomotion

    Directory of Open Access Journals (Sweden)

    Takuma Nemoto

    2015-04-01

    Full Text Available This paper presents an approach to controlling rolling locomotion on level ground with a biologically inspired hexapod robot. For controlling rolling locomotion, a controller that can compensate the energy loss of the hexapod robot's rolling locomotion is designed based on its dynamic model. The dynamic model describes rolling locomotion restricted to the plane, under the assumption that the hexapod robot does not fall down while rolling, together with the influences of collision and contact with the ground; it is used both for computing the mechanical energy of the hexapod robot and as the plant in a numerical simulation. The numerical simulation of rolling locomotion on level ground verifies the effectiveness of the proposed controller: the simulation results show that the hexapod robot can perform rolling locomotion with the proposed controller. In conclusion, the proposed control approach is shown to be effective in achieving rolling locomotion on level ground.
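
The energy-compensation idea can be shown on a far simpler plant than the hexapod: a wheel rolling on level ground with viscous rolling loss, driven by a torque law u = k·(E_ref − E)·ω that injects energy whenever the kinetic energy E drops below a reference. All parameters are illustrative, not the robot's:

```python
# Hedged sketch of energy-compensation control on a rolling wheel.
m, r = 2.0, 0.15                  # mass (kg), radius (m), assumed
I = 0.5 * m * r**2                # wheel inertia about its axle
b = 0.05                          # viscous rolling-loss coefficient (assumed)
k_gain = 2.0                      # energy-feedback gain
omega_ref = 8.0                   # desired rolling rate, rad/s
E_ref = 0.5 * (I + m * r**2) * omega_ref**2   # reference kinetic energy (rolling)

omega, dt = 2.0, 1e-3
for _ in range(int(20.0 / dt)):   # 20 s of simulated rolling
    E = 0.5 * (I + m * r**2) * omega**2
    u = k_gain * (E_ref - E) * omega            # energy-shaping torque
    domega = (u - b * omega) / (I + m * r**2)   # rolling without slip
    omega += dt * domega

print(f"final omega = {omega:.2f} rad/s (target {omega_ref})")
```

At steady state the injected power balances the rolling loss, so the wheel settles just below the reference rate; the paper applies the same loss-compensation principle to the full hexapod dynamics.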

  19. A new model for commercially sustainable renewable energy-based rural electrification in Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Walt, Robb [Integrated Power Corporation-Indonesia, (United States)

    1995-12-31

    Rapidly increasing demands and requirements for access to electricity throughout the remote areas of Indonesia, coupled with annual subsidies in excess of $500 million for rural electrification, have forced the Government of Indonesia to search for alternatives to the conventional utility model for rural electrification. In 1992-1993 a study was conducted in collaboration with the Government of Indonesia's Agency for the Assessment and Application of Technology (BPPT) and the national power utility, PLN, to support the search for sustainable solutions for the electrification of remote communities. This study produced a new commercial model for the electrification of off-grid rural communities in Indonesia with utility-quality electricity services. This new model is characterized by the use of new technologies for power generation, distribution, and sales of electricity. Key to the success of the new model are renewable energy-based hybrid power plants and the use of flexible, on-demand electricity dispensing meters. Estimated fees for electricity service are based on the amounts currently paid by rural households for kerosene, candles and battery services at different income levels. The study showed that most rural households are willing and able to pay additional amounts for reliable, utility-grade electricity for valuable services, such as better lighting, TV entertainment and productive (economic) uses during daytime hours. A financial assessment was conducted for investments in hybrid power systems for off-grid communities, with revenues generated on the basis of market fees and collected through new technology for electricity purchase and prepayment on a commodity basis. The assessment demonstrates that this approach would provide superior electricity services on a full-time basis, with little or no subsidy required during the three- to five-year commercialization phase, and with profitability as an achievable goal in the full commercial phase.


  1. Computational Medicine

    DEFF Research Database (Denmark)

    Nygaard, Jens Vinge

    2017-01-01

    The Health Technology Program at Aarhus University applies computational biology to investigate the heterogeneity of tumours...

  2. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  3. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and more eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  4. Quantum computers and quantum computations

    International Nuclear Information System (INIS)

    Valiev, Kamil' A

    2005-01-01

    This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)

  5. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  6. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  7. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  8. Spatial Computation

    Science.gov (United States)

    2003-12-01

    Computation and today's microprocessors with the approach to operating system architecture, and the controversy between microkernels and monolithic kernels... Both Spatial Computation and microkernels break a relatively monolithic architecture into individual lightweight pieces, well specialized for their particular functionality. Spatial Computation removes global signals and control, in the same way microkernels remove the global address

  9. Temperature Pill

    Science.gov (United States)

    1988-01-01

    The Ingestible Thermal Monitoring System (ITMS) was developed at Johns Hopkins University as a means of getting internal temperature readings for the treatment of such emergency conditions as dangerously low (hypothermia) and dangerously high (hyperthermia) body temperatures. The ITMS is accurate to within one hundredth of a degree and provides the only means of obtaining deep body temperature. The system has additional applicability in fertility monitoring and some aspects of surgery, critical care obstetrics, metabolic disease treatment, gerontology (aging) and food processing research. The three-quarter-inch silicone capsule, which contains a telemetry system, micro battery, and quartz crystal temperature sensor, is inserted vaginally or rectally, or swallowed.

  10. Control of supply temperature

    Energy Technology Data Exchange (ETDEWEB)

    Madsen, H; Nielsen, T S; Soegaard, H T

    1996-09-01

    For many district heating systems, e.g. the system in Hoeje Taastrup, it is desirable to minimize the supply temperature from the heat production unit(s). A lower supply temperature implies lower costs in connection with the production and distribution of heat. Factors having an impact on the heat demand include solar radiation, wind speed, wind direction and a climate-independent part, which is a function of the time of the day/week/year. It is assumed that optimal economical operation can be obtained by minimizing the supply temperature and thereby the heat losses in the system. The models and methods described in this report take such aspects into account and can therefore be used as elements in a more efficient minimization of the supply temperature. The theoretical part of this report describes models and methods for optimal on-line control of the supply temperature in district heating systems. Some of the models and methods have been implemented - or are going to be implemented - in the computer program PRESS, which is a tool for optimal control of the supply temperature and forecasting of the heat demand in district heating systems. The principles of using transfer function models are briefly described. The ordinary generalized predictive control (OGPC) method is reviewed, and several extensions of this method are suggested. A new controller, called the extended generalized predictive controller (XGPC), is described. (EG) 57 refs.
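
The transfer-function modeling mentioned above can be sketched as an ARX regression: heat demand is explained by its own lag, a climate input, and a diurnal term, fitted by least squares and then used for one-step prediction. The data and coefficients below are synthetic, not from PRESS or the Hoeje Taastrup system:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 24 * 60                                       # 60 days of hourly samples
hour = np.arange(n) % 24
T_out = 5.0 + 8.0 * np.sin(2 * np.pi * (np.arange(n) - 6) / 24) + rng.normal(0, 1, n)

demand = np.empty(n)                              # synthetic "true" process
demand[0] = 50.0
for t in range(1, n):
    demand[t] = (0.8 * demand[t-1] - 1.5 * T_out[t]
                 + 3.0 * np.cos(2 * np.pi * hour[t] / 24) + 20.0
                 + rng.normal(0, 0.5))

# ARX regression: demand[t] ~ demand[t-1], T_out[t], diurnal terms, constant
X = np.column_stack([demand[:-1], T_out[1:],
                     np.cos(2 * np.pi * hour[1:] / 24),
                     np.sin(2 * np.pi * hour[1:] / 24),
                     np.ones(n - 1)])
theta, *_ = np.linalg.lstsq(X, demand[1:], rcond=None)
pred = X @ theta                                  # one-step-ahead predictions
rmse = np.sqrt(np.mean((demand[1:] - pred) ** 2))
print("coefficients:", np.round(theta, 2), "one-step RMSE:", round(rmse, 2))
```

The fitted coefficients recover the generating process, and the resulting demand forecast is the ingredient a predictive controller (OGPC/XGPC-style) needs in order to lower the supply temperature without violating demand constraints.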

  11. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  12. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  13. Quantum computation

    International Nuclear Information System (INIS)

    Deutsch, D.

    1992-01-01

    As computers become ever more complex, they inevitably become smaller. This leads to a need for components which are fabricated and operate on increasingly smaller size scales. Quantum theory is already taken into account in microelectronics design. This article explores how quantum theory will need to be incorporated into the design of future computers in order to give their components functionality. Computation tasks which depend on quantum effects will become possible. Physicists may have to reconsider their perspective on computation in the light of understanding developed in connection with universal quantum computers. (UK)

  14. Computer software.

    Science.gov (United States)

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  15. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  16. Energy Dissipation in Quantum Computers

    OpenAIRE

    Granik, A.; Chapline, G.

    2003-01-01

    A method is described for calculating the heat generated in a quantum computer due to loss of quantum phase information. Amazingly enough, this heat generation can take place at zero temperature, and it may explain why it is impossible to extract energy from vacuum fluctuations. Implications for optical computers and quantum cosmology are also briefly discussed.

  17. A new approach to the estimation of surface free energy based on Vickers microhardness data

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available A relation between the surface free energy (σMHV) and Meyer's line cut-values has been established using the Vickers microhardness (MHV) method and empirical physical laws. This relation allows the calculation of σMHV from MHV data alone. The parameters required are the Meyer's line cut-values and the mean value of the diagonal length of the impression at the different loads applied (d_real,mean). Our study of 12 samples of ultra-high-molecular-weight polyethylene (PE-UHMW) showed that the new approach is applicable when the slope of the Meyer's lines equals 2 (i.e. n = 2). A 60Co γ-source was used for the irradiation of 11 samples (one of the samples investigated is un-irradiated) at room temperature in air. Doses of 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000 and 1500 kGy were applied. The values of σMHV obtained are in good agreement with the literature. The dependence of σMHV on the applied dose corresponds closely to radiation effects theory. MHV was measured at seven different loads (0.0123, 0.0245, 0.049, 0.098, 0.196, 0.392 and 0.785 N) at a loading time of 30 s.
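
The Meyer's-line check described above can be sketched numerically: Meyer's law relates indentation load P and impression diagonal d as P = a·d^n, so log P versus log d is a straight line whose slope is the Meyer exponent n and whose intercept gives the cut-value. The loads are those listed in the record; the diagonals below are synthetic values for an ideal n = 2 material, not measured data:

```python
import numpy as np

P = np.array([0.0123, 0.0245, 0.049, 0.098, 0.196, 0.392, 0.785])  # loads, N
a_true = 6.0e9            # synthetic Meyer coefficient, N/m^2 (assumed)
d = np.sqrt(P / a_true)   # synthetic diagonals for an ideal n = 2 material, m

# Fit Meyer's line in log-log coordinates: log P = log a + n * log d
n_exp, log_a = np.polyfit(np.log(d), np.log(P), 1)
print(f"Meyer exponent n = {n_exp:.2f}, cut-value log a = {log_a:.2f}")
```

The abstract states that the σMHV relation applies only when the fitted slope equals 2, so in practice this regression serves as the admissibility test before the surface free energy is computed from the cut-value.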

  18. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    Science.gov (United States)

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
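
The decompose/scale/reconstruct loop at the heart of the method can be illustrated in one dimension: split a signal into bands, scale each band's RMS energy to a reference value, and add the bands back together. The paper applies a localized 2-D version with iterative refinement to chest radiographs; this global 1-D version, with a moving-average filter standing in for a proper band splitter, only illustrates the mechanics:

```python
import numpy as np

def smooth(sig, width):
    """Moving-average low-pass filter (crude stand-in for a band splitter)."""
    return np.convolve(sig, np.ones(width) / width, mode="same")

def normalize_bands(sig, widths, ref_rms):
    low, bands = sig.astype(float), []
    for w in widths:                    # successively coarser low-pass
        smoother = smooth(low, w)
        bands.append(low - smoother)    # band = detail removed at this scale
        low = smoother
    out = low                           # keep the residual low band as-is
    for band in bands:
        rms = np.sqrt(np.mean(band ** 2))
        if rms > 0:
            out = out + band * (ref_rms / rms)  # scale band energy to reference
    return out

rng = np.random.default_rng(3)
x = np.linspace(0, 8 * np.pi, 1024)
sig = np.sin(x) + 0.2 * rng.normal(size=x.size)  # stand-in for one image row
norm = normalize_bands(sig, widths=(3, 9, 27), ref_rms=0.1)
print(f"input rms {np.sqrt(np.mean(sig**2)):.3f}, "
      f"output rms {np.sqrt(np.mean(norm**2)):.3f}")
```

Forcing each band to a fixed reference energy is what makes images from different acquisition pipelines look alike to a downstream classifier, which is why the normalization improves the cross-dataset segmentation and detection results reported above.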

  19. Development of computational methods for the safety assessment of gas-cooled high-temperature and supercritical light-water reactors. Final report; Rechenmethoden zur Bewertung der Sicherheit von gasgekuehlten Hochtemperaturreaktoren und superkritischen Leichtwasserreaktoren. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Buchholz, S.; Cron, D. von der; Hristov, H.; Lerchl, G.; Papukchiev, A.; Seubert, A.; Sureda, A.; Weis, J.; Weyermann, F.

    2012-12-15

    This report documents developments and results in the frame of the project RS1191 "Development of computational methods for the safety assessment of gas-cooled high temperature and supercritical light-water reactors". The report is structured according to the five work packages: 1. Reactor physics modeling of gas-cooled high temperature reactors; 2. Coupling of reactor physics and 3-D thermal hydraulics for the core barrel; 3. Extension of ATHLET models for application to supercritical reactors (HPLWR); 4. Further development of ATHLET for application to HTR; 5. Further development and validation of ANSYS CFX for application to alternative reactor concepts. Chapter 4 describes the extensions made in TORT-TD related to the simulation of pebble-bed HTR, e.g. spectral zone buckling, Iodine-Xenon dynamics, nuclear decay heat calculation and extension of the cross section interpolation algorithms to higher dimensions. For fast running scoping calculations, a time-dependent 3-D diffusion solver has been implemented in TORT-TD. For the PBMR-268 and PBMR-400 as well as for the HTR-10 reactor, appropriate TORT-TD models have been developed. Few-group nuclear cross sections have been generated using the spectral codes MICROX-2 and DRAGON4. For verification and validation of nuclear cross sections and deterministic reactor models, MCNP models of the reactor core and control rod of the HTR-10 have been developed. Comparisons with experimental data have been performed for the HTR-10 first criticality and control rod worth. The development of the coupled 3-D neutron kinetics and thermal hydraulics code system TORT-TD/ATTICA3D is documented in chapter 5. Similar to the couplings with ATHLET and COBRA-TF, the "internal" coupling approach has been implemented. Regarding the review of experiments and benchmarks relevant to HTR for validation of the coupled code system, the PBMR-400 benchmarks and the HTR-10 test reactor have been selected.

  20. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routines for reading blocked tapes, dimension statements in subroutines, a general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  1. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  2. Computational biomechanics

    International Nuclear Information System (INIS)

    Ethier, C.R.

    2004-01-01

    Computational biomechanics is a fast-growing field that integrates modern biological techniques and computer modelling to solve problems of medical and biological interest. Modelling of blood flow in the large arteries is the best-known application of computational biomechanics, but there are many others. Described here is work being carried out in the laboratory on the modelling of blood flow in the coronary arteries and on the transport of viral particles in the eye. (author)

  3. Derivation of energy-based base shear force coefficient considering hysteretic behavior and P-delta effects

    Science.gov (United States)

    Ucar, Taner; Merter, Onur

    2018-01-01

    A modified energy-balance equation accounting for P-delta effects and the hysteretic behavior of reinforced concrete members is derived. Reduced hysteretic properties of structural components due to combined stiffness and strength degradation and pinching effects, as well as hysteretic damping, are taken into account in a simple manner by utilizing plastic energy and seismic input energy modification factors. With a pre-selected yield mechanism, the energy balance of the structure in the inelastic range is considered. P-delta effects are included in the derived equation by adding the external work of gravity loads to the work of equivalent inertia forces and equating the total external work to the modified plastic energy. The earthquake energy input to the multi-degree-of-freedom (MDOF) system is approximated by using modal energy decomposition. Energy-based base shear coefficients are verified by means of both pushover analysis and nonlinear time history (NLTH) analysis of several RC frames with different numbers of stories. NLTH analyses of the frames are performed using the time histories of ten scaled ground motions compatible with the elastic design acceleration spectrum and fulfilling the duration- and amplitude-related requirements of the Turkish Seismic Design Code. The observed correlation between the energy-based base shear force coefficients and the average base shear force coefficients of the NLTH analyses provides reasonable confidence in the estimation of the nonlinear base shear force capacity of frames using the derived equation.
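    In schematic form, the balance described above can be written as follows. All symbols here are assumed notation, since the abstract does not reproduce the equation itself:

```latex
% Schematic modified energy balance (assumed notation):
%   E_I : earthquake input energy,  E_e : elastic (recoverable) energy,
%   E_p : plastic energy dissipated by the pre-selected yield mechanism,
%   \gamma_1, \gamma_2 : input-energy and plastic-energy modification factors.
\gamma_1 E_I = E_e + \gamma_2 E_p
% With P-delta effects, the external work of the equivalent inertia
% forces F_i through displacements \delta_i is augmented by the work of
% the gravity loads P_i through the story drifts \Delta_i, and the total
% is equated to the modified plastic energy:
\sum_i F_i \, \delta_i + \sum_i P_i \, \Delta_i = E_e + \gamma_2 E_p
```

    Solving the second relation for the yield-level inertia forces of the chosen mechanism is what yields an energy-based base shear coefficient.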

  4. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what...... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...

  5. Analysis of energy-based algorithms for RNA secondary structure prediction

    Directory of Open Access Journals (Sweden)

    Hajiaghayi Monir

    2012-02-01

    Full Text Available Abstract Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range) with high confidence of the accuracy of a population of RNA molecules represented by this set. 
However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms
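    The bootstrap percentile method referred to above can be sketched as follows. This is a hedged illustration: the F-measure values are synthetic stand-ins, not the paper's data, and the resampling protocol is a generic percentile bootstrap of the mean.

```python
import random
import statistics

def bootstrap_percentile_ci(values, n_resamples=2000, alpha=0.05, seed=42):
    """Percentile-method bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    n = len(values)
    # Resample with replacement, record each resample's mean, and sort.
    means = sorted(
        statistics.fmean(rng.choices(values, k=n)) for _ in range(n_resamples)
    )
    lo_idx = int((alpha / 2.0) * n_resamples)
    hi_idx = int((1.0 - alpha / 2.0) * n_resamples) - 1
    return means[lo_idx], means[hi_idx]

# Hypothetical per-RNA F-measures for one prediction method (invented values):
rng = random.Random(0)
f_measures = [rng.uniform(0.3, 0.9) for _ in range(200)]
low, high = bootstrap_percentile_ci(f_measures)
```

    With a dataset of a couple of thousand structures the interval narrows considerably, which is the sense in which the average F-measure becomes a reliable population estimate; on a class of only 89 RNAs the same interval is far wider.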

  6. Computer automation of a dilution cryogenic system

    International Nuclear Information System (INIS)

    Nogues, C.

    1992-09-01

    This study has been realized in the framework of studies on developing new techniques for low-temperature detectors for neutrinos and dark matter. The principles of low-temperature physics and of helium-4 and dilution cryostats are first reviewed. The cryogenic system used and the techniques for low-temperature thermometry and regulation systems are then described. The computer automation of the dilution cryogenic system involves: numerical measurement of the parameter set (pressure, temperature, flow rate); computer-assisted operation of the cryostat and the pump bench; numerical regulation of pressure and temperature; and full automation of operation sequences, allowing the system to evolve from one state to another (temperature descent, for example).
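    Numerical temperature regulation of this kind is typically a software feedback loop. A minimal sketch, assuming a PI controller and a first-order thermal stage relaxing toward a 0.1 K bath; neither the controller type nor the plant model is specified in the record, so every constant below is invented:

```python
def regulate(setpoint, t0=4.2, kp=2.0, ki=0.5, dt=0.1, steps=500):
    """PI regulation of a hypothetical first-order thermal stage (kelvin)."""
    temp, integral = t0, 0.0
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        power = max(0.0, kp * error + ki * integral)  # a heater cannot cool
        # First-order plant: heater drive minus relaxation toward the bath.
        temp += dt * (power - 0.8 * (temp - 0.1))
    return temp

final = regulate(setpoint=1.0)  # settles near the 1 K setpoint
```

    A state sequence such as a temperature descent is then just a scheduled series of setpoints handed to the same loop.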

  7. Temperature metrology

    Science.gov (United States)

    Fischer, J.; Fellmuth, B.

    2005-05-01

    The majority of the processes used by the manufacturing industry depend upon the accurate measurement and control of temperature. Thermal metrology is also a key factor affecting the efficiency and environmental impact of many high-energy industrial processes, the development of innovative products and the health and safety of the general population. Applications range from the processing, storage and shipment of perishable foodstuffs and biological materials to the development of more efficient and less environmentally polluting combustion processes for steel-making. Accurate measurement and control of temperature is, for instance, also important in areas such as the characterization of new materials used in the automotive, aerospace and semiconductor industries. This paper reviews the current status of temperature metrology. It starts with the determination of thermodynamic temperatures required on principle because temperature is an intensive quantity. Methods to determine thermodynamic temperatures are reviewed in detail to introduce the underlying physical basis. As these methods cannot usually be applied for practical measurements the need for a practical temperature scale for day-to-day work is motivated. The International Temperature Scale of 1990 and the Provisional Low Temperature Scale PLTS-2000 are described as important parts of the International System of Units to support science and technology. Its main importance becomes obvious in connection with industrial development and international markets. Every country is strongly interested in unique measures, in order to guarantee quality, reproducibility and functionability of products. The eventual realization of an international system, however, is only possible within the well-functioning organization of metrological laboratories. In developed countries the government established scientific institutes have certain metrological duties, as, for instance, the maintenance and dissemination of national

  8. Temperature metrology

    International Nuclear Information System (INIS)

    Fischer, J; Fellmuth, B

    2005-01-01

    The majority of the processes used by the manufacturing industry depend upon the accurate measurement and control of temperature. Thermal metrology is also a key factor affecting the efficiency and environmental impact of many high-energy industrial processes, the development of innovative products and the health and safety of the general population. Applications range from the processing, storage and shipment of perishable foodstuffs and biological materials to the development of more efficient and less environmentally polluting combustion processes for steel-making. Accurate measurement and control of temperature is, for instance, also important in areas such as the characterization of new materials used in the automotive, aerospace and semiconductor industries. This paper reviews the current status of temperature metrology. It starts with the determination of thermodynamic temperatures required on principle because temperature is an intensive quantity. Methods to determine thermodynamic temperatures are reviewed in detail to introduce the underlying physical basis. As these methods cannot usually be applied for practical measurements the need for a practical temperature scale for day-to-day work is motivated. The International Temperature Scale of 1990 and the Provisional Low Temperature Scale PLTS-2000 are described as important parts of the International System of Units to support science and technology. Its main importance becomes obvious in connection with industrial development and international markets. Every country is strongly interested in unique measures, in order to guarantee quality, reproducibility and functionability of products. The eventual realization of an international system, however, is only possible within the well-functioning organization of metrological laboratories. In developed countries the government established scientific institutes have certain metrological duties, as, for instance, the maintenance and dissemination of national

  9. GPGPU COMPUTING

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

    Full Text Available Since the first idea of using GPUs for general purpose computing, things have evolved over the years and now there are several approaches to GPU programming. GPU computing practically began with the introduction of CUDA (Compute Unified Device Architecture) by NVIDIA and Stream by AMD. These are APIs designed by the GPU vendors to be used together with the hardware that they provide. A new emerging standard, OpenCL (Open Computing Language), tries to unify different GPU general computing API implementations and provides a framework for writing programs executed across heterogeneous platforms consisting of both CPUs and GPUs. OpenCL provides parallel computing using task-based and data-based parallelism. In this paper we will focus on the CUDA parallel computing architecture and programming model introduced by NVIDIA. We will present the benefits of the CUDA programming model. We will also compare the two main approaches, CUDA and AMD APP (Stream), and the new framework, OpenCL, that tries to unify the GPGPU computing models.

  10. Quantum Computing

    Indian Academy of Sciences (India)

    Quantum Computing - Building Blocks of a Quantum Computer. C S Vijay and Vishal Gupta. General Article, Resonance – Journal of Science Education, Volume 5, Issue 9, September 2000, pp 69-81.

  11. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  12. Quantum Computing

    Indian Academy of Sciences (India)

    In the first part of this article, we had looked at how quantum physics can be harnessed to make the building blocks of a quantum computer. In this concluding part, we look at algorithms which can exploit the power of this computational device, and some practical difficulties in building such a device. Quantum Algorithms.

  13. Quantum computing

    OpenAIRE

    Burba, M.; Lapitskaya, T.

    2017-01-01

    This article gives an elementary introduction to quantum computing. It is a draft for a book chapter of the "Handbook of Nature-Inspired and Innovative Computing", Eds. A. Zomaya, G.J. Milburn, J. Dongarra, D. Bader, R. Brent, M. Eshaghian-Wilner, F. Seredynski (Springer, Berlin Heidelberg New York, 2006).

  14. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  15. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  16. Computability theory

    CERN Document Server

    Weber, Rebecca

    2012-01-01

    What can we compute--even with unlimited resources? Is everything within reach? Or are computations necessarily drastically limited, not just in practice, but theoretically? These questions are at the heart of computability theory. The goal of this book is to give the reader a firm grounding in the fundamentals of computability theory and an overview of currently active areas of research, such as reverse mathematics and algorithmic randomness. Turing machines and partial recursive functions are explored in detail, and vital tools and concepts including coding, uniformity, and diagonalization are described explicitly. From there the material continues with universal machines, the halting problem, parametrization and the recursion theorem, and thence to computability for sets, enumerability, and Turing reduction and degrees. A few more advanced topics round out the book before the chapter on areas of research. The text is designed to be self-contained, with an entire chapter of preliminary material including re...

  17. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Full Text Available Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber

  18. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    Full Text Available Abstract. The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, actively discussed over the last decade in the foreign scientific and educational literature, to substantiate its importance and practical utility, and to argue for its place in Russian education. Methods. The research is based on the analysis of foreign studies of the phenomenon of computational thinking and the ways of its formation in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept of «computational thinking» is analyzed from the point of view of intuitive understanding as well as scientific and applied aspects. It is shown how computational thinking has evolved with the development of computer hardware and software. The practice-oriented interpretation of computational thinking dominant among educators is described, along with some ways of its formation. It is shown that computational thinking is a metasubject result of general education as well as its tool. In the author's view, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described, a process connected with the evolution of computer and information technologies and with the growing number of tasks whose effective solution requires computational thinking. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. A new metasubject result of education associated with

  19. Energy-Based Yield Criteria for Orthotropic Materials, Exhibiting Strength-Differential Effect. Specification for Sheets under Plane Stress State

    Directory of Open Access Journals (Sweden)

    Szeptyński P.

    2017-06-01

    Full Text Available A general proposition of an energy-based limit condition for anisotropic materials exhibiting the strength-differential effect (SDE), based on spectral decomposition of elasticity tensors and the use of scaling pressure-dependent functions, is specified for the case of orthotropic materials. A detailed algorithm (based on classical solutions of cubic equations) for the determination of the elastic eigenstates and eigenvalues of the orthotropic stiffness tensor is presented. A yield condition is formulated for both two-dimensional and three-dimensional cases. Explicit formulas based on simple strength tests are derived for the parameters of the criterion in the plane case. The application of both criteria for the description of yielding and plastic deformation of metal sheets is discussed in detail. The plane case criterion is verified with experimental results from the literature.

  20. Computer interfacing

    CERN Document Server

    Dixey, Graham

    1994-01-01

    This book explains how computers interact with the world around them and therefore how to make them a useful tool. Topics covered include descriptions of all the components that make up a computer, principles of data exchange, interaction with peripherals, serial communication, input devices, recording methods, computer-controlled motors, and printers.In an informative and straightforward manner, Graham Dixey describes how to turn what might seem an incomprehensible 'black box' PC into a powerful and enjoyable tool that can help you in all areas of your work and leisure. With plenty of handy

  1. Computational physics

    CERN Document Server

    Newman, Mark

    2013-01-01

    A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.

  2. Computational physics

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-01-15

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October.

  3. Cloud Computing

    CERN Document Server

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  4. Computational Viscoelasticity

    CERN Document Server

    Marques, Severino P C

    2012-01-01

    This text is a guide to solving problems in which viscoelasticity is present using existing commercial computational codes. The book gives information on the codes' structure and use, data preparation, and output interpretation and verification. The first part of the book introduces the reader to the subject and provides the models, equations, and notation to be used in the computational applications. The second part presents the most important computational techniques, finite element formulation and boundary element formulation, and gives solutions of viscoelastic problems with Abaqus.

  5. Optical computing.

    Science.gov (United States)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  6. Computational physics

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October

  7. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot...... encompass human concepts of subjective experience and intersubjective meaningful communication, which prevents it from being genuinely transdisciplinary. (3) Philosophically, it does not sufficiently accept the deep ontological differences between various paradigms such as von Foerster’s second-order

  8. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms: Learning Objectives, Preamble, High-Performance Computing, Parallel Computing, Distributed Computing, Cluster Computing, Grid Computing, Cloud Computing, Biocomputing, Mobile Computing, Quantum Computing, Optical Computing, Nanocomputing, Network Computing, Summary, Review Points, Review Questions, Further Reading; Cloud Computing Fundamentals: Learning Objectives, Preamble, Motivation for Cloud Computing, The Need for Cloud Computing, Defining Cloud Computing, NIST Definition of Cloud Computing, Cloud Computing Is a Service, Cloud Computing Is a Platform, 5-4-3 Principles of Cloud Computing, Five Essential Charact

  9. Personal Computers.

    Science.gov (United States)

    Toong, Hoo-min D.; Gupta, Amar

    1982-01-01

    Describes the hardware, software, applications, and current proliferation of personal computers (microcomputers). Includes discussions of microprocessors, memory, output (including printers), application programs, the microcomputer industry, and major microcomputer manufacturers (Apple, Radio Shack, Commodore, and IBM). (JN)

  10. Computational Literacy

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Robering, Klaus

    2016-01-01

    In recent years, there has been a growing interest in and recognition of the importance of Computational Literacy, a skill generally considered to be necessary for success in the 21st century. While much research has concentrated on requirements, tools, and teaching methodologies for the acquisition of Computational Literacy at basic educational levels, focus on higher levels of education has been much less prominent. The present paper considers the case of courses for higher education programs within the Humanities. A model is proposed which conceives of Computational Literacy as a layered...

  11. Computing Religion

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Braxton, Donald M.; Upal, Afzal

    2012-01-01

    The computational approach has become an invaluable tool in many fields that are directly relevant to research in religious phenomena. Yet the use of computational tools is almost absent in the study of religion. Given that religion is a cluster of interrelated phenomena and that research concerning these phenomena should strive for multilevel analysis, this article argues that the computational approach offers new methodological and theoretical opportunities to the study of religion. We argue that the computational approach offers (1) an intermediary step between any theoretical construct and its targeted empirical space and (2) a new kind of data which allows the researcher to observe abstract constructs, estimate likely outcomes, and optimize empirical designs. Because sophisticated multilevel research is a collaborative project, we also seek to introduce to scholars of religion some...

  12. Computational Controversy

    NARCIS (Netherlands)

    Timmermans, Benjamin; Kuhn, Tobias; Beelen, Kaspar; Aroyo, Lora

    2017-01-01

    Climate change, vaccination, abortion, Trump: Many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have

  13. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    emergence of supercomputers led to the use of computer simulation as an .... Scientific and engineering applications (e.g., Tera grid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical Layer.

  14. Computer tomographs

    International Nuclear Information System (INIS)

    Niedzwiedzki, M.

    1982-01-01

    Physical foundations and developments in transmission and emission computer tomography are presented. On the basis of the available literature and private communications, a comparison is made of the various transmission tomographs. A new technique of emission computer tomography (ECT), unknown in Poland, is described. An evaluation of two methods of ECT, namely positron and single photon emission tomography, is made. (author)

  15. Computational sustainability

    CERN Document Server

    Kersting, Kristian; Morik, Katharina

    2016-01-01

    The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.

  16. Computing farms

    International Nuclear Information System (INIS)

    Yeh, G.P.

    2000-01-01

    High-energy physics, nuclear physics, space sciences, and many other fields have large challenges in computing. In recent years, PCs have achieved performance comparable to the high-end UNIX workstations, at a small fraction of the price. We review the development and broad applications of commodity PCs as the solution to CPU needs, and look forward to the important and exciting future of large-scale PC computing

  17. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, both of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  18. Computational creativity

    Directory of Open Access Journals (Sweden)

    López de Mántaras Badia, Ramon

    2013-12-01

    New technologies, and in particular artificial intelligence, are drastically changing the nature of creative processes. Computers are playing very significant roles in creative activities such as music, architecture, fine arts, and science. Indeed, the computer is already a canvas, a brush, a musical instrument, and so on. However, we believe that we must aim at more ambitious relations between computers and creativity. Rather than just seeing the computer as a tool to help human creators, we could see it as a creative entity in its own right. This view has triggered a new subfield of Artificial Intelligence called Computational Creativity. This article addresses the question of the possibility of achieving computational creativity through some examples of computer programs capable of replicating some aspects of creative behavior in the fields of music and science.

  19. temperature overspecification

    Directory of Open Access Journals (Sweden)

    Mehdi Dehghan

    2001-01-01

    Two different finite difference schemes for solving the two-dimensional parabolic inverse problem with temperature overspecification are considered. These schemes are developed for identifying the control parameter which produces, at any given time, a desired temperature distribution at a given point in the spatial domain. The numerical methods discussed are based on the (3,3) alternating direction implicit (ADI) finite difference scheme and the (3,9) alternating direction implicit formula. These schemes are unconditionally stable. The basis of analysis of the finite difference equations considered here is the modified equivalent partial differential equation approach, developed from the 1974 work of Warming and Hyett [17]. This allows direct and simple comparison of the errors associated with the equations, as well as providing a means to develop more accurate finite difference schemes. These schemes use less central processor time than the fully implicit schemes for two-dimensional diffusion with temperature overspecification. The alternating direction implicit schemes developed in this report use more CPU time than the fully explicit finite difference schemes, but their unconditional stability is significant. The results of numerical experiments are presented, and the accuracy and central processor (CPU) times needed for each of the methods are discussed. We also give error estimates in the maximum norm for each of these methods.
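To illustrate the scheme family the record discusses (this is a generic sketch, not the authors' schemes), here is a minimal Peaceman–Rachford ADI step for the 2-D heat equation on a uniform grid with zero Dirichlet boundaries; the grid size, time step, and the tridiagonal (Thomas) solver are all assumptions of the sketch.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a, diagonal b, super-diagonal c."""
    n = len(d)
    cp = np.zeros(n); dp = np.zeros(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy, zero Dirichlet BC.
    r = dt / (2 h^2); each half step is implicit in one direction only."""
    n = u.shape[0]
    a = np.full(n - 2, -r); b = np.full(n - 2, 1 + 2 * r); c = np.full(n - 2, -r)
    half = np.zeros_like(u)
    # Sweep 1: implicit in x, explicit in y.
    for j in range(1, n - 1):
        rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        half[1:-1, j] = thomas(a, b, c, rhs)
    new = np.zeros_like(u)
    # Sweep 2: implicit in y, explicit in x.
    for i in range(1, n - 1):
        rhs = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1] + half[i + 1, 1:-1])
        new[i, 1:-1] = thomas(a, b, c, rhs)
    return new
```

Each full step costs only a sequence of tridiagonal solves, which is the efficiency advantage over fully implicit schemes noted in the abstract; the scheme remains stable even for time-step ratios far beyond any explicit stability limit.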

  20. Quantum computing

    International Nuclear Information System (INIS)

    Steane, Andrew

    1998-01-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. 
The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from
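The EPR–Bell correlations mentioned above can be reproduced in a few lines of linear algebra. This sketch (not part of the review) builds the Bell state |Φ+⟩ and evaluates the CHSH combination of spin correlations, which reaches 2√2 rather than the classical bound of 2:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2) as a 4-component state vector.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

def spin(theta):
    """Spin measurement operator cos(theta) Z + sin(theta) X (eigenvalues +/-1)."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(a, b):
    """Correlation <A(a) tensor B(b)> in the Bell state; equals cos(a - b)."""
    op = np.kron(spin(a), spin(b))
    return phi @ op @ phi

# CHSH combination for the standard angle choices:
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)   # quantum value 2*sqrt(2)
```

No local hidden-variable model can exceed |S| = 2; the computed value 2√2 is the signature of the entanglement that the review identifies as the essential quantum resource.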

  1. Quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Steane, Andrew [Department of Atomic and Laser Physics, University of Oxford, Clarendon Laboratory, Oxford (United Kingdom)

    1998-02-01

    The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. 
The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from

  2. Multiparty Computations

    DEFF Research Database (Denmark)

    Dziembowski, Stefan

    In this thesis we study the problem of doing Verifiable Secret Sharing (VSS) and Multiparty Computations in a model where private channels between the players and a broadcast channel are available. The adversary is active, adaptive and has unbounded computing power. The thesis is based on two... Up to a polynomial time black-box reduction, the complexity of adaptively secure VSS is the same as that of ordinary secret sharing (SS), where security is only required against a passive, static adversary. Previously, such a connection was only known for linear secret sharing and VSS schemes. We then show... here and discuss other problems caused by the adaptiveness. All protocols in the thesis are formally specified and the proofs of their security are given. [1] Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, and Tal Rabin. Efficient multiparty computations with dishonest minority...
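For readers unfamiliar with the ordinary secret sharing (SS) primitive the thesis relates VSS to, here is a minimal (non-verifiable) Shamir secret-sharing sketch over a prime field; the field size and function names are invented for illustration:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i.
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

Any k-1 shares reveal nothing about the secret; VSS, the thesis's subject, strengthens this primitive so that a correct sharing can be verified even when the dealer or some players misbehave.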

  3. Scientific computing

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...
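One of the book's themes, measuring the attainable accuracy and the order of competing methods, can be demonstrated with a tiny experiment (a hypothetical example, not taken from the book) comparing first-order forward and second-order central differences:

```python
import math

def forward_diff(f, x, h):
    # First-order accurate: error shrinks like O(h).
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Second-order accurate: error shrinks like O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)   # derivative of sin at x = 1
err_f = {h: abs(forward_diff(math.sin, 1.0, h) - exact) for h in (1e-1, 1e-2)}
err_c = {h: abs(central_diff(math.sin, 1.0, h) - exact) for h in (1e-1, 1e-2)}
# Shrinking h by 10x cuts the forward error ~10x but the central error ~100x.
```

Measuring how the error responds to the step size is exactly the kind of empirical order-of-accuracy check the book advocates for choosing between methods.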

  4. Computational Psychiatry

    Science.gov (United States)

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. To achieve this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  5. A computer program to estimate air temperature for the Northeast region of Brazil (Programa computacional para a estimativa da temperatura do ar para a região Nordeste do Brasil)

    Directory of Open Access Journals (Sweden)

    Enilson P. Cavalcanti

    2006-03-01

    The objective of this research was to establish a model to estimate air temperature (Estima_T) as a function of geographical coordinates and Sea Surface Temperature Anomalies (SSTA). The mean monthly time series of air temperature (daily mean, minimum and maximum) from 69 weather stations of Northeast Brazil (NEB) and SSTA of the Tropical Atlantic Ocean were analyzed. The Estima_T model was able to reconstruct air temperature time series with reasonable precision over the whole NEB. The results showed statistically significant correlations at the 1% probability level between observed air temperatures and those estimated by the model throughout the study area.
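A model of this general form can be sketched as an ordinary least-squares regression. Everything below (coefficients, coordinate ranges, noise level) is synthetic, chosen only to mirror the structure of regressing air temperature on geographic coordinates and an SST anomaly; it is not the Estima_T model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
lat = rng.uniform(-18, -1, n)      # synthetic Northeast-Brazil latitudes (deg)
lon = rng.uniform(-48, -35, n)     # synthetic longitudes (deg)
ssta = rng.normal(0.0, 0.5, n)     # synthetic SST anomaly (deg C)

# Assumed "true" coefficients used only to generate the synthetic data.
true = np.array([28.0, 0.3, 0.05, 1.2])
T = true[0] + true[1] * lat + true[2] * lon + true[3] * ssta + rng.normal(0, 0.1, n)

# Ordinary least squares: T ~ b0 + b1*lat + b2*lon + b3*SSTA.
X = np.column_stack([np.ones(n), lat, lon, ssta])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
T_hat = X @ coef
corr = np.corrcoef(T, T_hat)[0, 1]   # observed vs estimated correlation
```

The correlation between observed and fitted temperatures plays the role of the model-validation statistic reported in the abstract.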

  6. Computational artifacts

    DEFF Research Database (Denmark)

    Schmidt, Kjeld; Bansler, Jørgen P.

    2016-01-01

    The key concern of CSCW research is that of understanding computing technologies in the social context of their use, that is, as integral features of our practices and our lives, and to think of their design and implementation under that perspective. However, the question of the nature of that which is actually integrated in our practices is often discussed in confusing ways, if at all. The article aims to clarify the issue and in doing so revisits and reconsiders the notion of ‘computational artifact’.

  7. Computer security

    CERN Document Server

    Gollmann, Dieter

    2011-01-01

    A completely up-to-date resource on computer security Assuming no previous experience in the field of computer security, this must-have book walks you through the many essential aspects of this vast topic, from the newest advances in software and technology to the most recent information on Web applications security. This new edition includes sections on Windows NT, CORBA, and Java and discusses cross-site scripting and JavaScript hacking as well as SQL injection. Serving as a helpful introduction, this self-study guide is a wonderful starting point for examining the variety of competing sec

  8. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope is hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  9. Computational Logistics

    DEFF Research Database (Denmark)

    Pacino, Dario; Voss, Stefan; Jensen, Rune Møller

    2013-01-01

    This book constitutes the refereed proceedings of the 4th International Conference on Computational Logistics, ICCL 2013, held in Copenhagen, Denmark, in September 2013. The 19 papers presented in this volume were carefully reviewed and selected for inclusion in the book. They are organized in topical sections named: maritime shipping, road transport, vehicle routing problems, aviation applications, and logistics and supply chain management.

  11. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  12. Computer busses

    CERN Document Server

    Buchanan, William

    2000-01-01

    As more and more equipment is interface- or 'bus'-driven, either by the use of controllers or directly from PCs, the question of which bus to use is becoming increasingly important, both in industry and in the office. 'Computer Busses' has been designed to help choose the best type of bus for the particular application. There are several books which cover individual busses, but none which provide a complete guide to computer busses. The author provides a basic theory of busses and draws examples and applications from real bus case studies. Busses are analysed using a top-down approach, helpin

  13. Reconfigurable Computing

    CERN Document Server

    Cardoso, Joao MP

    2011-01-01

    As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a comp

  14. Low fingertip temperature rebound measured by digital thermal monitoring strongly correlates with the presence and extent of coronary artery disease diagnosed by 64-slice multi-detector computed tomography.

    Science.gov (United States)

    Ahmadi, Naser; Nabavi, Vahid; Nuguri, Vivek; Hajsadeghi, Fereshteh; Flores, Ferdinand; Akhtar, Mohammad; Kleis, Stanley; Hecht, Harvey; Naghavi, Morteza; Budoff, Matthew

    2009-10-01

    Previous studies showed strong correlations between low fingertip temperature rebound measured by digital thermal monitoring (DTM) during a 5 min arm-cuff induced reactive hyperemia and both the Framingham Risk Score (FRS) and coronary artery calcification (CAC) in asymptomatic populations. This study evaluates the correlation between DTM and coronary artery disease (CAD) measured by CT angiography (CTA) in symptomatic patients. It also investigates the correlation between CTA and a new index of neurovascular reactivity measured by DTM. 129 patients, age 63 ± 9 years, 68% male, underwent DTM, CAC and CTA. Adjusted DTM indices in the occluded arm were calculated: temperature rebound (aTR) and area under the temperature curve (aTMP-AUC). The DTM neurovascular reactivity (NVR) index was measured based on increased fingertip temperature in the non-occluded arm. Obstructive CAD was defined as ≥50% luminal stenosis, and normal as no stenosis and CAC = 0. Baseline fingertip temperature was not different across the groups. However, all DTM indices of vascular and neurovascular reactivity significantly decreased from normal to non-obstructive to obstructive CAD (aTR: 1.77 ± 1.18 to 1.24 ± 1.14 to 0.94 ± 0.92, P = 0.009; aTMP-AUC: 355.6 ± 242.4 to 277.4 ± 182.4 to 184.4 ± 171.2, P = 0.001; NVR: 161.5 ± 147.4 to 77.6 ± 88.2 to 48.8 ± 63.8, P = 0.015). After adjusting for risk factors, the odds ratios for obstructive CAD compared to normal in the lowest versus two upper tertiles of FRS, aTR, aTMP-AUC, and NVR were 2.41 (1.02-5.93), P = 0.05; 8.67 (2.6-9.4), P = 0.001; 11.62 (5.1-28.7), P = 0.001; and 3.58 (1.09-11.69), P = 0.01, respectively. DTM indices and FRS combined resulted in a ROC curve area of 0.88 for the prediction of obstructive CAD. In patients suspected of CAD, low fingertip temperature rebound measured by DTM significantly predicted CTA-diagnosed obstructive disease.
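Odds ratios of the kind reported above are computed from a 2×2 exposure-outcome table. This sketch uses invented counts (not the study's data) and a standard Wald interval on the log scale:

```python
import math

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio for a 2x2 table, with a 95% Wald confidence interval."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                   + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

For example, `odds_ratio(20, 10, 10, 20)` (hypothetical counts) gives an odds ratio of 4.0 with an interval that excludes 1, the same reporting format as the abstract's "8.67 (2.6-9.4)".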

  15. Computational chemistry research

    Science.gov (United States)

    Levin, Eugene

    1987-01-01

    Task 41 is composed of two parts: (1) analysis and design studies related to the Numerical Aerodynamic Simulation (NAS) Extended Operating Configuration (EOC) and (2) computational chemistry. During the first half of 1987, Dr. Levin served as a member of an advanced system planning team to establish the requirements, goals, and principal technical characteristics of the NAS EOC. A paper entitled 'Scaling of Data Communications for an Advanced Supercomputer Network' is included. The high temperature transport properties (such as viscosity, thermal conductivity, etc.) of the major constituents of air (oxygen and nitrogen) were correctly determined. The results of prior ab initio computer solutions of the Schroedinger equation were combined with the best available experimental data to obtain complete interaction potentials for both neutral and ion-atom collision partners. These potentials were then used in a computer program to evaluate the collision cross-sections from which the transport properties could be determined. A paper entitled 'High Temperature Transport Properties of Air' is included.

  16. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). The book illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics, with emphasis on algorithmic advances that will allow re-application in other...

  17. Statistical Computing

    Indian Academy of Sciences (India)

    inference and finite population sampling. Sudhakar Kunte. Elements of statistical computing are discussed in this series. ... which captain gets an option to decide whether to field first or bat first ... may of course not be fair, in the sense that the team which wins ... describe two methods of drawing a random number between 0.
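The snippet above alludes to methods of drawing random numbers between 0 and 1. One classical method (sketched here with commonly quoted constants; this is a generic illustration, not the article's own) is a linear congruential generator:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator x -> (a*x + c) mod m, yielding U(0,1) draws.
    The constants are the widely used Numerical-Recipes-style parameters."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m
```

Each draw is the current integer state scaled into [0, 1); statistical tests on the stream (uniform mean, no obvious serial correlation) are the standard way such generators are assessed.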

  18. Computational biology

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Computation via biological devices has been the subject of close scrutiny since von Neumann’s early work some 60 years ago. In spite of the many relevant works in this field, the notion of programming biological devices seems to be, at best, ill-defined. While many devices are claimed or proved t...

  19. Computing News

    CERN Multimedia

    McCubbin, N

    2001-01-01

    We are still five years from the first LHC data, so we have plenty of time to get the computing into shape, don't we? Well, yes and no: there is time, but there's an awful lot to do! The recently-completed CERN Review of LHC Computing gives the flavour of the LHC computing challenge. The hardware scale for each of the LHC experiments is millions of 'SpecInt95' (SI95) units of cpu power and tens of PetaBytes of data storage. PCs today are about 20-30SI95, and expected to be about 100 SI95 by 2005, so it's a lot of PCs. This hardware will be distributed across several 'Regional Centres' of various sizes, connected by high-speed networks. How to realise this in an orderly and timely fashion is now being discussed in earnest by CERN, Funding Agencies, and the LHC experiments. Mixed in with this is, of course, the GRID concept...but that's a topic for another day! Of course hardware, networks and the GRID constitute just one part of the computing. Most of the ATLAS effort is spent on software development. What we ...

  20. Quantum Computation

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 16; Issue 9. Quantum Computation - Particle and Wave Aspects of Algorithms. Apoorva Patel. General Article Volume 16 Issue 9 September 2011 pp 821-835. Fulltext. Click here to view fulltext PDF. Permanent link:

  1. Cloud computing.

    Science.gov (United States)

    Wink, Diane M

    2012-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes how cloud computing can be used in nursing education.

  2. Computer Recreations.

    Science.gov (United States)

    Dewdney, A. K.

    1988-01-01

    Describes the creation of the computer program "BOUNCE," designed to simulate a weighted piston coming into equilibrium with a cloud of bouncing balls. The model follows the ideal gas law. Utilizes the critical event technique to create the model. Discusses another program, "BOOM," which simulates a chain reaction. (CW)

  3. [Grid computing

    CERN Multimedia

    Wolinsky, H

    2003-01-01

    "Turn on a water spigot, and it's like tapping a bottomless barrel of water. Ditto for electricity: Flip the switch, and the supply is endless. But computing is another matter. Even with the Internet revolution enabling us to connect in new ways, we are still limited to self-contained systems running locally stored software, limited by corporate, institutional and geographic boundaries" (1 page).

  4. Computational Finance

    DEFF Research Database (Denmark)

    Rasmussen, Lykke

    One of the major challenges in todays post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these derivatives have been determined using bump-and-revalue, but due to the increasing magnitude of these computations does...

  5. Optical Computing

    Indian Academy of Sciences (India)

    Optical computing technology is, in general, developing in two directions. One approach is ... current support in many places, with private companies as well as governments in several countries encouraging such research work. For example, much ... which enables more information to be carried and data to be processed.

  6. An Energy-Based Control Strategy for Battery Energy Storage Systems: A Case Study on Microgrid Applications

    Directory of Open Access Journals (Sweden)

    Rui Hou

    2017-02-01

    Full Text Available Battery energy storage systems (BESSs) with proportional-integral (PI) control methods have been widely studied in microgrids (MGs). However, the performance of PI control methods might be unsatisfactory for BESSs due to the nonlinear characteristics of the system. To overcome this problem, an energy-based (EB) control method is applied to control the converter of a BESS in this study. The EB method is a robust nonlinear control method based on passivity theory with good performance in both transient and steady states. The detailed design process of the EB method in the BESS by adopting an interconnection and damping assignment (IDA) strategy is described. The design process comprises three steps: the construction of the port-controlled Hamiltonian model, the determination of the equilibrium point and the solution of the undetermined matrix. In addition, integral action is incorporated to eliminate the steady state error generated by the model mismatch. To establish the correctness and validity of the proposed method, we implement several case simulation studies based on a test MG system and compare the control performance of the EB and PI methods carefully. The case simulation results demonstrate that the EB method has better tracking and anti-disturbance performance compared with the classic PI method. Moreover, the proposed EB method shows stronger robustness to the uncertainty of system parameters.

  7. Nanoscopic analysis using Maruhn-Greiner theory by energy based variables in lattice for low energy nuclear reactions (LENRs)

    International Nuclear Information System (INIS)

    Cho, Hyo Sung; Woo, Tae Ho

    2016-01-01

    Maruhn-Greiner theory is investigated for low energy nuclear reactions (LENRs) with respect to energy production. Conventional nuclear reactions could give hints toward other kinds of nuclear theoretical utilization. The simulation results show the ranges of the configurations for H ions incident on Pd, with 10,000 ions at 10 and 180 keV; the most probable ranges are 30 and 600 nanometers, respectively. In the simulations over broad energy regions, the cutoff energy of 350 keV is very significant in analyzing the LENR, because the range usually depends on the entering particle, the target particle, and the energy of the entering particle. Therefore, 350 keV marks a priority for hydrogen interaction in terms of energy. In the analysis, water (H_2O) has the better possibility of LENR above 350 keV. Following the simulations searching for LENRs, the possible conditions, including the energy based variables of atomic range, Debye length, and reaction time, have been investigated for the designed energy production.

  8. Computable Frames in Computable Banach Spaces

    Directory of Open Access Journals (Sweden)

    S.K. Kaushik

    2016-06-01

    Full Text Available We develop some parts of the frame theory in Banach spaces from the point of view of Computable Analysis. We define computable M-basis and use it to construct a computable Banach space of scalar valued sequences. Computable Xd frames and computable Banach frames are also defined and computable versions of sufficient conditions for their existence are obtained.

  9. Algebraic computing

    International Nuclear Information System (INIS)

    MacCallum, M.A.H.

    1990-01-01

    The implementation of a new computer algebra system is time consuming: designers of general purpose algebra systems usually say it takes about 50 man-years to create a mature and fully functional system. Hence the range of available systems and their capabilities changes little between one general relativity meeting and the next, despite which there have been significant changes in the period since the last report. The introductory remarks aim to give a brief survey of capabilities of the principal available systems and highlight one or two trends. The reference to the most recent full survey of computer algebra in relativity and brief descriptions of the Maple, REDUCE and SHEEP and other applications are given. (author)

  10. Computational Controversy

    OpenAIRE

    Timmermans, Benjamin; Kuhn, Tobias; Beelen, Kaspar; Aroyo, Lora

    2017-01-01

    Climate change, vaccination, abortion, Trump: Many topics are surrounded by fierce controversies. The nature of such heated debates and their elements have been studied extensively in the social science literature. More recently, various computational approaches to controversy analysis have appeared, using new data sources such as Wikipedia, which help us now better understand these phenomena. However, compared to what social sciences have discovered about such debates, the existing computati...

  11. Computed tomography

    International Nuclear Information System (INIS)

    Andre, M.; Resnick, D.

    1988-01-01

    Computed tomography (CT) has matured into a reliable and prominent tool for study of the musculoskeletal system. When it was introduced in 1973, it was unique in many ways and posed a challenge to interpretation. It is in these unique features, however, that its advantages lie in comparison with conventional techniques. These advantages will be described in a spectrum of important applications in orthopedics and rheumatology

  12. Computed radiography

    International Nuclear Information System (INIS)

    Pupchek, G.

    2004-01-01

    Computed radiography (CR) is an image acquisition process that is used to create digital, 2-dimensional radiographs. CR employs a photostimulable phosphor-based imaging plate, replacing the standard x-ray film and intensifying screen combination. Conventional radiographic exposure equipment is used with no modification required to the existing system. CR can transform an analog x-ray department into a digital one and eliminates the need for chemicals, water, darkrooms and film processor headaches. (author)

  13. Computational universes

    International Nuclear Information System (INIS)

    Svozil, Karl

    2005-01-01

    Suspicions that the world might be some sort of a machine or algorithm existing 'in the mind' of some symbolic number cruncher have lingered from antiquity. Although popular at times, the most radical forms of this idea never reached mainstream. Modern developments in physics and computer science have lent support to the thesis, but empirical evidence is needed before it can begin to replace our contemporary world view

  14. Temperature, chemical potential and the ρ meson

    International Nuclear Information System (INIS)

    Roberts, C. D.; Schmidt, S. M.

    2000-01-01

    Models of QCD must confront nonperturbative phenomena such as confinement, dynamical chiral symmetry breaking (DCSB) and the formation of bound states. In addition, a unified approach should describe the deconfinement and chiral symmetry restoring phase transition exhibited by strongly-interacting matter under extreme conditions of temperature and density. Nonperturbative Dyson-Schwinger equation (DSE) models provide insight into a wide range of zero temperature hadronic phenomena; e.g., non-hadronic electroweak interactions of light- and heavy-mesons, and diverse meson-meson and meson-nucleon form factors. This is the foundation for their application at nonzero-(T, μ). Herein the authors describe the calculation of the deconfinement and chiral symmetry restoring phase boundary, and the medium dependence of ρ-meson properties. They also introduce an extension to describe the time-evolution in the plasma of the quark's scalar and vector self energies based on a Vlasov equation

  15. On the evaluation of the efficacy of a smart damper: a new equivalent energy-based probabilistic approach

    International Nuclear Information System (INIS)

    Aly, A M; Christenson, R E

    2008-01-01

    Smart damping technology has been proposed to protect civil structures from dynamic loads. Each application of smart damping control provides varying levels of performance relative to active and passive control strategies. Currently, researchers compare the relative efficacy of smart damping control to active and passive strategies by running numerous simulations. These simulations can require significant computation time and resources, so it is desirable to develop an approach that assesses the applicability of smart damping technology with less computation. This paper discusses and verifies a probabilistic approach to determine the efficacy of smart damping technology based on clipped optimal state feedback control theory
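
Clipped optimal control, as referenced above, pairs a primary optimal force law with a secondary bang-bang rule that commands the semi-active device only when it can actually produce the desired force. A minimal sketch of that secondary rule (the function name and the single-voltage-level form are illustrative assumptions, not the paper's exact formulation):

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Secondary (clipped) controller for a semi-active damper.

    Command the maximum device voltage only when the measured damper
    force has the same sign as, and is smaller in magnitude than, the
    force requested by the primary optimal controller; otherwise
    command zero voltage and let the device force decay.
    """
    if f_desired * f_measured > 0.0 and abs(f_measured) < abs(f_desired):
        return v_max
    return 0.0
```

For example, a desired force of 100 N with a measured force of 50 N (same sign, too small) yields the full voltage, while a measured force of 150 N (overshooting the request) yields zero.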

  16. MULTISYS: Computer code for calculating multicomponent equilibria in high-temperature subcritical and supercritical aqueous systems

    Energy Technology Data Exchange (ETDEWEB)

    Lvov, S.N.; Akinfiev, N.N.; Bandura, A.V.; Sigon, F.; Perboni, G.

    1999-12-01

    The studies of the corrosion and deposition processes on the structural materials of power plants in high temperature aqueous environments have aroused considerable interest. To understand these processes, it is required to develop a detailed knowledge of the thermodynamics that control the solution chemistry, phase relations, kinetics, and mass and heat transport. By knowing the equilibrium composition of the high temperature aqueous solution and by accessing the thermodynamic properties of the corrosion products, it is possible to predict the initial stage of the corrosion processes or oxide deposition, in order to protect plant equipment against damage. (orig.)

  17. Structure resolution of Ba5Al3F19 and Investigation of fluorine ion dynamics by synchrotron powder diffraction, variable-temperature solid-state NMR, and quantum computations

    International Nuclear Information System (INIS)

    Martineau, C.; Fayon, F.; Suchomel, M.R.; Allix, M.; Massiot, D.; Taulelle, F.

    2011-01-01

    The room temperature structure of Ba5Al3F19 has been solved using electron microscopy and synchrotron powder diffraction data. One-dimensional (1D) 27Al and ultrafast magic-angle-spinning (MAS) 19F NMR spectra have been recorded and are in agreement with the proposed structural model for Ba5Al3F19. The 19F isotropic chemical shift and 27Al quadrupolar parameters have been calculated using the CASTEP code from the experimental and density functional theory geometry-optimized structures. After optimization, the calculated NMR parameters of both the 19F and 27Al nuclei show improved consistency with the experimental values, demonstrating that the geometry optimization step is necessary to obtain more accurate and reliable structural data. This also enables a complete and unambiguous assignment of the 19F MAS NMR spectrum of Ba5Al3F19. Variable-temperature 1D MAS 19F NMR experiments have been carried out, showing the occurrence of fluorine ion mobility. Complementary insights were obtained from both two-dimensional (2D) exchange and 2D double-quantum dipolar recoupling NMR experiments, and a detailed analysis of the anionic motion in Ba5Al3F19 is proposed, including the distinction between reorientational processes and chemical exchange involving bond breaking and re-formation.

  18. Temperature dependency of the hysteresis behaviour of PZT actuators using Preisach model

    DEFF Research Database (Denmark)

    Mangeot, Charles; Zsurzsan, Tiberiu-Gabriel

    2016-01-01

    The Preisach model is a powerful tool for modelling the hysteresis phenomenon on multilayer piezo actuators under large signal excitation. In this paper, measurements at different temperatures are presented, showing the effect on the density of the Preisach matrix. An energy-based approach is pre...
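
The Preisach model mentioned above superposes many relay hysterons, each weighted by an entry of the Preisach matrix (the density); the temperature effect reported in the record enters through those weights. A minimal discrete sketch, with hypothetical thresholds and weights:

```python
class PreisachModel:
    """Discrete scalar Preisach model: a weighted sum of relay hysterons.

    Each hysteron is a relay with switching thresholds beta <= alpha:
    it snaps to +1 when the input rises above alpha, to -1 when the
    input falls below beta, and otherwise remembers its last state.
    """

    def __init__(self, thresholds, weights):
        # thresholds: list of (alpha, beta) pairs with beta <= alpha
        # weights: the discretized Preisach density, one entry per hysteron
        self.thresholds = list(thresholds)
        self.weights = list(weights)
        self.states = [-1.0] * len(self.thresholds)  # negative saturation

    def apply(self, u):
        """Apply input u and return the weighted relay output."""
        for i, (alpha, beta) in enumerate(self.thresholds):
            if u >= alpha:
                self.states[i] = 1.0
            elif u <= beta:
                self.states[i] = -1.0
            # else: hysteresis, the relay keeps its previous state
        return sum(w * s for w, s in zip(self.weights, self.states))
```

Driving a single hysteron with thresholds (0.5, -0.5) above 0.5 and back to 0 leaves the output at +1: exactly the memory effect the Preisach matrix encodes.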

  19. Customizable computing

    CERN Document Server

    Chen, Yu-Ting; Gill, Michael; Reinman, Glenn; Xiao, Bingjun

    2015-01-01

    Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction of the recent developments on energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory

  20. $\delta$-Expansion at Finite Temperature

    OpenAIRE

    Ramos, Rudnei O.

    1996-01-01

    We apply the $\delta$-expansion perturbation scheme to the $\lambda \phi^{4}$ self-interacting scalar field theory in 3+1 D at finite temperature. In the $\delta$-expansion the interaction term is written as $\lambda (\phi^{2})^{1+\delta}$ and $\delta$ is considered as the perturbation parameter. We compute within this perturbative approach the renormalized mass at finite temperature at a finite order in $\delta$. The results are compared with the usual loop-expansion at finite temperature.

  1. Temperature Calculations in the Coastal Modeling System

    Science.gov (United States)

    2017-04-01

    ERDC/CHL CHETN-IV-110, April 2017. Approved for public release; distribution is unlimited. Temperature Calculations in the Coastal Modeling... (tide) and river discharge at model boundaries, wave radiation stress, and wind forcing over a model computational domain. Physical processes calculated... in the CMS using the following meteorological parameters: solar radiation, cloud cover, air temperature, wind speed, and surface water temperature

  2. Computed tomography

    International Nuclear Information System (INIS)

    Wells, P.; Davis, J.; Morgan, M.

    1994-01-01

    X-ray or gamma-ray transmission computed tomography (CT) is a powerful non-destructive evaluation (NDE) technique that produces two-dimensional cross-sectional images of an object without the need to physically section it. CT is also known by the acronym CAT, for computerised axial tomography. This review article presents a brief historical perspective on CT, its current status and the underlying physics. The mathematical fundamentals of computed tomography are developed for the simplest transmission CT modality. A description of CT scanner instrumentation is provided with an emphasis on radiation sources and systems. Examples of CT images are shown indicating the range of materials that can be scanned and the spatial and contrast resolutions that may be achieved. Attention is also given to the occurrence, interpretation and minimisation of various image artefacts that may arise. A final brief section is devoted to the principles and potential of a range of more recently developed tomographic modalities including diffraction CT, positron emission CT and seismic tomography. 57 refs., 2 tabs., 14 figs

  3. Zero Temperature Hope Calculations

    International Nuclear Information System (INIS)

    Rozsnyai, B. F.

    2002-01-01

    The primary purpose of the HOPE code is to calculate opacities over a wide temperature and density range. It can also produce equation of state (EOS) data. Since the experimental data in the high temperature region are scarce, comparisons of predictions with the ample zero temperature data provide a valuable physics check of the code. In this report we show a selected few examples across the periodic table. Below we give brief general information about the physics of the HOPE code. The HOPE code is an ''average atom'' (AA) Dirac-Slater self-consistent code. The AA label in the case of finite temperature means that the one-electron levels are populated according to Fermi statistics; at zero temperature it means that the ''aufbau'' principle works, i.e. no a priori electronic configuration is set, although it can be done. As such, it is a one-particle model (any Hartree-Fock model is a one-particle model). The code is an ''ion-sphere'' model, meaning that the atom under investigation is neutral within the ion-sphere radius. Furthermore, the boundary conditions for the bound states are also set at the ion-sphere radius, which distinguishes the code from the INFERNO, OPAL and STA codes. Once the self-consistent AA state is obtained, the code proceeds to generate many-electron configurations and to calculate photoabsorption in the ''detailed configuration accounting'' (DCA) scheme. However, this last feature is meaningless at zero temperature. There is one important feature of the HOPE code which should be noted: any self-consistent model is self-consistent in the space of the occupied orbitals. The unoccupied orbitals, where electrons are lifted via photoexcitation, are unphysical. The rigorous way to deal with that problem is to carry out complete self-consistent calculations both in the initial and final states connecting photoexcitations, an enormous computational task. The Amaldi correction is an attempt to address this problem by distorting the

  4. Computing Services and Assured Computing

    Science.gov (United States)

    2006-05-01

    fighters’ ability to execute the mission.” We run IT systems that: provide medical care, pay the warfighters, manage maintenance... users • 1,400 applications • 18 facilities • 180 software vendors • 18,000+ copies of executive software products • Virtually every type of mainframe and...

  5. Carbon implantation into tungsten at elevated temperatures

    International Nuclear Information System (INIS)

    Eckstein, W.; Shulga, V.I.; Roth, J.

    1999-01-01

    The bombardment of W with 2.4 keV C at normal incidence in the temperature range between 293 and 973 K is investigated experimentally and by computer simulation. At room temperature the measured weight at first decreases but then changes to a weight gain with increasing incident C fluence. This weight gain is reduced at temperatures above 750 K, and at 973 K a constant weight loss is observed. The computational approach was to couple the dynamic Monte Carlo program TRIDYN with the diffusion program PIDAT. Input data for C diffusion in W are taken from the literature. Agreement between experimental data and computed results can be achieved at temperatures around 800 K by using a smaller diffusion coefficient in the simulations than those found in the literature
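
The coupling described above pairs a collisional implantation code (TRIDYN) with a diffusion solver (PIDAT). As a schematic stand-in for the diffusion half, here is a minimal explicit 1D finite-difference diffusion step with an optional implantation source term; the grid, absorbing boundaries, and source placement are illustrative assumptions, not the actual TRIDYN/PIDAT coupling:

```python
def diffuse(conc, D, dx, dt, steps, source=None):
    """Advance a 1D concentration profile by explicit (FTCS) diffusion.

    conc   : list of concentrations on a uniform grid
    D      : diffusion coefficient
    dx, dt : grid spacing and time step (stability needs D*dt/dx**2 <= 0.5)
    source : optional {grid_index: implantation_rate} term, e.g. the
             near-surface deposition computed by a collision code
    """
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for D*dt/dx**2 > 0.5"
    c = list(conc)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        if source:
            for i, rate in source.items():
                new[i] += rate * dt
        new[0] = 0.0    # absorbing boundary at the surface
        new[-1] = 0.0   # absorbing boundary deep in the bulk
        c = new
    return c
```

One step with r = 0.25 spreads a unit spike at the center into the classic 0.25 / 0.5 / 0.25 pattern.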

  6. Computing bubble-points of CO

    NARCIS (Netherlands)

    Ramdin, M.; Balaji, S.P.; Vicent Luna, J.M.; Torres-Knoop, A; Chen, Q.; Dubbeldam, D.; Calero, S; de Loos, T.W.; Vlugt, T.J.H.

    2016-01-01

    Computing bubble-points of multicomponent mixtures using Monte Carlo simulations is a non-trivial task. A new method is used to compute gas compositions from a known temperature, bubble-point pressure, and liquid composition. Monte Carlo simulations are used to calculate the bubble-points of
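
The record above computes bubble points with Monte Carlo simulations for real mixtures. For intuition, the ideal-mixture (Raoult's law) analogue reduces to a one-dimensional root find: locate the temperature at which the partial pressures sum to the total pressure. A sketch using bisection (the vapor-pressure functions used in the example are hypothetical, not fitted to any real component):

```python
def bubble_point_T(x, psat_funcs, P, T_lo=200.0, T_hi=500.0, tol=1e-6):
    """Bubble-point temperature of an ideal (Raoult's law) liquid mixture.

    x          : liquid mole fractions (summing to 1)
    psat_funcs : pure-component vapor pressure functions Psat_i(T)
    P          : total pressure, in the same units as Psat_i
    Returns (T, y), where y_i = x_i * Psat_i(T) / P is the vapor composition.
    """
    def residual(T):
        # At the bubble point the partial pressures sum to the total pressure.
        return sum(xi * ps(T) for xi, ps in zip(x, psat_funcs)) - P

    # Psat increases with T, so the residual is monotone: bisection is safe.
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if residual(T_mid) > 0.0:
            T_hi = T_mid
        else:
            T_lo = T_mid
    T = 0.5 * (T_lo + T_hi)
    y = [xi * ps(T) / P for xi, ps in zip(x, psat_funcs)]
    return T, y
```

Strongly non-ideal mixtures need fugacity or activity corrections, which is where the Monte Carlo machinery of the record comes in.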

  7. Computational neuroscience

    CERN Document Server

    Blackwell, Kim L

    2014-01-01

    Progress in Molecular Biology and Translational Science provides a forum for discussion of new discoveries, approaches, and ideas in molecular biology. It contains contributions from leaders in their fields and abundant references. This volume brings together different aspects of, and approaches to, molecular and multi-scale modeling, with applications to a diverse range of neurological diseases. Mathematical and computational modeling offers a powerful approach for examining the interaction between molecular pathways and ionic channels in producing neuron electrical activity. It is well accepted that non-linear interactions among diverse ionic channels can produce unexpected neuron behavior and hinder a deep understanding of how ion channel mutations bring about abnormal behavior and disease. Interactions with the diverse signaling pathways activated by G protein coupled receptors or calcium influx adds an additional level of complexity. Modeling is an approach to integrate myriad data sources into a cohesiv...

  8. Social Computing

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The past decade has witnessed a momentous transformation in the way people interact with each other. Content is now co-produced, shared, classified, and rated by millions of people, while attention has become the ephemeral and valuable resource that everyone seeks to acquire. This talk will describe how social attention determines the production and consumption of content within both the scientific community and social media, how its dynamics can be used to predict the future and the role that social media plays in setting the public agenda. About the speaker Bernardo Huberman is a Senior HP Fellow and Director of the Social Computing Lab at Hewlett Packard Laboratories. He received his Ph.D. in Physics from the University of Pennsylvania, and is currently a Consulting Professor in the Department of Applied Physics at Stanford University. He originally worked in condensed matter physics, ranging from superionic conductors to two-dimensional superfluids, and made contributions to the theory of critical p...

  9. computer networks

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study such issues as quality of service (QoS), traffic sizing and network dimensioning. Also we propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
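
The Token Bucket algorithm modeled above admits a compact deterministic sketch: tokens accrue at a fixed rate up to a burst capacity, and a packet conforms only if enough tokens are available to pay for it. A minimal version driven by explicit timestamps (the class and parameter names are illustrative; the paper's dynamic systems model is more elaborate):

```python
class TokenBucket:
    """Token bucket traffic policer.

    Tokens accrue at `rate` (units per second) up to `capacity` (the
    burst size). A packet conforms if its size can be paid for out of
    the current token balance; non-conforming packets would be dropped
    or marked by the policing function.
    """

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity   # start with a full bucket
        self.last = 0.0          # timestamp of the previous arrival

    def allow(self, size, now):
        # Refill for the elapsed interval, clamped to the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True   # conforming
        return False      # non-conforming
```

With rate 100 and capacity 200, a 150-unit packet at t = 0 conforms, an immediate second 100-unit packet does not, and after one second of refill a 100-unit packet conforms again.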

  10. Computer Tree

    Directory of Open Access Journals (Sweden)

    Onur AĞAOĞLU

    2014-12-01

    Full Text Available It is crucial that gifted and talented students be supported by educational methods suited to their interests and skills. The science and arts centres (gifted centres) provide the Supportive Education Program for these students with an interdisciplinary perspective. In line with the program, an ICT lesson entitled “Computer Tree” serves to identify learner readiness levels and to define the basic conceptual framework. A language teacher also contributes to the process, since the lesson caters for the creative function of basic linguistic skills. The teaching technique is applied at the 9-11 age level. The lesson introduces an evaluation process covering basic information, skills, and interests of the target group. Furthermore, it includes an observation process by way of peer assessment. The lesson is considered a good sample of planning for any subject, given the unexpected convergence of visual and technical abilities with linguistic abilities.

  11. Computed tomography

    International Nuclear Information System (INIS)

    Boyd, D.P.

    1989-01-01

    This paper reports on computed tomographic (CT) scanning, which has improved computer-assisted imaging modalities for radiologic diagnosis. The advantage of this modality is its ability to image thin cross-sectional planes of the body, thus uncovering density information in three dimensions without tissue superposition problems. Because this enables vastly superior imaging of soft tissues in the brain and body, CT scanning was immediately successful and continues to grow in importance as improvements are made in speed, resolution, and cost efficiency. CT scanners are used for general purposes, and the more advanced machines are generally preferred in large hospitals, where volume and variety of usage justifies the cost. For imaging in the abdomen, a scanner with a rapid speed is preferred because peristalsis, involuntary motion of the diaphragm, and even cardiac motion are present and can significantly degrade image quality. When contrast media is used in imaging to demonstrate scanner, immediate review of images, and multiformat hardcopy production. A second console is reserved for the radiologist to read images and perform the several types of image analysis that are available. Since CT images contain quantitative information in terms of density values and contours of organs, quantitation of volumes, areas, and masses is possible. This is accomplished with region-of-interest methods, which involve the electronic outlining of the selected region of the television display monitor with a trackball-controlled cursor. In addition, various image-processing options, such as edge enhancement (for viewing fine details of edges) or smoothing filters (for enhancing the detectability of low-contrast lesions) are useful tools

  12. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to distribute the computing across a great number of distributed computers, rather than local computers ...

  13. Ion composition and temperature in the topside ionosphere.

    Science.gov (United States)

    Brace, L. H.; Dunham, G. S.; Mayr, H. G.

    1967-01-01

    Particle and energy continuity equations are derived and solved by a computer method for the ion composition and plasma temperature measured by Explorer XXII.

  14. Teaching Temperature with Technology

    Science.gov (United States)

    Schillaci, Michael

    2010-10-01

    In recent years it has become very popular to introduce computational tools and/or simulations into the classroom. While this classroom addition is often meant to help elucidate a particular physical phenomenon, teachers at ALL levels --- whether graduate or undergraduate, secondary- or middle-school --- may miss important teaching moments by either relying upon or struggling with the technology! I will demonstrate this with a sample teaching module developed at our institution that seeks to discover the relationship between temperature and latitude by having students gather data (e.g., average monthly temperature for a chosen city) from various world wide web resources. This task may be very difficult for students and teachers for reasons ranging from slow connection speeds to an inability to plot and interpret data. I will wrap up by demonstrating a simple Maple routine that will produce the graphs easily and discuss ways in which this kind of top-down solution may be the best bet for using and teaching technology at all levels.

  15. Computer Refurbishment

    International Nuclear Information System (INIS)

    Ichiyen, Norman; Chan, Dominic; Thompson, Paul

    2004-01-01

    The major activity for the 18-month refurbishment outage at the Point Lepreau Generating Station is the replacement of all 380 fuel channel and calandria tube assemblies and the lower portion of connecting feeder pipes. New Brunswick Power would also take advantage of this outage to conduct a number of repairs, replacements, inspections and upgrades (such as rewinding or replacing the generator, replacement of shutdown system trip computers, replacement of certain valves and expansion joints, inspection of systems not normally accessible, etc.). This would allow for an additional 25 to 30 years of operation. Among the systems to be replaced are the PDCs for both shutdown systems. Assessments have been completed for both the SDS1 and SDS2 PDCs, and it has been decided to replace the SDS2 PDCs with the same hardware and software approach that has been used successfully for the Wolsong 2, 3, and 4 and the Qinshan 1 and 2 SDS2 PDCs. For SDS1, it has been decided to use the same software development methodology that was used successfully for the Wolsong and Qinshan called the I A and to use a new hardware platform in order to ensure successful operation for the 25-30 year station operating life. The selected supplier is Triconex, which uses a triple modular redundant architecture that will enhance the robustness/fault tolerance of the design with respect to equipment failures

  16. Temperature dependence of grain boundary free energy and elastic constants

    International Nuclear Information System (INIS)

    Foiles, Stephen M.

    2010-01-01

    This work explores the suggestion that the temperature dependence of the grain boundary free energy can be estimated from the temperature dependence of the elastic constants. The temperature-dependent elastic constants and free energy of a symmetric Σ79 tilt boundary are computed for an embedded atom method model of Ni. The grain boundary free energy scales with the product of the shear modulus times the lattice constant for temperatures up to about 0.75 the melting temperature.
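    The scaling relation in the abstract can be sketched directly: the grain boundary free energy tracks the product of the shear modulus and the lattice constant. The function and numbers below are illustrative only (loosely Ni-like), not data from the paper.

```python
# Sketch of the suggested scaling: gamma(T) ~ gamma0 * [mu(T) * a(T)] / [mu0 * a0].
def scaled_gb_energy(gamma0, mu0, a0, mu_t, a_t):
    """Estimate gamma(T) from a reference value and elastic data (illustrative)."""
    return gamma0 * (mu_t * a_t) / (mu0 * a0)

# Illustrative inputs: shear modulus softens and lattice expands with temperature.
gamma0 = 1.2            # J/m^2 at the reference temperature
mu0, a0 = 95.0, 3.52    # GPa, Angstrom at the reference temperature
mu_t, a_t = 80.0, 3.56  # softened modulus, expanded lattice at higher T

gamma_t = scaled_gb_energy(gamma0, mu0, a0, mu_t, a_t)
```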

  17. Quantum computing with defects in diamond

    International Nuclear Information System (INIS)

    Jelezko, F.; Gaebel, T.; Popa, I.; Domhan, M.; Wittmann, C.; Wrachtrup, J.

    2005-01-01

    Full text: Single spins in semiconductors, in particular associated with defect centers, are promising candidates for practical and scalable implementation of quantum computing even at room temperature. Such an implementation may also use the reliable and well known gate constructions from bulk nuclear magnetic resonance (NMR) quantum computing. Progress in development of quantum processor based on defects in diamond will be discussed. By combining optical microscopy, and magnetic resonance techniques, the first quantum logical operations on single spins in a solid are now demonstrated. The system is perspective for room temperature operation because of a weak dependence of decoherence on temperature (author)

  18. Energy-Based Seismic Risk Evaluation of Tall Reinforced Concrete Building in Vancouver, BC, Canada, under Mw9 Megathrust Subduction Earthquakes and Aftershocks

    Directory of Open Access Journals (Sweden)

    Solomon Tesfamariam

    2017-05-01

    Full Text Available This article presents a seismic performance evaluation framework for reinforced concrete (RC) buildings comprising shear walls and gravity frames. The evaluation is undertaken within a performance-based earthquake engineering framework by considering regional seismicity and site-specific ground motion selection. Different engineering demand parameters (EDPs), i.e., the maximum interstory drift ratio (MaxISDR) and an energy-based damage index, are considered as performance indicators. Prediction models for the EDPs are developed by considering four ground motion intensity measures (IMs), i.e., spectral acceleration at the fundamental period, Arias intensity, cumulative absolute velocity (CAV), and significant duration of ground motion. For this study, a 15-story RC building located in Vancouver, BC, Canada, is considered as a case study. Using 50 mainshock and 50 mainshock-aftershock (MS-AS) earthquake records (2 horizontal components per record and bidirectional loading), non-linear dynamic analyses are performed. Subsequently, the calculated MaxISDRs and damage indices are correlated with suitable IMs using cloud analysis, and the most efficient IM-EDP prediction models are selected by comparing the standard deviations (SDs) of the regression errors. The MaxISDR of the shear walls is less than 1% for the mainshock and MS-AS records. The energy-based damage index is sensitive enough to delineate the impact of earthquake types and aftershocks. CAV is shown to be the most efficient IM for the energy-based damage index.
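    The cloud-analysis step described above can be sketched as a log-log regression of EDP on IM, with IM efficiency ranked by the standard deviation of the residuals. The data below are synthetic, for illustration only; in the study the inputs come from non-linear dynamic analyses.

```python
import numpy as np

# Cloud analysis sketch: fit ln(EDP) = a + b*ln(IM) by least squares; a smaller
# residual standard deviation means a more "efficient" intensity measure.
rng = np.random.default_rng(0)

def fit_cloud(im, edp):
    """Return (a, b, sd) for the log-log regression of EDP on IM."""
    x, y = np.log(im), np.log(edp)
    b, a = np.polyfit(x, y, 1)          # slope first, then intercept
    resid = y - (a + b * x)
    return a, b, resid.std(ddof=2)      # ddof=2: two fitted parameters

# Synthetic "cloud": 50 records with a known log-log trend plus lognormal scatter.
im = rng.uniform(0.1, 1.0, 50)                                  # e.g. Sa(T1) in g
edp = np.exp(0.5 + 1.2 * np.log(im) + rng.normal(0.0, 0.3, 50))  # e.g. drift ratio
a, b, sd = fit_cloud(im, edp)
```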

  19. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... Physician Resources Professions Site Index A-Z Computed Tomography (CT) - Sinuses Computed tomography (CT) of the sinuses ... CT of the Sinuses? What is CT (Computed Tomography) of the Sinuses? Computed tomography, more commonly known ...

  20. Illustrated computer tomography

    International Nuclear Information System (INIS)

    Takahashi, S.

    1983-01-01

    This book provides the following information: basic aspects of computed tomography; atlas of computed tomography of the normal adult; clinical application of computed tomography; and radiotherapy planning and computed tomography

  1. Analog and hybrid computing

    CERN Document Server

    Hyndman, D E

    2013-01-01

    Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be deciphered on computers. Flow diagrams, methods of ampl

  2. Cloud Computing Fundamentals

    Science.gov (United States)

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  3. Weather Derivatives and Stochastic Modelling of Temperature

    Directory of Open Access Journals (Sweden)

    Fred Espen Benth

    2011-01-01

    Full Text Available We propose a continuous-time autoregressive model for the temperature dynamics with volatility being the product of a seasonal function and a stochastic process. We use the Barndorff-Nielsen and Shephard model for the stochastic volatility. The proposed temperature dynamics is flexible enough to model temperature data accurately while remaining analytically tractable. Futures prices for commonly traded contracts at the Chicago Mercantile Exchange on indices like cooling- and heating-degree days and cumulative average temperatures are computed, as well as option prices on them.
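    A minimal sketch of the kind of model described above: a mean-reverting temperature process around a seasonal mean, simulated with an Euler scheme, from which a degree-day index can be accumulated. The paper's model is a higher-order continuous-time autoregressive process with Barndorff-Nielsen and Shephard stochastic volatility; the constant-volatility simplification and all parameter values here are ours.

```python
import numpy as np

# Mean-reverting temperature sketch: dT = kappa*(s(t) - T) dt + sigma dW,
# with a sinusoidal seasonal mean s(t). Parameters are illustrative.
rng = np.random.default_rng(42)

def seasonal_mean(t_days):
    return 10.0 + 12.0 * np.sin(2.0 * np.pi * (t_days - 100.0) / 365.25)

def simulate_temperature(n_days, kappa=0.2, sigma=2.0, dt=1.0):
    temp = np.empty(n_days)
    temp[0] = seasonal_mean(0.0)
    for i in range(1, n_days):
        drift = kappa * (seasonal_mean(float(i)) - temp[i - 1])
        temp[i] = temp[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return temp

temps = simulate_temperature(365)
# A heating-degree-day (HDD) index over the first month, CME-style (18 C base):
hdd = np.sum(np.maximum(18.0 - temps[:31], 0.0))
```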

  4. Unconventional Quantum Computing Devices

    OpenAIRE

    Lloyd, Seth

    2000-01-01

    This paper investigates a variety of unconventional quantum computation devices, including fermionic quantum computers and computers that exploit nonlinear quantum mechanics. It is shown that unconventional quantum computing devices can in principle compute some quantities more rapidly than `conventional' quantum computers.

  5. Computing handbook computer science and software engineering

    CERN Document Server

    Gonzalez, Teofilo; Tucker, Allen

    2014-01-01

    Overview of Computer Science: Structure and Organization of Computing (Peter J. Denning); Computational Thinking (Valerie Barr). Algorithms and Complexity: Data Structures (Mark Weiss); Basic Techniques for Design and Analysis of Algorithms (Edward Reingold); Graph and Network Algorithms (Samir Khuller and Balaji Raghavachari); Computational Geometry (Marc van Kreveld); Complexity Theory (Eric Allender, Michael Loui, and Kenneth Regan); Formal Models and Computability (Tao Jiang, Ming Li, and Bala

  6. Parameterizing the interstellar dust temperature

    Science.gov (United States)

    Hocuk, S.; Szűcs, L.; Caselli, P.; Cazaux, S.; Spaans, M.; Esplugues, G. B.

    2017-08-01

    The temperature of interstellar dust particles is of great importance to astronomers. It plays a crucial role in the thermodynamics of interstellar clouds because of the gas-dust collisional coupling. It is also a key parameter in astrochemical studies, as it governs the rate at which molecules form on dust. In 3D (magneto)hydrodynamic simulations a simple expression for the dust temperature is often adopted because of computational constraints, while astrochemical modelers tend to keep the dust temperature constant over a large range of parameter space. Our aim is to provide an easy-to-use parametric expression for the dust temperature as a function of visual extinction (AV) and to shed light on the critical dependencies of the dust temperature on the grain composition. We obtain an expression for the dust temperature by semi-analytically solving the dust thermal balance for different types of grains and compare to a collection of recent observational measurements. We also explore the effect of ices on the dust temperature. Our results show that a mixed carbonaceous-silicate type dust with a high carbon volume fraction matches the observations best. We find that ice formation allows the dust to be warmer by up to 15% at high optical depths (AV > 20 mag) in the interstellar medium. Our parametric expression for the dust temperature is presented as Td = [11 + 5.7 × tanh(0.61 − log10(AV))] χuv^(1/5.9), where χuv is in units of the Draine (1978, ApJS, 36, 595) UV field.
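    The parametric expression quoted in the abstract can be evaluated directly; the short function below implements it as written (AV in magnitudes, χuv in Draine units), and the example values simply illustrate that dust cools with increasing extinction and warms with a stronger UV field.

```python
import numpy as np

# Parametric dust temperature from the abstract:
#   Td = [11 + 5.7 * tanh(0.61 - log10(Av))] * chi_uv**(1/5.9)
def dust_temperature(av, chi_uv=1.0):
    """Dust temperature (K) for visual extinction av (mag) and UV field chi_uv."""
    return (11.0 + 5.7 * np.tanh(0.61 - np.log10(av))) * chi_uv ** (1.0 / 5.9)

t_edge = dust_temperature(1.0)    # diffuse-cloud edge, Av = 1 mag
t_core = dust_temperature(20.0)   # dense-core conditions, Av = 20 mag
```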

  7. Development of an operation strategy for hydrogen production using solar PV energy based on fluid dynamic aspects

    Directory of Open Access Journals (Sweden)

    Amores Ernesto

    2017-06-01

    Full Text Available Alkaline water electrolysis powered by renewable energy sources is one of the most promising strategies for environmentally friendly hydrogen production. However, wind and solar energy sources are highly dependent on weather conditions. As a result, power fluctuations affect the electrolyzer and cause several negative effects. Considering these limiting effects which reduce the water electrolysis efficiency, a novel operation strategy is proposed in this study. It is based on pumping the electrolyte according to the current density supplied by a solar PV module, in order to achieve the suitable fluid dynamics conditions in an electrolysis cell. To this aim, a mathematical model including the influence of electrode-membrane distance, temperature and electrolyte flow rate has been developed and used as optimization tool. The obtained results confirm the convenience of the selected strategy, especially when the electrolyzer is powered by renewable energies.

  8. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  9. Methods for Melting Temperature Calculation

    Science.gov (United States)

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results agree closely with experiment. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. 
The method serves as a promising approach for large-scale automated material screening in which
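    The standard Widom insertion estimator underlying the first method can be sketched as follows: the excess chemical potential is μ_ex = −kT ln⟨exp(−ΔU/kT)⟩, where ΔU is the energy of inserting a test particle into stored configurations. This is a sketch only: random low-density configurations stand in for real MD/DFT snapshots, the overlap energy is capped for numerical stability, and the thesis's actual contribution (cavity-biased sampling) is not reproduced.

```python
import numpy as np

# Widom test-particle insertion for a dilute Lennard-Jones "fluid" (reduced units).
rng = np.random.default_rng(1)
L, n_part, kT = 10.0, 20, 1.0          # box length (sigma), particle count, temperature

def lj_energy(r2):
    """Lennard-Jones pair energy as a function of squared distance."""
    inv6 = (1.0 / r2) ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def insertion_du(config, trial):
    """Energy change of inserting a test particle at `trial` (minimum image)."""
    d = config - trial
    d -= L * np.round(d / L)
    r2 = np.sum(d * d, axis=1)
    return lj_energy(np.maximum(r2, 0.64)).sum()   # cap hard overlaps at r = 0.8 sigma

boltz = []
for _ in range(200):                               # "snapshots" (random stand-ins)
    config = rng.uniform(0.0, L, (n_part, 3))
    for _ in range(50):                            # trial insertions per snapshot
        du = insertion_du(config, rng.uniform(0.0, L, 3))
        boltz.append(np.exp(-du / kT))

mu_ex = -kT * np.log(np.mean(boltz))               # excess chemical potential
```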

  10. Relative Wave Energy based Adaptive Neuro-Fuzzy Inference System model for the Estimation of Depth of Anaesthesia.

    Science.gov (United States)

    Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P

    2018-01-01

    The advancement in medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study aims to estimate the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram signal. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. A discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find the degree of importance of the different electroencephalogram frequency bands associated with the anaesthetic phases: awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate the awake, light anaesthesia, moderate anaesthesia and deep anaesthesia states. A novel depth of anaesthesia index is generated by an Adaptive neuro-fuzzy inference system with fuzzy c-means clustering, which uses the relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
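    The relative wave energy feature described above can be sketched with a 4-level discrete wavelet transform: each sub-band's energy is divided by the total energy. An orthonormal Haar wavelet is used below purely as a stand-in (the paper's wavelet choice is not reproduced here), and the test signal is synthetic.

```python
import numpy as np

# Relative wave energy (RWE) sketch: 4-level Haar DWT, then each sub-band's
# share of the total coefficient energy. Bands: D1..D4 details, then A4.
def haar_step(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def relative_wave_energy(x, levels=4):
    """Return the energy fraction per sub-band [D1, D2, D3, D4, A4]."""
    energies, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(np.sum(d * d))
    energies.append(np.sum(a * a))           # final approximation band
    total = sum(energies)
    return [e / total for e in energies]

# Synthetic EEG-like signal: a slow 4 Hz rhythm plus a weaker 30 Hz component,
# sampled at 256 Hz for 4 s (length divisible by 2**levels).
t = np.arange(1024) / 256.0
eeg_like = np.sin(2 * np.pi * 4 * t) + 0.3 * np.sin(2 * np.pi * 30 * t)
rwe = relative_wave_energy(eeg_like)
```

Because the Haar transform here is orthonormal, the band energies sum to the signal energy, so the RWE values always sum to one.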

  11. Body temperature norms

    Science.gov (United States)

    Normal body temperature; Temperature - normal ... Morrison SF. Regulation of body temperature. In: Boron WF, Boulpaep EL, eds. Medical Physiology . 3rd ed. Philadelphia, PA: Elsevier; 2017:chap 59. Sajadi MM, Mackowiak ...

  12. Temperature diagnostic line ratios of Fe XVII

    International Nuclear Information System (INIS)

    Raymond, J.C.; Smith, B.W.; Los Alamos National Lab., NM)

    1986-01-01

    Based on extensive calculations of the excitation rates of Fe XVII, four temperature-sensitive line ratios are investigated, paying special attention to the contribution of resonances to the excitation rates and to the contributions of dielectronic recombination satellites to the observed line intensities. The predictions are compared to FPCS observations of Puppis A and to Solar Maximum Mission (SMM) and SOLEX observations of the sun. Temperature-sensitive line ratios are also computed for emitting gas covering a broad temperature range. It is found that each ratio yields a differently weighted average for the temperature and that this accounts for some apparent discrepancies between the theoretical ratios and solar observations. The effects of this weighting on the Fe XVII temperature diagnostics and on the analogous Fe XXIV/Fe XXV satellite line temperature diagnostics are discussed. 27 references

  13. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  14. Further computer appreciation

    CERN Document Server

    Fry, T F

    2014-01-01

    Further Computer Appreciation is a comprehensive treatment of the principles and aspects of computer appreciation. The book starts by describing the development of computers from the first to the third computer generations, through the development of processors and storage systems, up to the present position of computers and future trends. The text tackles the basic elements, concepts and functions of digital computers, computer arithmetic, input media and devices, and computer output. The basic central processor functions, data storage and the organization of data by classification of computer files,

  15. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  16. Democratizing Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Ryoo, Jean J.

    2015-01-01

    Computer science programs are too often identified with a narrow stratum of the student population, often white or Asian boys who have access to computers at home. But because computers play such a huge role in our world today, all students can benefit from the study of computer science and the opportunity to build skills related to computing. The…

  17. Computing at Stanford.

    Science.gov (United States)

    Feigenbaum, Edward A.; Nielsen, Norman R.

    1969-01-01

    This article provides a current status report on the computing and computer science activities at Stanford University, focusing on the Computer Science Department, the Stanford Computation Center, the recently established regional computing network, and the Institute for Mathematical Studies in the Social Sciences. Also considered are such topics…

  18. The implementation of CP1 computer code in the Honeywell Bull computer in Brazilian Nuclear Energy Commission (CNEN)

    International Nuclear Information System (INIS)

    Couto, R.T.

    1987-01-01

    The implementation of the CP1 computer code on the Honeywell Bull computer at the Brazilian Nuclear Energy Commission is presented. CP1 is a computer code used to solve the point kinetics equations with Doppler feedback from the system temperature variation, based on Newton's cooling equation. (E.G.) [pt
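    The kind of system such a code solves can be sketched as point kinetics with one delayed-neutron group, a Doppler reactivity term proportional to the fuel temperature, and Newton's-law cooling. This is our own minimal illustration, not CP1's model or data; every parameter value below is illustrative.

```python
# Point kinetics with Doppler feedback and Newton cooling (forward-Euler sketch).
beta, gen_time, lam = 0.0065, 1e-4, 0.08  # delayed fraction, generation time (s), decay const (1/s)
alpha_d = -2e-5                            # Doppler coefficient (1/K), negative feedback
h, heat_cap = 0.05, 1.0                    # cooling constant (1/s), heat capacity (arbitrary)
t_cool = 300.0                             # coolant temperature (K)

def step(p, c, temp, rho_ext, dt=1e-3):
    """Advance power p, precursor concentration c, and fuel temperature temp by dt."""
    rho = rho_ext + alpha_d * (temp - t_cool)        # net reactivity with feedback
    dp = ((rho - beta) / gen_time) * p + lam * c
    dc = (beta / gen_time) * p - lam * c
    dtemp = p / heat_cap - h * (temp - t_cool)       # Newton's law of cooling
    return p + dp * dt, c + dc * dt, temp + dtemp * dt

# Start near equilibrium at unit power, then insert a small external reactivity.
p, c, temp = 1.0, beta / (lam * gen_time), 320.0
for _ in range(20000):                               # 20 s of simulated time
    p, c, temp = step(p, c, temp, rho_ext=0.001)
```

With the negative Doppler coefficient, the power rises after the reactivity insertion and then self-limits as the fuel heats up, which is the feedback behavior the abstract describes.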

  19. Linear Interaction Energy Based Prediction of Cytochrome P450 1A2 Binding Affinities with Reliability Estimation.

    Directory of Open Access Journals (Sweden)

    Luigi Capoferri

    Full Text Available Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of the isoforms prone to interact with the substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory to allow quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol-1. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol-1 and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol-1. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with an SDEP for the predicted affinities of 4.6 kJ mol-1 (corresponding to 0.8 pKi units).
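    The LIE estimate at the core of the method is a weighted difference of average ligand-environment interaction energies (van der Waals and electrostatic) between the protein-bound and free states, ΔG ≈ α·Δ⟨U_vdw⟩ + β·Δ⟨U_el⟩ + γ. The coefficients and ensemble averages below are illustrative placeholders, not the paper's fitted CYP 1A2 values.

```python
# Linear Interaction Energy (LIE) sketch; energies in kJ/mol.
def lie_binding_energy(u_vdw_bound, u_vdw_free, u_el_bound, u_el_free,
                       alpha=0.18, beta=0.33, gamma=0.0):
    """Binding free energy estimate from MD ensemble-average interaction energies."""
    return (alpha * (u_vdw_bound - u_vdw_free)
            + beta * (u_el_bound - u_el_free)
            + gamma)

# Made-up ensemble averages: the ligand interacts more favorably when bound.
dg = lie_binding_energy(-120.0, -70.0, -40.0, -25.0)
```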

  20. QCD and instantons at finite temperature

    International Nuclear Information System (INIS)

    Gross, D.J.; Pisarski, R.D.; Yaffe, L.G.

    1981-01-01

    The current understanding of the behavior of quantum chromodynamics at finite temperature is presented. Perturbative methods are used to explore the high-temperature dynamics. At sufficiently high temperatures the plasma of thermal excitations screens all color electric fields and quarks are unconfined. It is believed that the high-temperature theory develops a dynamical mass gap. However in perturbation theory the infrared behavior of magnetic fluctuations is so singular that beyond some order the perturbative expansion breaks down. The topological classification of finite-energy, periodic fields is presented and the classical solutions which minimize the action in each topological sector are examined. These include periodic instantons and magnetic monopoles. At sufficiently high temperature only fields with integral topological charge can contribute to the functional integral. Electric screening completely suppresses the contribution of fields with nonintegral topological charge. Consequently the theta dependence of the free energy at high temperature is dominated by the contribution of instantons. The complete temperature dependence of the instanton density is explicitly computed and large-scale instantons are found to be suppressed. Therefore the effects of instantons may be reliably calculated at sufficiently high temperature. The behavior of the theory in the vicinity of the transition from the high-temperature quark phase to the low-temperature hadronic phase cannot be accurately computed. However, at least in the absence of light quarks, semiclassical techniques and lattice methods may be combined to yield a simple picture of the dynamics valid for both high and low temperature, and to estimate the transition temperature

  1. Applications of high-temperature superconductivity

    International Nuclear Information System (INIS)

    Malozemoff, A.P.; Gallagher, W.J.; Schwall, R.E.

    1987-01-01

    The new high temperature superconductors open up possibilities for applications in magnets, power transmission, computer interconnections, Josephson devices and instrumentation, among many others. The success of these applications hinges on many interlocking factors, including critical current density, critical fields, allowable processing temperatures, mechanical properties and chemical stability. An analysis of some of these factors suggests which applications may be the easiest to realize and which may have the greatest potential

  2. Modeling of concrete response at high temperature

    International Nuclear Information System (INIS)

    Pfeiffer, P.; Marchertas, A.

    1984-01-01

    A rate-type creep law is implemented into the computer code TEMP-STRESS for high temperature concrete analysis. The disposition of temperature, pore pressure and moisture for the particular structure in question is provided as input for the thermo-mechanical code. The loss of moisture from concrete also induces material shrinkage which is accounted for in the analytical model. Examples are given to illustrate the numerical results

  3. Evaluation of temperature dependent neutron resonance integrals

    International Nuclear Information System (INIS)

    Menon, S.V.G.; Sahni, D.C.

    1975-01-01

    The Fourier transform method is extended for evaluating temperature dependent resonance integrals and Doppler coefficients. With the temperature dependent cross-sections, the slowing-down equation is transformed into a Fredholm integral equation of second kind. A method of solution is presented using the familiar Gauss-Hermite quadrature formulae. As a byproduct of the above technique, a fast and accurate method for computing the resonance integral J-function is given. (orig.) [de
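    The quadrature step mentioned above can be sketched with NumPy's built-in Gauss-Hermite rule, which approximates ∫ exp(−x²) f(x) dx by Σᵢ wᵢ f(xᵢ), exactly the Gaussian weighting that Doppler broadening produces. The check integrand below is our own choice, with the known closed form ∫ exp(−x²) cos(x) dx = √π · e^(−1/4).

```python
import numpy as np

# 20-point Gauss-Hermite quadrature: nodes x_i and weights w_i for the
# weight function exp(-x^2) on (-inf, inf).
x, w = np.polynomial.hermite.hermgauss(20)

# Quadrature estimate of integral exp(-x^2) * cos(x) dx vs. its closed form.
approx = np.sum(w * np.cos(x))
exact = np.sqrt(np.pi) * np.exp(-0.25)
```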

  4. Nonlinear Corrections to Temperature in Computer Simulations of Complex Systems

    Science.gov (United States)

    2015-06-23

    Diezemann, G.; Geil, B.; Heuer, A.; Hinze, G.; Kuebler, S.C.; Richert, R.; Schiener, B.; Sillescu, H.; Spiess, H.W.; Tracht, U.; Wilhelm, M., Nature of the ... frequency wing of supercooled glycerol, J. Chem. Phys. 2002, 118, 1356-1363; Reinsberg, S.A.; Heuer, A.; Doliwa, B.; Zimmermann, H.; Spiess, H.W. ...; ... Rohr, K.; Spiess, H.W., Nature of nonexponential loss of correlation above the glass transition investigated by multidimensional NMR, Phys. Rev. Lett

  5. Soft computing in computer and information science

    CERN Document Server

    Fray, Imed; Pejaś, Jerzy

    2015-01-01

    This book presents a carefully selected and reviewed collection of papers presented during the 19th Advanced Computer Systems conference ACS-2014. The Advanced Computer Systems conference has concentrated from its beginning on methods and algorithms of artificial intelligence. Later years brought new areas of interest in technical informatics related to soft computing, as well as more technological aspects of computer science such as multimedia and computer graphics, software engineering, web systems, information security and safety, and project management. These topics are represented in the present book under the categories Artificial Intelligence, Design of Information and Multimedia Systems, Information Technology Security and Software Technologies.

  6. Computational Intelligence, Cyber Security and Computational Models

    CERN Document Server

    Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel

    2014-01-01

    This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.

  7. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... When the image slices are reassembled by computer software, the result is a very detailed multidimensional view ...

  8. Computers: Instruments of Change.

    Science.gov (United States)

    Barkume, Megan

    1993-01-01

    Discusses the impact of computers in the home, the school, and the workplace. Looks at changes in computer use by occupations and by industry. Provides information on new job titles in computer occupations. (JOW)

  9. DNA computing models

    CERN Document Server

    Ignatova, Zoya; Zimmermann, Karl-Heinz

    2008-01-01

    In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.

  10. Distributed multiscale computing

    NARCIS (Netherlands)

    Borgdorff, J.

    2014-01-01

    Multiscale models combine knowledge, data, and hypotheses from different scales. Simulating a multiscale model often requires extensive computation. This thesis evaluates distributing these computations, an approach termed distributed multiscale computing (DMC). First, the process of multiscale

  11. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    ... cell walls ... are the source of biofuels and biomaterials; our modeling investigates their properties. Quantum Mechanical Models: NREL studies chemical and electronic properties and processes to reduce barriers. Computational Modeling: NREL uses computational modeling to increase the ...

  12. Computer Viruses: An Overview.

    Science.gov (United States)

    Marmion, Dan

    1990-01-01

    Discusses the early history and current proliferation of computer viruses that occur on Macintosh and DOS personal computers, mentions virus detection programs, and offers suggestions for how libraries can protect themselves and their users from damage by computer viruses. (LRW)

  13. Computer Virus and Trends

    OpenAIRE

    Tutut Handayani; Soenarto Usna,Drs.MMSI

    2004-01-01

    Since its first appearance in the mid-1980s, the computer virus has invited controversies that continue to this day. Along with the development of computer systems technology, computer viruses find new ways to spread themselves through a variety of existing communications media. This paper discusses several topics related to computer viruses, namely: the definition and history of computer viruses; the basics of computer viruses; the state of computer viruses at this time; and ...

  14. Plasticity: modeling & computation

    National Research Council Canada - National Science Library

    Borja, Ronaldo Israel

    2013-01-01

    "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...

  15. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing was, and will continue to be, a new way of providing Internet services and computing. This approach builds on many existing services, such as the Internet, grid computing, and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in price and infrastructure. It is precisely the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics they offer. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  16. Computer hardware fault administration

    Science.gov (United States)

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-09-14

    Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
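    The routing idea in the abstract can be sketched on a toy graph: drop the defective link from the first network, merge in the links of the second, independent network, and search for an alternate path. The graphs and node labels below are purely illustrative, not the patent's topology.

```python
from collections import deque

def bfs_path(links, src, dst):
    """Shortest path over an undirected link list, or None if unreachable."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                     # reconstruct the path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

net1 = [(0, 1), (1, 2), (2, 3)]          # primary network; link (1, 2) is defective
net2 = [(0, 2), (2, 3)]                  # independent secondary network
healthy = [link for link in net1 if link != (1, 2)] + net2
route = bfs_path(healthy, 1, 3)          # traffic from node 1 to node 3
```

Neither network alone connects node 1 to node 3 once the link fails; routing over the combined healthy links does.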

  17. Computer jargon explained

    CERN Document Server

    Enticknap, Nicholas

    2014-01-01

    Computer Jargon Explained is a feature in Computer Weekly publications that discusses 68 of the most commonly used technical computing terms. The book explains what the terms mean and why the terms are important to computer professionals. The text also discusses how the terms relate to the trends and developments that are driving the information technology industry. Computer jargon irritates non-computer people and in turn causes problems for computer people. The technology and the industry are changing so rapidly that it is very hard even for professionals to keep updated. Computer people do not

  18. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  19. Computers in nuclear medicine

    International Nuclear Information System (INIS)

    Giannone, Carlos A.

    1999-01-01

    This chapter covers: the capture and observation of images in computers; the hardware and software used; and personal computers, networks and workstations. The use of special filters determines image quality

  20. Domain walls at finite temperature

    International Nuclear Information System (INIS)

    Carvalho, C.A. de; Marques, G.C.; Silva, A.J. da; Ventura, I.

    1983-08-01

    It is suggested that the phase transition of lambda phi 4 theory as a function of temperature coincides with the spontaneous appearance of domain walls. Based on one-loop calculations, T sub(c) = 4M/√lambda is estimated as the temperature at which these domains become energetically favored, to be compared with T sub(c) = 4.9M/√lambda from effective potential calculations (which are performed directly in the broken phase). Domain walls, as well as other types of fluctuations, disorder the system above T sub(c), leading to ⟨phi⟩ = 0. The critical exponent for the specific heat above T sub(c) is computed, and α = 2/3 + O(√lambda) is obtained. (Author) [pt

  1. Glass temperatures in free-standing canisters

    International Nuclear Information System (INIS)

    Hardy, B.J.; Hensel, S.J.

    1993-01-01

    The waste-forms produced by the Defense Waste Processing Facility (DWPF) are subject to the requirements of the Waste Acceptance Product Specifications (WAPS). The WAPS sets the maximum post-cooldown temperature of the waste-form glass at 400 degrees C. This criterion must be satisfied for the ambient conditions and heat generation rates expected for the waste-forms. As part of the work described in task plan WSRC-RP-93-1177, Rev. 0, a computer model was used to calculate the maximum glass temperatures in free-standing waste-forms for a variety of ambient temperatures and heat generation rates

  2. Advances in unconventional computing

    CERN Document Server

    2017-01-01

    The unconventional computing is a niche for interdisciplinary science, cross-bred of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology. The aims of this book are to uncover and exploit principles and mechanisms of information processing in and functional properties of physical, chemical and living systems to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices. This first volume presents theoretical foundations of the future and emergent computing paradigms and architectures. The topics covered are computability, (non-)universality and complexity of computation; physics of computation, analog and quantum computing; reversible and asynchronous devices; cellular automata and other mathematical machines; P-systems and cellular computing; infinity and spatial computation; chemical and reservoir computing. The book is the encyclopedia, the first ever complete autho...

  3. On techniques of ATR lattice computation

    International Nuclear Information System (INIS)

    1997-08-01

    Lattice computation computes the average nuclear constants of a unit fuel lattice, which are required for computing core nuclear characteristics such as the core power distribution and reactivity characteristics. The main nuclear constants are the infinite multiplication factor, the neutron migration area, cross sections for diffusion computation, the local power distribution and the isotope composition. For lattice computation, the WIMS-ATR code is used, which is based on the WIMS-D code developed in the U.K.; to improve the accuracy of analysis, it was extended with a heavy water scattering cross section whose temperature dependence follows the Honeck model. For computing neutron absorption by control rods, the LOIEL BLUE code is used. The extrapolation distance of the neutron flux at control rod surfaces is computed using the THERMOS and DTF codes, and the lattice constants of adjoining lattices are computed using the WIMS-ATR code. For the WIMS-ATR code, the computation flow and nuclear data library are explained, and for the LOIEL BLUE code, the computation flow. The local power distribution in fuel assemblies determined by the WIMS-ATR code was verified against measured data, and the results are reported. (K.I.)
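    The infinite multiplication factor mentioned above is conventionally estimated from the textbook four-factor formula k∞ = η·f·p·ε. The sketch below only illustrates that relation; the numerical values are invented, not ATR lattice data, and WIMS-ATR itself obtains these constants from detailed transport calculations.

    ```python
    def k_infinity(eta, f, p, epsilon):
        """Four-factor formula: k_inf = eta * f * p * epsilon, where
        eta: neutrons produced per neutron absorbed in fuel,
        f: thermal utilization, p: resonance escape probability,
        epsilon: fast fission factor."""
        return eta * f * p * epsilon

    # Illustrative (made-up) values for a thermal lattice
    k_inf = k_infinity(eta=2.0, f=0.9, p=0.8, epsilon=1.05)  # slightly supercritical
    ```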

  4. Finite difference program for calculating hydride bed wall temperature profiles

    International Nuclear Information System (INIS)

    Klein, J.E.

    1992-01-01

    A QuickBASIC finite difference program was written for calculating one-dimensional temperature profiles in up to two media with flat, cylindrical, or spherical geometries. The development of the program was motivated by the need to calculate maximum temperature differences across the walls of the metal hydride beds in the Tritium Facility for thermal fatigue analysis. The purpose of this report is to document the equations and the computer program used to calculate transient wall temperatures in stainless steel hydride vessels
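    For the flat-wall case, the one-dimensional finite difference approach can be illustrated with an explicit time-stepping sketch of the heat equation. The material properties and boundary temperatures below are invented for illustration; the actual QuickBASIC program and its equations are documented in the report itself.

    ```python
    def heat_step(T, alpha, dx, dt):
        """One explicit finite-difference step of the 1D heat equation
        dT/dt = alpha * d2T/dx2, with fixed-temperature boundaries."""
        r = alpha * dt / dx**2
        assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
        new = T[:]  # boundary nodes keep their prescribed temperatures
        for i in range(1, len(T) - 1):
            new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        return new

    # Wall slab with 11 nodes: inner face held at 100, outer face at 20 (made-up values)
    T = [100.0] + [20.0] * 10
    for _ in range(500):
        T = heat_step(T, alpha=4e-6, dx=0.002, dt=0.4)  # r = 0.4, stable
    ```

    As the stepping converges, the profile approaches the linear steady-state conduction solution across the wall.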

  5. Theory of high temperature plasmas. Final report

    International Nuclear Information System (INIS)

    Davidson, R.C.; Liu, C.S.

    1977-01-01

    This is a report on the technical progress in our analytic studies of high-temperature fusion plasmas. We also emphasize that the research summarized here makes extensive use of computational methods and therefore forms a strong interface with our numerical modeling program which is discussed later in the report

  6. Versatile microcomputer-based temperature controller

    International Nuclear Information System (INIS)

    Yarberry, V.R.

    1980-09-01

    The wide range of thermal responses required in laboratory and scientific equipment requires a temperature controller with a great deal of flexibility. While a number of analog temperature controllers are commercially available, they have certain limitations, such as inflexible parameter control or insufficient precision. Most lack digital interface capabilities--a necessity when the temperature controller is part of a computer-controlled automatic data acquisition system. We have developed an extremely versatile microcomputer-based temperature controller to fulfill this need in a variety of equipment. The control algorithm used allows optimal tailoring of parameters to control overshoot, response time, and accuracy. This microcomputer-based temperature controller can be used as a standalone instrument (with a teletype used to enter parameters), or it can be integrated into a data acquisition system
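    The abstract does not specify the control algorithm, but a discrete PID loop is the kind of scheme whose gains can be tailored to trade off overshoot, response time, and accuracy. The sketch below is a generic illustration with made-up gains and a crude first-order thermal plant, not the instrument's actual firmware.

    ```python
    class PID:
        """Discrete PID controller; gains kp, ki, kd tune overshoot,
        steady-state accuracy, and damping respectively."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured):
            error = setpoint - measured
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Drive a hypothetical first-order thermal plant from 20 toward a 50 degree setpoint
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    temp = 20.0
    for _ in range(1000):
        power = pid.update(50.0, temp)
        # crude heat balance: heating power versus loss to 20 degree ambient
        rate = 0.1 * (power - (temp - 20.0))
        temp += rate * 0.1  # dt = 0.1 s
    ```

    The integral term removes the steady-state offset a proportional-only controller would leave; in this toy loop the temperature settles at the setpoint.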

  7. Computability and unsolvability

    CERN Document Server

    Davis, Martin

    1985-01-01

    ""A clearly written, well-presented survey of an intriguing subject."" - Scientific American. Classic text considers general theory of computability, computable functions, operations on computable functions, Turing machines self-applied, unsolvable decision problems, applications of general theory, mathematical logic, Kleene hierarchy, computable functionals, classification of unsolvable decision problems and more.

  8. Mathematics for computer graphics

    CERN Document Server

    Vince, John

    2006-01-01

    Helps you understand the mathematical ideas used in computer animation, virtual reality, CAD, and other areas of computer graphics. This work also helps you to rediscover the mathematical techniques required to solve problems and design computer programs for computer graphic applications

  9. Computations and interaction

    NARCIS (Netherlands)

    Baeten, J.C.M.; Luttik, S.P.; Tilburg, van P.J.A.; Natarajan, R.; Ojo, A.

    2011-01-01

    We enhance the notion of a computation of the classical theory of computing with the notion of interaction. In this way, we enhance a Turing machine as a model of computation to a Reactive Turing Machine that is an abstract model of a computer as it is used nowadays, always interacting with the user

  10. Symbiotic Cognitive Computing

    OpenAIRE

    Farrell, Robert G.; Lenchner, Jonathan; Kephart, Jeffrey O.; Webb, Alan M.; Muller, Michael J.; Erikson, Thomas D.; Melville, David O.; Bellamy, Rachel K.E.; Gruen, Daniel M.; Connell, Jonathan H.; Soroker, Danny; Aaron, Andy; Trewin, Shari M.; Ashoori, Maryam; Ellis, Jason B.

    2016-01-01

    IBM Research is engaged in a research program in symbiotic cognitive computing to investigate how to embed cognitive computing in physical spaces. This article proposes 5 key principles of symbiotic cognitive computing.  We describe how these principles are applied in a particular symbiotic cognitive computing environment and in an illustrative application.  

  11. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well known class of problems which almost certainly have no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored
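    The decomposition techniques mentioned above work by collapsing series and parallel subsystems into single equivalent components, which can shrink the effective system size dramatically even though exact reliability computation for general networks remains intractable. A minimal sketch, with made-up component reliabilities:

    ```python
    def series(*rs):
        """Reliability of a series block: all components must work."""
        p = 1.0
        for r in rs:
            p *= r
        return p

    def parallel(*rs):
        """Reliability of a parallel block: the system works if any
        redundant component works."""
        q = 1.0
        for r in rs:
            q *= (1.0 - r)
        return 1.0 - q

    # A redundant pair nested inside a series chain collapses to one number,
    # reducing the effective size of the system description.
    r = series(0.99, parallel(0.9, 0.9), 0.95)
    ```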

  12. Roadmap to greener computing

    CERN Document Server

    Nguemaleu, Raoul-Abelin Choumin

    2014-01-01

    A concise and accessible introduction to green computing and green IT, this book addresses how computer science and the computer infrastructure affect the environment and presents the main challenges in making computing more environmentally friendly. The authors review the methodologies, designs, frameworks, and software development tools that can be used in computer science to reduce energy consumption and still compute efficiently. They also focus on Computer Aided Design (CAD) and describe what design engineers and CAD software applications can do to support new streamlined business directi

  13. Brief: Managing computing technology

    International Nuclear Information System (INIS)

    Startzman, R.A.

    1994-01-01

    While computing is applied widely in the production segment of the petroleum industry, its effective application is the primary goal of computing management. Computing technology has changed significantly since the 1950's, when computers first began to influence petroleum technology. The ability to accomplish traditional tasks faster and more economically probably is the most important effect that computing has had on the industry. While speed and lower cost are important, are they enough? Can computing change the basic functions of the industry? When new computing technology is introduced improperly, it can clash with traditional petroleum technology. This paper examines the role of management in merging these technologies

  14. Computer mathematics for programmers

    CERN Document Server

    Abney, Darrell H; Sibrel, Donald W

    1985-01-01

    Computer Mathematics for Programmers presents the mathematics that is essential to the computer programmer. The book comprises 10 chapters. The first chapter introduces several computer number systems. Chapter 2 shows how to perform arithmetic operations using the number systems introduced in Chapter 1. The third chapter covers the way numbers are stored in computers, how the computer performs arithmetic on real numbers and integers, and how round-off errors are generated in computer programs. Chapter 4 details the use of algorithms and flowcharting as problem-solving tools for computer p
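    The round-off errors of Chapter 3 are easy to demonstrate: binary floating point cannot represent 0.1 exactly, so repeated addition drifts away from the exact answer, while scaled integer arithmetic does not. A small illustration, not taken from the book:

    ```python
    # Binary floating point stores 0.1 as the nearest representable fraction,
    # so adding it ten times accumulates round-off error:
    total = 0.0
    for _ in range(10):
        total += 0.1
    exact = total == 1.0       # False: the sum is slightly below 1.0
    gap = abs(total - 1.0)     # tiny but nonzero, on the order of 1e-16

    # Working in scaled integers (here, tenths) avoids the problem entirely:
    tenths = sum([1] * 10) * 10  # 100 hundredths == exactly 1.0
    ```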

  15. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  16. Temperature indicating device

    International Nuclear Information System (INIS)

    Angus, J.P.; Salt, D.

    1988-01-01

    A temperature indicating device comprises a plurality of planar elements, some undergoing a reversible change in appearance at a given temperature, the remainder undergoing an irreversible change in appearance at a given temperature. The device is useful in indicating the temperature which an object has achieved as well as its actual temperature. The reversible change is produced by liquid crystal devices. The irreversible change is produced by an absorbent surface carrying substances, e.g. waxes, which melt at predetermined temperatures and are absorbed by the surface; alternatively paints may be used. The device is used for monitoring processes of encapsulation of radioactive waste. (author)

  17. The digital computer

    CERN Document Server

    Parton, K C

    2014-01-01

    The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on transformer design synthesis program, machine design analysis program, solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer

  18. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  19. Toward Cloud Computing Evolution

    OpenAIRE

    Susanto, Heru; Almunawar, Mohammad Nabil; Kang, Chen Chin

    2012-01-01

    Information Technology (IT) has shaped the success of organizations, giving them a solid foundation that increases both their level of efficiency and productivity. The computing industry is witnessing a paradigm shift in the way computing is performed worldwide. There is a growing awareness among consumers and enterprises to access their IT resources extensively through a "utility" model known as "cloud computing." Cloud computing was initially rooted in distributed grid-based computing. ...

  20. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  1. Computer vision syndrome (CVS) - Thermographic Analysis

    Science.gov (United States)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has seen exponential growth in recent decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their wide acceptance by users. The consequences and impact of uninterrupted work in front of computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great effort, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of them are: blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inappropriate lubrication of the ocular surface as blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the differences in temperature variations of healthy ocular surfaces.

  2. Temperature fluctuations superimposed on background temperature change

    International Nuclear Information System (INIS)

    Otto, James; Roberts, J.A.

    2016-01-01

    Proxy data allow the temperature of the Earth to be mapped over long periods of time. In this work the temperature fluctuations in over 200 proxy data sets were examined, and from this set 50 sets were analyzed to test for periodic and quasi-periodic fluctuations. Temperature reconstructions over four different time scales were analyzed to see if patterns emerged. Data were put into four time intervals (4,000 years, 14,000 years, 1,000,000 years, and 3,000,000 years) and analyzed with the goal of understanding periodic and quasi-periodic patterns in global temperature change superimposed on a "background" average temperature change. Quasi-periodic signatures were identified that predate the Industrial Revolution, during much of which direct data on temperature are not available. These data indicate that Earth temperatures have undergone a number of periodic and quasi-periodic intervals that contain both global warming and global cooling cycles. The fluctuations are superimposed on a background of temperature change that has a declining slope during the two periods, pre-ice age and post-ice age, with a transition about 12,000 BCE. The data are divided into "events" that span the time periods 3,000,000 BCE to "0" CE, 1,000,000 BCE to "0" CE, 12,000 BCE to 2,000 CE and 2,000 BCE to 2,000 CE. An equation using quasi-periodic (frequency-modulated sine wave) patterns was developed to analyze the data sets for quasi-periodic patterns. "Periodicities" which show reasonable agreement with the predictions of Milankovitch and other investigators were found in the data sets.
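    A frequency-modulated sine wave of the kind described above can be written as amp·sin(2π f_c t + β·sin(2π f_m t) + φ). The sketch below assumes that form; the parameter names and the Milankovitch-like periods (41 kyr and 100 kyr) are illustrative assumptions, not the authors' fitted values.

    ```python
    import math

    def fm_sine(t, amp, f_c, f_m, beta, phase=0.0):
        """Frequency-modulated sine wave: a carrier at f_c whose phase is
        modulated at f_m with modulation index beta (assumed model form)."""
        return amp * math.sin(2 * math.pi * f_c * t
                              + beta * math.sin(2 * math.pi * f_m * t)
                              + phase)

    def model(t, slope, intercept):
        """Quasi-periodic fluctuation superimposed on a linear background
        temperature trend; periods loosely echo Milankovitch cycles (years)."""
        return intercept + slope * t + fm_sine(
            t, amp=1.5, f_c=1 / 41000.0, f_m=1 / 100000.0, beta=0.8)
    ```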

  3. Quantum computing with defects.

    Science.gov (United States)

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

    Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness--its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.

  4. Synthetic Computation: Chaos Computing, Logical Stochastic Resonance, and Adaptive Computing

    Science.gov (United States)

    Kia, Behnam; Murali, K.; Jahed Motlagh, Mohammad-Reza; Sinha, Sudeshna; Ditto, William L.

    Nonlinear and chaotic systems can exhibit numerous behaviors and patterns, and one can select different patterns from this rich library. In this paper we focus on synthetic computing, a field that engineers and synthesizes nonlinear systems to obtain computation. We explain the importance of nonlinearity and describe how nonlinear systems can be engineered to perform computation. More specifically, we provide an overview of chaos computing, a field that programs chaotic systems to build different types of digital functions. We also briefly describe logical stochastic resonance (LSR), and then extend the approach of LSR to realize combinational digital logic systems via suitable concatenation of existing logical stochastic resonance blocks. Finally we demonstrate how a chaotic system can be engineered and mated with different machine learning techniques, such as artificial neural networks, random search, and genetic algorithms, to design autonomous systems that can adapt and respond to environmental conditions.

  5. Superconducting critical temperature under pressure

    Science.gov (United States)

    González-Pedreros, G. I.; Baquero, R.

    2018-05-01

    The present record for the critical temperature of a superconductor is held by sulfur hydride (approx. 200 K) under very high pressure (approx. 56 GPa). As a consequence, the dependence of the superconducting critical temperature on pressure became a subject of great interest, and a large number of papers on different aspects of this subject have since been published in the scientific literature. In this paper, we calculate the superconducting critical temperature as a function of pressure, Tc(P), by a simple method. Our method is based on the functional derivative of the critical temperature with respect to the Eliashberg function, δTc(P)/δα2F(ω). We obtain the needed Coulomb electron-electron repulsion parameter, μ*(P), at each pressure in a consistent way by fitting it to the corresponding Tc using the linearized Migdal-Eliashberg equation. This method requires as input the knowledge of Tc at the starting pressure only. It applies to superconductors for which the Migdal-Eliashberg equations hold. We study Al and β-Sn, two weak-coupling low-Tc superconductors, and Nb, the strong-coupling element with the highest critical temperature. For Al, our results for Tc(P) show excellent agreement with the calculations of Profeta et al., which are known to agree well with experiment. For β-Sn and Nb, we found good agreement with the experimental measurements reported in several works. This method has also been applied successfully to PdH elsewhere. Our method is simple, computationally light, and gives very accurate results.
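    The paper works with the full linearized Migdal-Eliashberg equation; as a rough illustration of how Tc depends on the electron-phonon coupling λ, the Coulomb pseudopotential μ*, and a phonon scale, one can use McMillan's closed-form approximation, a simpler relative of the Eliashberg treatment and not the authors' method. The Nb-like inputs below are approximate literature values, not taken from this paper.

    ```python
    import math

    def mcmillan_tc(theta_d, lam, mu_star):
        """McMillan's approximate formula for the superconducting Tc (in K):
        Tc = (theta_D / 1.45) * exp(-1.04 (1 + lam) / (lam - mu* (1 + 0.62 lam))),
        valid roughly for lam < 1.5 and lam > mu* (1 + 0.62 lam)."""
        return (theta_d / 1.45) * math.exp(
            -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

    # Nb-like parameters: Debye temperature ~275 K, lam ~1.0, mu* ~0.13
    tc_nb = mcmillan_tc(theta_d=275.0, lam=1.0, mu_star=0.13)
    ```

    The formula makes the qualitative trends visible: Tc grows with the coupling λ and shrinks as μ* increases, which is why fitting μ*(P) consistently at each pressure matters.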

  6. MFTF sensor verification computer program

    International Nuclear Information System (INIS)

    Chow, H.K.

    1984-01-01

    The design, requirements document and implementation of the MFE Sensor Verification System were accomplished by the Measurement Engineering Section (MES), a group which provides instrumentation for the MFTF magnet diagnostics. The sensors, installed on and around the magnets and solenoids housed in a vacuum chamber, will supply information about the temperature, strain, pressure, liquid helium level and magnet voltage to the facility operator for evaluation. As the sensors are installed, records must be maintained of their initial resistance values. Also, as the work progresses, monthly checks will be made to ensure continued sensor health. Finally, after the MFTF-B demonstration, yearly checks will be performed, as well as checks of sensors as problems develop. The software to acquire and store the data was written by Harry Chow, Computations Department. The acquired data will be transferred to the MFE data base computer system

  7. Analysis of the effect of gallium content on the magnetomechanical behavior of single-crystal FeGa alloys using an energy-based model

    International Nuclear Information System (INIS)

    Atulasimha, Jayasimha; Flatau, Alison B; Cullen, James R

    2008-01-01

    The magnetomechanical behavior of single-crystal iron–gallium alloys was found to be strongly dependent on the Ga content (Atulasimha 2006 PhD Thesis). An energy-based model (Atulasimha 2006 PhD Thesis, Armstrong and William 1997 J. Appl. Phys. 81 2321) is employed to simulate the strikingly different actuation behavior (λ–H and B–H curves under different compressive stresses) and is validated against experimental data for 19, 24.7 and 29 at.% Ga, [100]-oriented, slow-cooled single-crystal FeGa alloys. The effect of gallium content on the model parameters, specifically the cubic magnetocrystalline anisotropy constants and the Armstrong smoothing factor Ω, their physical significance, and ultimately their effect on the magnetomechanical behavior are analyzed and explained

  8. Status of bio energy based on forest material - from the stub to the stove door; Status for bioenergi basert paa skogsvirke - fra stubben til ovnsdoera

    Energy Technology Data Exchange (ETDEWEB)

    Lileng, Joern K.; Gjoelsjoe, Simen

    1999-07-01

    It is not well documented in Norway who possesses competence in, and who uses, forest material as bio fuel. Forest material comprises forest chips, secondary products from forestry, and firewood. The project reported here was a literature study that throws light on the problem by referring to central persons and institutions in this field. The report is also a general introduction to bio energy based on forest material. The principal sections deal with (1) Climate policy, (2) Energy carriers based on forest material, (3) Systems for selection and treatment of bio fuel, (4) Methods for calculation of the supply of biomass in the forest, (5) Returning ash to the forest, (6) Transport, (7) Storage, (8) Assessment of bio energy, (9) Abstracts of relevant Norwegian reports, (10) Bio energy projects and bio energy actors in Norway and (11) Proposed research projects and research work on forest material as fuel.

  9. Maine River Temperature Monitoring

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — We collect seasonal and annual temperature measurements on an hourly or quarter hourly basis to monitor habitat suitability for ATS and other species. Temperature...

  10. GISS Surface Temperature Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GISTEMP dataset is a global 2x2 gridded temperature anomaly dataset. Temperature data is updated around the middle of every month using current data files from...

  11. Supersymmetry at high temperatures

    International Nuclear Information System (INIS)

    Das, A.; Kaku, M.

    1978-01-01

    We investigate the properties of Green's functions in a spontaneously broken supersymmetric model at high temperatures. We show that, even at high temperatures, we do not get restoration of supersymmetry, at least in the one-loop approximation

  12. Future Computer Requirements for Computational Aerodynamics

    Science.gov (United States)

    1978-01-01

    Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.

  13. Computers and Computation. Readings from Scientific American.

    Science.gov (United States)

    Fenichel, Robert R.; Weizenbaum, Joseph

    A collection of articles from "Scientific American" magazine has been put together at this time because the current period in computer science is one of consolidation rather than innovation. A few years ago, computer science was moving so swiftly that even the professional journals were more archival than informative; but today it is…

  14. Know Your Personal Computer Introduction to Computers

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 1, Issue 1. Siddhartha Kumar Ghoshal. Series Article, January 1996, pp 48-55.

  15. Supersymmetry at finite temperature

    International Nuclear Information System (INIS)

    Clark, T.E.; Love, S.T.

    1983-01-01

    Finite-temperature supersymmetry (SUSY) is characterized by unbroken Ward identities for SUSY variations of ensemble averages of Klein-operator inserted imaginary time-ordered products of fields. Path-integral representations of these products are defined and the Feynman rules in superspace are given. The finite-temperature no-renormalization theorem is derived. Spontaneously broken SUSY at zero temperature is shown not to be restored at high temperature. (orig.)

  16. Room temperature superconductors

    International Nuclear Information System (INIS)

    Sleight, A.W.

    1995-01-01

    If the Holy Grail of room temperature superconductivity could be achieved, the impact could be enormous. However, a useful room temperature superconductor for most applications must possess a T sub(c) somewhat above room temperature and must be capable of sustaining superconductivity in the presence of magnetic fields while carrying a significant current load. The authors will return to the subject of just what characteristics one might seek for a compound to be a room temperature superconductor. 30 refs., 3 figs., 1 tab

  17. Heterotic computing: exploiting hybrid computational devices.

    Science.gov (United States)

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Computationally Efficient Prediction of Ionic Liquid Properties

    DEFF Research Database (Denmark)

    Chaban, V. V.; Prezhdo, O. V.

    2014-01-01

    Due to fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than the ordinary liquids. We exploit...... to ambient temperatures. We numerically prove the validity of the proposed concept for density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of the existing simulation approaches as applied to RTILs by more than an order of magnitude....
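
    The extrapolation idea described in the abstract can be sketched as an Arrhenius-type fit: simulate ionic diffusion at elevated temperatures, where dynamics are fast, then extrapolate down to ambient conditions. The helper names and data below are illustrative placeholders, not the paper's actual method or values.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps_K, diffusivities):
    """Least-squares fit of ln D = ln D0 - Ea/(R*T) in (1/T, ln D) space.

    Returns (D0, Ea). Illustrative helper, not code from the paper.
    """
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(D) for D in diffusivities]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R  # pre-factor D0, activation energy Ea

def arrhenius_D(D0, Ea, T):
    """Diffusion coefficient at temperature T (K) under the Arrhenius model."""
    return D0 * math.exp(-Ea / (R * T))

# Hypothetical high-temperature "simulation" results (m^2/s), generated to be
# exactly Arrhenius so the fit is easy to check:
temps = [400.0, 450.0, 500.0]
Ds = [arrhenius_D(1e-7, 30e3, T) for T in temps]

D0, Ea = fit_arrhenius(temps, Ds)
D_ambient = arrhenius_D(D0, Ea, 298.15)  # extrapolated to ambient temperature
```

    The design choice mirrors the abstract: sampling only at high temperature keeps the simulation cheap, while the fitted model supplies the slow ambient-temperature behavior.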

  19. Quantum fields on the computer

    CERN Document Server

    1992-01-01

    This book provides an overview of recent progress in computer simulations of nonperturbative phenomena in quantum field theory, particularly in the context of the lattice approach. It is a collection of extensive self-contained reviews of various subtopics, including algorithms, spectroscopy, finite temperature physics, Yukawa and chiral theories, bounds on the Higgs meson mass, the renormalization group, and weak decays of hadrons. Physicists with some knowledge of lattice gauge ideas will find this book a useful and interesting source of information on the recent developments in the field.
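
    As a toy illustration of the lattice Monte Carlo algorithms such a book surveys (a textbook Metropolis update for a 1D Ising chain, not an example drawn from the book itself):

```python
import math
import random

def metropolis_sweep(spins, beta, J=1.0):
    """One Metropolis sweep over a 1D Ising chain with periodic boundaries.

    Each site is offered a spin flip, accepted with probability
    min(1, exp(-beta * dE)), which satisfies detailed balance.
    """
    n = len(spins)
    for i in range(n):
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        dE = 2.0 * J * spins[i] * (left + right)  # energy cost of flipping site i
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins

random.seed(0)
spins = [random.choice([-1, 1]) for _ in range(64)]
for _ in range(1000):
    metropolis_sweep(spins, beta=2.0)  # low temperature: neighboring spins align
magnetization = abs(sum(spins)) / len(spins)
```

    The same accept/reject structure carries over to lattice gauge theory, where the degrees of freedom are group elements on links rather than spins on sites.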

  20. Digital temperature meter

    Energy Technology Data Exchange (ETDEWEB)

    Glowacki, S

    1982-01-01

    Digital temperature meter for precise temperature measurements is presented. Its parts, such as the thermostat, voltage-frequency converter and digital frequency meter, are described. Its technical parameters are given: temperature range 50°C-700°C, measurement precision 1°C, measurement error ±1°C. (A.S.)
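
    The measurement chain described (sensor voltage → voltage-frequency converter → digital frequency meter) amounts to inverting a calibration curve. A minimal sketch with a linear calibration, using hypothetical constants that are not the instrument's actual values:

```python
def frequency_to_temperature(freq_hz, f0_hz=500.0, hz_per_degC=10.0, t0_degC=50.0):
    """Invert a linear V-F calibration f = f0 + k * (T - T0).

    All calibration constants here are hypothetical placeholders.
    """
    return t0_degC + (freq_hz - f0_hz) / hz_per_degC

# A counted frequency of 3000 Hz maps back to a temperature reading:
reading = frequency_to_temperature(3000.0)  # 50 + (3000 - 500) / 10 = 300.0 °C
```

    With a 1 Hz frequency-counting resolution and 10 Hz per °C, this hypothetical calibration would resolve 0.1°C, comfortably within the meter's stated 1°C precision.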