WorldWideScience

Sample records for equilibrium computation technique

  1. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement the computational methods is described.
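    For reference, the axisymmetric Grad-Shafranov equation that such equilibrium computations solve can be written in its standard general form, shown below (poloidal flux ψ, pressure profile p(ψ), poloidal current function F(ψ), cylindrical coordinates R, Z); the Green's-function boundary treatment and Hill's-vortex checks mentioned in the abstract are specific to that report, and the notation here is the textbook one rather than the paper's.

```latex
% Standard Grad-Shafranov equation (general axisymmetric form, not the paper's notation)
\Delta^{*}\psi \equiv R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right)
  + \frac{\partial^{2}\psi}{\partial Z^{2}}
  = -\mu_{0} R^{2}\,\frac{dp}{d\psi} - F\,\frac{dF}{d\psi}
```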

  2. Computation of Phase Equilibrium and Phase Envelopes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp

    In this technical report, we describe the computation of phase equilibrium and phase envelopes based on expressions for the fugacity coefficients. We derive those expressions from the residual Gibbs energy. We consider 1) ideal gases and liquids modeled with correlations from the DIPPR database and 2) nonideal gases and liquids modeled with cubic equations of state. Next, we derive the equilibrium conditions for an isothermal-isobaric (constant temperature, constant pressure) vapor-liquid equilibrium process (PT flash), and we present a method for the computation of phase envelopes. We formulate the involved equations in terms of the fugacity coefficients. We present expressions for the first-order derivatives. Such derivatives are necessary in computationally efficient gradient-based methods for solving the vapor-liquid equilibrium equations and for computing phase envelopes. Finally, we ...
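    As a small, self-contained illustration of the PT-flash step described above (not the report's fugacity-coefficient formulation), the sketch below solves the Rachford-Rice equation for the vapor fraction at fixed K-values; the feed composition and K-values are made-up numbers, and in a full algorithm the K-values would be updated from the fugacity coefficients.

```python
# Rachford-Rice step of a PT flash: find vapor fraction beta such that
# sum_i z_i (K_i - 1) / (1 + beta (K_i - 1)) = 0, for fixed K-values.
# Illustrative only: a full VLE solver updates K_i from fugacity coefficients.
import numpy as np
from scipy.optimize import brentq

z = np.array([0.5, 0.3, 0.2])   # feed mole fractions (hypothetical)
K = np.array([2.5, 0.8, 0.2])   # equilibrium ratios y_i/x_i (hypothetical)

def rachford_rice(beta):
    return np.sum(z * (K - 1.0) / (1.0 + beta * (K - 1.0)))

# The single root lies between the asymptotes 1/(1 - K_max) and 1/(1 - K_min).
lo = 1.0 / (1.0 - K.max()) + 1e-10
hi = 1.0 / (1.0 - K.min()) - 1e-10
beta = brentq(rachford_rice, lo, hi)   # vapor fraction

x = z / (1.0 + beta * (K - 1.0))       # liquid-phase composition
y = K * x                              # vapor-phase composition
print(beta, x, y)
```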

  3. Computing Equilibrium Chemical Compositions

    Science.gov (United States)

    Mcbride, Bonnie J.; Gordon, Sanford

    1995-01-01

    Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions. Aids calculation of thermodynamic properties of chemical systems. Information essential in design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is version of CET93 specifically designed to run within 640K memory limit of MS-DOS operating system. CET93/PC written in FORTRAN.

  4. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    Science.gov (United States)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.

  5. Communication: A method to compute the transport coefficient of pure fluids diffusing through planar interfaces from equilibrium molecular dynamics simulations.

    Science.gov (United States)

    Vermorel, Romain; Oulebsir, Fouad; Galliero, Guillaume

    2017-09-14

    The computation of diffusion coefficients in molecular systems ranks among the most useful applications of equilibrium molecular dynamics simulations. However, when dealing with the problem of fluid diffusion through vanishingly thin interfaces, classical techniques are not applicable. This is because the volume of space in which molecules diffuse is ill-defined. In such conditions, non-equilibrium techniques allow for the computation of transport coefficients per unit interface width, but their weak point lies in their inability to isolate the contribution of the different physical mechanisms prone to impact the flux of permeating molecules. In this work, we propose a simple and accurate method to compute the diffusional transport coefficient of a pure fluid through a planar interface from equilibrium molecular dynamics simulations, in the form of a diffusion coefficient per unit interface width. In order to demonstrate its validity and accuracy, we apply our method to the case study of a dilute gas diffusing through a smoothly repulsive single-layer porous solid. We believe this complementary technique can benefit the interpretation of the results obtained on single-layer membranes by means of complex non-equilibrium methods.
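    For context, the "classical technique" that the abstract says breaks down for thin interfaces is essentially the Einstein relation applied to a well-defined bulk diffusion volume; a minimal post-processing sketch of that baseline is shown below, with synthetic random-walk trajectories standing in for real MD output. It is not the interface-specific, per-unit-width coefficient proposed in the paper.

```python
# Classical bulk self-diffusion from equilibrium MD via the Einstein relation:
# D = lim_{t->inf} <|r(t) - r(0)|^2> / (6 t)   (3D, no interface involved).
# Synthetic random-walk trajectories stand in for an MD run here.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 200, 2000, 1.0e-3   # hypothetical units
true_D = 1.0e-2
steps = rng.normal(0.0, np.sqrt(2.0 * true_D * dt), size=(n_steps, n_particles, 3))
traj = np.cumsum(steps, axis=0)                  # unwrapped positions

msd = np.mean(np.sum(traj**2, axis=2), axis=1)   # <|r(t) - r(0)|^2>
t = dt * np.arange(1, n_steps + 1)
slope = np.polyfit(t[n_steps // 2:], msd[n_steps // 2:], 1)[0]  # fit the linear regime
D_est = slope / 6.0
print(D_est)   # should be close to true_D
```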

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  8. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)

  9. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  10. Equilibrium gas-oil ratio measurements using a microfluidic technique.

    Science.gov (United States)

    Fisher, Robert; Shah, Mohammad Khalid; Eskin, Dmitry; Schmidt, Kurt; Singh, Anil; Molla, Shahnawaz; Mostowfi, Farshid

    2013-07-07

    A method for measuring the equilibrium GOR (gas-oil ratio) of reservoir fluids using microfluidic technology is developed. Live crude oils (crude oil with dissolved gas) are injected into a long serpentine microchannel at reservoir pressure. The fluid forms a segmented flow as it travels through the channel. Gas and liquid phases are produced from the exit port of the channel, which is maintained at atmospheric conditions. The process is analogous to the production of crude oil from a formation. By using compositional analysis and thermodynamic principles of hydrocarbon fluids, we show that excellent equilibrium between the produced gas and liquid phases is achieved. The GOR of a reservoir fluid is a key parameter in determining the equation of state of a crude oil. Equations of state that are commonly used in petroleum engineering and reservoir simulations describe the phase behaviour of a fluid at equilibrium. Therefore, to accurately determine the coefficients of an equation of state, the produced gas and liquid phases have to be as close to thermodynamic equilibrium as possible. In the examples presented here, the GORs measured with the microfluidic technique agreed with GOR values obtained from conventional methods. Furthermore, when compared to conventional methods, the microfluidic technique was simpler to perform, required less equipment, and yielded better repeatability.

  11. Benchmark studies of computer prediction techniques for equilibrium chemistry and radionuclide transport in groundwater flow

    International Nuclear Information System (INIS)

    Broyd, T.W.

    1988-01-01

    A brief review of two recent benchmark exercises is presented. These were separately concerned with the equilibrium chemistry of groundwater and the geosphere migration of radionuclides, and involved the use of a total of 19 computer codes by 11 organisations in Europe and Canada. A similar methodology was followed for each exercise, in that series of hypothetical test cases were used to explore the limits of each code's application, and so provide an overview of current modelling potential. Aspects of the user-friendliness of individual codes were also considered. The benchmark studies have benefited participating organisations by providing a means of verifying current codes, and have provided problem data sets by which future models may be compared. (author)

  12. Computer Program for Calculation of Complex Chemical Equilibrium Compositions, Rocket Performance, Incident and Reflected Shocks, and Chapman-Jouguet Detonations. Interim Revision, March 1976

    Science.gov (United States)

    Gordon, S.; Mcbride, B. J.

    1976-01-01

    A detailed description of the equations and computer program for computations involving chemical equilibria in complex systems is given. A free-energy minimization technique is used. The program permits calculations such as (1) chemical equilibrium for assigned thermodynamic states (T,P), (H,P), (S,P), (T,V), (U,V), or (S,V), (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. The program considers condensed species as well as gaseous species.

  13. Computation of thermodynamic equilibrium in systems under stress

    Science.gov (United States)

    Vrijmoed, Johannes C.; Podladchikov, Yuri Y.

    2016-04-01

    Metamorphic reactions may be partly controlled by the local stress distribution, as suggested by observations of phase assemblages around garnet inclusions related to an amphibolite shear zone in granulite of the Bergen Arcs in Norway. A particular example presented in fig. 14 of Mukai et al. [1] is discussed here. A garnet crystal embedded in a plagioclase matrix is replaced on the left side by a high-pressure intergrowth of kyanite and quartz and on the right side by chlorite-amphibole. This texture apparently represents disequilibrium. In this case, the minerals adapt to the low-pressure ambient conditions only where fluids were present. Alternatively, here we compute that this particular low-pressure and high-pressure assemblage around a stressed rigid inclusion such as garnet can coexist in equilibrium. To do the computations we developed the Thermolab software package. The core of the software package consists of Matlab functions that generate Gibbs energy of minerals and melts from the Holland and Powell database [2] and aqueous species from the SUPCRT92 database [3]. The most up-to-date solid solutions are included in a general formulation. The user provides a Matlab script to do the desired calculations using the core functions. Gibbs energies of all minerals, solutions and species are benchmarked versus THERMOCALC, Perple_X [4] and SUPCRT92 and are reproduced within round-off computer error. Multi-component phase diagrams have been calculated using Gibbs minimization to benchmark with THERMOCALC and Perple_X. The Matlab script to compute equilibrium in a stressed system needs only two modifications of the standard phase diagram script. Firstly, Gibbs energy of phases considered in the calculation is generated for multiple values of thermodynamic pressure. Secondly, for the Gibbs minimization the proportion of the system at each particular thermodynamic pressure needs to be constrained. The user decides which part of the stress tensor is input as thermodynamic ...

  14. Computing Nash Equilibrium in Wireless Ad Hoc Networks

    DEFF Research Database (Denmark)

    Bulychev, Peter E.; David, Alexandre; Larsen, Kim G.

    2012-01-01

    This paper studies the problem of computing Nash equilibrium in wireless networks modeled by Weighted Timed Automata. Such formalism comes together with a logic that can be used to describe complex features such as timed energy constraints. Our contribution is a method for solving this problem...

  15. Generalized multivalued equilibrium-like problems: auxiliary principle technique and predictor-corrector methods

    Directory of Open Access Journals (Sweden)

    Vahid Dadashi

    2016-02-01

    This paper is dedicated to the introduction of a new class of equilibrium problems, named generalized multivalued equilibrium-like problems, which includes the classes of hemiequilibrium problems, equilibrium-like problems, equilibrium problems, hemivariational inequalities, and variational inequalities as special cases. By utilizing the auxiliary principle technique, some new predictor-corrector iterative algorithms for solving them are suggested and analyzed. The convergence analysis of the proposed iterative methods requires either partially relaxed monotonicity or joint pseudomonotonicity of the bifunctions involved in the generalized multivalued equilibrium-like problem. The results obtained in this paper include several new and known results as special cases.

  16. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    Science.gov (United States)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.

  17. Collapse and equilibrium of rotating, adiabatic clouds

    International Nuclear Information System (INIS)

    Boss, A.P.

    1980-01-01

    A numerical hydrodynamics computer code has been used to follow the collapse and establishment of equilibrium of adiabatic gas clouds restricted to axial symmetry. The clouds are initially uniform in density and rotation, with adiabatic exponents γ=5/3 and 7/5. The numerical technique allows, for the first time, a direct comparison to be made between the dynamic collapse and approach to equilibrium of unconstrained clouds on the one hand, and the results for incompressible, uniformly rotating equilibrium clouds, and the equilibrium structures of differentially rotating polytropes, on the other hand

  18. Computer simulations of equilibrium magnetization and microstructure in magnetic fluids

    Science.gov (United States)

    Rosa, A. P.; Abade, G. C.; Cunha, F. R.

    2017-09-01

    In this work, Monte Carlo and Brownian Dynamics simulations are developed to compute the equilibrium magnetization of a magnetic fluid under the action of a homogeneous applied magnetic field. The particles are free of inertia and modeled as hard spheres with the same diameters. Two different periodic boundary conditions are implemented: the minimum image method and the Ewald summation technique, which replicates a finite number of particles throughout the suspension volume. A comparison of the equilibrium magnetization resulting from the minimum image approach and Ewald sums is performed by using Monte Carlo simulations. The Monte Carlo simulations with minimum image and lattice sums are used to investigate suspension microstructure by computing the important radial pair-distribution function g0(r), which measures the probability density of finding a second particle at a distance r from a reference particle. This function provides relevant information on structure formation and its anisotropy through the suspension. The numerical results of g0(r) are compared with theoretical predictions based on quite a different approach in the absence of the field and dipole-dipole interactions. A very good quantitative agreement is found for a particle volume fraction of 0.15, providing a validation of the present simulations. In general, the investigated suspensions are dominated by structures like dimer and trimer chains, with trimers having a probability of forming an order of magnitude lower than dimers. Using Monte Carlo with lattice sums, the density distribution function g2(r) is also examined. Whenever this function is different from zero, it indicates structure anisotropy in the suspension. The dependence of the equilibrium magnetization on the applied field, the magnetic particle volume fraction, and the magnitude of the dipole-dipole magnetic interactions for both boundary conditions is explored in this work. Results show that at dilute regimes and with moderate dipole ...
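    As an illustration of one ingredient mentioned above, the sketch below computes a radial pair-distribution function with the minimum-image convention in a cubic periodic box, using random positions as placeholder data; it is not the authors' code, and it omits Ewald sums and dipolar interactions.

```python
# Radial pair-distribution function g(r) with the minimum-image convention
# in a cubic periodic box. Random positions are placeholder data; in practice
# they would come from Monte Carlo or Brownian Dynamics configurations.
import numpy as np

rng = np.random.default_rng(1)
L, N = 10.0, 500                      # box length and particle number (hypothetical)
pos = rng.uniform(0.0, L, size=(N, 3))

rij = pos[:, None, :] - pos[None, :, :]
rij -= L * np.round(rij / L)          # minimum-image convention
dist = np.linalg.norm(rij, axis=-1)
dist = dist[np.triu_indices(N, k=1)]  # unique pairs only

nbins, r_max = 100, L / 2.0
hist, edges = np.histogram(dist, bins=nbins, range=(0.0, r_max))
r = 0.5 * (edges[1:] + edges[:-1])
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
rho = N / L**3
g = hist / (shell_vol * rho * N / 2.0)  # normalize by ideal-gas pair counts
print(r[:5], g[:5])                     # g(r) ~ 1 for uncorrelated positions
```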

  19. Computation of hypersonic axisymmetric flows of equilibrium gas over blunt bodies

    International Nuclear Information System (INIS)

    Hejranfar, K.; Esfahanian, V.; Moghadam, R.K.

    2005-01-01

    An appropriate combination of the thin-layer Navier-Stokes (TLNS) and parabolized Navier-Stokes (PNS) solvers is used to accurately and efficiently compute hypersonic flowfields of equilibrium air around blunt-body configurations. The TLNS equations are solved in the nose region to provide the initial data plane needed for the solution of the PNS equations. Then the PNS equations are employed to efficiently compute the flowfield for the afterbody region by using a space marching procedure. Both the TLNS and the PNS equations are numerically solved by using the implicit non-iterative finite-difference algorithm of Beam and Warming. A shock fitting technique is used in both the TLNS and PNS codes to obtain an accurate solution in the vicinity of the shock. To validate the results of the developed TLNS code, hypersonic laminar flow over a sphere at a Mach number of 11.26 is computed. To demonstrate the accuracy and efficiency of the present TLNS-PNS methodology, the computations are performed for hypersonic flow over a 5° long slender blunt cone at a Mach number of 19.25. The results of these computations are found to be in good agreement with available numerical and experimental data. The effects of real gas on the flowfield characteristics are also studied in both the TLNS and PNS solutions. (author)

  20. The technique for calculation of equilibrium in heterogeneous systems of the InP-GaP-HCl type

    International Nuclear Information System (INIS)

    Voronin, V.A.; Prokhorov, V.A.; Goliusov, V.A.; Chuchmarev, S.K.

    1983-01-01

    A technique for the calculation of equilibrium in heterogeneous systems based on A₁³B⁵-A₂³B⁵ solid solutions is developed, implying the use of structural-topological models of chemical equilibrium in the investigated systems. Chemical equilibrium in the InP-GaP-HCl system is analyzed by means of the suggested technique, and the equilibrium composition of the gas phase is calculated.

  1. Computing a quasi-perfect equilibrium of a two-player game

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2010-01-01

    Refining an algorithm due to Koller, Megiddo and von Stengel, we show how to apply Lemke's algorithm for solving linear complementarity programs to compute a quasi-perfect equilibrium in behavior strategies of a given two-player extensive-form game of perfect recall. A quasi-perfect equilibrium ... of a zero-sum game, we devise variants of the algorithm that rely on linear programming rather than linear complementarity programming and use the simplex algorithm or other algorithms for linear programming rather than Lemke's algorithm. We argue that these latter algorithms are relevant for recent ...
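    The zero-sum specialization mentioned above rests on the classical fact that optimal (maximin) strategies of a matrix game can be found by linear programming; a minimal normal-form sketch using scipy is shown below. The paper's extensive-form, quasi-perfect machinery is considerably more involved, and the payoff matrix here is just the rock-paper-scissors example.

```python
# Maximin (Nash) strategy of a zero-sum matrix game via linear programming:
# maximize v subject to  A^T x >= v,  sum(x) = 1,  x >= 0,
# where A[i, j] is the row player's payoff. Normal-form illustration only.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.0, -1.0,  1.0],     # rock-paper-scissors payoffs (example)
              [1.0,  0.0, -1.0],
              [-1.0, 1.0,  0.0]])
m, n = A.shape

# Variables: (x_1..x_m, v); linprog minimizes, so minimize -v.
c = np.concatenate([np.zeros(m), [-1.0]])
# Constraints A^T x - v >= 0  rewritten as  -A^T x + v <= 0.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0.0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print(x, v)   # expected: uniform strategy, game value 0
```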

  2. Rapid computation of chemical equilibrium composition - An application to hydrocarbon combustion

    Science.gov (United States)

    Erickson, W. D.; Prabhu, R. K.

    1986-01-01

    A scheme for rapidly computing the chemical equilibrium composition of hydrocarbon combustion products is derived. A set of ten governing equations is reduced to a single equation that is solved by the Newton iteration method. Computation speeds are approximately 80 times faster than the often used free-energy minimization method. The general approach also has application to many other chemical systems.

  3. Computer program determines chemical composition of physical system at equilibrium

    Science.gov (United States)

    Kwong, S. S.

    1966-01-01

    A FORTRAN IV digital computer program calculates the equilibrium composition of complex, multiphase chemical systems. It uses a free-energy minimization method, with the solution of the problem reduced to mathematical operations, without concern for the chemistry involved. Certain thermodynamic properties are also determined as byproducts of the main calculations.

  4. What is the real role of the equilibrium phase in abdominal computed tomography?

    Energy Technology Data Exchange (ETDEWEB)

    Salvadori, Priscila Silveira [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Escola Paulista de Medicina; Costa, Danilo Manuel Cerqueira; Romano, Ricardo Francisco Tavares; Galvao, Breno Vitor Tomaz; Monjardim, Rodrigo da Fonseca; Bretas, Elisa Almeida Sathler; Rios, Lucas Torres; Shigueoka, David Carlos; Caldana, Rogerio Pedreschi; D' Ippolito, Giuseppe, E-mail: giuseppe_dr@uol.com.br [Universidade Federal de Sao Paulo (EPM-Unifesp), Sao Paulo, SP (Brazil). Escola Paulista de Medicina. Department of Diagnostic Imaging

    2013-03-15

    Objective: To evaluate the role of the equilibrium phase in abdominal computed tomography. Materials and Methods: A retrospective, cross-sectional, observational study reviewed 219 consecutive contrast-enhanced abdominal computed tomography examinations acquired in a three-month period, for different clinical indications. For each study, two reports were issued - one based on the initial analysis of the non-contrast-enhanced, arterial and portal phases only (first analysis), and a second based on a reading of these phases plus the equilibrium phase (second analysis). At the end of both readings, differences between primary and secondary diagnoses were pointed out and recorded, in order to measure the impact of suppressing the equilibrium phase on the clinical outcome for each of the patients. An extension of Fisher's exact test was used to evaluate the changes in the primary diagnosis (p < 0.05 considered significant). Results: Among the 219 cases reviewed, the absence of the equilibrium phase led to a change in the primary diagnosis in only one case (0.46%; p > 0.999). As regards secondary diagnoses, changes after the second analysis were observed in five cases (2.3%). Conclusion: For clinical scenarios such as cancer staging, acute abdomen and investigation for abdominal collections, the equilibrium phase is dispensable and does not offer any significant diagnostic contribution. (author)
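    For readers unfamiliar with the statistics used above: the study applies an extension of Fisher's exact test to tables larger than 2 x 2; for a plain 2 x 2 table the computation reduces to the sketch below, with made-up counts.

```python
# Fisher's exact test on a hypothetical 2x2 table:
# rows = reading protocol A / B, columns = diagnosis changed / unchanged.
# The study itself uses an extension of the test for larger tables.
from scipy.stats import fisher_exact

table = [[1, 218],   # hypothetical counts, not the study's actual table
         [5, 214]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)   # a large p-value indicates no detectable difference
```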

  5. Can markets compute equilibria?

    CERN Document Server

    Monroe, Hunter K.

    2009-01-01

    Recent turmoil in financial and commodities markets has renewed questions regarding how well markets discover equilibrium prices, particularly when those markets are highly complex. A relatively new critique questions whether markets can realistically find equilibrium prices if computers cannot. For instance, in a simple exchange economy with Leontief preferences, the time required to compute equilibrium prices using the fastest known techniques is an exponential function of the number of goods. Furthermore, no efficient technique for this problem exists if a famous mathematical conjecture is

  6. Medium-term generation programming in competitive environments: a new optimisation approach for market equilibrium computing

    International Nuclear Information System (INIS)

    Barquin, J.; Centeno, E.; Reneses, J.

    2004-01-01

    The paper proposes a model to represent medium-term hydro-thermal operation of electrical power systems in deregulated frameworks. The model objective is to compute the oligopolistic market equilibrium point in which each utility maximises its profit, based on other firms' behaviour. This problem is not an optimisation one. The main contribution of the paper is to demonstrate that, nevertheless, under some reasonable assumptions, it can be formulated as an equivalent minimisation problem. A computer program has been coded by using the proposed approach. It is used to compute the market equilibrium of a real-size system. (author)

  7. 3D equilibrium codes for mirror machines

    International Nuclear Information System (INIS)

    Kaiser, T.B.

    1983-01-01

    The codes developed for computing three-dimensional guiding center equilibria for quadrupole tandem mirrors are discussed. TEBASCO (Tandem equilibrium and ballooning stability code) is a code developed at LLNL that uses a further expansion of the paraxial equilibrium equation in powers of β (plasma pressure/magnetic pressure). It has been used to guide the design of the TMX-U and MFTF-B experiments at Livermore. Its principal weakness is its perturbative nature, which renders its validity for high-β calculations open to question. In order to compute high-β equilibria, the reduced MHD technique that has proven useful for determining toroidal equilibria was adapted to the tandem mirror geometry. In this approach, the paraxial expansion of the MHD equations yields a set of coupled nonlinear equations of motion, valid for arbitrary β, that are solved as an initial-value problem. Two particular formulations have been implemented in computer codes developed at NYU/Kyoto U and LLNL. They differ primarily in the type of grid, the location of the lateral boundary and the damping techniques employed, and in the method of calculating pressure-balance equilibrium. Discussions of these codes are presented in this paper. (Kato, T.)

  8. Computer experiments on dynamical cloud and space time fluctuations in one-dimensional meta-equilibrium plasmas

    International Nuclear Information System (INIS)

    Rouet, J.L.; Feix, M.R.

    1996-01-01

    The test particle picture is a central theory of weakly correlated plasmas. While experiments and computer experiments have confirmed the validity of this theory at thermal equilibrium, the extension to meta-equilibrium distributions presents interesting and intriguing points connected to the under- or over-population of the tail of these distributions (high velocity), which have not yet been tested. Moreover, the general dynamical Debye cloud (a generalization of the static Debye cloud, which assumes a plasma at thermal equilibrium and a test particle of zero velocity) is presented for any test particle velocity and for three typical velocity distributions (equilibrium plus two meta-equilibria). The simulations deal with a one-dimensional two-component plasma, and the relevance of the check for real three-dimensional plasmas is outlined. Two kinds of results are presented: the dynamical cloud itself and the more usual density (or energy) fluctuation spectra. Special attention is paid to the behavior of long wavelengths, which requires long systems with very small graininess effects and, consequently, sizable computational effort. Finally, the divergence or absence of energy at small wave numbers, connected to the excess or lack of fast particles in the two above-mentioned meta-equilibria, is exhibited. copyright 1996 American Institute of Physics

  9. Teaching Chemical Equilibrium with the Jigsaw Technique

    Science.gov (United States)

    Doymus, Kemal

    2008-03-01

    This study investigates the effect of cooperative learning (jigsaw) versus individual learning methods on students’ understanding of chemical equilibrium in a first-year general chemistry course. This study was carried out in two different classes in the department of primary science education during the 2005-2006 academic year. One of the classes was randomly assigned as the non-jigsaw group (control) and the other as the jigsaw group (cooperative). Students participating in the jigsaw group were divided into four “home groups” since the topic chemical equilibrium is divided into four subtopics (Modules A, B, C and D). Each of these home groups contained four students. The groups were as follows: (1) Home Group A (HGA), representing the equilibrium state and quantitative aspects of equilibrium (Module A), (2) Home Group B (HGB), representing the equilibrium constant and relationships involving equilibrium constants (Module B), (3) Home Group C (HGC), representing Altering Equilibrium Conditions: Le Chatelier’s principle (Module C), and (4) Home Group D (HGD), representing calculations with equilibrium constants (Module D). The home groups then broke apart, like pieces of a jigsaw puzzle, and the students moved into jigsaw groups consisting of members from the other home groups who were assigned the same portion of the material. The jigsaw groups were then in charge of teaching their specific subtopic to the rest of the students in their learning group. The main data collection tool was a Chemical Equilibrium Achievement Test (CEAT), which was applied to both the jigsaw and non-jigsaw groups. The results indicated that the jigsaw group was more successful than the non-jigsaw group (individual learning method).

  10. A new algorithm to compute conjectured supply function equilibrium in electricity markets

    International Nuclear Information System (INIS)

    Diaz, Cristian A.; Villar, Jose; Campos, Fco Alberto; Rodriguez, M. Angel

    2011-01-01

    Several types of market equilibria approaches, such as Cournot, Conjectural Variation (CVE), Supply Function (SFE) or Conjectured Supply Function (CSFE) have been used to model electricity markets for the medium and long term. Among them, CSFE has been proposed as a generalization of the classic Cournot. It computes the equilibrium considering the reaction of the competitors against changes in their strategy, combining several characteristics of both CVE and SFE. Unlike linear SFE approaches, strategies are linearized only at the equilibrium point, using their first-order Taylor approximation. But to solve CSFE, the slope or the intercept of the linear approximations must be given, which has been proved to be very restrictive. This paper proposes a new algorithm to compute CSFE. Unlike previous approaches, the main contribution is that the competitors' strategies for each generator are initially unknown (both slope and intercept) and endogenously computed by this new iterative algorithm. To show the applicability of the proposed approach, it has been applied to several case examples where its qualitative behavior has been analyzed in detail. (author)

  11. Equilibrium chemical reaction of supersonic hydrogen-air jets (the ALMA computer program)

    Science.gov (United States)

    Elghobashi, S.

    1977-01-01

    The ALMA (axi-symmetrical lateral momentum analyzer) program is concerned with the computation of two-dimensional coaxial jets with large lateral pressure gradients. The jets may be free or confined, laminar or turbulent, reacting or non-reacting. The reaction chemistry is treated as being at equilibrium.

  12. A Comparison of the Computation Times of Thermal Equilibrium and Non-equilibrium Models of Droplet Field in a Two-Fluid Three-Field Model

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Cho, Heong Kyu; Kim, Jong Tae; Yoon, Han Young; Jeong, Jae Jun

    2007-12-15

    A computational model for transient, 3-dimensional, 2-phase flows was developed by using an 'unstructured-FVM-based, non-staggered, semi-implicit numerical scheme' that considers thermally non-equilibrium droplets. The assumption of thermal equilibrium between the liquid and the droplets made in previous studies was no longer used, and three energy conservation equations, for vapor, liquid, and liquid droplets, were set up. Thus, 9 conservation equations for mass, momentum, and energy were established to simulate 2-phase flows. In this report, the governing equations and a semi-implicit numerical scheme for transient 1-dimensional 2-phase flows are described, considering the thermal non-equilibrium between the liquid and the liquid droplets. A comparison with the previous model, which assumed thermal equilibrium between the liquid and the liquid droplets, is also reported.

  13. Spectral Quasi-Equilibrium Manifold for Chemical Kinetics.

    Science.gov (United States)

    Kooshkbaghi, Mahdi; Frouzakis, Christos E; Boulouchos, Konstantinos; Karlin, Iliya V

    2016-05-26

    The Spectral Quasi-Equilibrium Manifold (SQEM) method is a model reduction technique for chemical kinetics based on entropy maximization under constraints built by the slowest eigenvectors at equilibrium. The method is revisited here and discussed and validated through the Michaelis-Menten kinetic scheme, and the quality of the reduction is related to the temporal evolution and the gap between eigenvalues. SQEM is then applied to detailed reaction mechanisms for the homogeneous combustion of hydrogen, syngas, and methane mixtures with air in adiabatic constant pressure reactors. The system states computed using SQEM are compared with those obtained by direct integration of the detailed mechanism, and good agreement between the reduced and the detailed descriptions is demonstrated. The SQEM reduced model of hydrogen/air combustion is also compared with another similar technique, the Rate-Controlled Constrained-Equilibrium (RCCE). For the same number of representative variables, SQEM is found to provide a more accurate description.

  14. Computing the Pareto-Nash equilibrium set in finite multi-objective mixed-strategy games

    Directory of Open Access Journals (Sweden)

    Victoria Lozan

    2013-10-01

    The Pareto-Nash equilibrium set (PNES) is described as the intersection of graphs of efficient response mappings. The problem of PNES computing in finite multi-objective mixed-strategy games (Pareto-Nash games) is considered. A method for PNES computing is studied. Mathematics Subject Classification 2010: 91A05, 91A06, 91A10, 91A43, 91A44.

  15. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

    Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution of this nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging efforts. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are suitable both for computing equilibrium concentrations and for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of O2, CO and CO2. The liquid iron ...
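    The same direct Gibbs-energy-minimization idea can be sketched outside a spreadsheet as a small constrained optimization; the example below uses scipy for an ideal-gas CO/CO2/O2 mixture with hypothetical standard chemical potentials, not the paper's Fe-Cr-O-C-Ni system or real thermodynamic data.

```python
# Direct Gibbs-energy minimization for an ideal-gas mixture, the same idea as
# the spreadsheet Solver approach: minimize G/RT subject to elemental balances.
# The standard chemical potentials mu0 below are illustrative numbers, not data.
import numpy as np
from scipy.optimize import minimize

species = ["CO", "CO2", "O2"]
mu0 = np.array([-10.0, -20.0, 0.0])          # mu_i^0 / RT, hypothetical values
# Element matrix: rows = elements (C, O), columns = species.
E = np.array([[1, 1, 0],                     # carbon atoms per molecule
              [1, 2, 2]])                    # oxygen atoms per molecule
n0 = np.array([1.0, 0.0, 0.5])               # initial moles: CO + 0.5 O2
b = E @ n0                                   # conserved element totals

def gibbs(n):
    n = np.maximum(n, 1e-12)                 # keep logarithms defined
    return np.sum(n * (mu0 + np.log(n / n.sum())))

cons = {"type": "eq", "fun": lambda n: E @ n - b}
bounds = [(1e-12, None)] * len(species)
res = minimize(gibbs, x0=np.array([0.4, 0.4, 0.3]), bounds=bounds,
               constraints=cons, method="SLSQP")
print(dict(zip(species, res.x)))             # equilibrium mole numbers
```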

  16. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of
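    For readers unfamiliar with the simpler of the two model classes, a basic input-output impact calculation is a single linear solve, x = (I - A)^(-1) d; the toy sketch below uses made-up coefficients. A CGE model would instead solve a nonlinear system including prices and behavioural responses.

```python
# Basic Leontief input-output impact calculation: total output x needed to meet
# final demand d, given technical-coefficient matrix A (all numbers made up).
import numpy as np

A = np.array([[0.2, 0.3],     # inter-industry input shares (hypothetical)
              [0.1, 0.4]])
d = np.array([100.0, 50.0])   # final demand, e.g. before a disaster

x = np.linalg.solve(np.eye(2) - A, d)
d_shock = d * np.array([0.9, 1.0])            # 10% demand loss in sector 1
x_shock = np.linalg.solve(np.eye(2) - A, d_shock)
print(x, x_shock, x - x_shock)                # direct plus indirect output losses
```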

  17. HUBBLE-BUBBLE 1. A computer program for the analysis of non-equilibrium flows of water

    International Nuclear Information System (INIS)

    Mather, D.J.

    1978-02-01

    A description is given of the computer program HUBBLE-BUBBLE I which simulates the non-equilibrium flow of water and steam in a pipe. The code is designed to examine the transient flow developing in a pipe containing hot compressed water following the rupture of a retaining diaphragm. Allowance is made for an area change in the pipe. Particular attention is paid to the non-equilibrium development of vapour bubbles and to the transition from a bubble-liquid regime to a droplet-vapour regime. The mathematical and computational model is described together with a summary of the FORTRAN subroutines and listing of data input. (UK)

  18. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D

    2016-01-01

    Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy, computational cost, and reliability of computational techniques for high-frequency electromagnetics, such as antennas, microwave devices and radar scattering applications.

  19. A computer graphics display technique for the examination of aircraft design data

    Science.gov (United States)

    Talcott, N. A., Jr.

    1981-01-01

    An interactive computer graphics technique has been developed for quickly sorting and interpreting large amounts of aerodynamic data. It utilizes a graphic representation rather than numbers. The geometry package represents the vehicle as a set of panels. These panels are ordered in groups of ascending values (e.g., equilibrium temperatures). The groups are then displayed successively on a CRT building up to the complete vehicle. A zoom feature allows for displaying only the panels with values between certain limits. The addition of color allows a one-time display thus eliminating the need for a display build up.

  20. On computation of C-stationary points for equilibrium problems with linear complementarity constraints via homotopy method

    Czech Academy of Sciences Publication Activity Database

    Červinka, Michal

    2010-01-01

    Vol. 2010, No. 4 (2010), pp. 730-753. ISSN 0023-5954. Institutional research plan: CEZ:AV0Z10750506. Keywords: equilibrium problems with complementarity constraints * homotopy * C-stationarity. Subject RIV: BC - Control Systems Theory. Impact factor: 0.461, year: 2010. http://library.utia.cas.cz/separaty/2010/MTR/cervinka-on computation of c-stationary points for equilibrium problems with linear complementarity constraints via homotopy method.pdf

  1. Soft computing techniques in engineering applications

    CERN Document Server

    Zhong, Baojiang

    2014-01-01

    Soft computing techniques, which are based on the information processing of biological systems, are now massively used in the areas of pattern recognition, prediction and planning, as well as acting on the environment. Strictly speaking, soft computing is not a body of homogeneous concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing techniques are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, and learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.

  2. Accelerating Multiagent Reinforcement Learning by Equilibrium Transfer.

    Science.gov (United States)

    Hu, Yujing; Gao, Yang; An, Bo

    2015-07-01

    An important approach in multiagent reinforcement learning (MARL) is equilibrium-based MARL, which adopts equilibrium solution concepts in game theory and requires agents to play equilibrium strategies at each state. However, most existing equilibrium-based MARL algorithms cannot scale due to a large number of computationally expensive equilibrium computations (e.g., computing Nash equilibria is PPAD-hard) during learning. For the first time, this paper finds that during the learning process of equilibrium-based MARL, the one-shot games corresponding to each state's successive visits often have the same or similar equilibria (for some states more than 90% of games corresponding to successive visits have similar equilibria). Inspired by this observation, this paper proposes to use equilibrium transfer to accelerate equilibrium-based MARL. The key idea of equilibrium transfer is to reuse previously computed equilibria when each agent has a small incentive to deviate. By introducing transfer loss and transfer condition, a novel framework called equilibrium transfer-based MARL is proposed. We prove that although equilibrium transfer brings transfer loss, equilibrium-based MARL algorithms can still converge to an equilibrium policy under certain assumptions. Experimental results in widely used benchmarks (e.g., grid world game, soccer game, and wall game) show that the proposed framework: 1) not only significantly accelerates equilibrium-based MARL (up to 96.7% reduction in learning time), but also achieves higher average rewards than algorithms without equilibrium transfer and 2) scales significantly better than algorithms without equilibrium transfer when the state/action space grows and the number of agents increases.
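    The transfer condition described above amounts to checking whether any agent can gain more than a small threshold by deviating unilaterally from a previously computed (cached) profile; a toy two-player matrix-game version of that check, with made-up payoffs, is sketched below. It is not the paper's exact formulation.

```python
# Toy version of an "equilibrium transfer" check for a two-player matrix game:
# reuse a previously computed strategy profile (x, y) if no agent can gain more
# than eps by deviating unilaterally. Payoffs and profile are made-up numbers.
import numpy as np

R = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs (hypothetical)
C = np.array([[3.0, 5.0], [0.0, 1.0]])   # column player's payoffs (hypothetical)
x = np.array([0.0, 1.0])                 # cached row strategy
y = np.array([0.0, 1.0])                 # cached column strategy

def deviation_gain(x, y, R, C):
    row_value = x @ R @ y                 # row player's payoff under (x, y)
    col_value = x @ C @ y                 # column player's payoff under (x, y)
    row_best = np.max(R @ y)              # row's best payoff over pure deviations
    col_best = np.max(x @ C)              # column's best payoff over pure deviations
    return max(row_best - row_value, col_best - col_value)

eps = 0.05
if deviation_gain(x, y, R, C) <= eps:
    print("reuse cached equilibrium (transfer)")
else:
    print("recompute equilibrium for this state's game")
```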

  3. Horseshoes in a Chaotic System with Only One Stable Equilibrium

    Science.gov (United States)

    Huan, Songmei; Li, Qingdu; Yang, Xiao-Song

    To confirm the numerically demonstrated chaotic behavior in a chaotic system with only one stable equilibrium reported by Wang and Chen, we resort to the Poincaré map technique and present a rigorous computer-assisted verification of horseshoe chaos by virtue of the topological horseshoe theory.

  4. Computer program to solve two-dimensional shock-wave interference problems with an equilibrium chemically reacting air model

    Science.gov (United States)

    Glass, Christopher E.

    1990-08-01

    The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-specie, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.

  5. Pharmaceutical industry and trade liberalization using computable general equilibrium model.

    Science.gov (United States)

    Barouni, M; Ghaderi, H; Banouei, Aa

    2012-01-01

    Computable general equilibrium (CGE) models are known as a powerful instrument in economic analyses and have been widely used to evaluate the effects of trade liberalization. The purpose of this study was to assess the impacts of trade openness on the pharmaceutical industry using a CGE model. Using a computable general equilibrium model, the effects of a decrease in tariffs, as a symbol of trade liberalization, on key variables of Iranian pharmaceutical products were studied. Simulation was performed via two scenarios. The first scenario was the effect of a decrease in tariffs on pharmaceutical products of 10, 30, 50, and 100 percent on key drug variables, and the second was the effect of a decrease in tariffs in other sectors except pharmaceutical products on vital and economic variables of pharmaceutical products. The required data were obtained and the model parameters were calibrated according to the social accounting matrix of Iran in 2006. The results of the simulation demonstrated that the first scenario increased import, export, drug supply to markets and household consumption, while import, export, supply of products to market, and household consumption of pharmaceutical products would on average decrease in the second scenario. Ultimately, society welfare would improve in all scenarios. We present and synthesize a CGE model that can be used to analyze trade liberalization policy issues in developing countries (like Iran), and thus provide information that policymakers can use to improve pharmacy economics.

  6. Performing an Environmental Tax Reform in a regional Economy. A Computable General Equilibrium

    NARCIS (Netherlands)

    Andre, F.J.; Cardenete, M.A.; Velazquez, E.

    2003-01-01

    We use a Computable General Equilibrium model to simulate the effects of an Environmental Tax Reform in a regional economy (Andalusia, Spain).The reform involves imposing a tax on CO2 or SO2 emissions and reducing either the Income Tax or the payroll tax of employers to Social Security, and

  7. An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM)

    Science.gov (United States)

    Pratt, B. S.; Pratt, D. T.

    1984-01-01

    A user-friendly, menu-driven, interactive computer program known as EQLBRM, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy, is discussed. The program is developed primarily as an instructional tool to be run on small computers, allowing the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.

  8. Numerical Verification Of Equilibrium Chemistry

    International Nuclear Information System (INIS)

    Piro, Markus; Lewis, Brent; Thompson, William T.; Simunovic, Srdjan; Besmann, Theodore M.

    2010-01-01

    A numerical tool is in an advanced state of development to compute the equilibrium compositions of phases and their proportions in multi-component systems of importance to the nuclear industry. The resulting software is being conceived for direct integration into large multi-physics fuel performance codes, particularly for providing boundary conditions in heat and mass transport modules. However, any numerical errors produced in equilibrium chemistry computations will be propagated in subsequent heat and mass transport calculations, thus falsely predicting nuclear fuel behaviour. The necessity for a reliable method to numerically verify chemical equilibrium computations is emphasized by the requirement to handle the very large number of elements necessary to capture the entire fission product inventory. A simple, reliable and comprehensive numerical verification method is presented which can be invoked by any equilibrium chemistry solver for quality assurance purposes.

  9. Computer applications in thermochemistry

    International Nuclear Information System (INIS)

    Vana Varamban, S.

    1996-01-01

    Knowledge of equilibrium is needed in many practical situations. Simple stoichiometric calculations can be performed with hand calculators, but multi-component, multi-phase gas-solid chemical equilibrium calculations are far beyond such conventional devices and methods; iterative techniques have to be resorted to. Such problems are most elegantly handled by the use of modern computers. This report demonstrates the possible use of computers for chemical equilibrium calculations in the fields of thermochemistry and chemical metallurgy. Four modules are explained: to fit the experimental Cp data and generate the thermal functions, to perform equilibrium calculations for the defined conditions, to prepare the elaborate input for the equilibrium calculation, and to analyse the calculated results graphically. The principles of thermochemical calculations are briefly described. An extensive input guide is given. Several illustrations are included to help understanding and usage. (author)
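    The first module listed above (fitting Cp data and generating thermal functions) can be illustrated by a short sketch: fit a polynomial to synthetic Cp(T) data and integrate it to obtain enthalpy and entropy increments. All numbers below are made up and do not correspond to any particular substance.

```python
# Illustrative version of the "fit Cp data and generate thermal functions" module:
# fit Cp(T) = c0 + c1*T + c2*T**2 to synthetic data, then obtain H(T) - H(T0) and
# S(T) - S(T0) by integrating Cp dT and (Cp/T) dT.
import numpy as np

T = np.array([300.0, 400.0, 500.0, 600.0, 800.0, 1000.0])       # K
Cp = np.array([29.1, 29.9, 30.8, 31.7, 33.4, 34.9])             # J/(mol K), synthetic

c2, c1, c0 = np.polyfit(T, Cp, 2)            # quadratic fit, highest power first

def enthalpy_increment(T1, T0=298.15):       # integral of Cp dT
    F = lambda t: c0 * t + c1 * t**2 / 2.0 + c2 * t**3 / 3.0
    return F(T1) - F(T0)

def entropy_increment(T1, T0=298.15):        # integral of (Cp / T) dT
    F = lambda t: c0 * np.log(t) + c1 * t + c2 * t**2 / 2.0
    return F(T1) - F(T0)

print(enthalpy_increment(1000.0), entropy_increment(1000.0))
```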

  10. Statistical and Computational Techniques in Manufacturing

    CERN Document Server

    2012-01-01

    In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the high number of parameters used, conventional approaches are no longer sufficient. Therefore, in manufacturing, statistical and computational techniques have found several applications, namely, modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics, statistical and computational science researchers, mechanical, manufacturing and industrial engineers, and professionals in industries related to manu...

  11. Phase equilibrium engineering

    CERN Document Server

    Brignole, Esteban Alberto

    2013-01-01

    Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamic relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and

  12. HINT computation of LHD equilibrium with zero rotational transform surface

    International Nuclear Information System (INIS)

    Kanno, Ryutaro; Toi, Kazuo; Watanabe, Kiyomasa; Hayashi, Takaya; Miura, Hideaki; Nakajima, Noriyoshi; Okamoto, Masao

    2004-01-01

    A Large Helical Device equilibrium having a zero rotational transform surface is studied by using the three-dimensional MHD equilibrium code HINT. We find that the equilibrium exists, but with the formation of two or three n=0 islands composing a homoclinic-type structure near the center, where n is a toroidal mode number. The LHD equilibrium maintains this structure as the equilibrium beta increases. (author)

  13. Computational studies in tokamak equilibrium and transport

    International Nuclear Information System (INIS)

    Braams, B.J.

    1986-01-01

    This thesis is concerned with some problems arising in the magnetic confinement approach to controlled thermonuclear fusion. The work addresses the numerical modelling of equilibrium and transport properties of a confined plasma and the interpretation of experimental data. The thesis is divided into two parts. Part 1 is devoted to some aspects of the MHD equilibrium problem, both in the 'direct' formulation (given an equation for the plasma current, the corresponding equilibrium is to be determined) and in the 'inverse' formulation (the interpretation of measurements at the plasma edge). Part 2 is devoted to numerical studies of the edge plasma. The appropriate Navier-Stokes system of fluid equations is solved in a two-dimensional geometry. The main interest of this work is to develop an understanding of particle and energy transport in the scrape-off layer and onto material boundaries, and also to contribute to the conceptual design of the NET/INTOR tokamak reactor experiment. (Auth.)

  14. Computing Properties Of Chemical Mixtures At Equilibrium

    Science.gov (United States)

    Mcbride, B. J.; Gordon, S.

    1995-01-01

    Scientists and engineers need data on chemical equilibrium compositions to calculate theoretical thermodynamic properties of chemical systems. Such information is essential in the design and analysis of equipment such as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93 is a general program that calculates chemical equilibrium compositions and properties of mixtures for any chemical system for which thermodynamic data are available. It includes thermodynamic data for more than 1,300 gaseous and condensed species and thermal-transport data for 151 gases, and is written in FORTRAN 77.

  15. Computing multi-species chemical equilibrium with an algorithm based on the reaction extents

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    2013-01-01

    A mathematical model for the solution of a set of chemical equilibrium equations in a multi-species and multiphase chemical system is described. The computer-aided solution of the model is achieved by means of a Newton-Raphson method enhanced with a line-search scheme, which deals with the non-negative constraints. The residual function, representing the distance to the equilibrium, is defined from the chemical potential (or Gibbs energy) of the chemical system. Local minima are potentially avoided by the prioritization of the aqueous reactions with respect to the heterogeneous reactions. The formation and release of gas bubbles is taken into account in the model, limiting the concentration of volatile aqueous species to a maximum value, given by the gas solubility constant. The reaction extents are used as state variables for the numerical method. As a result, the accepted solution satisfies the charge...
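
    The record describes a Newton-Raphson solver with a line search acting on reaction extents. As a rough illustration only (not the authors' algorithm, and for a single hypothetical aqueous reaction A <-> B with an assumed equilibrium constant K), a Python sketch of that core iteration might look as follows:

        import numpy as np

        def solve_extent(c_A0, c_B0, K, tol=1e-12, max_iter=50):
            """Newton-Raphson on the reaction extent xi for the reaction A <-> B.

            Residual: f(xi) = ln Q(xi) - ln K with Q = c_B/c_A, i.e. the "distance
            to equilibrium"; a backtracking line search keeps concentrations positive.
            """
            xi = 0.0
            for _ in range(max_iter):
                c_A, c_B = c_A0 - xi, c_B0 + xi
                f = np.log(c_B / c_A) - np.log(K)
                if abs(f) < tol:
                    break
                df = 1.0 / c_B + 1.0 / c_A          # derivative of the residual w.r.t. xi
                step, alpha = -f / df, 1.0
                while alpha > 1e-12:                # backtracking line search
                    xi_try = xi + alpha * step
                    cA, cB = c_A0 - xi_try, c_B0 + xi_try
                    if cA > 0 and cB > 0 and abs(np.log(cB / cA) - np.log(K)) <= abs(f):
                        break
                    alpha *= 0.5
                xi = xi + alpha * step
            return xi

        # Example: start from equal concentrations with K = 10 favouring B.
        xi = solve_extent(c_A0=1.0, c_B0=1.0, K=10.0)
        print(xi, (1.0 + xi) / (1.0 - xi))          # extent and the resulting quotient (~10)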

  16. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    Science.gov (United States)

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

    To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state-equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state-equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state-equation model and the curve fits match each other. Though the curve fits are significantly faster, the state-equation model is more general and can be adapted to any flow composition.
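
    As a brief sketch of one of the mixing rules named above, Wilke's semi-empirical rule for the viscosity of a gas mixture is shown below; the species mole fractions, viscosities and molar masses are placeholder values, not numbers taken from the paper:

        import numpy as np

        def wilke_viscosity(x, mu, M):
            """Mixture viscosity from Wilke's mixing rule.

            x  : mole fractions of the species
            mu : pure-species viscosities (any consistent unit)
            M  : molar masses of the species
            """
            x, mu, M = map(np.asarray, (x, mu, M))
            n = len(x)
            mu_mix = 0.0
            for i in range(n):
                denom = 0.0
                for j in range(n):
                    phi = (1.0 + np.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2 \
                          / np.sqrt(8.0 * (1.0 + M[i] / M[j]))
                    denom += x[j] * phi
                mu_mix += x[i] * mu[i] / denom
            return mu_mix

        # Hypothetical three-species, air-like mixture (N2, O2, NO).
        print(wilke_viscosity(x=[0.76, 0.20, 0.04],
                              mu=[1.8e-5, 2.0e-5, 1.9e-5],   # Pa*s, placeholder values
                              M=[28.0, 32.0, 30.0]))          # g/mol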

  17. Iterative algorithms for computing the feedback Nash equilibrium point for positive systems

    Science.gov (United States)

    Ivanov, I.; Imsland, Lars; Bogdanova, B.

    2017-03-01

    The paper studies N-player linear quadratic differential games on an infinite time horizon with a deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) for computing the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples to illustrate the performance of both algorithms.
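
    The paper's iterations act on a set of coupled generalised Riccati equations; as a simpler, hedged illustration of the underlying Newton idea only, the classical Kleinman iteration for a single continuous-time algebraic Riccati equation reduces each Newton step to a Lyapunov solve (the matrices below are arbitrary test data):

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def newton_care(A, B, Q, R, X0, iters=20):
            """Newton (Kleinman) iteration for A'X + XA - XBR^{-1}B'X + Q = 0.

            X0 must be a stabilising initial guess (A - B R^{-1} B' X0 Hurwitz);
            each Newton step then amounts to solving a linear Lyapunov equation.
            """
            X = X0
            for _ in range(iters):
                K = np.linalg.solve(R, B.T @ X)       # current feedback gain
                Ac = A - B @ K                        # closed-loop matrix
                # Solve Ac' X + X Ac = -(Q + K' R K) via scipy's A X + X A^H = Q form.
                X = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
            return X

        # Tiny 2x2 example with a crude stabilising initial guess.
        A = np.array([[0.0, 1.0], [-2.0, -3.0]])
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.array([[1.0]])
        print(newton_care(A, B, Q, R, X0=np.eye(2)))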

  18. Computer Recreations.

    Science.gov (United States)

    Dewdney, A. K.

    1988-01-01

    Describes the creation of the computer program "BOUNCE," designed to simulate a weighted piston coming into equilibrium with a cloud of bouncing balls. The model follows the ideal gas law. Utilizes the critical event technique to create the model. Discusses another program, "BOOM," which simulates a chain reaction. (CW)

  19. The equilibrium theory of inhomogeneous polymers (international series of monographs on physics)

    CERN Document Server

    Fredrickson, Glenn

    2013-01-01

    The Equilibrium Theory of Inhomogeneous Polymers provides an introduction to the field-theoretic methods and computer simulation techniques that are used in the design of structured polymeric fluids. By such methods, the principles that dictate equilibrium self-assembly in systems ranging from block and graft copolymers, to polyelectrolytes, liquid crystalline polymers, and polymer nanocomposites can be established. Building on an introductory discussion of single-polymer statistical mechanics, the book provides a detailed treatment of analytical and numerical techniques for addressing the conformational properties of polymers subjected to spatially-varying potential fields. This problem is shown to be central to the field-theoretic description of interacting polymeric fluids, and models for a number of important polymer systems are elaborated. Chapter 5 serves to unify and expound the topic of self-consistent field theory, which is a collection of analytical and numerical techniques for obtaining solutions o...

  20. Estimation of the transboundary economic impacts of the Grand Ethiopia Renaissance Dam: A Computable General Equilibrium Analysis

    NARCIS (Netherlands)

    Kahsay, T.N.; Kuik, O.J.; Brouwer, R.; van der Zaag, P.

    2015-01-01

    Employing a multi-region multi-sector computable general equilibrium (CGE) modeling framework, this study estimates the direct and indirect economic impacts of the Grand Ethiopian Renaissance Dam (GERD) on the Eastern Nile economies. The study contributes to the existing literature by evaluating the

  1. First principles calculations of thermal conductivity with out of equilibrium molecular dynamics simulations

    Science.gov (United States)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    The prediction of the thermal properties of solids and liquids is central to numerous problems in condensed matter physics and materials science, including the study of thermal management of opto-electronic and energy conversion devices. We present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at non-equilibrium conditions. Our formulation is based on a generalization of the approach-to-equilibrium technique, using sinusoidal temperature gradients, and it only requires calculations of first-principles trajectories and atomic forces. We discuss results and computational requirements for a representative, simple oxide, MgO, and compare with experiments and data obtained with classical potentials. This work was supported by MICCoM as part of the Computational Materials Science Program funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division under Grant DOE/BES 5J-30.
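
    As a hedged sketch of how a sinusoidal approach-to-equilibrium run can be post-processed (this is not the authors' code, and the numbers below are placeholders rather than MgO data): assuming purely Fourier relaxation, the amplitude of an initial profile T0 + dT sin(2πx/L) decays exponentially with time constant τ = 1/(α k²), where k = 2π/L and α = κ/(ρ c), so fitting the decay yields the conductivity κ:

        import numpy as np

        def conductivity_from_amplitude_decay(times, amplitudes, wavelength, rho, c):
            """Thermal conductivity from the decay of a sinusoidal temperature profile.

            Under the heat equation the amplitude decays as A0*exp(-t/tau) with
            tau = 1/(alpha*k**2), k = 2*pi/wavelength and alpha = kappa/(rho*c),
            so kappa = rho*c/(tau*k**2).
            """
            slope, _ = np.polyfit(times, np.log(amplitudes), 1)   # ln A(t) = ln A0 - t/tau
            tau = -1.0 / slope
            k = 2.0 * np.pi / wavelength
            return rho * c / (tau * k ** 2)

        # Synthetic decay data standing in for amplitudes extracted from an MD run.
        t = np.linspace(0.0, 1.0e-10, 50)            # s
        amps = 10.0 * np.exp(-t / 2.0e-11)           # K, assumed tau = 20 ps
        print(conductivity_from_amplitude_decay(t, amps, wavelength=5.0e-9,
                                                rho=3600.0, c=900.0))   # W/(m K), placeholder inputs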

  2. A Computational Method for Determining the Equilibrium Composition and Product Temperature in a LH2/LOX Combustor

    Science.gov (United States)

    Sozen, Mehmet

    2003-01-01

    In what follows, the model used for the combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption is described, together with the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.

  3. Transpiration and film cooling boundary layer computer program. Volume 1: Numerical solutions of the turbulent boundary layer equations with equilibrium chemistry

    Science.gov (United States)

    Levine, J. N.

    1971-01-01

    A finite difference turbulent boundary layer computer program has been developed. The program is primarily oriented towards the calculation of boundary layer performance losses in rocket engines; however, the solution is general, and has much broader applicability. The effects of transpiration and film cooling as well as the effect of equilibrium chemical reactions (currently restricted to the H2-O2 system) can be calculated. The turbulent transport terms are evaluated using the phenomenological mixing length - eddy viscosity concept. The equations of motion are solved using the Crank-Nicolson implicit finite difference technique. The analysis and computer program have been checked out by solving a series of both laminar and turbulent test cases and comparing the results to data or other solutions. These comparisons have shown that the program is capable of producing very satisfactory results for a wide range of flows. Further refinements to the analysis and program, especially as applied to film cooling solutions, would be aided by the acquisition of a firm data base.

  4. Analysis of equilibrium and topology of tokamak plasmas

    International Nuclear Information System (INIS)

    Milligen, B.P. van.

    1991-01-01

    In a tokamak, the plasma is confined by means of a magnetic field. There exists an equilibrium between outward forces due to the pressure gradient in the plasma and inward forces due to the interaction between currents flowing inside the plasma and the magnetic field. The equilibrium magnetic field is characterized by helical field lines that lie on nested toroidal surfaces of constant flux. The equilibrium yields values for global and local plasma parameters (e.g. plasma position, total current, local pressure). Thus, precise knowledge of the equilibrium is essential for plasma control, for the understanding of many phenomena occurring in the plasma (in particular departures from the ideal equilibrium involving current filamentation on the flux surfaces that lead to the formation of islands, i.e. nested helical flux surfaces), and for the interpretation of many different types of measurements (e.g. the translation of line-integrated electron density measurements made by laser beams probing the plasma into a local electron density on a flux surface). The problem of determining the equilibrium magnetic field from external magnetic field measurements has been studied extensively in the literature. The problem is 'ill-posed', which means that the solution is unstable to small changes in the measurement data, and the solution has to be constrained in order to stabilize it. Various techniques for handling this problem have been suggested in the literature. Usually ad-hoc restrictions are imposed on the equilibrium solution in order to stabilize it. Most equilibrium solvers are not able to handle very dissimilar measurement data, which means that information on the equilibrium is lost. They generally do not allow a straightforward error estimate of the obtained results to be made, and they require large amounts of computing time. These problems are addressed in this thesis. (author). 104 refs.; 42 figs.; 6 tabs

  5. Stabilization of emission of CO2: A computable general equilibrium assessment

    International Nuclear Information System (INIS)

    Glomsroed, S.; Vennemo, H.; Johnsen, T.

    1992-01-01

    A multisector computable general equilibrium model is used to study economic development perspectives in Norway if CO2 emissions were stabilized. The effects discussed include impacts on main macroeconomic indicators and economic growth, the sectoral allocation of production, and effects on the market for energy. The impact on emissions of pollutants other than CO2 is assessed, along with the related impact on non-economic welfare. The results indicate that CO2 emissions might be stabilized in Norway without dramatically reducing economic growth. Sectoral allocation effects are much larger. A substantial reduction in emissions to air other than CO2 is found, yielding considerable gains in non-economic welfare. 25 refs., 6 tabs., 2 figs

  6. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
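
    A minimal sketch of the general idea (not the authors' implementation): encode the mixed strategies of a small bimatrix game through a softmax map, define a nonnegative regret-based objective that vanishes exactly at a Nash equilibrium, and hand it to an off-the-shelf evolutionary optimizer:

        import numpy as np
        from scipy.optimize import differential_evolution

        # Matching pennies: unique mixed Nash equilibrium at (1/2, 1/2) for both players.
        A = np.array([[ 1.0, -1.0], [-1.0,  1.0]])   # payoffs of player 1
        B = -A                                        # zero-sum: payoffs of player 2

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        def nash_error(theta):
            """Sum of both players' best-response regrets; zero exactly at a Nash equilibrium."""
            x = softmax(theta[:2])      # mixed strategy of player 1
            y = softmax(theta[2:])      # mixed strategy of player 2
            r1 = (A @ y).max() - x @ A @ y
            r2 = (x @ B).max() - x @ B @ y
            return r1 + r2

        result = differential_evolution(nash_error, bounds=[(-10, 10)] * 4, seed=0, tol=1e-10)
        print(softmax(result.x[:2]), softmax(result.x[2:]), result.fun)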

  7. Combustion of hydrogen-air jets in local chemical equilibrium: A guide to the CHARNAL computer program

    Science.gov (United States)

    Spalding, D. B.; Launder, B. E.; Morse, A. P.; Maples, G.

    1974-01-01

    A guide to a computer program, written in FORTRAN 4, for predicting the flow properties of turbulent mixing with combustion of a circular jet of hydrogen into a co-flowing stream of air is presented. The program, which is based upon the Imperial College group's PASSA series, solves differential equations for diffusion and dissipation of turbulent kinetic energy and also of the R.M.S. fluctuation of hydrogen concentration. The effective turbulent viscosity for use in the shear stress equation is computed. Chemical equilibrium is assumed throughout the flow.

  8. A personal-computer-based package for interactive assessment of magnetohydrodynamic equilibrium and poloidal field coil design in axisymmetric toroidal geometry

    International Nuclear Information System (INIS)

    Kelleher, W.P.; Steiner, D.

    1989-01-01

    A personal-computer (PC)-based calculational approach assesses magnetohydrodynamic (MHD) equilibrium and poloidal field (PF) coil arrangement in a highly interactive mode, well suited for tokamak scoping studies. The system developed involves a two-step process: the MHD equilibrium is calculated, and then a PF coil arrangement consistent with the equilibrium is determined in an interactive design environment. In this paper the approach is used to examine four distinctly different toroidal configurations: the STARFIRE reactor, a spherical torus (ST), the Big Dee, and an elongated tokamak. In these applications the PC-based results are benchmarked against those of a mainframe code for STARFIRE, ST, and the Big Dee. The equilibrium and PF coil arrangement calculations obtained with the PC approach agree within a few percent with those obtained with the mainframe code

  9. Can Migrants Save Greece From Ageing? A Computable General Equilibrium Approach Using G-AMOS.

    OpenAIRE

    Nikos Pappas

    2008-01-01

    The population of Greece is projected to age in the course of the next three decades. This paper combines demographic projections with a multi-period economic Computable General Equilibrium (CGE) modelling framework to assess the macroeconomic impact of these future demographic trends. The simulation strategy adopted in Lisenkova et al. (2008) is also employed here. The size and age composition of the population in the future depends on current and future values of demographic parameters suc...

  10. Efficient computation of past global ocean circulation patterns using continuation in paleobathymetry

    NARCIS (Netherlands)

    Mulder, T. E.; Baatsen, M. L.J.; Wubs, F.W.; Dijkstra, H. A.

    2017-01-01

    In the field of paleoceanographic modeling, the different positioning of Earth's continental configurations is often a major challenge for obtaining equilibrium ocean flow solutions. In this paper, we introduce numerical parameter continuation techniques to compute equilibrium solutions of ocean

  11. Teaching Chemical Equilibrium with the Jigsaw Technique

    Science.gov (United States)

    Doymus, Kemal

    2008-01-01

    This study investigates the effect of cooperative learning (jigsaw) versus individual learning methods on students' understanding of chemical equilibrium in a first-year general chemistry course. This study was carried out in two different classes in the department of primary science education during the 2005-2006 academic year. One of the classes…

  12. A nested-LES wall-modeling approach for computation of high Reynolds number equilibrium and non-equilibrium wall-bounded turbulent flows

    Science.gov (United States)

    Tang, Yifeng; Akhavan, Rayhaneh

    2014-11-01

    A nested-LES wall-modeling approach for high Reynolds number, wall-bounded turbulence is presented. In this approach, a coarse-grained LES is performed in the full domain, along with a nested, fine-resolution LES in a minimal flow unit. The coupling between the two domains is achieved by renormalizing the instantaneous LES velocity fields to match the profiles of the kinetic energies of the components of the mean velocity and velocity fluctuations in both domains to those of the minimal flow unit in the near-wall region, and to those of the full domain in the outer region. The method is of fixed computational cost, independent of Reτ, in homogeneous flows, and is O(Reτ) in strongly non-homogeneous flows. The method has been applied to equilibrium turbulent channel flow at Reτ ~ 1000 and to shear-driven, 3D turbulent channel flow at Reτ ~ 2000. In equilibrium channel flow, the friction coefficient and the one-point turbulence statistics are predicted in agreement with Dean's correlation and available DNS and experimental data. In shear-driven, 3D channel flow, the evolution of turbulence statistics is predicted in agreement with the experimental data of Driver & Hebbar (1991) for shear-driven, 3D boundary layer flow.

  13. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  14. Determination of gross plasma equilibrium from magnetic multipoles

    Energy Technology Data Exchange (ETDEWEB)

    Kessel, C.E.

    1986-05-01

    A new approximate technique to determine the gross plasma equilibrium parameters, major radius, minor radius, elongation and triangularity for an up-down symmetric plasma is developed. It is based on a multipole representation of the externally applied poloidal magnetic field, relating specific terms to the equilibrium parameters. The technique shows reasonable agreement with free boundary MHD equilibrium results. The method is useful in dynamic simulation and control studies.

  15. Determination of gross plasma equilibrium from magnetic multipoles

    International Nuclear Information System (INIS)

    Kessel, C.E.

    1986-05-01

    A new approximate technique to determine the gross plasma equilibrium parameters, major radius, minor radius, elongation and triangularity for an up-down symmetric plasma is developed. It is based on a multipole representation of the externally applied poloidal magnetic field, relating specific terms to the equilibrium parameters. The technique shows reasonable agreement with free boundary MHD equilibrium results. The method is useful in dynamic simulation and control studies

  16. Effects of periodic boundary conditions on equilibrium properties of computer simulated fluids. I. Theory

    International Nuclear Information System (INIS)

    Pratt, L.R.; Haan, S.W.

    1981-01-01

    An exact formal theory for the effects of periodic boundary conditions on the equilibrium properties of computer-simulated classical many-body systems is developed. This is done by observing that use of the usual periodic conditions is equivalent to the study of a certain supermolecular liquid, in which a supermolecule is a polyatomic molecule of infinite extent composed of one of the physical particles in the system plus all its periodic images. For this supermolecular system in the grand ensemble, all the cluster expansion techniques used in the study of real molecular liquids are directly applicable. As expected, particle correlations are translationally uniform, but explicitly anisotropic. When the intermolecular potential energy functions are of short enough range, or are cut off, so that the minimum image method is used, evaluation of the cluster integrals is dramatically simplified. In this circumstance, a large and important class of cluster expansion contributions can be summed exactly and expressed in terms of the correlation functions which result when the system size is allowed to increase without bound. This result yields a simple and useful approximation to the corrections to the particle correlations due to the use of periodic boundary conditions with finite systems. Numerical applications of these results are reported in the following paper
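
    For readers unfamiliar with the minimum image method referred to above, a small illustrative sketch (not code from the paper) of the convention for a cubic periodic box is:

        import numpy as np

        def minimum_image_distances(positions, box_length):
            """Pairwise distances under the minimum image convention in a cubic box.

            Each component of a separation vector is wrapped into (-L/2, L/2], so every
            particle interacts only with the nearest periodic image of its partner.
            """
            delta = positions[:, None, :] - positions[None, :, :]      # all pair separations
            delta -= box_length * np.round(delta / box_length)         # wrap to nearest image
            return np.sqrt((delta ** 2).sum(axis=-1))

        # 100 random particles in a box of side 10 (arbitrary units).
        rng = np.random.default_rng(1)
        pos = rng.uniform(0.0, 10.0, size=(100, 3))
        d = minimum_image_distances(pos, box_length=10.0)
        print(d.max())   # never exceeds (sqrt(3)/2)*L under minimum imaging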

  17. Computed Radiography: An Innovative Inspection Technique

    International Nuclear Information System (INIS)

    Klein, William A.; Councill, Donald L.

    2002-01-01

    Florida Power and Light Company's (FPL) Nuclear Division combined two diverse technologies to create an innovative inspection technique, Computed Radiography, that improves personnel safety and unit reliability while reducing inspection costs. This technique was pioneered in the medical field and applied in the Nuclear Division initially to detect piping degradation due to flow-accelerated corrosion. Component degradation can also be detected by this technique. This approach permits FPL to reduce inspection costs, perform on-line examinations (no generation curtailment), and maintain or improve both personnel safety and unit reliability. Computed Radiography is a very versatile tool capable of other uses: improving the external corrosion program by permitting inspections underneath insulation, and diagnosing system and component problems, such as valve positions, without the need to shut down or disassemble the component. (authors)

  18. The effect of SNR structure on non-equilibrium X-ray spectra

    International Nuclear Information System (INIS)

    Hamilton, A.J.S.; Sarazin, C.L.

    1983-01-01

    A technique is presented for characterizing the ionization structure and consequent thermal X-ray emission of a SNR when non-equilibrium ionization effects are important. The technique allows different theoretical SNR models to be compared and contrasted rapidly in advance of detailed numerical computations. In particular, it is shown that the spectrum of a Sedov remnant can probably be applied satisfactorily in a variety of SNR structures, including the reverse shock model advocated by Chevalier (1982) for Type I SN, the isothermal similarity solution of Solinger, Rappaport and Buff (1975), and various inhomogeneous or 'messy' structures. (Auth.)

  19. Equilibrium Dialysis

    African Journals Online (AJOL)

    context of antimicrobial therapy in malnutrition. Dialysis has in the past presented technical problems, being complicated and time-consuming. A new dialysis system based on the equilibrium technique has now become available, and it is the principles and practical application of this apparatus (Kontron Diapack; Kontron.

  20. Departures from local thermodynamic equilibrium in cutting arc plasmas derived from electron and gas density measurements using a two-wavelength quantitative Schlieren technique

    International Nuclear Information System (INIS)

    Prevosto, L.; Mancinelli, B.; Artana, G.; Kelly, H.

    2011-01-01

    A two-wavelength quantitative Schlieren technique that allows inferring the electron and gas densities of axisymmetric arc plasmas without imposing any assumption regarding statistical equilibrium models is reported. This technique was applied to the study of local thermodynamic equilibrium (LTE) departures within the core of a 30 A high-energy density cutting arc. In order to derive the electron and heavy particle temperatures from the inferred density profiles, a generalized two-temperature Saha equation together with the plasma equation of state and the quasineutrality condition were employed. Factors such as arc fluctuations that influence the accuracy of the measurements and the validity of the assumptions used to derive the plasma species temperature were considered. Significant deviations from chemical equilibrium as well as kinetic equilibrium were found at elevated electron temperatures and gas densities toward the arc core edge. An electron temperature profile nearly constant through the arc core with a value of about 14000-15000 K, well decoupled from the heavy particle temperature of about 1500 K at the arc core edge, was inferred.

  1. Why Enforcing its UNCAC Commitments Would be Good for Russia: A Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Michael P. BARRY

    2010-05-01

    Russia has ratified the UN Convention Against Corruption but has not successfully enforced it. This paper uses updated GTAP data to reconstruct a computable general equilibrium (CGE) model to quantify the macroeconomic effects of corruption in Russia. Corruption is found to cost the Russian economy billions of dollars a year. A conclusion of the paper is that implementing and enforcing the UNCAC would be of significant economic benefit to Russia and its people.

  2. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data, and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, with ~6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity data, SRTM data with 3 arc-second resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073) of 21 randomly spaced points where the geoid was computed by the geometrical levelling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
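
    A schematic sketch of the RCR bookkeeping described above (placeholder arrays and a dummy Stokes-type operator; this is not the GRAVTool code):

        import numpy as np

        def remove_compute_restore(dg_obs, dg_ggm, dg_topo, N_ggm, N_ind, stokes_operator):
            """Skeleton of the Remove-Compute-Restore (RCR) geoid computation.

            dg_obs  : observed terrestrial gravity anomalies
            dg_ggm  : anomalies implied by the global geopotential model (long wavelengths)
            dg_topo : terrain effect on the anomalies (short wavelengths)
            N_ggm   : geoid contribution of the global model
            N_ind   : indirect effect of the terrain reduction on the geoid
            stokes_operator : callable turning residual anomalies into a residual geoid
            """
            dg_res = dg_obs - dg_ggm - dg_topo      # "remove" step
            N_res = stokes_operator(dg_res)         # "compute" step (e.g. Stokes integration)
            return N_ggm + N_res + N_ind            # "restore" step

        # Placeholder demonstration with synthetic 1-D arrays and a dummy operator.
        n = 5
        dummy = lambda dg: 0.01 * dg                # stand-in for the Stokes integral
        print(remove_compute_restore(np.ones(n), 0.6 * np.ones(n), 0.1 * np.ones(n),
                                     N_ggm=20.0 * np.ones(n), N_ind=0.02 * np.ones(n),
                                     stokes_operator=dummy))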

  3. A report on intercomparison studies of computer programs which respectively model: i) radionuclide migration ii) equilibrium chemistry of groundwater

    International Nuclear Information System (INIS)

    Broyd, T.W.; McD Grant, M.; Cross, J.E.

    1985-01-01

    This report describes two intercomparison studies of computer programs which respectively model: i) radionuclide migration ii) equilibrium chemistry of groundwaters. These studies have been performed by running a series of test cases with each program and comparing the various results obtained. The work forms a part of the CEC MIRAGE project (MIgration of RAdionuclides in the GEosphere) and has been jointly funded by the CEC and the United Kingdom Department of the Environment. Presentations of the material contained herein were given at plenary meetings of the MIRAGE project in Brussels in March, 1984 (migration) and March, 1985 (equilibrium chemistry) respectively

  4. Gravitational equilibrium of a multi-body fluid system

    International Nuclear Information System (INIS)

    Eriguchi, Yoshiharu; Hachisu, Izumi.

    1983-01-01

    We have computed gravitational equilibrium sequences for systems consisting of N incompressible fluid bodies (N = 3, 4, 5). The component fluids are assumed congruent. The system seems to take on a lobe-like shape in the N = 3 case and a ring-like shape for N ≥ 4 as the fluid bodies come nearer to each other. For every sequence there is a critical equilibrium whose dimensionless angular momentum has the minimum value of the sequence. As the final outcome is nearly in equilibrium in computations of a collapsing gas cloud, we can apply the present results to the interpretation of such dynamical calculations. For instance, the gas cloud can never fission into an N-body equilibrium when its dimensionless angular momentum is below the critical value of the N-body sequence. (author)

  5. Non-equilibrium phase transitions

    CERN Document Server

    Henkel, Malte; Lübeck, Sven

    2009-01-01

    This book describes two main classes of non-equilibrium phase transitions: (a) the statics and dynamics of transitions into an absorbing state, and (b) dynamical scaling in far-from-equilibrium relaxation behaviour and ageing. The first volume begins with an introductory chapter which recalls the main concepts of phase transitions, set for the convenience of the reader in an equilibrium context. The extension to non-equilibrium systems is made by using directed percolation as the main paradigm of absorbing phase transitions, and in view of the richness of the known results an entire chapter is devoted to it, including a discussion of recent experimental results. Scaling theories and a large set of both numerical and analytical methods for the study of non-equilibrium phase transitions are thoroughly discussed. The techniques used for directed percolation are then extended to other universality classes and many important results on model parameters are provided for easy reference.

  6. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  7. User's guide for vectorized code EQUIL for calculating equilibrium chemistry on Control Data STAR-100 computer

    Science.gov (United States)

    Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.

    1980-01-01

    A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for the electron, H, He, C, O, N system of elements. In all, 24 chemical species are included.

  8. Upwind MacCormack Euler solver with non-equilibrium chemistry

    Science.gov (United States)

    Sherer, Scott E.; Scott, James N.

    1993-01-01

    A computer code, designated UMPIRE, is currently under development to solve the Euler equations in two dimensions with non-equilibrium chemistry. UMPIRE employs an explicit MacCormack algorithm with dissipation introduced via Roe's flux-difference split upwind method. The code also has the capability to employ a point-implicit methodology for flows where stiffness is introduced through the chemical source term. A technique consisting of diagonal sweeps across the computational domain from each corner is presented, which is used to reduce storage and execution requirements. Results depicting one-dimensional shock tube flow for both a calorically perfect gas and thermally perfect, dissociating nitrogen are presented to verify the current capabilities of the program. Also, computational results from a chemical reactor vessel with no fluid dynamic effects are presented to check the chemistry capability and to verify the point-implicit strategy.
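
    UMPIRE itself solves the two-dimensional Euler equations with Roe-type dissipation and non-equilibrium chemistry; as a much smaller, hedged illustration of just the explicit MacCormack predictor-corrector structure, here it is applied to one-dimensional linear advection on a periodic grid:

        import numpy as np

        def maccormack_advection(u, nu, steps):
            """Explicit MacCormack predictor-corrector for u_t + a u_x = 0 (periodic grid).

            nu = a*dt/dx is the Courant number (stable for |nu| <= 1); the predictor
            uses forward differences and the corrector uses backward differences.
            """
            for _ in range(steps):
                u_pred = u - nu * (np.roll(u, -1) - u)                       # predictor
                u = 0.5 * (u + u_pred - nu * (u_pred - np.roll(u_pred, 1)))  # corrector
            return u

        # Advect a Gaussian pulse exactly once around the periodic domain.
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.exp(-200.0 * (x - 0.5) ** 2)
        u1 = maccormack_advection(u0.copy(), nu=0.5, steps=400)
        print(np.abs(u1 - u0).max())   # small discretization error after one revolution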

  9. Zero-rating food in South Africa: A computable general equilibrium analysis

    Directory of Open Access Journals (Sweden)

    M Kearney

    2004-04-01

    Zero-rating food is considered to alleviate poverty of poor households, who spend the largest proportion of their income on food. However, this will result in a loss of revenue for government. A Computable General Equilibrium (CGE) model is used to analyze the combined effects of zero-rating food and using alternative revenue sources to compensate for the loss in revenue. To prevent excessively high increases in the statutory VAT rates on business and financial services, increasing direct taxes or increasing VAT to 16 per cent is investigated. Increasing direct taxes is the most successful option for creating a more progressive tax structure while still generating a positive impact on GDP. The results indicate that zero-rating food combined with a proportional percentage increase in direct taxes can improve the welfare of poor households.

  10. Computing diffusivities from particle models out of equilibrium

    Science.gov (United States)

    Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia

    2018-04-01

    A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.

  11. Future trends in power plant process computer techniques

    International Nuclear Information System (INIS)

    Dettloff, K.

    1975-01-01

    The development of new concepts in process computer techniques has advanced in great steps in three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, such as colour layer equipment, have been developed. In software, a decisive step has been made in the area of 'automation software'. Through these components, progress has also been made on the question of incorporating the process computer into the structure of the overall power plant control technology. (orig./LH) [de

  12. Equilibrium and off-equilibrium trap-size scaling in one-dimensional ultracold bosonic gases

    International Nuclear Information System (INIS)

    Campostrini, Massimo; Vicari, Ettore

    2010-01-01

    We study some aspects of equilibrium and off-equilibrium quantum dynamics of dilute bosonic gases in the presence of a trapping potential. We consider systems with a fixed number of particles and study their scaling behavior with increasing the trap size. We focus on one-dimensional bosonic systems, such as gases described by the Lieb-Liniger model and its Tonks-Girardeau limit of impenetrable bosons, and gases constrained in optical lattices as described by the Bose-Hubbard model. We study their quantum (zero-temperature) behavior at equilibrium and off equilibrium during the unitary time evolution arising from changes of the trapping potential, which may be instantaneous or described by a power-law time dependence, starting from the equilibrium ground state for an initial trap size. Renormalization-group scaling arguments and analytical and numerical calculations show that the trap-size dependence of the equilibrium and off-equilibrium dynamics can be cast in the form of a trap-size scaling in the low-density regime, characterized by universal power laws of the trap size, in dilute gases with repulsive contact interactions and lattice systems described by the Bose-Hubbard model. The scaling functions corresponding to several physically interesting observables are computed. Our results are of experimental relevance for systems of cold atomic gases trapped by tunable confining potentials.

  13. Equilibrium shoreface profiles

    DEFF Research Database (Denmark)

    Aagaard, Troels; Hughes, Michael G

    2017-01-01

    Large-scale coastal behaviour models use the shoreface profile of equilibrium as a fundamental morphological unit that is translated in space to simulate coastal response to, for example, sea level oscillations and variability in sediment supply. Despite a longstanding focus on the shoreface profile and its relevance to predicting coastal response to changing environmental conditions, the processes and dynamics involved in shoreface equilibrium are still not fully understood. Here, we apply a process-based empirical sediment transport model, combined with morphodynamic principles, to provide ...; there is no tuning or calibration and computation times are short. It is therefore easily implemented with repeated iterations to manage uncertainty...

  14. Training Software in Artificial-Intelligence Computing Techniques

    Science.gov (United States)

    Howard, Ayanna; Rogstad, Eric; Chalfant, Eugene

    2005-01-01

    The Artificial Intelligence (AI) Toolkit is a computer program for training scientists, engineers, and university students in three soft-computing techniques (fuzzy logic, neural networks, and genetic algorithms) used in artificial-intelligence applications. The program provides an easily understandable tutorial interface, including an interactive graphical component through which the user can gain hands-on experience in soft-computing techniques applied to realistic example problems. The tutorial gives step-by-step instructions on the workings of soft-computing technology, whereas the hands-on examples allow interaction and reinforcement of the techniques explained throughout the tutorial. In the fuzzy-logic example, a user can interact with a robot and an obstacle course to verify how fuzzy logic is used to command a rover traverse from an arbitrary start to the goal location. For the genetic-algorithm example, the problem is to determine the minimum-length path for visiting a user-chosen set of planets in the solar system. For the neural-network example, the problem is to decide, on the basis of input data on physical characteristics, whether a person is a man, woman, or child. The AI Toolkit is compatible with the Windows 95, 98, ME, NT 4.0, 2000, and XP operating systems. A computer having a processor speed of at least 300 MHz and random-access memory of at least 56 MB is recommended for optimal performance. The program can be run on a slower computer having less memory, but some functions may not be executed properly.

  15. Computer animation algorithms and techniques

    CERN Document Server

    Parent, Rick

    2012-01-01

    Driven by the demands of research and the entertainment industry, the techniques of animation are pushed to render increasingly complex objects with ever-greater life-like appearance and motion. This rapid progression of knowledge and technique impacts professional developers, as well as students. Developers must maintain their understanding of conceptual foundations, while their animation tools become ever more complex and specialized. The second edition of Rick Parent's Computer Animation is an excellent resource for the designers who must meet this challenge. The first edition establ

  16. Computational intelligence techniques in health care

    CERN Document Server

    Zhou, Wengang; Satheesh, P

    2016-01-01

    This book presents research on emerging computational intelligence techniques and tools, with a particular focus on new trends and applications in health care. Healthcare is a multi-faceted domain, which incorporates advanced decision-making, remote monitoring, healthcare logistics, operational excellence and modern information systems. In recent years, the use of computational intelligence methods to address the scale and the complexity of the problems in healthcare has been investigated. This book discusses various computational intelligence methods that are implemented in applications in different areas of healthcare. It includes contributions by practitioners, technology developers and solution providers.

  17. Experimental determination of thermodynamic equilibrium in biocatalytic transamination

    DEFF Research Database (Denmark)

    Tufvesson, Pär; Jensen, Jacob Skibsted; Kroutil, Wolfgang

    2012-01-01

    The equilibrium constant is a critical parameter for making rational design choices in biocatalytic transamination for the synthesis of chiral amines. However, very few reports are available in the scientific literature determining the equilibrium constant (K) for the transamination of ketones. Various methods for determining (or estimating) equilibrium have previously been suggested, both experimental as well as computational (based on group contribution methods). However, none of these were found suitable for determining the equilibrium constant for the transamination of ketones. Therefore...

  18. Computer codes for the evaluation of thermodynamic and transport properties for equilibrium air to 30000 K

    Science.gov (United States)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.

    1991-01-01

    The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.

  19. Multi-Detector Computed Tomography Imaging Techniques in Arterial Injuries

    Directory of Open Access Journals (Sweden)

    Cameron Adler

    2018-04-01

    Cross-sectional imaging has become a critical aspect of the evaluation of arterial injuries. In particular, angiography using computed tomography (CT) is the imaging of choice. A variety of techniques and options are available when evaluating for arterial injuries. Techniques involve the contrast bolus, various phases of contrast enhancement, multiplanar reconstruction, volume rendering, and maximum intensity projection. After the images are rendered, a variety of features may be seen that diagnose the injury. This article provides a general overview of the techniques, important findings, and pitfalls in cross-sectional imaging of arterial injuries, particularly in relation to computed tomography. In addition, the future directions of computed tomography, including a few techniques in the process of development, are also discussed.

  20. Computable general equilibrium model fiscal year 2013 capability development report

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  1. Computing Sequential Equilibria for Two-Player Games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...

  2. Analysis of the trend to equilibrium of a chemically reacting system

    International Nuclear Information System (INIS)

    Kremer, Gilberto M; Bianchi, Miriam Pandolfi; Soares, Ana Jacinta

    2007-01-01

    In the present paper, a quaternary gaseous reactive mixture, for which the chemical reaction is close to its final stage and the elastic and reactive frequencies are comparable, is modelled within the Boltzmann equation extended to reacting gases. The main objective is a detailed analysis of the non-equilibrium effects arising in the reactive system A1 + A2 ↔ A3 + A4, in a flow regime considered not far from thermal, mechanical and chemical equilibrium. A first-order perturbation solution technique is applied to the macroscopic field equations for the spatially homogeneous gas system, and the trend to equilibrium is studied in detail. Adopting elastic hard-sphere and reactive line-of-centres cross sections and an appropriate choice of the input distribution functions (which allows us to distinguish the two cases where the constituents are either at the same or at different temperatures), explicit computations of the linearized production terms for mass, momentum and total energy are performed for each gas species. The departures from the equilibrium states of the densities, temperatures and diffusion fluxes are characterized by small perturbations of their corresponding equilibrium values. For the hydrogen-chlorine system, the perturbations are plotted as functions of time for both cases where the species are either at the same or at different temperatures. Moreover, the trend to equilibrium of the reaction rates is represented for the forward and backward reaction H2 + Cl ↔ HCl + H

  3. Equilibrium fluctuation energy of gyrokinetic plasma

    International Nuclear Information System (INIS)

    Krommes, J.A.; Lee, W.W.; Oberman, C.

    1985-11-01

    The thermal equilibrium electric field fluctuation energy of the gyrokinetic model of magnetized plasma is computed, and found to be smaller than the well-known result ⟨|E(k)|²⟩/8π = (1/2)T/[1 + (kλ_D)²] valid for arbitrarily magnetized plasmas. It is shown that, in a certain sense, the equilibrium electric field energy is minimum in the gyrokinetic regime. 13 refs., 2 figs.

  4. Operator support system using computational intelligence techniques

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez

    2015-01-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model, which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system will show the information obtained by different CI techniques in order to help operators take decisions in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  5. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model, which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system will show the information obtained by different CI techniques in order to help operators take decisions in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  6. A new estimation technique of sovereign default risk

    Directory of Open Access Journals (Sweden)

    Mehmet Ali Soytaş

    2016-12-01

    Using the fixed-point theorem, sovereign default models are solved by numerical value-function iteration and calibration methods, which, due to their computational constraints, greatly limit the models' quantitative performance and forego their country-specific quantitative projection ability. By applying the Hotz-Miller estimation technique (Hotz and Miller, 1993), often used in the applied microeconometrics literature, to dynamic general equilibrium models of sovereign default, one can estimate the ex-ante default probability of economies, given the structural parameter values obtained from country-specific business-cycle statistics and the relevant literature. With this technique we thus offer an alternative solution method for dynamic general equilibrium models of sovereign default that improves upon their quantitative inference ability.
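
    A toy sketch of the inversion at the heart of the Hotz-Miller technique (illustrative only, not the sovereign-default model of the paper; the probabilities are invented): with additively separable type-1 extreme value shocks, differences in conditional choice probabilities identify differences in choice-specific values, so the dynamic programme need not be solved by value-function iteration:

        import numpy as np

        # Conditional choice probabilities, e.g. estimated default/repay frequencies in some state.
        p = np.array([0.08, 0.92])          # hypothetical P(default), P(repay)

        # Hotz-Miller inversion under type-1 extreme value shocks:
        # v_j - v_k = ln p_j - ln p_k, so CCPs recover choice-specific value differences.
        v_diff = np.log(p[0]) - np.log(p[1])
        print(v_diff)                        # value of defaulting relative to repaying

        # Sanity check: plugging the value differences back through the logit map
        # reproduces the observed choice probabilities.
        v = np.array([v_diff, 0.0])
        print(np.exp(v) / np.exp(v).sum())   # -> [0.08, 0.92]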

  7. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
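
    A small, hedged sketch of one of the heuristics named above, loop perforation, applied to a plain power-iteration PageRank (not the authors' code; the graph and skip rate are arbitrary):

        import numpy as np

        def pagerank_perforated(adj, d=0.85, iters=50, skip_every=0):
            """Power-iteration PageRank with simple loop perforation.

            adj        : dense adjacency matrix, adj[i, j] = 1 for an edge i -> j
            skip_every : perforate the main loop by skipping every k-th update
                         (0 means no perforation, i.e. the exact baseline loop).
            """
            n = adj.shape[0]
            out_deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
            M = (adj / out_deg).T                 # column-stochastic transition matrix
            r = np.full(n, 1.0 / n)
            for k in range(iters):
                if skip_every and k % skip_every == 0:
                    continue                      # perforated iteration: keep the old ranks
                r = (1.0 - d) / n + d * (M @ r)
            return r / r.sum()

        # Small 4-node example: compare the exact and the perforated rankings.
        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)
        exact = pagerank_perforated(adj)
        approx = pagerank_perforated(adj, skip_every=3)   # roughly 1/3 of the updates skipped
        print(np.abs(exact - approx).max())               # quality loss of the approximation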

  8. Numerical Computational Technique for Scattering from Underwater Objects

    OpenAIRE

    T. Ratna Mani; Raj Kumar; Odamapally Vijay Kumar

    2013-01-01

    This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scattering has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric s...

  9. Non-Equilibrium Properties from Equilibrium Free Energy Calculations

    Science.gov (United States)

    Pohorille, Andrew; Wilson, Michael A.

    2012-01-01

    Calculating free energy in computer simulations is of central importance in the statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity that characterizes the equilibrium state, but also because it often provides an efficient route to access the dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it might be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. The conductance of model ion channels has been calculated directly by counting the number of ion-crossing events observed during long molecular dynamics simulations and has been compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, thus demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.

  10. Thermodynamic and transport properties of gaseous tetrafluoromethane in chemical equilibrium

    Science.gov (United States)

    Hunt, J. L.; Boney, L. R.

    1973-01-01

    Equations and a computer code are presented for the thermodynamic and transport properties of gaseous, undissociated tetrafluoromethane (CF4) in chemical equilibrium. The computer code calculates the thermodynamic and transport properties of CF4 when given any two of five thermodynamic variables (entropy, temperature, volume, pressure, and enthalpy). Equilibrium thermodynamic and transport property data are tabulated and pressure-enthalpy diagrams are presented.

  11. Critical dynamics a field theory approach to equilibrium and non-equilibrium scaling behavior

    CERN Document Server

    Täuber, Uwe C

    2014-01-01

    Introducing a unified framework for describing and understanding complex interacting systems common in physics, chemistry, biology, ecology, and the social sciences, this comprehensive overview of dynamic critical phenomena covers the description of systems at thermal equilibrium, quantum systems, and non-equilibrium systems. Powerful mathematical techniques for dealing with complex dynamic systems are carefully introduced, including field-theoretic tools and the perturbative dynamical renormalization group approach, rapidly building up a mathematical toolbox of relevant skills. Heuristic and qualitative arguments outlining the essential theory behind each type of system are introduced at the start of each chapter, alongside real-world numerical and experimental data, firmly linking new mathematical techniques to their practical applications. Each chapter is supported by carefully tailored problems for solution, and comprehensive suggestions for further reading, making this an excellent introduction to critic...

  12. Development of computer-aided auto-ranging technique for a computed radiography system

    International Nuclear Information System (INIS)

    Ishida, M.; Shimura, K.; Nakajima, N.; Kato, H.

    1988-01-01

    For a computed radiography system, the authors developed a computer-aided auto-ranging technique in which the clinically useful image data are automatically mapped to the available display range. The preread image data are inspected to determine the location of collimation. A histogram of the pixels inside the collimation is evaluated for characteristic values such as maxima and minima, and the optimal density and contrast are then derived for the display image. The effect of the auto-ranging technique was investigated at several hospitals in Japan. The average rate of films lost due to undesirable density or contrast was about 0.5%.
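
    A minimal sketch of the kind of histogram-based auto-ranging described above is given below. It is not the system's algorithm; the percentile cut-offs, display depth and function names are illustrative assumptions.

```python
# Minimal sketch of histogram-based auto-ranging: inspect pixels inside the
# collimated field, take robust min/max from their histogram, and map that
# range onto the available display range.
import numpy as np

def auto_range(preread, collimation_mask, display_max=1023, lo_pct=1.0, hi_pct=99.0):
    """Map the useful data range (inside collimation) to [0, display_max]."""
    inside = preread[collimation_mask]                 # pixels inside collimation only
    lo, hi = np.percentile(inside, [lo_pct, hi_pct])   # robust characteristic values
    scaled = (preread.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(scaled * display_max, 0, display_max).astype(np.uint16)
```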

  13. Practical techniques for pediatric computed tomography

    International Nuclear Information System (INIS)

    Fitz, C.R.; Harwood-Nash, D.C.; Kirks, D.R.; Kaufman, R.A.; Berger, P.E.; Kuhn, J.P.; Siegel, M.J.

    1983-01-01

    Dr. Donald Kirks has assembled this section on Practical Techniques for Pediatric Computed Tomography. The material is based on a presentation in the Special Interest session at the 25th Annual Meeting of the Society for Pediatric Radiology in New Orleans, Louisiana, USA in 1982. Meticulous attention to detail and technique is required to ensure an optimal CT examination. CT techniques specifically applicable to infants and children have not been disseminated in the radiology literature and in this respect it may rightly be observed that ''the child is not a small adult''. What follows is a ''cookbook'' prepared by seven participants and it is printed in Pediatric Radiology, in outline form, as a statement of individual preferences for pediatric CT techniques. This outline gives concise explanation of techniques and permits prompt dissemination of information. (orig.)

  14. Computer Assisted Audit Techniques

    Directory of Open Access Journals (Sweden)

    Eugenia Iancu

    2007-01-01

    From the modern point of view, audit takes into account especially the information systems, representing mainly the examination performed by a professional of the manner in which an activity is developed, by comparing it to the quality criteria specific to that activity. Taking this very general definition of auditing as a reference point, it must be emphasized that the best-known segment of auditing is the financial audit, which has evolved in parallel with accountancy. The present-day phase of development of the financial audit has as its main trait the internationalization of the accounting profession. Worldwide, there are multinational companies that offer services in the financial auditing, taxation and consultancy domains. Auditors, both natural persons and audit companies, take part in the work of the national and international authorities that set out norms in the accountancy and auditing domains. Computer-assisted audit techniques can be classified in several ways according to the approaches used by the auditor. The best-known techniques fall into the following categories: test data techniques, integrated testing, parallel simulation, review of program logic, programs developed upon request, generalized audit software, utility programs and expert systems.

  15. A heads-up no-limit Texas Hold'em poker player: Discretized betting models and automatically generated equilibrium-finding programs

    DEFF Research Database (Denmark)

    Gilpin, Andrew G.; Sandholm, Tuomas; Sørensen, Troels Bjerre

    2008-01-01

    choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from...... an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries....

  16. Hurricane Sandy Economic Impacts Assessment: A Computable General Equilibrium Approach and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Boero, Riccardo [Los Alamos National Laboratory; Edwards, Brian Keith [Los Alamos National Laboratory

    2017-08-07

    Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model's degree of accuracy and the assessed total damage caused by Hurricane Sandy.

  17. Entropy production in a fluid-solid system far from thermodynamic equilibrium.

    Science.gov (United States)

    Chung, Bong Jae; Ortega, Blas; Vaidya, Ashwin

    2017-11-24

    The terminal orientation of a rigid body in a moving fluid is an example of a dissipative system, out of thermodynamic equilibrium and therefore a perfect testing ground for the validity of the maximum entropy production principle (MaxEP). Thus far, dynamical equations alone have been employed in studying the equilibrium states in fluid-solid interactions, but these are far too complex and become analytically intractable when inertial effects come into play. At that stage, our only recourse is to rely on numerical techniques which can be computationally expensive. In our past work, we have shown that the MaxEP is a reliable tool to help predict orientational equilibrium states of highly symmetric bodies such as cylinders, spheroids and toroidal bodies. The MaxEP correctly helps choose the stable equilibrium in these cases when the system is slightly out of thermodynamic equilibrium. In the current paper, we expand our analysis to examine i) bodies with fewer symmetries than previously reported, for instance, a half-ellipse and ii) when the system is far from thermodynamic equilibrium. Using two-dimensional numerical studies at Reynolds numbers ranging between 0 and 14, we examine the validity of the MaxEP. Our analysis of flow past a half-ellipse shows that overall the MaxEP is a good predictor of the equilibrium states but, in the special case of the half-ellipse with aspect ratio much greater than unity, the MaxEP is replaced by the Min-MaxEP, at higher Reynolds numbers when inertial effects come into play. Experiments in sedimentation tanks and with hinged bodies in a flow tank confirm these calculations.

  18. Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.

    Science.gov (United States)

    Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo

    2015-11-01

    The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Mean pain scores were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but the difference was not statistically significant compared with the conventional technique.

  19. Application of computer technique in SMCAMS

    International Nuclear Information System (INIS)

    Lu Deming

    2001-01-01

    A series of applications of computer techniques in SMCAMS physics design and magnetic field measurement is described, including numerical calculation of the electromagnetic field, beam dynamics, calculation of beam injection and extraction, and mapping and shaping of the magnetic field

  20. Assessing the economic impact of North China’s water scarcity mitigation strategy: a multi-region, water-extended computable general equilibrium analysis

    NARCIS (Netherlands)

    Qin, Changbo; Qin, C.; Su, Zhongbo; Bressers, Johannes T.A.; Jia, Y.; Wang, H.

    2013-01-01

    This paper describes a multi-region computable general equilibrium model for analyzing the effectiveness of measures and policies for mitigating North China’s water scarcity with respect to three different groups of scenarios. The findings suggest that a reduction in groundwater use would negatively

  1. Ch. 33 Modeling: Computational Thermodynamics

    International Nuclear Information System (INIS)

    Besmann, Theodore M.

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  2. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    Science.gov (United States)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time-step of the numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
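
    For orientation, the sketch below implements a plain row-action MART update, the discrete multiplicative scheme to which the continuous-time system above is related. It is a generic MART rather than the paper's block-iterative (BI-MART) variant, and the relaxation parameter and normalization are illustrative choices.

```python
# Minimal sketch of a row-action multiplicative ART (MART) iteration.
# Generic illustration only; relaxation and normalization are assumptions.
import numpy as np

def mart(A, b, iters=20, lam=1.0):
    """A: (m, n) non-negative system matrix, b: (m,) positive measurements."""
    m, n = A.shape
    x = np.ones(n)                            # strictly positive starting image
    row_max = A.max(axis=1)
    for _ in range(iters):
        for i in range(m):
            ratio = b[i] / max(A[i] @ x, 1e-12)
            # multiplicative, non-negativity-preserving update of each pixel
            x *= ratio ** (lam * A[i] / max(row_max[i], 1e-12))
    return x

# Consistent toy problem: recover x_true from exact projections.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
print(mart(A, A @ x_true))
```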

  3. Numerical verification of equilibrium chemistry software within nuclear fuel performance codes

    International Nuclear Information System (INIS)

    Piro, M.H.; Lewis, B.J.; Thompson, W.T.; Simunovic, S.; Besmann, T.M.

    2010-01-01

    A numerical tool is in an advanced state of development to compute the equilibrium compositions of phases and their proportions in multi-component systems of importance to the nuclear industry. The resulting software is being conceived for direct integration into large multi-physics fuel performance codes, particularly for providing transport source terms, material properties, and boundary conditions in heat and mass transport modules. Consequently, any numerical errors produced in equilibrium chemistry computations will be propagated in subsequent heat and mass transport calculations, thus falsely predicting nuclear fuel behaviour. The necessity for a reliable method to numerically verify chemical equilibrium computations is emphasized by the requirement to handle the very large number of elements necessary to capture the entire fission product inventory. A simple, reliable and comprehensive numerical verification method called the Gibbs Criteria is presented which can be invoked by any equilibrium chemistry solver for quality assurance purposes. (author)

  4. Finite-size polyelectrolyte bundles at thermodynamic equilibrium

    Science.gov (United States)

    Sayar, M.; Holm, C.

    2007-01-01

    We present the results of extensive computer simulations performed on solutions of monodisperse charged rod-like polyelectrolytes in the presence of trivalent counterions. To overcome energy barriers we used a combination of parallel tempering and hybrid Monte Carlo techniques. Our results show that for small values of the electrostatic interaction the solution mostly consists of dispersed single rods. The potential of mean force between the polyelectrolyte monomers yields an attractive interaction at short distances. For a range of larger values of the Bjerrum length, we find finite-size polyelectrolyte bundles at thermodynamic equilibrium. Further increase of the Bjerrum length eventually leads to phase separation and precipitation. We discuss the origin of the observed thermodynamic stability of the finite-size aggregates.
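
    The swap step of the parallel tempering (replica exchange) scheme mentioned above can be summarized in a few lines. The sketch below shows only the standard Metropolis-like exchange criterion between two replicas; the hybrid Monte Carlo moves within each replica are not shown, and the energies and temperatures are placeholders.

```python
# Minimal sketch of the replica-exchange (parallel tempering) swap criterion
# used to overcome energy barriers. Values below are illustrative only.
import math, random

def attempt_swap(beta_i, beta_j, energy_i, energy_j):
    """Accept the exchange of configurations between two neighbouring replicas."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or random.random() < math.exp(delta)

# e.g. replicas at kT = 1.0 and kT = 1.2 with energies -105.0 and -98.0
print(attempt_swap(1.0 / 1.0, 1.0 / 1.2, -105.0, -98.0))
```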

  5. Application of computer technique in the reconstruction of Chinese ancient buildings

    Science.gov (United States)

    Li, Deren; Yang, Jie; Zhu, Yixuan

    2003-01-01

    This paper introduces computer-based assembly and simulation of ancient buildings. Pioneering research was carried out by investigators in surveying and mapping who described ancient Chinese timber buildings with computer-generated 3D frame graphs. Users can, however, only understand the structural layers and the assembly process of these buildings if the frame graphs are processed further by computer. This can be implemented with computer simulation techniques, which display the raw data on a computer screen and manage them interactively by combining technologies from computer graphics and image processing, multimedia technology, artificial intelligence, highly parallel real-time computation and human behavior science. This paper presents the implementation procedure for simulating large wooden buildings, as well as their 3D dynamic assembly, in the 3DS MAX environment. The results of the computer simulation are also shown in the paper.

  6. Energy, economy and equity interactions in a CGE [Computable General Equilibrium] model for Pakistan

    International Nuclear Information System (INIS)

    Naqvi, Farzana

    1997-01-01

    In the last three decades, Computable General Equilibrium modelling has emerged as an established field of applied economics. This book presents a CGE model developed for Pakistan, with the hope that it will lay a foundation for the application of general equilibrium modelling to policy formation in Pakistan. As the country is being driven swiftly towards an open market economy, it becomes vital to find policy measures that can foster the objectives of economic planning, such as social equity, with the minimum loss of the efficiency gains from open market resource allocations. It is not possible to build a model for practical use that can do justice to all sectors of the economy in modelling their peculiar features. The CGE model developed in this book focuses on the energy sector. Energy is considered one of the basic needs and an essential input to economic growth. Hence, energy policy has multiple criteria to meet. In this book, a case study has been carried out to analyse energy pricing policy in Pakistan using this CGE model of energy, economy and equity interactions. Hence, the book also demonstrates how researchers can model the fine details of one sector given the core structure of a CGE model. (UK)

  7. A survey of upwind methods for flows with equilibrium and non-equilibrium chemistry and thermodynamics

    Science.gov (United States)

    Grossman, B.; Garrett, J.; Cinnella, P.

    1989-01-01

    Several versions of flux-vector split and flux-difference split algorithms were compared with regard to general applicability and complexity. Test computations were performed using curve-fit equilibrium air chemistry for an M = 5 high-temperature inviscid flow over a wedge and an M = 24.5 inviscid flow over a blunt cylinder; for these cases, little difference in accuracy was found among the versions of the same flux-split algorithm. For flows with nonequilibrium chemistry, the effects of the thermodynamic model on the development of flux-vector split and flux-difference split algorithms were investigated using an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Several numerical examples are presented, including nonequilibrium air chemistry in a high-temperature shock tube and nonequilibrium hydrogen-air chemistry in a supersonic diffuser.

  8. Evolution and non-equilibrium physics

    DEFF Research Database (Denmark)

    Becker, Nikolaj; Sibani, Paolo

    2014-01-01

    We argue that the stochastic dynamics of interacting agents which replicate, mutate and die constitutes a non-equilibrium physical process akin to aging in complex materials. Specifically, our study uses extensive computer simulations of the Tangled Nature Model (TNM) of biological evolution...

  9. Equilibrium reconstruction in stellarators: V3FIT

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, J.D.; Knowlton, S.F. [Physics Department, Auburn University, Auburn, AL (United States); Hirshman, S.P.; Lazarus, E.A. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Lao, L.L. [General Atomics, San Diego, CA (United States)

    2003-07-01

    The first section describes a general response function formalism for computing stellarator magnetic diagnostic signals, which is the first step in developing a reconstruction capability. The approach parallels that used in the EFIT two-dimensional (2-D) equilibrium reconstruction code. The second section describes the two codes we have written, V3RFUN and V3POST. V3RFUN computes the response functions for a specified magnetic diagnostic coil, and V3POST uses the response functions calculated by V3RFUN, along with the plasma current information supplied by the equilibrium code VMEC, to compute the expected magnetic diagnostic signals. These two codes are currently being used to design magnetic diagnostics for the NCSX stellarator (at PPPL) and the CTH toroidal hybrid stellarator (at Auburn University). The last section of the paper describes plans for the V3FIT code. (orig.)

  10. The Nash Equilibrium Revisited: Chaos and Complexity Hidden in Simplicity

    Science.gov (United States)

    Fellman, Philip V.

    The Nash Equilibrium is a much discussed, deceptively complex method for the analysis of non-cooperative games (McLennan and Berg, 2005). If one reads many of the commonly available definitions, the description of the Nash Equilibrium appears deceptively simple. Modern research has discovered a number of new and important complex properties of the Nash Equilibrium, some of which remain contemporary conundrums of extraordinary difficulty and complexity (Quint and Shubik, 1997). Among the recently discovered features which the Nash Equilibrium exhibits under various conditions are heteroclinic Hamiltonian dynamics, a very complex asymptotic structure in the context of two-player bi-matrix games, and a number of computationally complex or computationally intractable features in other settings (Sato, Akiyama and Farmer, 2002). This paper reviews those findings and then suggests how they may inform various market prediction strategies.

  11. THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Mary Violeta Bar

    2014-01-01

    Computational intelligence techniques are used for problems which cannot be solved by traditional techniques, when there are insufficient data to develop a model of the problem or when the data contain errors. Computational intelligence, as Bezdek (1992) called it, aims at modelling biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is solving problems that are too c...

  12. Equivalence of Equilibrium Propagation and Recurrent Backpropagation

    OpenAIRE

    Scellier, Benjamin; Bengio, Yoshua

    2017-01-01

    Recurrent Backpropagation and Equilibrium Propagation are algorithms for fixed point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration where the prediction is made. In the second phase, Recurrent Backpropagation computes error derivatives whereas Equilibrium Propagation relaxes to another nearby fixed point. In this work we establish a close connection between these two algorithms....

  13. Controlling competing electronic orders via non-equilibrium acoustic phonons

    Science.gov (United States)

    Schuett, Michael; Orth, Peter; Levchenko, Alex; Fernandes, Rafael

    The interplay between multiple electronic orders is a hallmark of strongly correlated systems displaying unconventional superconductivity. While doping, pressure, and magnetic field are the standard knobs employed to assess these different phases, ultrafast pump-and-probe techniques opened a new window to probe these systems. Recent examples include the ultrafast excitation of coherent optical phonons coupling to electronic states in cuprates and iron pnictides. In this work, we demonstrate theoretically that non-equilibrium acoustic phonons provide a promising framework to manipulate competing electronic phases and favor unconventional superconductivity over other states. In particular, we show that electrons coupled to out-of-equilibrium anisotropic acoustic phonons enter a steady state in which the effective electronic temperature varies around the Fermi surface. Such a momentum-dependent temperature can then be used to selectively heat electronic states that contribute primarily to density-wave instabilities, reducing their competition with superconductivity. We illustrate this phenomenon by computing the microscopic steady-state phase diagram of the iron pnictides, showing that superconductivity is enhanced with respect to the competing antiferromagnetic phase.

  14. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    Science.gov (United States)

    Garcia Bertrand, Raquel

    decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid limitations provoked by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which allows computing binary variables through the master problem and continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.

  15. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Becks, Karl-Heinz; Perret-Gallix, Denis

    1994-01-01

    New techniques were highlighted by the ''Third International Workshop on Software Engineering, Artificial Intelligence and Expert Systems for High Energy and Nuclear Physics'' in Oberammergau, Bavaria, Germany, from October 4 to 8. It was the third workshop in the series; the first was held in Lyon in 1990 and the second at France-Telecom site near La Londe les Maures in 1992. This series of workshops covers a broad spectrum of problems. New, highly sophisticated experiments demand new techniques in computing, in hardware as well as in software. Software Engineering Techniques could in principle satisfy the needs for forthcoming accelerator experiments. The growing complexity of detector systems demands new techniques in experimental error diagnosis and repair suggestions; Expert Systems seem to offer a way of assisting the experimental crew during data-taking

  16. Computational techniques used in the development of coprocessing flowsheets

    International Nuclear Information System (INIS)

    Groenier, W.S.; Mitchell, A.D.; Jubin, R.T.

    1979-01-01

    The computer program SEPHIS, developed to aid in determining optimum solvent extraction conditions for the reprocessing of nuclear power reactor fuels by the Purex method, is described. The program employs a combination of approximate mathematical equilibrium expressions and a transient, stagewise-process calculational method to allow stage and product-stream concentrations to be predicted with accuracy and reliability. The possible applications to inventory control for nuclear material safeguards, nuclear criticality analysis, and process analysis and control are of special interest. The method is also applicable to other countercurrent liquid-liquid solvent extraction processes having known chemical kinetics, that may involve multiple solutes and are performed in conventional contacting equipment

  17. MHD equilibrium identification on ASDEX-Upgrade

    International Nuclear Information System (INIS)

    McCarthy, P.J.; Schneider, W.; Lakner, K.; Zehrfeld, H.P.; Buechl, K.; Gernhardt, J.; Gruber, O.; Kallenbach, A.; Lieder, G.; Wunderlich, R.

    1992-01-01

    A central activity accompanying the ASDEX-Upgrade experiment is the analysis of MHD equilibria. There are two different numerical methods available, both using magnetic measurements which reflect equilibrium states of the plasma. The first method proceeds via a function parameterization (FP) technique, which uses in-vessel magnetic measurements to calculate up to 66 equilibrium parameters. The second method applies an interpretative equilibrium code (DIVA) for a best fit to a different set of magnetic measurements. Cross-checks with the measured particle influxes from the inner heat shield and the divertor region and with visible camera images of the scrape-off layer are made. (author) 3 refs., 3 figs

  18. Non-equilibrium synthesis of alloys using lasers

    International Nuclear Information System (INIS)

    Mazumder, J.; Choi, J.; Ribaudo, C.; Wang, A.; Kar, A.

    1993-01-01

    This paper discusses the microstructure and properties of alloys produced by laser alloying and cladding techniques for various applications. These include Fe-Cr-W-C alloys for wear resistance, Ni-Cr-Al-Hf alloys for high-temperature oxidation resistance and Mg-Al alloys for corrosion resistance. A mathematical model is also presented for the prediction of the composition of the metastable phases produced by laser synthesis. The microstructure was characterized using various electron optical techniques such as Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), Auger Electron Spectroscopy (AES) and Energy Dispersive X-Ray Analysis (EDAX). Wear properties were characterized by a line-contact block-on-cylinder method. High-temperature oxidation properties were characterized using a Perkin-Elmer Thermo-Gravimetric Analyzer (TGA), with dynamic weight changes monitored at 1,200 C. Corrosion properties were evaluated by a potentio-dynamic method using a computer-controlled potentiostat manufactured by EG&G. A non-equilibrium M6C-type carbide was found to be responsible for the improved wear resistance. Increased solid solubility of Hf was found to be a major factor in improving the high-temperature oxidation resistance of the Ni-Cr-Al-Hf alloys. Microcrystalline phases were observed in the Mg-Al alloys. The rapid solidification was modeled using heat transfer in the liquid pool and the solid substrate and mass transfer in the liquid pool. A non-equilibrium partition coefficient was introduced through the boundary condition at the liquid-solid interface. A good correlation was observed between the predictions and the experimental data. 54 refs

  19. Non-equilibrium spectroscopy of high-Tc superconductors

    International Nuclear Information System (INIS)

    Krasnov, V M

    2009-01-01

    In superconductors, recombination of two non-equilibrium quasiparticles into a Cooper pair results in emission of the excitation that mediates superconductivity. This is the basis of the proposed new type of 'non-equilibrium' spectroscopy of high-Tc superconductors, which may open a possibility for direct and unambiguous determination of the coupling mechanism of high-Tc superconductivity. In the case of low-Tc superconductors, the feasibility of such non-equilibrium spectroscopy was demonstrated in classical phonon generation-detection experiments almost four decades ago. Recently it was demonstrated that a similar technique can be used for high-Tc superconductors, using natural intrinsic Josephson junctions both for injection of non-equilibrium quasiparticles and for detection of the non-equilibrium radiation. Here I theoretically analyze non-equilibrium phenomena in intrinsic Josephson junctions. It is shown that an extreme non-equilibrium state can be achieved at a bias equal to an integer multiple of the gap voltage, which can lead to laser-like emission from the stack. I argue that identification of the type of boson constituting this non-equilibrium radiation would unambiguously reveal the coupling mechanism of high-Tc superconductors.

  20. A comparative analysis of soft computing techniques for gene prediction.

    Science.gov (United States)

    Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand

    2013-07-01

    The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important, and it is currently the focus of many research efforts. Besides its scientific interest to the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described. These are followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally, some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    Science.gov (United States)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10^6 atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.

  2. Comments on equilibrium, transient equilibrium, and secular equilibrium in serial radioactive decay

    International Nuclear Information System (INIS)

    Prince, J.R.

    1979-01-01

    Equations describing serial radioactive decay are reviewed along with published descriptions or transient and secular equilibrium. It is shown that terms describing equilibrium are not used in the same way by various authors. Specific definitions are proposed; they suggest that secular equilibrium is a subset of transient equilibrium

  3. New coding technique for computer generated holograms.

    Science.gov (United States)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  4. Techniques involving extreme environment, nondestructive techniques, computer methods in metals research, and data analysis

    International Nuclear Information System (INIS)

    Bunshah, R.F.

    1976-01-01

    A number of different techniques which range over several different aspects of materials research are covered in this volume. They are concerned with property evaluation at 4 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials

  5. [Development of computer aided forming techniques in manufacturing scaffolds for bone tissue engineering].

    Science.gov (United States)

    Wei, Xuelei; Dong, Fuhui

    2011-12-01

    To review recent advances in the research and application of computer-aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer-aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer-aided forming techniques for bone scaffold construction using various scaffold materials, based on computer-aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design; reverse design can fully simulate normal bone tissue and can be very useful for CAD. RP techniques include fused deposition modeling, three-dimensional printing, selective laser sintering, three-dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer-aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.

  6. Equilibrium Arrival Times to Queues

    DEFF Research Database (Denmark)

    Breinbjerg, Jesper; Østerdal, Lars Peter

    We consider a non-cooperative queueing environment where a finite number of customers independently choose when to arrive at a queueing system that opens at a given point in time and serves customers on a last-come first-serve preemptive-resume (LCFS-PR) basis. Each customer has a service time...... requirement which is identically and independently distributed according to some general probability distribution, and they want to complete service as early as possible while minimizing the time spent in the queue. In this setting, we establish the existence of an arrival time strategy that constitutes...... a symmetric (mixed) Nash equilibrium, and show that there is at most one symmetric equilibrium. We provide a numerical method to compute this equilibrium and demonstrate by a numerical example that the social effciency can be lower than the effciency induced by a similar queueing system that serves customers...

  7. [Cardiac computed tomography: new applications of an evolving technique].

    Science.gov (United States)

    Martín, María; Corros, Cecilia; Calvo, Juan; Mesa, Alicia; García-Campos, Ana; Rodríguez, María Luisa; Barreiro, Manuel; Rozado, José; Colunga, Santiago; de la Hera, Jesús M; Morís, César; Luyando, Luis H

    2015-01-01

    During the last years we have witnessed an increasing development of imaging techniques applied in Cardiology. Among them, cardiac computed tomography is an emerging and evolving technique. With the current possibility of very low radiation studies, its applications have expanded and now go beyond coronary angiography. In the present article we review the technical developments of cardiac computed tomography and its new applications. Copyright © 2014 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.

  8. The Lewis Chemical Equilibrium Program with parametric study capability

    Science.gov (United States)

    Sevigny, R.

    1981-01-01

    The program was developed to determine chemical equilibrium in complex systems. Using a free energy minimization technique, the program permits calculations such as: chemical equilibrium for assigned thermodynamic states; theoretical rocket performance for both equilibrium and frozen compositions during expansion; incident and reflected shock properties; and Chapman-Jouget detonation properties. It is shown that the same program can handle solid coal in an entrained flow coal gasification problem.
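
    The free energy minimization idea underlying the program can be illustrated with a generic constrained-optimization sketch. This is not the Lewis code or its database: the species set, standard chemical potentials and ideal-gas assumptions below are placeholders chosen only to show the element-balance-constrained minimization of G.

```python
# Minimal sketch of chemical equilibrium by constrained Gibbs free energy
# minimization: minimize G(n) subject to element balance. Toy data only;
# mu0 values are assumed placeholders, not a real thermodynamic database.
import numpy as np
from scipy.optimize import minimize

R = 8.314462618          # J/(mol K)
T, P = 3000.0, 1.0       # temperature (K) and pressure (bar), illustrative

species = ["H2", "O2", "H2O", "OH"]           # toy H/O system
mu0 = np.array([0.0, 0.0, -120e3, 20e3])      # assumed standard potentials, J/mol
elements = np.array([[2, 0, 2, 1],            # H atoms per species
                     [0, 2, 1, 1]])           # O atoms per species
b = np.array([4.0, 2.0])                      # total moles of H and O atoms

def gibbs(n):
    n = np.maximum(n, 1e-12)
    ntot = n.sum()
    # ideal-gas mixture: G = sum n_i (mu0_i + RT ln(P y_i))
    return np.sum(n * (mu0 + R * T * np.log(P * n / ntot)))

cons = {"type": "eq", "fun": lambda n: elements @ n - b}   # element balance
res = minimize(gibbs, x0=np.full(4, 0.5), constraints=[cons],
               bounds=[(1e-12, None)] * 4, method="SLSQP")
print(dict(zip(species, res.x)))
```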

  9. Experimental investigation of liquid chromatography columns by means of computed tomography

    DEFF Research Database (Denmark)

    Astrath, D.U.; Lottes, F.; Vu, Duc Thuong

    2007-01-01

    The efficiency of packed chromatographic columns was investigated experimentally by means of computed tomography (CT) techniques. The measurements were carried out by monitoring tracer fronts in situ inside the chromatographic columns. The experimental results were fitted using the equilibrium di...

  10. A simple and sensitive separation technique of 99Mo and 99mTc from their equilibrium mixture

    International Nuclear Information System (INIS)

    Swadesh Mandal; Ajoy Mandal

    2014-01-01

    The present work describes a simple and inexpensive method for separating 99Mo from the equilibrium mixture. The liquid-liquid extraction technique has been employed to separate 99Mo and 99mTc using triisooctylamine (TIOA). The 99Mo and 99mTc were quantitatively separated in 2 M TIOA with triple-distilled water; 99mTc was back-extracted from the TIOA organic phase to the aqueous phase with 0.1 M DTPA. The species information, or indirect speciation, of molybdenum was also established from the extraction profile of the molybdenum. (author)

  11. Parallel application of plasma equilibrium fitting based on inhomogeneous platforms

    International Nuclear Information System (INIS)

    Liao Min; Zhang Jinhua; Chen Liaoyuan; Li Yongge; Pan Wei; Pan Li

    2008-01-01

    An online analysis and display platform for EFIT, which is based on the equilibrium-fitting method, is introduced in this paper. The application realizes large data transfers between inhomogeneous platforms through a communication mechanism designed around sockets. The equilibrium-fitting reconstruction is completed in approximately one minute by using a finite state machine to describe the management node and the several node computers of the cluster system that carry out the parallel computation; this satisfies the requirement of online display during the discharge interval. An effective communication model between inhomogeneous platforms is provided, which can transport the computing results from the Linux platform to the Windows platform for online analysis and display. (authors)

  12. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    Science.gov (United States)

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability. These reliability models are restricted to particular types of methodologies and a limited number of parameters, although a number of techniques and methodologies may be used for reliability prediction. There is a need to focus on the parameters considered when estimating reliability, because the reliability of a system may increase or decrease depending on the selection of the parameters used; thus the factors that heavily affect the reliability of the system must be identified. At present, reusability is widely used in various areas of research. Reusability is the basis of Component-Based Systems (CBS). Cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to medicine-related problems: clinical medicine uses fuzzy logic and neural network methodologies extensively, while basic medical science most frequently uses combined neural network and genetic algorithm approaches. There is considerable interest among medical scientists in using various soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality with savings in time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents working of soft computing

  13. Soil-water characteristics of Gaomiaozi bentonite by vapour equilibrium technique

    Directory of Open Access Journals (Sweden)

    Wenjing Sun

    2014-02-01

    Soil-water characteristics of Gaomiaozi (GMZ) Ca-bentonite at high suctions (3–287 MPa) are measured by the vapour equilibrium technique. The soil-water retention curve (SWRC) of samples with the same initial compaction state is obtained in drying and wetting processes. At high suctions, hysteresis is not obvious in the relationship between water content and suction, while the opposite holds for the relationship between degree of saturation and suction. Suction variation can change the water retention behaviour and the void ratio. Moreover, changes of void ratio can bring about changes in the degree of saturation; therefore, the total change in degree of saturation includes changes caused by suction and changes caused by void ratio. In the space of degree of saturation and suction, the SWRC at constant void ratio shifts towards higher suctions with decreasing void ratio. However, the relationship between water content and suction is less affected by changes of void ratio. The degree of saturation decreases approximately linearly with increasing void ratio at a constant suction. Moreover, the slope of this line decreases with increasing suction, and the two show an approximately linear relationship on a semi-logarithmic scale. From this linear relationship, the variation of degree of saturation caused by the change in void ratio can be obtained. Correspondingly, the SWRC at a constant void ratio can be determined from SWRCs at different void ratios.

  14. Outline of fast analyzer for MHD equilibrium 'FAME'

    International Nuclear Information System (INIS)

    Sakata, Shinya; Haginoya, Hirofumi; Tsuruoka, Takuya; Aoyagi, Tetsuo; Saito, Naoyuki; Harada, Hiroo; Tani, Keiji; Watanabe, Hideto.

    1994-03-01

    The FAME (Fast Analyzer for Magnetohydrodynamic (MHD) Equilibrium) system has been developed in order to provide more than 100 MHD equilibria in time series, which are sufficient for non-stationary analysis of JT-60 experimental data, within the shot interval of about 20 minutes. FAME is an MIMD-type small-scale parallel computer with 20 microprocessors connected by a multi-stage switching system. The maximum theoretical speed is 250 MFLOPS. For the software system of FAME, the MHD equilibrium analysis code SELENE and its input data production code FBI have been tuned with parallel processing taken into consideration. Consequently, the computational performance of the FAME system is more than 7 times that of the existing general-purpose computer FACOM M780-10s. This report summarizes the outline of the FAME system, including hardware, software and peripheral equipment. (author)

  15. The Computation of Nash Equilibrium in Fashion Games via Semi-Tensor Product Method

    Institute of Scientific and Technical Information of China (English)

    GUO Peilian; WANG Yuzhen

    2016-01-01

    Using the semi-tensor product of matrices, this paper investigates the computation of pure-strategy Nash equilibria (PNE) for fashion games and presents several new results. First, a formal fashion game model on a social network is given. Second, the utility function of each player is converted into an algebraic form via the semi-tensor product of matrices, based on which the two-strategy fashion game is studied and two methods are obtained to verify the existence of a PNE. Third, the multi-strategy fashion game model is investigated and an algorithm is established to find all the PNEs in the general case. Finally, two kinds of optimization problems, namely the social welfare and normalized satisfaction degree optimization problems, are investigated and two useful results are given. The study of several illustrative examples shows that the new results obtained in this paper are effective.
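
    For a small finite game, pure-strategy Nash equilibria can also be found by plain exhaustive search, which is useful as a cross-check of any algebraic method. The sketch below is that brute-force check, not the semi-tensor product algorithm of the paper; the coordination-style payoff table is an illustrative assumption.

```python
# Brute-force enumeration of pure-strategy Nash equilibria in a finite game.
# Generic cross-check only; the payoff table below is illustrative.
from itertools import product

def pure_nash_equilibria(payoffs, strategy_counts):
    """payoffs[profile][p] = payoff of player p at the strategy profile."""
    n_players = len(strategy_counts)
    equilibria = []
    for profile in product(*[range(s) for s in strategy_counts]):
        is_ne = True
        for p in range(n_players):
            current = payoffs[profile][p]
            for dev in range(strategy_counts[p]):
                alt = profile[:p] + (dev,) + profile[p + 1:]
                if payoffs[alt][p] > current:   # profitable unilateral deviation
                    is_ne = False
                    break
            if not is_ne:
                break
        if is_ne:
            equilibria.append(profile)
    return equilibria

# Two-player, two-strategy coordination ("fashion") example.
payoffs = {(0, 0): (2, 2), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (1, 1)}
print(pure_nash_equilibria(payoffs, [2, 2]))   # -> [(0, 0), (1, 1)]
```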

  16. Tourism Contribution to Poverty Alleviation in Kenya: A Dynamic Computable General Equilibrium Analysis.

    Science.gov (United States)

    Njoya, Eric Tchouamou; Seetaram, Neelu

    2018-04-01

    The aim of this article is to investigate the claim that tourism development can be the engine for poverty reduction in Kenya using a dynamic, microsimulation computable general equilibrium model. The article improves on the common practice in the literature by using the more comprehensive Foster-Greer-Thorbecke (FGT) index to measure poverty instead of headcount ratios only. Simulation results from previous studies confirm that expansion of the tourism industry will benefit different sectors unevenly and will only marginally improve the poverty headcount. This is mainly due to the contraction of the agricultural sector caused by the appreciation of the real exchange rate. This article demonstrates that the effect on poverty gap and poverty severity is, nevertheless, significant for both rural and urban areas, with higher impact in the urban areas. Tourism expansion enables poorer households to move closer to the poverty line. It is concluded that the tourism industry is pro-poor.

  17. Equilibrium reconstruction in the TCA/Br tokamak

    International Nuclear Information System (INIS)

    Sa, Wanderley Pires de

    1996-01-01

    The accurate and rapid determination of the magnetohydrodynamic (MHD) equilibrium configuration in tokamaks is a key subject for the magnetic confinement of the plasma. With knowledge of the characteristic plasma MHD equilibrium parameters, it is possible to control the plasma position during its formation using feedback techniques. An on-line analysis between successive discharges is also necessary to program external parameters for the subsequent discharges. In this work, the reconstruction of the MHD equilibrium configuration of the TCA/BR tokamak from external magnetic measurements is investigated, using a method able to rapidly determine the main parameters of the discharge. The thesis has two parts. First, the development of an equilibrium code that solves the Grad-Shafranov equation for the TCA/BR tokamak geometry is presented. Second, the MHD equilibrium reconstruction process from external magnetic field and flux measurements using the Function Parametrization (FP) method is presented. This method is based on the statistical analysis of a database of simulated equilibrium configurations, with the goal of obtaining a simple relationship between the parameters that characterize the equilibrium and the measurements. The results from FP are compared with conventional methods. (author)
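
    The idea behind function parameterization can be illustrated with a simple regression fitted on a database of simulated equilibria and then evaluated on new measurements. The sketch below uses a synthetic linear database and ordinary least squares; the sizes and the linear form are assumptions, not the TCA/BR implementation.

```python
# Minimal sketch of the function parameterization (FP) idea: fit a regression
# on a database of simulated equilibria that maps magnetic measurements to
# equilibrium parameters, then evaluate it cheaply on new measurements.
# The database here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_meas, n_params = 500, 24, 4       # assumed sizes, not TCA/BR values

# Pretend database: equilibrium parameters and the measurements they produce.
params_db = rng.normal(size=(n_cases, n_params))
true_map = rng.normal(size=(n_params, n_meas))
meas_db = params_db @ true_map + 0.01 * rng.normal(size=(n_cases, n_meas))

# FP step: linear regression (could be polynomial) from measurements to parameters.
X = np.hstack([meas_db, np.ones((n_cases, 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(X, params_db, rcond=None)

def reconstruct(measurements):
    """Fast between-shot evaluation of the fitted mapping."""
    return np.append(measurements, 1.0) @ coef

print(reconstruct(meas_db[0]), params_db[0])
```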

  18. Sampling the Denatured State of Polypeptides in Water, Urea, and Guanidine Chloride to Strict Equilibrium Conditions with the Help of Massively Parallel Computers.

    Science.gov (United States)

    Meloni, Roberto; Camilloni, Carlo; Tiana, Guido

    2014-02-11

    The denatured state of polypeptides and proteins, stabilized by chemical denaturants like urea and guanidine chloride, displays residual secondary structure when studied by nuclear-magnetic-resonance spectroscopy. However, these experimental techniques are weakly sensitive, and thus molecular-dynamics simulations can be useful to complement the experimental findings. To sample the denatured state, we made use of massively-parallel computers and of a variant of the replica exchange algorithm, in which the different branches, connected with unbiased replicas, favor the formation and disruption of local secondary structure. The algorithm is applied to the second hairpin of GB1 in water, in urea, and in guanidine chloride. We show with the help of different criteria that the simulations converge to equilibrium. It results that urea and guanidine chloride, besides inducing some polyproline-II structure, have different effect on the hairpin. Urea disrupts completely the native region and stabilizes a state which resembles a random coil, while guanidine chloride has a milder effect.

  19. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)

    2014-04-01

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC further developed the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  20. Variability of radon and thoron equilibrium factors in indoor environment of Garhwal Himalaya

    International Nuclear Information System (INIS)

    Prasad, Mukesh; Rawat, Mukesh; Dangwal, Anoop; Kandari, Tushar; Gusain, G.S.; Mishra, Rosaline; Ramola, R.C.

    2016-01-01

    The measurements of radon, thoron and their progeny concentrations have been carried out in dwellings of the Uttarkashi and Tehri districts of Garhwal Himalaya, India, using LR-115 detector based pin-hole dosimeters and the DRPS/DTPS techniques. The equilibrium factors for radon, thoron and their progeny were calculated from the values measured with these techniques. The average values of the equilibrium factor between radon and its progeny have been found to be 0.44, 0.39, 0.39 and 0.28 for the rainy, autumn, winter and summer seasons, respectively. For thoron and its progeny, the average values of the equilibrium factor have been found to be 0.04, 0.04, 0.04 and 0.03 for the rainy, autumn, winter and summer seasons, respectively. The equilibrium factor between radon and its progeny has been found to depend on seasonal changes. However, the equilibrium factor for thoron and its progeny has been found to be the same for the rainy, autumn and winter seasons but slightly different for the summer season. The annual average equilibrium factors for radon and thoron have been found to vary from 0.23 to 0.80 with an average of 0.42 and from 0.01 to 0.29 with an average of 0.07, respectively. A detailed discussion of the measurement techniques and an explanation of the results obtained are given in the paper. - Highlights: • Equilibrium factors for indoor radon, thoron and their progeny were measured. • Recently developed passive detector techniques were used for the measurements. • The values of the equilibrium factors are comparable with the world average values. • The equilibrium factor should be measured separately for individual dwellings. • Separate values of equilibrium factors are useful for estimating the actual radiation dose.
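
    The equilibrium factor itself is formed from the measured quantities as F = EEC / C, the equilibrium-equivalent progeny concentration divided by the gas concentration. The sketch below just encodes that ratio; the sample concentrations are illustrative, not values from the survey.

```python
# Minimal sketch of how the equilibrium factor is formed from the measured
# quantities: F = EEC / C. The sample values below are illustrative only.
def equilibrium_factor(gas_bq_m3, eec_bq_m3):
    """F = equilibrium-equivalent progeny concentration / gas concentration."""
    return eec_bq_m3 / gas_bq_m3

# e.g. radon at 60 Bq/m3 with a progeny EEC of 25 Bq/m3
print(round(equilibrium_factor(60.0, 25.0), 2))   # -> 0.42
```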

  1. Evolutionary computation techniques a comparative perspective

    CERN Document Server

    Cuevas, Erik; Oliva, Diego

    2017-01-01

    This book compares the performance of various evolutionary computation (EC) techniques when they are faced with complex optimization problems extracted from different engineering domains. Particularly focusing on recently developed algorithms, it is designed so that each chapter can be read independently. Several comparisons among EC techniques have been reported in the literature; however, they all suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. In each chapter, a complex engineering optimization problem is posed, and then a particular EC technique is presented as the best choice, according to its search characteristics. Lastly, a set of experiments is conducted in order to compare its performance to other popular EC methods.

  2. Computational techniques in tribology and material science at the atomic level

    Science.gov (United States)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical, and their limitations, are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, applications of the methods to the calculation of adhesion and friction are presented.

  3. Comparison of radiographic technique by computer simulation

    International Nuclear Information System (INIS)

    Brochi, M.A.C.; Ghilardi Neto, T.

    1989-01-01

    A computational algorithm for comparing radiographic techniques (kVp, mAs and filters) is developed, based on fixing the parameters that define the image, such as optical density and contrast. Prior to the experiment, the results were applied to a chest radiograph. (author) [pt

  4. Experimental determination of thermodynamic equilibrium in biocatalytic transamination.

    Science.gov (United States)

    Tufvesson, Pär; Jensen, Jacob S; Kroutil, Wolfgang; Woodley, John M

    2012-08-01

    The equilibrium constant is a critical parameter for making rational design choices in biocatalytic transamination for the synthesis of chiral amines. However, very few reports are available in the scientific literature determining the equilibrium constant (K) for the transamination of ketones. Various methods for determining (or estimating) the equilibrium constant have previously been suggested, both experimental and computational (based on group contribution methods). However, none of these were found suitable for determining the equilibrium constant for the transamination of ketones. Therefore, in this communication we suggest a simple experimental methodology which we hope will stimulate more accurate determination of thermodynamic equilibria when reporting the results of transaminase-catalyzed reactions, in order to increase understanding of the influence of substrate and product molecular structure on reaction thermodynamics. Copyright © 2012 Wiley Periodicals, Inc.

  5. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.

  6. Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces.

    Science.gov (United States)

    Spezia, Riccardo; Martínez-Nuñez, Emilio; Vazquez, Saulo; Hase, William L

    2017-04-28

    In this Introduction, we show the basic problems of non-statistical and non-equilibrium phenomena related to the papers collected in this themed issue. Over the past few years, significant advances in both computing power and development of theories have allowed the study of larger systems, increasing the time length of simulations and improving the quality of potential energy surfaces. In particular, the possibility of using quantum chemistry to calculate energies and forces 'on the fly' has paved the way to directly study chemical reactions. This has provided a valuable tool to explore molecular mechanisms at given temperatures and energies and to see whether these reactive trajectories follow statistical laws and/or minimum energy pathways. This themed issue collects different aspects of the problem and gives an overview of recent works and developments in different contexts, from the gas phase to the condensed phase to excited states. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).

  7. Explicit integration of extremely stiff reaction networks: partial equilibrium methods

    International Nuclear Information System (INIS)

    Guidry, M W; Hix, W R; Billings, J J

    2013-01-01

    In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
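    To illustrate the flavor of the explicit asymptotic idea discussed above (this is a toy sketch under simple assumptions, not the authors' implementation), consider a single species governed by dy/dt = F - k·y. When dt·k is large, a first-order asymptotic update replaces plain forward Euler and remains stable at step sizes far above the explicit stability limit.

```python
import numpy as np

def integrate_asymptotic(F, k, y0, t_end, dt, stiff_threshold=1.0):
    """Toy explicit integrator for dy/dt = F(t) - k(t) * y.

    Uses forward Euler when dt*k is small and a first-order asymptotic
    update y_new = (y + dt*F) / (1 + dt*k) when the destruction term is
    stiff (dt*k above the threshold).  This mirrors the spirit, not the
    implementation, of the asymptotic schemes discussed above.
    """
    t, y = 0.0, y0
    ts, ys = [t], [y]
    while t < t_end:
        f, kk = F(t), k(t)
        if dt * kk < stiff_threshold:          # non-stiff: plain explicit Euler
            y = y + dt * (f - kk * y)
        else:                                  # stiff: algebraically stabilized update
            y = (y + dt * f) / (1.0 + dt * kk)
        t += dt
        ts.append(t); ys.append(y)
    return np.array(ts), np.array(ys)

# Hypothetical test problem: constant source, very fast destruction (k = 1e6),
# so the equilibrium value is F/k = 1e-3.
ts, ys = integrate_asymptotic(F=lambda t: 1.0e3, k=lambda t: 1.0e6,
                              y0=0.0, t_end=1.0e-3, dt=1.0e-5)
print(ys[-1])   # approaches ~1e-3 without requiring dt << 1/k
```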

  8. Computer-Assisted Technique for Surgical Tooth Extraction

    Directory of Open Access Journals (Sweden)

    Hosamuddin Hamza

    2016-01-01

    Introduction. Surgical tooth extraction is a common procedure in dentistry. However, numerous extraction cases show a high level of difficulty in practice. This difficulty is usually related to inadequate visualization, improper instrumentation, or other factors related to the targeted tooth (e.g., ankylosis or the presence of a bony undercut). Methods. In this work, the author presents a new technique for surgical tooth extraction based on 3D imaging, computer planning, and a new concept of computer-assisted manufacturing. Results. The outcome of this work is a surgical guide made by 3D printing of plastics and CNC machining of metals (hybrid outcome). In addition, the conventional surgical cutting tools (surgical burs) are modified with a number of stoppers adjusted to avoid any excessive drilling that could harm bone or other vital structures. Conclusion. The present outcome could provide a minimally invasive technique to overcome the routine complications facing dental surgeons in surgical extraction procedures.

  9. Social incidence and economic costs of carbon limits; A computable general equilibrium analysis for Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Stephan, G.; Van Nieuwkoop, R.; Wiedmer, T. (Institute for Applied Microeconomics, Univ. of Bern (Switzerland))

    1992-01-01

    Both the distributional and allocational effects of limiting carbon dioxide emissions in a small and open economy are discussed. The analysis starts from the assumption that Switzerland attempts to stabilize its greenhouse gas emissions over the next 25 years, and evaluates the costs and benefits of the respective reduction programme. From a methodological viewpoint, it is illustrated how a computable general equilibrium approach can be adopted for identifying the economic effects of cutting greenhouse gas emissions at the national level. From a political-economy point of view, the social incidence of a greenhouse policy is considered. It is shown in particular that public acceptance can be increased and the economic costs of greenhouse policies can be reduced if carbon taxes are accompanied by revenue redistribution. 8 tabs., 1 app., 17 refs.

  10. Computational Intelligence Techniques for New Product Design

    CERN Document Server

    Chan, Kit Yan; Dillon, Tharam S

    2012-01-01

    Applying computational intelligence for product design is a fast-growing and promising research area in computer sciences and industrial engineering. However, there is currently a lack of books that discuss this research area. This book discusses a wide range of computational intelligence techniques for implementation in product design. It covers common issues in product design, from identification of customer requirements, determination of the importance of customer requirements, determination of optimal design attributes, relating design attributes and customer satisfaction, integration of marketing aspects into product design, and affective product design, to quality control of new products. Approaches for refinement of computational intelligence are discussed, in order to address different issues in product design. Case studies of product design in terms of development of real-world new products are included, in order to illustrate the design procedures, as well as the effectiveness of the com...

  11. Computer technique for evaluating collimator performance

    International Nuclear Information System (INIS)

    Rollo, F.D.

    1975-01-01

    A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program involves the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated. The working equations can be applied to any plane of interest. The resulting LSF is passed to subroutine programs which compute the corresponding modulation transfer function and contrast efficiency functions. The latter function is then combined with appropriate geometrical efficiency data to determine the performance index function. The overall computer program allows one to predict, from the physical parameters of the collimator alone, how well the collimator will reproduce various-sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented.
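    The LSF-to-MTF step mentioned above is essentially a normalized Fourier transform of the line spread function. A minimal numerical sketch follows; it assumes a Gaussian LSF with a made-up width rather than one derived from actual collimator geometry.

```python
import numpy as np

# Assumed Gaussian line spread function (mm); a real program would derive the
# LSF from the collimator hole geometry at the plane of interest.
dx = 0.1                                   # sample spacing (mm)
x = np.arange(-50.0, 50.0, dx)
fwhm = 8.0                                 # assumed system resolution (mm)
sigma = fwhm / 2.3548
lsf = np.exp(-0.5 * (x / sigma) ** 2)

# Modulation transfer function: magnitude of the Fourier transform of the LSF,
# normalized to unity at zero spatial frequency.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, d=dx)      # cycles/mm

for f, m in zip(freqs[:6], mtf[:6]):
    print(f"{f:.3f} cycles/mm  MTF = {m:.3f}")
```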

  12. Tourism Contribution to Poverty Alleviation in Kenya: A Dynamic Computable General Equilibrium Analysis

    Science.gov (United States)

    Njoya, Eric Tchouamou; Seetaram, Neelu

    2017-01-01

    The aim of this article is to investigate the claim that tourism development can be the engine for poverty reduction in Kenya, using a dynamic, microsimulation computable general equilibrium model. The article improves on the common practice in the literature by using the more comprehensive Foster-Greer-Thorbecke (FGT) index to measure poverty instead of headcount ratios only. Simulation results from previous studies confirm that expansion of the tourism industry will benefit different sectors unevenly and will only marginally improve the poverty headcount. This is mainly due to the contraction of the agricultural sector caused by the appreciation of the real exchange rate. This article demonstrates that the effect on the poverty gap and poverty severity is, nevertheless, significant for both rural and urban areas, with a higher impact in the urban areas. Tourism expansion enables poorer households to move closer to the poverty line. It is concluded that the tourism industry is pro-poor. PMID:29595836

  13. A review of metaheuristic scheduling techniques in cloud computing

    Directory of Open Access Journals (Sweden)

    Mala Kalra

    2015-11-01

    Cloud computing has become a buzzword in the area of high performance distributed computing as it provides on-demand access to a shared pool of resources over the Internet in a self-service, dynamically scalable and metered manner. Cloud computing is still in its infancy, so to reap its full benefits, much research is required across a broad array of topics. One of the important research issues which needs to be addressed for efficient performance is scheduling. The goal of scheduling is to map tasks to appropriate resources in a way that optimizes one or more objectives. Scheduling in cloud computing belongs to a category of problems known as NP-hard problems due to the large solution space, and thus it takes a long time to find an optimal solution. There are no algorithms which may produce an optimal solution within polynomial time for these problems. In a cloud environment, it is preferable to find a suboptimal solution in a short period of time. Metaheuristic-based techniques have been proven to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and two novel techniques: the League Championship Algorithm (LCA) and the BAT algorithm.
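    As a concrete illustration of the last of these techniques, the sketch below implements a toy global-best PSO that maps tasks to virtual machines so as to minimize makespan on a randomly generated workload. It illustrates the PSO mechanics only and is not drawn from any of the surveyed papers; all workload numbers and PSO parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical workload: execution time (s) of each task on each resource (VM).
n_tasks, n_vms = 20, 4
cost = rng.uniform(5.0, 50.0, size=(n_tasks, n_vms))

def makespan(position):
    """Decode a continuous particle into a task->VM mapping and score it."""
    vm = np.clip(np.round(position), 0, n_vms - 1).astype(int)
    loads = np.zeros(n_vms)
    for task, v in enumerate(vm):
        loads[v] += cost[task, v]
    return loads.max()

# Plain global-best PSO (inertia + cognitive + social terms).
n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(0, n_vms - 1, size=(n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([makespan(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_vms - 1)
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan (s):", round(pbest_val.min(), 2))
```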

  14. NON-EQUILIBRIUM HELIUM IONIZATION IN AN MHD SIMULATION OF THE SOLAR ATMOSPHERE

    International Nuclear Information System (INIS)

    Golding, Thomas Peter; Carlsson, Mats; Leenaarts, Jorrit

    2016-01-01

    The ionization state of the gas in the dynamic solar chromosphere can depart strongly from the instantaneous statistical equilibrium commonly assumed in numerical modeling. We improve on earlier simulations of the solar atmosphere that only included non-equilibrium hydrogen ionization by performing a 2D radiation-magnetohydrodynamics simulation featuring non-equilibrium ionization of both hydrogen and helium. The simulation includes the effect of hydrogen Lyα and the EUV radiation from the corona on the ionization and heating of the atmosphere. Details on code implementation are given. We obtain helium ion fractions that are far from their equilibrium values. Comparison with models with local thermodynamic equilibrium (LTE) ionization shows that non-equilibrium helium ionization leads to higher temperatures in wavefronts and lower temperatures in the gas between shocks. Assuming LTE ionization results in a thermostat-like behavior with matter accumulating around the temperatures where the LTE ionization fractions change rapidly. Comparison of DEM curves computed from our models shows that non-equilibrium ionization leads to more radiating material in the temperature range 11–18 kK, compared to models with LTE helium ionization. We conclude that non-equilibrium helium ionization is important for the dynamics and thermal structure of the upper chromosphere and transition region. It might also help resolve the problem that intensities of chromospheric lines computed from current models are smaller than those observed.

  15. Vapor-liquid equilibrium thermodynamics of N2 + CH4 - Model and Titan applications

    Science.gov (United States)

    Thompson, W. R.; Zollweg, John A.; Gabis, David H.

    1992-01-01

    A thermodynamic model is presented for vapor-liquid equilibrium in the N2 + CH4 system, which governs the vapor-liquid equilibrium thermodynamics of Titan's tropospheric clouds. The model imposes constraints on the consistency of experimental equilibrium data, and embodies temperature effects by encompassing enthalpy data; it readily calculates the saturation criteria, condensate composition, and latent heat for a given pressure-temperature profile of the Titan atmosphere. The N2 content of the condensate is about half of that computed from Raoult's law, and about 30 percent greater than that computed from Henry's law.
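    The Raoult's-law and Henry's-law reference points mentioned above are one-line calculations once the relevant pressures are known. The sketch below uses placeholder values (not Titan or laboratory data) purely to show where the two ideal estimates sit.

```python
# Illustrative comparison of ideal-solution (Raoult) and Henry's-law estimates
# of the N2 mole fraction in a CH4-rich condensate.  All numbers below are
# placeholders chosen for illustration only, not Titan or laboratory data.

p_N2 = 0.10e5        # assumed N2 partial pressure in the vapor (Pa)
p_sat_N2 = 1.4e5     # assumed pure-N2 saturation pressure at the same T (Pa)
H_N2 = 2.6e5         # assumed Henry's-law constant for N2 in liquid CH4 (Pa)

x_raoult = p_N2 / p_sat_N2    # ideal-solution estimate
x_henry = p_N2 / H_N2         # dilute-solution (Henry's law) estimate

print(f"Raoult estimate : x_N2 = {x_raoult:.3f}")
print(f"Henry  estimate : x_N2 = {x_henry:.3f}")
# A non-ideal activity-coefficient model, like the one in the paper, typically
# lands between these two bounds.
```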

  16. Application of computational intelligence techniques for load shedding in power systems: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Bakar, A.H.A.; Mohamad, Hasmaini

    2013-01-01

    Highlights: • The power system blackout history of the last two decades is presented. • Conventional load shedding techniques, their types and limitations are presented. • Applications of intelligent techniques in load shedding are presented. • Intelligent techniques include ANN, fuzzy logic, ANFIS, genetic algorithms and PSO. • A discussion and comparison of these techniques is provided. - Abstract: Recent blackouts around the world question the reliability of conventional and adaptive load shedding techniques in avoiding such power outages. To address this issue, reliable techniques are required to provide fast and accurate load shedding to prevent collapse in the power system. Computational intelligence techniques, due to their robustness and flexibility in dealing with complex non-linear systems, could be an option in addressing this problem. Computational intelligence includes techniques such as artificial neural networks, genetic algorithms, fuzzy logic control, adaptive neuro-fuzzy inference systems, and particle swarm optimization. Research in these techniques is being undertaken in order to discover means for more efficient and reliable load shedding. This paper provides an overview of these techniques as applied to load shedding in a power system. This paper also compares the advantages of computational intelligence techniques over conventional load shedding techniques. Finally, this paper discusses the limitations of computational intelligence techniques, which restrict their usage in real-time load shedding.

  17. Problems in equilibrium theory

    CERN Document Server

    Aliprantis, Charalambos D

    1996-01-01

    In studying General Equilibrium Theory the student must master first the theory and then apply it to solve problems. At the graduate level there is no book devoted exclusively to teaching problem solving. This book teaches for the first time the basic methods of proof and problem solving in General Equilibrium Theory. The problems cover the entire spectrum of difficulty; some are routine, some require a good grasp of the material involved, and some are exceptionally challenging. The book presents complete solutions to two hundred problems. In searching for the basic required techniques, the student will find a wealth of new material incorporated into the solutions. The student is challenged to produce solutions which are different from the ones presented in the book.

  18. The empirical equilibrium structure of diacetylene

    Science.gov (United States)

    Thorwirth, Sven; Harding, Michael E.; Muders, Dirk; Gauss, Jürgen

    2008-09-01

    High-level quantum-chemical calculations are reported at the MP2 and CCSD(T) levels of theory for the equilibrium structure and the harmonic and anharmonic force fields of diacetylene, H–C≡C–C≡C–H. The calculations were performed employing Dunning's hierarchy of correlation-consistent basis sets cc-pVXZ, cc-pCVXZ, and cc-pwCVXZ, as well as the ANO2 basis set of Almlöf and Taylor. An empirical equilibrium structure based on experimental rotational constants for 13 isotopic species of diacetylene and computed zero-point vibrational corrections is determined (r_e^emp: r(C–H) = 1.0615 Å, r(C≡C) = 1.2085 Å, r(C–C) = 1.3727 Å) and is in good agreement with the best theoretical structure (CCSD(T)/cc-pCV5Z: r(C–H) = 1.0617 Å, r(C≡C) = 1.2083 Å, r(C–C) = 1.3737 Å). In addition, the computed fundamental vibrational frequencies are compared with the available experimental data and found to be in satisfactory agreement.

  19. Note: Local thermal conductivities from boundary driven non-equilibrium molecular dynamics simulations

    International Nuclear Information System (INIS)

    Bresme, F.; Armstrong, J.

    2014-01-01

    We report non-equilibrium molecular dynamics simulations of heat transport in models of molecular fluids. We show that the “local” thermal conductivities obtained from non-equilibrium molecular dynamics simulations agree within numerical accuracy with equilibrium Green-Kubo computations. Our results support the local equilibrium hypothesis for transport properties. We show how to use the local dependence of the thermal gradients to quantify the thermal conductivity of molecular fluids for a wide range of thermodynamic states using a single simulation.

  20. The DART general equilibrium model: A technical description

    OpenAIRE

    Springer, Katrin

    1998-01-01

    This paper provides a technical description of the Dynamic Applied Regional Trade (DART) General Equilibrium Model. The DART model is a recursive dynamic, multi-region, multi-sector computable general equilibrium model. All regions are fully specified and linked by bilateral trade flows. The DART model can be used to project economic activities, energy use and trade flows for each of the specified regions to simulate various trade policy as well as environmental policy scenarios, and to analy...

  1. Nanoscale dislocation shear loops at static equilibrium and finite temperature

    Science.gov (United States)

    Dang, Khanh; Capolungo, Laurent; Spearot, Douglas E.

    2017-12-01

    Atomistic simulations are used to determine the resolved shear stress necessary for equilibrium and the resulting geometry of nanoscale dislocation shear loops in Al. Dislocation loops with different sizes and shapes are created via superposition of elemental triangular dislocation displacement fields in the presence of an externally imposed shear stress. First, a bisection algorithm is developed to determine systematically the resolved shear stress necessary for equilibrium at 0 K. This approach allows for the identification of dislocation core structure and a correlation between dislocation loop size, shape and the computed shear stress for equilibrium. It is found, in agreement with predictions made by Scattergood and Bacon, that the equilibrium shape of a dislocation loop becomes more circular with increasing loop size. Second, the bisection algorithm is extended to study the influence of temperature on the resolved shear stress necessary for stability. An approach is presented to compute the effective lattice friction stress, including temperature dependence, for dislocation loops in Al. The temperature dependence of the effective lattice friction stress can be reliably computed for dislocation loops larger than 16.2 nm. However, for dislocation loops smaller than this threshold, the effective lattice friction stress shows a dislocation loop size dependence caused by significant overlap of the stress fields on the interior of the dislocation loops. Combined, static and finite temperature atomistic simulations provide essential data to parameterize discrete dislocation dynamics simulations.
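    The bisection step described above can be sketched generically: given a routine that reports whether a loop shrinks or expands under an applied resolved shear stress (in the actual study this is an atomistic relaxation), the equilibrium stress is bracketed and halved until the interval is tight. The loop-response model below is a made-up placeholder, not the paper's interatomic potential.

```python
def equilibrium_shear_stress(loop_response, tau_lo=0.0, tau_hi=2.0e9, tol=1.0e6):
    """Bisect for the resolved shear stress (Pa) at which a dislocation loop
    neither shrinks nor expands.  `loop_response(tau)` stands in for an
    atomistic relaxation and returns -1 if the loop shrinks, +1 if it expands."""
    while tau_hi - tau_lo > tol:
        tau_mid = 0.5 * (tau_lo + tau_hi)
        if loop_response(tau_mid) < 0:
            tau_lo = tau_mid      # loop shrinks: applied stress too low
        else:
            tau_hi = tau_mid      # loop expands: applied stress too high
    return 0.5 * (tau_lo + tau_hi)

# Placeholder response model: the loop shrinks below an Orowan-like threshold
# ~ G*b/d and expands above it (numbers are illustrative only, not from the paper).
G, b, d = 26.0e9, 2.86e-10, 20.0e-9
resp = lambda tau: -1 if tau < G * b / d else +1
print(equilibrium_shear_stress(resp) / 1.0e6, "MPa")
```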

  2. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  3. Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions.

    Science.gov (United States)

    Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M

    2018-05-15

    Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At equilibrium, the excess free energy of the system should have a minimum value, which means that both the necessary and sufficient conditions of the minimum should be fulfilled; only in this case do the obtained profiles provide the minimum of the excess free energy. The necessary condition of equilibrium requires that the first variation of the excess free energy vanish and that the second variation be positive. Unfortunately, these two conditions alone do not prove that the obtained profiles correspond to a minimum of the excess free energy: it is also necessary to check whether the sufficient condition of equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge, Jacobi's condition has never been verified for any previously published equilibrium profiles of the droplet and the deformable substrate. A simple model of an equilibrium droplet on a deformable substrate is considered, and it is shown that the deduced profiles of the equilibrium droplet and deformable substrate satisfy Jacobi's condition, that is, they really provide the minimum of the excess free energy of the system. To simplify the calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted. It is shown that both the necessary and sufficient conditions for equilibrium are satisfied, and the validity of Jacobi's condition is verified for the first time. The latter proves that the developed model really provides (i) the minimum of the excess free energy of the system droplet/deformable substrate and (ii) equilibrium profiles of both the droplet and the deformable substrate.

  4. Cloud computing and digital media fundamentals, techniques, and applications

    CERN Document Server

    Li, Kuan-Ching; Shih, Timothy K

    2014-01-01

    Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more. The book covers resource optimization for multimedia clo

  5. On equilibrium real exchange rates in euro area: Special focus on behavioral equilibrium exchange rates in Ireland and Greece

    Directory of Open Access Journals (Sweden)

    Klára Plecitá

    2012-01-01

    This paper focuses on the intra-euro-area imbalances. Therefore the first aim of this paper is to identify euro-area countries exhibiting macroeconomic imbalances. The subsequent aim is to estimate equilibrium real exchange rates for these countries and to compute their degrees of real exchange rate misalignment. The intra-area balance is assessed using Cluster Analysis and Principal Component Analysis; on this basis Greece and Ireland are selected as the two euro-area countries with the largest imbalances in 2010. Further, the medium-run equilibrium exchange rates for Greece and Ireland are estimated applying the Behavioral Equilibrium Exchange Rate (BEER) approach popularised by Clark and MacDonald (1998). In addition, the long-run equilibrium exchange rates are estimated using the Permanent Equilibrium Exchange Rate (PEER) model. Employing the BEER and PEER approaches on quarterly time series of real effective exchange rates (REER) from 1997:Q1 to 2010:Q4, we identify an undervaluation of the Greek and Irish REER around their entrance to the euro area. For the rest of the period analysed, their REER is broadly in line with the estimated BEER and PEER levels.

  6. Fusion of neural computing and PLS techniques for load estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, M.; Xue, H.; Cheng, X. [Northwestern Polytechnical Univ., Xi' an (China); Zhang, W. [Xi' an Inst. of Post and Telecommunication, Xi' an (China)

    2007-07-01

    A method to predict the electric load of a power system in real time was presented. The method is based on neurocomputing and partial least squares (PLS). Short-term load forecasts for power systems are generally determined by conventional statistical methods and computational intelligence (CI) techniques such as neural computing. However, statistical modeling methods often require the input of questionable distributional assumptions, and neural computing is weak, particularly in determining topology. In order to overcome the problems associated with conventional techniques, the authors developed a CI hybrid model based on neural computation and PLS techniques. The theoretical foundation for the designed CI hybrid model was presented along with its application in a power system. The hybrid model is suitable for nonlinear modeling and latent structure extraction. It can automatically determine the optimal topology to maximize the generalization. The CI hybrid model provides faster convergence and better prediction results compared to the abductive networks model because it incorporates a load conversion technique as well as new transfer functions. In order to demonstrate the effectiveness of the hybrid model, load forecasting was performed on a data set obtained from the Puget Sound Power and Light Company. Compared with the abductive networks model, the CI hybrid model reduced the forecast error by 32.37 per cent on workdays, and by an average of 27.18 per cent on weekends. It was concluded that the CI hybrid model has a more powerful predictive ability. 7 refs., 1 tab., 3 figs.
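    As a point of reference for the PLS half of the hybrid, a bare partial-least-squares load regression looks like the sketch below (scikit-learn's PLSRegression on synthetic features standing in for temperature and lagged load; none of these numbers come from the Puget Sound data set). The paper's CI hybrid additionally wraps this kind of latent-structure extraction with neural computation, new transfer functions and a load-conversion step.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for hourly predictors: temperature, hour-of-day encoding,
# and two lagged-load features (illustrative only, not utility data).
n = 500
temp = rng.normal(12.0, 8.0, n)
hour = rng.integers(0, 24, n)
lag1 = 900 + 150 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 20, n)
lag24 = lag1 + rng.normal(0, 15, n)
X = np.column_stack([temp, np.sin(2 * np.pi * hour / 24),
                     np.cos(2 * np.pi * hour / 24), lag1, lag24])
y = 0.6 * lag1 + 0.3 * lag24 - 4.0 * temp + rng.normal(0, 25, n)

# Fit PLS with a small number of latent components and report hold-out error.
pls = PLSRegression(n_components=3)
pls.fit(X[:400], y[:400])
pred = pls.predict(X[400:]).ravel()
mape = np.mean(np.abs(pred - y[400:]) / np.abs(y[400:])) * 100
print(f"hold-out MAPE: {mape:.1f}%")
```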

  7. Composition and partition functions of partially ionized hydrogen plasma in Non-Local Thermal Equilibrium (Non-LThE) and Non-Local Chemical Equilibrium (Non-LChE)

    International Nuclear Information System (INIS)

    Chen Kuan; Eddy, T.L.

    1993-01-01

    A GTME (Generalized MultiThermodynamic Equilibrium) plasma model is developed for plasmas in both Non-LThE (Non-Local Thermal Equilibrium) and Non-LChE (Non-Local Chemical Equilibrium). The model uses multiple temperatures for thermal nonequilibrium and non-zero chemical affinities as a measure of the deviation from chemical equilibrium. The plasma is treated as an ideal gas with the Debye-Hueckel approximation employed for pressure correction. The proration method is used when the cutoff energy level is between two discrete levels. The composition and internal partition functions of a hydrogen plasma are presented for electron temperatures ranging from 5000 to 35000 K and pressures from 0.1 to 1000 kPa. Number densities of 7 different species of hydrogen plasma and internal partition functions of different energy modes (rotational, vibrational, and electronic excitation) are computed for three affinity values. The results differ from other plasma properties in that they 1) are not based on equilibrium properties; and 2) are expressed as a function of different energy distribution parameters (temperatures) within each energy mode of each species as appropriate. The computed number densities and partition functions are applicable to calculating the thermodynamic, transport, and radiation properties of a hydrogen plasma not in thermal and chemical equilibrium. The nonequilibrium plasma model and plasma compositions presented in this paper are very useful for the diagnosis of high-speed and/or low-pressure plasma flows in which the assumptions of local thermal and chemical equilibrium are invalid. (orig.)

  8. Equilibrium and non-equilibrium metal-ceramic interfaces

    International Nuclear Information System (INIS)

    Gao, Y.; Merkle, K.L.

    1992-01-01

    Metal-ceramic interfaces in thermodynamic equilibrium (Au/ZrO2) and non-equilibrium (Au/MgO) have been studied by TEM and HREM. In the Au/ZrO2 system, ZrO2 precipitates formed by internal oxidation of a 7%Zr-Au alloy show a cubic ZrO2 phase. It appears that formation of the cubic ZrO2 is facilitated by alignment with the Au matrix. Most of the ZrO2 precipitates have a perfect cube-on-cube orientation relationship with the Au matrix. The large number of interfacial steps observed in a short-time annealing experiment indicates that the precipitates are formed by the ledge growth mechanism. The lowest interfacial energy is indicated by the dominance of close-packed [111] Au/ZrO2 interfaces. In the Au/MgO system, composite films with small MgO smoke particles embedded in a Au matrix were prepared by a thin film technique. HREM observations show that most of the Au/MgO interfaces have a strong tendency to maintain a dense lattice structure across the interfaces irrespective of whether the interfaces are incoherent or semi-coherent. This indicates that there may be a relatively strong bond between MgO and Au

  9. Exploring Chemical and Thermal Non-equilibrium in Nitrogen Arcs

    International Nuclear Information System (INIS)

    Ghorui, S; Das, A K

    2012-01-01

    Plasma torches operating with nitrogen are of special importance as they can operate with usual tungsten based refractory electrodes and offer a radical rich, non-oxidizing high temperature environment for plasma chemistry. Strong gradients in temperature as well as species densities and huge convective fluxes lead to varying degrees of chemical non-equilibrium in the associated regions. An axi-symmetric two-temperature chemical non-equilibrium model of a nitrogen plasma torch has been developed to understand the effects of thermal and chemical non-equilibrium in arcs. A 2-D finite volume CFD code in association with a non-equilibrium property routine enabled extraction of steady state self-consistent distributions of various plasma quantities inside the torch under various thermal and chemical non-equilibrium conditions. Chemical non-equilibrium has been incorporated through computation of diffusive and convective fluxes in each finite volume cell in every iteration and associating corresponding thermodynamic and transport properties through the scheme of the 'chemical non-equilibrium parameter' introduced by Ghorui et al. Recombination coefficient data from Nahar et al. and radiation data from Krey and Morris have been used in the simulation. Results are presented for distributions of temperature, pressure, velocity, current density, electric potential, species densities and chemical non-equilibrium effects. The obtained results are compared with similar results under LTE.

  10. Plasma Equilibrium Control in Nuclear Fusion Devices 2. Plasma Control in Magnetic Confinement Devices 2.1 Plasma Control in Tokamaks

    Science.gov (United States)

    Fukuda, Takeshi

    The plasma control technique for use in large tokamak devices has made great developmental strides in the last decade, concomitantly with progress in the understanding of tokamak physics and in part facilitated by the substantial advancement in the computing environment. Equilibrium control procedures have thereby been established, and it has been pervasively recognized in recent years that the real-time feedback control of physical quantities is indispensable for the improvement and sustainment of plasma performance in a quasi-steady-state. Further development is presently undertaken to realize the “advanced plasma control” concept, where integrated fusion performance is achieved by the simultaneous feedback control of multiple physical quantities, combined with equilibrium control.

  11. Computable general equilibrium models for sustainability impact assessment: Status quo and prospects

    International Nuclear Information System (INIS)

    Boehringer, Christoph; Loeschel, Andreas

    2006-01-01

    Sustainability Impact Assessment (SIA) of economic, environmental, and social effects triggered by governmental policies has become a central requirement for policy design. The three dimensions of SIA are inherently intertwined and subject to trade-offs. Quantification of trade-offs for policy decision support requires numerical models in order to assess systematically the interference of complex interacting forces that affect economic performance, environmental quality, and social conditions. This paper investigates the use of computable general equilibrium (CGE) models for measuring the impacts of policy interference on policy-relevant economic, environmental, and social (institutional) indicators. We find that operational CGE models used for energy-economy-environment (E3) analyses have a good coverage of central economic indicators. Environmental indicators such as energy-related emissions with direct links to economic activities are widely covered, whereas indicators with a complex natural science background, such as water stress or biodiversity loss, are hardly represented. Social indicators stand out for very weak coverage, mainly because they are vaguely defined or incommensurable. Our analysis identifies prospects for future modeling in the field of integrated assessment that link standard E3 CGE models to theme-specific complementary models with an environmental and social focus. (author)

  12. Computable general equilibrium modelling in the context of trade and environmental policy

    Energy Technology Data Exchange (ETDEWEB)

    Koesler, Simon Tobias

    2014-10-14

    This thesis is dedicated to the evaluation of environmental policies in the context of climate change. Its objectives are twofold. Its first part is devoted to the development of potent instruments for quantitative impact analysis of environmental policy. In this context, the main contributions include the development of a new computable general equilibrium (CGE) model which makes use of the new comprehensive and coherent World Input-Output Dataset (WIOD) and which features a detailed representation of bilateral and bisectoral trade flows. Moreover it features an investigation of input substitutability to provide modellers with adequate estimates for key elasticities as well as a discussion and amelioration of the standard base year calibration procedure of most CGE models. Building on these tools, the second part applies the improved modelling framework and studies the economic implications of environmental policy. This includes an analysis of so called rebound effects, which are triggered by energy efficiency improvements and reduce their net benefit, an investigation of how firms restructure their production processes in the presence of carbon pricing mechanisms, and an analysis of a regional maritime emission trading scheme as one of the possible options to reduce emissions of international shipping in the EU context.

  13. Computable general equilibrium modelling in the context of trade and environmental policy

    International Nuclear Information System (INIS)

    Koesler, Simon Tobias

    2014-01-01

    This thesis is dedicated to the evaluation of environmental policies in the context of climate change. Its objectives are twofold. Its first part is devoted to the development of potent instruments for quantitative impact analysis of environmental policy. In this context, the main contributions include the development of a new computable general equilibrium (CGE) model which makes use of the new comprehensive and coherent World Input-Output Dataset (WIOD) and which features a detailed representation of bilateral and bisectoral trade flows. Moreover it features an investigation of input substitutability to provide modellers with adequate estimates for key elasticities as well as a discussion and amelioration of the standard base year calibration procedure of most CGE models. Building on these tools, the second part applies the improved modelling framework and studies the economic implications of environmental policy. This includes an analysis of so called rebound effects, which are triggered by energy efficiency improvements and reduce their net benefit, an investigation of how firms restructure their production processes in the presence of carbon pricing mechanisms, and an analysis of a regional maritime emission trading scheme as one of the possible options to reduce emissions of international shipping in the EU context.

  14. Essays on environmental policy analysis: Computable general equilibrium approaches applied to Sweden

    International Nuclear Information System (INIS)

    Hill, M.

    2001-01-01

    This thesis consists of three essays within the field of applied environmental economics, with the common basic aim of analyzing effects of Swedish environmental policy. Starting out from Swedish environmental goals, the thesis assesses a range of policy-related questions. The objective is to quantify policy outcomes by constructing and applying numerical models especially designed for environmental policy analysis. Static and dynamic multi-sectoral computable general equilibrium models are developed in order to analyze the following issues: (i) the costs and benefits of a domestic carbon dioxide (CO2) tax reform, with special attention to how these costs and benefits depend on the structure of the tax system and on policy-induced changes in 'secondary' pollutants; (ii) the effects of allowing for emission permit trading through time when the long-term domestic environmental goal is specified in CO2 stock terms; and (iii) the effects on long-term projected economic growth and welfare that are due to damages from the emission flow and accumulation of 'local' pollutants (nitrogen oxides and sulfur dioxide), as well as the outcome of environmental policy when costs and benefits are considered in an integrated environmental-economic framework.

  15. Computational fluid-dynamic model of laser-induced breakdown in air

    International Nuclear Information System (INIS)

    Dors, Ivan G.; Parigger, Christian G.

    2003-01-01

    Temperature and pressure profiles are computed by the use of a two-dimensional, axially symmetric, time-accurate computational fluid-dynamic model for nominal 10-ns optical breakdown laser pulses. The computational model includes a kinetics mechanism that implements plasma equilibrium kinetics in ionized regions and nonequilibrium, multistep, finite-rate reactions in nonionized regions. Fluid-physics phenomena following laser-induced breakdown are recorded with high-speed shadowgraph techniques. The predicted fluid phenomena are shown by direct comparison with experimental records to agree with the flow patterns that are characteristic of laser spark decay

  16. Computational and theoretical modeling of pH and flow effects on the early-stage non-equilibrium self-assembly of optoelectronic peptides

    Science.gov (United States)

    Mansbach, Rachael; Ferguson, Andrew

    Self-assembling π-conjugated peptides are attractive candidates for the fabrication of bioelectronic materials possessing optoelectronic properties due to electron delocalization over the conjugated peptide groups. We present a computational and theoretical study of an experimentally realized optoelectronic peptide that displays triggerable assembly at low pH, in order to resolve the microscopic effects of flow and pH on the non-equilibrium morphology and kinetics of assembly. Using a combination of molecular dynamics simulations and hydrodynamic modeling, we quantify the time and length scales at which the convective flows employed in directed assembly compete with microscopic diffusion to influence assembly. We also show that there is a critical pH below which aggregation proceeds irreversibly, and quantify the relationship between pH, charge density, and aggregate size. Our work provides new fundamental understanding of the effects of pH and flow on non-equilibrium π-conjugated peptide assembly, and lays the groundwork for the rational manipulation of environmental conditions and peptide chemistry to control assembly and the attendant emergent optoelectronic properties. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award # DE-SC0011847, and by the Computational Science and Engineering Fellowship from the University of Illinois at Urbana-Champaign.

  17. Finite-Difference Solution for Laminar or Turbulent Boundary Layer Flow over Axisymmetric Bodies with Ideal Gas, CF4, or Equilibrium Air Chemistry

    Science.gov (United States)

    Hamilton, H. Harris, II; Millman, Daniel R.; Greendyke, Robert B.

    1992-01-01

    A computer code was developed that uses an implicit finite-difference technique to solve nonsimilar, axisymmetric boundary layer equations for both laminar and turbulent flow. The code can treat ideal gases, air in chemical equilibrium, and carbon tetrafluoride (CF4), which is a useful gas for hypersonic blunt-body simulations. This is the only known boundary layer code that can treat CF4. Comparisons with experimental data have demonstrated that accurate solutions are obtained. The method should prove useful as an analysis tool for comparing calculations with wind tunnel experiments and for making calculations about flight vehicles where equilibrium air chemistry assumptions are valid.

  18. Inclusion of pressure and flow in the KITES MHD equilibrium code

    International Nuclear Information System (INIS)

    Raburn, Daniel; Fukuyama, Atsushi

    2013-01-01

    One of the simplest self-consistent models of a plasma is single-fluid magnetohydrodynamic (MHD) equilibrium with no bulk fluid flow under axisymmetry. However, both fluid flow and non-axisymmetric effects can significantly impact plasma equilibrium and confinement properties: in particular, fluid flow can produce profile pedestals, and non-axisymmetric effects can produce islands and stochastic regions. There exist a number of computational codes which are capable of calculating equilibria with arbitrary flow or with non-axisymmetric effects. Previously, a concept for a code to calculate MHD equilibria with flow in non-axisymmetric systems was presented, called the KITES (Kyoto ITerative Equilibrium Solver) code. Since then, many of the computational modules for the KITES code have been completed, and the work-in-progress KITES code has been used to calculate non-axisymmetric force-free equilibria. Additional computational modules are required to allow the KITES code to calculate equilibria with pressure and flow. Here, the authors report on the approaches used in developing these modules and provide a sample calculation with pressure. (author)

  19. Thermodynamic chemical energy transfer mechanisms of non-equilibrium, quasi-equilibrium, and equilibrium chemical reactions

    International Nuclear Information System (INIS)

    Roh, Heui-Seol

    2015-01-01

    Chemical energy transfer mechanisms at finite temperature are explored by a chemical energy transfer theory which is capable of investigating various chemical mechanisms of non-equilibrium, quasi-equilibrium, and equilibrium. Gibbs energy fluxes are obtained as a function of chemical potential, time, and displacement. Diffusion, convection, internal convection, and internal equilibrium chemical energy fluxes are demonstrated. The theory reveals that there are chemical energy flux gaps and broken discrete symmetries at the activation chemical potential, time, and displacement. The statistical, thermodynamic theory is the unification of diffusion and internal convection chemical reactions which reduces to the non-equilibrium generalization beyond the quasi-equilibrium theories of migration and diffusion processes. The relationship between kinetic theories of chemical and electrochemical reactions is also explored. The theory is applied to explore non-equilibrium chemical reactions as an illustration. Three variable separation constants indicate particle number constants and play key roles in describing the distinct chemical reaction mechanisms. The kinetics of chemical energy transfer accounts for the four control mechanisms of chemical reactions such as activation, concentration, transition, and film chemical reactions. - Highlights: • Chemical energy transfer theory is proposed for non-, quasi-, and equilibrium. • Gibbs energy fluxes are expressed by chemical potential, time, and displacement. • Relationship between chemical and electrochemical reactions is discussed. • Theory is applied to explore nonequilibrium energy transfer in chemical reactions. • Kinetics of non-equilibrium chemical reactions shows the four control mechanisms

  20. A Computer Algebra Approach to Solving Chemical Equilibria in General Chemistry

    Science.gov (United States)

    Kalainoff, Melinda; Lachance, Russ; Riegner, Dawn; Biaglow, Andrew

    2012-01-01

    In this article, we report on a semester-long study of the incorporation, into our general chemistry course, of advanced algebraic and computer algebra techniques for solving chemical equilibrium problems. The method presented here is an alternative to the commonly used concentration table method for describing chemical equilibria in general…
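    A hedged sketch of what such a computer-algebra treatment can look like (using SymPy, which is an assumption on our part rather than the CAS used in the course): for a hypothetical weak acid with dissociation constant Ka and initial concentration C0, the exact equilibrium condition is solved directly instead of being approximated through a concentration (ICE) table.

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # x = [H+] at equilibrium

# Hypothetical weak acid: Ka = 1.8e-5 (acetic-acid-like), C0 = 0.10 M.
Ka, C0 = sp.Rational(18, 1_000_000), sp.Rational(1, 10)

# Exact equilibrium condition Ka = x^2 / (C0 - x), solved without the usual
# "x << C0" approximation used in ICE-table treatments.
eq = sp.Eq(Ka, x**2 / (C0 - x))
h_plus = max(sp.solve(eq, x))        # keep the physically meaningful root

print("[H+] =", sp.N(h_plus, 4), "M")
print("pH   =", sp.N(-sp.log(h_plus, 10), 4))
```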

  1. Algorithm For Hypersonic Flow In Chemical Equilibrium

    Science.gov (United States)

    Palmer, Grant

    1989-01-01

    Implicit, finite-difference, shock-capturing algorithm calculates inviscid, hypersonic flows in chemical equilibrium. Implicit formulation chosen because overcomes limitation on mathematical stability encountered in explicit formulations. For dynamical portion of problem, Euler equations written in conservation-law form in Cartesian coordinate system for two-dimensional or axisymmetric flow. For chemical portion of problem, equilibrium state of gas at each point in computational grid determined by minimizing local Gibbs free energy, subject to local conservation of molecules, atoms, ions, and total enthalpy. Major advantage: resulting algorithm naturally stable and captures strong shocks without help of artificial-dissipation terms to damp out spurious numerical oscillations.
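    The chemical portion described above is a constrained minimization of the Gibbs free energy. A much-simplified sketch conveys the structure: ideal-gas dissociation of O2 into O at fixed temperature and pressure, with illustrative placeholder values for the standard chemical potentials and a generic SLSQP solver standing in for the algorithm's actual minimization step.

```python
import numpy as np
from scipy.optimize import minimize

# Much-simplified Gibbs-energy minimization: O2 <-> 2 O in an ideal gas at
# fixed T and p.  The dimensionless standard chemical potentials below are
# illustrative placeholders, not values from thermodynamic tables.
mu0_RT = np.array([-25.0, -12.0])          # [O2, O]
p_over_p0 = 1.0
n_elem_O = 2.0                             # start from 1 mol O2 -> 2 mol O atoms
atoms_per_species = np.array([2.0, 1.0])   # O atoms per molecule of O2 and O

def gibbs(n):
    """Dimensionless Gibbs energy G/RT of an ideal-gas mixture."""
    n = np.maximum(n, 1.0e-12)             # keep the logarithms defined
    n_tot = n.sum()
    return float(np.sum(n * (mu0_RT + np.log(n * p_over_p0 / n_tot))))

constraints = [{"type": "eq",
                "fun": lambda n: atoms_per_species @ n - n_elem_O}]
result = minimize(gibbs, x0=np.array([0.5, 1.0]), method="SLSQP",
                  bounds=[(1.0e-12, None)] * 2, constraints=constraints)

n_O2, n_O = result.x
print(f"n_O2 = {n_O2:.4f} mol, n_O = {n_O:.4f} mol")
```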

  2. Transition towards a low carbon economy: A computable general equilibrium analysis for Poland

    International Nuclear Information System (INIS)

    Böhringer, Christoph; Rutherford, Thomas F.

    2013-01-01

    In the transition to sustainable economic structures the European Union assumes a leading role with its climate and energy package which sets ambitious greenhouse gas emission reduction targets by 2020. Among EU Member States, Poland with its heavy energy system reliance on coal is particularly worried on the pending trade-offs between emission regulation and economic growth. In our computable general equilibrium analysis of the EU climate and energy package we show that economic adjustment cost for Poland hinge crucially on restrictions to where-flexibility of emission abatement, revenue recycling, and technological options in the power system. We conclude that more comprehensive flexibility provisions at the EU level and a diligent policy implementation at the national level could achieve the transition towards a low carbon economy at little cost thereby broadening societal support. - Highlights: ► Economic impact assessment of the EU climate and energy package for Poland. ► Sensitivity analysis on where-flexibility, revenue recycling and technology choice. ► Application of a hybrid bottom-up, top-down CGE model

  3. Equilibrium Structure of Manganese Trifluoride (MnF3) Molecule

    International Nuclear Information System (INIS)

    Caliskan, M.

    2004-01-01

    The symmetry lowering in the manganese trifluoride molecule due to Jahn-Teller distortion was demonstrated in both the experimental and computational results. The molecule does not have D3h (or C3v) symmetry; rather, it has C2v symmetry. It has been shown from electron-diffraction measurements that even a molecule of D3h symmetry in its equilibrium geometry would appear as having C3v symmetry. The manganese trifluoride molecular structure is an example of concerted application of electron diffraction experiment and computation. Two lower-energy structures with C2v symmetry were found, one corresponding to the ground state and another corresponding to the transition state. In this work we calculate the equilibrium structure of MnF3 in the C2v configuration using the Interionic Force Model. We compare our results for equilibrium bond lengths and bond angles with measured values from electron diffraction and with the results of quantum chemical calculations. The agreement can be considered very reasonable

  4. Dynamic Data-Driven Reduced-Order Models of Macroscale Quantities for the Prediction of Equilibrium System State for Multiphase Porous Medium Systems

    Science.gov (United States)

    Talbot, C.; McClure, J. E.; Armstrong, R. T.; Mostaghimi, P.; Hu, Y.; Miller, C. T.

    2017-12-01

    Microscale simulation of multiphase flow in realistic, highly-resolved porous medium systems of a sufficient size to support macroscale evaluation is computationally demanding. Such approaches can, however, reveal the dynamic, steady, and equilibrium states of a system. We evaluate methods to utilize dynamic data to reduce the cost associated with modeling a steady or equilibrium state. We construct data-driven models using extensions to dynamic mode decomposition (DMD) and its connections to Koopman Operator Theory. DMD and its variants comprise a class of equation-free methods for dimensionality reduction of time-dependent nonlinear dynamical systems. DMD furnishes an explicit reduced representation of system states in terms of spatiotemporally varying modes with time-dependent oscillation frequencies and amplitudes. We use DMD to predict the steady and equilibrium macroscale state of a realistic two-fluid porous medium system imaged using micro-computed tomography (µCT) and simulated using the lattice Boltzmann method (LBM). We apply Koopman DMD to direct numerical simulation data resulting from simulations of multiphase fluid flow through a 1440x1440x4320 section of a full 1600x1600x5280 realization of imaged sandstone. We determine a representative set of system observables via dimensionality reduction techniques including linear and kernel principal component analysis. We demonstrate how this subset of macroscale quantities furnishes a representation of the time-evolution of the system in terms of dynamic modes, and discuss the selection of a subset of DMD modes yielding the optimal reduced model, as well as the time-dependence of the error in the predicted equilibrium value of each macroscale quantity. Finally, we describe how the above procedure, modified to incorporate methods from compressed sensing and random projection techniques, may be used in an online fashion to facilitate adaptive time-stepping and parsimonious storage of system states over time.
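    For readers unfamiliar with DMD, the core of the exact-DMD construction is a few lines of linear algebra. The sketch below applies it to synthetic snapshot data standing in for the macroscale observables extracted from the LBM simulations; the recovered eigenvalues give the decay rates and oscillation frequencies of the dynamic modes.

```python
import numpy as np

def dmd(X, Y, rank):
    """Exact dynamic mode decomposition.

    X, Y are snapshot matrices with Y[:, k] the state one step after X[:, k].
    Returns DMD modes Phi and eigenvalues lam of the best-fit linear operator.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :rank], np.diag(s[:rank]), Vh[:rank].conj().T
    A_tilde = Ur.conj().T @ Y @ Vr @ np.linalg.inv(Sr)   # reduced operator
    lam, W = np.linalg.eig(A_tilde)
    Phi = Y @ Vr @ np.linalg.inv(Sr) @ W                 # exact DMD modes
    return Phi, lam

# Synthetic snapshots: a decaying oscillation plus a pure decay, standing in
# for time series of macroscale quantities (saturation, interfacial area, ...).
t = np.linspace(0.0, 10.0, 201)
rows = np.vstack([np.exp(-0.1 * t) * np.cos(2.0 * t),
                  np.exp(-0.1 * t) * np.sin(2.0 * t),
                  np.exp(-0.5 * t)])
X, Y = rows[:, :-1], rows[:, 1:]

Phi, lam = dmd(X, Y, rank=3)
dt = t[1] - t[0]
print("continuous-time eigenvalues:", np.round(np.log(lam) / dt, 3))
# expected: approximately -0.1 +/- 2i and -0.5
```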

  5. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He

    2014-04-01

    Full Text Available In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, the unreasonable price ratio and other issues. This paper combines the characteristics of Beijing’s energy market and puts forward maximization of the society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that relative to the coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, through means such as the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  6. Initial conditions of non-equilibrium quark-gluon plasma evolution

    International Nuclear Information System (INIS)

    Shmatov, S.V.

    2002-01-01

    In accordance with the hydrodynamic Bjorken limit, the initial energy density and temperature for a chemical non-equilibrium quark-gluon system formed in the heavy ion collisions at the LHC are computed. The dependence of this value on the type of colliding nuclei and the collision impact parameter is studied. The principle possibility of the non-equilibrium quark-gluon plasma (QGP) formation in the light nuclei collisions is shown. The life time of QGP is calculated. (author)

  7. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

    Directory of Open Access Journals (Sweden)

    Benjamin Scellier

    2017-05-01

    Full Text Available We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST
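
    A minimal numerical sketch of the two-phase procedure on a single Hopfield-style layer follows; the energy function, hard-sigmoid nonlinearity, whole-state nudging and all sizes and step sizes are simplifying assumptions of this sketch, not the paper's multi-layer formulation.

      import numpy as np

      rho = lambda s: np.clip(s, 0.0, 1.0)   # hard-sigmoid activation

      def relax(s, W, b, y=None, beta=0.0, steps=100, eps=0.1):
          # Gradient descent on E(s) = 0.5*||s||^2 - 0.5*rho(s)^T W rho(s) - b^T rho(s),
          # plus, in the nudged phase, beta * 0.5 * ||s - y||^2 (whole-state nudging,
          # a simplification of the output-only nudging used in the paper).
          # W is assumed symmetric with zero diagonal.
          for _ in range(steps):
              drho = ((s > 0) & (s < 1)).astype(float)
              grad = s - drho * (W @ rho(s) + b)
              if y is not None:
                  grad = grad + beta * (s - y)
              s = s - eps * grad
          return s

      def eqprop_step(W, b, y, beta=0.5, lr=0.05):
          s_free = relax(np.zeros_like(b), W, b)              # phase 1: free relaxation
          s_nudged = relax(s_free, W, b, y=y, beta=beta)      # phase 2: weakly nudged
          r0, rb = rho(s_free), rho(s_nudged)
          W = W + (lr / beta) * (np.outer(rb, rb) - np.outer(r0, r0))  # local contrastive update
          b = b + (lr / beta) * (rb - r0)
          return W, b

      # Toy usage: 5 units, symmetric weights, a made-up target pattern.
      rng = np.random.default_rng(0)
      W0 = 0.1 * rng.standard_normal((5, 5)); W0 = 0.5 * (W0 + W0.T); np.fill_diagonal(W0, 0.0)
      W1, b1 = eqprop_step(W0, np.zeros(5), y=np.array([1.0, 0.0, 1.0, 0.0, 1.0]))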

  8. Transport and equilibrium in field-reversed mirrors

    International Nuclear Information System (INIS)

    Boyd, J.K.

    1982-09-01

    Two plasma models relevant to compact torus research have been developed to study transport and equilibrium in field reversed mirrors. In the first model for small Larmor radius and large collision frequency, the plasma is described as an adiabatic hydromagnetic fluid. In the second model for large Larmor radius and small collision frequency, a kinetic theory description has been developed. Various aspects of the two models have been studied in five computer codes ADB, AV, NEO, OHK, RES. The ADB code computes two dimensional equilibrium and one dimensional transport in a flux coordinate. The AV code calculates orbit average integrals in a harmonic oscillator potential. The NEO code follows particle trajectories in a Hill's vortex magnetic field to study stochasticity, invariants of the motion, and orbit average formulas. The OHK code displays analytic psi(r), B/sub Z/(r), phi(r), E/sub r/(r) formulas developed for the kinetic theory description. The RES code calculates resonance curves to consider overlap regions relevant to stochastic orbit behavior

  9. Analysis of non-equilibrium phenomena in inductively coupled plasma generators

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, W.; Panesi, M., E-mail: mpanesi@illinois.edu [University of Illinois at Urbana-Champaign, Urbana, Illinois 61822 (United States); Lani, A. [Von Karman Institute for Fluid Dynamics, Rhode-Saint-Genèse (Belgium)

    2016-07-15

    This work addresses the modeling of non-equilibrium phenomena in inductively coupled plasma discharges. In the proposed computational model, the electromagnetic induction equation is solved together with the set of Navier-Stokes equations in order to compute the electromagnetic and flow fields, accounting for their mutual interaction. Semi-classical statistical thermodynamics is used to determine the plasma thermodynamic properties, while transport properties are obtained from kinetic principles, with the method of Chapman and Enskog. Particle ambipolar diffusive fluxes are found by solving the Stefan-Maxwell equations with a simple iterative method. Two physico-mathematical formulations are used to model the chemical reaction processes: (1) a Local Thermodynamic Equilibrium (LTE) formulation and (2) a thermo-chemical non-equilibrium (TCNEQ) formulation. In the TCNEQ model, thermal non-equilibrium between the translational energy mode of the gas and the vibrational energy mode of individual molecules is accounted for. The electronic states of the chemical species are assumed to be in equilibrium with the vibrational temperature, whereas the rotational energy mode is assumed to be equilibrated with translation. Three different physical models are used to account for the coupling of chemistry and energy transfer processes. Numerical simulations obtained with the LTE and TCNEQ formulations are used to characterize the extent of non-equilibrium of the flow inside the Plasmatron facility at the von Karman Institute. Each model was tested using different kinetic mechanisms to assess the sensitivity of the results to variations in the reaction parameters. A comparison of temperatures and composition profiles at the outlet of the torch demonstrates that the flow is in non-equilibrium for operating conditions characterized by pressures below 30 000 Pa, frequency 0.37 MHz, input power 80 kW, and mass flow 8 g/s.

  10. Equilibrium partitioning of macromolecules in confining geometries: Improved universality with a new molecular size parameter

    DEFF Research Database (Denmark)

    Wang, Yanwei; Peters, Günther H.J.; Hansen, Flemming Yssing

    2008-01-01

    structures (CABS), allows the computation of equilibrium partition coefficients as a function of confinement size solely based on a single sampling of the configuration space of a macromolecule in bulk. Superior in computational speed to previous computational methods, CABS is capable of handling slits...... parameter for characterization of spatial confinement effects on macromolecules. Results for the equilibrium partition coefficient in the weak confinement regime depend only on the ratio ofR-s to the confinement size regardless of molecular details....

  11. Visualization of Minkowski operations by computer graphics techniques

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Blaauwgeers, G.S.M.; Serra, J; Soille, P

    1994-01-01

    We consider the problem of visualizing 3D objects defined as a Minkowski addition or subtraction of elementary objects. It is shown that such visualizations can be obtained by using techniques from computer graphics such as ray tracing and Constructive Solid Geometry. Applications of the method are

  12. Steady-State Electrodiffusion from the Nernst-Planck Equation Coupled to Local Equilibrium Monte Carlo Simulations.

    Science.gov (United States)

    Boda, Dezső; Gillespie, Dirk

    2012-03-13

    We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
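
    For reference, the steady-state relations that this closure feeds can be written in the electrochemical-potential form of the Nernst-Planck equation; the notation below (c_i concentration, D_i diffusion coefficient, mu_i electrochemical potential of species i, k Boltzmann's constant, T temperature) is assumed for this sketch.

      \mathbf{J}_i \;=\; -\,\frac{1}{kT}\, D_i\, c_i\, \nabla \mu_i, \qquad \nabla \cdot \mathbf{J}_i \;=\; 0 .

    The iteration described above alternates between the LEMC estimate of the concentrations for a given electrochemical-potential profile and an update of that profile enforcing the flux-continuity condition.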

  13. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.

    1989-01-01

    A personal computer (16-bit, with about 1 MB of memory) can be used at low cost for experimental data processing. This report surveys the important techniques for A/D and D/A conversion, display, storage and transfer of experimental data. The items to be considered in the software are also discussed. Practical programs written in BASIC and Assembler are given as examples. Here, we present some techniques to obtain faster processing in BASIC and show that a system composed of BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested the processing speed of some typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. For calculation, FORTRAN has the best performance, which is comparable to or better than Assembler even on a personal computer. (author)

  14. Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW

    Science.gov (United States)

    Olsen, Mike E.; Liu, Yen; Vinokur, M.; Olsen, Tom

    2004-01-01

    An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera-capable, complex-geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.

  15. Prediction of scour caused by 2D horizontal jets using soft computing techniques

    Directory of Open Access Journals (Sweden)

    Masoud Karbasi

    2017-12-01

    Full Text Available This paper presents the application of five soft-computing techniques, artificial neural networks, support vector regression, gene expression programming, the group method of data handling (GMDH) neural network and the adaptive-network-based fuzzy inference system, to predict the maximum scour hole depth downstream of a sluice gate. The input parameters affecting the scour depth are the sediment size and its gradation, apron length, sluice gate opening, jet Froude number and the tail water depth. Six non-dimensional parameters were derived to define a functional relationship between the input and output variables. Published data from experimental studies were used. The results of the soft-computing techniques were compared with empirical and regression-based equations. The results obtained from the soft-computing techniques are superior to those of the empirical and regression-based equations. A comparison of the soft-computing techniques showed that the accuracy of the ANN model is higher than that of the other models (RMSE = 0.869). A new GEP-based equation was proposed.

  16. Local equilibrium in bird flocks

    Science.gov (United States)

    Mora, Thierry; Walczak, Aleksandra M.; Del Castello, Lorenzo; Ginelli, Francesco; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano; Cavagna, Andrea; Giardina, Irene

    2016-12-01

    The correlated motion of flocks is an example of global order emerging from local interactions. An essential difference with respect to analogous ferromagnetic systems is that flocks are active: animals move relative to each other, dynamically rearranging their interaction network. This non-equilibrium characteristic has been studied theoretically, but its impact on actual animal groups remains to be fully explored experimentally. Here, we introduce a novel dynamical inference technique, based on the principle of maximum entropy, which accommodates network rearrangements and overcomes the problem of slow experimental sampling rates. We use this method to infer the strength and range of alignment forces from data of starling flocks. We find that local bird alignment occurs on a much faster timescale than neighbour rearrangement. Accordingly, equilibrium inference, which assumes a fixed interaction network, gives results consistent with dynamical inference. We conclude that bird orientations are in a state of local quasi-equilibrium over the interaction length scale, providing firm ground for the applicability of statistical physics in certain active systems.

  17. Module description of TOKAMAK equilibrium code MEUDAS

    International Nuclear Information System (INIS)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa

    2002-01-01

    The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, such as the design of devices, theoretical studies and the analysis of experimental results. For this reason, an efficient MHD analysis code has been developed at JAERI since the start of TOKAMAK research. The free boundary equilibrium code ''MEUDAS'', which uses both the DCR method (Double-Cyclic-Reduction Method) and a Green's function, can specify the pressure and the current distribution arbitrarily, and has been applied to the analysis of a broad range of physical subjects as a code offering rapidity and high precision. The MHD convergence calculation technique in ''MEUDAS'' has also been built into various newly developed codes. This report explains in detail each module in ''MEUDAS'' used for performing the convergence calculation in solving the MHD equilibrium. (author)

  18. Theory of stellar atmospheres an introduction to astrophysical non-equilibrium quantitative spectroscopic analysis

    CERN Document Server

    Hubeny, Ivan

    2015-01-01

    This book provides an in-depth and self-contained treatment of the latest advances achieved in quantitative spectroscopic analyses of the observable outer layers of stars and similar objects. Written by two leading researchers in the field, it presents a comprehensive account of both the physical foundations and numerical methods of such analyses. The book is ideal for astronomers who want to acquire deeper insight into the physical foundations of the theory of stellar atmospheres, or who want to learn about modern computational techniques for treating radiative transfer in non-equilibrium situations. It can also serve as a rigorous yet accessible introduction to the discipline for graduate students.

  19. 2-D skin-current toroidal-MHD-equilibrium code

    International Nuclear Information System (INIS)

    Feinberg, B.; Niland, R.A.; Coonrod, J.; Levine, M.A.

    1982-09-01

    A two-dimensional, toroidal, ideal MHD skin-current equilibrium computer code is described. The code is suitable for interactive implementation on a minicomputer. Some examples of the use of the code for the design and interpretation of toroidal cusp experiments are presented.

  20. Equilibrium figures for beta Lyrae type disks

    International Nuclear Information System (INIS)

    Wilson, R.E.

    1981-01-01

    Accumulated evidence for a geometrically and optically thick disk in the β Lyrae system has now established the disk's basic external configuration. Since the disk has been constant in its main properties over the historical interval of β Lyrae observations and also seems to have a basically well-defined photosphere, it is now time to begin consideration of its structure. Here, we compute equilibrium figures for self-gravitating disks around stars in binary systems as a start toward eventual computation of complete disk models. A key role is played by centrifugally limited rotation of the central star, which would naturally arise late in the rapid phase of mass transfer. Beta Lyrae is thus postulated to be a double-contact binary, which makes possible nonarbitrary separation of star and disk into separate structures. The computed equilibrium figures are three-dimensional, as the gravitation of the second star is included. Under the approximation that the gravitational potential of the disk is that of a thin wire and that the local disk angular velocity is proportional to u/sup n/ (u = distance from rotation axis), we compute the total potential and locate equipotential surfaces. The centrifugal potential is written in a particularly convenient form which permits one to change the rotation law discontinuously (for example, at the star-disk coupling point) while ensuring that centrifugal potential and centrifugal force are continuous functions of position. With such a one-parameter rotation law, one can find equilibrium disk figures with dimensions very similar to those found in β Lyrae, but considerations of internal consistency demand at least a two-parameter law.
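
    As a worked illustration of the rotation law mentioned above, take Ω(u) = Ω_0 (u/u_0)^n; the normalization and the choice of integration constant are assumptions of this sketch. Defining the centrifugal potential so that the centrifugal force per unit mass is -∂Φ_c/∂u gives

      \Phi_c(u) \;=\; -\int_{u_0}^{u} \Omega^2(u')\, u'\, du' \;=\; -\,\frac{\Omega_0^2 u_0^2}{2n+2}\left[\left(\frac{u}{u_0}\right)^{2n+2} - 1\right], \qquad n \neq -1,

    and the free integration constant is precisely the freedom exploited above to keep the potential and the force continuous when the exponent changes, for example at the star-disk coupling point.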

  1. Spectroscopy, modeling and computation of metal chelate solubility in supercritical CO2. 1998 annual progress report

    International Nuclear Information System (INIS)

    Brennecke, J.F.; Chateauneuf, J.E.; Stadtherr, M.A.

    1998-01-01

    'This report summarizes work after 1 year and 8 months (9/15/96-5/14/98) of a 3 year project. Thus far, progress has been made in: (1) the measurement of the solubility of metal chelates in SC CO 2 with and without added cosolvents, (2) the spectroscopic determination of preferential solvation of metal chelates by cosolvents in SC CO 2 solutions, and (3) the development of a totally reliable computational technique for phase equilibrium computations. An important factor in the removal of metals from solid matrices with CO 2 /chelate mixtures is the equilibrium solubility of the metal chelate complex in the CO 2 .'

  2. Probing the equilibrium dynamics of colloidal hard spheres above the mode-coupling glass transition

    NARCIS (Netherlands)

    Brambilla, G.; al Masri, J.H.M.; Pierno, M.; Berthier, L.; Cipelletti, L.

    2010-01-01

    We use dynamic light scattering and computer simulations to study equilibrium dynamics and dynamic heterogeneity in concentrated suspensions of colloidal hard spheres. Our study covers an unprecedented density range and spans seven decades in structural relaxation time, including equilibrium

  3. Study of the post-equilibrium slope approximation in the calculation of glomerular filtration rate using the 51Cr-EDTA single injection technique

    International Nuclear Information System (INIS)

    Nimmon, C.C.; McAlister, J.M.; Hickson, B.; Cattell, W.R.

    1975-01-01

    A comparison of methods for calculating the renal clearance of EDTA from the plasma disappearance curve, after a single injection, has been made. Measurements were made on 38 patients, using external monitoring and venous blood sampling techniques, over a period of 24 h after an injection of 100 μCi of 51 Cr-EDTA. The results indicate that the period 3 - 6 h after injection is suitable for sampling the post-equilibrium part of the plasma disappearance curve for values of the glomerular filtration rate (GFR) in the range 0 - 140 ml/min. It was also found that, to within the individual measurement errors, the values of the clearance calculated by using the post-equilibrium period only (PES clearance) can be considered to show a constant proportionality to the values calculated by using the entire plasma disappearance curve (total clearance). (author)
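
    A hedged numerical illustration of the post-equilibrium slope (PES) idea is sketched below as a single-compartment slope-intercept estimate from the 3 - 6 h samples; the sample values, the dose and the omission of any correction factor are assumptions of this sketch, not the authors' protocol.

      import numpy as np

      # Invented example data: background-corrected plasma activity (counts/min per mL)
      # at 3, 4, 5 and 6 h after injection of a dose D (same counting units times mL).
      t = np.array([180.0, 240.0, 300.0, 360.0])     # minutes post-injection
      c = np.array([62.0, 52.5, 44.8, 38.1])         # plasma samples
      D = 1.0e6                                      # injected dose

      # Mono-exponential fit to the post-equilibrium part: ln c = ln c0 - lam * t
      slope, intercept = np.polyfit(t, np.log(c), 1)
      lam, c0 = -slope, np.exp(intercept)

      # Single-compartment (slope-intercept) clearance: Cl = D * lam / c0  [mL/min]
      cl_pes = D * lam / c0
      print(f"lambda = {lam:.4f} 1/min, PES clearance = {cl_pes:.1f} mL/min")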

  4. Assessing economic impacts of China's water pollution mitigation measures through a dynamic computable general equilibrium analysis

    International Nuclear Information System (INIS)

    Qin Changbo; Jia Yangwen; Wang Hao; Bressers, Hans T A; Su, Z

    2011-01-01

    In this letter, we apply an extended environmental dynamic computable general equilibrium model to assess the economic consequences of implementing a total emission control policy. On the basis of emission levels in 2007, we simulate different emission reduction scenarios, ranging from 20 to 50% emission reduction, up to the year 2020. The results indicate that a modest total emission reduction target in 2020 can be achieved at low macroeconomic cost. As the stringency of policy targets increases, the macroeconomic cost will increase at a rate faster than linear. Implementation of a tradable emission permit system can counterbalance the economic costs affecting the gross domestic product and welfare. We also find that a stringent environmental policy can lead to an important shift in production, consumption and trade patterns from dirty sectors to relatively clean sectors.

  5. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network
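
    A generic, hedged sketch of a convolutional classifier of the kind mentioned above is given below in PyTorch; the architecture, the assumed 64x64 grayscale patch size and the hyperparameters are illustrative only and are not the models of the cited study.

      import torch
      import torch.nn as nn

      class NoduleCNN(nn.Module):
          """Minimal 2D CNN for benign/malignant nodule patch classification (sketch only)."""
          def __init__(self, n_classes: int = 2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, n_classes)
              )

          def forward(self, x):
              return self.classifier(self.features(x))

      model = NoduleCNN()
      dummy = torch.randn(4, 1, 64, 64)   # batch of 4 fake CT patches
      logits = model(dummy)               # shape (4, 2): class scores per patch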

  6. Bone tissue engineering scaffolding: computer-aided scaffolding techniques.

    Science.gov (United States)

    Thavornyutikarn, Boonlom; Chantarapanich, Nattapon; Sitthiseripratip, Kriskrai; Thouas, George A; Chen, Qizhi

    Tissue engineering is essentially a technique for imitating nature. Natural tissues consist of three components: cells, signalling systems (e.g. growth factors) and extracellular matrix (ECM). The ECM forms a scaffold for its cells. Hence, the engineered tissue construct is an artificial scaffold populated with living cells and signalling molecules. A huge effort has been invested in bone tissue engineering, in which a highly porous scaffold plays a critical role in guiding bone and vascular tissue growth and regeneration in three dimensions. In the last two decades, numerous scaffolding techniques have been developed to fabricate highly interconnective, porous scaffolds for bone tissue engineering applications. This review provides an update on the progress of foaming technology of biomaterials, with special attention focused on computer-aided manufacturing (CAM) techniques (Andrade et al. 2002). This article starts with a brief introduction to tissue engineering (Bone tissue engineering and scaffolds) and scaffolding materials (Biomaterials used in bone tissue engineering). After a brief review of conventional scaffolding techniques (Conventional scaffolding techniques), a number of CAM techniques are reviewed in great detail. For each technique, the structure and mechanical integrity of fabricated scaffolds are discussed in detail. Finally, the advantages and disadvantages of these techniques are compared (Comparison of scaffolding techniques) and summarised (Summary).

  7. Measuring techniques in emission computed tomography

    International Nuclear Information System (INIS)

    Jordan, K.; Knoop, B.

    1988-01-01

    The chapter reviews the historical development of the emission computed tomography and its basic principles, proceeds to SPECT and PET, special techniques of emission tomography, and concludes with a comprehensive discussion of the mathematical fundamentals of the reconstruction and the quantitative activity determination in vivo, dealing with radon transformation and the projection slice theorem, methods of image reconstruction such as analytical and algebraic methods, limiting conditions in real systems such as limited number of measured data, noise enhancement, absorption, stray radiation, and random coincidence. (orig./HP) With 111 figs., 6 tabs [de

  8. Calculating Shocks In Flows At Chemical Equilibrium

    Science.gov (United States)

    Eberhardt, Scott; Palmer, Grant

    1988-01-01

    Boundary conditions prove critical. Conference paper describes algorithm for calculation of shocks in hypersonic flows of gases at chemical equilibrium. Although algorithm represents intermediate stage in development of reliable, accurate computer code for two-dimensional flow, research leading up to it contributes to understanding of what is needed to complete task.

  9. Chemical equilibrium of ablation materials including condensed species

    Science.gov (United States)

    Stroud, C. W.; Brinkley, K. L.

    1975-01-01

    Equilibrium is determined by finding the chemical composition with minimum free energy. The method of steepest descent is applied to a quadratic representation of the free-energy surface. The solution is initiated by selecting an arbitrary set of mole fractions, from which a point on the free-energy surface is computed.
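
    A hedged sketch of the underlying idea, minimizing the mixture Gibbs energy subject to element balance for a toy H2/O2/H2O system, is given below; it uses a general-purpose constrained minimizer (SLSQP) rather than the steepest-descent scheme of the cited method, and the standard-state chemical potentials are placeholder values.

      import numpy as np
      from scipy.optimize import minimize

      species = ["H2", "O2", "H2O"]
      mu0_RT = np.array([-15.0, -20.0, -45.0])   # placeholder dimensionless standard-state potentials
      A = np.array([[2, 0, 2],                   # element balance rows: H, O
                    [0, 2, 1]], dtype=float)
      b = A @ np.array([2.0, 1.0, 0.0])          # elemental abundances of a 2:1 H2/O2 feed

      def gibbs(n):
          n = np.maximum(n, 1e-12)               # keep logarithms finite
          return np.sum(n * (mu0_RT + np.log(n / n.sum())))   # G/RT of an ideal-gas mixture at 1 bar

      res = minimize(gibbs, x0=np.array([1.0, 0.5, 0.5]), method="SLSQP",
                     constraints={"type": "eq", "fun": lambda n: A @ n - b},
                     bounds=[(0.0, None)] * 3)
      print(dict(zip(species, res.x)))           # equilibrium mole numbers (should favor H2O)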

  10. Gated equilibrium bloodpool scintigraphy

    International Nuclear Information System (INIS)

    Reinders Folmer, S.C.C.

    1981-01-01

    This thesis deals with the clinical applications of gated equilibrium bloodpool scintigraphy, performed with either a gamma camera or a portable detector system, the nuclear stethoscope. The main goal has been to define the value and limitations of noninvasive measurements of left ventricular ejection fraction as a parameter of cardiac performance in various disease states, both for diagnostic purposes as well as during follow-up after medical or surgical intervention. Secondly, it was attempted to extend the use of the equilibrium bloodpool techniques beyond the calculation of ejection fraction alone by considering the feasibility to determine ventricular volumes and by including the possibility of quantifying valvular regurgitation. In both cases, it has been tried to broaden the perspective of the observations by comparing them with results of other, invasive and non-invasive, procedures, in particular cardiac catheterization, M-mode echocardiography and myocardial perfusion scintigraphy. (Auth.)

  11. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to predict the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and recent trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  12. Computational Intelligence based techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini

    2014-01-01

    Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied for islanding detection of distributed generation. Moreover, the paper compares the accuracies of computational intelligence based techniques with those of existing techniques to provide useful information for industry and utility researchers to determine the best method for their respective system

  13. Equilibrium and non-equilibrium phenomena in arcs and torches

    NARCIS (Netherlands)

    Mullen, van der J.J.A.M.

    2000-01-01

    A general treatment of non-equilibrium plasma aspects is obtained by relating transport fluxes to equilibrium-restoring processes in so-called disturbed Bilateral Relations. The (non-)equilibrium state of a small microwave-induced plasma serves as a case study.

  14. Nonlinear equilibrium in Tokamaks including convective terms and viscosity

    International Nuclear Information System (INIS)

    Martin, P.; Castro, E.; Puerta, J.

    2003-01-01

    MHD equilibrium in tokamaks becomes very complex when the non-linear convective term and viscosity are included in the momentum equation. In order to simplify the analysis, each new term has been separated into gradient-type terms and vorticity-dependent terms. For the special case in which the vorticity vanishes, an extended Grad-Shafranov type equation can be obtained. However, the magnetic surfaces are now neither isobars nor current surfaces, as they are in the usual Grad-Shafranov treatment. The non-linear convective term introduces gradients of Bernoulli-type kinetic terms. Montgomery and other authors have shown the importance of the viscosity terms in tokamaks [1,2]; here the treatment is carried out for the equilibrium condition, including the generalized tokamak coordinates recently described [3], which simplify the equilibrium analysis. Calculation of the new isobar surfaces is difficult, and some computations have been carried out elsewhere for particular cases [3]. Here, our analysis is extended by discussing how the toroidal current density, plasma pressure and toroidal field are modified across the midplane because of the new terms (convective and viscous). New calculations and computations are also presented. (Author)
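
    For orientation, the standard static, isotropic-pressure Grad-Shafranov equation that the extended equation described above generalizes can be written (with ψ the poloidal flux, p(ψ) the pressure and F(ψ) = R B_φ) as

      R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right) + \frac{\partial^2 \psi}{\partial Z^2} \;=\; -\,\mu_0 R^2 \frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi};

    the convective and viscous contributions discussed in the abstract add Bernoulli-type kinetic and viscous terms to the right-hand side, which is why the flux surfaces are no longer isobars.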

  15. The automated design of materials far from equilibrium

    Science.gov (United States)

    Miskin, Marc Z.

    Automated design is emerging as a powerful concept in materials science. By combining computer algorithms, simulations, and experimental data, new techniques are being developed that start with high level functional requirements and identify the ideal materials that achieve them. This represents a radically different picture of how materials become functional in which technological demand drives material discovery, rather than the other way around. At the frontiers of this field, materials systems previously considered too complicated can start to be controlled and understood. Particularly promising are materials far from equilibrium. Material robustness, high strength, self-healing and memory are properties displayed by several materials systems that are intrinsically out of equilibrium. These and other properties could be revolutionary, provided they can first be controlled. This thesis conceptualizes and implements a framework for designing materials that are far from equilibrium. We show how, even in the absence of a complete physical theory, design from the top down is possible and lends itself to producing physical insight. As a prototype system, we work with granular materials: collections of athermal, macroscopic identical objects, since these materials function both as an essential component of industrial processes as well as a model system for many non-equilibrium states of matter. We show that by placing granular materials in the context of design, benefits emerge simultaneously for fundamental and applied interests. As first steps, we use our framework to design granular aggregates with extreme properties like high stiffness, and softness. We demonstrate control over nonlinear effects by producing exotic aggregates that stiffen under compression. Expanding on our framework, we conceptualize new ways of thinking about material design when automatic discovery is possible. We show how to build rules that link particle shapes to arbitrary granular packing

  16. Variability of radon and thoron equilibrium factors in indoor environment of Garhwal Himalaya.

    Science.gov (United States)

    Prasad, Mukesh; Rawat, Mukesh; Dangwal, Anoop; Kandari, Tushar; Gusain, G S; Mishra, Rosaline; Ramola, R C

    2016-01-01

    The measurements of radon, thoron and their progeny concentrations have been carried out in the dwellings of Uttarkashi and Tehri districts of Garhwal Himalaya, India using LR-115 detector based pin-hole dosimeter and DRPS/DTPS techniques. The equilibrium factors for radon, thoron and their progeny were calculated by using the values measured with these techniques. The average values of equilibrium factor between radon and its progeny have been found to be 0.44, 0.39, 0.39 and 0.28 for rainy, autumn, winter and summer seasons, respectively. For thoron and its progeny, the average values of equilibrium factor have been found to be 0.04, 0.04, 0.04 and 0.03 for rainy, autumn, winter and summer seasons, respectively. The equilibrium factor between radon and its progeny has been found to be dependent on the seasonal changes. However, the equilibrium factor for thoron and progeny has been found to be same for rainy, autumn and winter seasons but slightly different for summer season. The annual average equilibrium factors for radon and thoron have been found to vary from 0.23 to 0.80 with an average of 0.42 and from 0.01 to 0.29 with an average of 0.07, respectively. The detailed discussion of the measurement techniques and the explanation for the results obtained is given in the paper.

  17. One-loop calculation in time-dependent non-equilibrium thermo field dynamics

    International Nuclear Information System (INIS)

    Umezawa, H.; Yamanaka, Y.

    1989-01-01

    This paper is a review on the structure of thermo field dynamics (TFD) in which the basic concepts such as the thermal doublets, the quasi-particles and the self-consistent renormalization are presented in detail. A strong emphasis is put on the computational scheme. A detailed structure of this scheme is illustrated by the one-loop calculation in a non-equilibrium time-dependent process. A detailed account of the one-loop calculation has never been reported anywhere. The role of the self-consistent renormalization is explained. The equilibrium TFD is obtained as the long-time limit of non-equilibrium TFD. (author)

  18. Teachers of Advertising Media Courses Describe Techniques, Show Computer Applications.

    Science.gov (United States)

    Lancaster, Kent M.; Martin, Thomas C.

    1989-01-01

    Reports on a survey of university advertising media teachers regarding textbooks and instructional aids used, teaching techniques, computer applications, student placement, instructor background, and faculty publishing. (SR)

  19. Modeling Computer Virus and Its Dynamics

    Directory of Open Access Journals (Sweden)

    Mei Peng

    2013-01-01

    Full Text Available Based on the fact that a computer can be infected by infected and exposed computers, and that some computers in the susceptible and exposed states can gain immunity through antivirus protection, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for the spread of the computer virus on the internet, is determined. Second, this model has a virus-free equilibrium P0, which means that the infected part of the computers disappears and the virus dies out; P0 is a globally asymptotically stable equilibrium if R0 < 1. If R0 > 1, then this model has only one viral equilibrium P*, which means that the computer virus persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
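
    As a hedged illustration only, a generic SEIR-type spreading model with an R0 threshold can be integrated as shown below; the equations and parameter values are illustrative stand-ins, not the cited model.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, sigma, gamma, mu = 0.5, 0.2, 0.1, 0.01   # infection, latency, cure, turnover rates (illustrative)

      def rhs(t, y):
          S, E, I, R = y
          N = S + E + I + R
          return [mu * N - beta * S * I / N - mu * S,
                  beta * S * I / N - (sigma + mu) * E,
                  sigma * E - (gamma + mu) * I,
                  gamma * I - mu * R]

      R0 = beta * sigma / ((sigma + mu) * (gamma + mu))   # basic reproduction number of this toy model
      sol = solve_ivp(rhs, (0.0, 400.0), [0.99, 0.0, 0.01, 0.0])
      print(f"R0 = {R0:.2f}, infected fraction at t = 400: {sol.y[2, -1]:.4f}")
      # R0 < 1: the infection dies out (virus-free equilibrium); R0 > 1: it settles at an endemic level.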

  20. Collisional-radiative switching - A powerful technique for converging non-LTE calculations

    Science.gov (United States)

    Hummer, D. G.; Voels, S. A.

    1988-01-01

    A very simple technique has been developed to converge statistical equilibrium and model atmospheric calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K, and log g = 2.9.
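
    A schematic sketch of the switching loop is given below; solve_statistical_equilibrium is a hypothetical placeholder (no real solver is implied), and the starting populations and the schedule of the collisional enhancement factor are assumptions of the sketch.

      import numpy as np

      def solve_statistical_equilibrium(pops_guess, collisional_scale):
          """Hypothetical placeholder for the non-LTE statistical-equilibrium solver; it
          would iterate the rate equations with all collisional rates multiplied by
          collisional_scale, starting from pops_guess, until converged."""
          return pops_guess   # stub

      # Start from LTE-like populations (placeholder values) and switch the collisional
      # enhancement factor down to 1 in logarithmic steps, reusing each converged
      # solution as the starting guess for the next, less collision-dominated, problem.
      populations = np.array([0.7, 0.2, 0.1])
      for scale in np.logspace(6, 0, num=13):    # 10^6 -> 1
          populations = solve_statistical_equilibrium(populations, scale)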

  1. Analytic and numerical studies of Scyllac equilibrium

    International Nuclear Information System (INIS)

    Barnes, D.C.; Brackbill, J.U.; Dagazian, R.Y.; Freidberg, J.P.; Schneider, W.; Betancourt, O.; Garabedian, P.

    1976-01-01

    The results of both numerical and analytic studies of the Scyllac equilibria are presented. Analytic expansions are used to derive equilibrium equations appropriate to noncircular cross sections, and compute the stellarator fields which produce toroidal force balance. Numerical algorithms are used to solve both the equilibrium equations and the full system of dynamical equations in three dimensions. Numerical equilibria are found for both l = 1,0 and l = 1,2 systems. It is found that the stellarator fields which produce equilibria in the l = 1,0 system are larger for diffuse than for sharp boundary plasma profiles, and that the stability of the equilibria depends strongly on the harmonic content of the stellarator fields

  2. Unified commutation-pruning technique for efficient computation of composite DFTs

    Science.gov (United States)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem that always requires fewer or, at most, the same number of arithmetic operations than any other feasible modality. The DFTCOMM method therefore outperforms the existing competing pruning techniques reported in the literature in the sense of attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with

  3. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    Science.gov (United States)

    Min, M.

    2017-10-01

    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (about 3.5 × 10^5 lines per second per core on a standard current day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
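
    A minimal sketch of the sampling idea described above follows; a Lorentzian stands in for the Voigt profile, and the synthetic line list, widths and sampling rate are assumptions of this sketch.

      import numpy as np

      rng = np.random.default_rng(1)
      nu_grid = np.linspace(0.0, 100.0, 2001)            # wavenumber grid (arbitrary units)
      dnu = nu_grid[1] - nu_grid[0]
      kappa = np.zeros_like(nu_grid)

      n_lines = 2000
      centers = rng.uniform(0.0, 100.0, n_lines)         # synthetic line centers
      strengths = 10.0 ** rng.uniform(-6, 0, n_lines)    # integrated line strengths
      gamma = 0.05                                       # Lorentzian half-width

      samples_per_unit_strength = 1e4
      for nu0, S in zip(centers, strengths):
          n_samp = max(1, int(S * samples_per_unit_strength))     # more samples for stronger lines
          nu_samples = nu0 + gamma * rng.standard_cauchy(n_samp)  # draws from the line profile
          idx = np.searchsorted(nu_grid, nu_samples)
          keep = (idx > 0) & (idx < nu_grid.size)
          np.add.at(kappa, idx[keep], (S / n_samp) / dnu)         # deposit equal shares of the line opacity
      # kappa now approximates the summed line opacity: strong lines are finely resolved,
      # while weak lines still contribute their full integrated opacity to the continuum.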

  4. A survey of energy saving techniques for mobile computers

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Havinga, Paul J.M.

    1997-01-01

    Portable products such as pagers, cordless and digital cellular telephones, personal audio equipment, and laptop computers are increasingly being used. Because these applications are battery powered, reducing power consumption is vital. In this report we first give a survey of techniques for

  5. Chemical equilibrium in the GaP-HCl and InP-HCl systems

    International Nuclear Information System (INIS)

    Goliusov, V.A.; Voronin, V.A.; Chuchmarev, S.K.

    1983-01-01

    Chemical equilibrium in the GaP-HCl and InP-HCl systems is investigated experimentally, and a polynomial dependence of the total pressure on temperature (800-1100 K) and hydrochloric acid concentration under the experimental conditions is obtained. A technique for equilibrium calculation in hydrogen-containing chemical systems based on the tensimetric investigation results is suggested. The equilibrium gas phase composition in the GaP(InP)-HCl systems is determined, together with thermodynamic characteristics that are self-consistent within the framework of the adopted equilibrium model. The efficiency of gas-phase precipitation of indium and gallium phosphides in the GaP(InP)-HCl systems is calculated

  6. Computer Aided Measurement Laser (CAML): technique to quantify post-mastectomy lymphoedema

    International Nuclear Information System (INIS)

    Trombetta, Chiara; Abundo, Paolo; Felici, Antonella; Ljoka, Concetta; Foti, Calogero; Cori, Sandro Di; Rosato, Nicola

    2012-01-01

    Lymphoedema can be a side effect of cancer treatment. Even though several methods for assessing lymphoedema are used in clinical practice, an objective quantification of lymphoedema has been problematic. The aim of the study was to determine the objectivity, reliability and repeatability of the computer aided measurement laser (CAML) technique. The CAML technique is based on computer aided design (CAD) methods and requires an infrared laser scanner. Measurements are scanned and the information describing the size and shape of the limb allows the model to be designed using the CAD software. The objectivity and repeatability were first established using a phantom. Subsequently, a group of subjects presenting with post-breast cancer lymphoedema was evaluated, using the contralateral limb as a control. Results confirmed that in clinical settings the CAML technique is easy to perform, rapid and provides meaningful data for assessing lymphoedema. Future research will include a comparison of upper limb CAML technique results between healthy subjects and patients with known lymphoedema.

  7. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    Science.gov (United States)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  8. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  9. Module description of TOKAMAK equilibrium code MEUDAS

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2002-01-01

    The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, such as the design of devices, theoretical studies and the analysis of experimental results. For this reason, an efficient MHD analysis code has been developed at JAERI since the start of TOKAMAK research. The free boundary equilibrium code ''MEUDAS'', which uses both the DCR method (Double-Cyclic-Reduction Method) and a Green's function, can specify the pressure and the current distribution arbitrarily, and has been applied to the analysis of a broad range of physical subjects as a code offering rapidity and high precision. The MHD convergence calculation technique in ''MEUDAS'' has also been built into various newly developed codes. This report explains in detail each module in ''MEUDAS'' used for performing the convergence calculation in solving the MHD equilibrium. (author)

  10. Thermodynamic equilibrium-air correlations for flowfield applications

    Science.gov (United States)

    Zoby, E. V.; Moss, J. N.

    1981-01-01

    Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. The agreement between the postshock results computed by the correlation equations and detailed chemistry calculations is very good. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Heating rates computed by the approximate code and by a viscous-shock-layer method also agree well. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.

  11. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs) without more complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images and noticeably reduces the computation time thanks to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched in parallel and traced under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  12. Validation of vibration-dissociation coupling models in hypersonic non-equilibrium separated flows

    Science.gov (United States)

    Shoev, G.; Oblapenko, G.; Kunova, O.; Mekhonoshina, M.; Kustova, E.

    2018-03-01

    The validation of recently developed models of vibration-dissociation coupling is discussed in application to numerical solutions of the Navier-Stokes equations in a two-temperature approximation for a binary N2/N flow. Vibrational-translational relaxation rates are computed using the Landau-Teller formula generalized for strongly non-equilibrium flows obtained in the framework of the Chapman-Enskog method. Dissociation rates are calculated using the modified Treanor-Marrone model taking into account the dependence of the model parameter on the vibrational state. The solutions are compared to those obtained using traditional Landau-Teller and Treanor-Marrone models, and it is shown that for high-enthalpy flows, the traditional and recently developed models can give significantly different results. The computed heat flux and pressure on the surface of a double cone are in good agreement with experimental data available in the literature on low-enthalpy flow with strong thermal non-equilibrium. The computed heat flux on a double wedge qualitatively agrees with available data for high-enthalpy non-equilibrium flows. Different contributions to the heat flux calculated using rigorous kinetic theory methods are evaluated. Quantitative discrepancies between numerical and experimental data are discussed.
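
    For orientation, the classical Landau-Teller relaxation equation that the generalized formula reduces to near equilibrium can be written (with E_v the vibrational energy, E_v^eq(T) its equilibrium value at the translational temperature and τ_VT the vibrational-translational relaxation time; notation assumed here) as

      \frac{dE_v}{dt} \;=\; \frac{E_v^{eq}(T) - E_v}{\tau_{VT}} .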

  13. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
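
    The sketch below illustrates the extrapolation idea with a much simpler single-exponential surrogate fitted to the first 20% of a synthetic sensor record; the actual algorithm uses the exact conduction solution and a multi-parameter estimation, which is not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 6.0, 361)                      # hours
      v_true = 5.0 - 4.0 * np.exp(-t / 1.2)               # synthetic sensor voltage
      v_meas = v_true + rng.normal(0.0, 0.01, t.size)     # add measurement noise

      model = lambda tt, v_eq, a, tau: v_eq - a * np.exp(-tt / tau)
      early = t <= 0.2 * t[-1]                            # use only the first 20% of the run
      popt, _ = curve_fit(model, t[early], v_meas[early], p0=(v_meas[early][-1], 1.0, 1.0))
      print(f"predicted equilibrium voltage: {popt[0]:.3f} (true value 5.000)")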

  14. Non-Equilibrium Solidification of Undercooled Metallic Melts

    Directory of Open Access Journals (Sweden)

    Dieter M. Herlach

    2014-06-01

    Full Text Available If a liquid is undercooled below its equilibrium melting temperature, an excess Gibbs free energy is created. This gives access to the solidification of metastable solids under non-equilibrium conditions. In the present work, techniques of containerless processing are applied. Electromagnetic and electrostatic levitation make it possible to freely suspend a liquid drop of a few millimeters in diameter. Heterogeneous nucleation on container walls is completely avoided, leading to large undercoolings. The freely suspended drop is accessible for direct observation of rapid solidification under conditions far away from equilibrium by applying proper diagnostic means. Nucleation of metastable crystalline phases is monitored by X-ray diffraction using synchrotron radiation during non-equilibrium solidification. While nucleation preselects the crystallographic phase, subsequent crystal growth controls the microstructure evolution. Metastable microstructures are obtained from deeply undercooled melts, such as supersaturated solid solutions and disordered superlattice structures of intermetallics. Nucleation and crystal growth take place by heat and mass transport. Comparative experiments in reduced gravity allow for investigations of how forced convection can be used to alter the transport processes and to design materials by using undercooling and convection as process parameters.

  15. CET89 - CHEMICAL EQUILIBRIUM WITH TRANSPORT PROPERTIES, 1989

    Science.gov (United States)

    Mcbride, B.

    1994-01-01

    Scientists and engineers need chemical equilibrium composition data to calculate the theoretical thermodynamic properties of a chemical system. This information is essential in the design and analysis of equipment such as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical processing equipment. The substantial amount of numerical computation required to obtain equilibrium compositions and transport properties for complex chemical systems led scientists at NASA's Lewis Research Center to develop CET89, a program designed to calculate the thermodynamic and transport properties of these systems. CET89 is a general program which will calculate chemical equilibrium compositions and mixture properties for any chemical system with available thermodynamic data. Generally, mixtures may include condensed and gaseous products. CET89 performs the following operations: it 1) obtains chemical equilibrium compositions for assigned thermodynamic states, 2) calculates dilute-gas transport properties of complex chemical mixtures, 3) obtains Chapman-Jouguet detonation properties for gaseous species, 4) calculates incident and reflected shock properties in terms of assigned velocities, and 5) calculates theoretical rocket performance for both equilibrium and frozen compositions during expansion. The rocket performance function allows the option of assuming either a finite area or an infinite area combustor. CET89 accommodates problems involving up to 24 reactants, 20 elements, and 600 products (400 of which may be condensed). The program includes a library of thermodynamic and transport properties in the form of least squares coefficients for possible reaction products. It includes thermodynamic data for over 1300 gaseous and condensed species and transport data for 151 gases. The subroutines UTHERM and UTRAN convert thermodynamic and transport data to unformatted form for faster processing. The program conforms to the FORTRAN 77 standard, except for

  16. Oscillation Susceptibility Analysis of the ADMIRE Aircraft along the Path of Longitudinal Flight Equilibriums in Two Different Mathematical Models

    Directory of Open Access Journals (Sweden)

    Achim Ionita

    2009-01-01

    The oscillation susceptibility of the ADMIRE aircraft along the path of longitudinal flight equilibriums is analyzed numerically in the general and in a simplified flight model. More precisely, the longitudinal flight equilibriums, the stability of these equilibriums, and the existence of bifurcations along the path of these equilibriums are investigated in both models. Maneuvers and appropriate piloting tasks for the touch-down moment are simulated in both models. The computed results obtained in the two models are compared in order to see whether the movement in the landing phase computed with the simplified model is similar to that computed with the general model. The similarity we find is not a proof of the structural stability of the simplified system, which as far as we know has never been established, but it can increase confidence that the simplified system correctly describes the real phenomenon.

  17. Computed tomography of the llama head: technique and normal anatomy

    International Nuclear Information System (INIS)

    Hathcock, J.T.; Pugh, D.G.; Cartee, R.E.; Hammond, L.

    1996-01-01

    Computed tomography was performed on the head of 6 normal adult llamas. The animals were under general anesthesia and positioned in dorsal recumbency on the scanning table. The area scanned was from the external occipital protuberance to the rostral portion of the nasal passage, and the images are presented in both a bone window and a soft tissue window to allow evaluation and identification of the anatomy of the head. Computed tomography of the llama head can be accomplished by most computed tomography scanners utilizing a technique similar to that used in small animals with minor modification of the scanning table

  18. Applications of NLP Techniques to Computer-Assisted Authoring of Test Items for Elementary Chinese

    Science.gov (United States)

    Liu, Chao-Lin; Lin, Jen-Hsiang; Wang, Yu-Chun

    2010-01-01

    The authors report an implemented environment for computer-assisted authoring of test items and provide a brief discussion about the applications of NLP techniques for computer assisted language learning. Test items can serve as a tool for language learners to examine their competence in the target language. The authors apply techniques for…

  19. Mathematics in computed tomography and related techniques

    International Nuclear Information System (INIS)

    Sawicka, B.

    1992-01-01

    The mathematical basis of computed tomography (CT) was formulated in 1917 by Radon. His theorem states that the 2-D function f(x,y) can be determined at all points from a complete set of its line integrals. Modern methods of image reconstruction include three approaches: algebraic reconstruction techniques with simultaneous iterative reconstruction or simultaneous algebraic reconstruction; convolution back projection; and the Fourier transform method. There is no one best approach. Because the experimental data do not strictly satisfy theoretical models, a number of effects have to be taken into account; in particular, the problems of beam geometry, finite beam dimensions and distribution, beam scattering, and the radiation source spectrum. Tomography with truncated data is of interest, employing mathematical approximations to compensate for the unmeasured projection data. Mathematical techniques in image processing and data analysis are also extensively used. 13 refs
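
    As a small illustration of one of the reconstruction families named above, the following sketch implements a basic algebraic reconstruction (Kaczmarz) iteration on a toy system of ray sums; the matrix and data are invented for the example.

```python
import numpy as np

# Minimal sketch of an algebraic reconstruction technique (Kaczmarz iteration):
# each projection equation a_i . x = b_i is enforced in turn by projecting the
# current image estimate onto its hyperplane. In a real scanner, A and b would
# come from the acquisition geometry (line integrals); here they are tiny arrays.

def art(A, b, n_sweeps=50, relax=1.0):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])      # toy system of "ray sums"
x_true = np.array([1.0, 2.0, 3.0])
print(art(A, A @ x_true))            # approaches [1, 2, 3]
```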

  20. [Clinical analysis of 12 cases of orthognathic surgery with digital computer-assisted technique].

    Science.gov (United States)

    Tan, Xin-ying; Hu, Min; Liu, Chang-kui; Liu, Hua-wei; Liu, San-xia; Tao, Ye

    2014-06-01

    The aim of this study was to investigate the effect of the digital computer-assisted technique in orthognathic surgery. Twelve patients with jaw malformation were treated in our department from January 2008 to December 2011. With the help of CT and three-dimensional reconstruction techniques, the 12 patients underwent surgical treatment and the results were evaluated after surgery. The digital computer-assisted technique could clearly show the nature of the jaw deformity and assist virtual surgery. After surgery, all patients were satisfied with the results. Digital orthognathic surgery can improve the predictability of the surgical procedure, facilitate communication with patients, shorten operative time, and reduce patients' pain.

  1. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate politics

    Science.gov (United States)

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements
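
    The thesis text is not reproduced here; as a hedged sketch of the Monte Carlo step it describes (sampling supply-side parameters from their estimated joint distribution and re-running the model for each draw), one might write something like the following, where run_cge is a hypothetical stand-in for a full model solve and all numbers are placeholders.

```python
import numpy as np

# Hedged sketch of the sampling step: draw supply-side parameter vectors (e.g.,
# CES substitution elasticities) from the joint normal distribution implied by
# the econometric point estimates and their variance-covariance matrix, then run
# one model evaluation per draw. run_cge is a placeholder, not real model code.

rng = np.random.default_rng(42)
theta_hat = np.array([0.5, 0.8, 0.3])             # illustrative elasticity estimates
vcov = np.array([[0.010, 0.002, 0.000],
                 [0.002, 0.020, 0.001],
                 [0.000, 0.001, 0.005]])           # illustrative covariance matrix

def run_cge(theta):
    # placeholder for a baseline/counterfactual solve; returns a cost metric
    return float(np.sum(theta ** 2))

draws = rng.multivariate_normal(theta_hat, vcov, size=1000)
costs = np.array([run_cge(th) for th in draws])
print(np.percentile(costs, [5, 50, 95]))           # probabilistic cost estimates
```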

  2. Reconstruction of equilibrium trajectories during whole-body movements.

    Science.gov (United States)

    Domen, K; Latash, M L; Zatsiorsky, V M

    1999-03-01

    The framework of the equilibrium-point hypothesis was used to reconstruct equilibrium trajectories (ETs) of the ankle, hip and body center of mass during quick voluntary hip flexions ('Japanese courtesy bow') by standing subjects. Different spring loads applied to the subject's back were used to introduce smooth perturbations that are necessary to reconstruct ETs based on a series of trials at the same task. Time patterns of muscle torques were calculated using inverse dynamics techniques. A second-order linear model was employed to calculate the instantaneous position of the spring-like joint or center of mass characteristic at different times during the movement. ETs of the joints and of the center of mass had significantly different shapes from the actual trajectories. Integral measures of electromyographic bursts of activity in postural muscles demonstrated a relation to muscle length corresponding to the equilibrium-point hypothesis.

  3. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests
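
    As an illustration of the attenuation law at the core of such a simulator (not code from the software described above), a single polychromatic ray crossing two materials could be evaluated as follows, with invented attenuation coefficients and path lengths.

```python
import numpy as np

# Minimal sketch: the X-ray attenuation (Beer-Lambert) law applied along one ray
# that crosses several materials, for a polychromatic spectrum. Path lengths
# would normally come from ray/CAD intersection; here they are given directly.

energies = np.array([40.0, 60.0, 80.0])           # keV bins of the source spectrum
spectrum = np.array([0.2, 0.5, 0.3])              # relative photon fluence per bin
# linear attenuation coefficients mu(E) per material, 1/cm (illustrative values)
mu = {"steel": np.array([10.0, 3.0, 1.5]),
      "air":   np.array([1e-3, 1e-3, 1e-3])}
path_cm = {"steel": 0.8, "air": 30.0}             # intersection lengths along the ray

transmitted = spectrum * np.exp(-sum(mu[m] * path_cm[m] for m in mu))
print("transmitted fraction per energy bin:", transmitted / spectrum)
print("total transmission:", transmitted.sum() / spectrum.sum())
```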

  4. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.

  5. Determining flexor-tendon repair techniques via soft computing

    Science.gov (United States)

    Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.

    2001-01-01

    An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the Taguchi method. Using the Taguchi method results in the need to perform ad-hoc decisions when the outcomes for individual objectives are contradictory to a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous straightforward computational process in which changing preferences and importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.

  6. Quasi-equilibrium in glassy dynamics: an algebraic view

    International Nuclear Information System (INIS)

    Franz, Silvio; Parisi, Giorgio

    2013-01-01

    We study a chain of identical glassy systems in a constrained equilibrium, where each bond of the chain is forced to remain at a preassigned distance to the previous one. We apply this description to mean-field glassy systems in the limit of a long chain where each bond is close to the previous one. We show that this construction defines a pseudo-dynamic process that in specific conditions can formally describe real relaxational dynamics for long times. In particular, in mean-field spin glass models we can recover in this way the equations of Langevin dynamics in the long time limit at the dynamical transition temperature and below. We interpret the formal identity as evidence that in these situations the configuration space is explored in a quasi-equilibrium fashion. Our general formalism, which relates dynamics to equilibrium, puts slow dynamics in a new perspective and opens the way to the computation of new dynamical quantities in glassy systems. (paper)

  7. I. Dissociation free energies of drug-receptor systems via non-equilibrium alchemical simulations: a theoretical framework.

    Science.gov (United States)

    Procacci, Piero

    2016-06-01

    In this contribution I critically revise the alchemical reversible approach in the context of the statistical mechanics theory of non-covalent bonding in drug-receptor systems. I show that most of the pitfalls and entanglements for the binding free energy evaluation in computer simulations are rooted in the equilibrium assumption that is implicit in the reversible method. These critical issues can be resolved by using a non-equilibrium variant of the alchemical method in molecular dynamics simulations, relying on the production of many independent trajectories with a continuous dynamical evolution of an externally driven alchemical coordinate, completing the decoupling of the ligand in a matter of a few tens of picoseconds rather than nanoseconds. The absolute binding free energy can be recovered from the annihilation work distributions by applying an unbiased unidirectional free energy estimate, on the assumption that any observed work distribution is given by a mixture of normal distributions, whose components are identical in either direction of the non-equilibrium process, with weights regulated by the Crooks theorem. I finally show that the inherent reliability and accuracy of the unidirectional estimate of the decoupling free energies, based on the production of a few hundreds of non-equilibrium independent sub-nanosecond unrestrained alchemical annihilation processes, is a direct consequence of the funnel-like shape of the free energy surface in molecular recognition. An application of the technique to a real drug-receptor system is presented in the companion paper.
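
    As a hedged sketch of the unidirectional estimate discussed above, restricted to the special case of a single normal work distribution (the record itself uses a mixture of normal components weighted via the Crooks theorem), the Gaussian second-cumulant estimate can be compared with the raw Jarzynski average on synthetic work values.

```python
import numpy as np

# Hedged sketch, not the full Gaussian-mixture estimator of the record: for
# annihilation work values W from independent non-equilibrium trajectories,
# compare the raw Jarzynski exponential average with the Gaussian
# (second-cumulant) unidirectional estimate dG = <W> - beta*var(W)/2.

kT = 0.596                      # kcal/mol at ~300 K
beta = 1.0 / kT

rng = np.random.default_rng(1)
W = rng.normal(loc=12.0, scale=2.0, size=400)   # synthetic work values, kcal/mol

dG_jarzynski = -kT * np.log(np.mean(np.exp(-beta * W)))   # biased with few samples
dG_gaussian = W.mean() - 0.5 * beta * W.var(ddof=1)       # exact if W is Gaussian
print(f"Jarzynski estimate: {dG_jarzynski:.2f} kcal/mol")
print(f"Gaussian estimate:  {dG_gaussian:.2f} kcal/mol")
```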

  8. Equilibrium and pre-equilibrium emissions in proton-induced ...

    Indian Academy of Sciences (India)

    necessary for the domain of fission-reactor technology for the calculation of nuclear transmutation ... tions occur in three stages: INC, pre-equilibrium and equilibrium (or compound ... In the evaporation phase of the reaction, the.

  9. Incorporation of a Chemical Equilibrium Equation of State into LOCI-Chem

    Science.gov (United States)

    Cox, Carey F.

    2005-01-01

    Renewed interest in the development of advanced high-speed transport, reentry vehicles and propulsion systems has led to a resurgence of research into high-speed aerodynamics. As this flow regime is typically dominated by hot reacting gaseous flow, efficient models for the characteristic chemical activity are necessary for accurate and cost-effective analysis and design of aerodynamic vehicles that transit this regime. The LOCI-Chem code recently developed by Ed Luke at Mississippi State University for NASA/MSFC, and used by NASA/MSFC and SSC, represents an important step in providing an accurate, efficient computational tool for the simulation of reacting flows through the use of finite-rate kinetics [3]. Finite-rate chemistry, however, requires the solution of N-1 additional species mass conservation equations with source terms involving reaction kinetics that are not fully understood. In the equilibrium limit, where the reaction rates approach infinity, these equations become very stiff. Under the assumption of local chemical equilibrium, the set of governing equations reduces back to the usual gas dynamic equations, which requires less computation while still allowing the inclusion of reacting-flow phenomenology. The incorporation of a chemical equilibrium equation of state module into the LOCI-Chem code was the primary objective of the current research. The major goals of the project were: (1) the development of a chemical equilibrium composition solver, and (2) the incorporation of the chemical equilibrium solver into LOCI-Chem. Due to time and resource constraints, code optimization was not considered unless it was important to the proper functioning of the code.
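
    As a hedged sketch of what a chemical equilibrium composition solver of the kind named in goal (1) does (the species set, reference Gibbs energies and element matrix below are illustrative, not data or code from LOCI-Chem), one can minimize the mixture Gibbs energy subject to element balance:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Gibbs-energy-minimization solver for an ideal-gas N2/N mixture at
# fixed temperature, assuming pressure equal to the reference pressure. Values of
# g0 (mu0_i/RT) are invented for the example.

species = ["N2", "N"]
g0 = np.array([-10.0, 5.0])          # dimensionless standard chemical potentials
A = np.array([[2.0, 1.0]])           # N atoms contributed by each species
b = np.array([2.0])                  # total moles of elemental N

def gibbs(n):
    n = np.maximum(n, 1e-12)         # keep logs finite
    return np.sum(n * (g0 + np.log(n / n.sum())))

cons = {"type": "eq", "fun": lambda n: A @ n - b}
res = minimize(gibbs, x0=np.array([0.9, 0.2]), bounds=[(1e-12, None)] * 2,
               constraints=[cons], method="SLSQP")
print(dict(zip(species, np.round(res.x, 4))))
```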

  10. Shape characteristics of equilibrium and non-equilibrium fractal clusters.

    Science.gov (United States)

    Mansfield, Marc L; Douglas, Jack F

    2013-07-28

    It is often difficult in practice to discriminate between equilibrium and non-equilibrium nanoparticle or colloidal-particle clusters that form through aggregation in gas or solution phases. Scattering studies often permit the determination of an apparent fractal dimension, but both equilibrium and non-equilibrium clusters in three dimensions frequently have fractal dimensions near 2, so that it is often not possible to discriminate on the basis of this geometrical property. A survey of the anisotropy of a wide variety of polymeric structures (linear and ring random and self-avoiding random walks, percolation clusters, lattice animals, diffusion-limited aggregates, and Eden clusters) based on the principal components of both the radius of gyration and electric polarizability tensor indicates, perhaps counter-intuitively, that self-similar equilibrium clusters tend to be intrinsically anisotropic at all sizes, while non-equilibrium processes such as diffusion-limited aggregation or Eden growth tend to be isotropic in the large-mass limit, providing a potential means of discriminating these clusters experimentally if anisotropy could be determined along with the fractal dimension. Equilibrium polymer structures, such as flexible polymer chains, are normally self-similar due to the existence of only a single relevant length scale, and are thus anisotropic at all length scales, while non-equilibrium polymer structures that grow irreversibly in time eventually become isotropic if there is no difference in the average growth rates in different directions. There is apparently no proof of these general trends and little theoretical insight into what controls the universal anisotropy in equilibrium polymer structures of various kinds. This is an obvious topic of theoretical investigation, as well as a matter of practical interest. To address this general problem, we consider two experimentally accessible ratios, one between the hydrodynamic and gyration radii, the other
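
    As a minimal illustration of one anisotropy measure discussed above (principal components of the radius-of-gyration tensor; the polarizability-tensor analysis of the record is not reproduced), one can compute the gyration eigenvalues of an ideal random walk:

```python
import numpy as np

# Minimal sketch: principal components of the gyration tensor of a cluster, and
# the ratio of largest to smallest eigenvalue as a simple shape-anisotropy index.

def gyration_anisotropy(coords):
    """coords: (N, 3) array of monomer/particle positions."""
    centered = coords - coords.mean(axis=0)
    S = centered.T @ centered / len(coords)      # gyration tensor
    eigvals = np.sort(np.linalg.eigvalsh(S))
    return eigvals, eigvals[-1] / eigvals[0]

# Example: an ideal random walk ("equilibrium" polymer) is markedly anisotropic.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=(10_000, 3)), axis=0)
eigvals, ratio = gyration_anisotropy(walk)
print("principal moments:", eigvals, "anisotropy ratio:", ratio)
```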

  11. CFD analysis of laboratory scale phase equilibrium cell operation

    Science.gov (United States)

    Jama, Mohamed Ali; Nikiforow, Kaj; Qureshi, Muhammad Saad; Alopaeus, Ville

    2017-10-01

    For the modeling of multiphase chemical reactors or separation processes, it is essential to predict accurately chemical equilibrium data, such as vapor-liquid or liquid-liquid equilibria [M. Šoóš et al., Chem. Eng. Process.: Process Intensif. 42(4), 273-284 (2003)]. The instruments used in these experiments are typically designed based on previous experiences, and their operation verified based on known equilibria of standard components. However, mass transfer limitations with different chemical systems may be very different, potentially falsifying the measured equilibrium compositions. In this work, computational fluid dynamics is utilized to design and analyze laboratory scale experimental gas-liquid equilibrium cell for the first time to augment the traditional analysis based on plug flow assumption. Two-phase dilutor cell, used for measuring limiting activity coefficients at infinite dilution, is used as a test case for the analysis. The Lagrangian discrete model is used to track each bubble and to study the residence time distribution of the carrier gas bubbles in the dilutor cell. This analysis is necessary to assess whether the gas leaving the cell is in equilibrium with the liquid, as required in traditional analysis of such apparatus. Mass transfer for six different bio-oil compounds is calculated to determine the approach equilibrium concentration. Also, residence times assuming plug flow and ideal mixing are used as reference cases to evaluate the influence of mixing on the approach to equilibrium in the dilutor. Results show that the model can be used to predict the dilutor operating conditions for which each of the studied gas-liquid systems reaches equilibrium.

  12. CFD analysis of laboratory scale phase equilibrium cell operation.

    Science.gov (United States)

    Jama, Mohamed Ali; Nikiforow, Kaj; Qureshi, Muhammad Saad; Alopaeus, Ville

    2017-10-01

    For the modeling of multiphase chemical reactors or separation processes, it is essential to predict accurately chemical equilibrium data, such as vapor-liquid or liquid-liquid equilibria [M. Šoóš et al., Chem. Eng. Process Intensif. 42(4), 273-284 (2003)]. The instruments used in these experiments are typically designed based on previous experiences, and their operation verified based on known equilibria of standard components. However, mass transfer limitations with different chemical systems may be very different, potentially falsifying the measured equilibrium compositions. In this work, computational fluid dynamics is utilized to design and analyze laboratory scale experimental gas-liquid equilibrium cell for the first time to augment the traditional analysis based on plug flow assumption. Two-phase dilutor cell, used for measuring limiting activity coefficients at infinite dilution, is used as a test case for the analysis. The Lagrangian discrete model is used to track each bubble and to study the residence time distribution of the carrier gas bubbles in the dilutor cell. This analysis is necessary to assess whether the gas leaving the cell is in equilibrium with the liquid, as required in traditional analysis of such apparatus. Mass transfer for six different bio-oil compounds is calculated to determine the approach equilibrium concentration. Also, residence times assuming plug flow and ideal mixing are used as reference cases to evaluate the influence of mixing on the approach to equilibrium in the dilutor. Results show that the model can be used to predict the dilutor operating conditions for which each of the studied gas-liquid systems reaches equilibrium.

  13. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
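
    As a hedged sketch of the classification step only (the jet-image preprocessing of the record is omitted, and the images below are random stand-ins rather than simulated calorimeter data), a Fisher discriminant on flattened pixel vectors could be set up as follows:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hedged sketch: treat each (preprocessed) jet-image as a flattened pixel vector
# and train a Fisher linear discriminant to separate "signal" from "background"
# jets. The data here are synthetic stand-ins for calorimeter-tower grids.

rng = np.random.default_rng(0)
n, side = 2000, 25                       # 2000 jets per class, 25x25 tower grid
X_bkg = rng.exponential(1.0, size=(n, side * side))
X_sig = rng.exponential(1.0, size=(n, side * side))
X_sig[:, :50] += 0.5                     # inject a crude class difference
X = np.vstack([X_sig, X_bkg])
y = np.array([1] * n + [0] * n)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)                            # learned axis plays the role of the jet-image discriminant
print("training accuracy:", lda.score(X, y))
```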

  14. Jet-images: computer vision inspired techniques for jet tagging

    International Nuclear Information System (INIS)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel

    2015-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  15. Research on integrated simulation of fluid-structure system by computation science techniques

    International Nuclear Information System (INIS)

    Yamaguchi, Akira

    1996-01-01

    At the Power Reactor and Nuclear Fuel Development Corporation, research on the integrated simulation of fluid-structure systems by computational science techniques has been carried out; through this work, the verification of plant systems, which has so far depended on large-scale experiments, is to be replaced by computational science techniques, with the aim of reducing development costs and optimizing FBR systems. For this purpose, it is necessary to establish the technology for integrally and accurately analyzing complicated phenomena (simulation technology), the technology for applying it to large-scale problems (speed-up technology), and the technology for assuring the reliability of the analysis results when simulation technology is used for the licensing and approval of FBRs (verification technology). The simulation of fluid-structure interaction, heat-flow simulation in spaces with complicated geometry, and the related technologies are explained. As applications of computational science techniques, the elucidation of phenomena by numerical experiment and numerical simulation as a substitute for tests are discussed. (K.I.)

  16. A new estimation method for nuclide number densities in equilibrium cycle

    International Nuclear Information System (INIS)

    Seino, Takeshi; Sekimoto, Hiroshi; Ando, Yoshihira.

    1997-01-01

    A new method is proposed for estimating the nuclide number densities of an LWR equilibrium cycle by multi-recycling calculation. Conventionally, a large computation time is needed to attain the ultimate equilibrium state. Hence, a cycle with nearly constant fuel composition has been taken as the equilibrium state, reached after a few recycling calculations of a simulated cycle operation under a specific fuel core design. The present method uses steady-state fuel nuclide number densities, obtained from a continuously fuel-supplied core model, as the initial guess for the multi-recycling burnup calculation. These number densities are adjusted to serve as the initial number densities of the nuclides in a batch-supplied fuel. It was found that the calculated number densities attain a more precise equilibrium state than a conventional multi-recycling calculation does, with a small number of recyclings. In particular, the present method could give the ultimate equilibrium number densities of nuclides with mass numbers higher than those of 245Cm and 244Pu, which could not attain the ultimate equilibrium state within a reasonable number of iterations using a conventional method. (author)

  17. Non-equilibrium transport in the quantum dot: quench dynamics and non-equilibrium steady state

    Science.gov (United States)

    Culver, Adrian; Andrei, Natan

    We present an exact method of calculating the non-equilibrium current driven by a voltage drop across a quantum dot. The system is described by the two lead Anderson model at zero temperature with on-site Coulomb repulsion and non-interacting, linearized leads. We prepare the system in an initial state consisting of a free Fermi sea in each lead with the voltage drop given as the difference between the two Fermi levels. We quench the system by coupling the dot to the leads at t = 0 and following the time evolution of the wavefunction. In the long time limit a new type of Bethe Ansatz wavefunction emerges, which satisfies the Lippmann-Schwinger equation with the two Fermi seas serving as the boundary conditions. This exact, non-perturbative solution describes the non-equilibrium steady state of the system. We describe how to use this solution to compute the infinite time limit of the expectation value of the current operator at a given voltage, which would yield the I-V characteristic of the dot. Research supported by NSF Grant DMR 1410583.

  18. Energy flow in non-equilibrium conformal field theory

    Science.gov (United States)

    Bernard, Denis; Doyon, Benjamin

    2012-09-01

    We study the energy current and its fluctuations in quantum gapless 1d systems far from equilibrium modeled by conformal field theory, where two separated halves are prepared at distinct temperatures and glued together at a point contact. We prove that these systems converge towards steady states, and give a general description of such non-equilibrium steady states in terms of quantum field theory data. We compute the large deviation function, also called the full counting statistics, of energy transfer through the contact. These are universal and satisfy fluctuation relations. We provide a simple representation of these quantum fluctuations in terms of classical Poisson processes whose intensities are proportional to Boltzmann weights.
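
    For orientation, a standard closed-form result of this line of work (quoted here from general knowledge of non-equilibrium conformal field theory rather than from the record above, in units where k_B = ħ = 1) is the universal steady-state energy current between reservoirs at temperatures T_l and T_r:

```latex
% Universal steady-state energy current; c is the central charge (units k_B = \hbar = 1).
J_{\mathrm{ss}} \;=\; \frac{\pi c}{12}\left(T_l^{2} - T_r^{2}\right)
```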

  19. Numerical solution of dynamic equilibrium models under Poisson uncertainty

    DEFF Research Database (Denmark)

    Posch, Olaf; Trimborn, Timo

    2013-01-01

    We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retar...... solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households....

  20. Clarifications to the limitations of the s-α equilibrium model for gyrokinetic computations of turbulence

    International Nuclear Information System (INIS)

    Lapillonne, X.; Brunner, S.; Dannert, T.; Jolliet, S.; Marinoni, A.; Villard, L.; Goerler, T.; Jenko, F.; Merz, F.

    2009-01-01

    In the context of gyrokinetic flux-tube simulations of microturbulence in magnetized toroidal plasmas, different treatments of the magnetic equilibrium are examined. Considering the Cyclone DIII-D base case parameter set [Dimits et al., Phys. Plasmas 7, 969 (2000)], significant differences in the linear growth rates, the linear and nonlinear critical temperature gradients, and the nonlinear ion heat diffusivities are observed between results obtained using either an s-α or a magnetohydrodynamic (MHD) equilibrium. Similar disagreements have been reported previously [Redd et al., Phys. Plasmas 6, 1162 (1999)]. In this paper it is shown that these differences result primarily from the approximation made in the standard implementation of the s-α model, in which the straight field line angle is identified with the poloidal angle, leading to inconsistencies of order ε (ε=a/R is the inverse aspect ratio, a the minor radius and R the major radius). An equilibrium model with concentric, circular flux surfaces and a correct treatment of the straight field line angle gives results very close to those using a finite-ε, low-β MHD equilibrium. Such detailed investigation of the equilibrium implementation is of particular interest when comparing flux-tube and global codes. It is indeed shown here that previously reported agreements between local and global simulations in fact result from the order-ε inconsistencies in the s-α model, coincidentally compensating finite-ρ* effects in the global calculations, where ρ*=ρ_s/a with ρ_s the ion sound Larmor radius. True convergence between local and global simulations is finally obtained by correct treatment of the geometry in both cases, and considering the appropriate ρ*→0 limit in the latter case.
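
    As an illustration of the order-ε effect described above (assuming concentric circular flux surfaces in the large-aspect-ratio limit; the exact relation used in the sketch is a standard geometric result for that simplified model, not a formula taken from the record), the difference between the straight-field-line angle and the geometric poloidal angle can be evaluated directly:

```python
import numpy as np

# Illustrative sketch for concentric circular flux surfaces at large aspect ratio:
# the straight-field-line angle theta* differs from the geometric poloidal angle
# theta at order epsilon, which the standard s-alpha implementation neglects.

def theta_star(theta, eps):
    # relation for concentric circular surfaces (large-aspect-ratio, low-beta limit)
    return 2.0 * np.arctan(np.sqrt((1.0 - eps) / (1.0 + eps)) * np.tan(theta / 2.0))

eps = 0.18                                   # Cyclone-like inverse aspect ratio r/R
theta = np.linspace(0.0, np.pi, 7)
exact = theta_star(theta, eps)
first_order = theta - eps * np.sin(theta)    # first-order expansion in eps
print(np.max(np.abs(exact - theta)))         # difference is O(eps), not negligible
print(np.max(np.abs(exact - first_order)))   # first-order formula is already close
```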

  1. The development of a computer technique for the investigation of reactor lattice parameters

    International Nuclear Information System (INIS)

    Joubert, W.R.

    1982-01-01

    An integrated computer technique was developed whereby all the computer programmes needed to calculate reactor lattice parameters from basic neutron data could be combined in one system. The theory of the computer programmes is explained in detail. Results are given and compared with experimental values as well as with those calculated with a standard system

  2. Temperature in non-equilibrium states: a review of open problems and current proposals

    International Nuclear Information System (INIS)

    Casas-Vazquez, J; Jou, D

    2003-01-01

    The conceptual problems arising in the definition and measurement of temperature in non-equilibrium states are discussed in this paper in situations where the local-equilibrium hypothesis is no longer satisfactory. This is a necessary and urgent discussion because of the increasing interest in thermodynamic theories beyond local equilibrium, in computer simulations, in non-linear statistical mechanics, in new experiments, and in technological applications of nanoscale systems and material sciences. First, we briefly review the concept of temperature from the perspectives of equilibrium thermodynamics and statistical mechanics. Afterwards, we explore which of the equilibrium concepts may be extrapolated beyond local equilibrium and which of them should be modified, then we review several attempts to define temperature in non-equilibrium situations from macroscopic and microscopic bases. A wide review of proposals is offered on effective non-equilibrium temperatures and their application to ideal and real gases, electromagnetic radiation, nuclear collisions, granular systems, glasses, sheared fluids, amorphous semiconductors and turbulent fluids. The consistency between the different relativistic transformation laws for temperature is discussed in the new light gained from this perspective. A wide bibliography is provided in order to foster further research in this field

  3. A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis

    OpenAIRE

    Masataka, SUZUKI; Yoshihiko, YAMAZAKI; Yumiko, TANIGUCHI; Department of Psychology, Kinjo Gakuin University; Department of Health and Physical Education, Nagoya Institute of Technology; College of Human Life and Environment, Kinjo Gakuin University

    2003-01-01

    SUZUKI,M., YAMAZAKI,Y. and TANIGUCHI,Y., A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis. Adv. Exerc. Sports Physiol., Vol.9, No.1 pp.7-25, 2003. According to the equilibrium point hypothesis of motor control, control action of muscles is not explicitly computed, but rather arises as a consequence of interaction among moving equilibrium point, reflex feedback and muscle mechanical properties. This approach is attractive as it obviates the n...

  4. Fast non-linear extraction of plasma equilibrium parameters using a neural network mapping

    International Nuclear Information System (INIS)

    Lister, J.B.; Schnurrenberger, H.

    1990-07-01

    The shaping of non-circular plasmas requires a non-linear mapping between the measured diagnostic signals and selected equilibrium parameters. The particular configuration of Neural Network known as the multi-layer perceptron provides a powerful and general technique for formulating an arbitrary continuous non-linear multi-dimensional mapping. This technique has been successfully applied to the extraction of equilibrium parameters from measurements of single-null diverted plasmas in the DIII-D tokamak; the results are compared with a purely linear mapping. The method is promising, and hardware implementation is straightforward. (author) 15 refs., 7 figs
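
    As a hedged sketch of the mapping idea (not the original DIII-D diagnostic set or network; the training data below are synthetic), a multi-layer perceptron regressing a few equilibrium parameters from probe signals might be set up as follows:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch: learn a non-linear mapping from magnetic diagnostic signals to a
# few equilibrium parameters with a multi-layer perceptron. Data are synthetic.

rng = np.random.default_rng(0)
n_probes, n_params, n_samples = 40, 3, 5000
params = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))   # e.g. elongation, R, Z
mix = rng.normal(size=(n_params, n_probes))
signals = np.tanh(params @ mix) + 0.01 * rng.normal(size=(n_samples, n_probes))

mlp = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
mlp.fit(signals[:4000], params[:4000])
print("held-out R^2:", mlp.score(signals[4000:], params[4000:]))
```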

  5. Groundwater flux estimation in streams: A thermal equilibrium approach

    Science.gov (United States)

    Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon

    2018-06-01

    Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash-Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.

  6. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison is made between model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model, which successfully fits a wide range of assay data and which can be run on a mini-computer, is described. This more sophisticated model also provides estimates of the binding-site concentrations and of the respective equilibrium constants; the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ)
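
    As a minimal illustration of a multiple binding-site curve fit of the kind described above (a two-site law-of-mass-action model with invented data; not the program of the record), site concentrations and equilibrium constants can be estimated by non-linear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: bound concentration B as a function of free ligand F for a
# two-binding-site law-of-mass-action model, with site concentrations q1, q2 and
# equilibrium constants K1, K2 as fit parameters. Data are synthetic.

def two_site(F, q1, K1, q2, K2):
    return q1 * K1 * F / (1.0 + K1 * F) + q2 * K2 * F / (1.0 + K2 * F)

rng = np.random.default_rng(3)
F = np.logspace(-3, 2, 40)                               # free ligand (arbitrary units)
B = two_site(F, 1.0, 5.0, 0.4, 0.05) * (1 + 0.02 * rng.normal(size=F.size))

popt, _ = curve_fit(two_site, F, B, p0=(0.5, 1.0, 0.5, 0.1), maxfev=20000)
print("q1, K1, q2, K2 =", np.round(popt, 3))
```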

  7. Computer Tomography: A Novel Diagnostic Technique used in Horses

    African Journals Online (AJOL)

    In Veterinary Medicine, Computer Tomography (CT scan) is used more often in dogs and cats than in large animals due to their small size and ease of manipulation. This paper, however, illustrates the use of the technique in horses. CT scan was used in the diagnosis of two conditions of the head and limbs, namely alveolar ...

  8. Modeling of the equilibrium of a tokamak plasma

    International Nuclear Information System (INIS)

    Grandgirard, V.

    1999-12-01

    The simulation and control of a plasma discharge in a tokamak require efficient and accurate solution of the equilibrium, because the equilibrium must be recalculated every microsecond to simulate discharges that can last up to 1000 seconds. The purpose of this thesis is to propose numerical methods to calculate these equilibria with acceptable computer time and memory size. Chapter 1 deals with the hydrodynamic equations and sets up the problem. Chapter 2 gives a method to take the boundary conditions into account. Chapter 3 is dedicated to optimizing the inversion of the system matrix. Since this matrix is quasi-symmetric, the Woodbury method combined with the Cholesky method is used. This direct method is compared with two iterative methods: GMRES (generalized minimal residual) and BCG (bi-conjugate gradient). The last two chapters study the control of the plasma equilibrium; this work is presented in the formalism of optimal control of distributed systems and leads to non-linear state equations and quadratic functionals that are solved numerically by a sequential quadratic method. This method is based on replacing the initial problem with a series of control problems involving linear state equations. (A.C.)
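
    As an illustrative sketch of the linear-algebra idea in Chapter 3 (the actual discretized equilibrium operator is not reproduced; S, U and V below are random stand-ins), a quasi-symmetric system can be solved by combining a Cholesky factorization of the symmetric part with the Woodbury identity:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hedged sketch: a quasi-symmetric system A = S + U @ V.T (symmetric positive-
# definite part S plus a low-rank non-symmetric correction) solved with a
# Cholesky factorization of S and the Woodbury identity, instead of a general LU.

def solve_quasi_symmetric(S, U, V, b):
    c = cho_factor(S)                              # Cholesky of the symmetric part
    Sinv_b = cho_solve(c, b)
    Sinv_U = cho_solve(c, U)
    small = np.eye(U.shape[1]) + V.T @ Sinv_U      # k x k "capacitance" matrix
    return Sinv_b - Sinv_U @ np.linalg.solve(small, V.T @ Sinv_b)

rng = np.random.default_rng(0)
n, k = 200, 3
M = rng.normal(size=(n, n))
S = M @ M.T + n * np.eye(n)                        # symmetric positive definite
U, V = rng.normal(size=(n, k)), rng.normal(size=(n, k))
b = rng.normal(size=n)
x = solve_quasi_symmetric(S, U, V, b)
print(np.allclose((S + U @ V.T) @ x, b))           # True
```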

  9. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. For problems in which time-dependent behavior is significant, it is desirable to incorporate a procedure that works with the mechanical-model formulation as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent microstructural changes which often occur during the operation of structural components at increasingly high temperatures over long periods of time. Special considerations are crucial if the analysis is to be extended to the large-strain regime where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development, taking into account the various requirements stated above. (Auth.)

  10. Using nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) for simultaneous determination of concentration and equilibrium constant.

    Science.gov (United States)

    Kanoatov, Mirzo; Galievsky, Victor A; Krylova, Svetlana M; Cherney, Leonid T; Jankowski, Hanna K; Krylov, Sergey N

    2015-03-03

    Nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) is a versatile tool for studying affinity binding. Here we describe a NECEEM-based approach for simultaneous determination of both the equilibrium constant, K(d), and the unknown concentration of a binder that we call a target, T. In essence, NECEEM is used to measure the unbound equilibrium fraction, R, for the binder with a known concentration that we call a ligand, L. The first set of experiments is performed at varying concentrations of T, prepared by serial dilution of the stock solution, but at a constant concentration of L, which is as low as its reliable quantitation allows. The value of R is plotted as a function of the dilution coefficient, and dilution corresponding to R = 0.5 is determined. This dilution of T is used in the second set of experiments in which the concentration of T is fixed but the concentration of L is varied. The experimental dependence of R on the concentration of L is fitted with a function describing their theoretical dependence. Both K(d) and the concentration of T are used as fitting parameters, and their sought values are determined as the ones that generate the best fit. We have fully validated this approach in silico by using computer-simulated NECEEM electropherograms and then applied it to experimental determination of the unknown concentration of MutS protein and K(d) of its interactions with a DNA aptamer. The general approach described here is applicable not only to NECEEM but also to any other method that can determine a fraction of unbound molecules at equilibrium.
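
    As a hedged sketch of the second fitting step described above (simple 1:1 binding T + L ⇌ C is assumed; concentrations, K_d and noise are synthetic, not the MutS/aptamer data of the record), the unbound fraction R as a function of total ligand concentration can be fitted with K_d and the target concentration as free parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of the fit: unbound-ligand fraction R measured as a function of
# total ligand concentration L0, fitted with both K_d and the unknown total
# target concentration T0 as parameters (1:1 binding).

def R_unbound(L0, Kd, T0):
    s = L0 + T0 + Kd
    disc = np.clip(s * s - 4.0 * L0 * T0, 0.0, None)   # guard during optimization
    C = (s - np.sqrt(disc)) / 2.0                       # equilibrium complex
    return (L0 - C) / L0

rng = np.random.default_rng(7)
L0 = np.logspace(-9, -6, 12)                            # total ligand, mol/L
R_meas = R_unbound(L0, 5e-8, 1e-7) + 0.01 * rng.normal(size=L0.size)

popt, _ = curve_fit(R_unbound, L0, R_meas, p0=(1e-7, 1e-7))
print(f"fitted Kd = {popt[0]:.2e} M, fitted [T] = {popt[1]:.2e} M")
```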

  11. Equilibrium measurements on the REPUTE-1 RFP plasma

    International Nuclear Information System (INIS)

    Ji, H.; Toyama, H.; Shinohara, S.; Fujisawa, A.; Yamagishi, K.; Shimazu, Y.; Ejiri, A.; Shimoji, K.; Miyamoto, K.

    1989-01-01

    Global plasma equilibrium measurements by external magnetic probes are widely used on toroidal plasmas, i.e. tokamaks or RFPs, because of their simplicity and convenience. The measurement principle is based on Shafranov's toroidal equilibrium theory, which gives simple relations between the global equilibrium quantities and the external fields. These relations are valid whether an ideal shell just outside the plasma column is present or absent; they are not valid, however, for a thin (or resistive) shell whose skin time τ_s is of the same order as the current rise time τ_r. A method introduced by Swain et al. is effective in this case, in which the plasma current I_p is replaced by 6 filament currents. However, this method makes it difficult to include the effect of the iron core, and the computation requires many (more than 14) measurements of the fields or flux loops. In this paper we introduce a simple method based on fitting the measured fields to the approximate vacuum solution of the Grad-Shafranov equation. The computation requires only a few (≥6) field measurements. The REPUTE-1 device is characterized by a thin shell of 5 mm thickness whose skin time τ_s for the penetration of the vertical field is 1 ms, compared with a τ_r of 0.5 ms. Optimum discharges whose duration τ_d is about 3 times τ_s have been obtained. In spite of various efforts, including vertical-ohmic coil series connection experiments, toroidal ripple reduction experiments and port bypass plate installation experiments, τ_d is still limited to 3.2 ms. We therefore think that the plasma equilibrium is lost due to an unfavorable vertical field. In this paper we present measurements of the time evolution of the plasma position from the flat-top phase to the termination phase, at which time the plasma begins to lose its equilibrium. The liner has a major radius R_L of 82 cm and a minor radius a_L of 22 cm. (author) 6 refs., 4 figs

  12. Analyzing the Effects of Technological Change: A Computable General Equilibrium Approach

    Science.gov (United States)

    1988-09-01

    present important simplifying assumptions about the nature of consumer preferences and production possibility sets. If a general equilibrium model ... important assumptions are in such areas as consumer preferences, the actions of the government, and the financial structure of the model. Each of these is ... back in the future. 4.3.2 Consumer demand: Consumer preferences are a second important modeling assumption affecting the results of the study. The PILOT

  13. Measuring productivity differences in equilibrium search models

    DEFF Research Database (Denmark)

    Lanot, Gauthier; Neumann, George R.

    1996-01-01

    Equilibrium search models require unobserved heterogeneity in productivity to fit observed wage distribution data, but provide no guidance about the location parameter of the heterogeneity. In this paper we show that the location of the productivity heterogeneity implies a mode in a kernel density...... estimate of the wage distribution. The number of such modes and their location are identified using bump hunting techniques due to Silverman (1981). These techniques are applied to Danish panel data on workers and firms. These estimates are used to assess the importance of employer wage policy....
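
    As a minimal illustration of the bump-hunting idea (a single fixed bandwidth only; Silverman's critical-bandwidth test is not implemented here, and the wage data are synthetic), modes of a kernel density estimate can be located as follows:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal sketch: estimate the wage density with a Gaussian kernel and locate
# local maxima (modes) of the estimate on a grid. In Silverman-style bump hunting
# the bandwidth would be varied to find critical values at which modes appear.

rng = np.random.default_rng(0)
wages = np.concatenate([rng.normal(10.0, 1.0, 800),    # synthetic two-type wage data
                        rng.normal(14.0, 1.2, 400)])

kde = gaussian_kde(wages, bw_method=0.15)
grid = np.linspace(wages.min(), wages.max(), 1000)
dens = kde(grid)
is_mode = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
print("modes located at:", np.round(grid[1:-1][is_mode], 2))
```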

  14. Domain Immersion Technique And Free Surface Computations Applied To Extrusion And Mixing Processes

    Science.gov (United States)

    Valette, Rudy; Vergnes, Bruno; Basset, Olivier; Coupez, Thierry

    2007-04-01

    This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by using a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on nodes found to lie inside the so-called immersed domain, each subdomain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing a fill factor to be computed, usable as in the VOF methodology. This technique, combined with the use of parallel computing, allows computation of the time-dependent flow of generalized Newtonian fluids, including yield-stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a "level set" and Hamilton-Jacobi method.

  15. Isospin equilibrium and non-equilibrium in heavy-ion collisions at intermediate energies

    International Nuclear Information System (INIS)

    Chen Liewen; Ge Lingxiao; Zhang Xiaodong; Zhang Fengshou

    1997-01-01

    The equilibrium and non-equilibrium of the isospin degree of freedom are studied in terms of an isospin-dependent QMD model, which includes isospin-dependent symmetry energy, Coulomb energy, N-N cross sections and Pauli blocking. It is shown that there exists a transition from isospin equilibrium to non-equilibrium as the incident energy increases from below to above a threshold energy in central, asymmetric heavy-ion collisions. Meanwhile, it is found that this phenomenon results from the co-existence and competition of different reaction mechanisms; namely, the isospin degree of freedom reaches equilibrium if the incomplete fusion (ICF) component is dominant and does not reach equilibrium if the fragmentation component is dominant. Moreover, it is also found that the isospin-dependent N-N cross sections and symmetry energy are crucial for the equilibrium of the isospin degree of freedom in heavy-ion collisions around the Fermi energy. (author)

  16. Numerical computation of FCT equilibria by inverse equilibrium method

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki

    1986-11-01

    FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)

  17. Bayer Digester Optimization Studies using Computer Techniques

    Science.gov (United States)

    Kotte, Jan J.; Schleider, Victor H.

    Theoretically required heat transfer performance by the multistaged flash heat reclaim system of a high pressure Bayer digester unit is determined for various conditions of discharge temperature, excess flash vapor and indirect steam addition. Solution of simultaneous heat balances around the digester vessels and the heat reclaim system yields the magnitude of available heat for representation of each case on a temperature-enthalpy diagram, where graphical fit of the number of flash stages fixes the heater requirements. Both the heat balances and the trial-and-error graphical solution are adapted to solution by digital computer techniques.

  18. The quasi-equilibrium response of MOS structures: Quasi-static factor

    Science.gov (United States)

    Okeke, M.; Balland, B.

    1984-07-01

    The dynamic response of a MOS structure driven into non-equilibrium behaviour by a voltage ramp is presented. In contrast to Kuhn's quasi-static technique, it is shown that any ramp-driven MOS structure has some degree of non-equilibrium. A quasi-staticity factor μAK, which serves as a measure of the degree of quasi-equilibrium, has been introduced for the first time. The mathematical model presented in the paper allows a better explanation of the experimental recordings. It is shown that this model can be used to analyse the various features of the response of the structure and that physical parameters such as the generation rate, the trap activation energy, and the effective capture constants can be obtained.

  19. Equilibrium depth of scour at straight guide banks

    Science.gov (United States)

    Gjunsburgs, B.; Bulankina, V.

    2017-10-01

    The equilibrium stage of scour at the head of straight guide banks under uniform and stratified bed conditions has been studied. The contraction of the river by a bridge crossing with straight guide banks considerably alters the flow pattern. The streamlines become curved, and a concentration of streamlines, longitudinal and transverse slopes of the water surface, a local increase in velocity, vortex and eddy structures, and the origin of a flow separation zone between the extreme streamlines and the guide bank are observed, while local scour develops at the head of the straight guide banks. New formulae for the calculation of the equilibrium depth of scour at straight guide banks in uniform and stratified river beds are elaborated and confirmed by tests and computer modelling results.

  20. Investigation of plasma equilibrium in HL-1 tokamak

    International Nuclear Information System (INIS)

    Lu Zhihong; Jiang Yunxia; Yang Jinwei; Zhang Baozhu; Qiu Wei; Qin Yunwen

    1987-01-01

    In this paper, the plasma equilibrium in the HL-1 tokamak is discussed. The horizontal and vertical displacements of the plasma are measured using a symmetrical magnetic probe system, and the temporal evolution of the displacements is given by a data acquisition system with a micro-computer. The influence of various stray fields on plasma equilibrium is analysed. The direction and value of the horizontal stray field induced by the toroidal field coils and the primary windings are determined using the vertical displacement data. By adjusting the parameters of the internal and external vertical fields, the plasma can be kept in its equilibrium state during the flat part of the discharge current at a position 3 cm outside the chamber centre, i.e. near the centre of the limiter.

  1. On the theories, techniques, and computer codes used in numerical reactor criticality and burnup calculations

    International Nuclear Information System (INIS)

    El-Osery, I.A.

    1981-01-01

    The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is a part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The crucial part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can be obtained in principle as a solution of the Boltzmann transport equation. Numerical methods used for solving transport equations are discussed. Emphasis is placed on numerical techniques based on multigroup diffusion theory. These numerical techniques include nodal, modal, and finite difference ones. The most commonly known computer codes utilizing these techniques are reviewed. Some of the main computer codes related to numerical reactor criticality and burnup calculations that have already been developed at the Reactors Department are also presented.

  2. Force-dominated non-equilibrium oxidation kinetics of tantalum

    International Nuclear Information System (INIS)

    Kar, Prasenjit; Wang, Ke; Liang, Hong

    2008-01-01

    Using a combined electrochemical and mechanical manipulation technique, we compared the equilibrium and non-equilibrium oxidation processes and states of tantalum. Experimentally, a setup was developed with an electrochemical system attached to a sliding mechanical configuration capable of friction force measurement. The surface chemistry of a sliding surface, i.e., tantalum, was modified through the electrolyte. The mechanically applied force was fixed, and the dynamics of the surface was monitored in situ through a force sensor. The formation of non-equilibrium oxidation states of tantalum was found in the oxidation-limiting environment of acetic acid. An oxidative environment of deionized water saturated with KCl was used for comparison. We proposed a modified Arrhenius-Eyring equation in which the mechanical factor is taken into account. We found that the mechanical energy induced non-stable-state reactions leading to metastable oxidation states of tantalum. This equation can be used to predict mechanochemical reactions that are important in many industrial applications.
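
    The abstract does not state the modified Arrhenius-Eyring equation itself. Purely for orientation, a force-modified Eyring expression of the Bell type, as commonly written in mechanochemistry (and not necessarily the authors' form), is

        k(T, F) = \frac{k_B T}{h}\,\exp\!\left(-\frac{\Delta G^{\ddagger} - F\,\Delta x^{\ddagger}}{R T}\right),

    where the mechanical work F Δx‡ lowers the effective activation barrier and can thereby open channels to metastable (non-equilibrium) oxidation states; the activation length Δx‡ and the way the friction force enters are assumptions here.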

  3. Convergence to equilibrium of renormalised solutions to nonlinear chemical reaction–diffusion systems

    Science.gov (United States)

    Fellner, Klemens; Tang, Bao Quoc

    2018-06-01

    The convergence to equilibrium for renormalised solutions to nonlinear reaction-diffusion systems is studied. The considered reaction-diffusion systems arise from chemical reaction networks with mass action kinetics and satisfy the complex balanced condition. By applying the so-called entropy method, we show that if the system does not have boundary equilibria, i.e. equilibrium states lying on the boundary of R_+^N, then any renormalised solution converges exponentially to the complex balanced equilibrium with a rate which can be computed explicitly up to a finite-dimensional inequality. This inequality is proven via a contradiction argument and is therefore not explicit. An explicit method of proof, however, is provided for a specific application modelling a reversible enzyme reaction, by exploiting the specific structure of the conservation laws. Our approach is also useful for studying the trend to equilibrium for systems possessing boundary equilibria. More precisely, to show the convergence to equilibrium for systems with boundary equilibria, we establish a sufficient condition in terms of a modified finite-dimensional inequality along trajectories of the system. By assuming this condition, which roughly means that the system produces too much entropy to stay close to a boundary equilibrium for infinite time, the entropy method shows exponential convergence to equilibrium for renormalised solutions to complex balanced systems with boundary equilibria.

  4. Equilibrium and perturbations in plasma-vacuum systems

    International Nuclear Information System (INIS)

    Mercier, C.

    1974-01-01

    Thermonuclear plasmas must be kept away from all material contact. To realize this condition, one uses, in the vacuum surrounding the plasma, a metal wall assumed to be perfectly conducting together with currents whose positions and intensities have to be suitably chosen. The equilibrium problem consists of finding a toroidal solution of the system of equations JxB = grad P, div B = 0, J = curl B, where B, J and P are respectively the magnetic field, the current density and the plasma pressure. The problem can be solved in symmetry of revolution using cylindrical coordinates. The arrangement and intensity of the currents found will not be realized exactly, for example for technical reasons. Consequently, the first equilibrium problem is considered as a first approximation, and the configuration that will be obtained under the imposed real conditions is computed as a perturbed equilibrium.

  5. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability?

    Science.gov (United States)

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K

    2011-04-01

    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87 to 97% for QMA versus 60% to 73% for digitized manual

  6. Computational reduction techniques for numerical vibro-acoustic analysis of hearing aids

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester

    . In this thesis, several challenges encountered in the process of modelling and optimizing hearing aids are addressed. Firstly, a strategy for modelling the contacts between plastic parts for harmonic analysis is developed. Irregularities in the contact surfaces, inherent to the manufacturing process of the parts....... Secondly, the applicability of Model Order Reduction (MOR) techniques to lower the computational complexity of hearing aid vibro-acoustic models is studied. For fine frequency response calculation and optimization, which require solving the numerical model repeatedly, a computational challenge...... is encountered due to the large number of Degrees of Freedom (DOFs) needed to represent the complexity of the hearing aid system accurately. In this context, several MOR techniques are discussed, and an adaptive reduction method for vibro-acoustic optimization problems is developed as a main contribution. Lastly...

  7. Use of S-α diagram for representing tokamak equilibrium

    International Nuclear Information System (INIS)

    Takahashi, H.; Chance, M.; Kessel, C.; LeBlanc, B.; Manickam, J.; Okabayashi, M.

    1991-05-01

    A use of the S-α diagram is proposed as a tool for representing the plasma equilibrium with a qualitative characterization of its stability through pattern recognition. The diagram is an effective tool for visually presenting the relationship between the shear and the dimensionless pressure gradient of an equilibrium. In the PBX-M tokamak, an H-mode operating regime with high poloidal β and an L-mode regime with high toroidal β, obtained using different profile modification techniques, are found to have distinct S-α trajectory patterns. Pellet injection into a plasma in the H-mode regime results in favorable qualities of both regimes. The β collapse process and ELM events also manifest themselves as characteristic changes in the S-α pattern

  8. Development of a computational technique to measure cartilage contact area.

    Science.gov (United States)

    Willing, Ryan; Lapner, Michael; Lalone, Emily A; King, Graham J W; Johnson, James A

    2014-03-21

    Computational measurement of joint contact distributions offers the benefit of non-invasive measurements of joint contact without the use of interpositional sensors or casting materials. This paper describes a technique for indirectly measuring joint contact based on overlapping of articular cartilage computer models derived from CT images and positioned using in vitro motion capture data. The accuracy of this technique when using the physiological nonuniform cartilage thickness distribution, or simplified uniform cartilage thickness distributions, is quantified through comparison with direct measurements of contact area made using a casting technique. The efficacy of using indirect contact measurement techniques for measuring the changes in contact area resulting from hemiarthroplasty at the elbow is also quantified. Using the physiological nonuniform cartilage thickness distribution reliably measured contact area (ICC=0.727), but not better than the assumed bone specific uniform cartilage thicknesses (ICC=0.673). When a contact pattern agreement score (s(agree)) was used to assess the accuracy of cartilage contact measurements made using physiological nonuniform or simplified uniform cartilage thickness distributions in terms of size, shape and location, their accuracies were not significantly different (p>0.05). The results of this study demonstrate that cartilage contact can be measured indirectly based on the overlapping of cartilage contact models. However, the results also suggest that in some situations, inter-bone distance measurement and an assumed cartilage thickness may suffice for predicting joint contact patterns. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Phase behavior of multicomponent membranes: Experimental and computational techniques

    DEFF Research Database (Denmark)

    Bagatolli, Luis; Kumar, P.B. Sunil

    2009-01-01

    Recent developments in biology seems to indicate that the Fluid Mosaic model of membrane proposed by Singer and Nicolson, with lipid bilayer functioning only as medium to support protein machinery, may be too simple to be realistic. Many protein functions are now known to depend on the compositio....... This review includes basic foundations on membrane model systems and experimental approaches applied in the membrane research area, stressing on recent advances in the experimental and computational techniques....... membranes. Current increase in interest in the domain formation in multicomponent membranes also stems from the experiments demonstrating liquid ordered-liquid disordered coexistence in mixtures of lipids and cholesterol and the success of several computational models in predicting their behavior...

  10. Experimental and Computational Techniques in Soft Condensed Matter Physics

    Science.gov (United States)

    Olafsen, Jeffrey

    2010-09-01

    1. Microscopy of soft materials Eric R. Weeks; 2. Computational methods to study jammed Systems Carl F. Schrek and Corey S. O'Hern; 3. Soft random solids: particulate gels, compressed emulsions and hybrid materials Anthony D. Dinsmore; 4. Langmuir monolayers Michael Dennin; 5. Computer modeling of granular rheology Leonardo E. Silbert; 6. Rheological and microrheological measurements of soft condensed matter John R. de Bruyn and Felix K. Oppong; 7. Particle-based measurement techniques for soft matter Nicholas T. Ouellette; 8. Cellular automata models of granular flow G. William Baxter; 9. Photoelastic materials Brian Utter; 10. Image acquisition and analysis in soft condensed matter Jeffrey S. Olafsen; 11. Structure and patterns in bacterial colonies Nicholas C. Darnton.

  11. Equilibrium and transient conductivity for gadolium-doped ceria under large perturbations: II. Modeling

    DEFF Research Database (Denmark)

    Zhu, Huayang; Ricote, Sandrine; Coors, W. Grover

    2014-01-01

    A model-based approach is used to interpret equilibrium and transient conductivity measurements for 10% gadolinium-doped ceria: Ce0.9Gd0.1O1.95 − δ (GDC10). The measurements were carried out by AC impedance spectroscopy on slender extruded GDC10 rods. Although equilibrium conductivity measurements provide sufficient information from which to derive material properties, it is found that uniquely establishing properties is difficult. Augmenting equilibrium measurements with conductivity relaxation significantly improves the evaluation of needed physical properties. This paper develops and applies the computational implementation of a Nernst–Planck–Poisson (NPP) model to represent and interpret conductivity-relaxation measurements. Defect surface chemistry is represented with both equilibrium and finite-rate kinetic models. The experiments and the models are capable of representing relaxations from strongly...

  12. Fast non-linear extraction of plasma equilibrium parameters using a neural network mapping

    International Nuclear Information System (INIS)

    Lister, J.B.; Schnurrenberger, H.

    1991-01-01

    The shaping of non-circular plasmas requires a non-linear mapping between the measured diagnostic signals and selected equilibrium parameters. The particular configuration of neural network known as the multilayer perceptron provides a powerful and general technique for formulating an arbitrary continuous non-linear multi-dimensional mapping. This technique has been successfully applied to the extraction of equilibrium parameters from measurements of single-null diverted plasmas in the DIII-D tokamak; the results are compared with a purely linear mapping. The method is promising, and hardware implementation is straightforward. (author). 17 refs, 8 figs, 2 tab
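
    As an illustration of this kind of non-linear mapping (synthetic data only, not the DIII-D diagnostics or the original network), a small multilayer perceptron can be trained to regress a few equilibrium parameters from simulated magnetic signals:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # synthetic data: 40 "diagnostic signals" -> 3 "equilibrium parameters"
        # (stand-ins for, e.g., elongation, triangularity, X-point position)
        n_samples, n_signals, n_params = 5000, 40, 3
        params = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
        mixing = rng.normal(size=(n_params, n_signals))
        signals = np.tanh(params @ mixing) + 0.01 * rng.normal(size=(n_samples, n_signals))

        X_train, X_test, y_train, y_test = train_test_split(
            signals, params, test_size=0.2, random_state=0)

        mlp = MLPRegressor(hidden_layer_sizes=(30,), activation="tanh",
                           max_iter=2000, random_state=0)
        mlp.fit(X_train, y_train)
        print("test R^2 of the learned mapping:", mlp.score(X_test, y_test))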

  13. The Voronoi volume and molecular representation of molar volume: equilibrium simple fluids.

    Science.gov (United States)

    Hunjan, Jagtar Singh; Eu, Byung Chan

    2010-04-07

    The Voronoi volume of simple fluids was previously made use of in connection with volume transport phenomena in nonequilibrium simple fluids. To investigate volume transport phenomena, it is important to develop a method to compute the Voronoi volume of fluids in nonequilibrium. In this work, as a first step to this goal, we investigate the equilibrium limit of the nonequilibrium Voronoi volume together with its attendant related molar (molal) and specific volumes. It is proved that the equilibrium Voronoi volume is equivalent to the molar (molal) volume. The latter, in turn, is proved equivalent to the specific volume. This chain of equivalences provides an alternative procedure of computing the equilibrium Voronoi volume from the molar volume/specific volume. We also show approximate methods of computing the Voronoi and molar volumes from the information on the pair correlation function. These methods may be employed for their quick estimation, but also provide some aspects of the fluid structure and its relation to the Voronoi volume. The Voronoi volume obtained from computer simulations is fitted to a function of temperature and pressure in the region above the triple point but below the critical point. Since the fitting function is given in terms of reduced variables for the Lennard-Jones (LJ) model and the kindred volumes (i.e., specific and molar volumes) are in essence equivalent to the equation of state, the formula obtained is a reduced equation state for simple fluids obeying the LJ model potential in the range of temperature and pressure examined and hence can be used for other simple fluids.
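
    The equivalence stated above (mean equilibrium Voronoi volume equals the volume per particle V/N) can be checked numerically with a sketch like the following, where the 3x3x3 tiling is a crude stand-in for proper periodic boundary handling:

        import numpy as np
        from scipy.spatial import Voronoi, ConvexHull

        def voronoi_volumes_periodic(points, box):
            """Per-particle Voronoi cell volumes in a periodic cubic box of edge `box`."""
            n = len(points)
            shifts = [np.array([i, j, k]) * box
                      for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
                      if (i, j, k) != (0, 0, 0)]
            tiled = np.vstack([points] + [points + s for s in shifts])
            vor = Voronoi(tiled)
            vols = np.empty(n)
            for i in range(n):                       # only the original (central) copies
                region = vor.regions[vor.point_region[i]]
                vols[i] = ConvexHull(vor.vertices[region]).volume
            return vols

        box, n = 10.0, 256
        pts = np.random.default_rng(1).uniform(0.0, box, size=(n, 3))
        v = voronoi_volumes_periodic(pts, box)
        print(v.mean(), box**3 / n)                  # these should agree closely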

  14. Calculation of the equilibrium distribution for a deleterious gene by the finite Fourier transform.

    Science.gov (United States)

    Lange, K

    1982-03-01

    In a population of constant size every deleterious gene eventually attains a stochastic equilibrium between mutation and selection. The individual probabilities of this equilibrium distribution can be computed by an application of the finite Fourier transform to an appropriate branching process formula. Specific numerical examples are discussed for the autosomal dominants, Huntington's chorea and chondrodystrophy, and for the X-linked recessive, Becker's muscular dystrophy.
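
    The mechanics of such a calculation can be illustrated with a toy construction (a subcritical branching process with Poisson offspring and a Poisson influx of fresh mutations, not Lange's exact formula): the equilibrium probability generating function is evaluated at the N-th roots of unity and inverted with a finite Fourier transform.

        import numpy as np

        def equilibrium_distribution(m, mu, N=256, generations=500):
            """Equilibrium copy-number distribution for a subcritical branching
            process (Poisson offspring, mean m < 1) with Poisson immigration of
            new mutants (mean mu per generation), recovered by a finite Fourier
            transform of the generating function at the roots of unity."""
            s = np.exp(2j * np.pi * np.arange(N) / N)   # roots of unity
            f = lambda z: np.exp(m * (z - 1.0))         # Poisson offspring pgf
            total, z = np.zeros(N, dtype=complex), s.copy()
            for _ in range(generations):                # accumulate sum_k (f_k(s) - 1)
                total += z - 1.0
                z = f(z)
            Q = np.exp(mu * total)                      # equilibrium pgf at roots of unity
            p = np.real(np.fft.fft(Q)) / N              # finite Fourier inversion
            return np.clip(p, 0.0, None)

        p = equilibrium_distribution(m=0.8, mu=0.5)
        print(p[:6], p.sum())                           # P(0)..P(5); total close to 1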

  15. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm.

  16. Numerical simulation of hypersonic inlet flows with equilibrium or finite rate chemistry

    Science.gov (United States)

    Yu, Sheng-Tao; Hsieh, Kwang-Chung; Shuen, Jian-Shun; Mcbride, Bonnie J.

    1988-01-01

    An efficient numerical program incorporating comprehensive high-temperature gas property models has been developed to simulate hypersonic inlet flows. The computer program employs an implicit lower-upper time marching scheme to solve the two-dimensional Navier-Stokes equations with variable thermodynamic and transport properties. Both finite-rate and local-equilibrium approaches are adopted in the chemical reaction model for dissociation and ionization of the inlet air. In the finite-rate approach, eleven species equations coupled with the fluid dynamic equations are solved simultaneously. In the local-equilibrium approach, instead of solving species equations, an efficient chemical equilibrium package has been developed and incorporated into the flow code to obtain chemical compositions directly. Gas properties for the reaction product species are calculated by methods of statistical mechanics and fitted to a polynomial form for Cp. In the present study, since the chemical reaction time is comparable to the flow residence time, the local-equilibrium model underpredicts the temperature in the shock layer. Significant differences between the chemical compositions in the shock layer predicted by the finite-rate and local-equilibrium approaches have been observed.
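
    The polynomial representation of Cp mentioned here is presumably of the NASA type used throughout this family of equilibrium codes; assuming the standard 7-coefficient form (the exact fit in the paper may differ), it reads

        C_p^{\circ}(T)/R = a_1 + a_2 T + a_3 T^2 + a_4 T^3 + a_5 T^4,
        H^{\circ}(T)/(R T) = a_1 + \tfrac{a_2}{2} T + \tfrac{a_3}{3} T^2 + \tfrac{a_4}{4} T^3 + \tfrac{a_5}{5} T^4 + a_6/T,

    with a_7 appearing as the integration constant in the corresponding entropy expression.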

  17. International Conference on Soft Computing Techniques and Engineering Application

    CERN Document Server

    Li, Xiaolong

    2014-01-01

    The main objective of ICSCTEA 2013 is to provide a platform for researchers, engineers and academicians from all over the world to present their research results and development activities in soft computing techniques and engineering application. This conference provides opportunities for them to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration.

  18. On the definition of equilibrium and non-equilibrium states in dynamical systems

    OpenAIRE

    Akimoto, Takuma

    2008-01-01

    We propose a definition of equilibrium and non-equilibrium states in dynamical systems on the basis of the time average. We show numerically that there exists a non-equilibrium non-stationary state in the coupled modified Bernoulli map lattice.

  19. Experiment and theory at the convergence limit: accurate equilibrium structure of picolinic acid by gas-phase electron diffraction and coupled-cluster computations.

    Science.gov (United States)

    Vogt, Natalja; Marochkin, Ilya I; Rykov, Anatolii N

    2018-04-18

    The accurate molecular structure of picolinic acid has been determined from experimental data and computed at the coupled cluster level of theory. Only one conformer with the O[double bond, length as m-dash]C-C-N and H-O-C[double bond, length as m-dash]O fragments in antiperiplanar (ap) positions, ap-ap, has been detected under conditions of the gas-phase electron diffraction (GED) experiment (Tnozzle = 375(3) K). The semiexperimental equilibrium structure, rsee, of this conformer has been derived from the GED data taking into account the anharmonic vibrational effects estimated from the ab initio force field. The equilibrium structures of the two lowest-energy conformers, ap-ap and ap-sp (with the synperiplanar H-O-C[double bond, length as m-dash]O fragment), have been fully optimized at the CCSD(T)_ae level of theory in conjunction with the triple-ζ basis set (cc-pwCVTZ). The quality of the optimized structures has been improved due to extrapolation to the quadruple-ζ basis set. The high accuracy of both GED determination and CCSD(T) computations has been disclosed by a correct comparison of structures having the same physical meaning. The ap-ap conformer has been found to be stabilized by the relatively strong NH-O hydrogen bond of 1.973(27) Å (GED) and predicted to be lower in energy by 16 kJ mol-1 with respect to the ap-sp conformer without a hydrogen bond. The influence of this bond on the structure of picolinic acid has been analyzed within the Natural Bond Orbital model. The possibility of the decarboxylation of picolinic acid has been considered in the GED analysis, but no significant amounts of pyridine and carbon dioxide could be detected. To reveal the structural changes reflecting the mesomeric and inductive effects due to the carboxylic substituent, the accurate structure of pyridine has been also computed at the CCSD(T)_ae level with basis sets from triple- to 5-ζ quality. The comprehensive structure computations for pyridine as well as for

  20. Finite volume schemes with equilibrium type discretization of source terms for scalar conservation laws

    International Nuclear Information System (INIS)

    Botchorishvili, Ramaz; Pironneau, Olivier

    2003-01-01

    We develop here a new class of finite volume schemes on unstructured meshes for scalar conservation laws with stiff source terms. The schemes are of equilibrium type, hence with uniform bounds on approximate solutions, valid cell entropy inequalities, and exactness for some equilibrium states. Convergence is investigated in the framework of kinetic schemes. Numerical tests show high computational efficiency and a significant advantage over the standard cell-centered discretization of source terms. Equilibrium-type schemes produce accurate results even on test problems for which the standard approach fails. For some numerical tests they exhibit an exponential-type convergence rate. In two of our numerical tests an equilibrium-type scheme with 441 nodes on a triangular mesh is more accurate than a standard scheme with 5000² grid points.
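
    The flavour of an equilibrium-type discretization can be seen on a toy 1D problem u_t + u_x = b(x) u, whose steady states satisfy du/dx = b u: folding the exact source integral into the upwind flux difference makes the scheme preserve those equilibria to round-off, unlike a naive centered source treatment. The sketch below is only illustrative and is not the unstructured-mesh kinetic scheme of the paper.

        import numpy as np

        def well_balanced_step(u, b_int, dt, dx):
            """One upwind step for u_t + u_x = b(x) u that is exact on steady states.

            b_int[i] = integral of b(x) between the centres of cells i-1 and i, so
            the discrete equilibrium satisfies u[i] = exp(b_int[i]) * u[i-1]; the
            source is folded into the flux difference, which vanishes at equilibrium.
            """
            E = np.exp(b_int)
            un = u.copy()
            un[1:] = u[1:] - dt / dx * (u[1:] - E[1:] * u[:-1])
            return un

        # demonstration with b(x) = -1, i.e. u_eq(x) = exp(-x): the equilibrium persists
        nx = 200
        dx, dt = 1.0 / nx, 0.5 / nx
        x = (np.arange(nx) + 0.5) * dx
        u = np.exp(-x)                         # start exactly on the discrete equilibrium
        b_int = np.full(nx, -dx)               # integral of b = -1 over each spacing
        for _ in range(1000):
            u = well_balanced_step(u, b_int, dt, dx)
        print(np.max(np.abs(u - np.exp(-x))))  # remains at round-off level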

  1. Contributions of computational chemistry and biophysical techniques to fragment-based drug discovery.

    Science.gov (United States)

    Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio

    2010-01-01

    In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search of new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases in order to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we will review the parallel evolution and complementarities of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories by using FBDD.

  2. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  3. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging between 14.44% and 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  4. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan, ranging between 14.44% and 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  5. Equilibrium and generators

    International Nuclear Information System (INIS)

    Balter, H.S.

    1994-01-01

    This work studies the behaviour of radionuclides undergoing disintegration: activity, decay, and the creation of stable isotopes. It gives definitions of the equilibrium between the activity of the parent and the activity of the daughter, of radioactive decay, of stable isotopes, of transient equilibrium, and of the time of maximum activity. Some consideration is given to generators, which permit the separation of two radioisotopes in equilibrium, and to their proper performance. Tabs
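
    For reference, the textbook parent-daughter relations behind these definitions (standard results, not specific to this report) are, with decay constants λ_p and λ_d and no daughter present initially,

        A_d(t) = A_p(0)\,\frac{\lambda_d}{\lambda_d - \lambda_p}\left(e^{-\lambda_p t} - e^{-\lambda_d t}\right),
        \qquad
        t_{max} = \frac{\ln(\lambda_d/\lambda_p)}{\lambda_d - \lambda_p},

    so that for λ_d > λ_p the activities approach the transient-equilibrium ratio A_d/A_p = λ_d/(λ_d - λ_p), and for λ_p much smaller than λ_d this reduces to secular equilibrium, A_d ≈ A_p, which is the working condition of a radionuclide generator between elutions.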

  6. Use of the McQuarrie equation for the computation of shear viscosity via equilibrium molecular dynamics

    International Nuclear Information System (INIS)

    Chialvo, A.A.; Debenedetti, P.G.

    1991-01-01

    To date, the calculation of shear viscosity for soft-core fluids via equilibrium molecular dynamics has been done almost exclusively using the Green-Kubo formalism. The alternative mean-squared displacement approach has not been used, except for hard-sphere fluids, in which case the expression proposed by Helfand [Phys. Rev. 119, 1 (1960)] has invariably been selected. When written in the form given by McQuarrie [Statistical Mechanics (Harper & Row, New York, 1976), Chap. 21], however, the mean-squared displacement approach offers significant computational advantages over both its Green-Kubo and Helfand counterparts. In order to achieve comparable statistical significance, the number of experiments needed when using the Green-Kubo or Helfand formalisms is more than an order of magnitude higher than for the McQuarrie expression. For pairwise-additive systems with zero linear momentum, the McQuarrie method yields frame-independent shear viscosities. The hitherto unexplored McQuarrie implementation of the mean-squared displacement approach to shear-viscosity calculation thus appears superior to alternative methods currently in use.
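
    The mean-squared displacement route referred to here is, in essence, an Einstein-Helfand relation. A schematic post-processing sketch is given below under the assumption that the relevant moment is G(t) = sum_i p_{x,i}(t) y_i(t) and that the viscosity follows from the long-time slope of <[G(t) - G(0)]^2>/(2 V k_B T); whether this coincides exactly with the McQuarrie form should be checked against the cited text.

        import numpy as np

        def shear_viscosity_einstein_helfand(px, y, dt, volume, kB_T):
            """Estimate shear viscosity from an Einstein-Helfand (MSD-type) relation.

            px, y  : (n_frames, n_particles) arrays of x-momenta and (unwrapped)
                     y-coordinates from an equilibrium MD trajectory
            dt     : time between stored frames
            """
            G = np.sum(px * y, axis=1)              # Helfand moment per frame
            msd = (G - G[0]) ** 2                   # single time origin (schematic);
                                                    # production code averages origins
            t = np.arange(len(G)) * dt
            half = len(t) // 2                      # crude choice of the linear regime
            slope = np.polyfit(t[half:], msd[half:], 1)[0]
            return slope / (2.0 * volume * kB_T)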

  7. Costs incurred by applying computer-aided design/computer-aided manufacturing techniques for the reconstruction of maxillofacial defects.

    Science.gov (United States)

    Rustemeyer, Jan; Melenberg, Alex; Sari-Rieger, Aynur

    2014-12-01

    This study aims to evaluate the additional costs incurred by using a computer-aided design/computer-aided manufacturing (CAD/CAM) technique for reconstructing maxillofacial defects by analyzing typical cases. The medical charts of 11 consecutive patients who were subjected to the CAD/CAM technique were considered, and invoices from the companies providing the CAD/CAM devices were reviewed for every case. The number of devices used was significantly correlated with cost (r = 0.880; p costs were found between cases in which prebent reconstruction plates were used (€3346.00 ± €29.00) and cases in which they were not (€2534.22 ± €264.48; p costs of two, three and four devices, even when ignoring the cost of reconstruction plates. Additional fees provided by statutory health insurance covered a mean of 171.5% ± 25.6% of the cost of the CAD/CAM devices. Since the additional fees provide financial compensation, we believe that the CAD/CAM technique is suited for wide application and not restricted to complex cases. Where additional fees/funds are not available, the CAD/CAM technique might be unprofitable, so the decision whether or not to use it remains a case-to-case decision with respect to cost versus benefit. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  8. Chemical Principles Revisited: Chemical Equilibrium.

    Science.gov (United States)

    Mickey, Charles D.

    1980-01-01

    Describes: (1) Law of Mass Action; (2) equilibrium constant and ideal behavior; (3) general form of the equilibrium constant; (4) forward and reverse reactions; (5) factors influencing equilibrium; (6) Le Chatelier's principle; (7) effects of temperature, changing concentration, and pressure on equilibrium; and (8) catalysts and equilibrium. (JN)

  9. Para-equilibrium phase diagrams

    International Nuclear Information System (INIS)

    Pelton, Arthur D.; Koukkari, Pertti; Pajarre, Risto; Eriksson, Gunnar

    2014-01-01

    Highlights: • A rapidly cooled system may attain a state of para-equilibrium. • In this state rapidly diffusing elements reach equilibrium but others are immobile. • Application of the Phase Rule to para-equilibrium phase diagrams is discussed. • A general algorithm to calculate para-equilibrium phase diagrams is described. - Abstract: If an initially homogeneous system at high temperature is rapidly cooled, a temporary para-equilibrium state may result in which rapidly diffusing elements have reached equilibrium but more slowly diffusing elements have remained essentially immobile. The best known example occurs when homogeneous austenite is quenched. A para-equilibrium phase assemblage may be calculated thermodynamically by Gibbs free energy minimization under the constraint that the ratios of the slowly diffusing elements are the same in all phases. Several examples of calculated para-equilibrium phase diagram sections are presented and the application of the Phase Rule is discussed. Although the rules governing the geometry of these diagrams may appear at first to be somewhat different from those for full equilibrium phase diagrams, it is shown that in fact they obey exactly the same rules with the following provision. Since the molar ratios of non-diffusing elements are the same in all phases at para-equilibrium, these ratios act, as far as the geometry of the diagram is concerned, like “potential” variables (such as T, pressure or chemical potentials) rather than like “normal” composition variables which need not be the same in all phases. A general algorithm to calculate para-equilibrium phase diagrams is presented. In the limit, if a para-equilibrium calculation is performed under the constraint that no elements diffuse, then the resultant phase diagram shows the single phase with the minimum Gibbs free energy at any point on the diagram; such calculations are of interest in physical vapor deposition when deposition is so rapid that phase

  10. A study of chemical equilibrium of tri-component mixtures of hydrogen isotopes

    International Nuclear Information System (INIS)

    Cristescu, Ioana; Cristescu, I.; Peculea, M.

    1998-01-01

    In this paper we present a model for computing the equilibrium constants of chemical reactions between hydrogen isotopes as functions of temperature. The equilibrium constants are expressed with the aid of the Gibbs potential and the partition function of the mixture. We evaluated the partition functions of the hydrogen isotopes bearing in mind that some of the nuclei are fermions and others bosons. As a result, we plotted the values of the equilibrium constants as functions of temperature. Knowing these values, we determined the deuterium distribution over species (for the mixture H2-HD-D2) as a function of total deuterium concentration and the tritium distribution over species (for the mixtures D2-DT-T2 and H2-HT-T2) as a function of total tritium concentration. (authors)
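
    The working expression behind such calculations is the standard statistical-mechanical one; for the exchange reaction H2 + D2 = 2HD it takes the form (generic textbook result, not the authors' exact parametrization)

        K(T) = \frac{(q_{HD}/V)^2}{(q_{H_2}/V)\,(q_{D_2}/V)}\,\exp\!\left(-\frac{\Delta E_0}{k_B T}\right),

    where the q's are molecular partition functions (translational, rotational with the proper symmetry numbers and nuclear-spin statistics, vibrational) and ΔE_0 is the difference of zero-point energies; the symmetry numbers (σ = 2 for H2 and D2, σ = 1 for HD) drive K toward its well-known classical high-temperature limit near 4.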

  11. Stresses in non-equilibrium fluids: Exact formulation and coarse-grained theory

    Science.gov (United States)

    Krüger, Matthias; Solon, Alexandre; Démery, Vincent; Rohwer, Christian M.; Dean, David S.

    2018-02-01

    Starting from the stochastic equation for the density operator, we formulate the exact (instantaneous) stress tensor for interacting Brownian particles and show that its average value agrees with expressions derived previously. We analyze the relation between the stress tensor and forces due to external potentials and observe that, out of equilibrium, particle currents give rise to extra forces. Next, we derive the stress tensor for a Landau-Ginzburg theory in generic, non-equilibrium situations, finding an expression analogous to that of the exact microscopic stress tensor, and discuss the computation of out-of-equilibrium (classical) Casimir forces. Subsequently, we give a general form for the stress tensor which is valid for a large variety of energy functionals and which reproduces the two mentioned cases. We then use these relations to study the spatio-temporal correlations of the stress tensor in a Brownian fluid, which we compute to leading order in the interaction potential strength. We observe that, after integration over time, the spatial correlations generally decay as power laws in space. These are expected to be of importance for driven confined systems. We also show that divergence-free parts of the stress tensor do not contribute to the Green-Kubo relation for the viscosity.

  12. Dual-scan technique for the customization of zirconia computer-aided design/computer-aided manufacturing frameworks.

    Science.gov (United States)

    Andreiuolo, Rafael Ferrone; Sabrosa, Carlos Eduardo; Dias, Katia Regina H Cervantes

    2013-09-01

    The use of bi-layered all-ceramic crowns has continuously grown since the introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia cores. Unfortunately, despite the outstanding mechanical properties of zirconia, problems related to porcelain cracking or chipping remain. One of the reasons for this is that ceramic copings are usually milled to uniform thicknesses of 0.3-0.6 mm around the whole tooth preparation. This may not provide uniform thickness or appropriate support for the veneering porcelain. To prevent these problems, the dual-scan technique demonstrates an alternative that allows the restorative team to customize zirconia CAD/CAM frameworks with adequate porcelain thickness and support in a simple manner.

  13. Office of Fusion Energy computational review

    International Nuclear Information System (INIS)

    Cohen, B.I.; Cohen, R.H.; Byers, J.A.

    1996-01-01

    The LLNL MFE Theory and Computations Program supports computational efforts in the following areas: (1) Magnetohydrodynamic equilibrium and stability; (2) Fluid and kinetic edge plasma simulation and modeling; (3) Kinetic and fluid core turbulent transport simulation; (4) Comprehensive tokamak modeling (CORSICA Project) - transport, MHD equilibrium and stability, edge physics, heating, turbulent transport, etc. and (5) Other: ECRH ray tracing, reflectometry, plasma processing. This report discusses algorithm and codes pertaining to these areas

  14. An Interaction of Economy and Environment in Dynamic Computable General Equilibrium Modelling with a Focus on Climate Change Issues in Korea : A Proto-type Model

    Energy Technology Data Exchange (ETDEWEB)

    Joh, Seung Hun; Dellink, Rob; Nam, Yunmi; Kim, Yong Gun; Song, Yang Hoon [Korea Environment Institute, Seoul (Korea)

    2000-12-01

    In the beginning of the 21st century, climate change is one of the hottest issues in the arenas of both international and domestic environmental policy. During the COP6 meeting held in The Hague, over 10,000 people came together from around the world. This report is part of a series of policy studies on climate change in the context of Korea. This study addresses the interactions of economy and environment in a perfect-foresight dynamic computable general equilibrium model, with a focus on greenhouse gas mitigation strategy in Korea. The primary goal of this study is to evaluate greenhouse gas mitigation portfolios with changes in timing and magnitude, with a particular focus on developing a methodology to integrate bottom-up information on technical measures to reduce pollution into a top-down multi-sectoral computable general equilibrium framework. As a non-Annex I country, Korea has been under strong pressure to declare a GHG reduction commitment. Of particular concern are the economic consequences GHG mitigation would impose on society. Various economic assessments addressing cost, ancillary benefits and emission trading have been carried out so far. In this vein, this study on GHG mitigation commitment is a timely contribution to the climate change policy field. The empirical results that will become available next year are expected to be in high demand. 62 refs., 13 figs., 9 tabs.

  15. Two-proton correlation functions for equilibrium and non-equilibrium emission

    International Nuclear Information System (INIS)

    Gong, W.G.; Gelbke, C.K.; Carlin, N.; De Souza, R.T.; Kim, Y.D.; Lynch, W.G.; Murakami, T.; Poggi, G.; Sanderson, D.; Tsang, M.B.; Xu, H.M.; Michigan State Univ., East Lansing; Fields, D.E.; Kwiatkowski, K.; Planeta, R.; Viola, V.E. Jr.; Yennello, S.J.; Indiana Univ., Bloomington; Indiana Univ., Bloomington; Pratt, S.

    1990-01-01

    Two-proton correlation functions are compared for equilibrium and non-equilibrium emission processes investigated, respectively, in ''reverse kinematics'' for the reactions 129 Xe+ 27 Al and 129 Xe+ 122 Sn at E/A=31 MeV and in ''forward kinematics'' for the reaction 14 N+ 197 Au at E/A=75 MeV. Observed differences in the shapes of the correlation functions are understood in terms of the different time scales for equilibrium and preequilibrium emission. Transverse and longitudinal correlation functions are very similar. (orig.)

  16. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the values of these parameters is important for comprehending and checking the generated data, for identifying the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.

  17. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    Science.gov (United States)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
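
    A stripped-down illustration of the ε-constraint idea (without the pseudospectral transcription or the adaptive bisection refinement described in the paper): one objective is minimized while the other is bounded by a swept ε value, tracing an approximate Pareto front. The objectives, bounds and solver settings below are placeholders.

        import numpy as np
        from scipy.optimize import minimize

        # two toy "trajectory" objectives, e.g. fuel use vs. flight time (placeholders)
        f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
        f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

        def eps_constraint_front(eps_values, x0=(0.5, 0.5)):
            """Minimize f1 subject to f2(x) <= eps for a sweep of eps values:
            the epsilon-constraint scalarization of the bi-objective problem."""
            front = []
            for eps in eps_values:
                cons = [{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}]
                res = minimize(f1, x0, method="SLSQP", constraints=cons)
                if res.success:
                    front.append((f1(res.x), f2(res.x)))
            return np.array(front)

        front = eps_constraint_front(np.linspace(0.05, 2.0, 15))
        print(front)        # (f1, f2) pairs approximating the Pareto front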

  18. A review of computer-aided design/computer-aided manufacture techniques for removable denture fabrication

    Science.gov (United States)

    Bilgin, Mehmet Selim; Baytaroğlu, Ebru Nur; Erdem, Ali; Dilber, Erhan

    2016-01-01

    The aim of this review was to investigate the use of computer-aided design/computer-aided manufacture (CAD/CAM) technologies, such as milling and rapid prototyping (RP), for removable denture fabrication. An electronic search was conducted in the PubMed/MEDLINE, ScienceDirect, Google Scholar, and Web of Science databases. Databases were searched from 1987 to 2014. The search was performed using a variety of keywords including CAD/CAM, complete/partial dentures, RP, rapid manufacturing, digitally designed, milled, computerized, and machined. The identified developments (in chronological order), techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication are summarized. Using a variety of keywords and aiming to find the topic, 78 publications were initially retrieved. The abstracts of these 78 articles were scanned for the main topic, and 52 publications were selected for reading in detail. The full text of these articles was obtained and examined in detail. In total, 40 articles that discussed the techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication were incorporated in this review, and 16 of the papers are summarized in the table. Following review of all relevant publications, it can be concluded that current innovations and technological developments of CAD/CAM and RP allow the digital planning and manufacturing of removable dentures from start to finish. According to the literature review, CAD/CAM techniques and supportive maxillomandibular relationship transfer devices are developing fast. In the near future, fabricating removable dentures may become largely a matter of medical informatics rather than of manual technical procedures. However, the methods still have several limitations. PMID:27095912

  19. Computational intelligence techniques for biological data mining: An overview

    Science.gov (United States)

    Faye, Ibrahima; Iqbal, Muhammad Javed; Said, Abas Md; Samir, Brahim Belhaouari

    2014-10-01

    Computational techniques have been successfully utilized for highly accurate analysis and modeling of the multifaceted and raw biological data gathered from various genome sequencing projects. These techniques are proving much more effective at overcoming the limitations of traditional in-vitro experiments on the constantly increasing volume of sequence data. The most critical problems that have caught the attention of researchers include, but are not limited to: accurate structure and function prediction of unknown proteins, protein subcellular localization prediction, finding protein-protein interactions, protein fold recognition, analysis of microarray gene expression data, etc. To solve these problems, various classification and clustering techniques based on machine learning have been extensively used in the published literature. These techniques include neural network algorithms, genetic algorithms, fuzzy ARTMAP, K-Means, K-NN, SVM, rough set classifiers, decision tree and HMM based algorithms. Major difficulties in applying the above algorithms include the limitations of previous feature encoding and selection methods in extracting the best features, increasing classification accuracy and decreasing the running time overheads of the learning algorithms. The application of this research would be potentially useful in drug design and in the diagnosis of some diseases. This paper presents a concise overview of the well-known protein classification techniques.

  20. The effect of time-dependent coupling on non-equilibrium steady states

    DEFF Research Database (Denmark)

    Cornean, Horia; Neidhardt, Hagen; Zagrebnov, Valentin

    Consider (for simplicity) two one-dimensional semi-infinite leads coupled to a quantum well via time dependent point interactions. In the remote past the system is decoupled, and each of its components is at thermal equilibrium. In the remote future the system is fully coupled. We define...... and compute the non equilibrium steady state (NESS) generated by this evolution. We show that when restricted to the subspace of absolute continuity of the fully coupled system, the state does not depend at all on the switching. Moreover, we show that the stationary charge current has the same invariant...

  1. The effect of time-dependent coupling on non-equilibrium steady states

    DEFF Research Database (Denmark)

    Cornean, Horia; Neidhardt, Hagen; Zagrebnov, Valentin A.

    2009-01-01

    Consider (for simplicity) two one-dimensional semi-infinite leads coupled to a quantum well via time dependent point interactions. In the remote past the system is decoupled, and each of its components is at thermal equilibrium. In the remote future the system is fully coupled. We define...... and compute the non equilibrium steady state (NESS) generated by this evolution. We show that when restricted to the subspace of absolute continuity of the fully coupled system, the state does not depend at all on the switching. Moreover, we show that the stationary charge current has the same invariant...

  2. A computational technique for turbulent flow of wastewater sludge.

    Science.gov (United States)

    Bechtel, Tom B

    2005-01-01

    A computational fluid dynamics (CFD) technique applied to the turbulent flow of wastewater sludge in horizontal, smooth-wall, circular pipes is presented. The technique uses the Crank-Nicolson finite difference method in conjunction with the variable secant method, an algorithm for determining the pressure gradient of the flow. A simple algebraic turbulence model is used. A Bingham-plastic rheological model is used to describe the shear stress/shear rate relationship for the wastewater sludge. The method computes velocity gradient and head loss, given a fixed volumetric flow, pipe size, and solids concentration. Solids concentrations ranging from 3 to 10% (by weight) and nominal pipe sizes from 0.15 m (6 in.) to 0.36 m (14 in.) are studied. Comparison of the CFD results for water to established values serves to validate the numerical method. The head loss results are presented in terms of a head loss ratio, R(hl), which is the ratio of sludge head loss to water head loss. An empirical equation relating R(hl) to pipe velocity and solids concentration, derived from the results of the CFD calculations, is presented. The results are compared with published values of R(hl) for solids concentrations of 3 and 6%. A new expression for the Fanning friction factor for wastewater sludge flow is also presented.
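
    The "variable secant" element can be sketched generically: the pressure gradient is treated as the unknown of a scalar root-finding problem F(g) = Q(g) - Q_target = 0, where Q(g) is the volumetric flow returned by the velocity-profile solver, and secant updates drive the computed flow to the specified one. The compute_flow callable below is a placeholder for the Crank-Nicolson/Bingham-plastic solver described in the abstract.

        def solve_pressure_gradient(compute_flow, q_target, g0, g1,
                                    tol=1e-8, max_iter=50):
            """Secant iteration on the pressure gradient g so that compute_flow(g)
            matches the specified volumetric flow q_target.

            compute_flow : callable g -> volumetric flow (placeholder for the
                           Bingham-plastic Crank-Nicolson solver)
            g0, g1       : two initial guesses for the pressure gradient
            """
            f0 = compute_flow(g0) - q_target
            f1 = compute_flow(g1) - q_target
            for _ in range(max_iter):
                if abs(f1 - f0) < 1e-30:
                    break
                g2 = g1 - f1 * (g1 - g0) / (f1 - f0)     # secant update
                f2 = compute_flow(g2) - q_target
                g0, f0, g1, f1 = g1, f1, g2, f2
                if abs(f1) < tol * max(abs(q_target), 1.0):
                    break
            return g1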

  3. Acceleration of FDTD mode solver by high-performance computing techniques.

    Science.gov (United States)

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than 30 times improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as that of the standard finite-difference eigenmode solver, yet it requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
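
    As an illustration of the kind of explicit stencil update that maps well onto GPUs, here is a minimal NumPy sketch of a standard 2D FDTD (TMz Yee-grid) time loop. It is not the paper's compact wave-equation/matrix-pencil formulation and not a CUDA kernel; the grid, source and material parameters are made up and the units are normalized.

```python
import numpy as np

# Illustrative grid and time-stepping parameters (not from the paper).
nx, ny, nsteps = 200, 200, 500
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                          # CFL-stable time step in 2D

ez = np.zeros((nx, ny))                    # TMz field components on a Yee grid
hx = np.zeros((nx, ny - 1))
hy = np.zeros((nx - 1, ny))

for n in range(nsteps):
    # Update magnetic fields from the curl of Ez.
    hx -= (dt / dx) * (ez[:, 1:] - ez[:, :-1])
    hy += (dt / dx) * (ez[1:, :] - ez[:-1, :])
    # Update Ez in the interior from the curl of H.
    ez[1:-1, 1:-1] += (dt / dx) * (
        (hy[1:, 1:-1] - hy[:-1, 1:-1]) - (hx[1:-1, 1:] - hx[1:-1, :-1])
    )
    # Soft point source: a ramped sinusoid injected at the grid centre.
    ez[nx // 2, ny // 2] += np.sin(2 * np.pi * 0.02 * n) * min(1.0, n / 50.0)

print("peak |Ez| after", nsteps, "steps:", np.abs(ez).max())
```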

  4. MHD equilibrium with toroidal rotation

    International Nuclear Information System (INIS)

    Li, J.

    1987-03-01

    The present work attempts to formulate the equilibrium of an axisymmetric plasma with purely toroidal flow within ideal MHD theory. In general, the inertial term ρ(v·∇)v caused by plasma flow is so complicated that the equilibrium equation is completely different from the Grad-Shafranov equation. However, in the case of purely toroidal flow the equilibrium equation can be simplified so that it resembles the Grad-Shafranov equation. Generally, one arbitrary two-variable function and two arbitrary single-variable functions, instead of only four single-variable functions, are allowed in the new equilibrium equations. Also, the boundary conditions of the rotating equilibrium (with purely toroidal fluid flow) are the same as those of the static equilibrium (without any fluid flow), so numerically one can calculate the rotating equilibrium as a static equilibrium. (author)
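
    For orientation, the static Grad-Shafranov equation that the rotating-equilibrium equation is said to resemble has the standard form shown below; the note on how toroidal rotation modifies it is a schematic reconstruction in the spirit of standard treatments of purely toroidal flow, not the paper's exact equations.

    $$ \Delta^{*}\psi \;\equiv\; R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial\psi}{\partial R}\right)+\frac{\partial^{2}\psi}{\partial Z^{2}} \;=\; -\mu_{0}R^{2}\,\frac{dp}{d\psi}\;-\;F(\psi)\,\frac{dF}{d\psi} $$

    With purely toroidal rotation the pressure can no longer be a flux function alone; the pressure term is replaced by $-\mu_{0}R^{2}\,\left.\partial p(\psi,R)/\partial\psi\right|_{R}$, which is how the single arbitrary two-variable function mentioned in the abstract enters, while $F(\psi)$ and the rotation profile remain single-variable functions.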

  5. Equilibrium and non equilibrium in fragmentation

    International Nuclear Information System (INIS)

    Dorso, C.O.; Chernomoretz, A.; Lopez, J.A.

    2001-01-01

    Full text: In this communication we present recent results regarding the interplay of equilibrium and non-equilibrium in the process of fragmentation of excited finite Lennard-Jones drops. Because the general features of such a potential resemble those of the nuclear interaction (a fact reinforced by the similarity between the EOS of the two systems), these studies are not only relevant from a fundamental point of view but also shed light on the problem of nuclear multifragmentation. We focus on the microscopic analysis of the state of the fragmenting system at fragmentation time. We show that the caloric curve (i.e., the functional relationship between the temperature of the system and the excitation energy) is of the rise-plateau type, with no vapor branch. The usual rise-plateau-rise pattern is only recovered when equilibrium is artificially imposed. This result puts a serious question on the validity of the freeze-out hypothesis. This feature is independent of the dimensionality or excitation mechanism. Moreover, we explore the behavior of magnitudes which can help us determine the degree of the assumed phase transition. It is found that no clear-cut criterion is presently available. (Author)

  6. Failure of Local Thermal Equilibrium in Quantum Friction

    Science.gov (United States)

    Intravaia, F.; Behunin, R. O.; Henkel, C.; Busch, K.; Dalvit, D. A. R.

    2016-09-01

    Recent progress in manipulating atomic and condensed matter systems has instigated a surge of interest in nonequilibrium physics, including many-body dynamics of trapped ultracold atoms and ions, near-field radiative heat transfer, and quantum friction. Under most circumstances the complexity of such nonequilibrium systems requires a number of approximations to make theoretical descriptions tractable. In particular, it is often assumed that spatially separated components of a system thermalize with their immediate surroundings, although the global state of the system is out of equilibrium. This powerful assumption reduces the complexity of nonequilibrium systems to the local application of well-founded equilibrium concepts. While this technique appears to be consistent for the description of some phenomena, we show that it fails for quantum friction by underestimating by approximately 80% the magnitude of the drag force. Our results show that the correlations among the components of driven, but steady-state, quantum systems invalidate the assumption of local thermal equilibrium, calling for a critical reexamination of this approach for describing the physics of nonequilibrium systems.

  7. On equilibrium real exchange rates in euro area: Special focus on behavioral equilibrium exchange rates in Ireland and Greece

    OpenAIRE

    Klára Plecitá; Luboš Střelec

    2012-01-01

    This paper focuses on the intra-euro-area imbalances. Therefore the first aim of this paper is to identify euro-area countries exhibiting macroeconomic imbalances. The subsequent aim is to estimate equilibrium real exchange rates for these countries and to compute their degrees of real exchange rate misalignment. The intra-area balance is assessed using Cluster Analysis and Principal Component Analysis; on this basis Greece and Ireland are selected as the two euro-area countries with ...

  8. Non-equilibrium magnetic interactions in strongly correlated systems

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, A., E-mail: a.secchi@science.ru.nl [Institute for Molecules and Materials, Radboud University Nijmegen, 6525 AJ Nijmegen (Netherlands); Brener, S.; Lichtenstein, A.I. [Institut für Theoretische Physik, Universitat Hamburg, Jungiusstraße 9, D-20355 Hamburg (Germany); Katsnelson, M.I. [Institute for Molecules and Materials, Radboud University Nijmegen, 6525 AJ Nijmegen (Netherlands)

    2013-06-15

    We formulate a low-energy theory for the magnetic interactions between electrons in the multi-band Hubbard model under non-equilibrium conditions determined by an external time-dependent electric field which simulates laser-induced spin dynamics. We derive expressions for dynamical exchange parameters in terms of non-equilibrium electronic Green functions and self-energies, which can be computed, e.g., with the methods of time-dependent dynamical mean-field theory. Moreover, we find that a correct description of the system requires, in addition to exchange, a new kind of magnetic interaction, that we name twist exchange, which formally resembles Dzyaloshinskii–Moriya coupling, but is not due to spin–orbit, and is actually due to an effective three-spin interaction. Our theory allows the evaluation of the related time-dependent parameters as well. -- Highlights: •We develop a theory for magnetism of strongly correlated systems out of equilibrium. •Our theory is suitable for laser-induced ultrafast magnetization dynamics. •We write time-dependent exchange parameters in terms of electronic Green functions. •We find a new magnetic interaction, a “twist exchange”. •We give general expressions for magnetic noise in itinerant-electron systems.

  9. Application of the 3D Iced-Ale method to equilibrium and stability problems of a magnetically confined plasma

    International Nuclear Information System (INIS)

    Barnes, D.C.; Brackbill, J.U.

    1977-01-01

    A numerical study of the equilibrium and stability properties of the Scyllac experiment at Los Alamos is described. The formulation of the numerical method, which is an extension of the ICED-ALE method to magnetohydrodynamic flow in three dimensions, is given. The properties of the method are discussed, including low computational diffusion, local conservation, and implicit formulation in the time variable. Also discussed are the problems encountered in applying boundary conditions and computing equilibria. The results of numerical computations of equilibria indicate that the helical field amplitudes must be doubled from their design values to produce equilibrium in the Scyllac experiment. This is consistent with other theoretical and experimental results

  10. Non-equilibrium Economics

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2007-02-01

    Full Text Available A microeconomic, agent-based framework for dynamic economics is formulated in a materialist approach. An axiomatic foundation of a non-equilibrium microeconomics is outlined. Economic activity is modelled as transformation and transport of commodities (materials) owned by the agents. Rates of transformation (production intensity) and rates of transport (trade) are defined by the agents. Economic decision rules are derived from the observed economic behaviour. The non-linear equations are solved numerically for a model economy. Numerical solutions for simple model economies suggest that some of the results of general equilibrium economics are consequences only of the equilibrium hypothesis. We show that perfect competition of selfish agents does not guarantee the stability of economic equilibrium, but that cooperativity is needed, too.

  11. Plant process computer replacements - techniques to limit installation schedules and costs

    International Nuclear Information System (INIS)

    Baker, M.D.; Olson, J.L.

    1992-01-01

    Plant process computer systems, a standard fixture in all nuclear power plants, are used to monitor and display important plant process parameters. Scanning thousands of field sensors and alarming out-of-limit values, these computer systems are heavily relied on by control room operators. The original nuclear steam supply system (NSSS) vendor for the power plant often supplied the plant process computer. Designed using sixties and seventies technology, a plant's original process computer has been obsolete for some time. Driven by increased maintenance costs and new US Nuclear Regulatory Commission regulations such as NUREG-0737, Suppl. 1, many utilities have replaced their process computers with more modern computer systems. Given that computer systems are by their nature prone to rapid obsolescence, this replacement cycle will likely repeat. A process computer replacement project can be a significant capital expenditure and must be performed during a scheduled refueling outage. The object of the installation process is to install a working system on schedule. Experience gained by supervising several computer replacement installations has taught lessons that, if applied, will shorten the schedule and limit the risk of costly delays. Examples illustrating this technique are given. This paper and these examples deal only with the installation process and assume that the replacement computer system has been adequately designed, developed, and factory tested

  12. GRAVTool, Advances on the Package to Compute Geoid Model path by the Remove-Compute-Restore Technique, Following Helmert's Condensation Method

    Science.gov (United States)

    Marotta, G. S.

    2017-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astrogeodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and a Global Geopotential Model (GGM), respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents the advances on the package called GRAVTool to compute geoid models by the RCR technique, following Helmert's condensation method, and its application in a study area. The study area comprises the Federal District of Brazil, with 6000 km², wavy relief and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show a geoid model computed by the GRAVTool package, after analysis of the density, DTM and GGM values, that is better matched to the reference values used in the study area. The accuracy of the computed model (σ = ±0.058 m, RMS = 0.067 m, maximum = 0.124 m and minimum = -0.155 m), using a density value of 2.702 g/cm³ ± 0.024 g/cm³, the DTM SRTM Void Filled 3 arc-second and the GGM EIGEN-6C4 up to degree and order 250, matches the uncertainty (σ = ±0.073 m) of 26 randomly spaced points where the geoid was determined by the geometric levelling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.076 m, RMS = 0.098 m, maximum = 0.320 m and minimum = -0.061 m).
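
    Schematically, the Remove-Compute-Restore decomposition combines three contributions (a generic textbook form, not necessarily GRAVTool's exact implementation):

    $$ N \;=\; N_{\mathrm{GGM}} + N_{\Delta g_{\mathrm{res}}} + N_{\mathrm{ind}}, \qquad \Delta g_{\mathrm{res}} \;=\; \Delta g_{\mathrm{obs}} - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{topo}}, $$

    where the long wavelengths $N_{\mathrm{GGM}}$ come from the global geopotential model, the residual contribution $N_{\Delta g_{\mathrm{res}}}$ is obtained by Stokes integration of the residual gravity anomalies after removal of the GGM and topographic (Helmert condensation) effects, and $N_{\mathrm{ind}}$ restores the indirect effect of the condensation.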

  13. N-Methyl Inversion and Accurate Equilibrium Structures in Alkaloids: Pseudopelletierine.

    Science.gov (United States)

    Vallejo-López, Montserrat; Écija, Patricia; Vogt, Natalja; Demaison, Jean; Lesarri, Alberto; Basterretxea, Francisco J; Cocinero, Emilio J

    2017-11-21

    A rotational spectroscopy investigation has resolved the conformational equilibrium and structural properties of the alkaloid pseudopelletierine. Two different conformers, which originate from inversion of the N-methyl group from an axial to an equatorial position, have been unambiguously identified in the gas phase, and nine independent isotopologues have been recorded by Fourier-transform microwave spectroscopy in a jet expansion. Both conformers share a chair-chair configuration of the two bridged six-membered rings. The conformational equilibrium is displaced towards the axial form, with a relative population in the supersonic jet of N axial /N equatorial ≈2/1. An accurate equilibrium structure has been determined by using the semiexperimental mixed-estimation method and alternatively computed by quantum-chemical methods up to the coupled-cluster level of theory. A comparison with the N-methyl inversion equilibria in related tropanes is also presented. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A Strategic-Equilibrium Based

    Directory of Open Access Journals (Sweden)

    Gabriel J. Turbay

    2011-03-01

    Full Text Available The strategic equilibrium of an N-person cooperative game with transferable utility is a system composed of a cover collection of subsets of N and a set of extended imputations attainable through such an equilibrium cover. The system describes a state of coalitional bargaining stability where every player has a bargaining alternative against any other player to support his corresponding equilibrium claim. Any coalition in the stable system may form and divide the characteristic value function of the coalition as prescribed by the equilibrium payoffs. If syndicates are allowed to form, a formed coalition may become a syndicate using the equilibrium payoffs as disagreement values in bargaining for a part of the complementary coalition's incremental value to the grand coalition when formed. The emergent well-known constant-sum derived game in partition function form is described in terms of parameters that result from incumbent binding agreements. The strategic equilibrium corresponding to the derived game gives an equal value claim to all players. This surprising result is alternatively explained in terms of strategic-equilibrium-based possible outcomes by a sequence of bargaining stages in which, when the binding agreements are in the right sequential order, von Neumann and Morgenstern (vN-M) non-discriminatory solutions emerge. In these solutions a preferred branch by a sufficient number of players is identified: the weaker players syndicate against the stronger player. This condition is referred to as the stronger-player paradox. A strategic alternative available to the stronger player to overcome the anticipated undesirable results is to voluntarily lower his bargaining equilibrium claim. In doing so, the original strategic equilibrium is modified and vN-M discriminatory solutions may occur, but a different stronger player may emerge that will eventually have to lower his equilibrium claim. A sequence of such measures converges to the equal

  15. Ion exchange equilibrium constants

    CERN Document Server

    Marcus, Y

    2013-01-01

    Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and

  16. Non-equilibrium hydrogen ionization in 2D simulations of the solar atmosphere

    Science.gov (United States)

    Leenaarts, J.; Carlsson, M.; Hansteen, V.; Rutten, R. J.

    2007-10-01

    Context: The ionization of hydrogen in the solar chromosphere and transition region does not obey LTE or instantaneous statistical equilibrium because the timescale is long compared with important hydrodynamical timescales, especially of magneto-acoustic shocks. Since the pressure, temperature, and electron density depend sensitively on hydrogen ionization, numerical simulation of the solar atmosphere requires non-equilibrium treatment of all pertinent hydrogen transitions. The same holds for any diagnostic application employing hydrogen lines. Aims: To demonstrate the importance and to quantify the effects of non-equilibrium hydrogen ionization, both on the dynamical structure of the solar atmosphere and on hydrogen line formation, in particular Hα. Methods: We implement an algorithm to compute non-equilibrium hydrogen ionization and its coupling into the MHD equations within an existing radiation MHD code, and perform a two-dimensional simulation of the solar atmosphere from the convection zone to the corona. Results: Analysis of the simulation results and comparison to a companion simulation assuming LTE shows that: a) non-equilibrium computation delivers much smaller variations of the chromospheric hydrogen ionization than for LTE. The ionization is smaller within shocks but subsequently remains high in the cool intershock phases. As a result, the chromospheric temperature variations are much larger than for LTE because in non-equilibrium, hydrogen ionization is a less effective internal energy buffer. The actual shock temperatures are therefore higher and the intershock temperatures lower. b) The chromospheric populations of the hydrogen n = 2 level, which governs the opacity of Hα, are coupled to the ion populations. They are set by the high temperature in shocks and subsequently remain high in the cool intershock phases. c) The temperature structure and the hydrogen level populations differ much between the chromosphere above photospheric magnetic elements
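
    In schematic form, the non-equilibrium treatment referred to above amounts to advecting the hydrogen level populations with explicit rate equations instead of assuming instantaneous statistical equilibrium (a standard formulation, not the code's exact discretization):

    $$ \frac{\partial n_{i}}{\partial t} + \nabla\!\cdot\!\left(n_{i}\mathbf{v}\right) \;=\; \sum_{j\neq i} n_{j}P_{ji} \;-\; n_{i}\sum_{j\neq i} P_{ij}, \qquad P_{ij}=C_{ij}+R_{ij}, $$

    where $C_{ij}$ and $R_{ij}$ are collisional and radiative rate coefficients between bound levels and the continuum; statistical equilibrium is recovered only when these rates are fast compared with the hydrodynamical time scales.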

  17. Proceedings of RIKEN BNL research center workshop, equilibrium and non-equilibrium aspects of hot, dense QCD, Vol. 28

    International Nuclear Information System (INIS)

    De Vega, H.J.; Boyanovsky, D.

    2000-01-01

    The Relativistic Heavy Ion Collider (RHIC) at Brookhaven, beginning operation this year, and the Large Hadron Collider (LHC) at CERN, beginning operation ∼2005, will provide an unprecedented range of energies and luminosities that will allow us to probe the Gluon-Quark plasma. At RHIC and LHC, typical estimates of the energy densities and temperatures at central rapidity are ε ≈ 1-10 GeV/fm³ and T₀ ≈ 300-900 MeV. Such energies are well above current estimates for the GQ plasma. Initially, this hot, dense plasma is far from local thermal equilibrium, making the theoretical study of transport phenomena, kinetic and chemical equilibration in dense and hot plasmas, and related issues a matter of fundamental importance. During the last few years a consistent framework to study collective effects in the Gluon-Quark plasma, and a microscopic description of transport in terms of the hard thermal (and dense) loops resummation program, has emerged. This approach has the potential of providing a microscopic formulation of transport in the regime of temperatures and densities to be achieved at RHIC and LHC. A parallel development over the last few years has provided a consistent formulation of non-equilibrium quantum field theory that gives a real-time description of phenomena out of equilibrium. Novel techniques, including non-perturbative approaches and dynamical renormalization group techniques, lead to new insights into transport and relaxation. A deeper understanding of collective excitations and transport phenomena in the GQ plasma could lead to the recognition of novel potential experimental signatures. New insight into small-x physics reveals a striking similarity between small-x and hard thermal loops, and novel real-time numerical simulations have recently studied the parton distributions and their thermalization in the initial stages of a heavy-ion collision

  18. Fluorescence lifetime components reveal kinetic intermediate states upon equilibrium denaturation of carbonic anhydrase II.

    Science.gov (United States)

    Nemtseva, Elena V; Lashchuk, Olesya O; Gerasimova, Marina A; Melnik, Tatiana N; Nagibina, Galina S; Melnik, Bogdan S

    2017-12-21

    In most cases, intermediate states of multistage folding proteins are not 'visible' under equilibrium conditions but are revealed in kinetic experiments. Time-resolved fluorescence spectroscopy was used in equilibrium denaturation studies. The technique allows for detecting changes in the conformation and environment of tryptophan residues in different structural elements of carbonic anhydrase II, which in turn has made it possible to study the intermediate states of carbonic anhydrase II under equilibrium conditions. The results of equilibrium and kinetic experiments using wild-type bovine carbonic anhydrase II and its mutant form with leucine at position 139 substituted by alanine (L139A) were compared. The obtained lifetime components of intrinsic tryptophan fluorescence revealed that, as in kinetic experiments, under equilibrium conditions the unfolding of carbonic anhydrase II proceeds through the formation of intermediate states.

  19. Partition Function and Configurational Entropy in Non-Equilibrium States: A New Theoretical Model

    Directory of Open Access Journals (Sweden)

    Akira Takada

    2018-03-01

    Full Text Available A new model of non-equilibrium thermodynamic states has been investigated on the basis of the fact that all thermodynamic variables can be derived from partition functions. We have thus attempted to define partition functions for non-equilibrium conditions by introducing the concept of pseudo-temperature distributions. These pseudo-temperatures are configurational in origin and distinct from kinetic (phonon) temperatures because they refer to particular fragments of the system with specific energies. This definition allows thermodynamic states to be described either for equilibrium or non-equilibrium conditions. In addition, a new formulation of an extended canonical partition function, internal energy and entropy is derived from this new temperature definition. With this new model, computational experiments are performed on simple non-interacting systems to investigate cooling and two distinct relaxational effects in terms of the time profiles of the partition function, internal energy and configurational entropy.

  20. Tokamak plasma equilibrium problems with anisotropic pressure and rotation and their numerical solution

    International Nuclear Information System (INIS)

    Ivanov, A. A.; Martynov, A. A.; Medvedev, S. Yu.; Poshekhonov, Yu. Yu.

    2015-01-01

    In MHD tokamak plasma theory, the plasma pressure is usually assumed to be isotropic. However, plasma heating by neutral beam injection and RF heating can lead to a strong anisotropy of plasma parameters and rotation of the plasma. The development of MHD equilibrium theory taking into account the plasma inertia and anisotropic pressure began a long time ago, but until now it has not been consistently applied in computational codes for engineering calculations of the plasma equilibrium and evolution in tokamaks. This paper contains a detailed derivation of the axisymmetric plasma equilibrium equation in the most general form (with arbitrary rotation and anisotropic pressure) and description of the specialized version of the SPIDER code. The original method of calculation of the equilibrium with an anisotropic pressure and a prescribed rotational transform profile is proposed. Examples of calculations and discussion of the results are also presented

  1. China’s Rare Earths Supply Forecast in 2025: A Dynamic Computable General Equilibrium Analysis

    Directory of Open Access Journals (Sweden)

    Jianping Ge

    2016-09-01

    Full Text Available The supply of rare earths in China has been the focus of significant attention in recent years. Due to changes in regulatory policies and the development of strategic emerging industries, it is critical to investigate the scenario of rare earth supplies in 2025. To address this question, this paper constructed a dynamic computable general equilibrium (DCGE) model to forecast the production, domestic supply, and export of China’s rare earths in 2025. Based on our analysis, production will increase by 10.8%–12.6% and reach 116,335–118,260 tons of rare-earth oxide (REO) in 2025, given the extraction controls in place during 2011–2016. Moreover, domestic supply and export will be 75,081–76,800 tons REO and 38,797–39,400 tons REO, respectively. Technological improvements in substitution and recycling will significantly decrease the supply and mining activities of rare earths. From a policy perspective, we found that the elimination of export regulations, including export quotas and export taxes, does have a negative impact on China’s future domestic supply of rare earths. The policy conflict between the increase in investment in strategic emerging industries and the increase in resource and environmental taxes on rare earths will also affect China’s rare earths supply in the future.

  2. The inherent dangers of using computable general equilibrium models as a single integrated modelling framework for sustainability impact assessment. A critical note on Boehringer and Loeschel (2006)

    International Nuclear Information System (INIS)

    Scrieciu, S. Serban

    2007-01-01

    The search for methods of assessment that best evaluate and integrate the trade-offs and interactions between the economic, environmental and social components of development has been receiving a new impetus due to the requirement that sustainability concerns be incorporated into the policy formulation process. A paper forthcoming in Ecological Economics (Boehringer, C., Loeschel, A., in press. Computable general equilibrium models for sustainability impact assessment: status quo and prospects, Ecological Economics.) claims that Computable General Equilibrium (CGE) models may potentially represent the much needed 'back-bone' tool to carry out reliable integrated quantitative Sustainability Impact Assessments (SIAs). While acknowledging the usefulness of CGE models for some dimensions of SIA, this commentary questions the legitimacy of employing this particular economic modelling tool as a single integrating modelling framework for a comprehensive evaluation of the multi-dimensional, dynamic and complex interactions between policy and sustainability. It discusses several inherent dangers associated with the advocated prospects for the CGE modelling approach to contribute to comprehensive and reliable sustainability impact assessments. The paper warns that this reductionist viewpoint may seriously infringe upon the basic values underpinning the SIA process, namely a transparent, heterogeneous, balanced, inter-disciplinary, consultative and participatory approach to policy evaluation and building of the evidence-base. (author)

  3. Equilibrium-point control hypothesis examined by measured arm stiffness during multijoint movement.

    Science.gov (United States)

    Gomi, H; Kawato

    1996-04-05

    For the last 20 years, it has been hypothesized that well-coordinated, multijoint movements are executed without complex computation by the brain, with the use of springlike muscle properties and peripheral neural feedback loops. However, it has been technically and conceptually difficult to examine this "equilibrium-point control" hypothesis directly in physiological or behavioral experiments. A high-performance manipulandum was developed and used here to measure human arm stiffness, the magnitude of which during multijoint movement is important for this hypothesis. Here, the equilibrium-point trajectory was estimated from the measured stiffness, the actual trajectory, and the generated torque. Its velocity profile differed from that of the actual trajectory. These results argue against the hypothesis that the brain sends as a motor command only an equilibrium-point trajectory similar to the actual trajectory.
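
    The estimation step described above can be summarized with a simple spring-like relation (a schematic reconstruction for illustration, not the authors' exact formulation): if the measured joint stiffness K(t) relates the restoring torque to the deviation from the equilibrium posture, then

    $$ \tau(t) \;=\; K(t)\,\bigl[\theta_{\mathrm{eq}}(t)-\theta(t)\bigr] \quad\Longrightarrow\quad \theta_{\mathrm{eq}}(t) \;=\; \theta(t) + K(t)^{-1}\,\tau(t), $$

    so the equilibrium-point trajectory follows from the measured stiffness, the actual trajectory and the generated torque; in practice a viscous term $B(t)\dot{\theta}(t)$ would also be included before inverting.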

  4. Non-equilibrium fluctuation-induced interactions

    International Nuclear Information System (INIS)

    Dean, David S

    2012-01-01

    We discuss non-equilibrium aspects of fluctuation-induced interactions. While the equilibrium behavior of such interactions has been extensively studied and is relatively well understood, the study of these interactions out of equilibrium is relatively new. We discuss recent results on the non-equilibrium behavior of systems whose dynamics is of the dissipative stochastic type and identify a number of outstanding problems concerning non-equilibrium fluctuation-induced interactions.

  5. Teaching Computer Ergonomic Techniques: Practices and Perceptions of Secondary and Postsecondary Business Educators.

    Science.gov (United States)

    Alexander, Melody W.; Arp, Larry W.

    1997-01-01

    A survey of 260 secondary and 251 postsecondary business educators found the former more likely to think computer ergonomic techniques should be taught in elementary school and to address the hazards of improper use. Both groups stated that over half of the students they observe do not use good techniques and agreed that students need continual…

  6. Statistical equilibrium calculations for silicon in early-type model stellar atmospheres

    International Nuclear Information System (INIS)

    Kamp, L.W.

    1976-02-01

    Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of the range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0--B5, luminosity classes III, IV, and V

  7. Instrumentation, computer software and experimental techniques used in low-frequency internal friction studies at WNRE

    International Nuclear Information System (INIS)

    Sprugmann, K.W.; Ritchie, I.G.

    1980-04-01

    A detailed and comprehensive account of the equipment, computer programs and experimental methods developed at the Whiteshell Nuclear Research Establishment for the study of low-frequency internal friction is presented. Part I describes the mechanical apparatus, electronic instrumentation and computer software, while Part II describes in detail the laboratory techniques and the various types of experiments performed, together with data reduction and analysis. Experimental procedures for the study of internal friction as a function of temperature, strain amplitude or time are described. Computer control of these experiments using the free-decay technique is outlined. In addition, a pendulum constant-amplitude drive system is described. (auth)

  8. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural network (ANN) and fuzzy logic approaches are proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both these approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to hydrologic time series modeling, illustrated by an application to model the flow of the Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most time series modeling techniques. The results showed that the ANFIS-forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, and peak flow estimation. It was observed that the ANFIS model preserves the potential of the ANN approach fully, and eases the model building process.
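
    For readers unfamiliar with the ANFIS structure, the sketch below evaluates the forward pass of a tiny two-rule, first-order Sugeno fuzzy system of the kind ANFIS trains; the hybrid least-squares/backpropagation training step is omitted, and all membership and consequent parameters are invented for illustration.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x, prev_flow):
    """Forward pass of a 2-rule, 2-input first-order Sugeno system:
    rule i: IF x is A_i AND prev_flow is B_i THEN f_i = p_i*x + q_i*prev_flow + r_i."""
    # Layer 1: membership grades (illustrative parameters).
    mu_A = [gauss_mf(x, 0.2, 0.15), gauss_mf(x, 0.8, 0.15)]
    mu_B = [gauss_mf(prev_flow, 0.3, 0.2), gauss_mf(prev_flow, 0.7, 0.2)]
    # Layer 2: rule firing strengths (product T-norm).
    w = np.array([mu_A[0] * mu_B[0], mu_A[1] * mu_B[1]])
    # Layer 3: normalization.
    w_bar = w / w.sum()
    # Layer 4: linear (first-order Sugeno) consequents.
    p, q, r = np.array([0.5, 1.2]), np.array([0.3, -0.1]), np.array([0.05, 0.4])
    f = p * x + q * prev_flow + r
    # Layer 5: weighted sum gives the forecast.
    return float(np.dot(w_bar, f))

print(anfis_forward(x=0.6, prev_flow=0.5))   # forecast for normalized inputs
```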

  9. Parallel computing techniques for rotorcraft aerodynamics

    Science.gov (United States)

    Ekici, Kivanc

    The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, main modifications to the implicit operator, Lower-Upper Symmetric Gauss Seidel (LU-SGS) originally used in TURNS, is performed. Second, application of an inexact Newton method, coupled with a Krylov subspace iterative method (Newton-Krylov method) is carried out. Both techniques have been tried previously for the Euler equations mode of the code. In this work, we have extended the methods to the Navier-Stokes mode. Several new implicit operators were tried because of convergence problems of traditional operators with the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results for both Euler and Navier-Stokes cases are presented for these operators. For the efficient implementation of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used. The parallel implicit operators used in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).

  10. Artifact Elimination Technique in Tomogram of X-ray Computed Tomography

    International Nuclear Information System (INIS)

    Rasif Mohd Zain

    2015-01-01

    Artifacts in tomograms are a common problem in X-ray computed tomography. Artifacts appear in the tomogram due to noise, beam hardening, and scattered radiation. The study has been carried out using a CdTe Timepix detector. A new technique has been developed to eliminate the artifacts through both hardware and software measures. The hardware setup involved the careful alignment of all the components of the system and the introduction of a beam collimator. Meanwhile, the software development deals with flat-field correction, noise filtering and the data projection algorithm. The results show that the technique developed produces good-quality images and eliminates the artifacts. (author)
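
    Of the software steps listed above, the flat-field correction is the most standard and can be sketched in a few lines (a generic illustration, not the authors' code; the array names and test frames are hypothetical):

```python
import numpy as np

def flat_field_correct(raw, flat, dark, eps=1e-6):
    """Standard flat-field (gain/offset) correction for a radiograph:
    subtract the dark frame and normalize by the open-beam profile."""
    num = raw.astype(float) - dark
    den = np.maximum(flat.astype(float) - dark, eps)   # avoid division by zero
    return num / den

# Hypothetical 256x256 test frames.
rng = np.random.default_rng(0)
dark = rng.normal(100, 2, (256, 256))                  # detector offset
flat = dark + rng.normal(1000, 30, (256, 256))         # open-beam image
raw = dark + 0.7 * (flat - dark)                       # object attenuating 30%
print(flat_field_correct(raw, flat, dark).mean())      # ~0.7
```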

  11. Non-perturbative calculation of equilibrium polarization of stored electron beams

    International Nuclear Information System (INIS)

    Yokoya, Kaoru.

    1992-05-01

    Stored electron/positron beams polarize spontaneously owing to the spin-flip synchrotron radiation. In the existing computer codes, the degree of the equilibrium polarization has been calculated using perturbation expansions in terms of the orbital oscillation amplitudes. In this paper a new numerical method is presented which does not employ the perturbation expansion. (author)

  12. The development of computer industry and applications of its relevant techniques in nuclear research laboratories

    International Nuclear Information System (INIS)

    Dai Guiliang

    1988-01-01

    The increasing needs for computers in the area of nuclear science and technology are described. The current status of commercially available computer products of different scales in the world market is briefly reviewed. A survey of some noticeable techniques is given from the viewpoint of computer applications in nuclear science research laboratories

  13. Solving Multi-Pollutant Emission Dispatch Problem Using Computational Intelligence Technique

    Directory of Open Access Journals (Sweden)

    Nur Azzammudin Rahmat

    2016-06-01

    Full Text Available Economic dispatch is a crucial process conducted by the utilities to correctly determine the amount of power to be generated and distributed to the consumers. During the process, the utilities also consider pollutant emission as a consequence of fossil-fuel consumption. Fossil fuels include petroleum, coal, and natural gas; each has its unique chemical composition of pollutants, i.e. sulphur oxides (SOx), nitrogen oxides (NOx) and carbon oxides (COx). This paper presents a multi-pollutant emission dispatch problem using a computational intelligence technique. In this study, a novel emission dispatch technique is formulated to determine the resulting pollutant levels. It utilizes a pre-developed optimization technique termed differential evolution immunized ant colony optimization (DEIANT) for the emission dispatch problem. The optimization results indicated a high COx level, regardless of the type of fossil fuel consumed.

  14. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    OpenAIRE

    Papior, Nick Rübner; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-01-01

    We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT–NEGF code handles devices with one or multiple electrodes (Ne≥1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable m...

  15. Geodesic acoustic eigenmode for tokamak equilibrium with maximum of local GAM frequency

    Energy Technology Data Exchange (ETDEWEB)

    Lakhin, V.P. [NRC “Kurchatov Institute”, Moscow (Russian Federation); Sorokina, E.A., E-mail: sorokina.ekaterina@gmail.com [NRC “Kurchatov Institute”, Moscow (Russian Federation); Peoples' Friendship University of Russia, Moscow (Russian Federation)

    2014-01-24

    The geodesic acoustic eigenmode for a tokamak equilibrium with a maximum of the local GAM frequency is found analytically within the framework of the MHD model. The analysis is based on the asymptotic matching technique.

  16. Calculation Method for Equilibrium Points in Dynamical Systems Based on Adaptive Synchronization

    Directory of Open Access Journals (Sweden)

    Manuel Prian Rodríguez

    2017-12-01

    Full Text Available In this work, a control system is proposed as an equivalent numerical procedure whose aim is to obtain the natural equilibrium points of a dynamical system. These equilibrium points may later be employed as setpoint signals for different control techniques. The proposed procedure is based on the adaptive synchronization between an oscillator and a reference model driven by the oscillator state variables. A stability analysis is carried out and a simplified algorithm is proposed. Finally, satisfactory simulation results are shown.
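
    The adaptive-synchronization scheme itself is not reproduced here; as a point of comparison, the sketch below obtains the natural equilibrium points of a classic dynamical system (the Lorenz system) by direct root finding on its right-hand side, which is the quantity any such procedure must drive to zero. Parameter values are the standard textbook ones.

```python
import numpy as np
from scipy.optimize import fsolve

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_rhs(state):
    """Right-hand side f(x) of the Lorenz system; equilibria satisfy f(x) = 0."""
    x, y, z = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

# Different initial guesses converge to different equilibria of the system.
guesses = [(0.1, 0.0, 0.0), (10.0, 10.0, 20.0), (-10.0, -10.0, 20.0)]
equilibria = {tuple(np.round(fsolve(lorenz_rhs, g), 4)) for g in guesses}
for eq in sorted(equilibria):
    print(eq)   # expect (0,0,0) and (+/-sqrt(BETA*(RHO-1)), +/-..., RHO-1)
```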

  17. Quantity Constrained General Equilibrium

    NARCIS (Netherlands)

    Babenko, R.; Talman, A.J.J.

    2006-01-01

    In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In

  18. Precision of gated equilibrium radioventriculography in measuring left ventricular stroke volume

    International Nuclear Information System (INIS)

    Bourguignon, M.H.; Wise, R.A.; Ehrlich, W.E.; Douglas, K.H.; Camargo, E.E.; Harrison, K.E.; Wagner, H.N. Jr.

    1983-01-01

    We have demonstrated that relative changes of small amplitude in ventricular stroke volume can be measured accurately in dogs when a fully automated technique for delineation of the end-diastolic and end-systolic regions of interest (ROI) is used. Consequently, we expect such a technique to be very sensitive in measuring relative changes of any quantitative ventricular parameter from gated equilibrium radioventriculography in humans

  19. Non-equilibrium quasiparticle processes in superconductor tunneling structures

    International Nuclear Information System (INIS)

    Perold, W.J.

    1990-01-01

    A broad overview is presented of the phenomenon of superconductivity. The tunneling of quasiparticles in superconductor-insulator structures is described. Related non-equilibrium processes, such as superconductor bandgap suppression, quasiparticle diffusion and recombination, and excess quasiparticle collection are discussed. The processes are illustrated with numerical computer simulation data. The importance of the inter-relationship between these processes in practical multiple-tunneling-junction superconducting device structures is also emphasized. 14 refs., 8 figs

  20. Goya - an MHD equilibrium code for toroidal plasmas

    International Nuclear Information System (INIS)

    Scheffel, J.

    1984-09-01

    A description of the GOYA free-boundary equilibrium code is given. The non-linear Grad-Shafranov equation of ideal MHD is solved in a toroidal geometry for plasmas with purely poloidal magnetic fields. The code is based on a field line-tracing procedure, making storage of a large amount of information on a grid unnecessary. Usage of the code is demonstrated by computations of equilibria for the EXTRAP-T1 device. (Author)

  1. Ionization equilibrium and radiation losses of molybdenum in a high temperature plasma

    International Nuclear Information System (INIS)

    1976-11-01

    The ionization equilibrium and the associated radiation losses of molybdenum have been calculated as a function of the electron temperature. In the 1-2 keV range the computed fractional abundances are supported by experimental facts obtained in T.F.R. Tokamak plasmas

  2. Non-equilibrium versus equilibrium emission of complex fragments from hot nuclei

    International Nuclear Information System (INIS)

    Viola, V.E.; Kwiatkowski, K.; Yennello, S.; Fields, D.E.

    1989-01-01

    The relative contributions of equilibrium and non-equilibrium mechanisms for intermediate-mass fragment emission have been deduced for Z=3-14 fragments formed in 3He- and 14N-induced reactions on Ag and Au targets. Complete inclusive excitation function measurements have been performed for 3He projectiles from E/A=67 to 1,200 MeV and for 14N from E/A=20 to 50 MeV. The data are consistent with a picture in which equilibrated emission is important at the lowest energies, but with increasing bombarding energy the cross sections are increasingly dominated by non-equilibrium processes. Non-equilibrium emission is also shown to be favored for light fragments relative to heavy fragments. These results are supported by coincidence studies of intermediate-mass fragments tagged by linear momentum transfer measurements

  3. Replacing leads by self-energies using non-equilibrium Green's functions

    International Nuclear Information System (INIS)

    Michael, Fredrick; Johnson, M.D.

    2003-01-01

    Open quantum systems consist of semi-infinite leads which transport electrons to and from the device of interest. We show here that within the non-equilibrium Green's function technique for continuum systems, the leads can be replaced by simple c-number self-energies. Our starting point is an approach for continuum systems developed by Feuchtwang. The reformulation developed here is simpler to understand and carry out than the somewhat unwieldy manipulations typical of the Feuchtwang method. The self-energies turn out to have a limited variability: the retarded self-energy Σ^r depends on the arbitrary choice of internal boundary conditions, but the non-equilibrium self-energy or scattering function Σ which determines transport is invariant for a broad class of boundary conditions. Expressed in terms of these self-energies, continuum non-equilibrium transport calculations take a particularly simple form similar to that developed for discrete systems

  4. Non-equilibrium supramolecular polymerization.

    Science.gov (United States)

    Sorrenti, Alessandro; Leira-Iglesias, Jorge; Markvoort, Albert J; de Greef, Tom F A; Hermans, Thomas M

    2017-09-18

    Supramolecular polymerization has been traditionally focused on the thermodynamic equilibrium state, where one-dimensional assemblies reside at the global minimum of the Gibbs free energy. The pathway and rate to reach the equilibrium state are irrelevant, and the resulting assemblies remain unchanged over time. In the past decade, the focus has shifted to kinetically trapped (non-dissipative non-equilibrium) structures that heavily depend on the method of preparation (i.e., pathway complexity), and where the assembly rates are of key importance. Kinetic models have greatly improved our understanding of competing pathways, and shown how to steer supramolecular polymerization in the desired direction (i.e., pathway selection). The most recent innovation in the field relies on energy or mass input that is dissipated to keep the system away from the thermodynamic equilibrium (or from other non-dissipative states). This tutorial review aims to provide the reader with a set of tools to identify different types of self-assembled states that have been explored so far. In particular, we aim to clarify the often unclear use of the term "non-equilibrium self-assembly" by subdividing systems into dissipative, and non-dissipative non-equilibrium states. Examples are given for each of the states, with a focus on non-dissipative non-equilibrium states found in one-dimensional supramolecular polymerization.

  5. Does the nervous system use equilibrium-point control to guide single and multiple joint movements?

    Science.gov (United States)

    Bizzi, E; Hogan, N; Mussa-Ivaldi, F A; Giszter, S

    1992-12-01

    The hypothesis that the central nervous system (CNS) generates movement as a shift of the limb's equilibrium posture has been corroborated experimentally in studies involving single- and multijoint motions. Posture may be controlled through the choice of muscle length-tension curve that set agonist-antagonist torque-angle curves determining an equilibrium position for the limb and the stiffness about the joints. Arm trajectories seem to be generated through a control signal defining a series of equilibrium postures. The equilibrium-point hypothesis drastically simplifies the requisite computations for multijoint movements and mechanical interactions with complex dynamic objects in the environment. Because the neuromuscular system is springlike, the instantaneous difference between the arm's actual position and the equilibrium position specified by the neural activity can generate the requisite torques, avoiding the complex "inverse dynamic" problem of computing the torques at the joints. The hypothesis provides a simple, unified description of posture and movement as well as contact control task performance, in which the limb must exert force stably and do work on objects in the environment. The latter is a surprisingly difficult problem, as robotic experience has shown. The prior evidence for the hypothesis came mainly from psychophysical and behavioral experiments. Our recent work has shown that microstimulation of the frog spinal cord's premotoneural network produces leg movements to various positions in the frog's motor space. The hypothesis can now be investigated in the neurophysiological machinery of the spinal cord.

  6. Laminar or turbulent boundary-layer flows of perfect gases or reacting gas mixtures in chemical equilibrium

    Science.gov (United States)

    Anderson, E. C.; Lewis, C. H.

    1971-01-01

    Turbulent boundary layer flows of non-reacting gases are predicted for both internal (nozzle) and external flows. Effects of favorable pressure gradients on two eddy viscosity models were studied in rocket and hypervelocity wind tunnel flows. Nozzle flows of equilibrium air with stagnation temperatures up to 10,000 K were computed. Predictions of equilibrium nitrogen flows through hypervelocity nozzles were compared with experimental data. A slender spherically blunted cone was studied at 70,000 ft altitude and 19,000 ft/sec in the earth's atmosphere. Comparisons with available experimental data showed good agreement. A computer program was developed and fully documented during this investigation for use by interested individuals.

  7. Rayleigh’s quotient–based damage detection algorithm: Theoretical concepts, computational techniques, and field implementation strategies

    DEFF Research Database (Denmark)

    NJOMO WANDJI, Wilfried

    2017-01-01

    Three damage detection levels are targeted: existence, location, and severity. The proposed algorithm is analytically developed from the dynamics theory and the virtual energy principle. Some computational techniques are proposed for carrying out computations, including discretization, integration, derivation, and suitable...

  8. Out of equilibrium transport through an Anderson impurity: probing scaling laws within the equation of motion approach.

    Science.gov (United States)

    Balseiro, C A; Usaj, G; Sánchez, M J

    2010-10-27

    We study non-equilibrium electron transport through a quantum impurity coupled to metallic leads using the equation of motion technique at finite temperature T. Assuming that the interactions are taking place solely in the impurity and focusing on the infinite Hubbard limit, we compute the out of equilibrium density of states and the differential conductance G_2(T, V) in order to test several scaling laws. We find that G_2(T, V)/G_2(T, 0) is a universal function of both eV/T_K and T/T_K, T_K being the Kondo temperature. The effect of an in-plane magnetic field on the splitting of the zero bias anomaly in the differential conductance is also analyzed. For a Zeeman splitting Δ, the computed differential conductance peak splitting depends only on Δ/T_K, and for large fields approaches the value of 2Δ. Besides studying the traditional two leads setup, we also consider other configurations that mimic recent experiments, namely, an impurity embedded in a mesoscopic wire and the presence of a third weakly coupled lead. In these cases, a double peak structure of the Kondo resonance is clearly obtained in the differential conductance while the amplitude of the highest peak is shown to decrease as ln(eV/T_K). Several features of these results are in qualitative agreement with recent experimental observations reported on quantum dots.

  9. World Oil Price and Biofuels : A General Equilibrium Analysis

    OpenAIRE

    Timilsina, Govinda R.; Mevel, Simon; Shrestha, Ashish

    2011-01-01

    The price of oil could play a significant role in influencing the expansion of biofuels. However, this issue has not been fully investigated yet in the literature. Using a global computable general equilibrium model, this study analyzes the impact of oil price on biofuel expansion, and subsequently, on food supply. The study shows that a 65 percent increase in oil price in 2020 from the 20...

  10. A new technique for on-line and off-line high speed computation

    International Nuclear Information System (INIS)

    Hartouni, E.P.; Jensen, D.A.; Klima, B.; Kreisler, M.N.; Rabin, M.S.Z.; Uribe, J.; Gottschalk, E.; Gara, A.; Knapp, B.C.

    1989-01-01

    A new technique for both on-line and off-line computation has been developed. With this technique, a reconstruction analysis in Elementary Particle Physics, otherwise prohibitively long, has been accomplished. It will be used on-line in an upcoming Fermilab experiment to reconstruct more than 100,000 events per second and to trigger on the basis of that information. The technique delivers 40 Giga operations per second, has a bandwidth on the order of Gigabytes per second and has a modest cost. An overview of the program, details of the system, and performance measurements are presented in this paper

  11. Phase stability analysis of liquid-liquid equilibrium with stochastic methods

    Directory of Open Access Journals (Sweden)

    G. Nagatani

    2008-09-01

    Full Text Available Minimization of Gibbs free energy using activity coefficient models and nonlinear equation solution techniques is commonly applied to phase stability problems. However, when conventional techniques, such as the Newton-Raphson method, are employed, serious convergence problems may arise. Due to the existence of multiple solutions, several problems can be found in modeling liquid-liquid equilibrium of multicomponent systems, which are highly dependent on the initial guess. In this work phase stability analysis of liquid-liquid equilibrium is investigated using the NRTL model. For this purpose, two distinct stochastic numerical algorithms are employed to minimize the tangent plane distance of Gibbs free energy: a subdivision algorithm that can find all roots of nonlinear equations for liquid-liquid stability analysis and the Simulated Annealing method. Results obtained in this work for the two stochastic algorithms are compared with those of the Interval Newton method from the literature. Several different binary and multicomponent systems from the literature were successfully investigated.
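
    As an illustration of the tangent-plane-distance criterion used above, the sketch below minimizes the TPD of a hypothetical binary mixture described by the NRTL model with SciPy's dual_annealing optimizer (a simulated-annealing variant); the binary interaction parameters are invented, and the paper's subdivision algorithm is not reproduced.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Hypothetical NRTL binary parameters (dimensionless tau, alpha).
TAU12, TAU21, ALPHA = 2.0, 1.5, 0.3

def nrtl_lngamma(x1):
    """Activity-coefficient logarithms for a binary NRTL mixture."""
    x2 = 1.0 - x1
    G12, G21 = np.exp(-ALPHA * TAU12), np.exp(-ALPHA * TAU21)
    lng1 = x2**2 * (TAU21 * (G21 / (x1 + x2 * G21))**2
                    + TAU12 * G12 / (x2 + x1 * G12)**2)
    lng2 = x1**2 * (TAU12 * (G12 / (x2 + x1 * G12))**2
                    + TAU21 * G21 / (x1 + x2 * G21)**2)
    return lng1, lng2

def tpd(w1, z1):
    """Tangent plane distance of a trial composition w relative to the feed z."""
    lgw1, lgw2 = nrtl_lngamma(w1)
    lgz1, lgz2 = nrtl_lngamma(z1)
    w2, z2 = 1.0 - w1, 1.0 - z1
    return (w1 * (np.log(w1) + lgw1 - np.log(z1) - lgz1)
            + w2 * (np.log(w2) + lgw2 - np.log(z2) - lgz2))

z1 = 0.5                                   # feed composition to be tested
res = dual_annealing(lambda w: tpd(w[0], z1), bounds=[(1e-6, 1 - 1e-6)], seed=1)
print(f"min TPD = {res.fun:.4f} at w1 = {res.x[0]:.4f}")
print("feed is unstable (phase split expected)" if res.fun < -1e-8
      else "feed is stable")
```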

  12. Self-organization of dissipative and coherent vortex structures in non-equilibrium magnetized two-dimensional plasmas

    International Nuclear Information System (INIS)

    Bystrenko, O; Bystrenko, T

    2010-01-01

    The properties of non-equilibrium magnetized plasmas confined in planar geometry are studied on the basis of first-principle microscopic Langevin dynamics computer simulations. The non-equilibrium state of plasmas is maintained due to the recombination and generation of charges. The intrinsic microscopic structure of non-equilibrium steady-state magnetized plasmas, in particular the inter-particle correlations and self-organization of vortex structures, are examined. The simulations have been performed for a wide range of parameters including strong plasma coupling, high charge recombination and generation rates and intense magnetic field. As is shown in simulations, the non-equilibrium recombination and generation processes trigger the formation of ordered dissipative or coherent drift vortex states in 2D plasmas with distinctly spatially separated components, which are far from thermal equilibrium. This is evident from the unusual properties of binary distributions and behavior of the Coulomb energy of the system, which turn out to be quite different from the ones typical for the equilibrium state of plasmas under the same conditions.

  13. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  14. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
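
    A minimal rate-model sketch of the mechanism described above is given below: a ring network with cosine connectivity, driven by a bump-like input, with a subtractive spike-frequency adaptation variable. The connectivity profile, gain function and all parameter values are illustrative assumptions, not the paper's analytical model.

```python
# Ring network with spike-frequency adaptation (SFA), Euler integration.
# All parameters below are illustrative, not taken from the paper.
import numpy as np

N = 100
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
J0, J1 = -0.5, 1.0
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

tau_u, tau_a, g_a = 10.0, 100.0, 0.5   # membrane and adaptation time constants, SFA gain
dt, steps = 0.5, 4000
stim = 1.5 * np.exp(np.cos(theta - 0.0))   # bump-like external input centred at angle 0

u = np.zeros(N)          # synaptic input / membrane variable
a = np.zeros(N)          # adaptation variable (grows with activity, inhibits it)
for _ in range(steps):
    r = np.maximum(u, 0.0)                       # rectified-linear firing rate
    u += dt / tau_u * (-u + W @ r + stim - a)    # SFA enters as a subtractive term
    a += dt / tau_a * (-a + g_a * r)             # adaptation tracks the rate

peak = theta[np.argmax(np.maximum(u, 0.0))]
print(f"equilibrium bump centred near stimulus angle: {peak:.2f} rad")
```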

  15. Calculation of chemical equilibrium between aqueous solution and minerals: the EQ3/6 software package

    International Nuclear Information System (INIS)

    Wolery, T.J.

    1979-01-01

    The newly developed EQ3/6 software package computes equilibrium models of aqueous geochemical systems. The package contains two principal programs: EQ3 performs distribution-of-species calculations for natural water compositions; EQ6 uses the results of EQ3 to predict the consequences of heating and cooling aqueous solutions and of irreversible reaction in rock–water systems. The programs are valuable for studying such phenomena as the formation of ore bodies, scaling and plugging in geothermal development, and the long-term disposal of nuclear waste. EQ3 and EQ6 are compared with such well-known geochemical codes as SOLMNEQ, WATEQ, REDEQL, MINEQL, and PATHI. The data base allows calculations in the temperature interval 0 to 350 °C, at either 1 atm-steam saturation pressures or a constant 500 bars. The activity coefficient approximations for aqueous solutes limit modeling to solutions of ionic strength less than about one molal. The mathematical derivations and numerical techniques used in EQ6 are presented in detail. The program uses the Newton–Raphson method to solve the governing equations of chemical equilibrium for a system of specified elemental composition at fixed temperature and pressure. Convergence is aided by optimizing starting estimates and by under-relaxation techniques. The minerals present in the stable phase assemblage are found by several empirical methods. Reaction path models may be generated by using this approach in conjunction with finite differences. This method is analogous to applying high-order predictor–corrector methods to integrate a corresponding set of ordinary differential equations, but avoids propagation of error (drift). 8 figures, 9 tables
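
    The Newton–Raphson speciation step can be illustrated on a toy system. The sketch below solves the distribution of species for a single weak acid in water, working in log10-concentration variables with a finite-difference Jacobian and a crude under-relaxation; the species, constants and total concentration are illustrative assumptions, and EQ3/6 itself of course handles full multicomponent systems with activity-coefficient corrections.

```python
# Newton-Raphson speciation of a weak acid HA in water (toy illustration).
import numpy as np

log_Ka, log_Kw = -4.76, -14.0       # acetic-acid-like Ka and water ion product
C_T = 0.01                          # total acid concentration (mol/kg)

def residuals(logc):
    h, oh, ha, a = 10.0 ** logc     # [H+], [OH-], [HA], [A-]
    return np.array([
        np.log10(h) + np.log10(oh) - log_Kw,                  # water equilibrium
        np.log10(h) + np.log10(a) - np.log10(ha) - log_Ka,    # acid equilibrium
        ha + a - C_T,                                          # mass balance
        h - oh - a,                                            # charge balance
    ])

x = np.log10([1e-4, 1e-10, C_T, 1e-4])    # starting estimates (log10 concentrations)
for _ in range(50):
    f = residuals(x)
    J = np.empty((4, 4))
    for j in range(4):                     # finite-difference Jacobian
        dx = np.zeros(4)
        dx[j] = 1e-6
        J[:, j] = (residuals(x + dx) - f) / 1e-6
    step = np.linalg.solve(J, -f)
    x += np.clip(step, -0.5, 0.5)          # crude under-relaxation of the Newton step
    if np.max(np.abs(f)) < 1e-12:
        break

print("pH =", -x[0])
```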

  16. Advanced technique for computing fuel combustion properties in pulverized-fuel fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    Kotler, V.R. (Vsesoyuznyi Teplotekhnicheskii Institut (Russian Federation))

    1992-03-01

    Reviews foreign technical reports on advanced techniques for computing fuel combustion properties in pulverized-fuel fired boilers and analyzes a technique developed by Combustion Engineering, Inc. (USA). Characteristics of 25 fuel types, including 19 grades of coal, are listed along with a diagram of an installation with a drop tube furnace. Characteristics include burn-out intensity curves obtained using thermogravimetric analysis for high-volatile bituminous, semi-bituminous and coking coal. The patented LFP-SKM mathematical model is used to model combustion of a particular fuel under given conditions. The model allows for fuel particle size, air surplus, load, flame height, and portion of air supplied as tertiary blast. Good agreement between computational and experimental data was observed. The method is employed in designing new boilers as well as converting operating boilers to alternative types of fuel. 3 refs.

  17. A new inorganic atmospheric aerosol phase equilibrium model (UHAERO)

    Directory of Open Access Journals (Sweden)

    N. R. Amundson

    2006-01-01

    A variety of thermodynamic models have been developed to predict inorganic gas-aerosol equilibrium. To achieve computational efficiency a number of the models rely on a priori specification of the phases present in certain relative humidity regimes. Presented here is a new computational model, named UHAERO, that is both efficient and rigorously computes phase behavior without any a priori specification. The computational implementation is based on minimization of the Gibbs free energy using a primal-dual method, coupled to a Newton iteration. The mathematical details of the solution are given elsewhere. The model computes deliquescence behavior without any a priori specification of the relative humidities of deliquescence. Also included in the model is a formulation based on classical theory of nucleation kinetics that predicts crystallization behavior. Detailed phase diagrams of the sulfate/nitrate/ammonium/water system are presented as a function of relative humidity at 298.15 K over the complete space of composition.

  18. Far-from-Equilibrium Route to Superthermal Light in Bimodal Nanolasers

    Directory of Open Access Journals (Sweden)

    Mathias Marconi

    2018-01-01

    Microscale and nanoscale lasers inherently exhibit rich photon statistics due to complex light-matter interaction in a strong spontaneous emission noise background. It is well known that they may display superthermal fluctuations—photon superbunching—in specific situations due to either gain competition, leading to mode-switching instabilities, or carrier-carrier coupling in superradiant microcavities. Here we show a generic route to superbunching in bimodal nanolasers by preparing the system far from equilibrium through a parameter quench. We demonstrate, both theoretically and experimentally, that transient dynamics after a short-pump-pulse-induced quench leads to heavy-tailed superthermal statistics when projected onto the weak mode. We implement a simple experimental technique to access the probability density functions that further enables quantifying the distance from thermal equilibrium via the thermodynamic entropy. The universality of this mechanism relies on the far-from-equilibrium dynamical scenario, which can be mapped to a fast cooling process of a suspension of Brownian particles in a liquid. Our results open up new avenues to mold photon statistics in multimode optical systems and may constitute a test bed to investigate out-of-equilibrium thermodynamics using micro or nanocavity arrays.

  19. Far-from-Equilibrium Route to Superthermal Light in Bimodal Nanolasers

    Science.gov (United States)

    Marconi, Mathias; Javaloyes, Julien; Hamel, Philippe; Raineri, Fabrice; Levenson, Ariel; Yacomotti, Alejandro M.

    2018-02-01

    Microscale and nanoscale lasers inherently exhibit rich photon statistics due to complex light-matter interaction in a strong spontaneous emission noise background. It is well known that they may display superthermal fluctuations—photon superbunching—in specific situations due to either gain competition, leading to mode-switching instabilities, or carrier-carrier coupling in superradiant microcavities. Here we show a generic route to superbunching in bimodal nanolasers by preparing the system far from equilibrium through a parameter quench. We demonstrate, both theoretically and experimentally, that transient dynamics after a short-pump-pulse-induced quench leads to heavy-tailed superthermal statistics when projected onto the weak mode. We implement a simple experimental technique to access the probability density functions that further enables quantifying the distance from thermal equilibrium via the thermodynamic entropy. The universality of this mechanism relies on the far-from-equilibrium dynamical scenario, which can be mapped to a fast cooling process of a suspension of Brownian particles in a liquid. Our results open up new avenues to mold photon statistics in multimode optical systems and may constitute a test bed to investigate out-of-equilibrium thermodynamics using micro or nanocavity arrays.

  20. Effects of centrifugal modification of magnetohydrodynamic equilibrium on resistive wall mode stability

    International Nuclear Information System (INIS)

    Shiraishi, J.; Aiba, N.; Miyato, N.; Yagi, M.

    2014-01-01

    Toroidal rotation effects are self-consistently taken into account not only in the linear magnetohydrodynamic (MHD) stability analysis but also in the equilibrium calculation. The MHD equilibrium computation is affected by centrifugal force due to the toroidal rotation. To study the toroidal rotation effects on resistive wall modes (RWMs), a new code has been developed. The RWMaC modules, which solve the electromagnetic dynamics in vacuum and the resistive wall, have been implemented in the MINERVA code, which solves the Frieman–Rotenberg equation that describes the linear ideal MHD dynamics in a rotating plasma. It is shown that modification of MHD equilibrium by the centrifugal force significantly reduces growth rates of RWMs with fast rotation, of the order of M² = 0.1, where M is the Mach number. Moreover, it can open a stable window which does not exist under the assumption that the rotation affects only the linear dynamics. The rotation modifies the equilibrium pressure gradient and current density profiles, which results in the change of potential energy including rotational effects. (paper)

  1. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    Science.gov (United States)

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization accuracy and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
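
    For context, the conventional range-based baseline that such soft-computing models learn to improve upon is the log-distance path-loss conversion from RSSI to distance. The sketch below shows that conversion; the reference RSSI and path-loss exponent are hypothetical calibration values, not figures from the paper.

```python
# Log-distance path-loss model: RSSI = RSSI_0 - 10 * n * log10(d).
# RSSI_0 and n below are assumed calibration values for illustration only.
import numpy as np

RSSI_0 = -45.0   # RSSI at the 1 m reference distance (dBm), assumed
n = 2.7          # path-loss exponent for the environment, assumed

def rssi_to_distance(rssi_dbm):
    """Invert the log-distance model to estimate distance in metres."""
    return 10.0 ** ((RSSI_0 - np.asarray(rssi_dbm)) / (10.0 * n))

print(rssi_to_distance([-45.0, -60.0, -75.0]))   # estimated distances (m)
```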

  2. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method-independence of the convergence testing approach, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
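
    To make the screening method concrete, here is a minimal sketch of Morris elementary effects on a toy three-parameter model; the model, the number of trajectories and the step size are illustrative, and the MVA convergence check itself is not reproduced here.

```python
# Morris / Elementary Effects screening on a toy model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def model(x):                      # toy model with unequal sensitivities
    return 4.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]

k, r, delta = 3, 20, 0.25          # number of parameters, trajectories, step size
effects = [[] for _ in range(k)]
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=k)
    y0 = model(x)
    for i in rng.permutation(k):   # perturb one factor at a time
        x_new = x.copy()
        x_new[i] += delta
        y1 = model(x_new)
        effects[i].append((y1 - y0) / delta)
        x, y0 = x_new, y1          # chain the trajectory

for i, e in enumerate(effects):
    e = np.asarray(e)
    # mu* (mean absolute effect) ranks importance; sigma flags interactions
    print(f"parameter {i}: mu* = {np.mean(np.abs(e)):.3f}, sigma = {np.std(e):.3f}")
```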

  3. Monte-Carlo simulation of the evolution of point defects in solids under non-equilibrium conditions

    International Nuclear Information System (INIS)

    Maurice, Francoise; Doan, N.V.

    1981-11-01

    This report was written in order to serve as a guide for courageous users who want to tackle the problem of the evolution of point defect populations in a solid under non-equilibrium conditions by the Monte-Carlo technique. The original program, developed by Lanore in her swelling investigations on solids under irradiation by different particles, was generalized in order to take into account the effects and the phenomena related to the presence of solute atoms. Detailed descriptions of the simulation model, computational procedures and formulae used in the calculations are given. Two examples are shown to illustrate the applications to the swelling phenomenon: first, the effect of temperature or dose rate changes on void-swelling in electron-irradiated copper; second, the influence of solute atoms on void nucleation in electron-irradiated nickel [fr

  4. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    Science.gov (United States)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    ACAT2011 This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facility Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  5. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Perret-Gallix, D.; Wojcik, W.

    1990-01-01

    These proceedings describe, in a pragmatic way, the use of methods and techniques of software engineering and artificial intelligence in high energy and nuclear physics. Such fundamental research can only be done through the design, building and running of equipment and systems that are among the most complex ever undertaken by mankind. The use of these new methods is mandatory in such an environment. However, their proper integration in these real applications raises some unsolved problems. Their solution, beyond the research field, will lead to a better understanding of some fundamental aspects of software engineering and artificial intelligence. Here is a sample of subjects covered in the proceedings: software engineering in a multi-user, multi-version, multi-system environment; project management; software validation and quality control; data structure and management; object-oriented languages; multi-language applications; interactive data analysis; expert systems for diagnosis; expert systems for real-time applications; neural networks for pattern recognition; and symbolic manipulation for automatic computation of complex processes

  6. An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium

    Science.gov (United States)

    Palmer, Grant

    1987-01-01

    An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The accuracy, stability, and versatility of the algorithm have been promising.

  7. Convergence to equilibrium in competitive Lotka–Volterra and chemostat systems

    KAUST Repository

    Champagnat, Nicolas; Jabin, Pierre-Emmanuel; Raoul, Gaë l

    2010-01-01

    We study a generalized system of ODEs modeling a finite number of biological populations in a competitive interaction. We adapt the techniques in Jabin and Raoul [8] and Champagnat and Jabin (2010) [2] to prove the convergence to a unique stable equilibrium. © 2010 Académie des sciences.
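
    The convergence result can be visualized numerically. The sketch below integrates a small competitive Lotka–Volterra system with an illustrative symmetric, diagonally dominant competition matrix and compares the final state with the interior equilibrium; the specific coefficients are assumptions chosen only to satisfy the usual stability conditions, not values from the paper.

```python
# Competitive Lotka-Volterra system converging to its interior equilibrium.
import numpy as np
from scipy.integrate import solve_ivp

r = np.array([1.0, 0.8, 0.6])                     # intrinsic growth rates (assumed)
A = np.array([[1.0, 0.3, 0.2],                    # competition matrix (assumed)
              [0.3, 1.0, 0.3],
              [0.2, 0.3, 1.0]])

def rhs(t, n):
    return n * (r - A @ n)

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.2, 0.3], rtol=1e-8)
n_star = np.linalg.solve(A, r)                    # interior equilibrium: A n* = r
print("final state :", sol.y[:, -1])
print("equilibrium :", n_star)
```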

  8. Time-Depending Parametric Variational Approach for an Economic General Equilibrium Problem of Pure Exchange with Application

    International Nuclear Information System (INIS)

    Scaramuzzino, F.

    2009-01-01

    This paper considers a qualitative analysis of the solution of a pure exchange general economic equilibrium problem according to two independent parameters. Some recent results obtained by the author in the static and the dynamic case have been collected. Such results have been applied to a particular parametric case: attention has been focused on a numerical application for which the existence of the solution of the time-depending parametric variational inequality that describes the equilibrium conditions has been proved by means of the direct method. By using MatLab computation after a linear interpolation, the curves of equilibrium have been visualized.

  9. Identifying apparent local stable isotope equilibrium in a complex non-equilibrium system.

    Science.gov (United States)

    He, Yuyang; Cao, Xiaobin; Wang, Jianwei; Bao, Huiming

    2018-02-28

    Although being out of equilibrium, biomolecules in organisms have the potential to approach isotope equilibrium locally because enzymatic reactions are intrinsically reversible. A rigorous approach that can describe isotope distribution among biomolecules and their apparent deviation from equilibrium state is lacking, however. Applying the concept of distance matrix in graph theory, we propose that apparent local isotope equilibrium among a subset of biomolecules can be assessed using an apparent fractionation difference (|Δα|) matrix, in which the differences between the observed isotope composition (δ') and the calculated equilibrium fractionation factor (1000lnβ) can be more rigorously evaluated than by using a previous approach for multiple biomolecules. We tested our |Δα| matrix approach by re-analyzing published data of different amino acids (AAs) in potato and in green alga. Our re-analysis shows that biosynthesis pathways could be the reason for an apparently close-to-equilibrium relationship inside AA families in potato leaves. Different biosynthesis/degradation pathways in tubers may have led to the observed isotope distribution difference between potato leaves and tubers. The analysis of data from green algae does not support the conclusion that AAs are further from equilibrium in glucose-cultured green algae than in the autotrophic ones. Application of the |Δα| matrix can help us to locate potential reversible reactions or reaction networks in a complex system such as a metabolic system. The same approach can be broadly applied to all complex systems that have multiple components, e.g. geochemical or atmospheric systems of early Earth or other planets. Copyright © 2017 John Wiley & Sons, Ltd.
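
    A minimal numerical sketch of the proposed matrix is given below: for each pair of compounds, the observed difference in δ' values is compared with the difference in calculated 1000lnβ values, and the absolute mismatch forms the |Δα| matrix. The compound names and all numbers are hypothetical placeholders, not the potato or algae data re-analyzed in the paper.

```python
# Apparent fractionation difference (|delta alpha|) matrix from hypothetical data.
import numpy as np

compounds = ["Ala", "Gly", "Ser", "Glu"]
delta_prime = np.array([-22.0, -25.5, -24.0, -20.5])   # observed delta' (per mil), invented
beta_1000ln = np.array([152.0, 155.0, 154.0, 150.0])   # calculated 1000 ln(beta), invented

obs = delta_prime[:, None] - delta_prime[None, :]      # apparent pairwise fractionation
eq = beta_1000ln[:, None] - beta_1000ln[None, :]       # equilibrium prediction
abs_delta_alpha = np.abs(obs - eq)                     # |delta alpha| matrix

print("      " + "  ".join(f"{c:>6}" for c in compounds))
for name, row in zip(compounds, abs_delta_alpha):
    print(f"{name:>5} " + "  ".join(f"{v:6.2f}" for v in row))
# Pairs with small entries are candidates for apparent local isotope equilibrium.
```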

  10. Economic analysis of energy supply and national economy on the basis of general equilibrium models. Applications of the input-output decomposition analysis and the Computable General Equilibrium models shown by the example of Korea

    International Nuclear Information System (INIS)

    Ko, Jong-Hwan.

    1993-01-01

    Firstly, this study investigates the causes of sectoral growth and structural changes in the Korean economy. Secondly, it develops the framework of a consistent economic model in order to investigate simultaneously the different impacts of changes in energy and in the domestic economy. This is done with both the Input-Output Decomposition analysis and a Computable General Equilibrium model (CGE Model). The CGE Model eliminates the disadvantages of the IO Model and allows the investigation of the interdependence of the various energy sectors with the economy. The Social Accounting Matrix serves as the data basis of the CGE Model. Simulation experiments have been carried out with the help of the CGE Model, indicating the likely impact of an oil price shock on the economy, both sectorally and in aggregate. (orig.) [de

  11. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.

  12. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    Science.gov (United States)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
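
    One of the building blocks listed above, the Jacobian-free Newton–Krylov solve, can be sketched on a much simpler problem: a 1D steady nonlinear diffusion equation with a radiation-like conductivity D(u) = u³, solved with SciPy's newton_krylov. Grid size, boundary values and the source term are illustrative; the actual solver couples JFNK with implicit time stepping and 3D AMR.

```python
# Jacobian-free Newton-Krylov solve of a 1D steady nonlinear diffusion problem.
import numpy as np
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)
u_left, u_right, source = 1.0, 0.1, 1.0          # assumed boundary/source values

def residual(u):
    ue = np.concatenate(([u_left], u, [u_right]))  # append Dirichlet boundary values
    D = 0.5 * (ue[1:] ** 3 + ue[:-1] ** 3)         # face-centred nonlinear conductivity
    flux = D * (ue[1:] - ue[:-1]) / h              # D * du/dx at cell faces
    return (flux[1:] - flux[:-1]) / h + source     # div(D grad u) + S = 0

u0 = np.linspace(u_left, u_right, n)               # initial guess
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
print("max residual:", np.max(np.abs(residual(u))))
```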

  13. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Philip, B.; Wang, Z.; Berrill, M.A.; Birke, M.; Pernice, M.

    2014-01-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton–Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence

  14. Isotope effects in the equilibrium and non-equilibrium vaporization of tritiated water and ice

    International Nuclear Information System (INIS)

    Baumgaertner, F.; Kim, M.-A.

    1990-01-01

    The vaporization isotope effect of the HTO/H2O system has been measured at various temperatures and pressures under equilibrium as well as non-equilibrium conditions. The isotope effect values measured in equilibrium sublimation or distillation are in good agreement with the theoretical values based on the harmonic oscillator model. In non-equilibrium vaporization at low temperatures (< 0 °C), the isotope effect decreases rapidly with decreasing system pressure and becomes negligible when the system pressure is lowered to below one tenth of the equilibrium vapor pressure. At higher temperatures, the isotope effect decreases very slowly with decreasing system pressure. The discussion is extended to the application of the present results to the study of biological enrichment of tritium. (author)

  15. Saturated ideal kink/peeling formations described as three-dimensional magnetohydrodynamic tokamak equilibrium states

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, W. A.; Brunetti, D.; Duval, B. P.; Faustin, J. M.; Graves, J. P.; Kleiner, A.; Patten, H.; Pfefferlé, D.; Porte, L.; Raghunathan, M.; Reimerdes, H.; Sauter, O.; Tran, T. M., E-mail: wilfred.cooper@epfl.ch [Ecole Polytechnique Fédérale de Lausanne (EPFL), Swiss Plasma Center (SPC), CH-1015 Lausanne (Switzerland)

    2016-04-15

    Free boundary magnetohydrodynamic equilibrium states with spontaneous three dimensional deformations of the plasma-vacuum interface are computed for the first time. The structures obtained have the appearance of saturated ideal external kink/peeling modes. High edge pressure gradients yield toroidal mode number n = 1 corrugations for a high edge bootstrap current and larger n distortions when this current is small. Deformations in the plasma boundary region induce a nonaxisymmetric Pfirsch-Schlüter current driving a field-aligned current ribbon consistent with reported experimental observations. A variation in the 3D equilibrium confirms that the n = 1 mode is a kink/peeling structure. We surmise that our calculated equilibrium structures constitute a viable model for the edge harmonic oscillations and outer modes associated with a quiescent H-mode operation in shaped tokamak plasmas.

  16. Energy from sugarcane bagasse under electricity rationing in Brazil: a computable general equilibrium model

    International Nuclear Information System (INIS)

    Scaramucci, Jose A.; Perin, Clovis; Pulino, Petronio; Bordoni, Orlando F.J.G.; Cunha, Marcelo P. da; Cortez, Luis A.B.

    2006-01-01

    In the midst of the institutional reforms of the Brazilian electric sectors initiated in the 1990s, a serious electricity shortage crisis developed in 2001. As an alternative to blackout, the government instituted an emergency plan aimed at reducing electricity consumption. From June 2001 to February 2002, Brazilians were compelled to curtail electricity use by 20%. Since the late 1990s, but especially after the electricity crisis, energy policy in Brazil has been directed towards increasing thermoelectricity supply and promoting further gains in energy conservation. Two main issues are addressed here. Firstly, we estimate the economic impacts of constraining the supply of electric energy in Brazil. Secondly, we investigate the possible penetration of electricity generated from sugarcane bagasse. A computable general equilibrium (CGE) model is used. The traditional sector of electricity and the remainder of the economy are characterized by a stylized top-down representation as nested CES (constant elasticity of substitution) production functions. The electricity production from sugarcane bagasse is described through a bottom-up activity analysis, with a detailed representation of the required inputs based on engineering studies. The model constructed is used to study the effects of the electricity shortage in the preexisting sector through prices, production and income changes. It is shown that installing capacity to generate electricity surpluses by the sugarcane agroindustrial system could ease the economic impacts of an electric energy shortage crisis on the gross domestic product (GDP)
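
    To illustrate the "nested CES" structure mentioned above, here is a minimal sketch of a two-level CES aggregation in which a bagasse-electricity activity competes with conventional electricity inside an energy nest. All share and elasticity parameters are invented for illustration and are not calibrated to the Brazilian social accounting matrix used in the paper.

```python
# Two-level (nested) CES aggregation, with illustrative shares and elasticities.
import numpy as np

def ces(inputs, shares, sigma, scale=1.0):
    """CES aggregate with elasticity of substitution sigma."""
    rho = (sigma - 1.0) / sigma
    inputs, shares = np.asarray(inputs, float), np.asarray(shares, float)
    return scale * np.sum(shares * inputs ** rho) ** (1.0 / rho)

# inner nest: electricity composite from grid power and bagasse cogeneration
electricity = ces([100.0, 15.0], shares=[0.9, 0.1], sigma=3.0)
# outer nest: output from the energy composite and a value-added aggregate
output = ces([electricity, 500.0], shares=[0.2, 0.8], sigma=0.5)
print(f"electricity composite = {electricity:.1f}, output = {output:.1f}")
```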

  17. Equilibrium Shape of Ferrofluid in the Uniform External Field

    Science.gov (United States)

    2017-07-14

    …mentioned as free-surface instabilities. That makes their computational modeling rather challenging. For the sake of validation and verification … the authors claimed that this fact can be established on the basis of a rigorous theory, like it was done for the equilibrium shape of rotating self… [Only fragments of the abstract are recoverable; the remaining text is report front matter: the section headings "Toward Verification of the Ellipsoidal Shape", "Conclusion", "References", a distribution list, and a public-release statement.]

  18. Convergence to equilibrium in competitive Lotka–Volterra and chemostat systems

    KAUST Repository

    Champagnat, Nicolas

    2010-12-01

    We study a generalized system of ODEs modeling a finite number of biological populations in a competitive interaction. We adapt the techniques in Jabin and Raoul [8] and Champagnat and Jabin (2010) [2] to prove the convergence to a unique stable equilibrium. © 2010 Académie des sciences.

  19. On generalized operator quasi-equilibrium problems

    Science.gov (United States)

    Kum, Sangho; Kim, Won Kyu

    2008-09-01

    In this paper, we will introduce the generalized operator equilibrium problem and generalized operator quasi-equilibrium problem which generalize the operator equilibrium problem due to Kazmi and Raouf [K.R. Kazmi, A. Raouf, A class of operator equilibrium problems, J. Math. Anal. Appl. 308 (2005) 554-564] into multi-valued and quasi-equilibrium problems. Using a Fan-Browder type fixed point theorem in [S. Park, Foundations of the KKM theory via coincidences of composites of upper semicontinuous maps, J. Korean Math. Soc. 31 (1994) 493-519] and an existence theorem of equilibrium for 1-person game in [X.-P. Ding, W.K. Kim, K.-K. Tan, Equilibria of non-compact generalized games with L*-majorized preferences, J. Math. Anal. Appl. 164 (1992) 508-517] as basic tools, we prove new existence theorems on generalized operator equilibrium problem and generalized operator quasi-equilibrium problem which includes operator equilibrium problems.

  20. Quick plasma equilibrium reconstruction based on GPU

    International Nuclear Information System (INIS)

    Xiao Bingjia; Huang, Y.; Luo, Z.P.; Yuan, Q.P.; Lao, L.

    2014-01-01

    A parallel code named P-EFIT, which can complete an equilibrium reconstruction iteration in 250 μs, is described. It is built on the CUDA™ architecture using a Graphics Processing Unit (GPU). The optimization of middle-scale matrix multiplication on the GPU and an algorithm that solves block tri-diagonal linear systems efficiently in parallel are described. Benchmark tests are conducted: a static test proves the accuracy of P-EFIT, and a simulation test proves the feasibility of using P-EFIT for real-time reconstruction on 65×65 computation grids. (author)
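
    The block tri-diagonal solve is the serial bottleneck that such GPU codes parallelize (for example by cyclic reduction). As a point of reference, the sketch below implements the scalar Thomas algorithm, the basic recurrence being parallelized; it is only an illustration, not P-EFIT's GPU kernel.

```python
# Serial Thomas algorithm for a (scalar) tri-diagonal system, checked against
# a dense solve. The block/parallel variant used on GPUs differs.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tri-diagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
rng = np.random.default_rng(0)
b = 4.0 + rng.random(n)                          # diagonally dominant main diagonal
a, c, d = rng.random(n), rng.random(n), rng.random(n)
a[0] = c[-1] = 0.0                               # unused corner entries
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))
```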

  1. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...

  2. Equilibrium and out-of-equilibrium thermodynamics in supercooled liquids and glasses

    International Nuclear Information System (INIS)

    Mossa, S; Nave, E La; Tartaglia, P; Sciortino, F

    2003-01-01

    We review the inherent structure thermodynamical formalism and the formulation of an equation of state (EOS) for liquids in equilibrium based on the (volume) derivatives of the statistical properties of the potential energy surface. We also show that, under the hypothesis that during ageing the system explores states associated with equilibrium configurations, it is possible to generalize the proposed EOS to out-of-equilibrium (OOE) conditions. The proposed formulation is based on the introduction of one additional parameter which, in the chosen thermodynamic formalism, can be chosen as the local minimum where the slowly relaxing OOE liquid is trapped

  3. Equilibrium sampling of environmental pollutants in fish: comparison with lipid-normalized concentrations and homogenization effects on chemical activity.

    Science.gov (United States)

    Jahnke, Annika; Mayer, Philipp; Adolfsson-Erici, Margaretha; McLachlan, Michael S

    2011-07-01

    Equilibrium sampling of organic pollutants into the silicone polydimethylsiloxane (PDMS) has recently been applied in biological tissues including fish. Pollutant concentrations in PDMS can then be multiplied with lipid/PDMS distribution coefficients (D(Lipid,PDMS)) to obtain concentrations in fish lipids. In the present study, PDMS thin films were used for equilibrium sampling of polychlorinated biphenyls (PCBs) in intact tissue of two eels and one salmon. A classical exhaustive extraction technique to determine lipid-normalized PCB concentrations, which assigns the body burden of the chemical to the lipid fraction of the fish, was additionally applied. Lipid-based PCB concentrations obtained by equilibrium sampling were 85 to 106% (Norwegian Atlantic salmon), 108 to 128% (Baltic Sea eel), and 51 to 83% (Finnish lake eel) of those determined using total extraction. This supports the validity of the equilibrium sampling technique, while at the same time confirming that the fugacity capacity of these lipid-rich tissues for PCBs was dominated by the lipid fraction. Equilibrium sampling was also applied to homogenates of the same fish tissues. The PCB concentrations in the PDMS were 1.2 to 2.0 times higher in the homogenates (statistically significant in 18 of 21 cases, p < 0.05), which suggests that homogenization increased the chemical activity of the PCBs; this should be taken into account when combining equilibrium sampling with partition coefficients determined using tissue homogenates. Copyright © 2011 SETAC.

  4. Fall Back Equilibrium

    NARCIS (Netherlands)

    Kleppe, J.; Borm, P.E.M.; Hendrickx, R.L.P.

    2008-01-01

    Fall back equilibrium is a refinement of the Nash equilibrium concept. In the underlying thought experiment each player faces the possibility that, after all players have decided on their action, his chosen action turns out to be blocked. Therefore, each player has to decide beforehand on a back-up

  5. Computer simulation of strain-induced ordering in interstitial solutions based on the b.c.c. Ta lattice

    International Nuclear Information System (INIS)

    Blanter, M.S.; Khachaturyan, A.G.

    1980-01-01

    A computer simulation is made of strain-induced ordering of interstitial atoms within octahedral interstices in the Ta host lattice. The calculation technique allows infinite-range strain-induced interactions to be taken into account. Computer simulation of the ordering process makes it possible to model the sequence of structural changes which occur during ordering and to find the equilibrium structure of the stable interstitial superstructures. The structures of high-temperature ordered phases obtained by the method of static concentration waves coincide with those obtained by means of computer simulation. However, computer simulation also makes it possible to predict the structures of low-temperature ordered phases which cannot be obtained by the method of concentration waves. Comparison of the computer simulation results and the structures of observed ordered phases demonstrates good agreement. (author)

  6. Optimization of tokamak plasma equilibrium shape using parallel genetic algorithms

    International Nuclear Information System (INIS)

    Zhulin An; Bin Wu; Lijian Qiu

    2006-01-01

    In non-circular cross-section tokamaks, the plasma equilibrium shape has a strong influence on the confinement and MHD stability. The plasma equilibrium shape is determined by the configuration of the poloidal field (PF) system. Usually there are many PF systems that could support the specified plasma equilibrium; they differ in the number of coils used and in their positions, sizes and currents. It is necessary to find the optimal choice that meets the engineering constraints, which is often done by a constrained optimization. A Genetic Algorithms (GAs) based method has been used to solve this optimization problem, but its time complexity has prevented the algorithm from becoming widely used. Because of the large search space of the optimization, it takes several hours to obtain a good result. The inherent parallelism in GAs can be exploited to enhance their search efficiency. In this paper, we introduce a parallel genetic algorithms (PGAs) based approach which can reduce the computational time. The algorithm has a master-slave structure: the slaves explore the search space separately and return their results to the master. A program has also been developed; it can run on any computer that supports the Message Passing Interface. Both the algorithm and the program are discussed in detail in the paper. We also include an application that uses the program to determine the positions and currents of PF coils in EAST. The program reaches the target value within half an hour and yields a speedup of 5.21 on 8 CPUs. (author)

  7. Poloidal field coil design for known plasma equilibrium states

    International Nuclear Information System (INIS)

    Paulson, C.C.; Todd, A.M.M.; Reusch, M.F.

    1986-01-01

    The technique for obtaining plasma equilibria with given boundary conditions has long been known and understood. The inverse problem of obtaining a poloidal field (PF) coil system from a given plasma equilibrium has been widely studied; however, its solution has remained largely an art form. An investigation, by the writers, of this fundamentally ill-posed inverse problem has resulted in a new understanding of the requirements that solutions must satisfy. A set of interacting computer codes has been written which may be used to successfully design PF coil systems capable of supporting given plasma equilibria. It is shown that for discrete coil systems with a reasonable number of elements the standard minimization of the RMS flux error can lead to undesirable results. Examples are given to show that an additional stability requirement must be imposed on the regularization parameter to obtain correct solutions. For some equilibria, the authors find that the inverse problem admits dual solutions corresponding to two possible magnetic field configurations that fit the constraining relations on the plasma surface equally well. An additional minimization of the absolute value of the limiter flux is required to discriminate between these solutions
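
    The regularized least-squares step at the heart of such designs can be sketched compactly. Below, coil currents are chosen to minimize the RMS flux error on the plasma surface plus a Tikhonov penalty whose weight plays the role of the regularization parameter discussed above; the Green's-function matrix and the target flux are random placeholders rather than real plasma-surface data.

```python
# Tikhonov-regularised least squares for PF coil currents (illustrative data).
import numpy as np

rng = np.random.default_rng(1)
n_points, n_coils = 40, 8
G = rng.normal(size=(n_points, n_coils))     # flux response of each coil (placeholder)
psi_target = rng.normal(size=n_points)       # desired flux on the boundary (placeholder)

def coil_currents(lam):
    """Minimise ||G I - psi||^2 + lam ||I||^2."""
    A = G.T @ G + lam * np.eye(n_coils)
    return np.linalg.solve(A, G.T @ psi_target)

for lam in (0.0, 1e-3, 1e-1):
    I = coil_currents(lam)
    rms = np.sqrt(np.mean((G @ I - psi_target) ** 2))
    print(f"lambda = {lam:g}: RMS flux error = {rms:.3f}, ||I|| = {np.linalg.norm(I):.2f}")
```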

  8. Acceleration optimization of real-time equilibrium reconstruction for HL-2A tokamak discharge control

    Science.gov (United States)

    Rui, MA; Fan, XIA; Fei, LING; Jiaxian, LI

    2018-02-01

    Real-time equilibrium reconstruction is crucially important for plasma shape control in the process of tokamak plasma discharge. However, as the reconstruction algorithm is computationally intensive, it is very difficult to improve its accuracy and reduce the computation time, and some optimizations need to be done. This article describes the three most important aspects of this optimization: (1) compiler optimization; (2) some optimization for middle-scale matrix multiplication on the graphic processing unit and an algorithm which can solve the block tri-diagonal linear system efficiently in parallel; (3) a new algorithm to locate the X&O point on the central processing unit. A static test proves the correctness and a dynamic test proves the feasibility of using the new code for real-time reconstruction with 129 × 129 grids; it can complete one iteration around 575 μs for each equilibrium reconstruction. The plasma displacements from real-time equilibrium reconstruction are compared with the experimental measurements, and the calculated results are consistent with the measured ones, which can be used as a reference for the real-time control of HL-2A discharge.

  9. Influence of collective excitations on pre-equilibrium and equilibrium processes

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.; Lunev, V.P.

    1990-01-01

    The influence of collective state excitations on equilibrium and pre-equilibrium processes in reactions is discussed. It is shown that, for a consistent description of the contributions of pre-equilibrium and equilibrium compound processes, collective states should be taken into account in the level density calculations. The microscopic and phenomenological approaches to the level density calculations are discussed. 13 refs.; 8 figs.

  10. Are the Concepts of Dynamic Equilibrium and the Thermodynamic Criteria for Spontaneity, Nonspontaneity, and Equilibrium Compatible?

    Science.gov (United States)

    Silverberg, Lee J.; Raff, Lionel M.

    2015-01-01

    Thermodynamic spontaneity-equilibrium criteria require that in a single-reaction system, reactions in either the forward or reverse direction at equilibrium be nonspontaneous. Conversely, the concept of dynamic equilibrium holds that forward and reverse reactions both occur at equal rates at equilibrium to the extent allowed by kinetic…

  11. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...
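
    As a concrete baseline for the techniques the book covers, the sketch below implements sliding normalized cross-correlation of a small template over a grey-level image directly in NumPy; practical systems would use optimized (e.g. FFT-based) correlation routines.

```python
# Template matching by sliding normalised cross-correlation (NCC).
import numpy as np

def ncc_match(image, template):
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    H, W = image.shape[0] - th + 1, image.shape[1] - tw + 1
    scores = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = image[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            scores[i, j] = np.mean(p * t)        # NCC score in [-1, 1]
    return scores

rng = np.random.default_rng(0)
img = rng.random((60, 60))
tmpl = img[20:28, 33:41].copy()                   # take the template from the image itself
scores = ncc_match(img, tmpl)
print("best match at", np.unravel_index(np.argmax(scores), scores.shape))  # (20, 33)
```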

  12. Implementing an Equilibrium Law Teaching Sequence for Secondary School Students to Learn Chemical Equilibrium

    Science.gov (United States)

    Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio

    2015-01-01

    A didactic sequence is proposed for the teaching of chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of…

  13. Real time equilibrium reconstruction algorithm in EAST tokamak

    International Nuclear Information System (INIS)

    Wang Huazhong; Luo Jiarong; Huang Qinchao

    2004-01-01

    The EAST (HT-7U) superconducting tokamak is a national project of China on fusion research, with a capability of long-pulse (∼1000 s) operation. In order to realize long-duration steady-state operation of EAST, significant real-time control capability is required. It is crucial to obtain the current profile parameters and the plasma shape in real time through a flexible control system. As these discharge parameters cannot be measured directly, a current profile consistent with the magnetohydrodynamic equilibrium has to be evaluated from external magnetic measurements, based on a linearized iterative least-squares method that can meet the requirements of the measurements. The algorithm, for which the EFIT (equilibrium fitting) code is used as a reference, is given in this paper, and the computational effort is reduced by parameterizing the current profile linearly in terms of a number of physical parameters. In order to introduce this reconstruction algorithm clearly, the main hardware design is also outlined. (authors)

  14. Global Stability of an Epidemic Model of Computer Virus

    Directory of Open Access Journals (Sweden)

    Xiaofan Yang

    2014-01-01

    With the rapid popularization of the Internet, computers can enter or leave the Internet increasingly frequently. In fact, no antivirus software can detect and remove all sorts of computer viruses. This implies that viruses would persist on the Internet. To better understand the spread of computer viruses in these situations, a new propagation model is established and analyzed. The unique equilibrium of the model is globally asymptotically stable, in accordance with reality. A parameter analysis of the equilibrium is also conducted.
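
    A minimal sketch of the kind of behavior described above is given below: an SIS-type propagation model with computers joining and leaving the network, integrated from several initial conditions to show that all trajectories approach the same equilibrium. The compartments and rate values are illustrative and differ from the paper's specific model.

```python
# SIS-type virus model with node turnover: all runs approach one equilibrium.
import numpy as np
from scipy.integrate import solve_ivp

mu, beta, gamma = 0.1, 0.8, 0.3     # connect/disconnect, infection, cure rates (assumed)

def rhs(t, y):
    S, I = y
    return [mu - beta * S * I + gamma * I - mu * S,
            beta * S * I - gamma * I - mu * I]

for I0 in (0.01, 0.4, 0.9):
    sol = solve_ivp(rhs, (0.0, 200.0), [1.0 - I0, I0], rtol=1e-8)
    print(f"I(0) = {I0:.2f}  ->  I(inf) ~ {sol.y[1, -1]:.4f}")
# All trajectories approach the same endemic level, illustrating global
# asymptotic stability of the unique equilibrium when the reproduction number exceeds 1.
```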

  15. Transition from equilibrium ignition to non-equilibrium burn for ICF capsules surrounded by a high-Z pusher

    International Nuclear Information System (INIS)

    Li, Ji W.; Chang, Lei; Li, Yun S.; Li, Jing H.

    2011-01-01

    For an ICF capsule surrounded by a high-Z pusher, which traps the radiation and confines the hot fuel, the fuel is first ignited in thermal equilibrium with the radiation at a much lower temperature than in hot-spot ignition; this is also referred to as low-temperature ignition. Because of the lower areal density of ICF capsules, the equilibrium ignition must be developed into a non-equilibrium burn to shorten the reaction time and lower the drive energy. In this paper, the transition from equilibrium ignition to non-equilibrium burn is discussed, and the energy deposited by α particles required for equilibrium ignition and non-equilibrium burn to occur is estimated.

  16. Examples of equilibrium and non-equilibrium behavior in evolutionary systems

    Science.gov (United States)

    Soulier, Arne

    With this thesis, we want to shed some light into the darkness of our understanding of simply defined statistical mechanics systems and the surprisingly complex dynamical behavior they exhibit. We will do so by presenting in turn one non-equilibrium and then one equilibrium system with evolutionary dynamics. In part 1, we will present the seceder model, a newly developed system that cannot equilibrate. We will then study several properties of the system and obtain an idea of the richness of the dynamics of the seceder model, which is particularly impressive given the minimal amount of modeling necessary in its setup. In part 2, we will present extensions to the directed polymer in random media problem on a hypercube and its connection to the Eigen model of evolution. Our main interest will be the influence of time-dependent and time-independent changes in the fitness landscape viewed by an evolving population. This part contains the equilibrium dynamics. The stochastic models and the topic of evolution and non-equilibrium in general will allow us to point out similarities to the various lines of thought in game theory.

  17. Equilibrium reconstruction in the TCA/Br tokamak; Reconstrucao do equilibrio no tokamak TCA/BR

    Energy Technology Data Exchange (ETDEWEB)

    Sa, Wanderley Pires de

    1996-12-31

    The accurate and rapid determination of the magnetohydrodynamic (MHD) equilibrium configuration in tokamaks is a key subject for the magnetic confinement of the plasma. With knowledge of the characteristic plasma MHD equilibrium parameters, it is possible to control the plasma position during its formation using feedback techniques. An on-line analysis between successive discharges is also necessary to program external parameters for the subsequent discharges. In this work, the reconstruction of the MHD equilibrium configuration of the TCA/BR tokamak from external magnetic measurements is investigated, using a method able to determine the main discharge parameters quickly. The thesis has two parts. First, the development of an equilibrium code that solves the Grad-Shafranov equation for the TCA/BR tokamak geometry is presented. Second, the MHD equilibrium reconstruction process from external magnetic field and flux measurements using the Function Parametrization (FP) method is presented. This method is based on the statistical analysis of a database of simulated equilibrium configurations, with the goal of obtaining a simple relationship between the parameters that characterize the equilibrium and the measurements. The results from FP are compared with conventional methods. (author) 68 refs., 31 figs., 16 tabs.

  18. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    Science.gov (United States)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in computer, computing in Earth sciences, multivariate data analysis, automated computation in Quantum Field Theory as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open-source, knowledge sharing and scientific collaboration stimulated new thinking on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  19. Equilibrium models and variational inequalities

    CERN Document Server

    Konnov, Igor

    2007-01-01

    The concept of equilibrium plays a central role in various applied sciences, such as physics (especially mechanics), economics, engineering, transportation, sociology, chemistry, biology and other fields. If one can formulate the equilibrium problem in the form of a mathematical model, solutions of the corresponding problem can be used for forecasting the future behavior of very complex systems and also for correcting the current state of the system under control. This book presents a unifying look at different equilibrium concepts in economics, including several models from related sciences.- Presents a unifying look at different equilibrium concepts and also the present state of investigations in this field- Describes static and dynamic input-output models, Walras, Cassel-Wald, spatial price, auction market, oligopolistic equilibrium models, transportation and migration equilibrium models- Covers the basics of theory and solution methods both for the complementarity and variational inequality probl...

  20. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as the Artificial Neural Network (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize a soft computing technique and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in the Basrah districts. Data obtained from the experimental results were used in the regression models and in the soft computing technique based on artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. The results show that all the proposed ANN models perform better than the MLR model, and the ANN model with all input parameters reveals better outcomes than the other ANN models.
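
    For readers unfamiliar with the comparison, the sketch below contrasts a multiple linear regression with a small neural-network regressor on synthetic stand-ins for the index properties; the feature set and data are invented for illustration and are not the paper's 86 laboratory samples.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

# X: index properties (e.g. liquid limit, plasticity index, max dry density,
# optimum moisture content); y: measured CBR. Random data stands in for the
# laboratory samples described in the abstract.
rng = np.random.default_rng(1)
X = rng.normal(size=(86, 4))
y = 5 + X @ np.array([1.5, -2.0, 3.0, -1.0]) + 0.3 * rng.normal(size=86)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(scaler.transform(X_tr), y_tr)

for name, pred in [("MLR", mlr.predict(X_te)),
                   ("ANN", ann.predict(scaler.transform(X_te)))]:
    print(name, "R2=%.3f" % r2_score(y_te, pred),
          "MSE=%.3f" % mean_squared_error(y_te, pred))
```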

  1. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  2. Deviations from thermal equilibrium in plasmas

    International Nuclear Information System (INIS)

    Burm, K.T.A.L.

    2004-01-01

    A plasma system in local thermal equilibrium can usually be described with only two parameters. To describe deviations from equilibrium, two extra parameters are needed. However, it will be shown that deviations from temperature equilibrium and deviations from Saha equilibrium depend on one another. As a result, non-equilibrium plasmas can be described with three parameters. This reduction in parameter space greatly eases the effort of describing such plasmas

  3. Equilibrium mercury isotope fractionation between dissolved Hg(II) species and thiol-bound Hg

    NARCIS (Netherlands)

    Wiederhold, Jan G.; Cramer, Christopher J.; Daniel, Kelly; Infante, Ivan; Bourdon, Bernard; Kretzschmar, Ruben

    2010-01-01

    Stable Hg isotope ratios provide a new tool to trace environmental Hg cycling. Thiols (-SH) are the dominant Hg-binding groups in natural organic matter. Here, we report experimental and computational results on equilibrium Hg isotope fractionation between dissolved Hg(II) species and thiol-bound

  4. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE Model of Water Resources and Water Environments

    Directory of Open Access Journals (Sweden)

    Guohua Fang

    2016-09-01

    Full Text Available To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and on each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, waste water may be effectively controlled. Also, this study demonstrates that, along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from a situation of heavy pollution to one of light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.

  5. LP Well-Posedness for Bilevel Vector Equilibrium and Optimization Problems with Equilibrium Constraints

    OpenAIRE

    Khanh, Phan Quoc; Plubtieng, Somyot; Sombut, Kamonrat

    2014-01-01

    The purpose of this paper is to introduce several types of Levitin-Polyak well-posedness for bilevel vector equilibrium and optimization problems with equilibrium constraints. Based on criteria and characterizations for these types of Levitin-Polyak well-posedness, we argue in terms of diameters and Kuratowski's, Hausdorff's, or Istrǎtescu's measures of noncompactness of approximate solution sets under suitable conditions, and we prove the Levitin-Polyak well-posedness for bilevel vector equilibrium and op...

  6. On techniques of ATR lattice computation

    International Nuclear Information System (INIS)

    1997-08-01

    Lattice computation determines the average nuclear constants of a unit fuel lattice, which are required for computing core nuclear characteristics such as the core power distribution and reactivity characteristics. The main nuclear constants are the infinite multiplication factor, the migration area, the cross sections for diffusion computation, the local power distribution and the isotope composition. The lattice computation code used is WIMS-ATR, which is based on the WIMS-D code developed in the U.K. and which, to improve the accuracy of the analysis, was extended with a heavy water scattering cross section whose temperature dependence is treated with the Honeck model. For the computation of the neutron absorption by control rods, the LOIEL BLUE code is used. The extrapolation distance of the neutron flux at control rod surfaces is computed using the THERMOS and DTF codes, and the lattice constants of adjoining lattices are computed using the WIMS-ATR code. The computation flow and nuclear data library of the WIMS-ATR code and the computation flow of the LOIEL BLUE code are explained. The local power distribution in fuel assemblies determined by the WIMS-ATR code was verified against measured data, and the results are reported. (K.I.)

  7. Determination of Ni{sup 2+} using an equilibrium ion exchange technique: Important chemical factors and applicability to environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Worms, Isabelle A.M. [CABE - Analytical and Biophysical Environmental Chemistry, University of Geneva, 30 quai Ernest Ansermet 1211 Geneva 4 (Switzerland); Wilkinson, Kevin J. [Department of Chemistry, University of Montreal C.P. 6128, succursale Centre-ville Montreal, H3C 3J7 (Canada)], E-mail: KJ.Wilkinson@umontreal.ca

    2008-05-26

    In natural waters, the determination of free metal concentrations is key for studying bioavailability. Unfortunately, few analytical tools are available for determining Ni speciation at the low concentrations found in natural waters. In this paper, an ion exchange technique (IET) that employs a Dowex resin is evaluated for its applicability to measure [Ni{sup 2+}] in freshwaters. The presence of major cations (e.g. Na, Ca and Mg) reduced both the times that were required for equilibration and the partition coefficient to the resin ({lambda}{sup '}{sub Ni}). IET measurements of [Ni{sup 2+}] in the presence of known ligands (citrate, diglycolate, sulfoxine, oxine and diethyldithiocarbamate) were verified by thermodynamic speciation models (MINEQL{sup +} and VisualMINTEQ). Results indicated that the presence of hydrophobic complexes (e.g. Ni(DDC){sub 2}{sup 0}) leads to an overestimation of the Ni{sup 2+} fraction. On the other hand, [Ni{sup 2+}] measurements that were made in the presence of amphiphilic complexes formed with humic substances (standard aquatic humic acid (SRHA) and standard aquatic fulvic acid (SRFA)) were well correlated to free ion concentrations that were calculated using a NICA-DONNAN model. An analytical method is also presented here to reduce the complexity of the calibration (due to the presence of many other cations) for the use of the Dowex equilibrium ion exchange technique in natural waters.
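
    The on-line step of an IET measurement reduces to simple arithmetic once the resin partition coefficient has been calibrated; a minimal sketch of that relation is given below, assuming the simple proportional form {Ni}_resin = lambda' x [Ni2+] (the paper's calibration in natural waters, with competing major cations, is more involved).

```python
def free_ion_concentration(metal_on_resin_mol_per_g, partition_coeff_l_per_g):
    """Free ion concentration (mol/L) from the resin-bound metal, assuming the
    calibrated equilibrium relation {M}_resin = lambda' * [M]."""
    return metal_on_resin_mol_per_g / partition_coeff_l_per_g

# Illustrative numbers only: 2e-9 mol Ni per g resin and lambda' = 4 L/g
print(free_ion_concentration(2e-9, 4.0))  # ~5e-10 mol/L free Ni2+
```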

  8. Computational efficiency improvement with Wigner rotation technique in studying atoms in intense few-cycle circularly polarized pulses

    International Nuclear Information System (INIS)

    Yuan, Minghu; Feng, Liqiang; Lü, Rui; Chu, Tianshu

    2014-01-01

    We show that by introducing the Wigner rotation technique into the solution of the time-dependent Schrödinger equation in the length gauge, computational efficiency can be greatly improved in describing atoms in intense few-cycle circularly polarized laser pulses. The methodology with the Wigner rotation technique underlying our OpenMP parallel computational code for circularly polarized laser pulses is described. Results of test calculations, investigating the scaling of the computational code with the number of electronic angular basis functions l as well as the strong-field phenomena, are presented and discussed for the hydrogen atom

  9. Computation of thermodynamic equilibria of nuclear materials in multi-physics codes

    International Nuclear Information System (INIS)

    Piro, M.H.; Lewis, B.J.; Thompson, W.T.; Simunovic, S.; Besmann, T.M.

    2011-01-01

    A new equilibrium thermodynamic solver is being developed with the primary impetus of direct integration into nuclear fuel performance and safety codes to provide improved predictions of fuel behavior. This solver is intended to provide boundary conditions and material properties for continuum transport calculations. There are several legitimate concerns with the use of existing commercial thermodynamic codes: 1) licensing entanglements associated with code distribution, 2) computational performance, and 3) limited capabilities of handling large multi-component systems of interest to the nuclear industry. The development of this solver is specifically aimed at addressing these concerns. In support of this goal, a new numerical algorithm for computing chemical equilibria is presented which is not based on the traditional steepest descent method or 'Gibbs energy minimization' technique. This new approach exploits fundamental principles of equilibrium thermodynamics, which simplifies the optimization equations. The chemical potentials of all species and phases in the system are constrained by the system chemical potentials, and the objective is to minimize the residuals of the mass balance equations. Several numerical advantages are achieved through this simplification, as described in this paper. (author)

  10. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data Analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent Techniques and Environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting Parallel, Grid, and Cloud computing environments.

  11. Electrostatic afocal-zoom lens design using computer optimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Sise, Omer, E-mail: omersise@gmail.com

    2014-12-15

    Highlights: • We describe the detailed design of a five-element electrostatic afocal-zoom lens. • The simplex optimization is used to optimize lens voltages. • The method can be applied to multi-element electrostatic lenses. - Abstract: Electron optics is the key to the successful operation of electron collision experiments, where well designed electrostatic lenses are needed to drive the electron beam before and after the collision. In this work, the imaging properties and aberration analysis of an electrostatic afocal-zoom lens design were investigated using a computer optimization technique. We have found a whole new range of voltage combinations that had gone unnoticed until now. A full range of voltage ratios and spherical and chromatic aberration coefficients were systematically analyzed over a range of magnifications between 0.3 and 3.2. The grid-shadow evaluation was also employed to show the effect of spherical aberration. The technique is found to be useful for searching for the optimal configuration in a multi-element lens system.
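
    The voltage search in such a design is a derivative-free optimization problem, which is what the simplex (Nelder-Mead) method handles. The sketch below shows only the optimization skeleton with a stand-in figure-of-merit function; a real study would evaluate the lens imaging and aberration properties at each trial voltage set, and all names and numbers here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def figure_of_merit(voltage_ratios):
    """Stand-in for an electron-optics calculation: penalize deviation from a
    target magnification plus mock spherical/chromatic aberration terms."""
    v1, v2, v3 = voltage_ratios
    magnification = v1 * v2 / (1.0 + v3**2)          # illustrative only
    aberration = 0.1 * (v1**2 + v2**2) + 0.05 * v3**4
    return (magnification - 1.0)**2 + aberration

result = minimize(figure_of_merit, x0=[1.0, 1.0, 0.5], method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 2000})
print(result.x, result.fun)  # optimized voltage ratios and residual merit
```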

  12. Computer vision techniques for the diagnosis of skin cancer

    CERN Document Server

    Celebi, M

    2014-01-01

    The goal of this volume is to summarize the state-of-the-art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy, have recently come to the forefront in providing greater diagnostic accuracy. These imaging technologies presented in this book can serve as an adjunct to physicians and  provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for pa...

  13. A Multiperiod Equilibrium Pricing Model

    Directory of Open Access Journals (Sweden)

    Minsuk Kwak

    2014-01-01

    Full Text Available We propose an equilibrium pricing model in a dynamic multiperiod stochastic framework with uncertain income. There are one tradable risky asset (stock/commodity), one nontradable underlying (temperature), and also a contingent claim (weather derivative) written on the tradable risky asset and the nontradable underlying in the market. The contingent claim is priced in equilibrium by the optimal strategies of a representative agent and the market clearing condition. The risk preferences are of exponential type with a stochastic coefficient of risk aversion. Both the subgame perfect strategy and the naive strategy are considered, and the corresponding equilibrium prices are derived. From the numerical results we examine how the equilibrium prices vary in response to changes in model parameters and highlight the importance of our equilibrium pricing principle.

  14. CO2, energy and economy interactions: A multisectoral, dynamic, computable general equilibrium model for Korea

    Science.gov (United States)

    Kang, Yoonyoung

    While vast resources have been invested in the development of computational models for cost-benefit analysis for the "whole world" or for the largest economies (e.g. United States, Japan, Germany), the remainder have been thrown together into one model for the "rest of the world." This study presents a multi-sectoral, dynamic, computable general equilibrium (CGE) model for Korea. This research evaluates the impacts of controlling CO2 emissions using a multisectoral CGE model. This CGE economy-energy-environment model analyzes and quantifies the interactions between CO2, energy and economy. This study examines interactions and influences of key environmental policy components: applied economic instruments, emission targets, and environmental tax revenue recycling methods. The most cost-effective economic instrument is the carbon tax. The economic effects discussed include impacts on main macroeconomic variables (in particular, economic growth), sectoral production, and the energy market. This study considers several aspects of various CO2 control policies, such as the basic variables in the economy: capital stock and net foreign debt. The results indicate emissions might be stabilized in Korea at the expense of economic growth and with dramatic sectoral allocation effects. Carbon dioxide emissions stabilization could be achieved to the tune of a 600 trillion won loss over a 20 year period (1990-2010). The average annual real GDP would decrease by 2.10% over the simulation period compared to the 5.87% increase in the Business-as-Usual. This model satisfies an immediate need for a policy simulation model for Korea and provides the basic framework for similar economies. It is critical to keep the central economic question at the forefront of any discussion regarding environmental protection. How much will reform cost, and what does the economy stand to gain and lose? Without this model, the policy makers might resort to hesitation or even blind speculation. With

  15. X-ray computer tomography, ultrasound and vibrational spectroscopic evaluation techniques of polymer gel dosimeters

    International Nuclear Information System (INIS)

    Baldock, Clive

    2004-01-01

    Since Gore et al published their paper on Fricke gel dosimetry, the predominant method of evaluation of both Fricke and polymer gel dosimeters has been magnetic resonance imaging (MRI). More recently optical computer tomography (CT) has also been a favourable evaluation method. Other techniques have been explored and developed as potential evaluation techniques in gel dosimetry. This paper reviews these other developments

  16. Head and neck computed tomography virtual endoscopy: evaluation of a new imaging technique.

    Science.gov (United States)

    Gallivan, R P; Nguyen, T H; Armstrong, W B

    1999-10-01

    To evaluate a new radiographic imaging technique: computed tomography virtual endoscopy (CTVE) for head and neck tumors. Twenty-one patients presenting with head and neck masses who underwent axial computed tomography (CT) scan with contrast were evaluated by CTVE. Comparisons were made with video-recorded images and operative records to evaluate the potential utility of this new imaging technique. Twenty-one patients with aerodigestive head and neck tumors were evaluated by CTVE. One patient had a nasal cylindrical cell papilloma; the remainder, squamous cell carcinomas distributed throughout the upper aerodigestive tract. Patients underwent complete head and neck examination, flexible laryngoscopy, axial CT with contrast, CTVE, and in most cases, operative endoscopy. Available clinical and radiographic evaluations were compared and correlated to CTVE findings. CTVE accurately demonstrated abnormalities caused by intraluminal tumor, but where there was apposition of normal tissue against tumor, inaccurate depictions of surface contour occurred. Contour resolution was limited, and mucosal irregularity could not be defined. There was very good overall correlation between virtual images, flexible laryngoscopic findings, rigid endoscopy, and operative evaluation in cases where oncological resections were performed. CTVE appears to be most accurate in evaluation of subglottic and nasopharyngeal anatomy in our series of patients. CTVE is a new radiographic technique that provides surface-contour details. The technique is undergoing rapid technical evolution, and although the image quality is limited in situations where there is apposition of tissue folds, there are a number of potential applications for this new imaging technique.

  17. Application of Equilibrium Partitioning Theory to Soil PAH Contamination (External Review Draft)

    Science.gov (United States)

    In March 2004, ORD's Ecological Risk Assessment Support Center (ERASC) received a request from the Ecological Risk Assessment Forum (ERAF) to provide insight into the issue of whether equilibrium partitioning (EqP) techniques can be used to predict the toxicity of polycyclic arom...

  18. Equilibrium studies of helical axis stellarators

    International Nuclear Information System (INIS)

    Hender, T.C.; Carreras, B.A.; Garcia, L.; Harris, J.H.; Rome, J.A.; Cantrell, J.L.; Lynch, V.E.

    1984-01-01

    The equilibrium properties of helical axis stellarators are studied with a 3-D equilibrium code and with an average method (2-D). The helical axis ATF is shown to have a toroidally dominated equilibrium shift and good equilibria up to at least 10% peak beta. Low aspect ratio heliacs, with relatively large toroidal shifts, are shown to have low equilibrium beta limits (approx. 5%). Increasing the aspect ratio and the number of field periods proportionally is found to improve the equilibrium beta limit. Alternatively, increasing the number of field periods at fixed aspect ratio, which raises the rotational transform and lowers the toroidal shift, improves the equilibrium beta limit

  19. Heavy quark energy loss far from equilibrium in a strongly coupled collision

    CERN Document Server

    Chesler, Paul M; Rajagopal, Krishna

    2013-01-01

    We compute and study the drag force acting on a heavy quark propagating through the matter produced in the collision of two sheets of energy in a strongly coupled gauge theory that can be analyzed holographically. Although this matter is initially far from equilibrium, we find that the equilibrium expression for heavy quark energy loss in a homogeneous strongly coupled plasma with the same instantaneous energy density or pressure as that at the location of the quark describes many qualitative features of our results. One interesting exception is that there is a time delay after the initial collision before the heavy quark energy loss becomes significant. At later times, once a liquid plasma described by viscous hydrodynamics has formed, expressions based upon assuming instantaneous homogeneity and equilibrium provide a semi-quantitative description of our results - as long as the rapidity of the heavy quark is not too large. For a heavy quark with large rapidity, the gradients in the velocity of the hydrodyna...

  20. The Equilibrium Rule--A Personal Discovery

    Science.gov (United States)

    Hewitt, Paul G.

    2016-01-01

    Examples of equilibrium are evident everywhere and the equilibrium rule provides a reasoned way to view all things, whether in static (balancing rocks, steel beams in building construction) or dynamic (airplanes, bowling balls) equilibrium. Interestingly, the equilibrium rule applies not just to objects at rest but whenever any object or system of…

  1. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method and the millimetre resolution patient anatomy it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. The E-vector-field distribution for both a simple phantom and the complex partial patient geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlation 97, 96% and absolute averaged difference 6, 14% respectively). (author)
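
    The down-scaling step amounts to assigning one dielectric value per coarse cell from the fine-resolution voxels it contains. The sketch below illustrates two of the three choices discussed above, 'winner-takes-all' and plain 'volumetric averaging', on a toy grid; the 5:1 scale factor and the label/property values are illustrative assumptions, not the patient data of the study.

```python
import numpy as np

def downscale(fine, factor, mode="volumetric"):
    """Down-scale a 3D property grid by an integer factor per axis.

    mode="winner"     : each coarse cell takes the most frequent fine value
                        (winner-takes-all, suited to tissue labels).
    mode="volumetric" : each coarse cell takes the mean of the fine values
                        (volumetric averaging, suited to permittivity maps).
    """
    nz, ny, nx = (s // factor for s in fine.shape)
    blocks = fine[:nz*factor, :ny*factor, :nx*factor].reshape(
        nz, factor, ny, factor, nx, factor)
    if mode == "volumetric":
        return blocks.mean(axis=(1, 3, 5))
    # winner-takes-all: most frequent value in each block
    flat = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(nz, ny, nx, -1)
    out = np.empty((nz, ny, nx), dtype=fine.dtype)
    for idx in np.ndindex(nz, ny, nx):
        vals, counts = np.unique(flat[idx], return_counts=True)
        out[idx] = vals[np.argmax(counts)]
    return out

fine = np.random.default_rng(2).integers(1, 4, size=(20, 20, 20))  # 2 mm labels
coarse_labels = downscale(fine, 5, mode="winner")                  # 1 cm grid
coarse_eps = downscale(fine.astype(float), 5, mode="volumetric")
```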

  2. Investigation of Non-Equilibrium Radiation for Earth Entry

    Science.gov (United States)

    Brandis, A. M.; Johnston, C. O.; Cruden, B. A.

    2016-01-01

    For Earth re-entry at velocities between 8 and 11.5 km/s, the accuracy of NASA's computational fluid dynamics and radiative simulations of non-equilibrium shock layer radiation is assessed through comparisons with measurements. These measurements were obtained in the NASA Ames Research Center's Electric Arc Shock Tube (EAST) facility. The experiments were aimed at measuring the spatially and spectrally resolved radiance at relevant entry conditions for both an approximate Earth atmosphere (79% N2 : 21% O2 by mole) as well as a more accurate composition featuring the trace species Ar and CO2 (78.08% N2 : 20.95% O2 : 0.04% CO2 : 0.93% Ar by mole). The experiments were configured to target a wide range of conditions, of which shots from 8 to 11.5 km/s at 0.2 Torr (26.7 Pa) are examined in this paper. The non-equilibrium component was chosen to be the focus of this study as it can account for a significant percentage of the emitted radiation for Earth re-entry, and more importantly, non-equilibrium has traditionally been assigned a large uncertainty for vehicle design. The main goals of this study are to present the shock tube data in the form of a non-equilibrium metric, evaluate the level of agreement between the experiment and simulations, identify key discrepancies and to examine critical aspects of modeling non-equilibrium radiating flows. Radiance profiles integrated over discrete wavelength regions, ranging from the Vacuum Ultra Violet (VUV) through to the Near Infra-Red (NIR), were compared in order to maximize both the spectral coverage and the number of experiments that could be used in the analysis. A previously defined non-equilibrium metric has been used to allow comparisons with several shots and reveal trends in the data. Overall, LAURA/HARA is shown to under-predict EAST by as much as 40% and over-predict by as much as 12% depending on the shock speed. DPLR/NEQAIR is shown to under-predict EAST by as much as 50% and over-predict by as much as 20% depending

  3. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    Science.gov (United States)

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  4. Non-linear neutron star oscillations viewed as deviations from an equilibrium state

    International Nuclear Information System (INIS)

    Sperhake, U

    2002-01-01

    A numerical technique is presented which facilitates the evolution of non-linear neutron star oscillations with a high accuracy essentially independent of the oscillation amplitude. We apply this technique to radial neutron star oscillations in a Lagrangian formulation and demonstrate the superior performance of the new scheme compared with 'conventional' techniques. The key feature of our approach is to describe the evolution in terms of deviations from an equilibrium configuration. In contrast to standard perturbation analysis we keep all higher order terms in the evolution equations and thus obtain a fully non-linear description. The advantage of our scheme lies in the elimination of background terms from the equations and the associated numerical errors. The improvements thus achieved will be particularly significant in the study of mildly non-linear effects where the amplitude of the dynamic signal is small compared with the equilibrium values but large enough to warrant non-linear effects. We apply the new technique to the study of non-linear coupling of Eigenmodes and non-linear effects in the oscillations of marginally stable neutron stars. We find non-linear effects in low amplitude oscillations to be particularly pronounced in the range of modes with vanishing frequency which typically mark the onset of instability. (author)
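
    The reformulation can be illustrated on a far simpler system than a neutron star: evolve the deviation from a known equilibrium instead of the full state, so the background terms cancel analytically. The sketch below does this for a one-dimensional anharmonic oscillator; it is only a toy analogue under assumed parameters, not the Lagrangian hydrodynamic scheme of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy analogue: particle in an anharmonic potential with equilibrium x_eq = 1.
k, a, x_eq = 1.0, 0.5, 1.0
F = lambda x: -k * (x - x_eq) - a * (x - x_eq)**3   # force, F(x_eq) = 0

def full_rhs(t, y):
    """Evolve the full state (x, v); the integrator's error control acts on x ~ 1."""
    x, v = y
    return [v, F(x)]

def deviation_rhs(t, y):
    """Evolve the deviation (delta, v) from equilibrium, keeping all nonlinear
    terms; the background force F(x_eq) is subtracted analytically, so the
    integrator works directly with numbers of the size of the perturbation."""
    d, v = y
    return [v, F(x_eq + d) - F(x_eq)]

amp = 1e-6                                           # tiny oscillation amplitude
sol_full = solve_ivp(full_rhs, (0.0, 50.0), [x_eq + amp, 0.0], rtol=1e-10, atol=1e-14)
sol_dev = solve_ivp(deviation_rhs, (0.0, 50.0), [amp, 0.0], rtol=1e-10, atol=1e-14)
print(sol_full.y[0, -1] - x_eq, sol_dev.y[0, -1])    # deviations at t = 50
```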

  5. DIAGNOSIS OF FINANCIAL EQUILIBRIUM

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-04-01

    Full Text Available The analysis based on the balance sheet tries to identify the state of equilibrium (or disequilibrium) that exists in a company. The easiest way to determine the state of equilibrium is by looking at the balance sheet and at the information it offers. Because the balance sheet contains elements that do not reflect their real value, the one established on the market, they must be readjusted, and those elements which are not related to the ordinary operating activities must be eliminated. The diagnosis of financial equilibrium takes into account 2 components: financing sources (ownership equity, loaned, temporarily attracted). An efficient financial equilibrium must respect 2 fundamental requirements: permanent sources represented by ownership equity and loans for more than 1 year should finance permanent needs, and temporary resources should finance the operating cycle.

  6. Deviations from mass transfer equilibrium and mathematical modeling of mixer-settler contactors

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Chung, H.F.; Bennett, J.E.

    1980-01-01

    This paper presents the mathematical basis for the computer model PUBG of mixer-settler contactors which accounts for deviations from mass transfer equilibrium. This is accomplished by formulating the mass balance equations for the mixers such that the mass transfer rate of nuclear materials between the aqueous and organic phases is accounted for. 19 refs

  7. Analysis of the plasma magnetohydrodynamic equilibrium in iron core transformer Tokamak HL-1M

    International Nuclear Information System (INIS)

    Chen Xiaoguang; Yuan Baoshan

    1992-01-01

    The physical and mathematical models for the problem of self-consistent MHD equilibrium in the iron core transformer tokamak HL-1M are presented. Calculations and analyses of the plasma equilibrium with a stable boundary and with a free boundary are presented, respectively, within a two-dimensional axisymmetric equilibrium model. First, a variational formulation of the problem is written and the equations for the poloidal flux ψ are solved by a finite element method; the Picard and Newton algorithms are tested for the non-linearities. The plasma boundary and the magnetic surfaces are simulated, with the coil currents, the total plasma current, its current density function and the magnetic permeability of the iron as the data of the problem; a number of characteristic parameters of the equilibrium configuration are calculated. Second, a simple calculation method is adopted to determine the equilibrium fields and currents in the iron core HL-1M tokamak device. In the plasma equilibrium, the magnetic effects of the air gaps in the iron core and of the iron magnetic shielding plate in the HL-1M device are considered. Reliable data are provided for designing and constructing the poloidal field system of the HL-1M device. A computer code is constructed which may be useful for the analysis of future devices

  8. Major parameters affecting the calculation of equilibrium factor using SSNTD-measured track densities

    International Nuclear Information System (INIS)

    Abo-Elmagd, M.; Mansy, M.; Eissa, H.M.; El-Fiki, M.A.

    2006-01-01

    The equilibrium factor F between radon and its daughters as a function of the track density ratio D/D0 between bare and in-can track detectors is solved graphically, giving a more accurate solution than the mathematical solution reported elsewhere. The advantages of the graphical solution come from its simplicity; it does not require any tedious mathematical formula or a computer program. This simplicity allows the study of many parameters that affect the determination of the equilibrium factor, such as the detector type, the diffusion chamber dimensions, the membrane specifications and the behavior of α-emitters around the detector. The results show that the equilibrium factor as a function of D/D0 takes different forms according to the facility used. The range of this study covers two widely used detectors (CR-39 and LR-115) mounted in two widely used diffusion chambers (small and medium chambers)

  9. Probing the conformational energetics of alkyl thiols on gold surfaces by means of a morphing/steering non-equilibrium tool.

    Science.gov (United States)

    Piserchia, Andrea; Zerbetto, Mirco; Frezzato, Diego

    2015-03-28

    In this work we show that a non-equilibrium statistical tool based on Jarzynski's equality (JE) can be applied to achieve a sufficiently accurate mapping of the torsion free energy, bond by bond, for an alkyl thiol ligand tethered to a gold surface and sensing the presence of the surrounding cluster of similar chains. The strength of our approach is the employment of a strategy that lets the internal energetics of the whole system grow (namely, the "energy morphing" stage recently presented by us in J. Comput. Chem., 2014, 35, 1865-1881) before initiating the rotational steering, which yields accurate results in terms of statistical uncertainties and bias on the free energy profiles. The work is mainly methodological and illustrates the feasibility of this kind of inspection of nanoscale molecular clusters with conformational flexibility. The outcomes for the archetype of self-assembled monolayers considered here, a regular pattern of 10-carbon alkyl thiols on an ideal gold surface, give information on the conformational mobility of the ligands. Notably, such information is unlikely to be obtained by means of standard equilibrium techniques or by conventional molecular dynamics simulations.
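
    At the core of a JE-based mapping is the exponentially weighted average of the work recorded over repeated steered trajectories. The sketch below shows that estimator in isolation, with synthetic Gaussian work values standing in for the morphing/steering simulations; the temperature and units are assumptions chosen for illustration.

```python
import numpy as np

kB = 0.0019872041  # Boltzmann constant in kcal/(mol K)
T = 300.0          # assumed temperature (K)
beta = 1.0 / (kB * T)

def jarzynski_free_energy(work_kcal_mol):
    """Estimate the free-energy difference from non-equilibrium work values via
    Jarzynski's equality: exp(-beta*dF) = <exp(-beta*W)>.
    A log-sum-exp form is used for numerical stability."""
    w = np.asarray(work_kcal_mol)
    log_terms = -beta * w
    m = log_terms.max()
    return -(m + np.log(np.mean(np.exp(log_terms - m)))) / beta

# Synthetic work sample: Gaussian with mean 3 kcal/mol and std 1 kcal/mol.
# For Gaussian work, dF = <W> - beta*var(W)/2, i.e. about 2.16 kcal/mol here.
rng = np.random.default_rng(3)
work = rng.normal(3.0, 1.0, size=500)
print(jarzynski_free_energy(work))
```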

  10. A Simple Technique for Securing Data at Rest Stored in a Computing Cloud

    Science.gov (United States)

    Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai

    "Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.

  11. 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N

    2015-01-01

    16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to bring together researchers involved in computing in physics research, from both the physics and computer science sides, and to give them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  12. Computer-aided auscultation learning system for nursing technique instruction.

    Science.gov (United States)

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. The system provides teachers with signal recording and processing of lung sounds and provides students with immediate playback of lung sounds. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  13. Equilibrium Temperature Profiles within Fission Product Waste Forms

    Energy Technology Data Exchange (ETDEWEB)

    Kaminski, Michael D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-10-01

    We studied waste form strategies for advanced fuel cycle schemes. Several options were considered for three waste streams with the following fission products: cesium and strontium, transition metals, and lanthanides. These three waste streams may be combined or disposed separately. The decay of several isotopes will generate heat that must be accommodated by the waste form, and this heat will affect the waste loadings. To help make an informed decision on the best option, we present computational data on the equilibrium temperature of glass waste forms containing a combination of these three streams.

  14. A Predictor-Corrector Method for Solving Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Zong-Ke Bao

    2014-01-01

    Full Text Available We suggest and analyze a predictor-corrector method for solving nonsmooth convex equilibrium problems based on the auxiliary problem principle. In the main algorithm each stage of computation requires two proximal steps. One step serves to predict the next point; the other helps to correct the new prediction. At the same time, we present a convergence analysis under both perfect and imperfect foresight. In particular, we introduce a stopping criterion which gives rise to Δ-stationary points. Moreover, we apply this algorithm to solve the particular case of variational inequalities.
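
    The two-proximal-step structure is reminiscent of the classical extragradient scheme for variational inequalities, in which the first projection predicts a trial point and the second corrects it. The sketch below shows that generic scheme on a small monotone problem; it is not the paper's exact algorithm, and the operator, box constraint and step size are assumptions chosen for illustration.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Proximal/projection step onto a box constraint set."""
    return np.clip(x, lo, hi)

def F(x):
    """A monotone operator: affine map with positive definite symmetric part."""
    A = np.array([[2.0, 1.0], [-1.0, 2.0]])
    b = np.array([-1.0, 0.5])
    return A @ x + b

# Predictor-corrector (extragradient-style) iteration: the first projection
# predicts a trial point, the second corrects using the operator evaluated there.
x = np.array([0.5, 0.5])
step = 0.2
for _ in range(200):
    y = project_box(x - step * F(x))      # predictor
    x_new = project_box(x - step * F(y))  # corrector
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print(x)  # approximate solution of the variational inequality on [0,1]^2
```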

  15. Assessment of health and economic effects by PM2.5 pollution in Beijing: a combined exposure-response and computable general equilibrium analysis.

    Science.gov (United States)

    Wang, Guizhi; Gu, SaiJu; Chen, Jibo; Wu, Xianhua; Yu, Jun

    2016-12-01

    Assessment of the health and economic impacts of PM2.5 pollution is of great importance for urban air pollution prevention and control. In this study, we evaluate the damage of PM2.5 pollution using Beijing as an example. First, we use exposure-response functions to estimate the adverse health effects due to PM2.5 pollution. Then, the corresponding labour loss and excess medical expenditure are computed as two conducting variables. Finally, departing from conventional valuation methods, this paper introduces the two conducting variables into a computable general equilibrium (CGE) model to assess the impacts on individual sectors and on the whole economic system caused by PM2.5 pollution. The results show that PM2.5 pollution caused substantial health effects among Beijing residents in 2013, including 20,043 premature deaths and about one million other related medical cases. Correspondingly, using the 2010 social accounting data, the gross domestic product loss of Beijing due to the health impact of PM2.5 pollution is estimated at 1286.97 (95% CI: 488.58-1936.33) million RMB. This demonstrates that PM2.5 pollution not only has adverse health effects, but also brings huge economic loss.

  16. Controller Design of DFIG Based Wind Turbine by Using Evolutionary Soft Computational Techniques

    Directory of Open Access Journals (Sweden)

    O. P. Bharti

    2017-06-01

    Full Text Available This manuscript illustrates the controller design for a doubly fed induction generator (DFIG) based variable speed wind turbine using a bio-inspired scheme. The methodology exploits two proficient swarm-intelligence-based evolutionary soft computational procedures: the particle swarm optimization (PSO) and bacterial foraging optimization (BFO) techniques are employed to design the controller intended for the small damping plant of the DFIG. A wind energy overview and the DFIG operating principle, along with the equivalent circuit model, are adequately discussed in this paper. The controller designs for the DFIG-based WECS using PSO and BFO are described comparatively in detail. The responses of the DFIG system regarding terminal voltage, current, active-reactive power, and DC-link voltage are slightly improved with the evolutionary soft computational procedures. Lastly, the obtained output is compared with a standard technique to assess the performance improvement of the DFIG-based wind energy conversion system.
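
    To make the swarm-intelligence step concrete, the sketch below runs a compact particle swarm optimization loop that tunes two PI controller gains against a placeholder cost function; the plant model, cost and parameter bounds are invented for illustration and are not the DFIG model of the paper.

```python
import numpy as np

def cost(gains):
    """Placeholder control cost: distance from a 'good' PI gain pair plus a
    mock overshoot term. A real study would simulate the DFIG response."""
    kp, ki = gains
    return (kp - 1.2)**2 + (ki - 0.4)**2 + 0.1 * abs(kp * ki)

rng = np.random.default_rng(4)
n_particles, n_iter, dim = 20, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights

x = rng.uniform(0, 3, size=(n_particles, dim)) # positions (candidate gains)
v = np.zeros_like(x)                           # velocities
pbest = x.copy()                               # personal best positions
pbest_val = np.array([cost(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()       # global best position

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = np.clip(x + v, 0, 3)
    vals = np.array([cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("tuned gains:", gbest, "cost:", cost(gbest))
```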

  17. An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling

    Science.gov (United States)

    Kane, Patrick; Zollman, Kevin J. S.

    2015-01-01

    The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the “hybrid equilibrium,” to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith’s Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory. PMID:26348617

  18. W7-X vacuum and finite-β magnetic field structure resolved with the HINT 3D equilibrium code

    International Nuclear Information System (INIS)

    Hayashi, T.; Merkel, P.; Nuehrenberg, J.; Schwenn, U.

    1994-01-01

    The 3D equilibrium code HINT allows the direct investigation of finite-β effects on the sizes and phases of islands in genuinely 3D configurations like the W7-X stellarator planned by the Max-Planck-Institut fuer Plasmaphysik in Germany. The code does not require the existence of nested flux surfaces. This, in contrast to the inverse formulation used in the VMEC code, leads to a considerably more complex computational goal. The HINT code combines some crucial features reducing the numerical problems and the computational effort to such an extent as to allow computation of 3D equilibria at finite β with magnetic islands. The code is based on a two-step procedure: starting from a given B and an initial pressure, the pressure is advanced by differencing in an artificial time with an explicit 4th order scheme, or - alternatively, for resolving the island topology - field lines starting from all gridpoints are followed long enough to allow pressure equalization along them, B·∇p = 0, for fixed B. In a second step, p is kept fixed and B is advanced in an artificial time to solve ∇p - j×B = 0 under the constraint of vanishing toroidal current J. The differential equations are discretized in space with 4th order difference approximations on an Eulerian grid spanned by a rectangular box whose toroidal rotation law follows the W7-X geometry. The two sub-iteration steps are repeated until the force balance is satisfied to an appropriate accuracy. The boundaries (where the boundary conditions are prescribed) are far enough away from the last closed magnetic surface, thus guaranteeing that the motion of the plasma column is not constrained by the boundary conditions. Due to the stellarator symmetry in the toroidal direction only half of an equilibrium period is computed, using modified periodic boundary conditions that guarantee the 4th order of the spatial discretization. (author) 5 refs., 4 figs

  19. A numerical solution for a toroidal plasma in equilibrium

    International Nuclear Information System (INIS)

    Hintz, E.; Sudano, J.P.

    1982-01-01

    The iterative techniques alternating direction implicit (ADI), successive over-relaxation (SOR) and Gauss-Seidel are applied to a nonlinear elliptic second-order differential equation (Grad-Shafranov). This equation is solved with free-boundary conditions at the plasma-vacuum interface over a rectangular section in cylindrical coordinates R and Z. The current density profile, plasma pressure profile, and magnetic and isobaric surfaces are numerically determined for a toroidal plasma in equilibrium. (L.C.) [pt
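
    For reference, the SOR sweep used for such elliptic problems has the familiar relaxation form. The sketch below applies it to a plain Poisson problem on a rectangular grid as a stand-in for the Grad-Shafranov operator and its nonlinear source; the grid size, relaxation factor and source term are illustrative assumptions.

```python
import numpy as np

def sor_poisson(source, h, omega=1.8, tol=1e-8, max_sweeps=20000):
    """Solve  laplacian(u) = source  on a rectangular grid with u = 0 on the
    boundary, using successive over-relaxation (lexicographic sweeps).
    A Grad-Shafranov solver would replace the Laplacian by the Shafranov
    operator and update the nonlinear right-hand side between sweeps."""
    u = np.zeros_like(source)
    ny, nx = source.shape
    for _ in range(max_sweeps):
        max_change = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                new = (1 - omega) * u[j, i] + omega * 0.25 * (
                    u[j+1, i] + u[j-1, i] + u[j, i+1] + u[j, i-1]
                    - h*h*source[j, i])
                max_change = max(max_change, abs(new - u[j, i]))
                u[j, i] = new
        if max_change < tol:
            break
    return u

# Example: uniform source on a 33x33 unit-square grid
n, h = 33, 1.0 / 32
rhs = np.full((n, n), -1.0)
psi = sor_poisson(rhs, h)
print(psi[n//2, n//2])  # central value of the solution
```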

  20. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    Science.gov (United States)

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation image. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging technique and 3D visualization of the hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction technique, contrast-enhanced techniques, new application of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  1. The geometry of finite equilibrium sets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    2009-01-01

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely noncollinear.

  2. Non-Equilibrium Relations for Bounded Rational Decision-Making in Changing Environments

    Directory of Open Access Journals (Sweden)

    Jordi Grau-Moya

    2017-12-01

    Full Text Available Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker’s action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks’ fluctuation theorem and Jarzynski’s equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.

  3. Knowledge Management through the Equilibrium Pattern Model for Learning

    Science.gov (United States)

    Sarirete, Akila; Noble, Elizabeth; Chikh, Azeddine

    Contemporary students are characterized by having very applied learning styles and methods of acquiring knowledge. This behavior is consistent with constructivist models in which students are co-partners in the learning process. In the present work the authors developed a new model of learning based on constructivist theory coupled with the cognitive development theory of Piaget. The model considers learning as progressing through several stages, and the move from one stage to another requires challenging the learner. Each time a new concept is introduced, it creates a disequilibrium that needs to be worked out in order to return to equilibrium. This process of "disequilibrium/equilibrium" has been analyzed and validated using a course in computer networking that is part of the Cisco Networking Academy Program at Effat College, a women's college in Saudi Arabia. The model provides a theoretical foundation for teaching, especially in a complex knowledge domain such as engineering, and can be used in a knowledge economy.

  4. Evaluation of segmental left ventricular wall motion by equilibrium gated radionuclide ventriculography.

    Science.gov (United States)

    Van Nostrand, D; Janowitz, W R; Holmes, D R; Cohen, H A

    1979-01-01

    The ability of equilibrium gated radionuclide ventriculography to detect segmental left ventricular (LV) wall motion abnormalities was determined in 26 patients undergoing cardiac catheterization. Multiple gated studies obtained in 30 degrees right anterior oblique and 45 degrees left anterior oblique projections, played back in a movie format, were compared to the corresponding LV ventriculograms. The LV wall in the two projections was divided into eight segments. Each segment was graded as normal, hypokinetic, akinetic, dyskinetic, or indeterminate. Thirteen percent of the segments in the gated images were indeterminate; 24 out of 27 of these were proximal or distal inferior wall segments. There was exact agreement in 86% of the remaining segments. The sensitivity of the radionuclide technique for detecting normal versus any abnormal wall motion was 71%, with a specificity of 99%. Equilibrium gated ventriculography is an excellent noninvasive technique for evaluating segmental LV wall motion. It is least reliable in assessing the proximal inferior wall and interventricular septum.

  5. Relevance of equilibrium in multifragmentation

    International Nuclear Information System (INIS)

    Furuta, Takuya; Ono, Akira

    2009-01-01

    The relevance of equilibrium in a multifragmentation reaction of very central 40 Ca + 40 Ca collisions at 35 MeV/nucleon is investigated by using simulations of antisymmetrized molecular dynamics (AMD). Two types of ensembles are compared. One is the reaction ensemble of the states at each reaction time t in collision events simulated by AMD, and the other is the equilibrium ensemble prepared by solving the AMD equation of motion for a many-nucleon system confined in a container for a long time. The comparison of the ensembles is performed for the fragment charge distribution and the excitation energies. Our calculations show that there exists an equilibrium ensemble that well reproduces the reaction ensemble at each reaction time t for the investigated period 80≤t≤300 fm/c. However, there are some other observables that show discrepancies between the reaction and equilibrium ensembles. These may be interpreted as dynamical effects in the reaction. The usual static equilibrium at each instant is not realized since any equilibrium ensemble with the same volume as that of the reaction system cannot reproduce the fragment observables

  6. The impact of increased efficiency in the industrial use of energy: A computable general equilibrium analysis for the United Kingdom

    International Nuclear Information System (INIS)

    Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen

    2007-01-01

    The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services. The response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across-the-board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration, and the mechanism through which increased government revenues are recycled back to the economy.
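
    The rebound figures in this record follow the usual definition: the share of the engineering-expected energy saving that is offset by the economy's response. A minimal sketch of that bookkeeping is given below; the numbers are hypothetical and are not outputs of the UK CGE model.

    ```python
    def rebound(efficiency_gain, observed_energy_change):
        """Rebound = fraction of the expected energy saving that is offset.

        efficiency_gain:        fractional efficiency improvement (e.g. 0.05 for 5%)
        observed_energy_change: fractional change in actual energy use (negative = fall)
        """
        expected_saving = efficiency_gain        # naive expectation: 5% gain -> 5% less energy
        actual_saving = -observed_energy_change  # positive if energy use fell
        return 1.0 - actual_saving / expected_saving

    # A 5% efficiency gain that cuts energy use by only 3% implies a 40% rebound;
    # an outright increase in energy use ("backfire") would give a rebound above 100%.
    print(rebound(0.05, -0.03))   # 0.4
    ```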

  7. Tokamak equilibrium reconstruction code LIUQE and its real time implementation

    International Nuclear Information System (INIS)

    Moret, J.-M.; Duval, B.P.; Le, H.B.; Coda, S.; Felici, F.; Reimerdes, H.

    2015-01-01

    Highlights: • Algorithm vertical stabilisation using a linear parametrisation of the current density. • Experimentally derived model of the vacuum vessel to account for vessel currents. • Real-time contouring algorithm for flux-surface-averaged 1.5D transport equations. • Full real-time implementation coded in SIMULINK runs in less than 200 μs. • Applications: shape control, safety factor profile control, coupling with RAPTOR. - Abstract: Equilibrium reconstruction consists in identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. The LIUQE code adopts a computationally efficient method to solve this problem, based on an iterative solution of the Poisson equation coupled with a linear parametrisation of the plasma current density. This algorithm is unstable against gross vertical motion of the plasma column for elongated shapes, and its application to highly shaped plasmas on TCV requires a particular treatment of this instability. TCV's continuous vacuum vessel has a low resistance designed to enhance passive stabilisation of the vertical position. The eddy currents in the vacuum vessel have a sizeable influence on the equilibrium reconstruction and must be taken into account. A real-time version of LIUQE has been implemented on TCV's distributed digital control system with a cycle time shorter than 200 μs for a full spatial grid of 28 by 65, using all 133 experimental measurements and including the flux-surface averages needed for the real-time solution of 1.5D transport equations. This performance was achieved through a careful choice of numerical methods and code optimisation techniques at every step of the algorithm; the off-line and real-time versions are coded in MATLAB and SIMULINK, respectively.
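
    The following toy sketch shows the Picard-type structure of reconstruction algorithms of this family: the current density is written as a linear combination of basis functions of the normalised flux, the coefficients are fitted to the measurements by linear least squares, and the flux map is recomputed from the fitted current. Every ingredient below (grid, "Green's function", "flux solver", measurements) is a random stand-in, not the TCV geometry or the actual LIUQE operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_grid, n_meas, n_basis = 64, 16, 3

    G = rng.normal(size=(n_meas, n_grid))           # toy response: grid current -> measurements
    L = rng.normal(size=(n_grid, n_grid)) / n_grid  # toy linear map standing in for the flux solver
    meas = rng.normal(size=n_meas)                  # toy measurement vector

    def basis(psi_norm):
        """Linear parametrisation: current density as low-order polynomials of normalised flux."""
        return np.vstack([psi_norm**k for k in range(n_basis)]).T

    psi = np.zeros(n_grid)
    for it in range(20):
        psin = (psi - psi.min()) / (np.ptp(psi) + 1e-12)   # normalise flux to [0, 1]
        A = G @ basis(psin)                                # measurement response of each basis term
        coef, *_ = np.linalg.lstsq(A, meas, rcond=None)    # least-squares fit of the coefficients
        j = basis(psin) @ coef                             # fitted current density on the grid
        psi_new = L @ j                                    # recompute the flux from the fitted current
        if np.linalg.norm(psi_new - psi) < 1e-10 * (np.linalg.norm(psi) + 1.0):
            break
        psi = psi_new
    ```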

  8. From Wang-Chen System with Only One Stable Equilibrium to a New Chaotic System Without Equilibrium

    Science.gov (United States)

    Pham, Viet-Thanh; Wang, Xiong; Jafari, Sajad; Volos, Christos; Kapitaniak, Tomasz

    2017-06-01

    Wang-Chen system with only one stable equilibrium as well as the coexistence of hidden attractors has attracted increasing interest due to its striking features. In this work, the effect of state feedback on Wang-Chen system is investigated by introducing a further state variable. It is worth noting that a new chaotic system without equilibrium is obtained. We believe that the system is an interesting example to illustrate the conversion of hidden attractors with one stable equilibrium to hidden attractors without equilibrium.

  9. Immunity by equilibrium.

    Science.gov (United States)

    Eberl, Gérard

    2016-08-01

    The classical model of immunity posits that the immune system reacts to pathogens and injury and restores homeostasis. Indeed, a century of research has uncovered the means and mechanisms by which the immune system recognizes danger and regulates its own activity. However, this classical model does not fully explain complex phenomena, such as tolerance, allergy, the increased prevalence of inflammatory pathologies in industrialized nations and immunity to multiple infections. In this Essay, I propose a model of immunity that is based on equilibrium, in which the healthy immune system is always active and in a state of dynamic equilibrium between antagonistic types of response. This equilibrium is regulated both by the internal milieu and by the microbial environment. As a result, alteration of the internal milieu or microbial environment leads to immune disequilibrium, which determines tolerance, protective immunity and inflammatory pathology.

  10. Grinding kinetics and equilibrium states

    Science.gov (United States)

    Opoczky, L.; Farnady, F.

    1984-01-01

    The temporary and permanent equilibrium occurring during the initial stage of cement grinding does not indicate the end of comminution, but rather an increased energy consumption during grinding. The constant dynamic equilibrium occurs after a long grinding period indicating the end of comminution for a given particle size. Grinding equilibrium curves can be constructed to show the stages of comminution and agglomeration for certain particle sizes.

  11. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  12. Verification for excess reactivity on beginning equilibrium core of RSG GAS

    International Nuclear Information System (INIS)

    Daddy Setyawan; Budi Rohman

    2011-01-01

    BAPETEN is the institution authorized to control the use of nuclear energy in Indonesia. Control of the use of nuclear energy is carried out through three pillars: regulation, licensing, and inspection. In order to assure the safety of operating research reactors, the assessment unit of BAPETEN carries out independent assessments to verify safety-related parameters in the SAR, including the neutronic aspect. The work includes verification of the excess reactivity of the beginning equilibrium silicide core of the RSG GAS reactor by a computational method using MCNP-ORIGEN. The verification calculation gives an excess reactivity of 9.4 %, whereas the RSG GAS safety analysis report quotes 9.7 % for the equilibrium core. The verification calculation thus shows good agreement with the report. (author)
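
    Excess reactivity quoted in percent is commonly ρ = (k_eff − 1)/k_eff. The quick check below back-calculates the k_eff that would correspond to roughly the 9.7% figure cited in the SAR; the k_eff value is illustrative and is not taken from the MCNP-ORIGEN run.

    ```python
    def excess_reactivity_percent(k_eff):
        """Reactivity in % dk/k for a given effective multiplication factor."""
        return 100.0 * (k_eff - 1.0) / k_eff

    print(excess_reactivity_percent(1.1074))   # ~9.7 %
    ```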

  13. Equilibrium concentration of radionuclides in cement/groundwater/carbon steel system

    International Nuclear Information System (INIS)

    Keum, D. K.; Cho, W. J.; Hahn, P. S.

    1997-01-01

    Equilibrium concentrations of major elements in an underground repository with a capacity of 100,000 drums have been simulated using the geochemical computer code EQMOD. The simulation has been carried out at pH 12 to 13.5 and at Eh values of 520 and -520 mV. The solubilities of magnesium and calcium decrease as pH increases. The solubility of iron increases with pH in the reducing environment (Eh -520 mV), while in the oxidizing environment (Eh 520 mV) iron exists almost entirely as the precipitate Fe(OH)3(s). All of the cobalt and nickel is predicted to be dissolved in the liquid phase regardless of pH, since the solubility limit is greater than the total concentration. In the case of cesium and strontium, all forms of both ions are present in the liquid phase because they have negligible sorption capacity on cement and large solubility under disposal conditions; the total concentration therefore determines the equilibrium concentration. The adsorbed amounts of iodide and carbonate depend on the adsorption capacity and the adsorption equilibrium constant; in particular, calcite turns out to be the solubility-limiting phase in the carbonate system. In order to validate the model, the equilibrium concentrations measured for a number of systems consisting of iron, cement, synthetic groundwater and radionuclides are compared with those predicted by the model. The measured and predicted concentrations of the non-adsorptive elements (cesium, strontium, cobalt, nickel and iron) agree well, indicating that the assumptions and the thermodynamic data used in this work are valid. Using the adsorption equilibrium constant as a free parameter, the experimental data for iodide and carbonate have been fitted to the model; the model is in good agreement with the experimental data for the iodide system. (author)
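
    Behind statements such as "the total concentration determines the equilibrium concentration" is a simple mass balance: an element is either capped by its solubility limit or split between solution and sorbed phases by a distribution coefficient. The sketch below illustrates only that bookkeeping; the parameter values are invented and do not come from the EQMOD simulations.

    ```python
    def equilibrium_concentration(total_mol, volume_L, solubility_mol_per_L, kd_L_per_kg, sorbent_kg):
        """Aqueous concentration from a linear-sorption mass balance, capped at the solubility limit."""
        dissolved = total_mol / (volume_L + kd_L_per_kg * sorbent_kg)   # linear partitioning
        return min(dissolved, solubility_mol_per_L)                     # solubility cap

    # Cs-like case: large solubility, negligible sorption -> inventory controls the concentration.
    print(equilibrium_concentration(1e-3, 100.0, 1.0, 0.0, 500.0))      # 1e-05 mol/L
    # Oxidising Fe-like case: a very low solubility limit controls the concentration.
    print(equilibrium_concentration(1e-3, 100.0, 1e-9, 0.0, 500.0))     # 1e-09 mol/L
    ```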

  14. A pseudo-equilibrium thermodynamic model of information processing in nonlinear brain dynamics.

    Science.gov (United States)

    Freeman, Walter J

    2008-01-01

    Computational models of brain dynamics fall short of performance in speed and robustness of pattern recognition in detecting minute but highly significant pattern fragments. A novel model employs the properties of thermodynamic systems operating far from equilibrium, which is analyzed by linearization near adaptive operating points using root locus techniques. Such systems construct order by dissipating energy. Reinforcement learning of conditioned stimuli creates a landscape of attractors and their basins in each sensory cortex by forming nerve cell assemblies in cortical connectivity. Retrieval of a selected category of stored knowledge is by a phase transition that is induced by a conditioned stimulus, and that leads to pattern self-organization. Near self-regulated criticality the cortical background activity displays aperiodic null spikes at which analytic amplitude nears zero, and which constitute a form of Rayleigh noise. Phase transitions in recognition and recall are initiated at null spikes in the presence of an input signal, owing to the high signal-to-noise ratio that facilitates capture of cortex by an attractor, even by very weak activity that is typically evoked by a conditioned stimulus.

  15. Equilibrium sampling of environmental pollutants in fish: Comparison with lipid- normalized concentrations and homogenization effects on chemical activity

    DEFF Research Database (Denmark)

    Jahnke, Annika; Mayer, Philipp; Adolfsson-Erici, Margaretha

    2011-01-01

    of the equilibrium sampling technique, while at the same time confirming that the fugacity capacity of these lipid-rich tissues for PCBs was dominated by the lipid fraction. Equilibrium sampling was also applied to homogenates of the same fish tissues. The PCB concentrations in the PDMS were 1.2 to 2.0 times higher ... in the homogenates (statistically significant in 18 of 21 cases), which suggests that homogenization increased the chemical activity of the PCBs and decreased the fugacity capacity of the tissue. This observation has implications for equilibrium sampling and for partition coefficients determined using tissue homogenates.
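
    The comparison underlying this record can be sketched as follows: a concentration measured in the PDMS sampler is converted to an equivalent lipid-based concentration with a lipid/PDMS partition ratio and then compared with the conventional lipid-normalised tissue concentration. All numbers below, including the partition ratio, are placeholders rather than values from the study.

    ```python
    def lipid_equivalent_concentration(c_pdms_ng_per_g, k_lipid_pdms):
        """Convert a PDMS-phase concentration to an equilibrium-partitioning lipid-based concentration."""
        return c_pdms_ng_per_g * k_lipid_pdms

    c_from_sampler = lipid_equivalent_concentration(c_pdms_ng_per_g=12.0, k_lipid_pdms=30.0)
    c_lipid_normalised = 400.0   # ng per g lipid from exhaustive extraction (placeholder value)
    # A ratio near 1 would indicate that the lipid fraction dominates the tissue's fugacity capacity.
    print(c_from_sampler / c_lipid_normalised)
    ```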

  16. Development of parallellized higher-order generalized depletion perturbation theory for application in equilibrium cycle optimization

    Energy Technology Data Exchange (ETDEWEB)

    Geemert, R. van E-mail: rene.vangeemert@psi.ch; Hoogenboom, J.E. E-mail: j.e.hoogenboom@iri.tudelft.nl

    2001-09-01

    As nuclear fuel economy is basically a multi-cycle issue, a fair way of evaluating reload patterns is to consider their performance in the case of an equilibrium cycle. The equilibrium cycle associated with a reload pattern is defined as the limit fuel cycle that eventually emerges after multiple successive periodic refuelings, each time implementing the same reload scheme. Since the equilibrium cycle is the solution of a reload-operation invariance equation, it can in principle be found with sufficient accuracy only by applying an iterative procedure that simulates the emergence of the limit cycle. For a design purpose such as the optimization of reload patterns, in which many different equilibrium cycle perturbations (resulting from many different limited changes in the reload operator) must be evaluated, this requires far too much computational effort. However, for very fast calculation of these many different equilibrium cycle perturbations it is also possible to set up a generalized variational approach. This approach results in an iterative scheme that yields the exact perturbation in the equilibrium cycle solution as well, in an accelerated way. Furthermore, both the solution of the adjoint equations occurring in the perturbation theory formalism and the implementation of the optimization algorithm have been parallelized and executed on a massively parallel machine. The combination of parallelism and generalized perturbation theory offers the opportunity to perform very exhaustive, fast and accurate sampling of the solution space for the equilibrium cycle reload pattern optimization problem.
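
    The equilibrium cycle described here is the fixed point of one cycle of depletion followed by the reload operation, and the brute-force way to find it is to iterate that cycle operator until the beginning-of-cycle state stops changing. The sketch below shows only that iteration structure; the cycle operator is an arbitrary contracting map, not a reactor physics model.

    ```python
    import numpy as np

    def cycle_operator(boc_state):
        """Placeholder for: deplete over one cycle, then apply the fixed reload scheme."""
        A = np.array([[0.6, 0.2], [0.1, 0.7]])   # contracting toy dynamics (spectral radius < 1)
        b = np.array([1.0, 0.5])                 # contribution of the fresh fuel feed
        return A @ boc_state + b

    state = np.zeros(2)
    for cycle in range(100):                     # successive periodic refuelings
        new_state = cycle_operator(state)
        if np.linalg.norm(new_state - state) < 1e-12:
            break
        state = new_state
    print(cycle, state)                          # converged beginning-of-cycle (equilibrium cycle) state
    ```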

  17. A Printed Equilibrium Dialysis Device with Integrated Membranes for Improved Binding Affinity Measurements.

    Science.gov (United States)

    Pinger, Cody W; Heller, Andrew A; Spence, Dana M

    2017-07-18

    Equilibrium dialysis is a simple and effective technique used for investigating the binding of small molecules and ions to proteins. A three-dimensional (3D) printer was used to create a device capable of measuring binding constants between a protein and a small ion based on equilibrium dialysis. Specifically, the technology described here enables the user to customize an equilibrium dialysis device to fit their own experiments by choosing membranes of various materials and molecular-weight cutoff values. The device has dimensions similar to those of a standard 96-well plate, thus being amenable to automated sample handlers and multichannel pipettes. The device consists of a printed base that hosts multiple windows containing a porous regenerated-cellulose membrane with a molecular-weight cutoff of ∼3500 Da. A key step in the fabrication process is a print-pause-print approach for integrating membranes directly into the windows subsequently inserted into the base. The integrated membranes display no leaking upon placement into the base. After characterizing the system's requirements for reaching equilibrium, the device was used to successfully measure an equilibrium dissociation constant for Zn²⁺ and human serum albumin (Kd = (5.62 ± 0.93) × 10⁻⁷ M) under physiological conditions that is statistically equal to the constants reported in the literature.
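
    In an equilibrium dialysis experiment of this kind, the free-ligand concentration is the same on both sides of the membrane at equilibrium, so the bound fraction follows from the difference between the two chambers. The sketch below shows that calculation for a 1:1 binding model; the concentrations are made up for illustration and are not data from the paper.

    ```python
    def kd_single_site(ligand_protein_side, ligand_buffer_side, protein_total):
        """K_d for a 1:1 binding model; all concentrations in mol/L."""
        free = ligand_buffer_side                         # protein-free chamber = free ligand
        bound = ligand_protein_side - ligand_buffer_side  # excess on the protein side is bound
        protein_free = protein_total - bound
        return protein_free * free / bound                # K_d = [P][L]/[PL]

    # Hypothetical readout: 8 uM Zn on the protein side, 3 uM on the buffer side, 10 uM albumin.
    print(kd_single_site(8e-6, 3e-6, 10e-6))              # ~3e-6 M
    ```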

  18. A note on existence of mixed solutions to equilibrium problems with equilibrium constraints

    Czech Academy of Sciences Publication Activity Database

    Červinka, Michal

    2007-01-01

    Vol. 2007, No. 24 (2007), pp. 27-44 ISSN 1212-074X R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10750506 Keywords: equilibrium problems with equilibrium constraints * variational analysis * mixed strategy Subject RIV: BA - General Mathematics

  19. Development of computational technique for labeling magnetic flux-surfaces

    International Nuclear Information System (INIS)

    Nunami, Masanori; Kanno, Ryutaro; Satake, Shinsuke; Hayashi, Takaya; Takamaru, Hisanori

    2006-03-01

    In recent Large Helical Device (LHD) experiments, radial profiles of ion temperature, electric field, etc. are measured in the m/n=1/1 magnetic island produced by island control coils, where m is the poloidal mode number and n the toroidal mode number. When plasma transport across these radial profiles is analyzed numerically, the average over a magnetic flux-surface in the island is a very useful concept for understanding the transport. Such averaging requires a proper labeling of the flux-surfaces. In general, labeling the flux-surfaces of a magnetic field containing an island is not easy compared with the case of a configuration having nested flux-surfaces. In the present paper, we have developed a new computational technique to label the magnetic flux-surfaces. The technique is built on an optimization algorithm known as the simulated annealing method. The flux-surfaces are discerned by two labels: one is the classification of the magnetic field structure, i.e., core, island, ergodic, and outside regions, and the other is the value of the toroidal magnetic flux. We have applied the technique to an LHD configuration with the m/n=1/1 island and successfully discriminated the magnetic field structure. (author)
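
    For readers unfamiliar with the optimization method named in this record, the skeleton below shows the core of simulated annealing: random neighbour moves, acceptance of uphill moves with a temperature-dependent probability, and a gradual cooling schedule. The objective function is a toy placeholder, not the flux-surface labeling cost used for the LHD configuration.

    ```python
    import math
    import random

    def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=5000):
        """Minimise cost(x) starting from x0 with a geometric cooling schedule."""
        x, fx = x0, cost(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(steps):
            y = neighbour(x)
            fy = cost(y)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling
        return best, fbest

    # Toy usage: a 1D objective with many local minima.
    cost = lambda x: math.sin(5.0 * x) + 0.1 * x * x
    neighbour = lambda x: x + random.uniform(-0.5, 0.5)
    print(simulated_annealing(cost, neighbour, x0=3.0))
    ```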

  20. Open problems in non-equilibrium physics

    International Nuclear Information System (INIS)

    Kusnezov, D.

    1997-01-01

    The report contains viewgraphs on the following: approaches to non-equilibrium statistical mechanics; classical and quantum processes in chaotic environments; classical fields in non-equilibrium situations: real time dynamics at finite temperature; and phase transitions in non-equilibrium conditions