WorldWideScience

Sample records for optimisation equilibrium model

  1. Comprehensive optimisation of China’s energy prices, taxes and subsidy policies based on the dynamic computable general equilibrium model

    He, Y.X.; Liu, Y.Y.; Du, M.; Zhang, J.X.; Pang, Y.X.

    2015-01-01

    Highlights: • Energy policy is defined as a compilation of energy price, tax and subsidy policies. • The maximisation of total social benefit is the optimisation objective. • A more rational carbon tax ranges from 10 to 20 Yuan/ton under the current situation. • Optimal coefficient pricing is more conducive to maximising total social benefit. - Abstract: Under conditions of increasingly serious environmental pollution, a rational energy policy has great practical significance for energy conservation and emission reduction. This paper defines energy policy as the compilation of energy price, tax and subsidy policies. Moreover, it establishes an optimisation model of China's energy policy based on a dynamic computable general equilibrium model, which maximises total social benefit, in order to explore the comprehensive influences of a carbon tax, the sales pricing mechanism and the renewable energy fund policy. The results show that when the change rates of gross domestic product and the consumer price index are ±2% and ±5%, and the renewable energy supply structure ratio is 7%, the more reasonable carbon tax ranges from 10 to 20 Yuan/ton, and the optimal coefficient pricing mechanism is more conducive to the objective of maximising total social benefit. From the perspective of optimising the overall energy policies, if the upper limit of the change rate in the consumer price index is 2.2%, the existing renewable energy fund should be improved.

  2. MERGE-ETL: An Optimisation Equilibrium Model with Two Different Endogeneous Technological Learning Formulations

    Bahn, O.; Kypreos, S.

    2002-07-01

    In MERGE-ETL, endogenous technological progress is applied to eight energy technologies: six power plants (integrated coal gasification with combined cycle, gas turbine with combined cycle, gas fuel cell, new nuclear designs, wind turbine and solar photovoltaic) and two plants producing hydrogen (from biomass and solar photovoltaic). Furthermore, compared to the original MERGE model, we have introduced two new power plants (using coal and gas) with CO2 capture and disposal into depleted oil and gas reservoirs. The difficulty with incorporating endogenous technological progress in MERGE comes from the resulting formulation of the MERGE-ETL model. Indeed, technological learning is related to increasing returns to adoption, and the mathematical formulation of MERGE-ETL then corresponds to a (non-linear and) non-convex optimisation problem. To solve MERGE-ETL, we have devised a three-step heuristic approach, in which we search for the global optimum iteratively. In particular, we use a linearisation of the bottom-up part of MERGE-ETL, following mixed integer programming techniques. To study the impacts of modelling endogenous technological change in MERGE, we have considered several scenarios related to technological learning and carbon control. The latter corresponds to a 'soft landing' of world energy-related CO2 emissions at a level of 10 Gt C by 2050, and takes into account the recent (2001) Marrakech Agreements for CO2 emission limits by 2010. Notice that our baseline scenario (without emission control and endogenous technological change) is consistent, in particular in terms of population and CO2 emissions, with the IPCC B2 scenario. Our numerical application with MERGE-ETL shows that technological learning yields an increase of primary energy use and of electricity generation. Indeed, energy production, and in particular electricity generation, becomes less expensive over time. Energy (electricity, but also non
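
    The non-convexity mentioned above stems from the learning-curve (experience-curve) formulation used in ETL models, in which specific investment cost falls with cumulative installed capacity. The sketch below illustrates that standard one-factor learning curve; the cost level, progress ratio and capacity units are illustrative assumptions, not values from MERGE-ETL.

```python
# Illustrative sketch (not the MERGE-ETL code): a one-factor learning curve of the kind
# used in endogenous technological learning (ETL) formulations.  Specific investment
# cost falls by a fixed "progress ratio" pr for every doubling of cumulative capacity C:
#     SC(C) = SC0 * (C / C0) ** (-b),   with b = -log2(pr).
import numpy as np

def specific_cost(C, C0=1.0, SC0=1000.0, pr=0.9):
    """Specific cost (e.g. $/kW) at cumulative capacity C, for progress ratio pr."""
    b = -np.log2(pr)                     # learning index
    return SC0 * (C / C0) ** (-b)

def cumulative_cost(C, C0=1.0, SC0=1000.0, pr=0.9):
    """Total investment spent to expand cumulative capacity from C0 to C."""
    b = -np.log2(pr)
    a = SC0 * C0 ** b                    # so that SC(C) = a * C**(-b)
    return a / (1.0 - b) * (C ** (1.0 - b) - C0 ** (1.0 - b))

if __name__ == "__main__":
    for C in (2.0, 4.0, 8.0):            # each doubling cuts specific cost by 10% (pr = 0.9)
        print(C, round(specific_cost(C), 1), round(cumulative_cost(C), 1))
```

    Because the cumulative-cost term is concave in capacity, embedding it in a cost-minimising energy model produces exactly the non-convexity that motivates the mixed-integer linearisation described in the abstract.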

  3. Medium-term generation programming in competitive environments: a new optimisation approach for market equilibrium computing

    Barquin, J.; Centeno, E.; Reneses, J.

    2004-01-01

    The paper proposes a model to represent medium-term hydro-thermal operation of electrical power systems in deregulated frameworks. The model objective is to compute the oligopolistic market equilibrium point in which each utility maximises its profit, based on other firms' behaviour. This problem is not an optimisation one. The main contribution of the paper is to demonstrate that, nevertheless, under some reasonable assumptions, it can be formulated as an equivalent minimisation problem. A computer program has been coded by using the proposed approach. It is used to compute the market equilibrium of a real-size system. (author)
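
    To illustrate the idea of recovering a market equilibrium from an equivalent minimisation problem, the toy sketch below computes a Cournot equilibrium by maximising a potential function whose first-order conditions coincide with each firm's profit-maximisation conditions. The demand and cost numbers are invented, and the setting is far simpler than the paper's hydro-thermal formulation.

```python
# Illustrative sketch only: a Cournot market equilibrium recovered by solving an
# equivalent minimisation problem, in the spirit of the approach described above.
import numpy as np
from scipy.optimize import minimize

a, b = 100.0, 1.0              # linear inverse demand p(Q) = a - b*Q
c = np.array([10.0, 20.0])     # marginal costs of the two firms

def neg_potential(q):
    # Potential whose maximiser satisfies every firm's first-order condition
    Q = q.sum()
    P = a * Q - c @ q - 0.5 * b * Q**2 - 0.5 * b * (q**2).sum()
    return -P

res = minimize(neg_potential, x0=np.ones(2), bounds=[(0, None)] * 2)
q_eq = res.x                   # ~[33.3, 23.3]: each firm's best response to the other
print(q_eq, "market price:", a - b * q_eq.sum())
```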

  4. Equilibrium models and variational inequalities

    Konnov, Igor

    2007-01-01

    The concept of equilibrium plays a central role in various applied sciences, such as physics (especially mechanics), economics, engineering, transportation, sociology, chemistry, biology and other fields. If one can formulate the equilibrium problem in the form of a mathematical model, solutions of the corresponding problem can be used for forecasting the future behavior of very complex systems and also for correcting the current state of the system under control. This book presents a unifying look at different equilibrium concepts in economics, including several models from related sciences. - Presents a unifying look at different equilibrium concepts and the present state of investigations in this field. - Describes static and dynamic input-output models, Walras, Cassel-Wald, spatial price, auction market and oligopolistic equilibrium models, as well as transportation and migration equilibrium models. - Covers the basics of theory and solution methods both for the complementarity and variational inequality probl...
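
    A minimal sketch of the variational-inequality view of equilibrium underlying these models is given below, solved by a projected fixed-point iteration; the operator and feasible set are toy choices, not examples from the book.

```python
# Minimal sketch of a variational inequality VI(F, K): find x* in K with
# <F(x*), x - x*> >= 0 for all x in K.  The projection iteration
# x <- P_K(x - tau*F(x)) converges for strongly monotone, Lipschitz F and small tau.
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def solve_vi(F, x0, lo, hi, tau=0.1, tol=1e-10, max_iter=10_000):
    x = x0.copy()
    for _ in range(max_iter):
        x_new = project_box(x - tau * F(x), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy affine operator F(x) = M x + q with M positive definite (strongly monotone)
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -3.0])
x_star = solve_vi(lambda x: M @ x + q, np.zeros(2), lo=0.0, hi=10.0)
print(x_star)   # solves the corresponding linear complementarity problem on the box
```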

  5. A Multiperiod Equilibrium Pricing Model

    Minsuk Kwak

    2014-01-01

    We propose an equilibrium pricing model in a dynamic multiperiod stochastic framework with uncertain income. The market contains one tradable risky asset (stock/commodity), one nontradable underlying (temperature), and a contingent claim (weather derivative) written on the tradable risky asset and the nontradable underlying. The contingent claim is priced in equilibrium via the optimal strategies of a representative agent and a market-clearing condition. The risk preferences are of exponential type with a stochastic coefficient of risk aversion. Both the subgame-perfect strategy and the naive strategy are considered, and the corresponding equilibrium prices are derived. From the numerical results we examine how the equilibrium prices vary in response to changes in model parameters and highlight the importance of our equilibrium pricing principle.

  6. Helical axis stellarator equilibrium model

    Koniges, A.E.; Johnson, J.L.

    1985-02-01

    An asymptotic model is developed to study MHD equilibria in toroidal systems with a helical magnetic axis. Using a characteristic coordinate system based on the vacuum field lines, the equilibrium problem is reduced to a two-dimensional generalized partial differential equation of the Grad-Shafranov type. A stellarator-expansion free-boundary equilibrium code is modified to solve the helical-axis equations. The expansion model is used to predict the equilibrium properties of Asperators NP-3 and NP-4. Numerically determined flux surfaces, magnetic well, transform, and shear are presented. The equilibria show a toroidal Shafranov shift
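
    For reference, the standard axisymmetric Grad-Shafranov equation is reproduced below; the helical-axis expansion in the paper leads to a generalised two-dimensional equation of this type rather than this exact form.

```latex
% Standard axisymmetric Grad-Shafranov equation, shown only as a reference point for the
% generalised Grad-Shafranov-type equation obtained from the helical-axis expansion:
\Delta^{*}\psi \;\equiv\; R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right)
   + \frac{\partial^{2}\psi}{\partial Z^{2}}
   \;=\; -\mu_{0} R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi},
\qquad F(\psi) = R\,B_{\phi}.
```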

  7. Optimisation of timetable-based, stochastic transit assignment models based on MSA

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr

    2006-01-01

    (CRM), such a large-scale transit assignment model was developed and estimated. The Stochastic User Equilibrium problem was solved by the Method of Successive Averages (MSA). However, the model suffered from very large calculation times. The paper focuses on how to optimise transit assignment models...
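
    A minimal sketch of the Method of Successive Averages in its generic form is given below; the network-loading routine is a hypothetical placeholder for the timetable-based stochastic loading of the actual model.

```python
# Sketch of the Method of Successive Averages (MSA): the equilibrium link-flow vector is
# the running average of the auxiliary flows produced by repeated stochastic network loadings.
import numpy as np

def msa(load_network, n_links, max_iter=100, tol=1e-6):
    flows = np.zeros(n_links)
    for k in range(1, max_iter + 1):
        aux = load_network(flows)          # loading given the costs implied by current flows
        new = flows + (aux - flows) / k    # step size 1/k -> convex combination of loadings
        if np.linalg.norm(new - flows) < tol:
            return new
        flows = new
    return flows

# Toy "network": two parallel links whose perceived cost rises with flow (hypothetical).
def toy_loading(flows, demand=100.0):
    costs = np.array([1.0, 1.2]) + 0.01 * flows
    p = np.exp(-costs) / np.exp(-costs).sum()   # logit split = stochastic loading
    return demand * p

print(msa(toy_loading, n_links=2))
```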

  8. Optimisation of BPMN Business Models via Model Checking

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work, where we developed a model checking based method for the analysis of BPMN models. We define a structure for expressing optimisation goals...... for synthesized BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We here present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates...

  9. Non-equilibrium modelling of distillation

    Wesselingh, JA; Darton, R

    1997-01-01

    There are nasty conceptual problems in the classical way of describing distillation columns via equilibrium stages, and efficiencies or HETP's. We can nowadays avoid these problems by simulating the behaviour of a complete column in one go using a non-equilibrium model. Such a model has phase

  10. Non-equilibrium dog-flea model

    Ackerson, Bruce J.

    2017-11-01

    We develop the open dog-flea model to serve as a check of proposed non-equilibrium theories of statistical mechanics. The model is developed in detail. Then it is applied to four recent models for non-equilibrium statistical mechanics. Comparison of the dog-flea solution with these different models allows checking claims and giving a concrete example of the theoretical models.

  11. FISHRENT; Bio-economic simulation and optimisation model

    Salz, P.; Buisman, F.C.; Soma, K.; Frost, H.; Accadia, P.; Prellezo, R.

    2011-01-01

    Key findings: The FISHRENT model is a major step forward in bio-economic modelling, combining features that have not been fully integrated in earlier models: 1- Incorporation of any number of species (or stock) and/or fleets 2- Integration of simulation and optimisation over a period of 25 years 3-

  12. Modelling of an homogeneous equilibrium mixture model

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard 4-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, allowing in particular both a mixture velocity and a relative velocity to be followed. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
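
    As a rough orientation, a drift-flux-type mixture system with an algebraic closure for the relative velocity can be written schematically as below; this is an illustrative generic form, not the authors' exact HEM derivation.

```latex
% Schematic drift-flux form of a two-phase mixture model with an algebraic
% relative-velocity (drift) closure -- illustrative only, not the authors' exact system.
\begin{align*}
  &\partial_t \rho + \nabla\!\cdot(\rho\,\mathbf{u}) = 0, \\
  &\partial_t(\rho c) + \nabla\!\cdot\bigl(\rho c\,\mathbf{u} + \rho c(1-c)\,\mathbf{u}_r\bigr) = 0, \\
  &\partial_t(\rho\,\mathbf{u}) + \nabla\!\cdot\bigl(\rho\,\mathbf{u}\otimes\mathbf{u}
      + \rho c(1-c)\,\mathbf{u}_r\otimes\mathbf{u}_r\bigr) + \nabla p = \rho\,\mathbf{g}, \\
  &\mathbf{u}_r = f(c,\rho_\ell,\rho_v,\ldots)
      \quad\text{(algebraic drift relation derived from the drag force).}
\end{align*}
```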

  13. Parameter Optimisation for the Behaviour of Elastic Models over Time

    Mosegaard, Jesper

    2004-01-01

    Optimisation of parameters for elastic models is essential for comparison or finding equivalent behaviour of elastic models when parameters cannot simply be transferred or converted. This is the case with a large range of commonly used elastic models. In this paper we present a general method tha...

  14. Micro Data and General Equilibrium Models

    Browning, Martin; Hansen, Lars Peter; Heckman, James J.

    1999-01-01

    Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different...... economic models, assesses the discordance between the macromodels used in policy evaluation and the microeconomic models used to generate the empirical evidence. For concreteness, we focus on two general equilibrium models: the stochastic growth model extended to include some forms of heterogeneity...

  15. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava, Divya Sachdeva. Journal of Earth System Science, Volume 126, Issue 1, February 2017 ...
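
    For orientation, the snippet below shows generic Baum-Welch (EM) training of a Gaussian hidden Markov model, assuming the third-party hmmlearn package; the synthetic temperature series and model settings are illustrative and not taken from the paper.

```python
# Generic illustration of Baum-Welch (EM) training of a hidden Markov model using the
# third-party hmmlearn package; the data below are synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic daily maximum-temperature record with two hidden regimes
temps = np.concatenate([rng.normal(5, 2, 200), rng.normal(15, 3, 200)]).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(temps)                  # Baum-Welch: forward-backward pass + re-estimation
hidden = model.predict(temps)     # Viterbi decoding of the regime sequence
print(model.means_.ravel(), model.transmat_.round(2))
```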

  16. Statistical models of shape optimisation and evaluation

    Davies, Rhodri; Taylor, Chris

    2014-01-01

    Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.

  17. General Equilibrium Models: Improving the Microeconomics Classroom

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  18. Thermochemical equilibrium modelling of a gasifying process

    Melgar, Andres; Perez, Juan F.; Laget, Hannes; Horillo, Alfonso

    2007-01-01

    This article discusses a mathematical model for the thermochemical processes in a downdraft biomass gasifier. The model combines the chemical equilibrium and the thermodynamic equilibrium of the global reaction, predicting the final composition of the producer gas as well as its reaction temperature. Once the composition of the producer gas is obtained, a range of parameters can be derived, such as the cold gas efficiency of the gasifier, the amount of dissociated water in the process and the heating value and engine fuel quality of the gas. The model has been validated experimentally. This work includes a parametric study of the influence of the gasifying relative fuel/air ratio and the moisture content of the biomass on the characteristics of the process and the producer gas composition. The model helps to predict the behaviour of different biomass types and is a useful tool for optimizing the design and operation of downdraft biomass gasifiers
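
    A typical stoichiometric equilibrium formulation for downdraft gasification is sketched below for orientation; the paper's model may differ in its exact closure, but formulations of this kind combine a global reaction with elemental and energy balances and temperature-dependent equilibrium constants.

```latex
% Typical stoichiometric equilibrium closure for downdraft gasification (illustrative;
% the paper's exact formulation may differ).  Global reaction for biomass CH_aO_b with
% moisture w and air ratio m:
\begin{align*}
  \mathrm{CH}_{a}\mathrm{O}_{b} + w\,\mathrm{H_2O} + m\,(\mathrm{O_2} + 3.76\,\mathrm{N_2})
  \;\rightarrow\; x_1\,\mathrm{H_2} + x_2\,\mathrm{CO} + x_3\,\mathrm{CO_2}
  + x_4\,\mathrm{H_2O} + x_5\,\mathrm{CH_4} + 3.76\,m\,\mathrm{N_2},
\end{align*}
% closed by C, H, O balances, an energy balance for the reaction temperature, and two
% equilibrium constants (water-gas shift and methane formation):
\begin{align*}
  K_{\mathrm{wgs}}(T) = \frac{x_3\,x_1}{x_2\,x_4}
  \quad (\mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2}),
  \qquad
  K_{\mathrm{CH_4}}(T) = \frac{x_5\,x_{\mathrm{tot}}}{x_1^{2}}\,\frac{P_0}{P}
  \quad (\mathrm{C} + 2\,\mathrm{H_2} \rightleftharpoons \mathrm{CH_4}).
\end{align*}
```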

  19. An Equilibrium Model of User Generated Content

    Dae-Yong Ahn; Jason A. Duan; Carl F. Mela

    2011-01-01

    This paper considers the joint creation and consumption of content on user generated content platforms (e.g., reviews or articles, chat, videos, etc.). On these platforms, users' utilities depend upon the participation of others; hence, users' expectations regarding the participation of others on the site become germane to their own involvement levels. Yet these beliefs are often assumed to be fixed. Accordingly, we develop a dynamic rational expectations equilibrium model of joint consumpti...

  20. Optimisation of a parallel ocean general circulation model

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
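
    The kind of high-level message-passing layer described above can be illustrated with a one-dimensional halo exchange; the sketch below uses mpi4py and is not the model's actual code.

```python
# Sketch of a 1-D domain-decomposition halo exchange using mpi4py (illustrative only).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nloc = 100                                  # local grid points per process
u = np.full(nloc + 2, float(rank))          # field with one halo cell on each side
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

def halo_exchange(u):
    """Swap boundary values with neighbouring subdomains."""
    comm.Sendrecv(u[1:2],   dest=left,  recvbuf=u[-1:], source=right)  # send left edge, recv right halo
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[:1],  source=left)   # send right edge, recv left halo

halo_exchange(u)
# an interior update (e.g. a diffusion step) can now use u[0] and u[-1] from the neighbours
```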

  1. Generation of safe optimised execution strategies for uml models

    Herbert, Luke Thomas; Herbert-Hansen, Zaza Nadja Lee

    When designing safety critical systems there is a need for verification of safety properties while ensuring system operations have a specific performance profile. We present a novel application of model checking to derive execution strategies, sequences of decisions at workflow branch points...... which optimise a set of reward variables, while simultaneously observing constraints which encode any required safety properties and accounting for the underlying stochastic nature of the system. By evaluating quantitative properties of the generated adversaries we are able to construct an execution...

  2. Optimisation-Based Solution Methods for Set Partitioning Models

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...

  3. Measuring productivity differences in equilibrium search models

    Lanot, Gauthier; Neumann, George R.

    1996-01-01

    Equilibrium search models require unobserved heterogeneity in productivity to fit observed wage distribution data, but provide no guidance about the location parameter of the heterogeneity. In this paper we show that the location of the productivity heterogeneity implies a mode in a kernel density...... estimate of the wage distribution. The number of such modes and their location are identified using bump hunting techniques due to Silverman (1981). These techniques are applied to Danish panel data on workers and firms. These estimates are used to assess the importance of employer wage policy....

  4. Adiabatic equilibrium models for direct containment heating

    Pilch, M.; Allen, M.D.

    1991-01-01

    Probabilistic risk assessment (PRA) studies are being extended to include a wider spectrum of reactor plants than was considered in NUREG-1150. There is a need for simple direct containment heating (DCH) models that can be used for screening studies aimed at identifying potentially significant contributors to overall risk in individual nuclear power plants. This paper presents two adiabatic equilibrium models suitable for the task. The first, a single-cell model, places a true upper bound on DCH loads. This upper bound, however, often far exceeds reasonable expectations of containment loads based on CONTAIN calculations and experimental observations. In this paper, a two-cell model is developed that captures the major mitigating feature of containment compartmentalization, thus providing more reasonable estimates of the containment load.

  5. Equilibrium statistical mechanics of lattice models

    Lavis, David A

    2015-01-01

    Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm-Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg-Landau theory is introduced. The extension of mean-field theory to higher orders is explored using the Kikuchi-Hijmans-De Boer hierarchy of approximations. In Part III the use of alge...

  6. Selecting a climate model subset to optimise key ensemble properties

    N. Herger

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  7. Selecting a climate model subset to optimise key ensemble properties

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
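
    The core subset-selection idea can be illustrated with a brute-force search for the k-member subset whose equally weighted mean best matches observations; the data below are random placeholders, and the actual tool also optimises spread and independence and scales beyond exhaustive search.

```python
# Toy version of the subset-selection idea: pick the k-member subset whose equally
# weighted ensemble mean has the smallest RMSE against observations.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(size=200)                                # "observed" field, flattened
ensemble = obs + rng.normal(scale=1.0, size=(12, 200))    # 12 imperfect model simulations

def best_subset(ensemble, obs, k):
    best, best_rmse = None, np.inf
    for idx in combinations(range(ensemble.shape[0]), k):
        rmse = np.sqrt(np.mean((ensemble[list(idx)].mean(axis=0) - obs) ** 2))
        if rmse < best_rmse:
            best, best_rmse = idx, rmse
    return best, best_rmse

print(best_subset(ensemble, obs, k=4))
```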

  8. Radiative-convective equilibrium model intercomparison project

    Wing, Allison A.; Reed, Kevin A.; Satoh, Masaki; Stevens, Bjorn; Bony, Sandrine; Ohno, Tomoki

    2018-03-01

    RCEMIP, an intercomparison of multiple types of models configured in radiative-convective equilibrium (RCE), is proposed. RCE is an idealization of the climate system in which there is a balance between radiative cooling of the atmosphere and heating by convection. The scientific objectives of RCEMIP are three-fold. First, clouds and climate sensitivity will be investigated in the RCE setting. This includes determining how cloud fraction changes with warming and the role of self-aggregation of convection in climate sensitivity. Second, RCEMIP will quantify the dependence of the degree of convective aggregation and tropical circulation regimes on temperature. Finally, by providing a common baseline, RCEMIP will allow the robustness of the RCE state across the spectrum of models to be assessed, which is essential for interpreting the results found regarding clouds, climate sensitivity, and aggregation, and more generally, determining which features of tropical climate a RCE framework is useful for. A novel aspect and major advantage of RCEMIP is the accessibility of the RCE framework to a variety of models, including cloud-resolving models, general circulation models, global cloud-resolving models, single-column models, and large-eddy simulation models.

  9. Optimisation of a parallel ocean general circulation model

    M. I. Beare

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  10. Optimisation of a parallel ocean general circulation model

    M. I. Beare

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  11. Modelling, analysis and optimisation of energy systems on offshore platforms

    Nguyen, Tuong-Van

    Nowadays, the offshore production of oil and gas requires on-site processing, which includes operations such as separation, compression and purification. The offshore system undergoes variations of the petroleum production rates over the field life – it is therefore operated far from its nominal... of oil and gas facilities, (ii) the means to reduce their performance losses, and (iii) the systematic design of future plants. This work builds upon a combination of modelling tools, performance evaluation methods and multi-objective optimisation routines to reproduce the behaviour of five offshore...

  12. Optimisation of Marine Boilers using Model-based Multivariable Control

    Solberg, Brian

    Traditionally, marine boilers have been controlled using classical single loop controllers. To optimise marine boiler performance, reduce new installation time and minimise the physical dimensions of these large steel constructions, a more comprehensive and coherent control strategy is needed....... This research deals with the application of advanced control to a specific class of marine boilers combining well-known design methods for multivariable systems. This thesis presents contributions for modelling and control of the one-pass smoke tube marine boilers as well as for hybrid systems control. Much...... of the focus has been directed towards water level control which is complicated by the nature of the disturbances acting on the system as well as by low frequency sensor noise. This focus was motivated by an estimated large potential to minimise the boiler geometry by reducing water level fluctuations...

  13. Equilibrium Price Dispersion in a Matching Model with Divisible Money

    Kamiya, K.; Sato, T.

    2002-01-01

    The main purpose of this paper is to show that, for any given parameter values, an equilibrium with dispersed prices (two-price equilibrium) exists in a simple matching model with divisible money presented by Green and Zhou (1998). We also show that our two-price equilibrium is unique in certain

  14. Mathematical models and equilibrium in irreversible microeconomics

    Anatoly M. Tsirlin

    2010-07-01

    A set of equilibrium states in a system consisting of economic agents, economic reservoirs, and firms is considered. Methods of irreversible microeconomics are used. We show that direct sale/purchase leads to an equilibrium state which depends upon the coefficients of supply/demand functions. To reach the unique equilibrium state it is necessary to add either monetary exchange or an intermediate firm.

  15. Normal tissue dose-effect models in biological dose optimisation

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)

  16. Production optimisation in the petrochemical industry by hierarchical multivariate modelling

    Andersson, Magnus; Furusjoe, Erik; Jansson, Aasa

    2004-06-01

    This project demonstrates the advantages of applying hierarchical multivariate modelling in the petrochemical industry in order to increase knowledge of the total process. The models indicate possible ways to optimise the process regarding the use of energy and raw material, which is directly linked to the environmental impact of the process. The refinery of Nynaes Refining AB (Goeteborg, Sweden) has acted as a demonstration site in this project. The models developed for the demonstration site resulted in: Detection of an unknown process disturbance and suggestions of possible causes; Indications on how to increase the yield in combination with energy savings; The possibility to predict product quality from on-line process measurements, making the results available at a higher frequency than customary laboratory analysis; Quantification of the gradually lowered efficiency of heat transfer in the furnace and increased fuel consumption as an effect of soot build-up on the furnace coils; Increased knowledge of the relation between production rate and the efficiency of the heat exchangers. This report is one of two reports from the project. It contains a technical discussion of the result with some degree of detail. A shorter and more easily accessible report is also available, see IVL report B1586-A.

  17. Simulation and optimisation modelling approach for operation of the Hoa Binh Reservoir, Vietnam

    Ngo, Long le; Madsen, Henrik; Rosbjerg, Dan

    2007-01-01

    Hoa Binh, the largest reservoir in Vietnam, plays an important role in flood control for the Red River delta and hydropower generation. Due to its multi-purpose character, conflicts and disputes in operating the reservoir have been ongoing since its construction, particularly in the flood season....... This paper proposes to optimise the control strategies for the Hoa Binh reservoir operation by applying a combination of simulation and optimisation models. The control strategies are set up in the MIKE 11 simulation model to guide the releases of the reservoir system according to the current storage level......, the hydro-meteorological conditions, and the time of the year. A heuristic global optimisation tool, the shuffled complex evolution (SCE) algorithm, is adopted for optimising the reservoir operation. The optimisation puts focus on the trade-off between flood control and hydropower generation for the Hoa...
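
    The simulation-optimisation coupling can be sketched as below, with a hypothetical stand-in for the MIKE 11 simulation and SciPy's differential evolution used as a generic global optimiser in place of the SCE algorithm; all parameter names and numbers are invented.

```python
# Schematic simulation-optimisation loop.  `simulate_reservoir` is a hypothetical toy
# stand-in for the hydrodynamic simulation; differential evolution replaces SCE here.
import numpy as np
from scipy.optimize import differential_evolution

def simulate_reservoir(rule_params):
    """Toy stand-in: returns (flood_risk, hydropower) for a release-rule parameterisation."""
    flood_ctrl_level, release_frac = rule_params
    flood_risk = 200.0 * np.exp((flood_ctrl_level - 117.0) / 3.0)  # fuller reservoir -> higher risk
    hydropower = release_frac * flood_ctrl_level                   # crude proxy for head * flow
    return flood_risk, hydropower

def objective(rule_params, w_flood=0.7):
    flood, power = simulate_reservoir(rule_params)
    return w_flood * flood - (1.0 - w_flood) * power               # flood control vs hydropower

res = differential_evolution(objective, bounds=[(100.0, 117.0), (0.1, 1.0)], seed=0)
print("optimised rule parameters:", res.x, "objective:", res.fun)
```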

  18. Water quality modelling and optimisation of wastewater treatment network using mixed integer programming

    Mahlathi, Christopher

    2016-10-01

    Instream water quality management encompasses field monitoring and utilisation of mathematical models. These models can be coupled with optimisation techniques to determine more efficient water quality management alternatives. Among these activities...

  19. Approach to chemical equilibrium in thermal models

    Boal, D.H.

    1984-01-01

    The experimentally measured (μ⁻, charged particle)/(μ⁻, n) and (p,n)/(p,p′) ratios for the emission of energetic nucleons are used to estimate the time evolution of a system of secondary nucleons produced in a direct interaction of a projectile or captured muon. The values of these ratios indicate that chemical equilibrium is not achieved among the secondary nucleons in noncomposite induced reactions, and this restricts the time scale for the emission of energetic nucleons to be about 0.7 × 10⁻²³ sec. It is shown that the reason why thermal equilibrium can be reached so rapidly for a particular nucleon species is that the sum of the particle spectra produced in multiple direct reactions looks surprisingly thermal. The rate equations used to estimate the reaction times for muon and nucleon induced reactions are then applied to heavy ion collisions, and it is shown that chemical equilibrium can be reached more rapidly, as one would expect

  20. Comparative evaluation of kinetic, equilibrium and semi-equilibrium models for biomass gasification

    Buragohain, Buljit [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Chakma, Sankar; Kumar, Peeush [Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Mahanta, Pinakeswar [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Moholkar, Vijayanand S. [Center for Energy, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India); Department of Chemical Engineering, Indian Institute of Technology Guwahati, Guwahati – 781 039, Assam (India)

    2013-07-01

    Modeling of biomass gasification has been an active area of research for the past two decades. In the published literature, three approaches have been adopted for the modeling of this process, viz. thermodynamic equilibrium, semi-equilibrium and kinetic. In this paper, we have attempted to present a comparative assessment of these three types of models for predicting the outcome of the gasification process in a circulating fluidized bed gasifier. Two model biomasses, viz. rice husk and wood particles, have been chosen for analysis, with air as the gasification medium. Although the trends in molar composition, net yield and LHV of the producer gas predicted by the three models are in concurrence, significant quantitative differences are seen in the results. Due to the rather slow kinetics of char gasification and tar oxidation, the carbon conversion achieved in a single pass of biomass through the gasifier, calculated using the kinetic model, is quite low, which adversely affects the yield and LHV of the producer gas. Although the equilibrium and semi-equilibrium models reveal relative insensitivity of producer gas characteristics towards temperature, the kinetic model shows a significant effect of temperature on the LHV of the gas at low air ratios. The kinetic model also reveals the volume of the gasifier to be an insignificant parameter, as the net yield and LHV of the gas resulting from 6 m and 10 m risers are the same. On the whole, the analysis presented in this paper indicates that thermodynamic models are useful tools for quantitative assessment of the gasification process, while kinetic models provide a physically more realistic picture.

  1. On the impact of optimisation models in maintenance decision making: the state of the art

    Dekker, Rommert; Scarf, Philip A.

    1998-01-01

    In this paper we discuss the state of the art in applications of maintenance optimisation models. After giving a short introduction to the area, we consider several ways in which models may be used to optimise maintenance, such as case studies, operational and strategic decision support systems, and give examples of each of them. Next we discuss several areas where the models have been applied successfully. These include civil structure and aeroplane maintenance. From a comparative point of view, we discuss future prospects

  2. Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor

    D'Auvergne, Edward J.; Gooley, Paul R.

    2008-01-01

    Finding the dynamics of an entire macromolecule is a complex problem as the model-free parameter values are intricately linked to the Brownian rotational diffusion of the molecule, mathematically through the autocorrelation function of the motion and statistically through model selection. The solution to this problem was formulated using set theory as an element of the universal set U-the union of all model-free spaces (d'Auvergne EJ and Gooley PR (2007) Mol BioSyst 3(7), 483-494). The current procedure commonly used to find the universal solution is to initially estimate the diffusion tensor parameters, to optimise the model-free parameters of numerous models, and then to choose the best model via model selection. The global model is then optimised and the procedure repeated until convergence. In this paper a new methodology is presented which takes a different approach to this diffusion seeded model-free paradigm. Rather than starting with the diffusion tensor this iterative protocol begins by optimising the model-free parameters in the absence of any global model parameters, selecting between all the model-free models, and finally optimising the diffusion tensor. The new model-free optimisation protocol will be validated using synthetic data from Schurr JM et al. (1994) J Magn Reson B 105(3), 211-224 and the relaxation data of the bacteriorhodopsin (1-36)BR fragment from Orekhov VY (1999) J Biomol NMR 14(4), 345-356. To demonstrate the importance of this new procedure the NMR relaxation data of the Olfactory Marker Protein (OMP) of Gitti R et al. (2005) Biochem 44(28), 9673-9679 is reanalysed. The result is that the dynamics for certain secondary structural elements is very different from those originally reported

  3. Parameter Estimation for a Computable General Equilibrium Model

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  4. Parameter Estimation for a Computable General Equilibrium Model

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
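
    The cross-entropy estimation idea can be illustrated on a toy problem: choose parameters as close as possible to a prior, in the Kullback-Leibler sense, subject to adding-up and data-consistency constraints; the shares, prices and datum below are invented and far simpler than a full CGE system.

```python
# Minimal illustration of cross-entropy parameter estimation under equilibrium-style
# constraints (toy example, not the authors' estimator).
import numpy as np
from scipy.optimize import minimize

prior = np.array([0.4, 0.3, 0.2, 0.1])      # prior expenditure shares (hypothetical)
prices = np.array([1.0, 1.2, 0.8, 1.5])
observed_cost_index = 1.08                   # datum the estimated shares must reproduce

def cross_entropy(p):
    return np.sum(p * np.log(p / prior))     # Kullback-Leibler divergence from the prior

cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},                     # shares add up to one
    {"type": "eq", "fun": lambda p: p @ prices - observed_cost_index},  # reproduce the datum
]
res = minimize(cross_entropy, prior, bounds=[(1e-9, 1.0)] * 4, constraints=cons)
print(res.x.round(3))
```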

  5. The DART general equilibrium model: A technical description

    Springer, Katrin

    1998-01-01

    This paper provides a technical description of the Dynamic Applied Regional Trade (DART) General Equilibrium Model. The DART model is a recursive dynamic, multi-region, multi-sector computable general equilibrium model. All regions are fully specified and linked by bilateral trade flows. The DART model can be used to project economic activities, energy use and trade flows for each of the specified regions to simulate various trade policy as well as environmental policy scenarios, and to analy...

  6. A strictly hyperbolic equilibrium phase transition model

    Allaire, G; Faccanoni, G; Kokh, S.

    2007-01-01

    This Note is concerned with the strict hyperbolicity of the compressible Euler equations equipped with an equation of state that describes the thermodynamical equilibrium between the liquid phase and the vapor phase of a fluid. The proof is valid for a very wide class of fluids. The argument relies only on smoothness assumptions and on the classical thermodynamical stability assumption, which requires a negative definite Hessian matrix for each phase entropy as a function of the specific volume and internal energy. (authors)

  7. An optimised portfolio management model, incorporating best practices

    2015-01-01

    M.Ing. (Engineering Management) Driving sustainability, optimising return on investments and cultivating a competitive market advantage are imperative for organisational success and growth. In order to achieve the business objectives and value proposition, effective management strategies must be efficiently implemented, monitored and controlled. Failure to do so ultimately results in financial loss due to increased capital and operational expenditure, schedule slippages, substandard deliv...

  8. A statistical mechanical model for equilibrium ionization

    Macris, N.; Martin, P.A.; Pule, J.

    1990-01-01

    A quantum electron interacts with a classical gas of hard spheres and is in thermal equilibrium with it. The interaction is attractive and the electron can form a bound state with the classical particles. It is rigorously shown that in a well-defined low-density and low-temperature limit, the ionization probability for the electron tends to the value predicted by the Saha formula for thermal ionization. In this regime, the electron is found to be in a statistical mixture of a bound and a free state. (orig.)

  9. Modelling and genetic algorithm based optimisation of inverse supply chain

    Bányai, T.

    2009-04-01

    (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding the unnecessary environmental risk and landscape use caused by an unnecessarily large supply chain in the collection systems of recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part of the work a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economical, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes. The objective function is the minimisation of the total cost taking into consideration the constraints. A lot of research work has discussed the design of supply chains [8], but most of it concentrates on linear cost functions. In the case of this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a
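
    A toy genetic algorithm for a facility-selection version of this design problem is sketched below; the cost terms, problem size and GA settings are invented and much simpler than the paper's model.

```python
# Toy genetic algorithm for choosing which collection/treatment sites to open in an
# inverse supply chain; all cost terms below are invented, non-linear placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 12
fixed_cost = rng.uniform(50, 150, n_sites)       # infrastructure cost if a site is opened
handling   = rng.uniform(5, 20, n_sites)         # per-site material handling cost

def total_cost(genome):                          # genome: 0/1 vector of opened sites
    opened = genome.astype(bool)
    if not opened.any():
        return 1e9                               # infeasible: nothing is collected
    infra = fixed_cost[opened].sum()
    transport = 300.0 / opened.sum()             # fewer sites -> longer routes (non-linear)
    risk = 40.0 * np.exp(-opened.sum() / 4.0)    # environmental-risk proxy
    return infra + handling[opened].sum() + transport + risk

def ga(pop_size=40, generations=200, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, n_sites))
    for _ in range(generations):
        fitness = np.array([total_cost(g) for g in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]          # truncation selection
        cut = rng.integers(1, n_sites, pop_size // 2)
        children = np.array([np.concatenate((parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cut)])            # one-point crossover
        mutate = rng.random(children.shape) < p_mut
        children[mutate] ^= 1                                        # bit-flip mutation
        pop = np.vstack((parents, children))
    best = pop[np.argmin([total_cost(g) for g in pop])]
    return best, total_cost(best)

print(ga())
```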

  10. Economic and Mathematical Modelling of Optimisation of Transaction Expenses of Engineering Enterprises

    Makaliuk Iryna V.

    2014-01-01

    The article identifies the stages of the process of optimising transaction expenses. It develops an economic and mathematical model for optimising the transaction expenses of engineering enterprises by the criterion of maximising income from the realisation of products, with a system of restrictions requiring the income growth rate to exceed the expenses growth rate. The article proposes using expense types, as recorded in accounting accounts, as indicators of transaction expenses. In the result o...

  11. Dividend taxation in an infinite-horizon general equilibrium model

    Pham, Ngoc-Sang

    2017-01-01

    We consider an infinite-horizon general equilibrium model with heterogeneous agents and financial market imperfections. We investigate the role of dividend taxation on economic growth and asset price. The optimal dividend taxation is also studied.

  12. Modeling equilibrium adsorption of organic micropollutants onto activated carbon

    De Ridder, David J.; Villacorte, Loreen O.; Verliefde, Arne R. D.; Verberk, Jasper Q J C; Heijman, Bas G J; Amy, Gary L.; Van Dijk, Johannis C.

    2010-01-01

    to these properties occur in parallel, and their respective dominance depends on the solute properties as well as carbon characteristics. In this paper, a model based on multivariate linear regression is described that was developed to predict equilibrium carbon

  13. Termination of Dynamic Contracts in an Equilibrium Labor Market Model

    Wang, Cheng

    2005-01-01

    I construct an equilibrium model of the labor market where workers and firms enter into dynamic contracts that can potentially last forever, but are subject to optimal terminations. Upon a termination, the firm hires a new worker, and the worker who is terminated receives a termination compensation from the firm and is then free to go back to the labor market to seek new employment opportunities and enter into new dynamic contracts. The model permits only two types of equilibrium terminations ...

  14. Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.

    Russell, Joan M.

    1988-01-01

    Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)

  15. Learning of Chemical Equilibrium through Modelling-Based Teaching

    Maia, Poliana Flavia; Justi, Rosaria

    2009-01-01

    This paper presents and discusses students' learning process of chemical equilibrium from a modelling-based approach developed from the use of the "Model of Modelling" diagram. The investigation was conducted in a regular classroom (students 14-15 years old) and aimed at discussing how modelling-based teaching can contribute to students…

  16. Simulation optimisation

    Anon

    2010-01-01

    Over the past decade there has been a significant advance in flotation circuit optimisation through performance benchmarking using metallurgical modelling and steady-state computer simulation. This benchmarking includes traditional measures, such as grade and recovery, as well as new flotation measures, such as ore floatability, bubble surface area flux and froth recovery. To further this optimisation, Outotec has released its HSC Chemistry software with simulation modules. The flotation model developed by the AMIRA P9 Project, of which Outotec is a sponsor, is regarded by industry as the most suitable flotation model to use for circuit optimisation. This model incorporates ore floatability with flotation cell pulp and froth parameters, residence time, entrainment and water recovery. Outotec's HSC Sim enables mineral processes to be simulated at different levels, from comminution circuits with sizes and no composition, through flotation processes with minerals by size by floatability components, to full processes with true particles with MLA data.

  17. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    Thøgersen, Emil; Tranberg, Bo; Herp, Jürgen

    2017-01-01

    deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple...... wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using...... the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain...

  18. Geochemical modelling of groundwater evolution using chemical equilibrium codes

    Pitkaenen, P.; Pirhonen, V.

    1991-01-01

    Geochemical equilibrium codes are a modern tool for studying the interaction between groundwater and solid phases. The most commonly used programs and application subjects are briefly presented in this article. The main emphasis is on the approach of using calculated results to evaluate groundwater evolution in a hydrogeological system. At present, kinetic as well as hydrologic constraints along a flow path are also taken into consideration in geochemical equilibrium modelling

  19. Phylogenies support out-of-equilibrium models of biodiversity.

    Manceau, Marc; Lambert, Amaury; Morlon, Hélène

    2015-04-01

    There is a long tradition in ecology of studying models of biodiversity at equilibrium. These models, including the influential Neutral Theory of Biodiversity, have been successful at predicting major macroecological patterns, such as species abundance distributions. But they have failed to predict macroevolutionary patterns, such as those captured in phylogenetic trees. Here, we develop a model of biodiversity in which all individuals have identical demographic rates, metacommunity size is allowed to vary stochastically according to population dynamics, and speciation arises naturally from the accumulation of point mutations. We show that this model generates phylogenies matching those observed in nature if the metacommunity is out of equilibrium. We develop a likelihood inference framework that allows fitting our model to empirical phylogenies, and apply this framework to various mammalian families. Our results corroborate the hypothesis that biodiversity dynamics are out of equilibrium. © 2015 John Wiley & Sons Ltd/CNRS.

  20. The rational expectations equilibrium inventory model theory and applications

    1989-01-01

    This volume consists of six essays that develop and/or apply "rational expectations equilibrium inventory models" to study the time series behavior of production, sales, prices, and inventories at the industry level. By "rational expectations equilibrium inventory model" I mean the extension of the inventory model of Holt, Modigliani, Muth, and Simon (1960) to account for: (i) discounting, (ii) infinite horizon planning, (iii) observed and unobserved by the "econometrician" stochastic shocks in the production, factor adjustment, storage, and backorders management processes of firms, as well as in the demand they face for their products; and (iv) rational expectations. As is well known according to the Holt et al. model firms hold inventories in order to: (a) smooth production, (b) smooth production changes, and (c) avoid stockouts. Following the work of Zabel (1972), Maccini (1976), Reagan (1982), and Reagan and Weitzman (1982), Blinder (1982) laid the foundations of the rational expectations equilibrium inve...

  1. Non-equilibrium modelling of distillation

    Wesselingh, J.A

    This is a lecture on the way that we engineers model distillation. How we have done such modelling, how we would like to do it, and how far we have come at this moment. The ideas that I will be bringing forward are not my own. I owe them mostly to R. Krishna, R. Taylor, H. Kooijman and A. Gorak.

  2. Plasma equilibrium response modelling and validation on JT-60U

    Lister, J.B.; Sharma, A.; Limebeer, D.J.N.; Wainwright, J.P.; Nakamura, Y.; Yoshino, R.

    2002-01-01

    A systematic procedure to identify the plasma equilibrium response to the poloidal field coil voltages has been applied to the JT-60U tokamak. The required response was predicted with a high accuracy by a state-space model derived from first principles. The ab initio derivation of linearized plasma equilibrium response models is re-examined using an approach standard in analytical mechanics. A symmetric formulation is naturally obtained, removing a previous weakness in such models. RZIP, a rigid current distribution model, is re-derived using this approach and is compared with the new experimental plasma equilibrium response data obtained from Ohmic and neutral beam injection discharges in the JT-60U tokamak. In order to remove any bias from the comparison between modelled and measured plasma responses, the electromagnetic response model without plasma was first carefully tuned against experimental data, using a parametric approach, for which different cost functions for quantifying model agreement were explored. This approach additionally provides new indications of the accuracy to which various plasma parameters are known, and to the ordering of physical effects. Having taken these precautions when tuning the plasmaless model, an empirical estimate of the plasma self-inductance, the plasma resistance and its radial derivative could be established and compared with initial assumptions. Off-line tuning of the JT-60U controller is presented as an example of the improvements which might be obtained by using such a model of the plasma equilibrium response. (author)

  3. Fitting Equilibrium Search Models to Labour Market Data

    Bowlus, Audra J.; Kiefer, Nicholas M.; Neumann, George R.

    1996-01-01

    Specification and estimation of a Burdett-Mortensen type equilibrium search model is considered. The estimation is nonstandard. An estimation strategy asymptotically equivalent to maximum likelihood is proposed and applied. The results indicate that specifications with a small number of productivity types fit the data well compared to the homogeneous model.

  4. Numerical equilibrium analysis for structured consumer resource models

    de Roos, A.M.; Diekmann, O.; Getto, P.; Kirkilionis, M.A.

    2010-01-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries

  5. Numerical equilibrium analysis for structured consumer resource models

    de Roos, A.M.; Diekmann, O.; Getto, P.; Kirkilionis, M.A.

    2010-01-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries

  6. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  7. Simple models of equilibrium and nonequilibrium phenomena

    Lebowitz, J.L.

    1987-01-01

    This volume consists of two chapters of particular interest to researchers in the field of statistical mechanics. The first chapter is based on the premise that the best way to understand the qualitative properties that characterize many-body (i.e. macroscopic) systems is to study 'a number of the more significant model systems which, at least in principle are susceptible of complete analysis'. The second chapter deals exclusively with nonequilibrium phenomena. It reviews the theory of fluctuations in open systems to which they have made important contributions. Simple but interesting model examples are emphasised

  8. Chemical equilibrium models of interstellar gas clouds

    Freeman, A.

    1982-10-01

    This thesis contains work which helps towards our understanding of the chemical processes and astrophysical conditions in interstellar clouds, across the whole range of cloud types. The object of the exercise is to construct a mathematical model representing a large system of two-body chemical reactions in order to deduce astrophysical parameters and predict molecular abundances and chemical pathways. Comparison with observations shows that this type of model is valid but also indicates that our knowledge of some chemical reactions is incomplete. (author)

  9. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    Thøgersen, E; Tranberg, B; Greiner, M; Herp, J

    2017-01-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms. (paper)
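
    As a rough illustration of the idea described above, the sketch below (Python, with invented rotor diameter, thrust coefficient, wake-decay constants and deflection spread, none of them taken from the Nysted study) compares a broad time-averaged Jensen deficit with the ensemble average of a narrow wake whose centreline is randomly deflected; in the paper the narrow-wake parameters are calibrated so that the two agree, which this toy version does not attempt.

      import numpy as np

      def jensen_deficit(x, r, D=80.0, k=0.05, ct=0.8):
          """Velocity deficit of a broad (time-averaged) Jensen wake at downstream
          distance x and lateral offset r from the wake axis (all lengths in m)."""
          if x <= 0:
              return 0.0
          rw = D / 2 + k * x                       # linearly expanding wake radius
          if abs(r) > rw:
              return 0.0
          a = 0.5 * (1.0 - np.sqrt(1.0 - ct))      # axial induction factor
          return 2.0 * a * (D / (D + 2.0 * k * x)) ** 2

      def narrow_deficit(x, r, k_narrow=0.01, **kw):
          """Same expression with a much smaller expansion rate: the narrow wake."""
          return jensen_deficit(x, r, k=k_narrow, **kw)

      def meandering_deficit(x, r, sigma_deg=4.0, n=5000, seed=0):
          """Statistical meandering wake: the narrow wake is deflected by a random
          angle and the deficit is averaged over the ensemble of deflections."""
          rng = np.random.default_rng(seed)
          theta = np.deg2rad(rng.normal(0.0, sigma_deg, n))
          offsets = x * np.tan(theta)              # lateral displacement of the wake centre
          return float(np.mean([narrow_deficit(x, r - d) for d in offsets]))

      x = 560.0                                    # seven rotor diameters downstream
      print("broad Jensen deficit  :", round(jensen_deficit(x, 0.0), 3))
      print("ensemble-averaged SMWM:", round(meandering_deficit(x, 0.0), 3))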

  10. Statistical meandering wake model and its application to yaw-angle optimisation of wind farms

    Thøgersen, E.; Tranberg, B.; Herp, J.; Greiner, M.

    2017-05-01

    The wake produced by a wind turbine is dynamically meandering and of rather narrow nature. Only when looking at large time averages, the wake appears to be static and rather broad, and is then well described by simple engineering models like the Jensen wake model (JWM). We generalise the latter deterministic models to a statistical meandering wake model (SMWM), where a random directional deflection is assigned to a narrow wake in such a way that on average it resembles a broad Jensen wake. In a second step, the model is further generalised to wind-farm level, where the deflections of the multiple wakes are treated as independently and identically distributed random variables. When carefully calibrated to the Nysted wind farm, the ensemble average of the statistical model produces the same wind-direction dependence of the power efficiency as obtained from the standard Jensen model. Upon using the JWM to perform a yaw-angle optimisation of wind-farm power output, we find an optimisation gain of 6.7% for the Nysted wind farm when compared to zero yaw angles and averaged over all wind directions. When applying the obtained JWM-based optimised yaw angles to the SMWM, the ensemble-averaged gain is calculated to be 7.5%. This outcome indicates the possible operational robustness of an optimised yaw control for real-life wind farms.

  11. A knowledge representation model for the optimisation of electricity generation mixes

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative based energy policy goals. ► Uses logic inference to formulate equations for linear optimisation. ► Proposes electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lies in their solid theoretical foundations built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires the consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated with engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply
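
    As a hedged illustration of the final step described above, once policy goals have been translated into numerical constraints, an electricity generation mix can be proposed by a small linear programme; the technologies, costs, emission intensities and caps below are invented for the sketch and are not taken from the prototype model.

      import numpy as np
      from scipy.optimize import linprog

      # Illustrative technologies with cost per MWh and tCO2 per MWh; numbers are made up.
      techs = ["coal", "gas", "hydro", "wind"]
      cost  = np.array([40.0, 55.0, 30.0, 70.0])     # objective: minimise generation cost
      co2   = np.array([0.95, 0.45, 0.0, 0.0])       # emission intensities
      cap   = np.array([60e3, 50e3, 20e3, 30e3])     # per-technology capacity limits (MWh)

      demand  = 100e3                                # policy goal 1: meet demand
      co2_cap = 35e3                                 # policy goal 2: CO2 ceiling (t)

      # linprog minimises c.x subject to A_ub.x <= b_ub and A_eq.x == b_eq
      res = linprog(c=cost,
                    A_ub=[co2], b_ub=[co2_cap],       # emissions stay below the cap
                    A_eq=[np.ones(4)], b_eq=[demand], # generation meets demand exactly
                    bounds=[(0, c) for c in cap])

      for name, x in zip(techs, res.x):
          print(f"{name:6s} {x:10.0f} MWh")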

  12. Electricity market equilibrium model with resource constraint and transmission congestion

    Gao, F. [ABB, Inc., Santa Clara, CA 95050 (United States); Sheble, G.B. [Portland State University, Portland, OR 97207 (United States)

    2010-01-15

    An electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants with the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models and many efforts have been devoted to it before. However, most past research focused on a single-period, single-market model and did not address the fact that GENCOs hold a portfolio of assets in both electricity and fuel markets. This paper first identifies a proper SFE model, which can be applied to a multiple-period situation. Then the paper develops the equilibrium condition using discrete time optimal control considering fuel resource constraints. Finally, the paper discusses the issues of multiple equilibria caused by the transmission network and shows that a transmission-constrained equilibrium may exist, although the shadow price may not be zero. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. (author)

  13. Electricity market equilibrium model with resource constraint and transmission congestion

    Gao, F.; Sheble, G.B.

    2010-01-01

    An electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants with the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models and many efforts have been devoted to it before. However, most past research focused on a single-period, single-market model and did not address the fact that GENCOs hold a portfolio of assets in both electricity and fuel markets. This paper first identifies a proper SFE model, which can be applied to a multiple-period situation. Then the paper develops the equilibrium condition using discrete time optimal control considering fuel resource constraints. Finally, the paper discusses the issues of multiple equilibria caused by the transmission network and shows that a transmission-constrained equilibrium may exist, although the shadow price may not be zero. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. (author)

  14. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)
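
    A minimal sketch of the maximum likelihood idea reviewed in the report, using an invented transport-like model and synthetic observations rather than any of the actual groundwater, transport or biosphere models; the likelihood assumes independent Gaussian observation errors and is maximised with a general-purpose nonlinear optimiser.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)

      def model(x, theta):
          """Toy transport model: concentration c(x) = c0 * exp(-lam * x)."""
          c0, lam = theta
          return c0 * np.exp(-lam * x)

      # Synthetic "field" observations from known parameters plus noise
      x_obs   = np.linspace(0.0, 100.0, 15)
      theta_t = (5.0, 0.03)
      sigma   = 0.2
      y_obs   = model(x_obs, theta_t) + rng.normal(0.0, sigma, x_obs.size)

      def neg_log_likelihood(theta):
          """Gaussian likelihood with known observation error sigma."""
          resid = y_obs - model(x_obs, theta)
          return 0.5 * np.sum((resid / sigma) ** 2)

      fit = minimize(neg_log_likelihood, x0=np.array([1.0, 0.1]), method="Nelder-Mead")
      print("calibrated parameters:", fit.x)   # should be close to (5.0, 0.03)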

  15. Higher adsorption capacity of Spirulina platensis alga for Cr(VI) ions removal: parameter optimisation, equilibrium, kinetic and thermodynamic predictions.

    Gunasundari, Elumalai; Senthil Kumar, Ponnusamy

    2017-04-01

    This study discusses the biosorption of Cr(VI) ions from aqueous solution using ultrasonic-assisted Spirulina platensis (UASP). The prepared UASP biosorbent was characterised by Fourier transform infrared spectroscopy, X-ray diffraction, Brunauer-Emmett-Teller, scanning electron microscopy, energy dispersive X-ray and thermogravimetric analyses. The optimum condition for the maximum removal of Cr(VI) ions for an initial concentration of 50 mg/l by UASP was measured as: adsorbent dose of 1 g/l, pH of 3.0, contact time of 30 min and temperature of 303 K. Adsorption isotherm, kinetic and thermodynamic parameters were calculated. The Freundlich model provided the best fit to the equilibrium data for the removal of Cr(VI) ions by UASP. The adsorption kinetics of Cr(VI) ions onto UASP showed that the pseudo-first-order model was well in line with the experimental data. In the thermodynamic study, parameters such as the Gibbs free energy, enthalpy and entropy changes were evaluated. These results show that the adsorption of Cr(VI) ions onto the UASP was exothermic and spontaneous in nature. Desorption of the biosorbent was done using different desorbing agents, of which NaOH gave the best result. The prepared material showed higher affinity for the removal of Cr(VI) ions and may be an alternative to existing commercial adsorbents.
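
    The isotherm and kinetic fits mentioned above can be reproduced in outline as follows; the concentration and uptake values are invented for illustration and are not the UASP measurements, and the pseudo-first-order model is written in its integrated form qt = qe(1 - exp(-k1 t)).

      import numpy as np
      from scipy.optimize import curve_fit

      # --- Freundlich isotherm: qe = Kf * Ce**(1/n) ---------------------------
      def freundlich(ce, kf, n):
          return kf * ce ** (1.0 / n)

      ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])    # equilibrium conc. (mg/l), illustrative
      qe = np.array([8.1, 12.4, 16.9, 23.5, 31.8])   # uptake (mg/g), illustrative

      (kf, n), _ = curve_fit(freundlich, ce, qe, p0=[5.0, 2.0])
      print(f"Freundlich: Kf = {kf:.2f}, n = {n:.2f}")

      # --- Pseudo-first-order kinetics: qt = qe_k * (1 - exp(-k1 * t)) --------
      def pfo(t, qe_k, k1):
          return qe_k * (1.0 - np.exp(-k1 * t))

      t  = np.array([5.0, 10.0, 15.0, 20.0, 30.0])   # contact time (min), illustrative
      qt = np.array([10.5, 17.2, 21.3, 23.6, 25.8])

      (qe_k, k1), _ = curve_fit(pfo, t, qt, p0=[25.0, 0.1])
      print(f"Pseudo-first-order: qe = {qe_k:.2f} mg/g, k1 = {k1:.3f} 1/min")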

  16. African wildlife and people : finding solutions where equilibrium models fail

    Poshiwa, X.

    2013-01-01

    Grazing systems, covering about half of the terrestrial surface, tend to be either equilibrial or non-equilibrial in nature, largely depending on the environmental stochasticity. The equilibrium model perspective stresses the importance of biotic feedbacks between herbivores and their resource,

  17. An applied general equilibrium model for Dutch agribusiness policy analysis

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of

  18. Non-equilibrium Quasi-Chemical Nucleation Model

    Gorbachev, Yuriy E.

    2018-04-01

    The quasi-chemical model, which is widely used to describe nucleation, is revised on the basis of recent results on non-equilibrium effects in reacting gas mixtures (Kolesnichenko and Gorbachev in Appl Math Model 34:3778-3790, 2010; Shock Waves 23:635-648, 2013; Shock Waves 27:333-374, 2017). Non-equilibrium effects in chemical reactions are caused by the chemical reactions themselves, and therefore these contributions should be taken into account in the corresponding expressions for the reaction rates. Corrections to the quasi-equilibrium reaction rates are of two types: (a) spatially homogeneous (caused by physical-chemical processes) and (b) spatially inhomogeneous (caused by gas expansion/compression processes and proportional to the velocity divergence). Both of these processes play an important role during nucleation and are included in the proposed model. The method developed for solving the generalized Boltzmann equation for chemically reactive gases is applied to the set of equations of the revised quasi-chemical model. It is shown that non-equilibrium processes lead to an essential deviation of the quasi-stationary distribution, and therefore of the nucleation rate, from its traditional form.

  19. Finite element model updating in structural dynamics using design sensitivity and optimisation

    Calvi, Adriano

    1998-01-01

    Model updating is an important issue in engineering. In fact a well-correlated model provides for accurate evaluation of the structure loads and responses. The main objectives of the study were to exploit available optimisation programs to create an error localisation and updating procedure of finite element models that minimises the "error" between experimental and analytical modal data, addressing in particular the updating of large scale finite element models with se...
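
    A minimal sketch of frequency-based model updating on a two-degree-of-freedom spring-mass system standing in for a large scale finite element model: the spring stiffnesses are adjusted until the analytical natural frequencies match "measured" ones. The matrices, target frequencies and starting values are invented for the example.

      import numpy as np
      from scipy.optimize import minimize

      # "Measured" natural frequencies (Hz) of a 2-DOF structure, illustrative values
      f_meas = np.array([4.2, 11.7])
      M = np.diag([2.0, 1.0])                  # known mass matrix (kg)

      def frequencies(k):
          """Natural frequencies of the 2-DOF chain with spring stiffnesses k1, k2."""
          k1, k2 = k
          K = np.array([[k1 + k2, -k2],
                        [-k2,      k2]])
          lam = np.linalg.eigvals(np.linalg.solve(M, K))   # omega^2
          return np.sort(np.sqrt(np.abs(lam))) / (2 * np.pi)

      def error(k):
          """Relative error between analytical and measured frequencies."""
          return np.sum(((frequencies(k) - f_meas) / f_meas) ** 2)

      start = np.array([5000.0, 5000.0])        # initial (uncorrelated) model
      fit = minimize(error, start, method="Nelder-Mead")
      print("updated stiffnesses:", fit.x)
      print("updated frequencies:", frequencies(fit.x), "vs measured", f_meas)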

  20. Choking flow modeling with mechanical and thermal non-equilibrium

    Yoon, H.J.; Ishii, M.; Revankar, S.T. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2006-01-15

    A mechanistic model that accounts for mechanical and thermal non-equilibrium is described for two-phase choking flow. The choking mass flux is obtained from the momentum equation together with the definition of choking. The key parameter for mechanical non-equilibrium is the slip ratio, and the parameters on which it depends are identified. In this research, the slip ratio as defined in the drift flux model is used to identify the parameters that influence it. Because the slip ratio in the drift flux model is related to the distribution parameter and the drift velocity, appropriate correlations depending on the flow regime are introduced in this study. For thermal non-equilibrium, the model is developed using the bubble conduction time and a Bernoulli choking model. When the water is highly subcooled relative to saturation at the inlet pressure, the Bernoulli choking model based on the pressure undershoot is used because no bubbles are generated in the test section. When phase change occurs inside the test section, the two-phase choking model with a relaxation time calculates the choking mass flux. Comparison of the model predictions with experimental data shows good agreement, and the developed model performs well in both low and high pressure ranges. (author)

  1. A dissipative model of plasma equilibrium in toroidal systems

    Wobig, H.

    1985-10-01

    In order to describe a steady-state plasma equilibrium in tokamaks, stellarators or other non-axisymmetric configurations, the model of ideal MHD with isotropic plasma pressure is widely used. The ideal MHD model of a toroidal plasma equilibrium requires the existence of closed magnetic surfaces. Several numerical codes have been developed in the past to solve the three-dimensional equilibrium problem, but so far no existence theorem for a solution has been proved. Another difficulty is the formation of magnetic islands and field line ergodisation, which can only be described in terms of ideal MHD if the plasma pressure is constant in the ergodic region. In order to describe the formation of magnetic islands and the ergodisation of surfaces properly, additional dissipative terms have to be incorporated to allow decoupling of the plasma and magnetic field. In a collisional plasma, viscosity and inelastic collisions introduce such dissipative processes. In the model used here, a friction term proportional to the plasma velocity vector v is included. Such a term originates from charge exchange interaction of the plasma with a neutral background. With these modifications, the equilibrium problem reduces to a set of quasilinear elliptic equations for the pressure, the electric potential and the magnetic field. The paper deals with an existence theorem based on the Fixed-Point method of Schauder. It can be shown that a self-consistent and unique equilibrium exists if the friction term is large and the plasma pressure is sufficiently low. The essential role of the dissipative terms is to remove the singularities of the ideal MHD model on rational magnetic surfaces. The problem has a strong similarity to Benard cell convection, and consequently similar behaviour such as bifurcation and exchange of stability is expected. (orig./GG)

  2. Knowledge Management through the Equilibrium Pattern Model for Learning

    Sarirete, Akila; Noble, Elizabeth; Chikh, Azeddine

    Contemporary students are characterized by very applied learning styles and methods of acquiring knowledge. This behavior is consistent with constructivist models in which students are co-partners in the learning process. In the present work the authors developed a new model of learning based on constructivist theory coupled with Piaget's theory of cognitive development. The model considers learning to progress through several stages, and the move from one stage to another requires a challenge for the learner. Each time a new concept is introduced it creates a disequilibrium that must be worked through before the learner returns to equilibrium. This process of "disequilibrium/equilibrium" has been analyzed and validated using a course in computer networking taught as part of the Cisco Networking Academy Program at Effat College, a women's college in Saudi Arabia. The model provides a theoretical foundation for teaching, especially in a complex knowledge domain such as engineering, and can be used in a knowledge economy.

  3. Numerical solution of dynamic equilibrium models under Poisson uncertainty

    Posch, Olaf; Trimborn, Timo

    2013-01-01

    We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations of the retar...... solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households....

  4. Energy efficiency optimisation for distillation column using artificial neural network models

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computation effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short, making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying product quality constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in a reduction of utility consumption of 8.2% and 28.2% respectively. Application to multi-component separation columns also demonstrates the effectiveness of the proposed method with a 32.4% improvement in the exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural network offers improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
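
    A hedged sketch of the bootstrap aggregated neural network surrogate idea, with a simple analytic function standing in for the Aspen HYSYS data and scikit-learn's MLPRegressor as the network; variable names, ranges and the grid-search optimisation are illustrative assumptions, not the paper's actual configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      def simulator(x):
          """Stand-in for simulator data: 'exergy efficiency' vs two operating variables."""
          r, q = x[:, 0], x[:, 1]
          return 0.6 - 0.3 * (r - 1.2) ** 2 - 0.2 * (q - 0.5) ** 2 + 0.01 * rng.normal(size=r.size)

      X = rng.uniform([0.5, 0.0], [2.5, 1.0], size=(200, 2))   # process operation data
      y = simulator(X)

      # Bootstrap aggregation: train each network on a resampled data set
      nets = []
      for seed in range(10):
          idx = rng.integers(0, len(X), len(X))                # bootstrap resample
          net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=seed)
          nets.append(net.fit(X[idx], y[idx]))

      def predict(Xq):
          """Ensemble mean; the spread across nets could serve as a reliability measure."""
          return np.mean([n.predict(Xq) for n in nets], axis=0)

      # Model-based optimisation: crude grid search over the operating window
      grid = np.array([[r, q] for r in np.linspace(0.5, 2.5, 41)
                               for q in np.linspace(0.0, 1.0, 21)])
      best = grid[np.argmax(predict(grid))]
      print("operating point maximising predicted efficiency:", best)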

  5. A national optimisation model for energy wood streams; Energiapuuvirtojen valtakunnallinen optimointimalli

    Iikkanen, P.; Keskinen, S.; Korpilahti, A.; Raesaenen, T.; Sirkiae, A.

    2011-07-01

    In 2010 a total of 12.5 terawatt hours of forest energy was used in Finland's heat and power plants. According to studies by Metsaeteho and Poeyry, use of energy wood will nearly double to 21.6 terawatt hours by 2020. There are also plans to use energy wood as a raw material for biofuel plants. The techno-ecological supply potential of energy wood in 2020 is estimated at 42.9 terawatt hours. Energy wood has been transported almost entirely by road. The situation is changing, however, because growing demand for energy wood will expand raw wood procurement areas and lengthen transport distances. A cost-effective transport system therefore also requires the use of rail and waterway transport. In Finland, however, there is almost a complete absence of the terminals required for rail and waterway transport, where energy wood is chipped, temporarily stored and loaded onto railway wagons and vessels for further transport. A national optimisation model for energy wood has been developed to serve transport system planning in particular. The linear optimisation model optimises, on a national level, goods streams between supply points and usage points based on forest energy procurement costs. The model simultaneously covers deliveries of forest chips, stumps and small-sized thinning wood. The procurement costs used in the optimisation include the costs of the energy wood's roadside price, chipping, transport and terminal handling. The transport system described in the optimisation model consists of wood supply points (2007 municipality precision), wood usage points, railway terminals and the connections between them along the main road and rail network. Elements required for the examination of waterway transport can also be easily added to the model. The optimisation model can be used to examine, for example, the effects of changes in energy wood demand and supply as well as transport costs on energy wood goods streams, the relative use of different
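
    The core of such a model can be sketched as a transportation linear programme between supply and usage points; the three supply municipalities, two plants and unit costs below are invented, and terminals, transport modes and the other cost components of the real model are left out.

      import numpy as np
      from scipy.optimize import linprog

      # Toy network: 3 supply municipalities, 2 plants; unit costs cover roadside price,
      # chipping, transport and terminal handling (EUR/MWh, illustrative figures).
      supply = np.array([400.0, 300.0, 500.0])      # available energy wood (GWh)
      demand = np.array([600.0, 450.0])             # plant requirements (GWh)
      cost   = np.array([[12.0, 19.0],              # cost[i, j]: supply i -> plant j
                         [15.0, 14.0],
                         [20.0, 11.0]])

      n_s, n_d = cost.shape
      c = cost.ravel()                              # decision variable x[i, j], row-major

      # Supply constraints: sum_j x[i, j] <= supply[i]
      A_ub = np.zeros((n_s, n_s * n_d))
      for i in range(n_s):
          A_ub[i, i * n_d:(i + 1) * n_d] = 1.0

      # Demand constraints: sum_i x[i, j] == demand[j]
      A_eq = np.zeros((n_d, n_s * n_d))
      for j in range(n_d):
          A_eq[j, j::n_d] = 1.0

      res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                    bounds=[(0, None)] * (n_s * n_d))
      print("minimum procurement cost:", res.fun)
      print("optimal flows (GWh):\n", res.x.reshape(n_s, n_d))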

  6. Gaussian random bridges and a geometric model for information equilibrium

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.

  7. Parallel unstructured mesh optimisation for 3D radiation transport and fluids modelling

    Gorman, G.J.; Pain, Ch. C.; Oliveira, C.R.E. de; Umpleby, A.P.; Goddard, A.J.H.

    2003-01-01

    In this paper we describe the theory and application of a parallel mesh optimisation procedure to obtain self-adapting finite element solutions on unstructured tetrahedral grids. The optimisation procedure adapts the tetrahedral mesh to the solution of a radiation transport or fluid flow problem without sacrificing the integrity of the boundary (geometry), or internal boundaries (regions) of the domain. The objective is to obtain a mesh which has both a uniform interpolation error in any direction and the element shapes are of good quality. This is accomplished with use of a non-Euclidean (anisotropic) metric which is related to the Hessian of the solution field. Appropriate scaling of the metric enables the resolution of multi-scale phenomena as encountered in transient incompressible fluids and multigroup transport calculations. The resulting metric is used to calculate element size and shape quality. The mesh optimisation method is based on a series of mesh connectivity and node position searches of the landscape defining mesh quality which is gauged by a functional. The mesh modification thus fits the solution field(s) in an optimal manner. The parallel mesh optimisation/adaptivity procedure presented in this paper is of general applicability. We illustrate this by applying it to a transient CFD (computational fluid dynamics) problem. Incompressible flow past a cylinder at moderate Reynolds numbers is modelled to demonstrate that the mesh can follow transient flow features. (authors)

  8. Non-Equilibrium Turbulence and Two-Equation Modeling

    Rubinstein, Robert

    2011-01-01

    Two-equation turbulence models are analyzed from the perspective of spectral closure theories. Kolmogorov theory provides useful information for models, but it is limited to equilibrium conditions in which the energy spectrum has relaxed to a steady state consistent with the forcing at large scales; it does not describe transient evolution between such states. Transient evolution is necessarily through nonequilibrium states, which can only be found from a theory of turbulence evolution, such as one provided by a spectral closure. When the departure from equilibrium is small, perturbation theory can be used to approximate the evolution by a two-equation model. The perturbation theory also gives explicit conditions under which this model can be valid, and when it will fail. Implications of the non-equilibrium corrections for the classic Tennekes-Lumley balance in the dissipation rate equation are drawn: it is possible to establish both the cancellation of the leading order Re^(1/2) divergent contributions to vortex stretching and enstrophy destruction, and the existence of a nonzero difference which is finite in the limit of infinite Reynolds number.

  9. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
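
    A minimal sketch of spatial simulated annealing for gauge placement, with a weighted distance-to-nearest-gauge proxy standing in for the space-time averaged KED variance; the covariate mimicking radar inaccuracy, the number of gauges and the cooling schedule are all invented for the example.

      import numpy as np

      rng = np.random.default_rng(42)

      def weight(p):
          """Higher weight where radar imagery is assumed inaccurate (toy covariate)."""
          return 1.0 + 2.0 * np.exp(-20.0 * np.sum((p - [0.8, 0.2]) ** 2, axis=-1))

      eval_pts = rng.uniform(0, 1, size=(2000, 2))          # prediction grid on a unit square
      w = weight(eval_pts)

      def criterion(gauges):
          """Proxy for the space-time averaged kriging variance: weighted mean squared
          distance from every prediction point to its nearest rain-gauge."""
          d2 = np.min(np.sum((eval_pts[:, None, :] - gauges[None, :, :]) ** 2, axis=2), axis=1)
          return np.mean(w * d2)

      # Spatial simulated annealing: perturb one gauge, accept worse designs with decaying probability
      gauges = rng.uniform(0, 1, size=(15, 2))
      cur, temp = criterion(gauges), 0.05
      for it in range(5000):
          cand = gauges.copy()
          k = rng.integers(len(cand))
          cand[k] = np.clip(cand[k] + rng.normal(0, 0.05, 2), 0, 1)
          val = criterion(cand)
          if val < cur or rng.random() < np.exp((cur - val) / temp):
              gauges, cur = cand, val
          temp *= 0.999                                     # cooling schedule

      print("optimised criterion:", cur)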

  10. Non-Equilibrium Modeling of Inductively Coupled RF Plasmas

    2015-01-01

    The magnetic field at the coil wall can be approximated with the expression for an infinite solenoid, B(r = R) = µ0NIc, where the quantities N and Ic are the number of turns per unit length and the coil current, respectively.

  11. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    The present work is part of the development of Hidden Markov Model (HMM) based ... the Himalaya. In this work, HMMs have been developed for the forecasting of maximum and minimum ... data collection teams of Snow and Avalanche Study.
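
    For reference, the Baum-Welch re-estimation named in the title can be sketched for a small discrete HMM as below; the forward-backward pass is left unscaled, which is only adequate for short sequences, and the three-symbol observation sequence is invented rather than taken from the snow and avalanche data.

      import numpy as np

      rng = np.random.default_rng(0)

      def baum_welch(obs, n_states, n_symbols, n_iter=50):
          """Unscaled Baum-Welch for a discrete HMM (adequate for short sequences)."""
          A = rng.dirichlet(np.ones(n_states), n_states)     # transition matrix (rows sum to 1)
          B = rng.dirichlet(np.ones(n_symbols), n_states)    # emission matrix
          pi = np.full(n_states, 1.0 / n_states)
          T = len(obs)
          for _ in range(n_iter):
              # E-step: forward and backward probabilities
              alpha = np.zeros((T, n_states))
              beta = np.zeros((T, n_states))
              alpha[0] = pi * B[:, obs[0]]
              for t in range(1, T):
                  alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
              beta[-1] = 1.0
              for t in range(T - 2, -1, -1):
                  beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
              gamma = alpha * beta
              gamma /= gamma.sum(axis=1, keepdims=True)
              xi = np.zeros((n_states, n_states))
              for t in range(T - 1):
                  num = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]
                  xi += num / num.sum()
              # M-step: re-estimate initial, transition and emission probabilities
              pi = gamma[0]
              A = xi / gamma[:-1].sum(axis=0)[:, None]
              for k in range(n_symbols):
                  B[:, k] = gamma[obs == k].sum(axis=0)
              B /= gamma.sum(axis=0)[:, None]
          return pi, A, B

      # Toy observation sequence with 3 symbols (e.g. low / medium / high maximum temperature)
      obs = np.array([0, 0, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 0, 0, 1, 1, 2])
      pi, A, B = baum_welch(obs, n_states=2, n_symbols=3)
      print("transition matrix:\n", np.round(A, 3))
      print("emission matrix:\n", np.round(B, 3))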

  12. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values within a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that the model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.

  13. Optimisation of a parallel ocean general circulation model

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  14. Power law-based local search in spider monkey optimisation for lower order system modelling

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    Nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems that reflects, as closely as possible, the characteristics of the original higher order system. Further, a local search strategy, namely power law-based local search, is incorporated into SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  15. Computing diffusivities from particle models out of equilibrium

    Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia

    2018-04-01

    A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.

  16. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

    Cobbs Gary

    2012-08-01

    Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results: Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the

  17. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of
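
    A toy stepwise simulation in the spirit of the models described above (not Model 1 or Model 2 themselves): each cycle's efficiency is set by a simple equilibrium expression for the primer-target annealing step, with invented concentrations and an invented affinity constant, so that curves from different initial target concentrations can be compared.

      import numpy as np

      def simulate_qpcr(t0, p0=5e-7, kd=1e-8, n_cycles=40):
          """Toy stepwise qPCR model: in each cycle the annealing step is taken to an
          equilibrium in which the fraction of target strands carrying a primer, and
          hence the amplification efficiency, is p / (p + kd); extension then copies
          the primed fraction and consumes one primer per new strand."""
          target, primer, curve = t0, p0, []
          for _ in range(n_cycles):
              eff = primer / (primer + kd)          # equilibrium-based efficiency, 0..1
              new = target * eff
              primer = max(primer - new, 0.0)       # primers are consumed
              target += new
              curve.append(target)
          return np.array(curve)

      # Two reactions differing only in initial target concentration (mol/l)
      for t0 in (1e-15, 1e-13):
          curve = simulate_qpcr(t0)
          ct = np.argmax(curve > 1e-9)              # crude threshold cycle
          print(f"initial target {t0:.0e} M -> threshold cycle ~ {ct}")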

  18. Modeling, optimisation and analysis of re-entrant flowshop job

    HOD

    different alpha-cuts to obtain fuzzy processing times (FPT) of jobs to explore the importance of ... showed that fuzzy set theory can be useful in modeling ...

  19. A joint spare part and maintenance inspection optimisation model using the Delay-Time concept

    Wang Wenbin

    2011-01-01

    Spare parts and maintenance are closely related logistics activities, where maintenance generates the need for spare parts. When preventive maintenance is present, more spare parts may be needed at one time because of the planned preventive maintenance activities. This paper considers the joint optimisation of three decision variables, i.e., the ordering quantity, the ordering interval and the inspection interval. The model is constructed using the well-known Delay-Time concept, in which the failure process is divided into a two-stage process. The objective function is the long-run expected cost per unit time in terms of the three decision variables to be optimised. Here we use a block-based inspection policy where all components are inspected at the same time regardless of their ages. This creates a situation in which the time to failure since the immediately preceding inspection is random and has to be modelled by a distribution. This time is called the forward time, and a limiting but closed-form expression for its distribution is obtained. We develop an algorithm for the optimal solution of the decision process using a combination of analytical and enumeration approaches. The model is demonstrated by a numerical example. - Highlights: → Joint optimisation of maintenance and spare part inventory. → The use of the Delay-Time concept. → Block-based inspection. → Fixed order interval but variable order quantity.
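
    The delay-time part of such a model can be illustrated in isolation: with Poisson defect arrivals and an assumed exponential delay-time distribution, the long-run cost per unit time is written as a function of the inspection interval alone and minimised numerically (the ordering quantity and ordering interval, which the paper optimises jointly, are omitted here; all costs and rates are invented).

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize_scalar

      # Delay-time distribution: exponential with an assumed mean of 30 days
      mean_h = 30.0
      F = lambda h: 1.0 - np.exp(-h / mean_h)     # P(delay time <= h)

      lam     = 0.05      # defect arrival rate per day
      c_insp  = 200.0     # cost of one inspection
      c_fail  = 5000.0    # cost of a breakdown repair
      c_found = 800.0     # cost of repairing a defect found at inspection

      def cost_rate(T):
          """Long-run expected cost per day for inspections every T days:
          a defect arising at time u in (0, T) fails before the inspection
          with probability F(T - u), otherwise it is found and repaired."""
          exp_failures, _ = quad(lambda u: lam * F(T - u), 0.0, T)
          exp_found = lam * T - exp_failures
          return (c_insp + c_fail * exp_failures + c_found * exp_found) / T

      res = minimize_scalar(cost_rate, bounds=(1.0, 200.0), method="bounded")
      print(f"optimal inspection interval ~ {res.x:.1f} days, cost {res.fun:.2f} per day")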

  20. Optimising resolution for a preparative separation of Chinese herbal medicine using a surrogate model sample system.

    Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian

    2009-06-26

    This paper builds on previous modelling research with short single layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first flow and then rotational speed operating conditions at a preparative scale with long columns for a given phase system using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high value extract such as the crude ethanol extract of Chinese herbal medicine Millettia pachycarpa Benth. is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system-hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and speed of 1200 rpm, but for throughput were 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h), the best flow rate was 40 ml/min.

  1. Mathematical modelling and TMCP simulation for optimisation of steel behaviour

    Siwecki, T.

    2001-01-01

    Physically based mathematical models, developed at the Swedish Institute for Metals Research, for predicting steel behaviour and microstructure evolution in connection with thermal and thermomechanical controlled processing (TMCP) are discussed. The models can be used for computer predictions of recrystallization and grain growth of austenite after deformation, precipitation or dissolution of microalloying carbonitrides in austenite, flow stress during hot working, phase transformation behaviour during accelerated cooling, as well as the final microstructure and mechanical properties. The database, which contains information about steel behaviour for a large number of HSLA steels, is also presented. Optimization of TMCP parameters for improving the properties of the steel is discussed in relation to the microstructure and mechanical properties. The effect of TMCP parameters (reheating temperature, rolling schedules and finish rolling temperature as well as accelerated control cooling) on steel properties was studied at laboratory scale. (author)

  2. Waste management system optimisation for Southern Italy with MARKAL model

    Salvia, M.; Cosmi, C. [Istituto di Metodologie Avanzate di Analisi Ambientale, Consiglio Nazionale delle Ricerche, C. da S. Loja, 85050 (PZ) Tito Scalo (Italy); Macchiato, M. [Dipartimento di Scienze Fisiche, Universita Federico II, Via Cintia, 80126 Napoli (Italy); Mangiamele, L. [Dipartimento di Ingegneria e Fisica dell' Ambiente, Universita degli Studi della Basilicata, C. da Macchia Romana, 85100 Potenza (Italy)

    2002-01-01

    The MARKAL model generator was utilised to build up a comprehensive model of the anthropogenic activities system which points out the linkages between productive processes and waste disposal technologies. The aim of the study is to determine the optimal configuration of the waste management system for the Basilicata region (Southern Italy), in order to support the definition of the regional waste management plan in compliance with Italian law. A sensitivity analysis was performed to evaluate the influence of landfilling fees on the choice of waste processing technologies, in order to foster waste management strategies which are environmentally sustainable, economically affordable and highly efficient. The results show the key role of separate collection and mechanical pre-treatments in the achievement of the legislative targets.

  3. Recent tests of the equilibrium-point hypothesis (lambda model).

    Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B

    1998-07-01

    The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.

  4. Modelling of diffusion from equilibrium diffraction fluctuations in ordered phases

    Arapaki, E.; Argyrakis, P.; Tringides, M.C.

    2008-01-01

    Measurements of the collective diffusion coefficient Dc at equilibrium are difficult because they are based on monitoring low amplitude concentration fluctuations generated spontaneously, that are difficult to measure experimentally. A new experimental method has been recently used to measure time-dependent correlation functions from the diffraction intensity fluctuations and was applied to measure thermal step fluctuations. The method has not been applied yet to measure superstructure intensity fluctuations in surface overlayers and to extract Dc. With Monte Carlo simulations we study equilibrium fluctuations in Ising lattice gas models with nearest neighbor attractive and repulsive interactions. The extracted diffusion coefficients are compared to the ones obtained from equilibrium methods. The new results are in good agreement with the results from the other methods, i.e., Dc decreases monotonically with coverage Θ for attractive interactions and increases monotonically with Θ for repulsive interactions. Even the absolute value of Dc agrees well with the results obtained with the probe area method. These results confirm that this diffraction based method is a novel, reliable way to measure Dc especially within the ordered region of the phase diagram when the superstructure spot has large intensity

  5. Chemical equilibrium relations used in the fireball model of relativistic heavy ion reactions

    Gupta, S.D.

    1978-01-01

    The fireball model of relativistic heavy-ion collisions uses chemical equilibrium relations to predict cross sections for particle and composite production. These relations are examined in a canonical ensemble model where chemical equilibrium is not explicitly invoked

  6. MODELLING AND OPTIMISATION OF A BIMORPH PIEZOELECTRIC CANTILEVER BEAM IN AN ENERGY HARVESTING APPLICATION

    CHUNG KET THEIN

    2016-02-01

    Piezoelectric materials are excellent transducers for converting vibrational energy into electrical energy, and vibration-based piezoelectric generators are seen as an enabling technology for wireless sensor networks, especially in self-powered devices. This paper proposes an alternative method for predicting the power output of a bimorph cantilever beam using a finite element method for both static and dynamic frequency analyses. Experiments are performed to validate the model and the simulation results. In addition, a novel approach is presented for optimising the structure of the bimorph cantilever beam, by which the power output is maximised and the structural volume is minimised simultaneously. Finally, the results of the optimised design are presented and compared with other designs.

  7. Pharmaceutical industry and trade liberalization using computable general equilibrium model.

    Barouni, M; Ghaderi, H; Banouei, Aa

    2012-01-01

    Computable general equilibrium (CGE) models are known as a powerful instrument in economic analyses and have been widely used to evaluate the effects of trade liberalization. The purpose of this study was to assess the impacts of trade openness on the pharmaceutical industry using a CGE model. Using a computable general equilibrium model, the effects of a decrease in tariffs, as a symbol of trade liberalization, on key variables of Iranian pharmaceutical products were studied. Simulation was performed via two scenarios. The first scenario was the effect of decreases of 10, 30, 50, and 100 in tariffs on pharmaceutical products on key drug variables, and the second was the effect of a decrease in tariffs in all sectors except pharmaceutical products on vital and economic variables of pharmaceutical products. The required data were obtained and the model parameters were calibrated according to the social accounting matrix of Iran in 2006. The simulation results demonstrated that the first scenario increased imports, exports, drug supply to markets and household consumption, while imports, exports, supply of products to market, and household consumption of pharmaceutical products would on average decrease in the second scenario. Ultimately, social welfare would improve in all scenarios. We present and synthesize a CGE model which can be used to analyze trade liberalization policy issues in developing countries (like Iran), and thus provide information that policymakers can use to improve pharmacy economics.

  8. Model for optimising the execution of anti-spam filters

    David Ruano-Ordás

    2016-12-01

    In recent years, the combination of several filtering techniques for the development of anti-spam systems has gained enormous popularity. However, although the accuracy achieved by these models has increased considerably, their use has entailed the emergence of new challenges, such as the need to reduce the excessive use of computational resources, to increase filtering speed and to adjust the weights used for the combination of several filtering techniques. In order to achieve this goal we have refined several aspects, including: (i) the design and development of small technical improvements to increase the overall performance of the filter, (ii) the application of genetic algorithms to increase filtering accuracy and (iii) the use of scheduling algorithms to improve filtering throughput.

  9. CIME course on Modelling and Optimisation of Flows on Networks

    Ambrosio, Luigi; Helbing, Dirk; Klar, Axel; Zuazua, Enrique

    2013-01-01

    In recent years flows in networks have attracted the interest of many researchers from different areas, e.g. applied mathematicians, engineers, physicists, economists. The main reason for this ubiquity is the wide and diverse range of applications, such as vehicular traffic, supply chains, blood flow, irrigation channels, data networks and others. This book presents an extensive set of notes by world leaders on the main mathematical techniques used to address such problems, together with investigations into specific applications. The main focus is on partial differential equations in networks, but ordinary differential equations and optimal transport are also included. Moreover, the modeling is completed by analysis, numerics, control and optimization of flows in networks. The book will be a valuable resource for every researcher or student interested in the subject.

  10. Modelling and optimisation of fs laser-produced Kα sources

    Gibbon, P.; Masek, M.; Teubner, U.; Lu, W.; Nicoul, M.; Shymanovich, U.; Tarasevitch, A.; Zhou, P.; Sokolowski-Tinten, K.; Linde, D. von der

    2009-01-01

    Recent theoretical and numerical studies of laser-driven femtosecond Kα sources are presented, aimed at understanding a recent experimental campaign to optimize emission from thin coating targets. Particular attention is given to control over the laser-plasma interaction conditions defined by the interplay between a controlled prepulse and the angle of incidence. It is found that the X-ray efficiency for poor-contrast laser systems in which a large preplasma is suspected can be enhanced by using a near-normal incidence geometry even at high laser intensities. With high laser contrast, similar efficiencies can be achieved by going to larger incidence angles, but only at the expense of a larger X-ray spot size. New developments in three-dimensional modelling are also reported with the goal of handling interactions with geometrically complex targets and finite resistivity. (orig.)

  11. A new inorganic atmospheric aerosol phase equilibrium model (UHAERO

    N. R. Amundson

    2006-01-01

    A variety of thermodynamic models have been developed to predict inorganic gas-aerosol equilibrium. To achieve computational efficiency a number of the models rely on a priori specification of the phases present in certain relative humidity regimes. Presented here is a new computational model, named UHAERO, that is both efficient and rigorously computes phase behavior without any a priori specification. The computational implementation is based on minimization of the Gibbs free energy using a primal-dual method, coupled to a Newton iteration. The mathematical details of the solution are given elsewhere. The model computes deliquescence behavior without any a priori specification of the relative humidities of deliquescence. Also included in the model is a formulation based on classical theory of nucleation kinetics that predicts crystallization behavior. Detailed phase diagrams of the sulfate/nitrate/ammonium/water system are presented as a function of relative humidity at 298.15 K over the complete space of composition.

  12. Feeder Type Optimisation for the Plain Flow Discharge Process of an Underground Hopper by Discrete Element Modelling

    Jan Nečas; Jakub Hlosta; David Žurovec; Martin Žídek; Jiří Rozbroj; Jiří Zegzulka

    2017-01-01

    This paper describes optimisation of a conveyor from an underground hopper intended for a coal transfer station. The original solution, designed with a chain conveyor, encountered operational problems that limited its continuous operation. Discrete element modelling (DEM) was chosen to optimise the transport. DEM simulations allow device design modifications directly in the 3D CAD model, and the simulation then makes it possible to evaluate whether the adjustment was successful. By...

  13. Model-based PEEP optimisation in mechanical ventilation

    Chiew Yeong Shiong

    2011-12-01

    Background: Acute Respiratory Distress Syndrome (ARDS) patients require mechanical ventilation (MV) for breathing support. Patient-specific PEEP is encouraged for treating different patients, but there is no well-established method for optimal PEEP selection. Methods: A study of 10 patients diagnosed with ALI/ARDS who underwent a recruitment manoeuvre is carried out. Airway pressure and flow data are used to identify patient-specific constant lung elastance (Elung) and time-variant dynamic lung elastance (Edrs) at each PEEP level (increments of 5 cmH2O), for a single-compartment linear lung model, using integral-based methods. Optimal PEEP is estimated using the Elung versus PEEP curve, the Edrs-pressure curve and the Edrs area, at minimum elastance (maximum compliance) and at the inflection of the curves (diminishing return). Results are compared to clinically selected PEEP values. The trials and use of the data were approved by the New Zealand South Island Regional Ethics Committee. Results: The median absolute percentage fitting error to the data when estimating time-variant Edrs is 0.9% [IQR: 0.5-2.4] and 5.6% [IQR: 1.8-11.3] when estimating constant Elung. Both Elung and Edrs decrease with PEEP to a minimum before rising, indicating potential over-inflation. Median Edrs over all patients across all PEEP values was 32.2 cmH2O/l [IQR: 26.1-46.6], reflecting the heterogeneity of ALI/ARDS patients and their response to PEEP, which complicates standard approaches to PEEP selection. All Edrs-pressure curves have a clear inflection point before minimum Edrs, making PEEP selection straightforward. Model-based selected PEEP using the proposed metrics was higher than clinically selected values in 7/10 cases. Conclusion: Continuous monitoring of the patient-specific Elung and Edrs and minimally invasive PEEP titration provide a unique, patient-specific and physiologically relevant metric to optimise PEEP selection with minimal disruption of MV therapy.
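
    A minimal sketch of the single-compartment linear lung model fit that the identification step relies on, using synthetic airway data and an ordinary least-squares fit of Paw = E·V + R·Q + P0. The paper's integral-based formulation and the time-varying Edrs estimation are not reproduced here, and all numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic airway data for one breath (illustrative values only)
t = np.linspace(0, 2.0, 200)                     # s
Q = np.where(t < 1.0, 0.5, -0.5)                 # flow, L/s (square wave)
V = np.cumsum(Q) * (t[1] - t[0])                 # volume, L
E_true, R_true, P0 = 30.0, 10.0, 5.0             # cmH2O/L, cmH2O.s/L, cmH2O
Paw = E_true * V + R_true * Q + P0 + rng.normal(0, 0.2, t.size)

# Least-squares fit of the single-compartment model Paw = E*V + R*Q + P0
X = np.column_stack([V, Q, np.ones_like(t)])
(E_hat, R_hat, P0_hat), *_ = np.linalg.lstsq(X, Paw, rcond=None)
print(f"E = {E_hat:.1f} cmH2O/L, R = {R_hat:.1f} cmH2O.s/L, offset = {P0_hat:.1f} cmH2O")
```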

  14. Optimisation models and solution methods for load management

    Gustafsson, Stig-Inge [Linkoeping Univ. (Sweden). Div. of Wood Science and Technology; Roennqvist, Mikael; Claesson, Marcus [Linkoeping Univ. (Sweden). Div. of Optimisation

    2001-02-01

    The electricity market in Sweden has changed during recent years. Electricity for industrial use can nowadays be purchased from a number of competing electricity suppliers. Hence, the price of each kilowatt-hour is significantly lower than just two years ago and the interest in electricity conservation measures has declined. Part of the electricity tariff is, however, almost the same as before, i.e. the demand cost, expressed in Swedish Kronor (SEK) per kilowatt. This has put focus on load management measures in order to decrease this specific cost. Saving one kWh might lead to monetary savings of between 0.22 and 914 SEK, and this paper shows how to save only those kWh which really save money. A load management system has been installed in a small carpentry factory; the device can turn off equipment according to a certain priority and for a number of minutes each hour. The question is now: what level of electricity load is optimal in a strict mathematical sense, i.e. how many kW should be set in the load management computer in order to get the best profitability? In this paper we develop a mathematical model which can be used both as a tool to find the most profitable subscription level and as a tool to control the turn-off choices. Numerical results from a case study are presented.
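
    The trade-off formulated in the paper - a higher subscribed demand level costs more per kW but avoids curtailing production - can be sketched as a one-dimensional search over the subscription level. The load profile and cost figures below are invented for illustration and are not the paper's model or data.

```python
import numpy as np

rng = np.random.default_rng(1)
load = rng.normal(80, 15, 8760)            # hypothetical hourly load over one year, kW
demand_charge = 500.0                       # hypothetical SEK per subscribed kW and year
curtail_cost = 2.0                          # hypothetical SEK per kWh shed by the controller

def annual_cost(limit_kw):
    shed = np.clip(load - limit_kw, 0, None).sum()   # kWh shed over the year
    return demand_charge * limit_kw + curtail_cost * shed

levels = np.arange(40, 141)
costs = [annual_cost(l) for l in levels]
best = levels[int(np.argmin(costs))]
print(f"best subscription level ~ {best} kW, annual cost ~ {min(costs):,.0f} SEK")
```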

  15. Integrating climatic information in water resources modelling and optimisation

    Gelati, Emiliano

    ... In the second example, monthly runoff in western Ecuador is simulated and forecast using El Niño information. El Niño causes high precipitation in the coastal regions of Ecuador as a consequence of positive temperature anomalies in the eastern Pacific. The novelty lies in a ... combination of several time-varying models into a non-stationary description of the runoff, in which El Niño-driven regime shifts are taken into account. Potential applications include the use of observed and forecast large-scale climatic data for simulating and forecasting runoff. ... The optimisation methods are developed with a view to using the output of the runoff models to achieve improved reservoir operation. Long- and short-term optimisations for two reservoirs (Daule Peripa and Baba) in western Ecuador show that considerable improvements in reservoir operation can be achieved when El Niño...

  16. Optimisation models and solution methods for load management

    Gustafsson, Stig-Inge; Roennqvist, Mikael; Claesson, Marcus

    2001-02-01

    The electricity market in Sweden has changed during recent years. Electricity for industrial use can nowadays be purchased from a number of competing electricity suppliers. Hence, the price of each kilowatt-hour is significantly lower than just two years ago and the interest in electricity conservation measures has declined. Part of the electricity tariff is, however, almost the same as before, i.e. the demand cost, expressed in Swedish Kronor (SEK) per kilowatt. This has put focus on load management measures in order to decrease this specific cost. Saving one kWh might lead to monetary savings of between 0.22 and 914 SEK, and this paper shows how to save only those kWh which really save money. A load management system has been installed in a small carpentry factory; the device can turn off equipment according to a certain priority and for a number of minutes each hour. The question is now: what level of electricity load is optimal in a strict mathematical sense, i.e. how many kW should be set in the load management computer in order to get the best profitability? In this paper we develop a mathematical model which can be used both as a tool to find the most profitable subscription level and as a tool to control the turn-off choices. Numerical results from a case study are presented.

  17. Modeling Inflation Using a Non-Equilibrium Equation of Exchange

    Chamberlain, Robert G.

    2013-01-01

    Inflation is a change in the prices of goods that takes place without changes in the actual values of those goods. The Equation of Exchange, formulated clearly in a seminal paper by Irving Fisher in 1911, establishes an equilibrium relationship between the price index P (also known as "inflation"), the economy's aggregate output Q (also known as "the real gross domestic product"), the amount of money available for spending M (also known as "the money supply"), and the rate at which money is reused V (also known as "the velocity of circulation of money"). This paper offers first a qualitative discussion of what can cause these factors to change and how those causes might be controlled, then develops a quantitative model of inflation based on a non-equilibrium version of the Equation of Exchange. Causal relationships are different from equations in that the effects of changes in the causal variables take time to play out, often significant amounts of time. In the model described here, wages track prices, but only after a distributed lag. Prices change whenever the money supply, aggregate output, or the velocity of circulation of money change, but only after a distributed lag. Similarly, the money supply depends on the supplies of domestic and foreign money, which depend on the monetary base and a variety of foreign transactions, respectively. The spreading of delays mitigates the shocks of sudden changes to important inputs, but the most important aspect of this model is that delays, which often have dramatic consequences in dynamic systems, are explicitly incorporated. Keywords: macroeconomics, inflation, equation of exchange, non-equilibrium, Athena Project
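
    For reference, Fisher's Equation of Exchange, around which the model is built, together with a heavily simplified distributed-lag caricature of the non-equilibrium idea (the lag weights w_k below are schematic and are not the Athena model's actual equations):

```latex
M V = P Q , \qquad
P_t \;\approx\; \sum_{k \ge 0} w_k \,\frac{M_{t-k}\,V_{t-k}}{Q_{t-k}} ,
\qquad \sum_{k \ge 0} w_k = 1 .
```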

  18. Equilibrium in a random viewer model of television broadcasting

    Hansen, Bodil Olai; Keiding, Hans

    2014-01-01

    The authors considered a model of a commercial television market with advertising and probabilistic viewer choice of channel, where private broadcasters may coexist with a public television broadcaster. The broadcasters influence the probability of getting viewer attention through the amount ... number of channels. The authors derive properties of equilibrium in an oligopolistic market with private broadcasters and show that the number of firms has a negative effect on overall advertising and viewer satisfaction. If there is a public channel that also sells advertisements but does not maximize ... profits, this will have a positive effect on advertiser and viewer satisfaction...

  19. Model of opacity and emissivity of non-equilibrium plasma

    Politov V Y

    2008-01-01

    In this work a model describing the absorption and emission properties of non-equilibrium plasma is presented. It is based on kinetics equations for the populations of the ground, singly and doubly excited states of multi-charged ions. After solving these equations, the state populations, together with the spectroscopic data supplied in a special database for many ionization stages, are used to build the spectral distributions of plasma opacity and emissivity in the STA approximation. Results of kinetics simulations are presented for gold, an important X-ray converter that is being investigated intensively in ICF experiments.

  20. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
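
    For reference, the equilibrium (stationary-excess) distribution applied to a fault-detection time distribution F with finite mean μ is the standard renewal-theory object below; in an NHPP-based SRM with ω expected total faults, the mean value function would then presumably take the form Λ(t) = ω F_e(t) instead of the usual Λ(t) = ω F(t).

```latex
F_e(t) \;=\; \frac{1}{\mu}\int_0^{t} \bigl(1 - F(s)\bigr)\,\mathrm{d}s ,
\qquad
\mu \;=\; \int_0^{\infty} \bigl(1 - F(s)\bigr)\,\mathrm{d}s .
```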

  1. Out-of-equilibrium dynamics in a Gaussian trap model

    Diezemann, Gregor

    2007-01-01

    The violations of the fluctuation-dissipation theorem are analysed for a trap model with a Gaussian density of states. In this model, the system reaches thermal equilibrium at long times after a quench to any finite temperature, and therefore all ageing effects are of a transient nature. For not too long times after the quench it is found that the so-called fluctuation-dissipation ratio tends to a non-trivial limit, thus indicating the possibility of defining a timescale-dependent effective temperature. However, different definitions of the effective temperature yield distinct results. In particular, plots of the integrated response versus the correlation function strongly depend on the way they are constructed. Also the definition of effective temperatures in the frequency domain is not unique for the model considered. This may have some implications for the interpretation of results from computer simulations and experimental determinations of effective temperatures.

  2. Equilibrium Analysis of a Yellow Fever Dynamical Model with Vaccination

    Silvia Martorano Raimundo

    2015-01-01

    We propose an equilibrium analysis of a dynamical model of yellow fever transmission in the presence of a vaccine. The model considers both human and vector populations. We found threshold parameters that affect the development of the disease and the infectious status of the human population in the presence of a vaccine whose protection may wane over time. In particular, we derived a threshold vaccination rate, above which the disease would be eradicated from the human population. We show that if the mortality rate of the mosquitoes is greater than a given threshold, then the disease is naturally (without intervention) eradicated from the population. In contrast, if the mortality rate of the mosquitoes is less than that threshold, then the disease is eradicated from the populations only when the growth rate of humans is less than another threshold; otherwise, the disease is eradicated only if the reproduction number of the infection after vaccination is less than 1. When this reproduction number is greater than 1, the disease will be eradicated from the human population if the vaccination rate is greater than a given threshold; otherwise, the disease will establish itself among humans, reaching a stable endemic equilibrium. The analysis presented in this paper can be useful both for a better understanding of the disease dynamics and for the planning of vaccination strategies.

  3. Homogeneous non-equilibrium two-phase critical flow model

    Schroeder, J.J.; Vuxuan, N.

    1987-01-01

    An important aspect of nuclear and chemical reactor safety is the ability to predict the maximum or critical mass flow rate from a break or leak in a pipe system. At the beginning of such a blowdown, if the stagnation condition of the fluid is subcooled or slightly saturated, thermodynamic non-equilibrium exists downstream, i.e. the fluid becomes superheated to a degree determined by the liquid pressure. A simplified non-equilibrium model, explained in this report, is valid for rapidly decreasing pressure along the flow path. It presumes that the fluid has to be superheated by an amount governed by physical principles before it starts to flash into steam. The flow is assumed to be homogeneous, i.e. the steam and liquid velocities are equal. An adiabatic flow calculation mode (Fanno lines) is employed to evaluate the critical flow rate for long pipes. The model is found to describe critical flow tests satisfactorily. Good agreement is obtained with the large-scale Marviken tests as well as with small-scale experiments. (orig.)

  4. A new equilibrium trading model with asymmetric information

    Lianzhang Bao

    2018-03-01

    Taking arbitrage opportunities into consideration in an incomplete market, dealers will price bonds based on asymmetric information. The dealer with the best offering price wins the bid. The risk premium in a dealer's offering price is primarily determined by the dealer's add-on rate of change to the term structure. To optimize the trading strategy, a new equilibrium trading model is introduced. An optimal sequential estimation scheme for detecting the risk premium due to private information is proposed based on historical prices, and the best bond pricing formula is given with the corresponding optimal trading strategy. Numerical examples are provided to illustrate the economic insights under certain stochastic term structure interest rate models.

  5. Equilibrium modeling of the TFCX poloidal field coil system

    Strickler, D.J.; Miller, J.B.; Rothe, K.E.; Peng, Y.K.M.

    1984-04-01

    The Toroidal Fusion Core Experiment (TFCX) is proposed to be an ignition device with a low safety factor (q approx. = 2.0), rf or rf-assisted startup, a long inductive burn pulse (approx. 300 s), and an elongated plasma cross section (kappa = 1.6) with moderate triangularity (delta = 0.3). System trade studies have been carried out to assist in choosing an appropriate candidate for the TFCX conceptual design. This report describes an important element in these system studies - the magnetohydrodynamic (MHD) equilibrium modeling of the TFCX poloidal field (PF) coil system and its impact on the choice of machine size. Reference design points for the all-superconducting toroidal field (TF) coil (TFCX-S) and hybrid (TFCX-H) options are presented that satisfy given PF system criteria, including volt-second requirements during burn, mechanical configuration constraints, maximum field constraints at the superconducting PF coils, and plasma shape parameters. Poloidal coil current waveforms for the TFCX-S and TFCX-H reference designs consistent with the equilibrium requirements of the plasma startup, heating, and burn phases of a typical discharge scenario are calculated. Finally, a possible option for quasi-steady-state operation is discussed.

  6. The negotiated equilibrium model of spinal cord function.

    Wolpaw, Jonathan R

    2018-04-16

    The belief that the spinal cord is hardwired is no longer tenable. Like the rest of the CNS, the spinal cord changes during growth and aging, when new motor behaviours are acquired, and in response to trauma and disease. This paper describes a new model of spinal cord function that reconciles its recently appreciated plasticity with its long recognized reliability as the final common pathway for behaviour. According to this model, the substrate of each motor behaviour comprises brain and spinal plasticity: the plasticity in the brain induces and maintains the plasticity in the spinal cord. Each time a behaviour occurs, the spinal cord provides the brain with performance information that guides changes in the substrate of the behaviour. All the behaviours in the repertoire undergo this process concurrently; each repeatedly induces plasticity to preserve its key features despite the plasticity induced by other behaviours. The aggregate process is a negotiation among the behaviours: they negotiate the properties of the spinal neurons and synapses that they all use. The ongoing negotiation maintains the spinal cord in an equilibrium - a negotiated equilibrium - that serves all the behaviours. This new model of spinal cord function is supported by laboratory and clinical data, makes predictions borne out by experiment, and underlies a new approach to restoring function to people with neuromuscular disorders. Further studies are needed to test its generality, to determine whether it may apply to other CNS areas such as the cerebral cortex, and to develop its therapeutic implications. This article is protected by copyright. All rights reserved.

  7. Foundations and models of pre-equilibrium decay

    Bunakov, V.E.

    1980-01-01

    A review is given of the presently existing microscopic, semi-phenomenological and phenomenological models used for the description of nuclear reactions. Their advantages and drawbacks are analysed. Special attention is given to the analysis of phenomenological models of pre-equilibrium decay based on the use of master equations (time-dependent versions of exciton models, intranuclear cascade, etc.). A version of the unified theory of nuclear reactions is discussed which makes use of quantum master equations for finite open systems. The conditions are formulated for the derivation of these equations from the time-dependent Schroedinger equation for the many-body problem. The various models of nuclear reactions used in practice are shown to be approximate solutions of master equations for finite open systems. From this point of view, the reliability of these models in describing experimental data is analysed. Possible modifications are considered which provide better agreement between the different models and a more exact description of experimental data. (author)

  8. Cost modelling in maintenance strategy optimisation for infrastructure assets with limited data

    Zhang, Wenjuan; Wang, Wenbin

    2014-01-01

    Our paper reports on the use of cost modelling in maintenance strategy optimisation for infrastructure assets. We present an original approach: the possibility of modelling even when the data and information usually required are not sufficient in quantity and quality. Our method makes use of subjective expert knowledge, and requires information gathered for only a small sample of assets to start with. Bayes linear methods are adopted to combine the subjective expert knowledge with the sample data to estimate the unknown model parameters of the cost model. When new information becomes available, Bayes linear methods also prove useful in updating these estimates. We use a case study from the rail industry to demonstrate our methods. The optimal maintenance strategy is obtained via simulation based on the estimated model parameters and the strategy with the least unit time cost is identified. When the optimal strategy is not followed due to insufficient funding, the future costs of recovering the degraded asset condition are estimated
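
    The Bayes linear update used to combine expert judgement with the small asset sample follows the textbook adjusted-expectation and adjusted-variance formulas; the sketch below applies them to invented one-dimensional numbers, not to the paper's rail data.

```python
import numpy as np

# Prior (expert) specification for an unknown cost-model parameter B (illustrative)
E_B, Var_B = 10.0, 4.0            # expert's expectation and variance
# Observable D: sample mean of n asset inspections, modelled as D = B + noise
n, Var_noise = 5, 9.0
E_D = E_B
Var_D = Var_B + Var_noise / n
Cov_BD = Var_B

D_obs = 12.4                      # observed sample mean (made up)

# Bayes linear adjusted expectation and variance:
#   E_D(B) = E(B) + Cov(B,D) Var(D)^-1 (D - E(D))
#   Var_D(B) = Var(B) - Cov(B,D) Var(D)^-1 Cov(D,B)
E_B_adj = E_B + Cov_BD / Var_D * (D_obs - E_D)
Var_B_adj = Var_B - Cov_BD**2 / Var_D
print(f"adjusted E[B] = {E_B_adj:.2f}, adjusted Var[B] = {Var_B_adj:.2f}")
```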

  9. Computable general equilibrium model fiscal year 2013 capability development report

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine its simulation properties in more detail.

  10. Modelling, simulation, and optimisation of a downflow entrained flow reactor for black liquor gasification

    Marklund, Magnus [ETC Energitekniskt Centrum, Piteaa (Sweden)

    2003-12-01

    Black liquor, a by-product of the chemical pulping process, is an important liquid fuel in the pulp and paper industry. A potential technology for improving the recovery cycle of energy and chemicals contained in the liquid fuel is pressurised black liquor gasification (PBLG). However, uncertainties about the reliability and robustness of the technology are preventing a large-scale market introduction. One important step towards greater trust in the process reliability is the development of simulation tools that can provide a better understanding of the process and improve performance through optimisation. At the beginning of 2001 a project was initiated in order to develop a simulation tool, based on CFD (Computational Fluid Dynamics), for an entrained-flow gasifier in PBLG. The aim has been to provide an advanced tool for a better understanding of process performance, to help with troubleshooting in the development plant, and for use in optimisation of a full-scale commercial gasifier. Furthermore, the project will also provide quantitative information on burner functionality through advanced laser-optical measurements using a Phase Doppler Anemometer (PDA). To this point in the current project, three different concept models have been developed. The work is summarised in the thesis 'Modelling and Simulation of Pressurised Black Liquor Gasification at High Temperature', presented at Luleaa Univ. of Technology in October 2003. The construction of an atmospheric burner test rig has also been initiated. The main objective of the rig will be to quantify the atomisation performance of suitable burner nozzles for a PBLG gasifier, which can be used as input for the CFD model. The main conclusions from the modelling work done so far can be condensed to the following points: From the first modelling results it was concluded that a wide spray pattern is preferable with respect to the demand for long residence times for black liquor droplets and a low amount

  11. Self-optimisation and model-based design of experiments for developing a C–H activation flow process

    Alexander Echtermeyer

    2017-01-01

    A recently described C(sp3)–H activation reaction to synthesise aziridines was used as a model reaction to demonstrate the methodology of developing a process model using model-based design of experiments (MBDoE) and self-optimisation approaches in flow. The two approaches are compared in terms of experimental efficiency. The self-optimisation approach required the smallest number of experiments to reach the specified objectives of cost and product yield, whereas the MBDoE approach enabled rapid generation of a process model.

  12. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  13. Assessment of thermodynamic models for the design, analysis and optimisation of gas liquefaction systems

    Nguyen, Tuong-Van; Elmegaard, Brian

    2016-01-01

    Highlights: • Six thermodynamic models used for evaluating gas liquefaction systems are compared. • Three gas liquefaction systems are modelled, assessed and optimised for each equation of state. • The predictions of thermophysical properties and energy flows are significantly different. • The GERG-2008 model is the only consistent one, while cubic, virial and statistical equations are unsatisfying. - Abstract: Natural gas liquefaction systems are based on refrigeration cycles – they consist of the same operations such as heat exchange, compression and expansion, but they have different layouts, components and working fluids. The design of these systems requires a preliminary simulation and evaluation of their performance. However, the thermodynamic models used for this purpose are characterised by different mathematical formulations, ranges of application and levels of accuracy. This may lead to inconsistent results when estimating hydrocarbon properties and assessing the efficiency of a given process. This paper presents a thorough comparison of six equations of state widely used in the academia and industry, including the GERG-2008 model, which has recently been adopted as an ISO standard for natural gases. These models are used to (i) estimate the thermophysical properties of a Danish natural gas, (ii) simulate, and (iii) optimise liquefaction systems. Three case studies are considered: a cascade layout with three pure refrigerants, a single mixed-refrigerant unit, and an expander-based configuration. Significant deviations are found between all property models, and in all case studies. The main discrepancies are related to the prediction of the energy flows (up to 7%) and to the heat exchanger conductances (up to 11%), and they are not systematic errors. The results illustrate the superiority of using the GERG-2008 model for designing gas processes in real applications, with the aim of reducing their energy use. They demonstrate as well that

  14. An Equilibrium-Based Model of Gas Reaction and Detonation

    Trowbridge, L.D.

    2000-01-01

    During gaseous diffusion plant operations, conditions leading to the formation of flammable gas mixtures may occasionally arise. Currently, these could consist of the evaporative coolant CFC-114 and fluorinating agents such as F2 and ClF3. Replacement of CFC-114 with a non-ozone-depleting substitute is planned. Consequently, in the future, the substitute coolant must also be considered as a potential fuel in flammable gas mixtures. Two questions of practical interest arise: (1) can a particular mixture sustain and propagate a flame if ignited, and (2) what is the maximum pressure that can be generated by the burning (and possibly exploding) gas mixture, should it ignite? Experimental data on these systems, particularly for the newer coolant candidates, are limited. To assist in answering these questions, a mathematical model was developed to serve as a tool for predicting the potential detonation pressures and for estimating the composition limits of flammability for these systems based on empirical correlations between gas mixture thermodynamics and flammability for known systems. The present model uses the thermodynamic equilibrium to determine the reaction endpoint of a reactive gas mixture and uses detonation theory to estimate an upper bound to the pressure that could be generated upon ignition. The model described and documented in this report is an extended version of related models developed in 1992 and 1999

  15. Modeling equilibrium adsorption of organic micropollutants onto activated carbon

    De Ridder, David J.

    2010-05-01

    Solute hydrophobicity, polarizability, aromaticity and the presence of H-bond donor/acceptor groups have been identified as important solute properties that affect adsorption on activated carbon. However, the adsorption mechanisms related to these properties occur in parallel, and their respective dominance depends on the solute properties as well as on carbon characteristics. In this paper, a model based on multivariate linear regression is described that was developed to predict equilibrium carbon loading on a specific activated carbon (F400) for solutes reflecting a wide range of solute properties. In order to improve prediction accuracy, groups (bins) of solutes with similar solute properties were defined and solute removals were predicted for each bin separately. With these individual linear models, coefficient of determination (R2) values ranging from 0.61 to 0.84 were obtained. With the mechanistic approach used in developing this predictive model, a strong relation with adsorption mechanisms is established, improving the interpretation and, ultimately, acceptance of the model. © 2010 Elsevier Ltd.
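
    A minimal sketch of the modelling approach: an ordinary multivariate linear regression of (log) equilibrium carbon loading on solute descriptors such as hydrophobicity, polarizability, aromaticity and H-bond counts. The descriptor matrix and responses below are synthetic, and the per-bin splitting used in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented descriptors: [log Kow, polarizability, aromatic rings, H-bond donors/acceptors]
X = rng.normal(size=(40, 4))
true_coef = np.array([0.8, 0.3, 0.4, -0.2])
log_q = X @ true_coef + 1.5 + rng.normal(0, 0.1, 40)     # log10 carbon loading (illustrative)

# Ordinary least squares fit, log q = b0 + b . x
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, log_q, rcond=None)
pred = A @ coef
ss_res = np.sum((log_q - pred) ** 2)
ss_tot = np.sum((log_q - log_q.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 2), "| R2 =", round(1 - ss_res / ss_tot, 3))
```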

  16. Iterative optimisation of Monte Carlo detector models using measurements and simulations

    Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2015-04-11

    This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of a significantly lower user effort and therefore an improved work efficiency compared to the prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages. The four steps consist in the acquisition in the laboratory of measurement data to be used as reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.
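
    Steps three and four of the method can be sketched as a linear least-squares update of detector-model parameters from simulated sensitivities. The simulate function below is a stand-in for a real Monte Carlo run, and the parameters, energies and numbers are invented.

```python
import numpy as np

def simulate(params):
    """Stand-in for a Monte Carlo run: returns detector efficiencies at three energies."""
    dead_layer, distance = params
    return np.array([0.30 - 0.5 * dead_layer - 0.020 * distance,
                     0.20 - 0.2 * dead_layer - 0.015 * distance,
                     0.10 - 0.1 * dead_layer - 0.010 * distance])

measured = np.array([0.261, 0.176, 0.085])       # reference lab measurements (invented)
params = np.array([0.02, 1.0])                    # tentative model: dead layer (cm), distance (cm)

for _ in range(3):                                # iterate steps 3-4 of the method
    base = simulate(params)
    # Build the sensitivity (Jacobian) matrix by finite differences of simulations
    J = np.empty((3, 2))
    for j, h in enumerate([1e-3, 1e-2]):
        dp = np.zeros(2)
        dp[j] = h
        J[:, j] = (simulate(params + dp) - base) / h
    # Solve the linear system for the parameter update and apply it
    delta, *_ = np.linalg.lstsq(J, measured - base, rcond=None)
    params = params + delta
print("updated model parameters:", np.round(params, 4))
```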

  17. hydroPSO: A Versatile Particle Swarm Optimisation R Package for Calibration of Environmental Models

    Zambrano-Bigiarini, M.; Rojas, R.

    2012-04-01

    Particle Swarm Optimisation (PSO) is a recent and powerful population-based stochastic optimisation technique inspired by social behaviour of bird flocking, which shares similarities with other evolutionary techniques such as Genetic Algorithms (GA). In PSO, however, each individual of the population, known as particle in PSO terminology, adjusts its flying trajectory on the multi-dimensional search-space according to its own experience (best-known personal position) and the one of its neighbours in the swarm (best-known local position). PSO has recently received a surge of attention given its flexibility, ease of programming, low memory and CPU requirements, and efficiency. Despite these advantages, PSO may still get trapped into sub-optimal solutions, suffer from swarm explosion or premature convergence. Thus, the development of enhancements to the "canonical" PSO is an active area of research. To date, several modifications to the canonical PSO have been proposed in the literature, resulting into a large and dispersed collection of codes and algorithms which might well be used for similar if not identical purposes. In this work we present hydroPSO, a platform-independent R package implementing several enhancements to the canonical PSO that we consider of utmost importance to bring this technique to the attention of a broader community of scientists and practitioners. hydroPSO is model-independent, allowing the user to interface any model code with the calibration engine without having to invest considerable effort in customizing PSO to a new calibration problem. Some of the controlling options to fine-tune hydroPSO are: four alternative topologies, several types of inertia weight, time-variant acceleration coefficients, time-variant maximum velocity, regrouping of particles when premature convergence is detected, different types of boundary conditions and many others. Additionally, hydroPSO implements recent PSO variants such as: Improved Particle Swarm
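
    For readers unfamiliar with the canonical algorithm that hydroPSO builds on, here is a bare-bones PSO in Python (hydroPSO itself is an R package; the inertia weight and acceleration coefficients below are common default-style values, not the package's settings).

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.72, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))            # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]                         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                          # simple boundary handling: clamp to box
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# Toy "calibration": recover two parameters by minimising a squared-error objective
best, err = pso(lambda p: np.sum((p - np.array([1.2, -0.7])) ** 2), [(-5, 5), (-5, 5)])
print("best parameters:", np.round(best, 3), "| objective:", round(err, 6))
```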

  18. Model-Free Trajectory Optimisation for Unmanned Aircraft Serving as Data Ferries for Widespread Sensors

    Ben Pearre

    2012-10-01

    Given multiple widespread stationary data sources such as ground-based sensors, an unmanned aircraft can fly over the sensors and gather the data via a wireless link. Performance criteria for such a network may incorporate costs such as trajectory length for the aircraft or the energy required by the sensors for radio transmission. Planning is hampered by the complex vehicle and communication dynamics and by uncertainty in the locations of sensors, so we develop a technique based on model-free learning. We present a stochastic optimisation method that allows the data-ferrying aircraft to optimise data collection trajectories through an unknown environment in situ, obviating the need for system identification. We compare two trajectory representations, one that learns near-optimal trajectories at low data requirements but that fails at high requirements, and one that gives up some performance in exchange for a data collection guarantee. With either encoding the ferry is able to learn significantly improved trajectories compared with alternative heuristics. To demonstrate the versatility of the model-free learning approach, we also learn a policy to minimise the radio transmission energy required by the sensor nodes, allowing prolonged network lifetime.

  19. Social security as Markov equilibrium in OLG models: A note

    Gonzalez Eiras, Martin

    2011-01-01

    I refine and extend the Markov perfect equilibrium of the social security policy game in Forni (2005) for the special case of logarithmic utility. Under the restriction that the policy function be continuous, instead of differentiable, the equilibrium is globally well defined and its dynamics...

  20. A Synthesis of Equilibrium and Historical Models of Landform Development.

    Renwick, William H.

    1985-01-01

    The synthesis of two approaches that can be used in teaching geomorphology is described. The equilibrium approach explains landforms and landform change in terms of equilibrium between landforms and controlling processes. The historical approach draws on climatic geomorphology to describe the effects of Quaternary climatic and tectonic events on…

  1. Numerical equilibrium analysis for structured consumer resource models.

    de Roos, A M; Diekmann, O; Getto, P; Kirkilionis, M A

    2010-02-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size-structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries can be defined in the (two-parameter) plane. We numerically trace these implicitly defined curves using alternating tangent prediction and Newton correction. Evaluation of the maps defining the curves involves integration over individual size and individual survival probability (and their derivatives) as functions of individual age. Such ingredients are often defined as solutions of ODEs, i.e., in general only implicitly. In our case, the right-hand sides of these ODEs feature discontinuities that are caused by an abrupt change of behavior at the size where juveniles are assumed to turn adult. So, we combine the numerical solution of these ODEs with curve tracing methods. We have implemented the algorithms for "Daphnia consuming algae" models in C code. The results obtained by way of this implementation are shown in the form of graphs.
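
    The curve-tracing core - tangent prediction followed by Newton correction along an implicitly defined curve - can be sketched on a trivial example (the unit circle). In the paper the map defining the curve additionally requires integrating ODEs over individual size and age, which is omitted here.

```python
import numpy as np

def G(u):                      # toy implicit curve: the unit circle G(x, y) = 0
    x, y = u
    return x**2 + y**2 - 1.0

def grad_G(u, h=1e-6):         # numerical gradient of G
    return np.array([(G(u + h * e) - G(u - h * e)) / (2 * h) for e in np.eye(2)])

u = np.array([1.0, 0.0])       # a point on the curve
step = 0.1
points = [u.copy()]
for _ in range(70):
    g = grad_G(u)
    t = np.array([-g[1], g[0]]) / np.linalg.norm(g)      # unit tangent (perpendicular to gradient)
    u_new = u + step * t                                 # tangent prediction
    for _ in range(10):                                  # Newton correction with a pseudo-arclength
        g = grad_G(u_new)                                # constraint along the tangent direction
        J = np.vstack([g, t])
        r = np.array([G(u_new), t @ (u_new - u) - step])
        u_new = u_new - np.linalg.solve(J, r)
        if abs(G(u_new)) < 1e-12:
            break
    u = u_new
    points.append(u.copy())
print("traced", len(points), "points; last point:", np.round(points[-1], 3))
```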

  2. An optimisation approach for capacity planning: modelling insights and empirical findings from a tactical perspective

    Andréa Nunes Carvalho

    2017-09-01

    The academic literature presents a research-practice gap on the application of decision support tools to address tactical planning problems in real-world organisations. This paper addresses this gap and extends previous action research on an optimisation model applied to tactical capacity planning in an engineer-to-order industrial setting. The issues discussed herein raise new insights to better understand the practical results that can be achieved through the proposed model. The topics presented include the modelling of objectives, the representation of the production process and the costing approach, as well as findings regarding managerial decisions and the scope of action considered. These insights may inspire ideas to academics and practitioners when developing tools for capacity planning problems in similar contexts.

  3. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Sitek Pawel

    2014-08-01

    The paper presents a concept and an outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined so as to exploit the advantages of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in a supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  4. Non-equilibrium mass transfer absorption model for the design of boron isotopes chemical exchange column

    Bai, Peng; Fan, Kaigong; Guo, Xianghai; Zhang, Haocui

    2016-01-01

    Highlights: • We propose a non-equilibrium mass transfer absorption model, instead of a distillation equilibrium model, to calculate boron isotope separation. • We apply the model to calculate the column height needed to meet prescribed separation requirements. - Abstract: To interpret the phenomenon of chemical exchange in boron isotope separation accurately, the process is treated as an absorption–reaction–desorption hybrid process rather than as a distillation equilibrium. A non-equilibrium mass transfer absorption model is put forward and a mass transfer enhancement factor E is introduced; the model is solved in MATLAB to find the packing height needed to meet the specified separation requirements.
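
    For orientation, a generic packed-column design relation of the kind such an absorption model builds on (standard two-film theory with an enhancement factor for reactive absorption; these are textbook forms, not the paper's own equations):

```latex
Z \;=\; H_{OG}\, N_{OG}, \qquad
H_{OG} \;=\; \frac{G}{K_y\, a\, S}, \qquad
N_{OG} \;=\; \int_{y_{\mathrm{out}}}^{\,y_{\mathrm{in}}} \frac{\mathrm{d}y}{y - y^{*}}, \qquad
\frac{1}{K_y} \;=\; \frac{1}{k_y} + \frac{m}{E\,k_x},
```

    where Z is the packing height, G the gas molar flow rate, a the interfacial area per unit volume, S the column cross-section, m the slope of the equilibrium line, and E the enhancement factor accounting for the chemical exchange reaction.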

  5. Revisiting EOR Projects in Indonesia through Integrated Study: EOR Screening, Predictive Model, and Optimisation

    Hartono, A. D.; Hakiki, Farizal; Syihab, Z.; Ambia, F.; Yasutra, A.; Sutopo, S.; Efendi, M.; Sitompul, V.; Primasari, I.; Apriandi, R.

    2017-01-01

    EOR preliminary analysis is pivotal at an early stage of assessment in order to elucidate EOR feasibility. This study proposes an in-depth analysis toolkit for EOR preliminary evaluation. The toolkit incorporates EOR screening, predictive, economic, risk analysis and optimisation modules. The screening module introduces algorithms which take statistical and engineering notions into consideration. The United States Department of Energy (U.S. DOE) predictive models were implemented in the predictive module. The economic module is available to assess project attractiveness, while Monte Carlo Simulation is applied to quantify the risk and uncertainty of the evaluated project. Optimisation scenarios of EOR practice can be evaluated using the optimisation module, in which the stochastic methods of Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Evolutionary Strategy (ES) were applied. The modules were combined into an integrated package for EOR preliminary assessment. Finally, we utilised the toolkit to evaluate several Indonesian oil fields for EOR evaluation (past projects) and feasibility (future projects). The attempt was able to update the previous consideration regarding EOR attractiveness and open new opportunities for EOR implementation in Indonesia.

  6. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
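
    For reference, the Roofline bound against which the results are correlated takes the usual form, with P_peak the peak floating-point throughput of the device, B_mem its sustained memory bandwidth, and I the arithmetic intensity (flop/byte) of the kernel:

```latex
P_{\mathrm{attainable}} \;=\; \min\bigl(P_{\mathrm{peak}},\; B_{\mathrm{mem}} \times I\bigr).
```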

  7. Revisiting EOR Projects in Indonesia through Integrated Study: EOR Screening, Predictive Model, and Optimisation

    Hartono, A. D.

    2017-10-17

    EOR preliminary analysis is pivotal at an early stage of assessment in order to elucidate EOR feasibility. This study proposes an in-depth analysis toolkit for EOR preliminary evaluation. The toolkit incorporates EOR screening, predictive, economic, risk analysis and optimisation modules. The screening module introduces algorithms which take statistical and engineering notions into consideration. The United States Department of Energy (U.S. DOE) predictive models were implemented in the predictive module. The economic module is available to assess project attractiveness, while Monte Carlo Simulation is applied to quantify the risk and uncertainty of the evaluated project. Optimisation scenarios of EOR practice can be evaluated using the optimisation module, in which the stochastic methods of Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Evolutionary Strategy (ES) were applied. The modules were combined into an integrated package for EOR preliminary assessment. Finally, we utilised the toolkit to evaluate several Indonesian oil fields for EOR evaluation (past projects) and feasibility (future projects). The attempt was able to update the previous consideration regarding EOR attractiveness and open new opportunities for EOR implementation in Indonesia.

  8. From equilibrium spin models to probabilistic cellular automata

    Georges, A.; Le Doussal, P.

    1989-01-01

    The general equivalence between D-dimensional probabilistic cellular automata (PCA) and (D + 1)-dimensional equilibrium spin models satisfying a disorder condition is first described in a pedagogical way and then used to analyze the phase diagrams, the critical behavior, and the universality classes of some automata. Diagrammatic representations of time-dependent correlation functions of PCA are introduced. Two important classes of PCA are singled out for which these correlation functions simplify: (1) quasi-Hamiltonian automata, which have a current-carrying steady state and for which some correlation functions are those of a D-dimensional static model; PCA satisfying the detailed balance condition appear as a particular case of these rules, for which the current vanishes. (2) Linear (and more generally affine) PCA, for which the diagrammatics reduces to a random walk problem closely related to (D + 1)-dimensional directed SAWs: both problems display critical behavior with mean-field exponents in any dimension. The correlation length and effective velocity of propagation of excitations can be calculated for affine PCA, as is shown on an explicit D = 1 example. The authors conclude with some remarks on nonlinear PCA, for which the diagrammatics is related to reaction-diffusion processes, and which belong in some cases to the universality class of Reggeon field theory.

  9. A non-equilibrium neutral model for analysing cultural change.

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in the frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency, and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication of whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants being present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed, we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
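
    The random-copying process underlying the framework is easy to simulate. The sketch below is a plain Wright-Fisher-style copying model with innovation and a time-varying population size; it is not the authors' likelihood machinery, and all parameter values are arbitrary.

```python
import numpy as np

def neutral_step(pop, mu, N_next, rng):
    """One generation: copy variants proportionally to frequency, then mutate some copies."""
    copied = rng.choice(pop, size=N_next, replace=True)          # random copying
    innovate = rng.random(N_next) < mu                           # innovation (mutation) events
    new_labels = np.arange(pop.max() + 1, pop.max() + 1 + innovate.sum())
    copied[innovate] = new_labels                                # each innovation is a new variant
    return copied

rng = np.random.default_rng(42)
pop = np.zeros(500, dtype=int)                                   # start: one variant, N = 500
sizes = np.linspace(500, 1500, 100).astype(int)                  # growing population (non-equilibrium)
for N_next in sizes:
    pop = neutral_step(pop, mu=0.01, N_next=N_next, rng=rng)
variants, counts = np.unique(pop, return_counts=True)
print("variants present:", len(variants), "| top counts:", np.sort(counts)[::-1][:5])
```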

  10. The Supermarket Model with Bounded Queue Lengths in Equilibrium

    Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.

    2018-04-01

    In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^(−α) and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^(−α+(k−1)β), and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
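
    A direct simulation of the join-the-shortest-of-d-sampled-queues dynamics in the paper's parameter regime, useful for eyeballing the predicted typical queue length ⌈α/β⌉. Finite-n runs only roughly reflect the asymptotic statement, and the uniformised event loop and parameter values below are choices made for this sketch.

```python
import numpy as np

def simulate(n=2000, alpha=0.5, beta=0.3, events=200_000, seed=0):
    rng = np.random.default_rng(seed)
    lam = 1.0 - n ** (-alpha)              # arrival rate per queue, lambda(n) = 1 - n^(-alpha)
    d = int(np.floor(n ** beta))           # number of sampled queues, d(n) = floor(n^beta)
    q = np.zeros(n, dtype=int)
    p_arrival = lam * n / (lam * n + n)    # uniformisation: arrivals vs. potential departures
    for _ in range(events):
        if rng.random() < p_arrival:       # arrival: join the shortest of d sampled queues
            sample = rng.integers(0, n, size=d)
            q[sample[np.argmin(q[sample])]] += 1
        else:                              # potential departure at a uniformly chosen server
            i = rng.integers(0, n)
            if q[i] > 0:
                q[i] -= 1
    return q

q = simulate()
print("max queue length:", q.max(), "| predicted ceil(alpha/beta):", int(np.ceil(0.5 / 0.3)))
```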

  11. Real-time Modelling, Diagnostics and Optimised MPPT for Residential PV Systems

    Sera, Dezso

    ...responsible for yield reduction of residential photovoltaic systems. Combining the model calculations with measurements, a method to detect changes in the panels’ series resistance, based on the slope of the I-V curve in the vicinity of open-circuit conditions and scaled to Standard Test Conditions (STC)... The work documented in the thesis is focused on two main sections. The first part is centred on Maximum Power Point Tracking (MPPT) techniques for photovoltaic arrays, optimised for fast-changing environmental conditions, and is described in Chapter 2. The second part is dedicated... to diagnostic functions as an additional tool to maximise the energy yield of photovoltaic arrays (Chapter 4). Furthermore, mathematical models of PV panels and arrays have been developed and built (detailed in Chapter 3) for testing MPPT algorithms, and for diagnostic purposes. In Chapter 2 an overview...

  12. Production optimisation in the petrochemical industry by hierarchical multivariate modelling. Phase 2: On-line implementation

    Nilsson, Aasa; Persson, Fredrik; Andersson, Magnus

    2009-07-15

    IVL, together with Emerson Process Management, has developed a decision support system (DSS) based on multivariate statistical process models. The system was implemented at Nynas AB's refinery in order to provide real-time TBP curves and to enable the operator to optimise the process with regard to product quality and energy consumption. The project resulted in the following proven benefits at the industrial reference site, Nynas Refinery in Gothenburg: increased yield of up to 14% (in relative terms) for the most valuable product, and decreased energy consumption of 8%. Validation of the model predictions against laboratory analysis showed that the prediction error lay within 1 deg C throughout the whole test period.

  13. Turbulence optimisation in stellarator experiments

    Proll, Josefine H.E. [Max-Planck/Princeton Center for Plasma Physics (Germany); Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstr. 1, 17491 Greifswald (Germany); Faber, Benjamin J. [HSX Plasma Laboratory, University of Wisconsin-Madison, Madison, WI 53706 (United States); Helander, Per; Xanthopoulos, Pavlos [Max-Planck/Princeton Center for Plasma Physics (Germany); Lazerson, Samuel A.; Mynick, Harry E. [Plasma Physics Laboratory, Princeton University, P.O. Box 451 Princeton, New Jersey 08543-0451 (United States)

    2015-05-01

    Stellarators, the twisted siblings of the axisymmetric fusion experiments called tokamaks, have historically confined the heat of the plasma less well than tokamaks and were therefore considered to be less promising candidates for a fusion reactor. This has changed, however, with the advent of stellarators in which the laminar transport is reduced to levels below that of tokamaks by shaping the magnetic field accordingly. As in tokamaks, the turbulent transport remains as the now dominant transport channel. Recent analytical theory suggests that the large configuration space of stellarators allows for an additional optimisation of the magnetic field to also reduce the turbulent transport. In this talk, the idea behind the turbulence optimisation is explained. We also present how an optimised equilibrium is obtained and how it might differ from the equilibrium field of an already existing device, and we compare experimental turbulence measurements in different configurations of the HSX stellarator in order to test the optimisation procedure.

  14. Three-dimensional modelling and numerical optimisation of the W7-X ICRH antenna

    Louche, F., E-mail: fabrice.louche@rma.ac.be [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium); Křivská, A.; Messiaen, A.; Ongena, J. [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium); Borsuk, V. [Institute of Energy and Climate Research – Plasma Physics, Forschungszentrum Juelich (Germany); Durodié, F.; Schweer, B. [Laboratoire de physique des plasmas de l’ERM, Laboratorium voor plasmafysica van de KMS (LPP-ERM/KMS), Ecole Royale Militaire, Koninklijke Militaire School, Brussels (Belgium)

    2015-10-15

    Highlights: • A simplified version of the ICRF antenna for the stellarator W7-X has been modelled with the 3D electromagnetic software Microwave Studio. This antenna can be tuned between 25 and 38 MHz with the help of adjustable capacitors. • In previous modelling work the front of the antenna was modelled with the help of 3D codes, while the capacitors were modelled as lumped elements with a given DC capacitance. As this approach does not take into account the effect of the internal inductance, a MWS model of these capacitors has been developed. • The initial geometry does not permit operation at 38 MHz. By modifying some geometrical parameters of the front face, it was possible to increase the frequency band of the antenna and to increase (by up to 25%) the maximum coupled power, while accounting for the technical constraints on the capacitors. • The W7-X ICRH antenna must be operated at 25 and 38 MHz, and for various toroidal phasings of the strap RF currents. Given the considered duty cycle, it is shown that, thanks to a special procedure based on minimisation techniques, it is possible to define a satisfactory optimum geometry in agreement with the specifications of the capacitors. • The various steps of the optimisation are validated with TOPICA simulations. For a given density profile the RF power coupling expectancy can be precisely computed. - Abstract: Ion Cyclotron Resonance Heating (ICRH) is a promising heating and wall conditioning method considered for the W7-X stellarator, and a dedicated ICRH antenna has been designed. This antenna must perform several tasks in a long-term physics programme: fast particle generation, heating at high densities, current drive and ICRH physics studies. Various minority heating scenarios are considered and two frequency bands will be used. In the present work a design for the low frequency range (25–38 MHz) only is developed. The antenna is made of 2 straps with tap feeds and tuning capacitors with DC capacitance in

  15. On non-equilibrium states in QFT model with boundary interaction

    Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Zamolodchikov, Alexander B.

    1999-01-01

    We prove that certain non-equilibrium expectation values in the boundary sine-Gordon model coincide with associated equilibrium-state expectation values in the systems which differ from the boundary sine-Gordon in that certain extra boundary degrees of freedom (q-oscillators) are added. Applications of this result to actual calculation of non-equilibrium characteristics of the boundary sine-Gordon model are also discussed

  16. Optimising Transport Decision Making using Customised Decision Models and Decision Conferences

    Barfod, Michael Bruhn

    The subject of this Ph.D. thesis entitled “Optimising Transport Decision Making using Customised Decision Models and Decision Conferences” is multi-criteria decision analysis (MCDA) and decision support in the context of transport infrastructure assessments. Despite the fact that large amounts...... is concerned with the insufficiency of conventional cost-benefit analysis (CBA), and proposes the use of MCDA as a supplementing tool in order to also capture impacts of a more strategic character in the appraisals and hence make more use of the often large efforts put in the preliminary examinations. MCDA...... and rail to bike transport projects. Two major concerns have been to propose an examination process that can be used in situations where complex decision problems need to be addressed by experts as well as non-experts in decision making, and to identify appropriate assessment techniques to be used...

  17. Geometric Generalisation of Surrogate Model-Based Optimisation to Combinatorial and Program Spaces

    Yong-Hyuk Kim

    2014-01-01

    Full Text Available Surrogate models (SMs) can profitably be employed, often in conjunction with evolutionary algorithms, in optimisation in which it is expensive to test candidate solutions. The spatial intuition behind SMs makes them naturally suited to continuous problems, and the only combinatorial problems that have been previously addressed are those with solutions that can be encoded as integer vectors. We show how radial basis functions can provide a generalised SM for combinatorial problems which have a geometric solution representation, through the conversion of that representation to a different metric space. This approach allows an SM to be cast in a natural way for the problem at hand, without ad hoc adaptation to a specific representation. We test this adaptation process on problems involving binary strings, permutations, and tree-based genetic programs.
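
    As a rough illustration of the idea (not the authors' implementation), the sketch below builds a Gaussian radial-basis-function surrogate over binary strings using the Hamming distance as the metric; the toy OneMax objective and all parameter values are placeholders for an expensive fitness function.

      import numpy as np

      def hamming(a, b):
          """Hamming distance between two binary strings (as 0/1 arrays)."""
          return np.sum(a != b)

      def fit_rbf_surrogate(X, y, eps=1.0):
          """Fit a Gaussian RBF interpolant over evaluated binary solutions.
          X: (n, d) array of 0/1 candidates, y: (n,) expensive objective values."""
          n = len(X)
          K = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  K[i, j] = np.exp(-eps * hamming(X[i], X[j]) ** 2)
          # Small ridge term for numerical stability of the interpolation system.
          return np.linalg.solve(K + 1e-8 * np.eye(n), y)

      def predict(x_new, X, w, eps=1.0):
          """Cheap surrogate prediction for an unevaluated binary candidate."""
          k = np.array([np.exp(-eps * hamming(x_new, xi) ** 2) for xi in X])
          return k @ w

      # Toy usage: surrogate for the (cheap here, expensive in practice) OneMax objective.
      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(20, 16))
      y = X.sum(axis=1).astype(float)
      w = fit_rbf_surrogate(X, y)
      x_candidate = rng.integers(0, 2, size=16)
      print(predict(x_candidate, X, w), x_candidate.sum())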

  18. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of food stuff, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development, for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares regression methods to develop bruise severity prediction models, and compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10%, in comparison with full spectrum-based models, as evaluated in terms of error of prediction (Root Mean Square Error of Cross-validation). PLS models to predict internal quality, such as sugar content and acidity were developed and compared to the versions optimized by genetic algorithm. Overall, the results highlighted the potential use of GA method to improve speed and accuracy of fruit quality prediction.
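
    A minimal sketch of the general approach described above — a genetic algorithm selecting wavelength subsets for a PLS regression scored by cross-validation — is given below; it uses synthetic spectra and scikit-learn's PLSRegression and is not the authors' pipeline (population size, mutation rate and the number of latent variables are illustrative).

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)

      # Synthetic stand-in for NIR spectra: 100 samples x 200 wavelengths, with a few
      # informative bands (real data would come from the spectrometer).
      X = rng.normal(size=(100, 200))
      y = X[:, 10] + 0.5 * X[:, 50] - X[:, 120] + 0.1 * rng.normal(size=100)

      def fitness(mask):
          """Cross-validated R^2 of a PLS model restricted to the selected wavelengths."""
          if mask.sum() < 2:
              return -np.inf
          pls = PLSRegression(n_components=min(2, int(mask.sum())))
          return cross_val_score(pls, X[:, mask.astype(bool)], y, cv=5).mean()

      def ga_select(pop_size=30, n_gen=15, p_mut=0.02):
          pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
          for _ in range(n_gen):
              scores = np.array([fitness(ind) for ind in pop])
              # Tournament selection of parents.
              parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                             for _ in range(pop_size)]]
              # Uniform crossover with the neighbouring parent, then bit-flip mutation.
              cross = rng.integers(0, 2, size=parents.shape).astype(bool)
              children = np.where(cross, parents, np.roll(parents, 1, axis=0))
              flips = rng.random(children.shape) < p_mut
              pop = np.where(flips, 1 - children, children)
          scores = np.array([fitness(ind) for ind in pop])
          return pop[scores.argmax()], scores.max()

      best_mask, best_score = ga_select()
      print("selected wavelengths:", np.flatnonzero(best_mask), "CV R^2:", round(best_score, 3))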

  19. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm: the genetic algorithm (GA), and the deterministic algorithm: the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine in comparison to the more computationally demanding GA routine to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest
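
    The incremental (greedy) routine can be sketched in a few lines: starting from an empty network, repeatedly add the candidate station whose inclusion gives the largest score. The code below is only a schematic illustration, with a toy spread-out-the-stations score standing in for the posterior uncertainty reduction of the Bayesian inversion.

      import itertools

      def incremental_design(candidates, n_sites, uncertainty_reduction):
          """Greedy (incremental) network design: add one station at a time, always
          picking the candidate that most improves the score of the current network.
          `uncertainty_reduction(network)` is a stand-in for the posterior-uncertainty
          reduction computed from the Bayesian inverse model."""
          network = []
          remaining = list(candidates)
          for _ in range(n_sites):
              best = max(remaining, key=lambda s: uncertainty_reduction(network + [s]))
              network.append(best)
              remaining.remove(best)
          return network

      # Toy illustration: stations are points on a line, and the score rewards spread-out networks.
      def toy_score(network):
          if len(network) < 2:
              return 0.0
          return sum(abs(a - b) for a, b in itertools.combinations(network, 2))

      candidates = [0.0, 1.0, 2.5, 4.0, 7.5, 9.0, 10.0]
      print(incremental_design(candidates, 5, toy_score))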

  20. Integration of environmental aspects in modelling and optimisation of water supply chains.

    Koleva, Mariya N; Calderón, Andrés J; Zhang, Di; Styan, Craig A; Papageorgiou, Lazaros G

    2018-04-26

    Climate change becomes increasingly more relevant in the context of water systems planning. Tools are necessary to provide the most economic investment option considering the reliability of the infrastructure from technical and environmental perspectives. Accordingly, in this work, an optimisation approach, formulated as a spatially-explicit multi-period Mixed Integer Linear Programming (MILP) model, is proposed for the design of water supply chains at regional and national scales. The optimisation framework encompasses decisions such as installation of new purification plants, capacity expansion, and raw water trading schemes. The objective is to minimise the total cost incurring from capital and operating expenditures. Assessment of available resources for withdrawal is performed based on hydrological balances, governmental rules and sustainable limits. In the light of the increasing importance of reliability of water supply, a second objective, seeking to maximise the reliability of the supply chains, is introduced. The epsilon-constraint method is used as a solution procedure for the multi-objective formulation. Nash bargaining approach is applied to investigate the fair trade-offs between the two objectives and find the Pareto optimality. The models' capability is addressed through a case study based on Australia. The impact of variability in key input parameters is tackled through the implementation of a rigorous global sensitivity analysis (GSA). The findings suggest that variations in water demand can be more disruptive for the water supply chain than scenarios in which rainfalls are reduced. The frameworks can facilitate governmental multi-aspect decision making processes for the adequate and strategic investments of regional water supply infrastructure. Copyright © 2018. Published by Elsevier B.V.
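
    The epsilon-constraint step mentioned above can be illustrated with a deliberately tiny linear programme: minimise cost while requiring a reliability proxy to exceed a threshold that is swept to trace the Pareto front. All coefficients below are hypothetical, and the paper's full MILP (plant siting, capacity expansion, trading) is not reproduced.

      import numpy as np
      from scipy.optimize import linprog

      # Illustrative data (hypothetical): two purification plants; decision variables are
      # their capacities. Objective 1: cost (minimise); objective 2: reliability proxy (maximise).
      cost = np.array([3.0, 5.0])          # cost per unit of capacity
      reliability = np.array([0.6, 0.9])   # reliability contribution per unit of capacity
      demand = 10.0

      def solve_for_epsilon(eps):
          """Minimise cost subject to reliability >= eps (epsilon-constraint method)."""
          # linprog minimises c @ x subject to A_ub @ x <= b_ub.
          A_ub = [[-1.0, -1.0],            # x1 + x2 >= demand
                  list(-reliability)]      # reliability @ x >= eps
          b_ub = [-demand, -eps]
          return linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

      # Sweep epsilon to trace an approximate Pareto front between the two objectives.
      for eps in np.linspace(6.0, 9.0, 4):
          res = solve_for_epsilon(eps)
          if res.success:
              print(f"reliability >= {eps:.1f}: cost = {res.fun:.2f}, capacities = {res.x.round(2)}")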

  1. A continuous stochastic model for non-equilibrium dense gases

    Sadr, M.; Gorji, M. H.

    2017-12-01

    While accurate simulations of dense gas flows far from equilibrium can be achieved by direct simulation adapted to the Enskog equation, the significant computational demand required for collisions appears as a major constraint. In order to cope with that, an efficient yet accurate solution algorithm based on the Fokker-Planck approximation of the Enskog equation is devised in this paper; the approximation is very much associated with the Fokker-Planck model derived from the Boltzmann equation by Jenny et al. ["A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion," J. Comput. Phys. 229, 1077-1098 (2010)] and Gorji et al. ["Fokker-Planck model for computational studies of monatomic rarefied gas flows," J. Fluid Mech. 680, 574-601 (2011)]. The idea behind these Fokker-Planck descriptions is to project the dynamics of discrete collisions implied by the molecular encounters into a set of continuous Markovian processes subject to drift and diffusion. Thereby, the evolutions of the particles representing the governing stochastic process become independent of each other, and thus very efficient numerical schemes can be constructed. By close inspection of the Enskog operator, it is observed that the dense gas effects contribute further to the advection of molecular quantities. That motivates a modelling approach where the dense gas corrections can be cast in the extra advection of particles. Therefore, the corresponding Fokker-Planck approximation is derived such that the evolution in the physical space accounts for the dense effects present in the pressure, stress tensor, and heat fluxes. Hence the consistency between the devised Fokker-Planck approximation and the Enskog operator is shown for the velocity moments up to the heat fluxes. For validation studies, a homogeneous gas inside a box, in addition to Fourier, Couette, and lid-driven cavity flow setups, is considered. The results based on the Fokker-Planck model are
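
    The drift-diffusion idea behind such Fokker-Planck particle methods can be illustrated with a generic Langevin (Ornstein-Uhlenbeck) velocity update, in which each particle relaxes independently toward a Maxwellian. The sketch below is not the Enskog-specific model of the paper; units and parameters are arbitrary.

      import numpy as np

      def langevin_velocity_step(v, u_mean, T, tau, dt, rng):
          """One Euler-Maruyama step of a Langevin (Fokker-Planck-type) velocity model:
          the drift relaxes each particle velocity toward the local mean with time scale tau,
          and the diffusion restores a Maxwellian with temperature T (units with m = k_B = 1).
          This is the generic drift-diffusion idea, not the paper's Enskog closure."""
          drift = -(v - u_mean) / tau * dt
          diffusion = np.sqrt(2.0 * T / tau * dt) * rng.normal(size=v.shape)
          return v + drift + diffusion

      rng = np.random.default_rng(0)
      v = rng.normal(0.0, 3.0, size=(10000, 3))   # initially "hot" particle velocities
      for _ in range(2000):
          v = langevin_velocity_step(v, u_mean=0.0, T=1.0, tau=0.5, dt=0.01, rng=rng)

      # The per-component sample variance should relax toward the target temperature T = 1.
      print(v.var(axis=0))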

  2. DAE Tools: equation-based object-oriented modelling, simulation and optimisation software

    Dragan D. Nikolić

    2016-04-01

    Full Text Available In this work, DAE Tools modelling, simulation and optimisation software, its programming paradigms and main features are presented. The current approaches to mathematical modelling such as the use of modelling languages and general-purpose programming languages are analysed. The common set of capabilities required by the typical simulation software are discussed, and the shortcomings of the current approaches recognised. A new hybrid approach is introduced, and the modelling languages and the hybrid approach are compared in terms of the grammar, compiler, parser and interpreter requirements, maintainability and portability. The most important characteristics of the new approach are discussed, such as: (1) support for the runtime model generation; (2) support for the runtime simulation set-up; (3) support for complex runtime operating procedures; (4) interoperability with the third party software packages (i.e. NumPy/SciPy); (5) suitability for embedding and use as a web application or software as a service; and (6) code-generation, model exchange and co-simulation capabilities. The benefits of an equation-based approach to modelling, implemented in a fourth generation object-oriented general purpose programming language such as Python are discussed. The architecture and the software implementation details as well as the type of problems that can be solved using DAE Tools software are described. Finally, some applications of the software at different levels of abstraction are presented, and its embedding capabilities and suitability for use as a software as a service is demonstrated.

  3. Optimisation of load control

    Koponen, P.

    1998-01-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but also the electricity consumption or, in other words, the load is controlled. Controlling the load of the power supply system is important, if easily controllable power production capacity is limited. Temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: Optimization of space heating and ventilation, when electricity price is time variable, load control model in power purchase optimization, optimization of direct load control sequences, interaction between load control optimization and power purchase optimization, literature on load control, optimization methods and field tests and response models of direct load control and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  4. Optimisation of load control

    Koponen, P [VTT Energy, Espoo (Finland)

    1998-08-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but also the electricity consumption or, in other words, the load is controlled. Controlling the load of the power supply system is important, if easily controllable power production capacity is limited. Temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: Optimization of space heating and ventilation, when electricity price is time variable, load control model in power purchase optimization, optimization of direct load control sequences, interaction between load control optimization and power purchase optimization, literature on load control, optimization methods and field tests and response models of direct load control and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  5. A Comparison of the Computation Times of Thermal Equilibrium and Non-equilibrium Models of Droplet Field in a Two-Fluid Three-Field Model

    Park, Ik Kyu; Cho, Heong Kyu; Kim, Jong Tae; Yoon, Han Young; Jeong, Jae Jun

    2007-12-15

    A computational model for transient, 3-dimensional, 2-phase flows was developed using an 'unstructured-FVM-based, non-staggered, semi-implicit numerical scheme' that considers thermally non-equilibrium droplets. The assumption of thermal equilibrium between the liquid and the droplets made in previous studies was no longer used, and three energy conservation equations for vapor, liquid and liquid droplets were set up. Thus, 9 conservation equations for mass, momentum and energy were established to simulate 2-phase flows. In this report, the governing equations and a semi-implicit numerical scheme for transient, 1-dimensional, 2-phase flows are described, taking into account the thermal non-equilibrium between the liquid and the liquid droplets. A comparison of this thermal non-equilibrium treatment with the previous model is also reported.

  6. Research on spot power market equilibrium model considering the electric power network characteristics

    Wang, Chengmin; Jiang, Chuanwen; Chen, Qiming

    2007-01-01

    According to economic theory, equilibrium is the optimum operational condition for the power market. A realistic spot power market cannot achieve the equilibrium condition due to network losses and congestion. The impact of network losses and congestion on the spot power market is analyzed in this paper in order to establish a new equilibrium model that considers network losses and transmission constraints. The OPF problem formulated according to the new equilibrium model is solved by means of the equal price principle. A case study on the IEEE-30-bus system is provided in order to demonstrate the effectiveness of the proposed approach. (author)

  7. Equilibrium and kinetic models for colloid release under transient solution chemistry conditions

    We present continuum models to describe colloid release in the subsurface during transient physicochemical conditions. Our modeling approach relates the amount of colloid release to changes in the fraction of the solid surface area that contributes to retention. Equilibrium, kinetic, equilibrium and...

  8. Dynamic Processes of Conceptual Change: Analysis of Constructing Mental Models of Chemical Equilibrium.

    Chiu, Mei-Hung; Chou, Chin-Cheng; Liu, Chia-Ju

    2002-01-01

    Investigates students' mental models of chemical equilibrium using dynamic science assessments. Reports that students at various levels have misconceptions about chemical equilibrium. Involves 10th grade students (n=30) in the study doing a series of hands-on chemical experiments. Focuses on the process of constructing mental models, dynamic…

  9. Non-equilibrium scaling analysis of the Kondo model with voltage bias

    Fritsch, Peter; Kehrein, Stefan

    2009-01-01

    The quintessential description of Kondo physics in equilibrium is obtained within a scaling picture that shows the buildup of Kondo screening at low temperature. For the non-equilibrium Kondo model with a voltage bias, the key new features are decoherence effects due to the current across the impurity. In the present paper, we show how one can develop a consistent framework for studying the non-equilibrium Kondo model within a scaling picture of infinitesimal unitary transformations (flow equations). Decoherence effects appear naturally in third order of the β-function and dominate the Hamiltonian flow for sufficiently large voltage bias. We work out the spin dynamics in non-equilibrium and compare it with finite temperature equilibrium results. In particular, we report on the behavior of the static spin susceptibility including leading logarithmic corrections and compare it with the celebrated equilibrium result as a function of temperature.

  10. A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis

    Masataka, SUZUKI; Yoshihiko, YAMAZAKI; Yumiko, TANIGUCHI; Department of Psychology, Kinjo Gakuin University; Department of Health and Physical Education, Nagoya Institute of Technology; College of Human Life and Environment, Kinjo Gakuin University

    2003-01-01

    SUZUKI,M., YAMAZAKI,Y. and TANIGUCHI,Y., A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis. Adv. Exerc. Sports Physiol., Vol.9, No.1 pp.7-25, 2003. According to the equilibrium point hypothesis of motor control, control action of muscles is not explicitly computed, but rather arises as a consequence of interaction among moving equilibrium point, reflex feedback and muscle mechanical properties. This approach is attractive as it obviates the n...

  11. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall-bounded, high-Reynolds-number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as those of the baseline models, but the over-prediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.

  12. Modeling of the equilibrium of a tokamak plasma

    Grandgirard, V.

    1999-12-01

    The simulation and the control of a plasma discharge in a tokamak require an efficient and accurate computation of the equilibrium, because this equilibrium needs to be recalculated every microsecond to simulate discharges that can last up to 1000 seconds. The purpose of this thesis is to propose numerical methods in order to calculate these equilibria with acceptable computer time and memory size. Chapter 1 deals with the hydrodynamic equations and sets up the problem. Chapter 2 gives a method to take into account the boundary conditions. Chapter 3 is dedicated to the optimization of the inversion of the system matrix. This matrix being quasi-symmetric, the Woodbury method combined with the Cholesky method has been used. This direct method has been compared with 2 iterative methods: GMRES (generalized minimal residual) and BCG (bi-conjugate gradient). The 2 last chapters study the control of the plasma equilibrium; this work is presented in the formalism of the optimal control of distributed systems and leads to non-linear equations of state and quadratic functionals that are solved numerically by a sequential quadratic method. This method is based on the replacement of the initial problem with a series of control problems involving linear equations of state. (A.C.)
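
    The solver comparison in Chapter 3 can be mimicked on a small synthetic system with SciPy: a Cholesky factorisation computed once and reused, against GMRES and BCG on the same right-hand side. The random test matrix below merely stands in for the discretised equilibrium operator of the thesis, and the default iterative tolerances are used.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve
      from scipy.sparse.linalg import gmres, bicg

      rng = np.random.default_rng(0)
      n = 500

      # Symmetric positive-definite test matrix standing in for the (quasi-symmetric)
      # equilibrium system matrix; the real matrix comes from the discretisation,
      # not from random data.
      A = rng.normal(size=(n, n))
      A = A @ A.T + n * np.eye(n)
      b = rng.normal(size=n)

      # Direct approach: factor once, then reuse the factorisation for repeated solves,
      # which is the point of a Cholesky/Woodbury strategy when the matrix changes
      # only slightly between time steps.
      c, low = cho_factor(A)
      x_direct = cho_solve((c, low), b)

      # Iterative alternatives (GMRES and BiCG), which avoid the O(n^3) factorisation
      # but whose cost depends on conditioning and preconditioning.
      x_gmres, info_g = gmres(A, b)
      x_bicg, info_b = bicg(A, b)

      for name, x in [("gmres", x_gmres), ("bicg", x_bicg)]:
          print(name, np.linalg.norm(x - x_direct) / np.linalg.norm(x_direct))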

  13. Equilibrium and non-equilibrium concepts in forest genetic modelling: population- and individually-based approaches

    Kramer, Koen; van der Werf, D. C.

    2010-01-01

    The environment is changing and so are forests, in their functioning, in species composition, and in the species’ genetic composition. Many empirical and process-based models exist to support forest management. However, most of these models do not consider the impact of environmental changes and forest management on genetic diversity nor on the rate of adaptation of critical plant processes. How genetic diversity and rates of adaptation depend on management actions is a crucial next step in m...

  14. Optimising a Model of Minimum Stock Level Control and a Model of Standing Order Cycle in Selected Foundry Plant

    Szymszal J.

    2013-09-01

    Full Text Available It has been found that an area offering significant reserves in procurement logistics is the rational management of the stock of raw materials. Currently, the main purpose of projects that increase the efficiency of inventory management is to rationalise all activities in this area while minimising the total inventory costs. The paper presents a method for optimising the inventory level of raw materials under foundry plant conditions using two different control models. The first model is based on the estimate of an optimal level of the minimum emergency stock of raw materials, giving information about the need for an order to be placed immediately and about the optimal size of the consignments ordered once the minimum emergency level has been reached. The second model is based on the estimate of a maximum inventory level of raw materials and an optimal order cycle. Optimisation of the presented models has been based on a prior selection and use of rational methods for forecasting the time series of deliveries of a chosen auxiliary material (ceramic filters) to a casting plant, including forecasting the mean size of the delivered batch of products and its standard deviation.
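
    In textbook form, the two control models correspond roughly to a reorder-point policy with safety stock (Model 1) and an order-up-to level for a fixed review cycle (Model 2). The sketch below uses these standard formulas with purely illustrative numbers; the paper's forecasting-based parameter estimation is not reproduced.

      import math

      def reorder_point(mean_daily_demand, std_daily_demand, lead_time_days, z=1.65):
          """Model 1 idea: minimum (emergency) stock level that triggers an order.
          z = 1.65 corresponds to roughly a 95% service level under a normal demand model."""
          demand_during_lead_time = mean_daily_demand * lead_time_days
          safety_stock = z * std_daily_demand * math.sqrt(lead_time_days)
          return demand_during_lead_time + safety_stock

      def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
          """Optimal size of the consignment ordered once the minimum level is reached."""
          return math.sqrt(2.0 * annual_demand * order_cost / holding_cost_per_unit)

      def order_up_to_level(mean_daily_demand, std_daily_demand, review_period_days,
                            lead_time_days, z=1.65):
          """Model 2 idea: maximum stock level for a fixed (standing) order cycle."""
          horizon = review_period_days + lead_time_days
          return mean_daily_demand * horizon + z * std_daily_demand * math.sqrt(horizon)

      # Illustrative numbers for a consumable such as ceramic filters (hypothetical).
      print(round(reorder_point(40, 12, lead_time_days=5)))
      print(round(economic_order_quantity(annual_demand=40 * 250, order_cost=80, holding_cost_per_unit=1.5)))
      print(round(order_up_to_level(40, 12, review_period_days=14, lead_time_days=5)))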

  15. Modeling of two-phase flow with thermal and mechanical non-equilibrium

    Houdayer, G.; Pinet, B.; Le Coq, G.; Reocreux, M.; Rousseau, J.C.

    1977-01-01

    To improve two-phase flow modeling by taking into account thermal and mechanical non-equilibrium, a joint effort on analytical experiments and physical modeling has been undertaken. A model describing thermal non-equilibrium effects is first presented. A correlation for mass transfer has been developed using steam-water critical flow tests. This model has been used to predict blowdown tests in a satisfactory manner. It has been incorporated in the CLYSTERE system code. To take into account mechanical non-equilibrium, a six-equation model is written. To get information on the momentum transfers, special nitrogen-water tests have been undertaken. The first results of these studies are presented

  16. The Extended Generalized Cost Concept and its Application in Freight Transport and General Equilibrium Modeling

    Tavasszy, L.; Davydenko, I.; Ruijgrok, K.

    2009-01-01

    The integration of Spatial Equilibrium models and Freight transport network models is important to produce consistent scenarios for future freight transport demand. At various spatial scales, we see the changes in production, trade, logistics networking and transportation, being driven by

  17. RESRO: A spatio-temporal model to optimise regional energy systems emphasising renewable energies

    Gadocha S.

    2012-10-01

    Full Text Available RESRO (Reference Energy System Regional Optimization) optimises the simultaneous fulfilment of the heat and power demand in regional energy systems. It is a mixed-integer program realised in the modelling language GAMS. The model handles geographically disaggregated data describing heat demand and renewable energy potentials (e.g. biomass, solar energy, ambient heat). Power demand is handled spatially aggregated in an hourly time resolution within 8 type days. The major idea is to use a high-spatial, low-temporal heat resolution and a low-spatial, high-temporal power resolution, with both demand levels linked to each other. Due to high transport losses, the possibilities for heat transport over long distances are limited. Thus, the spatial, raster-based approach is used to identify and utilise renewable energy resources for heat generation close to the customers, as well as to optimise district heating grids and related energy flows fed by heating plants or combined heat and power (CHP) plants fuelled by renewables. By combining the heat and electricity sectors within the model, it is possible to evaluate relationships between these energy fields, such as the use of CHP or heat pump technologies, and also to examine relationships between technologies such as solar thermal and photovoltaic facilities, which are in competition for the available, suitable roof or ground areas.

  18. Optimisation modelling to assess cost of dietary improvement in remote Aboriginal Australia.

    Julie Brimblecombe

    Full Text Available The cost and dietary choices required to fulfil nutrient recommendations defined nationally, need investigation, particularly for disadvantaged populations. We used optimisation modelling to examine the dietary change required to achieve nutrient requirements at minimum cost for an Aboriginal population in remote Australia, using where possible minimally-processed whole foods. A twelve month cross-section of population-level purchased food, food price and nutrient content data was used as the baseline. Relative amounts from 34 food group categories were varied to achieve specific energy and nutrient density goals at minimum cost while meeting model constraints intended to minimise deviation from the purchased diet. Simultaneous achievement of all nutrient goals was not feasible. The two most successful models (A & B) met all nutrient targets except sodium (146.2% and 148.9% of the respective target) and saturated fat (12.0% and 11.7% of energy). Model A was achieved with 3.2% lower cost than the baseline diet (which cost approximately AUD$13.01/person/day) and Model B at 7.8% lower cost but with a reduction in energy of 4.4%. Both models required very large reductions in sugar sweetened beverages (-90%) and refined cereals (-90%) and an approximate four-fold increase in vegetables, fruit, dairy foods, eggs, fish and seafood, and wholegrain cereals. This modelling approach suggested population level dietary recommendations at minimal cost based on the baseline purchased diet. Large shifts in diet in remote Aboriginal Australian populations are needed to achieve national nutrient targets. The modeling approach used was not able to meet all nutrient targets at less than current food expenditure.

  19. Optimisation modelling to assess cost of dietary improvement in remote Aboriginal Australia.

    Brimblecombe, Julie; Ferguson, Megan; Liberato, Selma C; O'Dea, Kerin; Riley, Malcolm

    2013-01-01

    The cost and dietary choices required to fulfil nutrient recommendations defined nationally, need investigation, particularly for disadvantaged populations. We used optimisation modelling to examine the dietary change required to achieve nutrient requirements at minimum cost for an Aboriginal population in remote Australia, using where possible minimally-processed whole foods. A twelve month cross-section of population-level purchased food, food price and nutrient content data was used as the baseline. Relative amounts from 34 food group categories were varied to achieve specific energy and nutrient density goals at minimum cost while meeting model constraints intended to minimise deviation from the purchased diet. Simultaneous achievement of all nutrient goals was not feasible. The two most successful models (A & B) met all nutrient targets except sodium (146.2% and 148.9% of the respective target) and saturated fat (12.0% and 11.7% of energy). Model A was achieved with 3.2% lower cost than the baseline diet (which cost approximately AUD$13.01/person/day) and Model B at 7.8% lower cost but with a reduction in energy of 4.4%. Both models required very large reductions in sugar sweetened beverages (-90%) and refined cereals (-90%) and an approximate four-fold increase in vegetables, fruit, dairy foods, eggs, fish and seafood, and wholegrain cereals. This modelling approach suggested population level dietary recommendations at minimal cost based on the baseline purchased diet. Large shifts in diet in remote Aboriginal Australian populations are needed to achieve national nutrient targets. The modeling approach used was not able to meet all nutrient targets at less than current food expenditure.
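
    The structure of such a model is the classical diet linear programme: minimise total cost subject to lower bounds on energy and nutrients and bounds on food amounts. The miniature example below uses scipy.optimize.linprog with four hypothetical food groups and made-up nutrient values, not the study's 34 food groups or its price data.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical food groups with cost (currency units per 100 g), energy (kJ) and
      # protein (g) per 100 g; real studies use full food-composition databases.
      foods = ["wholegrain cereal", "vegetables", "dairy", "fish"]
      cost = np.array([0.30, 0.50, 0.45, 1.20])         # objective: minimise total cost
      energy = np.array([1500.0, 120.0, 270.0, 600.0])  # kJ per 100 g
      protein = np.array([10.0, 2.0, 3.5, 20.0])        # g per 100 g

      # Constraints: at least 8700 kJ and 50 g protein per day, at most 1.5 kg of any food.
      A_ub = np.vstack([-energy, -protein])   # linprog uses A_ub @ x <= b_ub
      b_ub = np.array([-8700.0, -50.0])
      bounds = [(0.0, 15.0)] * len(foods)     # amounts in units of 100 g

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      for name, amount in zip(foods, res.x):
          print(f"{name}: {amount * 100:.0f} g/day")
      print(f"total cost: {res.fun:.2f} per day")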

  20. Discussions on the non-equilibrium effects in the quantitative phase field model of binary alloys

    Zhi-Jun, Wang; Jin-Cheng, Wang; Gen-Cang, Yang

    2010-01-01

    All the quantitative phase field models try to get rid of the artificial factors of solutal drag, interface diffusion and interface stretch in the diffuse interface. These artificial non-equilibrium effects due to the introduction of the diffuse interface are analysed based on the thermodynamic status across the diffuse interface in the quantitative phase field model of binary alloys. Results indicate that the non-equilibrium effects are related to the negative driving force in the local region on the solid side of the diffuse interface. The negative driving force results from the fact that the phase field model is derived from equilibrium conditions but is used to simulate the non-equilibrium solidification process. The dependence of the non-equilibrium effects on the interface thickness, and the restriction this places on large-scale simulation, are also discussed. (cross-disciplinary physics and related areas of science and technology)

  1. An exergy-based multi-objective optimisation model for energy retrofit strategies in non-domestic buildings

    García Kerdan, Iván; Raslan, Rokia; Ruyssevelt, Paul

    2016-01-01

    While the building sector has a significant thermodynamic improvement potential, exergy analysis has been shown to provide new insight for the optimisation of building energy systems. This paper presents an exergy-based multi-objective optimisation tool that aims to assess the impact of a diverse range of retrofit measures with a focus on non-domestic buildings. EnergyPlus was used as a dynamic calculation engine for first law analysis, while a Python add-on was developed to link dynamic exergy analysis and a Genetic Algorithm optimisation process with the aforementioned software. Two UK archetype case studies (an office and a primary school) were used to test the feasibility of the proposed framework. Different combinations of measures based on retrofitting the envelope insulation levels and the application of different HVAC configurations were assessed. The objective functions in this study are annual energy use, occupants' thermal comfort, and total building exergy destructions. A large range of optimal solutions was achieved, highlighting the framework's capabilities. The model achieved improvements of 53% in annual energy use, 51% in exergy destructions and 66% in thermal comfort for the school building, and 50%, 33%, and 80% for the office building. This approach can be extended by using exergoeconomic optimisation. - Highlights: • Integration of dynamic exergy analysis into a retrofit-oriented simulation tool. • Two UK non-domestic building archetypes are used as case studies. • The model delivers non-dominated solutions based on energy, exergy and comfort. • Exergy destructions of ERMs are optimised using GA algorithms. • Strengths and limitations of the proposed exergy-based framework are discussed.
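
    After each batch of EnergyPlus and exergy evaluations, the multi-objective step reduces to keeping the non-dominated solutions. A minimal Pareto filter is sketched below, with all three objectives treated as minimisation (thermal comfort expressed as discomfort hours); the numbers are invented for illustration only.

      def dominates(a, b):
          """True if solution a dominates b (all objectives are to be minimised)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(solutions):
          """Keep only non-dominated (energy, exergy destruction, discomfort) tuples."""
          front = []
          for cand in solutions:
              if not any(dominates(other, cand) for other in solutions if other is not cand):
                  front.append(cand)
          return front

      # Hypothetical retrofit evaluations: (annual energy kWh/m2, exergy destructions kWh/m2,
      # discomfort hours); the values are illustrative only.
      evaluated = [(120, 300, 450), (95, 280, 500), (95, 310, 400),
                   (140, 260, 470), (90, 290, 520), (95, 281, 510)]
      print(pareto_front(evaluated))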

  2. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role to determine the location of multiple sperms. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), which was derived from several visual cortex models, to segment the sperm head region. However, the proposed method suffered from parameter selection; thus, the ICM network is optimised using particle swarm optimization where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method resulted in rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperms. The proposed algorithm is expected to be implemented in analysing sperm motility because of the robustness and capability of this algorithm.
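
    A bare-bones global-best particle swarm optimiser of the kind used to tune the ICM parameters is sketched below; the sphere function stands in for the study's mutual-information fitness, and the inertia and acceleration coefficients are generic defaults rather than the authors' settings.

      import numpy as np

      def pso_minimise(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Minimal global-best particle swarm optimiser over box bounds."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          dim = len(bounds)
          x = rng.uniform(lo, hi, size=(n_particles, dim))      # positions
          v = np.zeros_like(x)                                   # velocities
          pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
          gbest = pbest[pbest_val.argmin()].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([f(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest, pbest_val.min()

      # Stand-in fitness: sphere function; in the study this would be the (negative)
      # feature mutual-information score of the ICM segmentation for given parameters.
      best, best_val = pso_minimise(lambda p: float(np.sum(p ** 2)), bounds=[(-5, 5)] * 3)
      print(best, best_val)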

  3. Techno-Economic Models for Optimised Utilisation of Jatropha curcas Linnaeus under an Out-Grower Farming Scheme in Ghana

    Isaac Osei

    2016-11-01

    Full Text Available Techno-economic models for optimised utilisation of jatropha oil under an out-grower farming scheme were developed based on different considerations for oil and by-product utilisation. Model 1: Out-grower scheme where oil is exported and press cake utilised for compost. Model 2: Out-grower scheme with six scenarios considered for the utilisation of oil and by-products. Linear programming models were developed based on outcomes of the models to optimise the use of the oil through profit maximisation. The findings revealed that Model 1 was financially viable from the processors’ perspective but not for the farmer at a seed price of $0.07/kg. All scenarios considered under Model 2 were financially viable from the processors’ perspective but not for the farmer at a seed price of $0.07/kg; however, at a seed price of $0.085/kg, financial viability was achieved for both parties. Optimising the utilisation of the oil resulted in an annual maximum profit of $123,300.

  4. The truthful signalling hypothesis: an explicit general equilibrium model.

    Hausken, Kjell; Hirshleifer, Jack

    2004-06-21

    In mating competition, the truthful signalling hypothesis (TSH), sometimes known as the handicap principle, asserts that higher-quality males signal while lower-quality males do not (or else emit smaller signals). Also, the signals are "believed", that is, females mate preferentially with higher-signalling males. Our analysis employs specific functional forms to generate analytic solutions and numerical simulations that illuminate the conditions needed to validate the TSH. Analytic innovations include: (1) A Mating Success Function indicates how female mating choices respond to higher and lower signalling levels. (2) A congestion function rules out corner solutions in which females would mate exclusively with higher-quality males. (3) A Malthusian condition determines equilibrium population size as related to per-capita resource availability. Equilibria validating the TSH are achieved over a wide range of parameters, though not universally. For TSH equilibria it is not strictly necessary that the high-quality males have an advantage in terms of lower per-unit signalling costs, but a cost difference in favor of the low-quality males cannot be too great if a TSH equilibrium is to persist. And although the literature has paid less attention to these points, TSH equilibria may also fail if: the quality disparity among males is too great, or the proportion of high-quality males in the population is too large, or if the congestion effect is too weak. Signalling being unprofitable in aggregate, it can take off from a no-signalling equilibrium only if the trait used for signalling is not initially a handicap, but instead is functionally useful at low levels. Selection for this trait sets in motion a bandwagon, whereby the initially useful indicator is pushed by male-male competition into the domain where it does indeed become a handicap.

  5. Thermal time constant: optimising the skin temperature predictive modelling in lower limb prostheses using Gaussian processes.

    Mathur, Neha; Glesk, Ivan; Buis, Arjan

    2016-06-01

    Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring of the interface temperature at skin level in lower-limb prostheses is notoriously complicated. This is due to the flexible nature of the interface liners used, which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm - Gaussian processes - is employed, which utilises the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of the thermal time constant of prosthetic materials in the Gaussian processes technique, which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of the thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable.
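
    A minimal Gaussian-process regression along these lines — predicting skin temperature from the socket/liner temperature and a thermal-time-constant feature — is sketched below with scikit-learn and synthetic data; the kernel choice and the assumed input-output relationship are illustrative, not those of the Letter.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

      rng = np.random.default_rng(0)

      # Synthetic training data: columns are (socket/liner temperature in C, thermal time
      # constant of the liner material in s). Real inputs would come from the prosthesis
      # sensors and from material characterisation.
      X = np.column_stack([rng.uniform(28, 36, 80), rng.uniform(50, 400, 80)])
      # Hypothetical relationship plus measurement noise, used only to generate targets.
      y = X[:, 0] + 1.5 * np.exp(-X[:, 1] / 200.0) + 0.1 * rng.normal(size=80)

      kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 100.0]) + WhiteKernel(0.01)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

      # Predict residual-limb skin temperature with uncertainty for new conditions.
      X_new = np.array([[33.0, 120.0], [33.0, 350.0]])
      mean, std = gp.predict(X_new, return_std=True)
      for xi, m, s in zip(X_new, mean, std):
          print(f"liner {xi[0]:.1f} C, tau {xi[1]:.0f} s -> skin {m:.2f} +/- {s:.2f} C")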

  6. Equilibrium polymerization models of re-entrant self-assembly

    Dudowicz, Jacek; Douglas, Jack F.; Freed, Karl F.

    2009-04-01

    As is well known, liquid-liquid phase separation can occur either upon heating or cooling, corresponding to lower and upper critical solution phase boundaries, respectively. Likewise, self-assembly transitions from a monomeric state to an organized polymeric state can proceed either upon increasing or decreasing temperature, and the concentration dependent ordering temperature is correspondingly called the "floor" or "ceiling" temperature. Motivated by the fact that some phase separating systems exhibit closed loop phase boundaries with two critical points, the present paper analyzes self-assembly analogs of re-entrant phase separation, i.e., re-entrant self-assembly. In particular, re-entrant self-assembly transitions are demonstrated to arise in thermally activated equilibrium self-assembling systems, when thermal activation is more favorable than chain propagation, and in equilibrium self-assembly near an adsorbing boundary where strong competition exists between adsorption and self-assembly. Apparently, the competition between interactions or equilibria generally underlies re-entrant behavior in both liquid-liquid phase separation and self-assembly transitions.

  7. Vapor-liquid equilibrium thermodynamics of N2 + CH4 - Model and Titan applications

    Thompson, W. R.; Zollweg, John A.; Gabis, David H.

    1992-01-01

    A thermodynamic model is presented for vapor-liquid equilibrium in the N2 + CH4 system, which is implicated in calculations of the Titan tropospheric clouds' vapor-liquid equilibrium thermodynamics. This model imposes constraints on the consistency of experimental equilibrium data, and embodies temperature effects by encompassing enthalpy data; it readily calculates the saturation criteria, condensate composition, and latent heat for a given pressure-temperature profile of the Titan atmosphere. The N2 content of condensate is about half of that computed from Raoult's law, and about 30 percent greater than that computed from Henry's law.

  8. Model-based online optimisation. Pt. 1: active learning; Modellbasierte Online-Optimierung moderner Verbrennungsmotoren. T. 1: Aktives Lernen

    Poland, J.; Knoedler, K.; Zell, A. [Tuebingen Univ. (Germany). Lehrstuhl fuer Rechnerarchitektur; Fleischhauer, T.; Mitterer, A.; Ullmann, S. [BMW Group (Germany)

    2003-05-01

    This two-part article presents the model-based optimisation algorithm 'mbminimize', which was developed in a joint project of the University of Tuebingen and the BMW Group for the online optimisation of internal combustion engines on the engine test bed. The first part concentrates on the basic algorithmic design, as well as on modelling, experimental design and active learning. The second part will discuss strategies for dealing with limits such as knocking. (orig.)

  9. Equilibrium and transient conductivity for gadolinium-doped ceria under large perturbations: II. Modeling

    Zhu, Huayang; Ricote, Sandrine; Coors, W. Grover

    2014-01-01

    A model-based approach is used to interpret equilibrium and transient conductivity measurements for 10% gadolinium-doped ceria: Ce0.9Gd0.1O1.95 − δ (GDC10). The measurements were carried out by AC impedance spectroscopy on slender extruded GDC10 rods. Although equilibrium conductivity measurements provide sufficient information from which to derive material properties, it is found that uniquely establishing properties is difficult. Augmenting equilibrium measurements with conductivity relaxation significantly improves the evaluation of needed physical properties. This paper develops and applies the computational implementation of a Nernst–Planck–Poisson (NPP) model to represent and interpret conductivity-relaxation measurements. Defect surface chemistry is represented with both equilibrium and finite-rate kinetic models. The experiments and the models are capable of representing relaxations from strongly…

  10. Optimisation models for decision support in the development of biomass-based industrial district-heating networks in Italy

    Chinese, Damiana; Meneghetti, Antonella

    2005-01-01

    A system optimisation approach is proposed to design biomass-based district-heating networks in the context of industrial districts, which are one of the main successful productive aspects of Italian industry. Two different perspectives are taken into account, that of utilities and of policy makers, leading to two optimisation models to be further integrated. A mixed integer linear-programming model is developed for a utility company's profit maximisation, while a linear-programming model aims at minimising the balance of greenhouse-gas emissions related to the proposed energy system and the avoided emissions due to the substitution of current fossil-fuel boilers with district-heating connections. To systematically compare their results, a sensitivity analysis is performed with respect to network size in order to identify how the optimal system configuration, in terms of selected boilers to be connected to a multiple energy-source network, may vary in the two cases and to detect possible optimal sizes. Then a factorial analysis is adopted to rank desirable client types under the two perspectives and identify proper marketing strategies. The proposed optimisation approach was applied to the design of a new district-heating network in the chair-manufacturing district of North-Eastern Italy. (Author)

  11. Improving firm performance in out-of-equilibrium, deregulated markets using feedback simulation models

    Gary, S.; Larsen, E.R.

    2000-01-01

    Deregulation has reshaped the utility sector in many countries around the world. Organisations in these deregulated industries must adopt new policies to guide, in an uncertain and unfamiliar environment, the strategic decisions that determine the short- and long-term fate of their companies. Traditional economic equilibrium models do not adequately address the issues facing these organisations in the shift towards deregulated market competition. Equilibrium assumptions break down in the out-of-equilibrium transition to competitive markets, and therefore different underpinning assumptions must be adopted in order to guide management in these periods. Simulation models incorporating information feedback through behavioural policies fill the void left by equilibrium models and support strategic policy analysis in out-of-equilibrium markets. As an example, we present a feedback simulation model developed to examine firm and industry level performance consequences of new generation capacity investment policies in the deregulated UK electricity sector. The model explicitly captures behavioural decision policies of boundedly rational managers and avoids equilibrium assumptions. Such models are essential to help managers evaluate the performance impact of various strategic policies in environments in which disequilibrium behaviour dominates. (Author)

  12. Models of supply function equilibrium with applications to the electricity industry

    Aromi, J. Daniel

    Electricity market design requires tools that result in a better understanding of the incentives of generators and consumers. Chapters 1 and 2 provide tools and applications of these tools to analyze incentive problems in electricity markets. In chapter 1, models of supply function equilibrium (SFE) with asymmetric bidders are studied. I prove the existence and uniqueness of equilibrium in an asymmetric SFE model. In addition, I propose a simple algorithm to calculate numerically the unique equilibrium. As an application, a model of investment decisions is considered that uses the asymmetric SFE as an input. In this model, firms can invest in different technologies, each characterized by distinct variable and fixed costs. In chapter 2, option contracts are introduced to a supply function equilibrium (SFE) model. The uniqueness of the equilibrium in the spot market is established. Comparative statics results on the effect of option contracts on the equilibrium price are presented. A multi-stage game where option contracts are traded before the spot market stage is considered. When contracts are optimally procured by a central authority, the selected profile of option contracts is such that the spot market price equals marginal cost for any load level, resulting in a significant reduction in cost. If load serving entities (LSEs) are price takers, in equilibrium, there is no trade of option contracts. Even when LSEs have market power, the central authority's solution cannot be implemented in equilibrium. In chapter 3, we consider a game in which a buyer must repeatedly procure an input from a set of firms. In our model, the buyer is able to sign long-term contracts that establish the likelihood with which the next-period contract is awarded to an entrant or the incumbent. We find that the buyer finds it optimal to favor the incumbent; this generates more intense competition between suppliers. In a two-period model we are able to completely characterize the optimal mechanism.

  13. Quantum Cournot equilibrium for the Hotelling–Smithies model of product choice

    Rahaman, Ramij; Majumdar, Priyadarshi; Basu, B

    2012-01-01

    This paper demonstrates the quantization of a spatial Cournot duopoly model with product choice, a two-stage game focusing on non-cooperation in locations and quantities. With quantization, the players can access a continuous set of strategies, using a continuous-variable quantum mechanical approach. The presence of quantum entanglement in the initial state identifies a quantity equilibrium for each location pair choice with any transport cost. Also, higher profit is obtained by the firms at the Nash equilibrium. The adoption of quantum strategies is rewarded by the existence of a larger quantum strategic space at equilibrium. (paper)

  14. A two-temperature chemical non-equilibrium modeling of DC arc plasma

    Qian Haiyang; Wu Bin

    2011-01-01

    To gain a better understanding of the non-equilibrium characteristics of DC arc plasma, a two-dimensional axisymmetric two-temperature chemical non-equilibrium (2T-NCE) model is applied to a direct-current arc argon plasma generator with a water-cooled constrictor at atmospheric pressure. The results show that the relationship between the electron temperature and the heavy-particle temperature varies noticeably under different working parameters, indicating that the DC arc plasma has a strongly non-equilibrium character. (authors)

  15. KEMOD: A mixed chemical kinetic and equilibrium model of aqueous and solid phase geochemical reactions

    Yeh, G.T.; Iskra, G.A.

    1995-01-01

    This report presents the development of a mixed chemical Kinetic and Equilibrium MODel in which every chemical species can be treated either as an equilibrium-controlled or as a kinetically controlled reaction. The reaction processes include aqueous complexation, adsorption/desorption, ion exchange, precipitation/dissolution, oxidation/reduction, and acid/base reactions. Further development and modification of KEMOD can be made in: (1) inclusion of species switching solution algorithms, (2) incorporation of the effect of temperature and pressure on equilibrium and rate constants, and (3) extension to high ionic strength

  16. Equilibrium and nonequilibrium attractors for a discrete, selection-migration model

    James F. Selgrade; James H. Roberds

    2003-01-01

    This study presents a discrete-time model for the effects of selection and immigration on the demographic and genetic compositions of a population. Under biologically reasonable conditions, it is shown that the model always has an equilibrium. Although equilibria for similar models without migration must have real eigenvalues, for this selection-migration model we...

  17. Exploring the Use of Multiple Analogical Models when Teaching and Learning Chemical Equilibrium

    Harrison, Allan G.; De Jong, Onno

    2005-01-01

    This study describes the multiple analogical models used to introduce and teach Grade 12 chemical equilibrium. We examine the teacher's reasons for using models, explain each model's development during the lessons, and analyze the understandings students derived from the models. A case study approach was used and the data were drawn from the…

  18. Emergency food storage for organisations and citizens in New Zealand: results of optimisation modelling.

    Nghiem, Nhung; Carter, Mary-Ann; Wilson, Nick

    2012-12-14

    New Zealand (NZ) is a country subject to a wide range of natural disasters, some of which (e.g., floods and storms) may increase in frequency and severity with the effects of climate change. To improve disaster preparations, we aimed to use scenario development and linear programming to identify the lowest-cost foods for emergency storage. We used NZ food price data (e.g., from the Food Price Index) and nutritional data from a NZ food composition database. Different scenarios were modelled in Excel and R along with uncertainty analysis. A collection of low-cost emergency storage foods that meet daily energy requirements for men was identified, e.g., at a median purchase cost of NZ$2.21 per day (equivalent to US$1.45) (95% simulation interval = NZ$2.04 to 2.38). In comparison, the cost of such a collection of foods that did not require cooking was NZ$3.67 per day. While meeting all nutritional recommendations (and not just energy) is far from essential in a disaster setting, if such nutritionally optimised foods are purchased for storage, then the cost would be higher (NZ$7.10 per day). Where a zero level of food spoilage was assumed (e.g., storage by a government agency), the cost of purchasing food for storage was as low as NZ$1.93 per day. It appears to cost very little to purchase basic emergency foods for storage in the current New Zealand setting. The lists of the foods identified could be considered by organisations that participate in disaster relief (civil defence) but also by citizens.

  19. Partition Function and Configurational Entropy in Non-Equilibrium States: A New Theoretical Model

    Akira Takada

    2018-03-01

    Full Text Available A new model of non-equilibrium thermodynamic states has been investigated on the basis of the fact that all thermodynamic variables can be derived from partition functions. We have thus attempted to define partition functions for non-equilibrium conditions by introducing the concept of pseudo-temperature distributions. These pseudo-temperatures are configurational in origin and distinct from kinetic (phonon) temperatures because they refer to the particular fragments of the system with specific energies. This definition allows thermodynamic states to be described either for equilibrium or non-equilibrium conditions. In addition, a new formulation of an extended canonical partition function, internal energy and entropy is derived from this new temperature definition. With this new model, computational experiments are performed on simple non-interacting systems to investigate cooling and two distinct relaxational effects in terms of the time profiles of the partition function, internal energy and configurational entropy.
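
    For the uniform-temperature (equilibrium) limiting case, the quantities discussed above follow directly from the canonical partition function; a short sketch for a discrete-level, non-interacting system is given below (units with k_B = 1, levels chosen arbitrarily). The paper's fragment-wise pseudo-temperature extension is not implemented here.

      import numpy as np

      def canonical_quantities(energies, T, k_B=1.0):
          """Canonical partition function Z, internal energy U and entropy S for a set of
          discrete energy levels at a single (equilibrium) temperature T."""
          beta = 1.0 / (k_B * T)
          weights = np.exp(-beta * energies)
          Z = weights.sum()
          p = weights / Z
          U = np.sum(p * energies)
          S = -k_B * np.sum(p * np.log(p))
          return Z, U, S

      # Simple non-interacting two-level system: energy levels 0 and 1 (arbitrary units).
      levels = np.array([0.0, 1.0])
      for T in (0.2, 1.0, 5.0):
          Z, U, S = canonical_quantities(levels, T)
          print(f"T = {T}: Z = {Z:.3f}, U = {U:.3f}, S = {S:.3f}")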

  20. BGK-type models in strong reaction and kinetic chemical equilibrium regimes

    Monaco, R; Bianchi, M Pandolfi; Soares, A J

    2005-01-01

    A BGK-type procedure is applied to multi-component gases undergoing chemical reactions of bimolecular type. The relaxation process towards local Maxwellians, depending on the mass and number densities of each species as well as on the common velocity and temperature, is investigated in two different cases with respect to chemical regimes. These cases are related to the strong reaction regime characterized by slow reactions, and to the kinetic chemical equilibrium regime where fast reactions take place. The consistency properties of both models are stated in detail. The trend to equilibrium is numerically tested and comparisons for the two regimes are performed within the hydrogen-air and carbon-oxygen reaction mechanisms. In the spatially homogeneous case, it is also shown that the thermodynamical equilibrium of the models satisfactorily recovers the asymptotic equilibrium solutions to the reactive Euler equations.

  1. Parametrizing coarse grained models for molecular systems at equilibrium

    Kalligiannaki, Evangelia; Chazirakis, A.; Tsourtis, A.; Katsoulakis, M. A.; Plecháč, P.; Harmandaris, V.

    2016-01-01

    Hierarchical coarse graining of atomistic molecular systems at equilibrium has been an intensive research topic over the last few decades. In this work we (a) review theoretical and numerical aspects of different parametrization methods (structural-based, force matching and relative entropy) to derive the effective interaction potential between coarse-grained particles. All methods approximate the many body potential of mean force; resulting, however, in different optimization problems. (b) We also use a reformulation of the force matching method by introducing a generalized force matching condition for the local mean force in the sense that allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (E. Kalligiannaki, et al., J. Chem. Phys. 2015). We apply and compare these methods to: (a) a benchmark system of two isolated methane molecules; (b) methane liquid; (c) water; and (d) an alkane fluid. Differences between the effective interactions, derived from the various methods, are found that depend on the actual system under study. The results further reveal the relation of the various methods and the sensitivities that may arise in the implementation of numerical methods used in each case.

  2. Parametrizing coarse grained models for molecular systems at equilibrium

    Kalligiannaki, Evangelia

    2016-10-18

    Hierarchical coarse graining of atomistic molecular systems at equilibrium has been an intensive research topic over the last few decades. In this work we (a) review theoretical and numerical aspects of different parametrization methods (structural-based, force matching and relative entropy) to derive the effective interaction potential between coarse-grained particles. All methods approximate the many body potential of mean force; resulting, however, in different optimization problems. (b) We also use a reformulation of the force matching method by introducing a generalized force matching condition for the local mean force in the sense that allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (E. Kalligiannaki, et al., J. Chem. Phys. 2015). We apply and compare these methods to: (a) a benchmark system of two isolated methane molecules; (b) methane liquid; (c) water; and (d) an alkane fluid. Differences between the effective interactions, derived from the various methods, are found that depend on the actual system under study. The results further reveal the relation of the various methods and the sensitivities that may arise in the implementation of numerical methods used in each case.

  3. A Tightly Coupled Non-Equilibrium Magneto-Hydrodynamic Model for Inductively Coupled RF Plasmas

    2016-02-29

    development of a tightly coupled magneto-hydrodynamic model for Inductively Coupled Radio-Frequency (RF) Plasmas. Non-Local Thermodynamic Equilibrium (NLTE) effects are described based on a hybrid State-to-State... Inductively Coupled Plasma (ICP) torches have a wide range of possible applications, which include deposition of metal coatings and synthesis of ultra-fine powders

  4. The Equilibrium Analysis of a Closed Economy Model with Government and Money Market Sector

    Catalin Angelo Ioan

    2011-10-01

    Full Text Available In this paper, we first study the static equilibrium of a closed economy model in terms of the dependence of national income and the interest rate on the main factors, namely the marginal propensity to consume, the tax rate, the investment rate and the rate of currency demand. In the second part, we study the stability of the dynamic equilibrium solutions. We thus obtain the variation functions of national income and the interest rate, together with their limit values.

  5. The restricted stochastic user equilibrium with threshold model: Large-scale application and parameter testing

    Rasmussen, Thomas Kjær; Nielsen, Otto Anker; Watling, David P.

    2017-01-01

    Equilibrium model (DUE), by combining the strengths of the Boundedly Rational User Equilibrium model and the Restricted Stochastic User Equilibrium model (RSUE). Thereby, the RSUET model reaches an equilibrated solution in which the flow is distributed according to Random Utility Theory among a consistently...... model improves the behavioural realism, especially for high congestion cases. Also, fast and well-behaved convergence to equilibrated solutions among non-universal choice sets is observed across different congestion levels, choice model scale parameters, and algorithm step sizes. Clearly, the results...... highlight that the RSUET outperforms the MNP SUE in terms of convergence, calculation time and behavioural realism. The choice set composition is validated by using 16,618 observed route choices collected by GPS devices in the same network and observing their reproduction within the equilibrated choice sets...

  6. Modelling non-equilibrium thermodynamic systems from the speed-gradient principle.

    Khantuleva, Tatiana A; Shalymov, Dmitry S

    2017-03-06

    The application of the speed-gradient (SG) principle to distribution systems far from thermodynamic equilibrium is investigated. The options for applying the SG principle to describe non-equilibrium transport processes in real-world environments are discussed. Investigation of a non-equilibrium system's evolution at different scale levels via the SG principle allows for a fresh look at the thermodynamics problems associated with the behaviour of the system entropy. Generalized dynamic equations for finite and infinite numbers of constraints are proposed. It is shown that the stationary solution to the equations resulting from the SG principle entirely coincides with the locally equilibrium distribution function obtained by Zubarev. A new approach to describing the time evolution of systems far from equilibrium is proposed, based on application of the SG principle at the intermediate scale level of the system's internal structure. The problem of the high-rate shear flow of viscous fluid near a rigid plane plate is discussed. It is shown that the SG principle allows closed mathematical models of non-equilibrium processes to be constructed. This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).

  7. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Thornton, Peter E [ORNL]; Wang, Weile [ORNL]; Law, Beverly E. [Oregon State University]; Nemani, Ramakrishna R [NASA Ames Research Center]

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
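
    The core idea behind such equilibrium equations is that a pool governed by first-order turnover has a closed-form steady state, which can replace a long spin-up run. A minimal sketch in Python (the input and rate constant are hypothetical, not Biome-BGC values):

      # For a pool obeying dC/dt = I - k*C, the steady state is C* = I/k.
      litter_input = 0.35   # kgC m-2 yr-1 entering the pool (hypothetical)
      decay_rate = 0.007    # yr-1 first-order turnover rate (hypothetical)

      c_equilibrium = litter_input / decay_rate   # direct steady-state estimate

      # The same state reached by brute-force spin-up, one model year per step:
      c = 0.0
      for _ in range(5000):
          c += litter_input - decay_rate * c
      print(c_equilibrium, c)                     # both approach 50 kgC m-2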

  8. Modeling Mathematical Programs with Equilibrium Constraints in Pyomo

    Hart, William E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Siirola, John Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
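
    A small sketch of what such a model can look like in Pyomo is given below. The component names follow the pyomo.mpec package as I recall it (Complementarity, complements, and the 'mpec.simple_nonlinear' transformation) and should be checked against the Pyomo documentation; the toy objective and complementarity condition are invented, and solver availability is assumed.

      # Toy MPEC: minimise a quadratic subject to a complementarity condition.
      import pyomo.environ as pyo
      from pyomo.mpec import Complementarity, complements

      m = pyo.ConcreteModel()
      m.x = pyo.Var(within=pyo.NonNegativeReals)
      m.y = pyo.Var(within=pyo.NonNegativeReals)
      m.obj = pyo.Objective(expr=(m.x - 2) ** 2 + (m.y - 1) ** 2)

      # y >= 0 is complementary to (x + y - 1) >= 0: at least one holds with equality.
      m.cc = Complementarity(expr=complements(m.y >= 0, m.x + m.y - 1 >= 0))

      # Re-express the complementarity condition as smooth NLP constraints,
      # then hand the transformed model to any NLP solver that is installed.
      pyo.TransformationFactory('mpec.simple_nonlinear').apply_to(m)
      # pyo.SolverFactory('ipopt').solve(m)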

  9. Modeling Water Utility Investments and Improving Regulatory Policies using Economic Optimisation in England and Wales

    Padula, S.; Harou, J. J.

    2012-12-01

    Water utilities in England and Wales are regulated natural monopolies called 'water companies'. Water companies must obtain periodic regulatory approval for all investments (new supply infrastructure or demand management measures). Both water companies and their regulators use results from least economic cost capacity expansion optimisation models to develop or assess water supply investment plans. This presentation first describes the formulation of a flexible supply-demand planning capacity expansion model for water system planning. The model uses a mixed integer linear programming (MILP) formulation to choose the least-cost schedule of future supply schemes (reservoirs, desalination plants, etc.), demand management (DM) measures (leakage reduction, water efficiency and metering options) and bulk transfers. Decisions include what schemes to implement, when to do so, how to size schemes and how much to use each scheme during each year of an n-year long planning horizon (typically 30 years). In addition to capital and operating (fixed and variable) costs, the estimated social and environmental costs of schemes are considered. Each proposed scheme is costed discretely at one or more capacities following regulatory guidelines. The model uses a node-link network structure: water demand nodes are connected to supply and demand management (DM) options (represented as nodes) or to other demand nodes (transfers). Yields from existing and proposed schemes are estimated separately using detailed water resource system simulation models evaluated over the historical period. The model simultaneously considers multiple demand scenarios to ensure demands are met at required reliability levels; use levels of each scheme are evaluated for each demand scenario and weighted by scenario likelihood so that operating costs are accurately evaluated. Multiple interdependency relationships between schemes (pre-requisites, mutual exclusivity, start dates, etc.) can be accounted for by
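
    The skeleton of such a capacity-expansion MILP is illustrated below in Pyomo: binary build decisions per scheme and year, a capacity-versus-demand constraint for every year, and a cost objective. The schemes, capacities, costs and demand growth are invented, and the real model's demand scenarios, operating costs and interdependency constraints are omitted.

      # Bare-bones least-cost capacity-expansion MILP (illustrative data only).
      import pyomo.environ as pyo

      schemes = {"reservoir": (50.0, 120.0),          # (capacity in Ml/d, capital cost)
                 "desalination": (80.0, 200.0),
                 "leakage_reduction": (20.0, 60.0)}
      scheme_names = list(schemes)
      years = list(range(2025, 2035))
      demand = {y: 90.0 + 3.0 * (y - 2025) for y in years}   # Ml/d, hypothetical growth
      existing_capacity = 100.0                              # Ml/d already available

      m = pyo.ConcreteModel()
      m.build = pyo.Var(scheme_names, years, within=pyo.Binary)   # build scheme s in year y

      # Each scheme can be built at most once over the horizon.
      m.once = pyo.Constraint(scheme_names,
                              rule=lambda m, s: sum(m.build[s, y] for y in years) <= 1)

      # Existing capacity plus everything built so far must cover each year's demand.
      def supply_rule(m, y):
          added = sum(schemes[s][0] * m.build[s, t]
                      for s in scheme_names for t in years if t <= y)
          return existing_capacity + added >= demand[y]
      m.supply = pyo.Constraint(years, rule=supply_rule)

      # Minimise (undiscounted) capital cost of the chosen build schedule.
      m.cost = pyo.Objective(expr=sum(schemes[s][1] * m.build[s, y]
                                      for s in scheme_names for y in years))
      # pyo.SolverFactory('cbc').solve(m)   # any MILP solver that is installed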

  10. Prediction of the working parameters of a wood waste gasifier through an equilibrium model

    Altafini, Carlos R.; Baretto, Ronaldo M. [Caxias do Sul Univ., Dept. of Mechanical Engineering, Caxias do Sul, RS (Brazil); Wander, Paulo R. [Caxias do Sul Univ., Dept. of Mechanical Engineering, Caxias do Sul, RS (Brazil); Federal Univ. of Rio Grande do Sul State (UFRGS), Mechanical Engineering Postgraduation Program (PROMEC), RS (Brazil)

    2003-10-01

    This paper deals with the computational simulation of a wood waste (sawdust) gasifier using an equilibrium model based on minimization of the Gibbs free energy. The gasifier has been tested with Pinus Elliotis sawdust, an exotic species largely cultivated in the South of Brazil. The biomass used in the tests had a moisture content of nearly 10% (wt% on a wet basis), and the average composition of the gas produced (without tar) is compared with the equilibrium models used. Sensitivity studies were made to verify the influence of the sawdust moisture content on the fuel gas composition and on its heating value. More complex models were elaborated to reproduce the studied gasifier with better accuracy. Although the equilibrium models do not represent very well the reactions that occur at relatively high temperatures (≈ 800 deg C), these models can be useful to show some tendencies in the variation of the working parameters of a gasifier. (Author)
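
    The structure of a Gibbs-minimisation equilibrium calculation is sketched below: minimise the total Gibbs energy of an ideal-gas mixture subject to element balances. The species set, feed element totals and, in particular, the dimensionless standard chemical potentials are placeholders, not real thermochemical data at gasification temperatures.

      # Equilibrium gas composition by constrained Gibbs energy minimisation.
      import numpy as np
      from scipy.optimize import minimize

      species = ["CO", "CO2", "H2", "H2O", "CH4"]
      mu0 = np.array([-10.0, -25.0, 0.0, -15.0, -5.0])   # mu0_i / RT, hypothetical values

      # Element balance matrix (rows: C, H, O) and element totals in the feed (mol).
      A = np.array([[1, 1, 0, 0, 1],      # C
                    [0, 0, 2, 2, 4],      # H
                    [1, 2, 0, 1, 0]])     # O
      b = np.array([1.0, 2.0, 1.2])

      def gibbs(n):
          n = np.clip(n, 1e-12, None)                     # keep the logarithms finite
          return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

      res = minimize(gibbs, x0=np.full(len(species), 0.2),
                     bounds=[(1e-12, None)] * len(species),
                     constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
                     method="SLSQP")
      print(dict(zip(species, res.x.round(4))))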

  11. Pre-equilibrium assumptions and statistical model parameters effects on reaction cross-section calculations

    Avrigeanu, M.; Avrigeanu, V.

    1992-02-01

    A systematic study of the effects of statistical model parameters and semi-classical pre-equilibrium emission models has been carried out for the (n,p) reactions on the 56Fe and 60Co target nuclei. The results obtained by using various assumptions within a given pre-equilibrium emission model differ among themselves more than the results of different models used under similar conditions. The necessity of using realistic level density formulas is emphasized, especially in connection with pre-equilibrium emission models (i.e. with the exciton state density expression), while basic support could be found only by replacing the Williams exciton state density formula with a realistic one. (author). 46 refs, 12 figs, 3 tabs

  12. Soils apart from equilibrium – consequences for soil carbon balance modelling

    T. Wutzler

    2007-01-01

    Full Text Available Many projections of the soil carbon sink or source are based on kinetically defined carbon pool models. Parameters of these models are often determined in such a way that the steady state of the model matches observed carbon stocks. The underlying simplifying assumption is that observed carbon stocks are near equilibrium. This assumption is challenged by observations of very old soils that still accumulate carbon. In this modelling study we explored the consequences of the case where soils are apart from equilibrium. Equilibrium states of soils that are currently accumulating small amounts of carbon were calculated using the Yasso model. It was found that even very small current accumulation rates cause large changes in the theoretical equilibrium stocks, which can virtually approach infinity. We conclude that soils that were disturbed several centuries ago are not in equilibrium but in a transient state because of the slowly ongoing accumulation of the slowest pool. A first consequence is that model calibrations to current carbon stocks that assume an equilibrium state overestimate the decay rate of the slowest pool. A second consequence is that spin-up runs (simulations until equilibrium) overestimate stocks of recently disturbed sites. In order to account for these consequences, we propose a transient correction. This correction prescribes a lower decay rate of the slowest pool and accounts for disturbances in the past by decreasing the spin-up-run predicted stocks to match an independent estimate of current soil carbon stocks. Application of this transient correction at a Central European beech forest site with a typical disturbance history resulted in an additional carbon fixation of 5.7±1.5 tC/ha within 100 years. The carbon storage capacity of disturbed forest soils is potentially much higher than currently assumed. Simulations that do not adequately account for the transient state of soil carbon stocks neglect a considerable
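
    The sensitivity the study highlights follows directly from the steady-state relation of a first-order pool. A small numerical illustration (hypothetical numbers, not Yasso parameters):

      # For the slowest pool, dC/dt = I - k*C, so the equilibrium stock is C_eq = I/k.
      observed_stock = 80.0    # tC/ha in the slowest pool today (hypothetical)
      accumulation = 0.05      # tC/ha/yr still being accumulated (hypothetical)
      decay_rate = 0.001       # 1/yr turnover of the slowest pool (hypothetical)

      inferred_input = accumulation + decay_rate * observed_stock   # I = dC/dt + k*C
      equilibrium_stock = inferred_input / decay_rate               # C_eq = I/k
      print(equilibrium_stock)   # 130 tC/ha: a tiny accumulation rate implies an
                                 # equilibrium stock 50 tC/ha above the observed stock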

  13. Development of a model for optimisation of a power plant mix by means of evolution strategy; Modellentwicklung zur Kraftwerksparkoptimierung mit Hilfe von Evolutionsstrategien

    Roth, Hans

    2008-09-17

    Within the scope of this thesis a model based on evolution strategies is presented, which optimises the upgrade of an existing power plant mix. The optimisation problem is divided into two parts, covering the construction of new power plants as well as their ideal usage within the existing power plant mix. The construction of new power plants is optimised by means of mutations, while their ideal usage is specified by a heuristic classification according to the merit order of the power plant mix. By applying a residual yearly load curve the consumer load can be modelled, incorporating the impact of fluctuating power generation and its probability of occurrence. Power plant failures and the duration of revisions are adequately considered by means of a power reduction factor. The optimisation furthermore accommodates a limiting threshold for yearly carbon dioxide emissions as well as premature decommissioning of power plants. (orig.)
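
    The two-level idea (mutate the build decisions, then dispatch the resulting mix by merit order against a residual load duration curve) can be sketched as a simple (1+1) evolution strategy; the plant data, load curve and unserved-energy penalty below are invented for illustration.

      # (1+1) evolution strategy over the number of new plants of each type.
      import numpy as np

      rng = np.random.default_rng(0)
      # (capacity MW, fixed cost, variable cost) per plant type -- hypothetical
      plants = np.array([(800.0, 90.0, 25.0),
                         (400.0, 40.0, 55.0),
                         (1000.0, 150.0, 10.0)])
      load = np.linspace(9000.0, 3000.0, 8760)   # residual load duration curve (MW)

      def total_cost(counts):
          caps = plants[:, 0] * counts
          order = np.argsort(plants[:, 2])             # merit order: cheapest energy first
          served = np.zeros_like(load)
          cost = float(np.dot(counts, plants[:, 1]))   # fixed costs of the built plants
          for i in order:
              gen = np.clip(np.minimum(load - served, caps[i]), 0.0, None)
              cost += plants[i, 2] * gen.sum() / 1e3
              served += gen
          return cost + 1e4 * np.clip(load - served, 0.0, None).sum() / 1e3  # unserved energy

      parent = np.array([4, 4, 4])
      for _ in range(200):
          child = np.clip(parent + rng.integers(-1, 2, size=3), 0, None)    # integer mutation
          if total_cost(child) <= total_cost(parent):
              parent = child
      print(parent, total_cost(parent))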

  14. Navigating catastrophes: Local but not global optimisation allows for macro-economic navigation of crises

    Harré, Michael S.

    2013-02-01

    Two aspects of modern economic theory have dominated the recent discussion on the state of the global economy: Crashes in financial markets and whether or not traditional notions of economic equilibrium have any validity. We have all seen the consequences of market crashes: plummeting share prices, businesses collapsing and considerable uncertainty throughout the global economy. This seems contrary to what might be expected of a system in equilibrium where growth dominates the relatively minor fluctuations in prices. Recent work from within economics as well as by physicists, psychologists and computational scientists has significantly improved our understanding of the more complex aspects of these systems. With this interdisciplinary approach in mind, a behavioural economics model of local optimisation is introduced and three general properties are proven. The first is that under very specific conditions local optimisation leads to a conventional macro-economic notion of a global equilibrium. The second is that if both global optimisation and economic growth are required then under very mild assumptions market catastrophes are an unavoidable consequence. Third, if only local optimisation and economic growth are required then there is sufficient parametric freedom for macro-economic policy makers to steer an economy around catastrophes without overtly disrupting local optimisation.

  15. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, development of tools which can be used by regulators for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest, which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
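
    The objective evaluated for each candidate well subset can be sketched as follows; a radial-basis interpolator stands in for Ordinary Kriging with the Spartan variogram, and the well locations and levels are synthetic, so this only illustrates the shape of the fitness function a genetic algorithm would minimise.

      # Mapping error of a reduced monitoring network (illustrative stand-in).
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(1)
      wells = rng.uniform(0.0, 10.0, size=(70, 2))                  # 70 well locations
      levels = 50.0 - 2.0 * wells[:, 0] + rng.normal(0.0, 0.5, 70)  # synthetic water levels

      gx, gy = np.meshgrid(np.linspace(0, 10, 40), np.linspace(0, 10, 40))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      full_map = RBFInterpolator(wells, levels)(grid)               # map from all 70 wells

      def mapping_error(keep_mask):
          reduced = RBFInterpolator(wells[keep_mask], levels[keep_mask])(grid)
          return float(np.linalg.norm(full_map - reduced))          # 2-norm of the difference

      # A genetic algorithm would evolve boolean masks like this one:
      mask = np.ones(70, dtype=bool)
      mask[rng.choice(70, size=10, replace=False)] = False          # drop 10 wells
      print(mapping_error(mask))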

  16. Phenomenological model for non-equilibrium deuteron emission in nucleon induced reactions

    Broeders, C.H.M.; Konobeyev, A.Yu.

    2005-01-01

    A new approach is proposed for the calculation of non-equilibrium deuteron energy distributions in nuclear reactions induced by nucleons of intermediate energies. It combines the models of nucleon pick-up, coalescence and deuteron knock-out. Emission and absorption rates for excited particles are described by the pre-equilibrium hybrid model. The model of Sato, Iwamoto and Harada is used to describe the nucleon pick-up and the coalescence of nucleons from the exciton configurations starting from (2p, 1h). The model of deuteron knock-out is formulated taking into account the Pauli principle for the nucleon-deuteron interaction inside a nucleus. The contribution of the direct nucleon pick-up is described phenomenologically. The multiple pre-equilibrium emission of particles is taken into account. The calculated deuteron energy distributions are compared with experimental data from 12C to 209Bi. (orig.)

  17. Measuring Convergence using Dynamic Equilibrium Models: Evidence from Chinese Provinces

    Pan, Lei; Posch, Olaf; van der Wel, Michel

    We propose a model to study economic convergence in the tradition of neoclassical growth theory. We employ a novel stochastic set-up of the Solow (1956) model with shocks to both capital and labor. Our novel approach identifies the speed of convergence directly from estimating the parameters which...

  18. Estuarine Facies Model Revisited: Conceptual Model of Estuarine Sediment Dynamics During Non-Equilibrium Conditions

    Elliott, E. A.; Rodriguez, A. B.; McKee, B. A.

    2017-12-01

    Traditional models of estuarine systems show deposition occurs primarily within the central basin. There, accommodation space is high within the deep central valley, which is below regional wave base and where current energy is presumed to reach a relative minimum, promoting direct deposition of cohesive sediment and minimizing erosion. However, these models often reflect long-term (decadal-millennial) timescales, where accumulation rates are in relative equilibrium with the rate of relative sea-level rise, and lack the resolution to capture shorter term changes in sediment deposition and erosion within the central estuary. This work presents a conceptual model for estuarine sedimentation during non-equilibrium conditions, where high-energy inputs to the system reach a relative maximum in the central basin, resulting in temporary deposition and/or remobilization over sub-annual to annual timescales. As an example, we present a case study of Core Sound, NC, a lagoonal estuarine system where the regional base-level has been reached, and sediment deposition, resuspension and bypassing is largely a result of non-equilibrium, high-energy events. Utilizing a 465 cm-long sediment core from a mini-basin located between Core Sound and the continental shelf, a 40-year sub-annual chronology was developed for the system, with sediment accumulation rates (SAR) interpolated to a monthly basis over the 40-year record. This study links erosional processes in the estuary directly with sediment flux to the continental shelf, taking advantage of the highly efficient sediment trapping capability of the mini-basin. The SAR record indicates high variation in the estuarine sediment supply, with peaks in the SAR record at a recurrence interval of 1 year (+/- 0.25). This record has been compared to historical storm influence for the area. Through this multi-decadal record, sediment flushing events occur at a much more frequent interval than previously thought (i.e. annual rather than

  19. Pre-equilibrium nuclear reactions: An introduction to classical and quantum-mechanical models

    Koning, A.J.; Akkermans, J.M.

    1999-01-01

    In studies of light-ion induced nuclear reactions one distinguishes three different mechanisms: direct, compound and pre-equilibrium nuclear reactions. These reaction processes can be subdivided according to time scales or, equivalently, the number of intranuclear collisions taking place before emission. Furthermore, each mechanism preferably excites certain parts of the nuclear level spectrum and is characterized by different types of angular distributions. This presentation includes description of the classical, exciton model, semi-classical models, with some selected results, and quantum mechanical models. A survey of classical versus quantum-mechanical pre-equilibrium reaction theory is presented including practical applications

  20. Absence of local thermal equilibrium in two models of heat conduction

    Dhar, Abhishek; Dhar, Deepak

    1998-01-01

    A crucial assumption in the conventional description of thermal conduction is the existence of local thermal equilibrium. We test this assumption in two simple models of heat conduction. Our first model is a linear chain of planar spins with nearest neighbour couplings, and the second model is that of a Lorentz gas. We look at the steady state of the system when the two ends are connected to heat baths at temperatures T1 and T2. If T1=T2, the system reaches thermal equilibrium. If T1 is not e...

  1. Using marketing theory to inform strategies for recruitment: a recruitment optimisation model and the txt2stop experience.

    Galli, Leandro; Knight, Rosemary; Robertson, Steven; Hoile, Elizabeth; Oladapo, Olubukola; Francis, David; Free, Caroline

    2014-05-22

    Recruitment is a major challenge for many trials; just over half reach their targets and almost a third resort to grant extensions. The economic and societal implications of this shortcoming are significant. Yet, we have a limited understanding of the processes that increase the probability that recruitment targets will be achieved. Accordingly, there is an urgent need to bring analytical rigour to the task of improving recruitment, thereby increasing the likelihood that trials reach their recruitment targets. This paper presents a conceptual framework that can be used to improve recruitment to clinical trials. Using a case-study approach, we reviewed the range of initiatives that had been undertaken to improve recruitment in the txt2stop trial using qualitative (semi-structured interviews with the principal investigator) and quantitative (recruitment) data analysis. Later, the txt2stop recruitment practices were compared to a previous model of marketing a trial and to key constructs in social marketing theory. Post hoc, we developed a recruitment optimisation model to serve as a conceptual framework to improve recruitment to clinical trials. A core premise of the model is that improving recruitment needs to be an iterative, learning process. The model describes three essential activities: i) recruitment phase monitoring, ii) marketing research, and iii) the evaluation of current performance. We describe the initiatives undertaken by the txt2stop trial and the results achieved, as an example of the use of the model. Further research should explore the impact of adopting the recruitment optimisation model when applied to other trials.

  2. Computable general equilibrium model fiscal year 2014 capability development report

    Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory

    2016-05-11

    This report provides an overview of the development of the NISAC CGE economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis capabilities to answer a broader set of questions than was possible with the previous economic analysis capability. In particular, CGE modeling captures how the different sectors of the economy (for example, households, businesses and government) interact to allocate resources, and this approach captures those interactions when it is used to estimate the economic impacts of the kinds of events NISAC often analyzes.

  3. Models of direct reactions and quantum pre-equilibrium for nucleon scattering on spherical nuclei

    Dupuis, M.

    2006-01-01

    When a nucleon collides with a target nucleus, several reactions may occur: elastic and inelastic scattering, charge exchange... In order to describe these reactions, different models are involved: direct reaction, pre-equilibrium and compound nucleus models. Our goal is to study, within a quantum framework and without any adjustable parameter, the direct and pre-equilibrium reactions for nucleon scattering off doubly closed-shell nuclei. We first consider direct reactions: we study nucleon scattering with the Melbourne G-matrix, which represents the interaction between the projectile and one target nucleon, and with random phase approximation (RPA) wave functions which describe all target states. This is a fully microscopic approach since no adjustable parameters are involved. A second part is dedicated to the study of nucleon inelastic scattering with large energy transfer, which necessarily involves the pre-equilibrium mechanism. Several models have been developed in the past to deal with pre-equilibrium. They start from the Born expansion of the transition amplitude associated with the inelastic process and use several approximations which have not yet been tested. We have carried out comparisons between second-order cross sections calculated with and without these approximations. Our results allow us to criticize some of these approximations and give several directions for improving the quantum pre-equilibrium models. (author)

  4. Accounting for household heterogeneity in general equilibrium economic growth models

    Melnikov, N.B.; O'Neill, B.C.; Dalton, M.G.

    2012-01-01

    We describe and evaluate a new method of aggregating heterogeneous households that allows for the representation of changing demographic composition in a multi-sector economic growth model. The method is based on a utility and labor supply calibration that takes into account time variations in demographic characteristics of the population. We test the method using the Population-Environment-Technology (PET) model by comparing energy and emissions projections employing the aggregate representation of households to projections representing different household types explicitly. Results show that the difference between the two approaches in terms of total demand for energy and consumption goods is negligible for a wide range of model parameters. Our approach allows the effects of population aging, urbanization, and other forms of compositional change on energy demand and CO2 emissions to be estimated and compared in a computationally manageable manner using a representative household under assumptions and functional forms that are standard in economic growth models.

  5. A numerical model for simulating electroosmotic micro- and nanochannel flows under non-Boltzmann equilibrium

    Kim, Kyoungjin; Kwak, Ho Sang [School of Mechanical Engineering, Kumoh National Institute of Technology, 1 Yangho, Gumi, Gyeongbuk 730-701 (Korea, Republic of); Song, Tae-Ho, E-mail: kimkj@kumoh.ac.kr, E-mail: hskwak@kumoh.ac.kr, E-mail: thsong@kaist.ac.kr [Department of Mechanical, Aerospace and Systems Engineering, Korea Advanced Institute of Science and Technology, 373-1 Guseong, Yuseong, Daejeon 305-701 (Korea, Republic of)

    2011-08-15

    This paper describes a numerical model for simulating electroosmotic flows (EOFs) under non-Boltzmann equilibrium in a micro- and nanochannel. The transport of ionic species is represented by employing the Nernst-Planck equation. Modeling issues related to numerical difficulties are discussed, which include the handling of boundary conditions based on surface charge density, the associated treatment of electric potential and the evasion of nonlinearity due to the electric body force. The EOF in the entrance region of a straight channel is examined. The numerical results show that the present model is useful for the prediction of the EOFs requiring a fine resolution of the electric double layer under either the Boltzmann equilibrium or non-equilibrium. Based on the numerical results, the correlation between the surface charge density and the zeta potential is investigated.
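
    For reference, the species transport description referred to here is the standard Nernst-Planck formulation (textbook form with my notation; the paper's boundary treatment and coupling to the Poisson and Navier-Stokes equations are not reproduced): the flux of ionic species i combines diffusion, electromigration and convection,

      \mathbf{J}_i = -D_i \left( \nabla c_i + \frac{z_i F}{R T}\, c_i \nabla \phi \right) + c_i \mathbf{u},
      \qquad
      \frac{\partial c_i}{\partial t} + \nabla \cdot \mathbf{J}_i = 0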

  6. Solid-Liquid equilibrium of n-alkanes using the Chain Delta Lattice Parameter model

    Coutinho, João A.P.; Andersen, Simon Ivar; Stenby, Erling Halfdan

    1996-01-01

    The formation of a solid phase in liquid mixtures with large paraffinic molecules is a phenomenon of interest in the petroleum, pharmaceutical, and biotechnological industries among others. Efforts to model the solid-liquid equilibrium in these systems have been mainly empirical and with different degrees of success. An attempt to describe the equilibrium between the high temperature form of a paraffinic solid solution, commonly known as the rotator phase, and the liquid phase is performed. The Chain Delta Lattice Parameter model (CDLP) is developed, allowing a successful description of the solid-liquid equilibrium of n-alkanes ranging from n-C_20 to n-C_40. The model is further modified to achieve a more correct temperature dependence because it severely underestimates the excess enthalpy. It is shown that the ratio of excess enthalpy and entropy for n-alkane solid solutions, as happens for other solid

  7. NON-EQUILIBRIUM IONIZATION MODELING OF THE CURRENT SHEET IN A SIMULATED SOLAR ERUPTION

    Shen Chengcai; Reeves, Katharine K.; Raymond, John C.; Murphy, Nicholas A.; Ko, Yuan-Kuen; Lin Jun; Mikić, Zoran; Linker, Jon A.

    2013-01-01

    The current sheet that extends from the top of flare loops and connects to an associated flux rope is a common structure in models of coronal mass ejections (CMEs). To understand the observational properties of CME current sheets, we generated predictions from a flare/CME model to be compared with observations. We use a simulation of a large-scale CME current sheet previously reported by Reeves et al. This simulation includes ohmic and coronal heating, thermal conduction, and radiative cooling in the energy equation. Using the results of this simulation, we perform time-dependent ionization calculations of the flow in a CME current sheet and construct two-dimensional spatial distributions of ionic charge states for multiple chemical elements. We use the filter responses from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory and the predicted intensities of emission lines to compute the count rates for each of the AIA bands. The results show differences in the emission line intensities between equilibrium and non-equilibrium ionization. The current sheet plasma is underionized at low heights and overionized at large heights. At low heights in the current sheet, the intensities of the AIA 94 Å and 131 Å channels are lower for non-equilibrium ionization than for equilibrium ionization. At large heights, these intensities are higher for non-equilibrium ionization than for equilibrium ionization inside the current sheet. The assumption of ionization equilibrium would lead to a significant underestimate of the temperature low in the current sheet and overestimate at larger heights. We also calculate the intensities of ultraviolet lines and predict emission features to be compared with events from the Ultraviolet Coronagraph Spectrometer on the Solar and Heliospheric Observatory, including a low-intensity region around the current sheet corresponding to this model

  8. Spectral non-equilibrium property in homogeneous isotropic turbulence and its implication in subgrid-scale modeling

    Fang, Le [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Zhu, Ying [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Liu, Yangwei, E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Lu, Lipeng [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China)

    2015-10-09

    The non-equilibrium property in turbulence is a non-negligible problem in large-eddy simulation but has not yet been systematically considered. The generalization from equilibrium turbulence to non-equilibrium turbulence requires a clear recognition of the non-equilibrium property. As a preliminary step of this recognition, the present letter defines a typical non-equilibrium process, that is, the spectral non-equilibrium process, in homogeneous isotropic turbulence. It is then theoretically investigated by employing the skewness of grid-scale velocity gradient, which permits the decomposition of resolved velocity field into an equilibrium one and a time-reversed one. Based on this decomposition, an improved Smagorinsky model is proposed to correct the non-equilibrium behavior of the traditional Smagorinsky model. The present study is expected to shed light on the future studies of more generalized non-equilibrium turbulent flows. - Highlights: • A spectral non-equilibrium process in isotropic turbulence is defined theoretically. • A decomposition method is proposed to divide a non-equilibrium turbulence field. • An improved Smagorinsky model is proposed to correct the non-equilibrium behavior.
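
    For context, the baseline closure that the letter modifies is the standard Smagorinsky model (textbook form, not the improved non-equilibrium version proposed here): the deviatoric subgrid stress is modelled through an eddy viscosity built from the resolved strain rate,

      \nu_t = (C_s \Delta)^2\, |\bar{S}|, \qquad
      |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
      \bar{S}_{ij} = \tfrac{1}{2}\left( \partial_j \bar{u}_i + \partial_i \bar{u}_j \right), \qquad
      \tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij}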

  9. Restructured electric power systems analysis of electricity markets with equilibrium models

    2010-01-01

    Electricity market deregulation is driving the power energy production from a monopolistic structure into a competitive market environment. The development of electricity markets has necessitated the need to analyze market behavior and power. Restructured Electric Power Systems reviews the latest developments in electricity market equilibrium models and discusses the application of such models in the practical analysis and assessment of electricity markets.

  10. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  11. The Matrix model, a driven state variables approach to non-equilibrium thermodynamics

    Jongschaap, R.J.J.

    2001-01-01

    One of the new approaches in non-equilibrium thermodynamics is the so-called matrix model of Jongschaap. In this paper some features of this model are discussed. We indicate the differences with the more common approach based upon internal variables and the more sophisticated Hamiltonian and GENERIC

  12. A general equilibrium model of ecosystem services in a river basin

    Travis Warziniack

    2014-01-01

    This study builds a general equilibrium model of ecosystem services, with sectors of the economy competing for use of the environment. The model recognizes that production processes in the real world require a combination of natural and human inputs, and understanding the value of these inputs and their competing uses is necessary when considering policies of resource...

  13. Phase equilibrium modeling of gas hydrate systems for CO2 capture

    Herslund, Peter Jørgensen; Thomsen, Kaj; Abildskov, Jens

    2012-01-01

    to form from vapor phases with initial mole fractions of CO2 at or above 0.15. The two models are validated against mixed hydrate equilibrium data found in literature. Both dissociation pressures and hydrate compositions are considered in the validation process. With the fitted parameters, Model I predicts

  14. Quasi-equilibrium channel model of a constant current arc

    Gerasimov Alexander V.

    2003-01-01

    Full Text Available A rather simple method for calculating the electron and gas temperatures in the arc channel of a plasma generator is offered. This method is based on a self-consistent two-temperature channel model of an electric arc. The proposed method enables the radial distributions of the gas and electron temperatures in the non-conducting zone of a constant current arc to be obtained, for prescribed parameters of the discharge (current intensity and power of the discharge), with good precision. The results obtained can be used in model and engineering calculations to estimate the gas and electron temperatures in the channel of an arc plasma generator.

  15. General equilibrium basic needs policy model, (updating part).

    Kouwenaar A

    1985-01-01

    ILO pub-WEP pub-PREALC pub. Working paper, econometric model for the assessment of structural change affecting development planning for basic needs satisfaction in Ecuador - considers population growth, family size (households), labour force participation, labour supply, wages, income distribution, profit rates, capital ownership, etc.; examines nutrition, education and health as factors influencing productivity. Diagram, graph, references, statistical tables.

  16. Developing a Dynamic Stochastic General Equilibrium Model for the ...

    They bring benefits by helping to project changes that take place because of shocks to the ... This proposal seeks to develop a DSGE model for the Indian economy to ...

  17. Two-temperature chemically non-equilibrium modelling of an air supersonic ICP

    El Morsli, Mbark; Proulx, Pierre [Laboratoire de Modelisation de Procedes Chimiques par Ordinateur Oppus, Departement de Genie Chimique, Universite de Sherbrooke (Ciheam) J1K 2R1 (Canada)

    2007-08-21

    In this work, a non-equilibrium mathematical model for an air inductively coupled plasma torch with a supersonic nozzle is developed without making thermal and chemical equilibrium assumptions. Reaction rate equations are written, and two coupled energy equations are used, one for the calculation of the translational-rotational temperature T_hr and one for the calculation of the electro-vibrational temperature T_ev. The viscous dissipation is taken into account in the translational-rotational energy equation. The electro-vibrational energy equation also includes the pressure work of the electrons, the Ohmic heating power and the exchange due to elastic collisions. Higher-order approximations of the Chapman-Enskog method are used to obtain better accuracy for transport properties, taking advantage of the most recent sets of collision integrals available in the literature. The results obtained are compared with those obtained using a chemical equilibrium model and a one-temperature chemical non-equilibrium model. The influence of the power and the chamber pressure on the chemical and thermal non-equilibrium is investigated.

  18. The development and use of plant models to assist with both the commissioning and performance optimisation of plant control systems

    Conner, A.S.; Region, S.E.

    1984-01-01

    Successful engagement of the cascade control systems used to control complex nuclear plant often presents control engineers with difficulties when trying to obtain early automatic operation of these systems. These difficulties often arise because, prior to the start of live plant operation, control equipment performance can only be assessed using open-loop techniques. By simulating simple plant models on a computer and linking them to the site control equipment, the performance of the system can be examined and optimised prior to live plant operation. This significantly reduces the plant down time required to correct control equipment performance faults during live plant operation.

  19. Model for the Value of a Business, Some Optimisation Problems in its Operating Procedures and the Valuation of its Debt

    M.Z. Apabhai; N.I. Georgikopoulos; D. Hasnip; R.K.D. Jamie; M. Kim

    1999-01-01

    In this paper we present a model for the value of a firm based on observable variables and parameters: the annual turnover, the expenses, interest rates. This value is the solution of a parabolic partial differential equation. We show how the value of the company depends on its legal status such as its liability (i.e. whether it is a Limited Company or a sole trader/partnership). We give examples of how the operating procedures can be optimised (e.g. whether the firm should close down, reloca...

  20. Copper removal by algal biomass: biosorbents characterization and equilibrium modelling.

    Vilar, Vítor J P; Botelho, Cidália M S; Pinheiro, José P S; Domingos, Rute F; Boaventura, Rui A R

    2009-04-30

    The general principles of Cu(II) binding to algal waste from agar extraction, composite material and algae Gelidium, and different modelling approaches, are discussed. FTIR analyses provided a detailed description of the possible binding groups present in the biosorbents, such as carboxylic groups (D-glucuronic and pyruvic acids), hydroxyl groups (cellulose, agar and floridean starch) and sulfonate groups (sulphated galactans). Potentiometric acid-base titrations showed a heterogeneous distribution of two major binding groups, carboxyl and hydroxyl, following the quasi-Gaussian affinity constant distribution suggested by Sips, which permitted estimation of the maximum amount of acid functional groups (0.36, 0.25 and 0.1 mmol g(-1)) and proton binding parameters (pK(H) = 5.0, 5.3 and 4.4; m(H) = 0.43, 0.37, 0.33), respectively for algae Gelidium, algal waste and composite material. A non-ideal, semi-empirical, thermodynamically consistent (NICCA) isotherm fitted the experimental ion binding data for different pH values and copper concentrations, considering only the acid functional groups, better than the discrete model. Values of pK(M) (3.2; 3.6 and 3.3), n(M) (0.98, 0.91, 1.0) and p (0.67, 0.53 and 0.43) were obtained, respectively for algae Gelidium, algal waste and composite material. The NICCA model reflects the complex macromolecular systems that take part in biosorption, considering the heterogeneity of the biosorbent, the competition between protons and metal ions for the binding sites and the stoichiometry of different ions.
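
    As a minimal illustration of the kind of isotherm fitting involved, the sketch below fits a Sips (Langmuir-Freundlich) isotherm to equilibrium uptake data by least squares; the data points are synthetic, and the full NICCA model additionally accounts for proton competition and ion-specific non-ideality, which is not reproduced here.

      # Fit a Sips isotherm q = q_max (K c)^n / (1 + (K c)^n) to synthetic data.
      import numpy as np
      from scipy.optimize import curve_fit

      def sips(c, q_max, k, n):
          return q_max * (k * c) ** n / (1.0 + (k * c) ** n)

      c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # mg/L (synthetic)
      q_eq = np.array([4.1, 6.6, 11.0, 15.1, 19.0, 22.5])      # mg/g (synthetic)

      params, _ = curve_fit(sips, c_eq, q_eq, p0=[30.0, 0.02, 0.8])
      q_max, k, n = params
      print(f"q_max = {q_max:.1f} mg/g, K = {k:.3f} L/mg, n = {n:.2f}")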

  1. Copper removal by algal biomass: Biosorbents characterization and equilibrium modelling

    Vilar, Vitor J.P. [LSRE-Laboratory of Separation and Reaction Engineering, Departamento de Engenharia Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal)], E-mail: vilar@fe.up.pt; Botelho, Cidalia M.S. [LSRE-Laboratory of Separation and Reaction Engineering, Departamento de Engenharia Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal)], E-mail: cbotelho@fe.up.pt; Pinheiro, Jose P.S.; Domingos, Rute F. [Centro de Biomedicina Molecular e Estrutural, Department of Chemistry and Biochemistry, University of Algarve, Campus de Gambelas, 8005-139 Faro (Portugal); Boaventura, Rui A.R. [LSRE-Laboratory of Separation and Reaction Engineering, Departamento de Engenharia Quimica, Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal)], E-mail: bventura@fe.up.pt

    2009-04-30

    The general principles of Cu(II) binding to algal waste from agar extraction, composite material and algae Gelidium, and different modelling approaches, are discussed. FTIR analyses provided a detailed description of the possible binding groups present in the biosorbents, such as carboxylic groups (D-glucuronic and pyruvic acids), hydroxyl groups (cellulose, agar and floridean starch) and sulfonate groups (sulphated galactans). Potentiometric acid-base titrations showed a heterogeneous distribution of two major binding groups, carboxyl and hydroxyl, following the quasi-Gaussian affinity constant distribution suggested by Sips, which permitted estimation of the maximum amount of acid functional groups (0.36, 0.25 and 0.1 mmol g⁻¹) and proton binding parameters (pK'(H) = 5.0, 5.3 and 4.4; m(H) = 0.43, 0.37, 0.33), respectively for algae Gelidium, algal waste and composite material. A non-ideal, semi-empirical, thermodynamically consistent (NICCA) isotherm fitted the experimental ion binding data for different pH values and copper concentrations, considering only the acid functional groups, better than the discrete model. Values of pK'(M) (3.2; 3.6 and 3.3), n(M) (0.98, 0.91, 1.0) and p (0.67, 0.53 and 0.43) were obtained, respectively for algae Gelidium, algal waste and composite material. The NICCA model reflects the complex macromolecular systems that take part in biosorption, considering the heterogeneity of the biosorbent, the competition between protons and metal ions for the binding sites and the stoichiometry of different ions.

  2. Copper removal by algal biomass: Biosorbents characterization and equilibrium modelling

    Vilar, Vitor J.P.; Botelho, Cidalia M.S.; Pinheiro, Jose P.S.; Domingos, Rute F.; Boaventura, Rui A.R.

    2009-01-01

    The general principles of Cu(II) binding to algal waste from agar extraction, composite material and algae Gelidium, and different modelling approaches, are discussed. FTIR analyses provided a detailed description of the possible binding groups present in the biosorbents, such as carboxylic groups (D-glucuronic and pyruvic acids), hydroxyl groups (cellulose, agar and floridean starch) and sulfonate groups (sulphated galactans). Potentiometric acid-base titrations showed a heterogeneous distribution of two major binding groups, carboxyl and hydroxyl, following the quasi-Gaussian affinity constant distribution suggested by Sips, which permitted estimation of the maximum amount of acid functional groups (0.36, 0.25 and 0.1 mmol g⁻¹) and proton binding parameters (pK'(H) = 5.0, 5.3 and 4.4; m(H) = 0.43, 0.37, 0.33), respectively for algae Gelidium, algal waste and composite material. A non-ideal, semi-empirical, thermodynamically consistent (NICCA) isotherm fitted the experimental ion binding data for different pH values and copper concentrations, considering only the acid functional groups, better than the discrete model. Values of pK'(M) (3.2; 3.6 and 3.3), n(M) (0.98, 0.91, 1.0) and p (0.67, 0.53 and 0.43) were obtained, respectively for algae Gelidium, algal waste and composite material. The NICCA model reflects the complex macromolecular systems that take part in biosorption, considering the heterogeneity of the biosorbent, the competition between protons and metal ions for the binding sites and the stoichiometry of different ions

  3. Atomistic modeling of thermodynamic equilibrium and polymorphism of iron

    Lee, Tongsik; Baskes, Michael I; Valone, Steven M; Doll, J D

    2012-01-01

    We develop two new modified embedded-atom method (MEAM) potentials for elemental iron, intended to reproduce the experimental phase stability with respect to both temperature and pressure. These simple interatomic potentials are fitted to a wide variety of material properties of bcc iron in close agreement with experiments. Numerous defect properties of bcc iron and bulk properties of the two close-packed structures calculated with these models are in reasonable agreement with the available first-principles calculations and experiments. Performance at finite temperatures of these models has also been examined using Monte Carlo simulations. We attempt to reproduce the experimental iron polymorphism at finite temperature by means of free energy computations, similar to the procedure previously pursued by Müller et al (2007 J. Phys.: Condens. Matter 19 326220), and re-examine the adequacy of the conclusion drawn in the study by addressing two critical aspects missing in their analysis: (i) the stability of the hcp structure relative to the bcc and fcc structures and (ii) the compatibility between the temperature and pressure dependences of the phase stability. Using two MEAM potentials, we are able to represent all of the observed structural phase transitions in iron. We discuss that the correct reproductions of the phase stability among three crystal structures of iron with respect to both temperature and pressure are incompatible with each other due to the lack of magnetic effects in this class of empirical interatomic potential models. The MEAM potentials developed in this study correctly predict, in the bcc structure, the self-interstitial in the 〈110〉 orientation to be the most stable configuration, and the screw dislocation to have a non-degenerate core structure, in contrast to many embedded-atom method potentials for bcc iron in the literature. (paper)

  4. Quasi-equilibrium models of magnetized compact objects

    Markakis, Charalampos; Uryu, Koji; Gourgoulhon, Eric

    2011-01-01

    We report work towards a relativistic formulation for modeling strongly magnetized neutron stars, rotating or in a close circular orbit around another neutron star or black hole, under the approximations of helical symmetry and ideal MHD. The quasi-stationary evolution is governed by the first law of thermodynamics for helically symmetric systems, which is generalized to include magnetic fields. The formulation involves an iterative scheme for solving the Einstein-Maxwell and relativistic MHD-Euler equations numerically. The resulting configurations for binary systems could be used as self-consistent initial data for studying their inspiral and merger.

  5. Equilibrium models of trade equations : a critical review

    Portugal, Marcelo Savino

    1993-01-01

    In this paper we review the theoretical literature on foreign trade equations, including the trade model based on production theory. We discuss several problems commonly found in empirical work, as well as the existing literature on equations for Brazilian foreign trade.

  6. Overshoot in biological systems modelled by Markov chains: a non-equilibrium dynamic phenomenon.

    Jia, Chen; Qian, Minping; Jiang, Daquan

    2014-08-01

    A number of biological systems can be modelled by Markov chains. Recently, there has been increasing concern about when biological systems modelled by Markov chains will exhibit a dynamic phenomenon called overshoot. In this study, the authors found that the steady-state behaviour of the system has a great effect on the occurrence of overshoot. They showed that overshoot in general cannot occur in systems that will finally approach an equilibrium steady state. They further classified overshoot into two types, termed simple overshoot and oscillating overshoot. They showed that, except for extreme cases, oscillating overshoot will occur if the system is far from equilibrium. All these results clearly show that overshoot is a non-equilibrium dynamic phenomenon with energy consumption. In addition, the main result in this study is validated with real experimental data.
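
    The kind of overshoot discussed above can be checked numerically on small chains. The sketch below is an illustrative toy example, not one of the systems analysed in the paper: it iterates a three-state discrete-time chain and flags whether the occupancy of a chosen state transiently exceeds its stationary value; the strongly driven cyclic chain is an assumption chosen only to make the effect visible.

```python
import numpy as np

def detect_overshoot(P, p0, state, steps=200, tol=1e-9):
    """Iterate p_{t+1} = p_t P and report whether p_t(state) ever exceeds
    its stationary value along the way."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    p, peak = np.array(p0, dtype=float), 0.0
    for _ in range(steps):
        p = p @ P
        peak = max(peak, p[state])
    return peak > pi[state] + tol, peak, pi[state]

# A strongly biased cyclic (non-reversible) three-state chain, started in state 0.
P_cycle = np.array([[0.1, 0.9, 0.0],
                    [0.0, 0.1, 0.9],
                    [0.9, 0.0, 0.1]])
over, peak, target = detect_overshoot(P_cycle, [1.0, 0.0, 0.0], state=1)
print(f"overshoot: {over}, peak occupancy {peak:.3f} vs stationary {target:.3f}")
```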

  7. Modelling Chemical Equilibrium Partitioning with the GEMS-PSI Code

    Kulik, D.; Berner, U.; Curti, E

    2004-03-01

    Sorption, co-precipitation and re-crystallisation are important retention processes for dissolved contaminants (radionuclides) migrating through the sub-surface. The retention of elements is usually measured by empirical partition coefficients (Kd), which vary in response to many factors: temperature, solid/liquid ratio, total contaminant loading, water composition, host-mineral composition, etc. The Kd values can be predicted for in-situ conditions from thermodynamic modelling of solid solution, aqueous solution or sorption equilibria, provided that stoichiometry, thermodynamic stability and mixing properties of the pure components are known (Example 1). Unknown thermodynamic properties can be retrieved from experimental Kd values using inverse modelling techniques (Example 2). An efficient, advanced tool for performing both tasks is the Gibbs Energy Minimization (GEM) approach, implemented in the user-friendly GEM-Selector (GEMS) program package, which includes the Nagra-PSI chemical thermodynamic database. The package is being further developed at PSI and used extensively in studies relating to nuclear waste disposal. (author)

  9. An experiment on radioactive equilibrium and its modelling using the ‘radioactive dice’ approach

    Santostasi, Davide; Malgieri, Massimiliano; Montagna, Paolo; Vitulo, Paolo

    2017-07-01

    In this article we describe an educational activity on radioactive equilibrium we performed with secondary school students (17-18 years old) in the context of a vocational guidance stage for talented students at the Department of Physics of the University of Pavia. Radioactive equilibrium is investigated experimentally by having students measure the activity of 214Bi from two different samples, obtained using different preparation procedures from an uraniferous rock. Students are guided in understanding the mathematical structure of radioactive equilibrium through a modelling activity in two parts. Before the lab measurements, a dice game, which extends the traditional ‘radioactive dice’ activity to the case of a chain of two decaying nuclides, is performed by students divided into small groups. At the end of the laboratory work, students design and run a simple spreadsheet simulation modelling the same basic radioactive chain with user defined decay constants. By setting the constants to realistic values corresponding to nuclides of the uranium decay chain, students can deepen their understanding of the meaning of the experimental data, and also explore the difference between cases of non-equilibrium, transient and secular equilibrium.
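
    A minimal version of the spreadsheet model described above can be written in a few lines: a forward-Euler update of a two-member decay chain with user-defined decay constants. The decay constants, time step and initial inventory below are arbitrary illustrative values, chosen only to contrast a secular-like case (daughter activity approaching the parent activity) with a transient-like case.

```python
def decay_chain(lam1, lam2, n1_0=1000.0, dt=0.1, steps=2000):
    """Forward-Euler update of the chain N1 -> N2 -> (stable), returning activities."""
    n1, n2, history = n1_0, 0.0, []
    for k in range(steps):
        history.append((k * dt, lam1 * n1, lam2 * n2))   # (t, A1, A2)
        dn1 = -lam1 * n1 * dt
        dn2 = (lam1 * n1 - lam2 * n2) * dt
        n1, n2 = n1 + dn1, n2 + dn2
    return history

# Illustrative decay constants (arbitrary time units): a short-lived daughter of a
# long-lived parent mimics secular equilibrium; comparable lifetimes give a
# transient-equilibrium-like ratio larger than one.
for lam1, lam2, label in [(0.001, 0.5, "secular-like"), (0.01, 0.03, "transient-like")]:
    t, a1, a2 = decay_chain(lam1, lam2)[-1]
    print(f"{label}: at t = {t:.0f}, daughter/parent activity ratio = {a2 / a1:.2f}")
```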

  10. Game equilibrium models I evolution and game dynamics

    1991-01-01

    There are two main approaches towards the phenotypic analysis of frequency dependent natural selection. First, there is the approach of evolutionary game theory, which was introduced in 1973 by John Maynard Smith and George R. Price. In this theory, the dynamical process of natural selection is not modeled explicitly. Instead, the selective forces acting within a population are represented by a fitness function, which is then analysed according to the concept of an evolutionarily stable strategy or ESS. Later on, the static approach of evolutionary game theory has been complemented by a dynamic stability analysis of the replicator equations. Introduced by Peter D. Taylor and Leo B. Jonker in 1978, these equations specify a class of dynamical systems, which provide a simple dynamic description of a selection process. Usually, the investigation of the replicator dynamics centers around a stability analysis of their stationary solutions. Although evolutionary stability and dynamic stability both intend to charac...

  11. Optimising the introduction of multiple childhood vaccines in Japan: A model proposing the introduction sequence achieving the highest health gains.

    Standaert, Baudouin; Schecroun, Nadia; Ethgen, Olivier; Topachevskyi, Oleksandr; Morioka, Yoriko; Van Vlaenderen, Ilse

    2017-12-01

    Many countries struggle with the prioritisation of introducing new vaccines because of budget limitations and a lack of focus on public health goals. A model has been developed that defines how specific health goals can be optimised through immunisation within vaccination budget constraints. Japan, as a country example, could introduce 4 new pediatric vaccines targeting influenza, rotavirus, pneumococcal disease and mumps, with known burden of disease, vaccine efficacies and maximum achievable coverages. Operating under budget constraints, the Portfolio-model for the Management of Vaccines (PMV) identifies the optimal vaccine ranking and combination for achieving the maximum QALY gain in children over a period of 10 calendar years, yielding an optimal sequence of vaccine introduction (mumps [1st], followed by influenza [2nd], rotavirus [3rd], and pneumococcal [4th]). With exactly the same budget but without vaccine ranking, the total QALY gain can be 20% lower. The PMV model could be a helpful tool for decision makers in environments with a limited budget where vaccines have to be selected in an effort to optimise specific health goals. Copyright © 2017 GlaxoSmithKline Biologicals SA. Published by Elsevier B.V. All rights reserved.

  12. Comparing two non-equilibrium approaches to modelling of a free-burning arc

    Baeva, M; Uhrlandt, D; Benilov, M S; Cunha, M D

    2013-01-01

    Two models of high-pressure arc discharges are compared with each other and with experimental data for an atmospheric-pressure free-burning arc in argon for arc currents of 20–200 A. The models account for space-charge effects and thermal and ionization non-equilibrium in somewhat different ways. One model considers space-charge effects, thermal and ionization non-equilibrium in the near-cathode region and thermal non-equilibrium in the bulk plasma. The other model considers thermal and ionization non-equilibrium in the entire arc plasma and space-charge effects in the near-cathode region. Both models are capable of predicting the arc voltage in fair agreement with experimental data. Differences are observed in the arc attachment to the cathode, which do not strongly affect the near-cathode voltage drop and the total arc voltage for arc currents exceeding 75 A. For lower arc currents the difference is significant but the arc column structure is quite similar and the predicted bulk plasma characteristics are relatively close to each other. (paper)

  13. Continuum model of non-equilibrium solvation and solvent effect on ultra-fast processes

    Li Xiangyuan; Fu Kexiang; Zhu Quan

    2006-01-01

    In the past 50 years, non-equilibrium solvation theory for ultra-fast processes such as electron transfer and light absorption/emission has attracted particular interest. A great deal of research effort was made in this area, and various models that give reasonable qualitative descriptions of quantities such as the solvent reorganization energy in electron transfer and the spectral shift in solution were developed within the framework of continuous medium theory. In a series of publications by the authors, we clarified that the expression for the non-equilibrium electrostatic free energy, which occupies the dominant position in non-equilibrium solvation and serves as the basis of the various models, was nevertheless incorrectly formulated. In this work, the authors argue that reversible charging work integration was inappropriately applied in the past to an irreversible path linking the equilibrium and the non-equilibrium states. Because the step from the equilibrium state to the non-equilibrium state is in fact thermodynamically irreversible, the conventional expression for the non-equilibrium free energy, which was deduced in different ways, is unreasonable. Here the authors derive the non-equilibrium free energy in a quite different form according to the Jackson integral formula. Such a difference casts doubt on models including the famous Marcus two-sphere model for the solvent reorganization energy of electron transfer and the Lippert-Mataga equation for the spectral shift. By introducing the concept of 'spring energy' arising from medium polarizations, the energy constitution of the non-equilibrium state is highlighted. For a solute-solvent system, the authors separate the total electrostatic energy into different components: the self-energies of the solute charge and of the polarized charge, the interaction energy between them and the 'spring energy' of the solvent polarization. With detailed reasoning and derivation, our formula for the non-equilibrium free energy can be reached through different ways. Based on the

  14. A development of multi-Species mass transport model considering thermodynamic phase equilibrium

    Hosokawa, Yoshifumi; Yamada, Kazuo; Johannesson, Björn

    2008-01-01

    In this paper, a multi-species mass transport model, which can predict the time-dependent variation of the pore solution and solid-phase composition due to mass transport into the hardened cement paste, has been developed. Since most of the multi-species models established previously, based on the Poisson-Nernst-Planck theory, did not involve the modelling of chemical processes, the transport model has been coupled to a thermodynamic equilibrium model in this study. By the coupling of the thermodynamic equilibrium model, the multi-species model can simulate many different behaviours in hardened cement paste, such as: (i) variation in solid-phase composition when using different types of cement, (ii) physicochemical evaluation of steel corrosion initiation behaviour by calculating the molar ratio of chloride ion to hydroxide ion [Cl]/[OH] in the pore solution, and (iii) complicated changes of solid-phase composition caused ...

  15. A new model of equilibrium subsurface hydration on Mars

    Hecht, M. H.

    2011-12-01

    One of the surprises of the Odyssey mission was the discovery by the Gamma Ray Spectrometer (GRS) suite of large concentrations of water-equivalent hydrogen (WEH) in the shallow subsurface at low latitudes, consistent with 5-7% regolith water content by weight (Mitrofanov et al. Science 297, p. 78, 2002; Feldman et al. Science 297, p. 75, 2002). Water at low latitudes on Mars is generally believed to be sequestered in the form of hydrated minerals. Numerous attempts have been made to relate the global map of WEH to specific mineralogy. For example Feldman et al. (Geophys. Res. Lett., 31, L16702, 2004) associated an estimated 10% sulfate content of the soil with epsomite (51% water), hexahydrite (46% water) and kieserite (13% water). In such studies, stability maps have been created by assuming equilibration of the subsurface water vapor density with a global mean annual column mass vapor density. Here it is argued that this value significantly understates the subsurface humidity. Results from the Phoenix mission are used to suggest that the midday vapor pressure measured just above the surface is a better proxy for the saturation vapor pressure of subsurface hydrous minerals. The measured frostpoint at the Phoenix site was found to be equal to the surface temperature by night and the modeled temperature at the top of the ice table by day (Zent et al. J. Geophys. Res., 115, E00E14, 2010). It was proposed by Hecht (41st LPSC abstract #1533, 2010) that this phenomenon results from water vapor trapping at the coldest nearby surface. At night, the surface is colder than the surface of the ice table; by day it is warmer. Thus, at night, the subsurface is bounded by a fully saturated layer of cold water frost or adsorbed water at the surface, not by the dry boundary layer itself. This argument is not strongly dependent on the particular saturation vapor pressure (SVP) of ice or other subsurface material, only on the thickness of the dry layer. Specifically, the diurnal

  16. Feeder Type Optimisation for the Plain Flow Discharge Process of an Underground Hopper by Discrete Element Modelling

    Jan Nečas

    2017-09-01

    This paper describes the optimisation of a conveyor taking coal from an underground hopper at a coal transfer station. The original design, based on a chain conveyor, encountered operational problems that limited its continuous operation. Discrete element modelling (DEM) was chosen to optimise the transport. DEM simulations allow device design modifications to be made directly in the 3D CAD model, after which the simulation makes it possible to evaluate whether the adjustment was successful. By simulating the initial state of coal extraction using the chain conveyor, trouble spots were identified that caused operational failures. The main problem was the increased resistance during removal of material from the underground hopper; these resistances against material movement were not considered in the original design at all. In the next step, structural modifications of the problematic nodes were made, for example a reduction of the storage space and the installation of passive elements in the interior of the underground hopper. These modifications were not effective enough, so the type of conveyor was changed from a drag chain conveyor to a belt conveyor. The simulation of material extraction using a belt conveyor showed a significant reduction in resistance parameters while maintaining the required transport performance.

  17. An Experimental Facility to Validate Ground Source Heat Pump Optimisation Models for the Australian Climate

    Yuanshen Lu

    2017-01-01

    Ground source heat pumps (GSHPs) are one of the most widespread forms of geothermal energy technology. They utilise the near-constant temperature of the ground below the frost line to achieve energy efficiencies two or three times that of conventional air-conditioners, consequently allowing a significant offset in electricity demand for space heating and cooling. Relatively mature GSHP markets are established in Europe and North America. GSHP implementation in Australia, however, is limited, due to high capital cost, uncertainties regarding optimum designs for the Australian climate, and limited consumer confidence in the technology. Existing GSHP design standards developed in the Northern Hemisphere are likely to lead to suboptimal performance in Australia, where demand may be much more cooling-dominated. There is an urgent need to develop Australia’s own GSHP system optimisation principles on top of the industry standards to provide the confidence needed to bring the GSHP market out of its infancy. To assist in this, the Queensland Geothermal Energy Centre of Excellence (QGECE) has commissioned a fully instrumented GSHP experimental facility in Gatton, Australia, as a publicly accessible demonstration of the technology and a platform for systematic studies of GSHPs, including optimisation of design and operations. This paper presents a brief review of current GSHP use in Australia, the technical details of the Gatton GSHP facility, and an analysis of the observed cooling performance of this facility to date.

  18. Efficacy of an Optimised Bacteriophage Cocktail to Clear Clostridium difficile in a Batch Fermentation Model

    Janet Y. Nale

    2018-02-01

    Clostridium difficile infection (CDI) is a major cause of infectious diarrhea. Conventional antibiotics are not universally effective for all ribotypes, and can trigger dysbiosis, resistance and recurrent infection. Thus, novel therapeutics are needed to replace and/or supplement the current antibiotics. Here, we describe the activity of an optimised 4-phage cocktail to clear cultures of a clinical ribotype 014/020 strain in fermentation vessels spiked with combined fecal slurries from four healthy volunteers. After 5 h, we observed ~6-log reductions in C. difficile abundance in the prophylaxis regimen and complete C. difficile eradication after 24 h following either the prophylactic or the remedial regimen. Viability assays revealed that commensal enterococci, bifidobacteria, lactobacilli, total anaerobes, and enterobacteria were not affected by either regimen, but a ~2-log increase in enterobacteria, lactobacilli, and total anaerobe abundance was seen in the phage-only-treated vessel compared to the other treatments. The impact of the phage treatments on components of the microbiota was further assayed using metagenomic analysis. Together, our data support the therapeutic application of our optimised phage cocktail to treat CDI. Also, the increase in specific commensals observed in the phage-treated control could prevent further colonisation by C. difficile, and thus help protect against the infection becoming established.

  19. Energy taxes and wages in a general equilibrium model of production

    Thompson, H.

    2000-01-01

    Energy taxes are responsible for a good deal of observed differences in energy prices across states and countries. They alter patterns of production and income distribution. The present paper examines the potential of energy taxes to lower wages in a general equilibrium model of production with capital, labour and energy inputs. (Author)

  20. Non-existence of Steady State Equilibrium in the Neoclassical Growth Model with a Longevity Trend

    Hermansen, Mikkel Nørlem

    This paper demonstrates the non-existence of steady state equilibrium when considering the empirically observed trend in longevity. We extend a standard continuous time overlapping generations model by a longevity trend and are thereby able to study the properties of mortality-driven population growth. This turns out to be exceedingly complicated...

  1. Ginsburg criterion for an equilibrium superradiant model in the dynamic approach

    Trache, M.

    1991-10-01

    Some critical properties of an equilibrium superradiant model are discussed, taking into account the quantum fluctuations of the field variables. The critical region is calculated using the Ginsburg criterion, underlining the role of the atomic concentration as a control parameter of the phase transition. (author). 16 refs, 1 fig

  2. Deviations from mass transfer equilibrium and mathematical modeling of mixer-settler contactors

    Beyerlein, A.L.; Geldard, J.F.; Chung, H.F.; Bennett, J.E.

    1980-01-01

    This paper presents the mathematical basis for the computer model PUBG of mixer-settler contactors which accounts for deviations from mass transfer equilibrium. This is accomplished by formulating the mass balance equations for the mixers such that the mass transfer rate of nuclear materials between the aqueous and organic phases is accounted for. 19 refs

  3. Modeling chromatographic columns. Non-equilibrium packed-bed adsorption with non-linear adsorption isotherms

    Özdural, A.R.; Alkan, A.; Kerkhof, P.J.A.M.

    2004-01-01

    In this work a new mathematical model, based on non-equilibrium conditions, describing the dynamic adsorption of proteins in columns packed with spherical adsorbent particles is used to study the performance of chromatographic systems. Simulations of frontal chromatography, including axial

  4. Phase equilibrium of North Sea oils with polar chemicals: Experiments and CPA modeling

    Frost, Michael Grynnerup; Kontogeorgis, Georgios M.; von Solms, Nicolas

    2016-01-01

    This work consists of a combined experimental and modeling study for oil - MEG - water systems, of relevance to petroleum applications. We present new experimental liquid-liquid equilibrium data for the mutual solubility of two North Sea oils + MEG and North Sea oils + MEG + water systems...

  5. Coenzyme B12 model studies: Equilibrium constants for the pH ...

    Coenzyme B12 model studies: equilibrium constants for the pH-dependent axial ligation of benzyl(aquo)cobaloxime by various N- and S-donor ligands. D Sudarshan Reddy, N Ravi Kumar Reddy, V Sridhar and S Satyanarayana. Inorganic and Analytical ...

  6. Sudden transition from equilibrium stability to chaotic dynamics in a cautious tâtonnement model

    Foroni, Ilaria; Avellone, Alessandro; Panchuk, Anastasiia

    2015-01-01

    Tâtonnement processes are usually interpreted as auctions, where a fictitious agent sets the prices until an equilibrium is reached and the trades are made. The main purpose of such processes is to explain how an economy comes to its equilibrium. It is well known that discrete time price adjustment processes may fail to converge and may exhibit periodic or even chaotic behavior. To avoid large price changes, a version of the discrete time tâtonnement process for reaching an equilibrium in a pure exchange economy, based on a cautious updating of the prices, was proposed two decades ago. This modification leads to a one-dimensional bimodal piecewise smooth map, for which we show analytically that degenerate bifurcations and border collision bifurcations play a fundamental role in the asymptotic behavior of the model.
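
    To make the 'cautious updating' idea above concrete, the sketch below iterates a toy discrete-time tâtonnement for a two-good exchange economy in which the relative price change per period is capped; the excess demand function, adjustment speed and cap are illustrative assumptions and this is not the specific map analysed in the paper.

```python
import numpy as np

def excess_demand(p):
    """Stylised excess demand for good 1, with the price of good 2 normalised
    to 1 (purely illustrative functional form)."""
    return 2.0 / p - 1.5 - 0.2 * p

def cautious_tatonnement(p0, speed=1.2, cap=0.25, iters=60):
    """Discrete-time tatonnement with a cap on the relative price change,
    so each period's adjustment is 'cautious'."""
    p, path = p0, [p0]
    for _ in range(iters):
        proposed = speed * excess_demand(p)           # uncapped relative adjustment
        p = p * (1.0 + np.clip(proposed, -cap, cap))  # cautious (capped) update
        path.append(p)
    return np.array(path)

path = cautious_tatonnement(p0=0.5)
print("price of good 1 over the last few periods:", np.round(path[-5:], 4))
```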

  7. First-principles atomistic Wulff constructions for an equilibrium rutile TiO2 shape modeling

    Jiang, Fengzhou; Yang, Lei; Zhou, Dali; He, Gang; Zhou, Jiabei; Wang, Fanhou; Chen, Zhi-Gang

    2018-04-01

    Identifying the exposed surfaces of rutile TiO2 crystals is crucial for their industrial application and surface engineering. In this study, the shape of rutile TiO2 was constructed by applying the equilibrium thermodynamics of TiO2 crystals via first-principles density functional theory (DFT) and the Wulff construction. From the DFT calculations, the surface energies of six low-index stoichiometric facets of TiO2 are determined after calibration of the crystal structure. Then, combining the surface energy calculations with the Wulff construction, a geometric model of equilibrium rutile TiO2 is built up, which is consistent with the typical morphology of a fully developed equilibrium TiO2 crystal. This study provides fundamental theoretical guidance for the surface analysis and surface modification of rutile TiO2-based materials, from experimental research to industrial manufacturing.

  8. Modelling Thomson scattering for systems with non-equilibrium electron distributions

    Chapman D.A.

    2013-11-01

    We investigate the effect of non-equilibrium electron distributions in the analysis of Thomson scattering for a range of conditions of interest to inertial confinement fusion experiments. Firstly, a generalised one-component model based on quantum statistical theory is given in the random phase approximation (RPA). The Chihara expression for electron-ion plasmas is then adapted to include the new non-equilibrium electron physics. The theoretical scattering spectra for both diffuse and dense plasmas in which non-equilibrium electron distributions are expected to arise are considered. We find that such distributions strongly influence the spectra and are hence an important consideration for accurately determining the plasma conditions.

  9. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study is reported of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber. The key interest here is to evaluate the sensitivity of the chemical kinetics and submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from those in the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. Also, variation of spatial soot distribution and soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0% for both low and high density conditions are reproduced.

  10. Development of a bi-equilibrium model for biomass gasification in a downdraft bed reactor.

    Biagini, Enrico; Barontini, Federica; Tognotti, Leonardo

    2016-02-01

    This work proposes a simple and accurate tool for predicting the main parameters of biomass gasification (syngas composition, heating value, flow rate), suitable for process study and system analysis. A multizonal model based on non-stoichiometric equilibrium models and a repartition factor, simulating the bypass of pyrolysis products through the oxidant zone, was developed. The results of tests with different feedstocks (corn cobs, wood pellets, rice husks and vine pruning) in a demonstrative downdraft gasifier (350 kW) were used for validation. The average discrepancy between model and experimental results was up to 8 times smaller than that of the simple equilibrium model. The repartition factor was successfully related to the operating conditions and characteristics of the biomass to simulate different conditions of the gasifier (variation in potentiality, densification and mixing of feedstock) and to analyze the model sensitivity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Application of particle swarm optimisation for solving deteriorating inventory model with fluctuating demand and controllable deterioration rate

    Chen, Yu-Ren; Dye, Chung-Yuan

    2013-06-01

    In most of the inventory models in the literature, the deterioration rate of goods is viewed as an exogenous variable, which is not subject to control. In the real market, the retailer can reduce the deterioration rate of product by making effective capital investment in storehouse equipments. In this study, we formulate a deteriorating inventory model with time-varying demand by allowing preservation technology cost as a decision variable in conjunction with replacement policy. The objective is to find the optimal replenishment and preservation technology investment strategies while minimising the total cost over the planning horizon. For any given feasible replenishment scheme, we first prove that the optimal preservation technology investment strategy not only exists but is also unique. Then, a particle swarm optimisation is coded and used to solve the nonlinear programming problem by employing the properties derived from this article. Some numerical examples are used to illustrate the features of the proposed model.
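
    Since the abstract above leaves the optimisation machinery implicit, the sketch below shows a generic global-best particle swarm optimisation loop of the kind such studies typically employ, applied to a made-up stand-in for a total-cost function of cycle length and preservation-technology investment; the cost surface, bounds and PSO constants are all illustrative assumptions and not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_cost(x):
    """Stand-in for an inventory model's total cost over the planning horizon
    (illustrative only): x[0] is the replenishment cycle length, x[1] the
    preservation-technology investment that damps the deterioration rate."""
    cycle, invest = x
    return 40.0 / cycle + 2.0 * cycle * np.exp(-0.3 * invest) + 1.5 * invest

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimisation over box constraints."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best, cost = pso(total_cost, bounds=[(0.1, 10.0), (0.0, 20.0)])
print("best (cycle length, investment):", np.round(best, 3), "cost:", round(float(cost), 3))
```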

  12. Adaptive behaviour and multiple equilibrium states in a predator-prey model.

    Pimenov, Alexander; Kelly, Thomas C; Korobeinikov, Andrei; O'Callaghan, Michael J A; Rachinskii, Dmitrii

    2015-05-01

    There is evidence that multiple stable equilibrium states are possible in real-life ecological systems. Phenomenological mathematical models which exhibit such properties can be constructed rather straightforwardly. For instance, for a predator-prey system this result can be achieved through the use of a non-monotonic functional response for the predator. However, while formal formulation of such a model is not a problem, the biological justification for such functional responses and models is usually inconclusive. In this note, we explore a conjecture that a multitude of equilibrium states can be caused by an adaptation of animal behaviour to changes of environmental conditions. In order to verify this hypothesis, we consider a simple predator-prey model, which is a straightforward extension of the classic Lotka-Volterra predator-prey model. In this model, we made an intuitively transparent assumption that the prey can change a mode of behaviour in response to the pressure of predation, choosing either "safe" or "risky" (or "business as usual") behaviour. In order to avoid a situation where one of the modes gives an absolute advantage, we introduce the concept of the "cost of a policy" into the model. A simple conceptual two-dimensional predator-prey model, which is minimal with this property and does not rely on odd functional responses, higher dimensionality or behaviour change for the predator, exhibits two stable co-existing equilibrium states with basins of attraction separated by the separatrix of a saddle point. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Development of the hard and soft constraints based optimisation model for unit sizing of the hybrid renewable energy system designed for microgrid applications

    Sundaramoorthy, Kumaravel

    2017-02-01

    Hybrid energy system (HES) based electricity generation has become a more attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly based on an optimisation stage. This article discusses an optimal unit sizing model whose objective function is to minimise the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimisation methodology. Feasibility studies and sensitivity analysis on the optimal HES are discussed elaborately in this article. A comparison with the Hybrid Optimization Model for Electric Renewable (HOMER) optimisation model has been carried out for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy compared with the existing method.

  14. Using marketing theory to inform strategies for recruitment: a recruitment optimisation model and the txt2stop experience

    2014-01-01

    Background: Recruitment is a major challenge for many trials; just over half reach their targets and almost a third resort to grant extensions. The economic and societal implications of this shortcoming are significant. Yet, we have a limited understanding of the processes that increase the probability that recruitment targets will be achieved. Accordingly, there is an urgent need to bring analytical rigour to the task of improving recruitment, thereby increasing the likelihood that trials reach their recruitment targets. This paper presents a conceptual framework that can be used to improve recruitment to clinical trials. Methods: Using a case-study approach, we reviewed the range of initiatives that had been undertaken to improve recruitment in the txt2stop trial using qualitative (semi-structured interviews with the principal investigator) and quantitative (recruitment) data analysis. Later, the txt2stop recruitment practices were compared to a previous model of marketing a trial and to key constructs in social marketing theory. Results: Post hoc, we developed a recruitment optimisation model to serve as a conceptual framework to improve recruitment to clinical trials. A core premise of the model is that improving recruitment needs to be an iterative, learning process. The model describes three essential activities: i) recruitment phase monitoring, ii) marketing research, and iii) the evaluation of current performance. We describe the initiatives undertaken by the txt2stop trial and the results achieved, as an example of the use of the model. Conclusions: Further research should explore the impact of adopting the recruitment optimisation model when applied to other trials. PMID:24886627

  15. Modified Ammonia Removal Model Based on Equilibrium and Mass Transfer Principles

    Shanableh, A.; Imteaz, M.

    2010-01-01

    Yoon et al. [1] presented an approximate mathematical model to describe ammonia removal from an experimental batch reactor system with gaseous headspace. The development of the model was initially based on assuming instantaneous equilibrium between ammonia in the aqueous and gas phases. In the model, a 'saturation factor, β' was defined as a constant and used to check whether the equilibrium assumption was appropriate. The authors used the trends established by the estimated β values to conclude that the equilibrium assumption was not valid. The authors presented valuable experimental results obtained using a carefully designed system, and the model used to analyze the results accounted for the following effects: speciation of ammonia between NH3 and NH4+ as a function of pH; temperature dependence of the reaction constants; and air flow rate. In this article, an alternative model based on the exact solution of the governing mass-balance differential equations was developed and used to describe ammonia removal without relying on the use of the saturation factor. The modified model was also extended to mathematically describe the pH dependence of the ammonia removal rate, in addition to accounting for the speciation of ammonia, the temperature dependence of the reaction constants, and the air flow rate. The modified model was used to extend the analysis of the original experimental data presented by Yoon et al. [1], and the results matched the theory in an excellent manner.
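
    The pH dependence described above comes from the fact that only the un-ionised NH3 fraction is strippable. The closed-form sketch below combines that speciation fraction (pKa of NH4+ taken as roughly 9.25 at 25 °C) with a lumped first-order stripping constant; it is a generic illustration of the mechanism, not the authors' exact mass-balance solution, and the concentration, rate constant and times are invented for the example.

```python
import numpy as np

def nh3_fraction(pH, pKa=9.25):
    """Fraction of total ammonia present as strippable, un-ionised NH3."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def residual_ammonia(C0, k_strip, pH, t):
    """First-order removal acting only on the NH3 fraction; k_strip lumps
    the mass-transfer coefficient and air flow rate into one constant."""
    return C0 * np.exp(-k_strip * nh3_fraction(pH) * t)

C0, k_strip = 100.0, 0.8           # mg/L and 1/h, illustrative values
for pH in (8.0, 9.0, 10.0, 11.0):
    print(f"pH {pH}: {residual_ammonia(C0, k_strip, pH, t=5.0):5.1f} mg/L left after 5 h")
```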

  16. Modeling and Control of an Ornithopter for Non-Equilibrium Maneuvers

    Rose, Cameron Jarrel

    2015-01-01

    Flapping-winged flight is very complex, and it is difficult to efficiently model the unsteady airflow and nonlinear dynamics for online control. While steady state flight is well understood, transitions between flight regimes are not readily modeled or controlled. Maneuverability in non-equilibrium flight, which birds and insects readily exhibit in nature, is necessary to operate in the types of cluttered environments that small-scale flapping-winged robots are best suited for. The advantages...

  17. Examining Policies to Reduce Homelessness Using a General Equilibrium Model of the Housing Market

    Mansur, Erin; Quigley, John M.; Raphael, Steven; Smolensky, Eugene

    2003-01-01

    In this paper, we use a general equilibrium simulation model to assess the potential impacts on homelessness of various housing-market policy interventions. We calibrate the model to the four largest metropolitan areas in California. We explore the welfare consequences and the effects on homelessness of three housing-market policy interventions: extending housing vouchers to all low-income households, subsidizing all landlords, and subsidizing those landlords who supply low-income housing. ...

  18. Organic Rankine Cycle for Residual Heat to Power Conversion in Natural Gas Compressor Station. Part I: Modelling and Optimisation Framework

    Chaczykowski, Maciej

    2016-06-01

    A basic organic Rankine cycle (ORC) and two variants of regenerative ORC have been considered for the recovery of exhaust heat from a natural gas compressor station. The modelling framework for ORC systems has been presented and the optimisation of the systems was carried out with turbine power output as the variable to be maximized. The determination of ORC system design parameters was accomplished by means of the genetic algorithm. The study was aimed at estimating the thermodynamic potential of different ORC configurations with several working fluids employed. The first part of this paper describes the ORC equipment models which are employed to build an NLP formulation to tackle design problems representative of waste energy recovery on gas turbines driving natural gas pipeline compressors.

  19. Clarifications to the limitations of the s-α equilibrium model for gyrokinetic computations of turbulence

    Lapillonne, X.; Brunner, S.; Dannert, T.; Jolliet, S.; Marinoni, A.; Villard, L.; Goerler, T.; Jenko, F.; Merz, F.

    2009-01-01

    In the context of gyrokinetic flux-tube simulations of microturbulence in magnetized toroidal plasmas, different treatments of the magnetic equilibrium are examined. Considering the Cyclone DIII-D base case parameter set [Dimits et al., Phys. Plasmas 7, 969 (2000)], significant differences in the linear growth rates, the linear and nonlinear critical temperature gradients, and the nonlinear ion heat diffusivities are observed between results obtained using either an s-α or a magnetohydrodynamic (MHD) equilibrium. Similar disagreements have been reported previously [Redd et al., Phys. Plasmas 6, 1162 (1999)]. In this paper it is shown that these differences result primarily from the approximation made in the standard implementation of the s-α model, in which the straight field line angle is identified to the poloidal angle, leading to inconsistencies of order ε (ε = a/R is the inverse aspect ratio, a the minor radius and R the major radius). An equilibrium model with concentric, circular flux surfaces and a correct treatment of the straight field line angle gives results very close to those using a finite ε, low β MHD equilibrium. Such detailed investigation of the equilibrium implementation is of particular interest when comparing flux tube and global codes. It is indeed shown here that previously reported agreements between local and global simulations in fact result from the order ε inconsistencies in the s-α model, coincidentally compensating finite ρ* effects in the global calculations, where ρ* = ρ_s/a, with ρ_s the ion sound Larmor radius. True convergence between local and global simulations is finally obtained by correct treatment of the geometry in both cases, and considering the appropriate ρ* → 0 limit in the latter case.

  20. A model for non-equilibrium, non-homogeneous two-phase critical flow

    Bassel, Wageeh Sidrak; Ting, Daniel Kao Sun

    1999-01-01

    Critical two-phase flow is a very important phenomenon in nuclear reactor technology for the analysis of loss of coolant accidents. Several recent papers, Lee and Shrock (1990), Dagan (1993) and Downar (1996), among others, treat the phenomenon using complex models which require heuristic parameters such as relaxation constants or interfacial transfer models. In this paper a mathematical model for one-dimensional non-equilibrium and non-homogeneous two-phase flow in a constant area duct is developed. The model is constituted of three conservation equations, for mass, momentum and energy. Two important variables are defined in the model: the equilibrium constant in the energy equation and the impulse function in the momentum equation. In the energy equation, the enthalpy of the liquid phase is determined by a linear interpolation function between the liquid phase enthalpy at the inlet condition and the saturated liquid enthalpy at the local pressure. The interpolation coefficient is the equilibrium constant. The momentum equation is expressed in terms of the impulse function. It is considered that there is slip between the liquid and vapor phases, the liquid phase is in a metastable state and the vapor phase is in a saturated stable state. The model is not heuristic in nature and does not require complex interface transfer models. It is proved numerically that for the critical condition the partial derivative of the two-phase pressure drop with respect to the local pressure or to the phase velocity must be zero. This criterion is demonstrated by numerical examples. The experimental work of Fauske (1962) and Jeandey (1982) was analyzed, resulting in estimated numerical values for important parameters like the slip ratio, the equilibrium constant and the two-phase frictional drop. (author)

  1. Once more on the equilibrium-point hypothesis (lambda model) for motor control.

    Feldman, A G

    1986-03-01

    The equilibrium control hypothesis (lambda model) is considered with special reference to the following concepts: (a) the length-force invariant characteristic (IC) of the muscle together with central and reflex systems subserving its activity; (b) the tonic stretch reflex threshold (lambda) as an independent measure of central commands descending to alpha and gamma motoneurons; (c) the equilibrium point, defined in terms of lambda, IC and static load characteristics, which is associated with the notion that posture and movement are controlled by a single mechanism; and (d) the muscle activation area (a reformulation of the "size principle")--the area of kinematic and command variables in which a rank-ordered recruitment of motor units takes place. The model is used for the interpretation of various motor phenomena, particularly electromyographic patterns. The stretch reflex in the lambda model has no mechanism to follow-up a certain muscle length prescribed by central commands. Rather, its task is to bring the system to an equilibrium, load-dependent position. Another currently popular version defines the equilibrium point concept in terms of alpha motoneuron activity alone (the alpha model). Although the model imitates (as does the lambda model) spring-like properties of motor performance, it nevertheless is inconsistent with a substantial data base on intact motor control. An analysis of alpha models, including their treatment of motor performance in deafferented animals, reveals that they suffer from grave shortcomings. It is concluded that parameterization of the stretch reflex is a basis for intact motor control. Muscle deafferentation impairs this graceful mechanism though it does not remove the possibility of movement.

  2. Thermodynamic Modeling and Optimization of the Copper Flash Converting Process Using the Equilibrium Constant Method

    Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Chen, Zhuo; Wang, Jin-liang

    2018-05-01

    Based on the principle of multiphase equilibrium, a mathematical model of the copper flash converting process was established by the equilibrium constant method, and a computational system was developed with the use of the MetCal software platform. The mathematical model was validated by comparing simulated outputs, industrial data, and published data. To obtain high-quality blister copper, a low copper content in slag, and an increased impurity removal rate, the model was then applied to investigate the effects of the operational parameters [oxygen/feed ratio (R_OF), flux rate (R_F), and converting temperature (T)] on the product weights, compositions, and the distribution behaviors of impurity elements. The optimized results showed that R_OF, R_F, and T should be controlled at approximately 156 Nm3/t, within 3.0 pct, and at approximately 1523 K (1250 °C), respectively.

  3. 30th International School of Mathematics "G Stampacchia" : Equilibrium Problems and Variational Models "Ettore Majorana"

    Giannessi, Franco; Maugeri, Antonino; Equilibrium Problems and Variational Models

    2000-01-01

    The volume, devoted to variational analysis and its applications, collects selected and refereed contributions, which provide an outline of the field. The meeting of the title "Equilibrium Problems and Variational Models", which was held in Erice (Sicily) in the period June 23 - July 2 2000, was the occasion of the presentation of some of these papers; other results are a consequence of a fruitful and constructive atmosphere created during the meeting. New results, which enlarge the field of application of variational analysis, are presented in the book; they deal with the vectorial analysis, time dependent variational analysis, exact penalization, high order derivatives, geometric aspects, distance functions and log-quadratic proximal methodology. The new theoretical results allow one to improve in a remarkable way the study of significant problems arising from the applied sciences, as continuum model of transportation, unilateral problems, multicriteria spatial price models, network equilibrium...

  4. Thermodynamic parameters for mixtures of quartz under shock wave loading in views of the equilibrium model

    Maevskii, K. K.; Kinelovskii, S. A.

    2015-01-01

    The numerical results of modeling of shock wave loading of mixtures with the SiO2 component are presented. The TEC (thermodynamic equilibrium component) model is employed to describe the behavior of solid and porous multicomponent mixtures and alloys under shock wave loading. State equations of a Mie–Grüneisen type are used to describe the behavior of the condensed phases, taking into account the temperature dependence of the Grüneisen coefficient; the gas in the pores is treated as one of the components of the mixture. The model is based on the assumption that all components of the mixture under shock-wave loading are in thermodynamic equilibrium. The calculation results are compared with the experimental data derived by various authors. The behavior of a mixture containing components with a phase transition under high dynamic loads is described.

  5. Mechanism of alkalinity lowering and chemical equilibrium model of high fly ash silica fume cement

    Hoshino, Seiichi; Honda, Akira; Negishi, Kumi

    2014-01-01

    The mechanism of alkalinity lowering of a High Fly ash Silica fume Cement (HFSC) under liquid/solid ratio conditions where the pH is largely controlled by the soluble alkali components (Region I) has been studied. This mechanism was incorporated in the chemical equilibrium model of HFSC. As a result, it is suggested that the dissolution and precipitation behavior of SO4^2- partially contributes to alkalinity lowering of HFSC in Region I. A chemical equilibrium model of HFSC incorporating alkali (Na, K) adsorption, which was presumed as another contributing factor of the alkalinity lowering effect, was also developed, and an HFSC immersion experiment was analyzed using the model. The results of the developed model showed good agreement with the experiment results. From the above results, it was concluded that the alkalinity lowering of HFSC in Region I was attributed to both the dissolution and precipitation behavior of SO4^2- and alkali adsorption, in addition to the absence of Ca(OH)2. A chemical equilibrium model of HFSC incorporating alkali and SO4^2- adsorption was also proposed. (author)

  6. Validation of vibration-dissociation coupling models in hypersonic non-equilibrium separated flows

    Shoev, G.; Oblapenko, G.; Kunova, O.; Mekhonoshina, M.; Kustova, E.

    2018-03-01

    The validation of recently developed models of vibration-dissociation coupling is discussed in application to numerical solutions of the Navier-Stokes equations in a two-temperature approximation for a binary N2/N flow. Vibrational-translational relaxation rates are computed using the Landau-Teller formula generalized for strongly non-equilibrium flows obtained in the framework of the Chapman-Enskog method. Dissociation rates are calculated using the modified Treanor-Marrone model taking into account the dependence of the model parameter on the vibrational state. The solutions are compared to those obtained using traditional Landau-Teller and Treanor-Marrone models, and it is shown that for high-enthalpy flows, the traditional and recently developed models can give significantly different results. The computed heat flux and pressure on the surface of a double cone are in a good agreement with experimental data available in the literature on low-enthalpy flow with strong thermal non-equilibrium. The computed heat flux on a double wedge qualitatively agrees with available data for high-enthalpy non-equilibrium flows. Different contributions to the heat flux calculated using rigorous kinetic theory methods are evaluated. Quantitative discrepancy of numerical and experimental data is discussed.

  7. Post-CHF heat transfer: a non-equilibrium, relaxation model

    Jones, O.C. Jr.; Zuber, N.

    1977-01-01

    Existing phenomenological models of heat transfer in the non-equilibrium, liquid-deficient, dispersed flow regime can sometimes predict the thermal behavior fairly well but are quite complex, requiring coupled simultaneous differential equations to describe the axial gradients of mass and energy along with those of droplet acceleration and size. In addition, empirical relations are required to express the droplet breakup and increased effective heat transfer due to holdup. This report describes the development of a different approach to the problem. It is shown that the non-equilibrium component of the total energy can be expressed as a first order, inhomogeneous relaxation equation in terms of one variable coefficient termed the Superheat Relaxation number. A demonstration is provided to show that this relaxation number can be correlated using local variables in such a manner as to allow the single non-equilibrium equation to accurately calculate the effects of mass velocity and heat flux along with tube length, diameter, and critical quality for equilibrium qualities from 0.13 to over 3.0.

  8. Ageing in the trap model as a relaxation further away from equilibrium

    Bertin, Eric

    2013-01-01

    The ageing regime of the trap model, observed for a temperature T below the glass transition temperature T_g, is a prototypical example of a non-stationary out-of-equilibrium state. We characterize this state by evaluating its 'distance to equilibrium', defined as the Shannon entropy difference ΔS (in absolute value) between the non-equilibrium state and the equilibrium state with the same energy. We consider the time evolution of ΔS and show that, rather unexpectedly, ΔS(t) continuously increases in the ageing regime, if the number of traps is infinite, meaning that the 'distance to equilibrium' increases instead of decreasing in the relaxation process. For a finite number N of traps, ΔS(t) exhibits a maximum value before eventually converging to zero when equilibrium is reached. The time t* at which the maximum is reached however scales in a non-standard way as t* ∼ N^(T_g/2T), while the equilibration time scales as τ_eq ∼ N^(T_g/T). In addition, the curves ΔS(t) for different N are found to rescale as ln t/ln t*, instead of the more familiar scaling t/t*. (paper)
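
    The quantity ΔS described above can be estimated numerically for a small trap sample. The sketch below is a crude illustration under stated assumptions: a mean-field master equation for the occupation probabilities integrated by forward Euler, an energy-matched Boltzmann state found by bisection, and illustrative values of N, T_g and T; it is not the authors' computation and makes no claim about the scaling of t*.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy trap model: N traps with depths E ~ Exp(mean T_g); escape rate exp(-E/T).
N, Tg, T = 400, 1.0, 0.5            # T < T_g puts the sample in the glassy regime
E = rng.exponential(Tg, N)          # trap depths
rate = np.exp(-E / T)               # escape rates (unit attempt frequency)

def shannon(p):
    q = p[p > 0]
    return -np.sum(q * np.log(q))

def boltzmann(beta):
    """Equilibrium occupation of the finite trap sample at inverse temperature beta."""
    w = np.exp(beta * (E - E.max()))            # shifted for numerical stability
    return w / w.sum()

def beta_with_same_energy(target_depth, lo=0.0, hi=4.0 / T):
    """Bisection for the inverse temperature whose Boltzmann state has the same
    mean trap depth as the current non-equilibrium state."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.dot(boltzmann(mid), E) < target_depth:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Mean-field master equation dp_i/dt = -r_i p_i + (1/N) sum_j r_j p_j, forward Euler.
p = np.full(N, 1.0 / N)             # deep quench: start from a uniform occupation
dt = 0.01
for step in range(1, 200001):
    outflux = np.dot(rate, p)
    p = p + dt * (-rate * p + outflux / N)
    p = np.clip(p, 0.0, None)
    p /= p.sum()
    if step % 40000 == 0:
        beta_star = beta_with_same_energy(np.dot(p, E))
        dS = abs(shannon(boltzmann(beta_star)) - shannon(p))
        print(f"t = {step * dt:7.0f}   |Delta S| = {dS:.4f}")
```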

  9. Modeling the economic costs of disasters and recovery: analysis using a dynamic computable general equilibrium model

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2014-04-01

    Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic costs of disasters and recovery, taking into account the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and furthermore avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in an existing period; an investment-driven dynamic model is formulated according to available reconstruction data, and the rest of a given country's saving is set as an endogenous variable to balance the fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the difference between S1 and S0 and that between S2 and S0 can be interpreted as economic losses including reconstruction and excluding reconstruction, respectively. The study showed that output under S1 is closer to real data than that under S2. Economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 is reduced to 3% at the end of the government-led reconstruction activity, a level that should take another four years to achieve under S2.

  10. Removal of semivolatiles from soils by steam stripping. 1. A local equilibrium model

    Wilson, D.J.; Clarke, A.N.

    1992-01-01

    A mathematical model for the in-situ steam stripping of volatile and semivolatile organics from contaminated vadose zone soils at hazardous waste sites is developed. A single steam injection well is modeled. The model assumes that the pneumatic permeability of the soil is spatially constant and isotropic, that the adsorption isotherm of the contaminant is linear, and that the local equilibrium approximation is adequate. The model is used to explore the streamlines and transit times of the injected steam as well as the effects of injection well depth and contaminant distribution on the time required for remediation

  11. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
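
    The iterate-simulate-adapt loop described above can be illustrated on a tiny example. In the sketch below, drivers choose between two parallel routes according to individual route probabilities, the sampled flows determine congestion-dependent costs, and the probabilities are then nudged toward the currently cheaper route; the cost functions, the probability-update rule and all constants are illustrative assumptions, not the queuing model or update rule of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def route_costs(flow1, flow2):
    """Congestion-dependent travel costs on two parallel routes (illustrative)."""
    return 10.0 + flow1 / 10.0, 15.0 + flow2 / 20.0

n_drivers = 300
p_route1 = np.full(n_drivers, 0.5)   # each driver's probability of taking route 1
step = 0.05                          # learning rate of the probability update

for it in range(1, 101):
    takes_route1 = rng.random(n_drivers) < p_route1      # sample route choices
    f1 = int(takes_route1.sum())
    f2 = n_drivers - f1
    c1, c2 = route_costs(f1, f2)
    target = 1.0 if c1 < c2 else 0.0                      # currently cheaper route
    p_route1 += step * (target - p_route1)                # adapt route probabilities
    if it % 25 == 0:
        print(f"iter {it:3d}: flow1 = {f1:3d}, flow2 = {f2:3d}, c1 = {c1:.2f}, c2 = {c2:.2f}")
```

    At the approximate user equilibrium of this toy network the two costs coincide at a split of roughly 133 versus 167 drivers, and the sampled flows fluctuate around that split as the iterations proceed.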

  12. A novel multiphysic model for simulation of swelling equilibrium of ionized thermal-stimulus responsive hydrogels

    Li, Hua; Wang, Xiaogui; Yan, Guoping; Lam, K. Y.; Cheng, Sixue; Zou, Tao; Zhuo, Renxi

    2005-03-01

    In this paper, a novel multiphysic mathematical model is developed for simulation of the swelling equilibrium of ionized temperature-sensitive hydrogels with the volume phase transition, and it is termed the multi-effect-coupling thermal-stimulus (MECtherm) model. This model consists of the steady-state Nernst-Planck equation, the Poisson equation and the swelling equilibrium governing equation based on Flory's mean field theory, in which two types of polymer-solvent interaction parameters, as functions of temperature and polymer-network volume fraction, are specified with or without consideration of the hydrogen bond interaction. In order to examine the MECtherm model, which consists of nonlinear partial differential equations, a meshless Hermite-Cloud method is used for numerical solution of the one-dimensional swelling equilibrium of thermal-stimulus responsive hydrogels immersed in a bathing solution. The computed results are in very good agreement with experimental data for the variation of volume swelling ratio with temperature. The influences of the salt concentration and initial fixed-charge density on the variations of the volume swelling ratio of the hydrogels, the mobile ion concentrations and the electric potential of both the interior hydrogel and the exterior bathing solution are discussed in detail.

  13. Two-temperature chemically non-equilibrium modelling of transferred arcs

    Baeva, M; Kozakov, R; Gorchakov, S; Uhrlandt, D

    2012-01-01

    A two-temperature chemically non-equilibrium model describing in a self-consistent manner the heat transfer, the plasma chemistry, the electric and magnetic field in a high-current free-burning arc in argon has been developed. The model is aimed at unifying the description of a thermionic tungsten cathode, a flat copper anode, and the arc plasma including the electrode sheath regions. The heat transfer in the electrodes is coupled to the plasma heat transfer considering the energy fluxes onto the electrode boundaries with the plasma. The results of the non-equilibrium model for an arc current of 200 A and an argon flow rate of 12 slpm are presented along with results obtained from a model based on the assumption of local thermodynamic equilibrium (LTE) and from optical emission spectroscopy. The plasma shows a near-LTE behaviour along the arc axis and in a region surrounding the axis which becomes wider towards the anode. In the near-electrode regions, a large deviation from LTE is observed. The results are in good agreement with experimental findings from optical emission spectroscopy. (paper)

  14. Experimental design optimisation: theory and application to estimation of receptor model parameters using dynamic positron emission tomography

    Delforge, J.; Syrota, A.; Mazoyer, B.M.

    1989-01-01

    General framework and various criteria for experimental design optimisation are presented. The methodology is applied to estimation of receptor-ligand reaction model parameters with dynamic positron emission tomography data. The possibility of improving parameter estimation using a new experimental design combining an injection of the β+-labelled ligand and an injection of the cold ligand is investigated. Numerical simulations predict remarkable improvement in the accuracy of parameter estimates with this new experimental design and particularly the possibility of separate estimations of the association constant (k+1) and of receptor density (B'max) in a single experiment. Simulation predictions are validated using experimental PET data in which parameter uncertainties are reduced by factors ranging from 17 to 1000. (author)
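
    A common way to make such design criteria concrete is to compare candidate sampling schedules through the Fisher information of the model sensitivities; the sketch below does this for a hypothetical two-parameter uptake model (a generic illustration, not the authors' receptor-ligand protocol or their dual-injection design):

```python
import numpy as np

# Hypothetical model y(t) = B * (1 - exp(-k * t)) with parameters theta = (k, B).
def sensitivities(t, k, B):
    dy_dk = B * t * np.exp(-k * t)
    dy_dB = 1.0 - np.exp(-k * t)
    return np.column_stack([dy_dk, dy_dB])

def d_criterion(t_samples, k=0.1, B=50.0, sigma=1.0):
    S = sensitivities(np.asarray(t_samples, float), k, B)
    fisher = S.T @ S / sigma**2        # Fisher information for i.i.d. Gaussian noise
    return np.linalg.det(fisher)       # D-optimality: larger determinant = tighter estimates

design_a = np.linspace(1, 60, 8)                 # evenly spread sampling times (min)
design_b = np.array([1, 2, 3, 4, 5, 6, 8, 60])   # early-heavy sampling plus one late frame
print("D-criterion, design A:", round(d_criterion(design_a), 1))
print("D-criterion, design B:", round(d_criterion(design_b), 1))
```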

  15. Optimised formation of blue Maillard reaction products of xylose and glycine model systems and associated antioxidant activity.

    Yin, Zi; Sun, Qian; Zhang, Xi; Jing, Hao

    2014-05-01

    A blue colour can be formed in the xylose (Xyl) and glycine (Gly) Maillard reaction (MR) model system. However, there are few studies on the reaction conditions for the blue Maillard reaction products (MRPs). The objective of this study is to investigate characteristic colour formation and antioxidant activities in four different MR model systems and to determine the optimum reaction conditions for the blue colour formation in a Xyl-Gly MR model system, using the random centroid optimisation program. The blue colour with an absorbance peak at 630 nm appeared before browning in the Xyl-Gly MR model system, while no blue colour formation but only browning was observed in the xylose-alanine, xylose-aspartic acid and glucose-glycine MR model systems. The Xyl-Gly MR model system also showed higher antioxidant activity than the other three model systems. The optimum conditions for blue colour formation were as follows: xylose and glycine ratio 1:0.16 (M:M), 0.20 mol L⁻¹ NaHCO₃, 406.1 mL L⁻¹ ethanol, initial pH 8.63, 33.7°C for 22.06 h, which gave a much brighter blue colour and a higher peak at 630 nm. A characteristic blue colour could be formed in the Xyl-Gly MR model system and the optimum conditions for the blue colour formation were proposed and confirmed. © 2013 Society of Chemical Industry.

  16. Why Enforcing its UNCAC Commitments Would be Good for Russia: A Computable General Equilibrium Model

    Michael P. BARRY

    2010-05-01

    Russia has ratified the UN Convention Against Corruption but has not successfully enforced it. This paper uses updated GTAP data to reconstruct a computable general equilibrium (CGE) model to quantify the macroeconomic effects of corruption in Russia. Corruption is found to cost the Russian economy billions of dollars a year. A conclusion of the paper is that implementing and enforcing the UNCAC would be of significant economic benefit to Russia and its people.

  17. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  18. Effects of Risk Aversion on Market Outcomes: A Stochastic Two-Stage Equilibrium Model

    Kazempour, Jalal; Pinson, Pierre

    2016-01-01

    This paper evaluates how different risk preferences of electricity producers alter the market-clearing outcomes. Toward this goal, we propose a stochastic equilibrium model for electricity markets with two settlements, i.e., day-ahead and balancing, in which a number of conventional and stochastic...... by its optimality conditions, resulting in a mixed complementarity problem. Numerical results from a case study based on the IEEE one-area reliability test system are derived and discussed....

  19. A Metastable Equilibrium Model for the Relative Abundances of Microbial Phyla in a Hot Spring

    Dick, Jeffrey M.; Shock, Everett L.

    2013-01-01

    Many studies link the compositions of microbial communities to their environments, but the energetics of organism-specific biomass synthesis as a function of geochemical variables have rarely been assessed. We describe a thermodynamic model that integrates geochemical and metagenomic data for biofilms sampled at five sites along a thermal and chemical gradient in the outflow channel of the hot spring known as “Bison Pool” in Yellowstone National Park. The relative abundances of major phyla in individual communities sampled along the outflow channel are modeled by computing metastable equilibrium among model proteins with amino acid compositions derived from metagenomic sequences. Geochemical conditions are represented by temperature and activities of basis species, including pH and oxidation-reduction potential quantified as the activity of dissolved hydrogen. By adjusting the activity of hydrogen, the model can be tuned to closely approximate the relative abundances of the phyla observed in the community profiles generated from BLAST assignments. The findings reveal an inverse relationship between the energy demand to form the proteins at equal thermodynamic activities and the abundance of phyla in the community. The distance from metastable equilibrium of the communities, assessed using an equation derived from energetic considerations that is also consistent with the information-theoretic entropy change, decreases along the outflow channel. Specific divergences from metastable equilibrium, such as an underprediction of the relative abundances of phototrophic organisms at lower temperatures, can be explained by considering additional sources of energy and/or differences in growth efficiency. Although the metabolisms used by many members of these communities are driven by chemical disequilibria, the results support the possibility that higher-level patterns of chemotrophic microbial ecosystems are shaped by metastable equilibrium states that depend on both the
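
    The core of such a calculation can be caricatured in a few lines: given a formation affinity for each "model protein" that depends on the hydrogen activity, metastable equilibrium assigns relative abundances through Boltzmann-like weights. The sketch below is a generic illustration with placeholder energies and stoichiometries, not the Bison Pool data or the paper's full speciation calculation.

```python
import numpy as np

R, T = 0.008314, 358.0   # kJ/(mol K) and an illustrative temperature of ~85 degC

# Placeholder standard Gibbs energies (kJ/mol) and H2 stoichiometric coefficients for
# three hypothetical model proteins; chosen only so the dominant group switches with a_H2.
dG0 = np.array([-40.0, -67.0, -95.0])
nu_H2 = np.array([2.0, 3.0, 4.0])

def relative_abundance(log10_a_H2):
    # Formation affinity; proteins consuming more H2 gain more affinity as a_H2 is raised.
    affinity = -dG0 + nu_H2 * R * T * np.log(10.0) * log10_a_H2
    w = np.exp((affinity - affinity.max()) / (R * T))
    return w / w.sum()

for log_a in (-6.0, -4.0, -2.0):   # "tuning" the hydrogen activity, as in the abstract
    print(log_a, relative_abundance(log_a).round(3))
```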

  20. Assimilation of tourism satellite accounts and applied general equilibrium models to inform tourism policy analysis

    Rossouw, Riaan; Saayman, Melville

    2011-01-01

    Historically, tourism policy analysis in South Africa has posed challenges to accurate measurement. The primary reason for this is that tourism is not designated as an 'industry' in standard economic accounts. This paper therefore demonstrates the relevance and need for applied general equilibrium (AGE) models to be completed and extended through an integration with tourism satellite accounts (TSAs) as a tool for policy makers (especially tourism policy makers) in South Africa. The paper sets...

  1. First principles modeling of hydrocarbons conversion in non-equilibrium plasma

    Deminsky, M.A.; Strelkova, M.I.; Durov, S.G.; Jivotov, V.K.; Rusanov, V.D.; Potapkin, B.V. [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation)

    2001-07-01

    A theoretical justification of the catalytic activity of non-equilibrium plasma in the hydrocarbon conversion process is presented in this paper. The detailed model of higher hydrocarbon conversion includes the gas-phase reactions, the chemistry of the growth of polycyclic aromatic hydrocarbons (PAHs) as precursors of soot particle formation, the formation of neutral and charged clusters and soot particles, and ion-molecular gas-phase and heterogeneous chemistry. The results of the theoretical analysis are compared with experimental results. (authors)

  2. A framework for modelling gene regulation which accommodates non-equilibrium mechanisms.

    Ahsendorf, Tobias; Wong, Felix; Eils, Roland; Gunawardena, Jeremy

    2014-12-05

    Gene regulation has, for the most part, been quantitatively analysed by assuming that regulatory mechanisms operate at thermodynamic equilibrium. This formalism was originally developed to analyse the binding and unbinding of transcription factors from naked DNA in eubacteria. Although widely used, it has made it difficult to understand the role of energy-dissipating, epigenetic mechanisms, such as DNA methylation, nucleosome remodelling and post-translational modification of histones and co-regulators, which act together with transcription factors to regulate gene expression in eukaryotes. Here, we introduce a graph-based framework that can accommodate non-equilibrium mechanisms. A gene-regulatory system is described as a graph, which specifies the DNA microstates (vertices), the transitions between microstates (edges) and the transition rates (edge labels). The graph yields a stochastic master equation for how microstate probabilities change over time. We show that this framework has broad scope by providing new insights into three very different ad hoc models, of steroid-hormone responsive genes, of inherently bounded chromatin domains and of the yeast PHO5 gene. We find, moreover, surprising complexity in the regulation of PHO5, which has not yet been experimentally explored, and we show that this complexity is an inherent feature of being away from equilibrium. At equilibrium, microstate probabilities do not depend on how a microstate is reached but, away from equilibrium, each path to a microstate can contribute to its steady-state probability. Systems that are far from equilibrium thereby become dependent on history and the resulting complexity is a fundamental challenge. To begin addressing this, we introduce a graph-based concept of independence, which can be applied to sub-systems that are far from equilibrium, and prove that history-dependent complexity can be circumvented when sub-systems operate independently. As epigenomic data become increasingly
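
    The core computation implied by such a graph description can be sketched in a few lines: assemble the rate (Laplacian) matrix from the labelled edges and solve the master equation for its steady state. The three-microstate graph below is purely illustrative and its rates are arbitrary.

```python
import numpy as np

# Illustrative 3-microstate graph: rates[i, j] is the transition rate from
# microstate i to microstate j (an edge label); zero means no edge.
rates = np.array([[0.0, 2.0, 0.0],
                  [1.0, 0.0, 3.0],
                  [0.5, 0.5, 0.0]])

# Master equation dp/dt = L @ p, with off-diagonal L[j, i] = rate(i -> j)
# and diagonal L[i, i] = -(total rate out of i).
L = rates.T - np.diag(rates.sum(axis=1))

# Steady state = normalised null vector of L.
eigvals, eigvecs = np.linalg.eig(L)
p_ss = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p_ss /= p_ss.sum()
print("steady-state microstate probabilities:", p_ss.round(4))

# Away from thermodynamic equilibrium the steady state carries net probability
# fluxes around cycles, i.e. detailed balance is broken:
print("net flux on edge 0<->1:", round(p_ss[0] * rates[0, 1] - p_ss[1] * rates[1, 0], 4))
```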

  3. On a unified presentation of the non-equilibrium two-phase flow models

    Boure, J.A.

    1975-01-01

    If the various existing one-dimensional two-phase flow models are consistent, they must appear as particular cases of more general models. It is shown that such is the case if, and only if, the mathematical form of the laws of the transfers between the phases is sufficiently general. These transfer laws control the non-equilibrium phenomena. A convenient general model is a particular form of the two-fluid model. This particular form involves three equations and three dependent variables characterizing the mixture, and three equations and three dependent variables characterizing the differences between the phases (slip, thermal non-equilibrium). The mathematical expressions of the transfer terms present in the above equations involve first-order partial derivatives of the dependent variables. The other existing models may be deduced from the general model by making assumptions on the fluid evolution. Several examples are given. The resulting unified presentation of the existing models enables a comparison of the implicit assumptions made in these models on the transfer laws. It is therefore a useful tool for the appraisal of the existing models and for the development of new models [fr]

  4. Development of a modified equilibrium model for biomass pilot-scale fluidized bed gasifier performance predictions

    Rodriguez-Alejandro, David A.; Nam, Hyungseok; Maglinao, Amado L.; Capareda, Sergio C.; Aguilera-Alvarado, Alberto F.

    2016-01-01

    The objective of this work is to develop a thermodynamic model considering non-stoichiometric restrictions. The model was validated against experimental work using a bench-scale fluidized bed gasifier with wood chips, dairy manure, and sorghum. The model was then used for a further parametric study to predict the performance of a pilot-scale fluidized bed biomass gasifier. Gibbs free energy minimization was applied to the modified equilibrium model, considering heat loss to the surroundings, carbon efficiency, and two non-equilibrium factors based on empirical correlations of ER (equivalence ratio) and gasification temperature. The model was in good agreement with the experimental data, with RMS < 4 for the produced gas. The parametric study ranges were 0.01 < ER < 0.99 and 500 °C < T < 900 °C to predict syngas concentrations and its LHV (lower heating value) for the optimization. Tar from WC (wood chip) gasification contained more aromatics than tar from manure gasification. A simulation of wood gasification tar was performed to predict the amount of tar at specific conditions. The operating conditions for the highest quality syngas were reconciled experimentally with three biomass wastes using a fluidized bed gasifier. The thermodynamic model was used to predict the gasification performance at conditions beyond the actual operation. - Highlights: • Syngas from experimental gasification was used to create a non-equilibrium model. • Different types of biomass (HTS, DM, and WC) were used for gasification modelling. • Different tar compositions were identified with a simulation of tar yields. • The optimum operating conditions were found through the developed model.
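
    The Gibbs-energy-minimisation step at the heart of such equilibrium gasification models can be posed as a small constrained optimisation. The toy example below equilibrates a five-species C-H-O gas using placeholder dimensionless Gibbs energies; the values, and the omission of the paper's heat-loss term and non-equilibrium correction factors, make this an illustration of the method only.

```python
import numpy as np
from scipy.optimize import minimize

species = ["CO", "CO2", "H2", "H2O", "CH4"]
g_rt = np.array([-14.0, -23.0, -2.0, -12.0, -4.0])   # G/RT at gasifier temperature (placeholders)

# Element balance: rows C, H, O; b = moles of each element entering with the feed (illustrative).
A = np.array([[1, 1, 0, 0, 1],
              [0, 0, 2, 2, 4],
              [1, 2, 0, 1, 0]])
b = np.array([1.0, 2.4, 1.2])

def gibbs(n):
    n = np.maximum(n, 1e-12)
    # Ideal-gas mixture at 1 atm: G/RT = sum_i n_i * (g_i/RT + ln x_i)
    return float(np.sum(n * (g_rt + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.full(5, 0.2), method="SLSQP",
               bounds=[(1e-8, None)] * 5,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])

for name, n in zip(species, res.x):
    print(f"{name:4s} {n:6.3f} mol")
```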

  5. One-Dimensional Transport with Equilibrium Chemistry (OTEQ) - A Reactive Transport Model for Streams and Rivers

    Runkel, Robert L.

    2010-01-01

    OTEQ is a mathematical simulation model used to characterize the fate and transport of waterborne solutes in streams and rivers. The model is formed by coupling a solute transport model with a chemical equilibrium submodel. The solute transport model is based on OTIS, a model that considers the physical processes of advection, dispersion, lateral inflow, and transient storage. The equilibrium submodel is based on MINTEQ, a model that considers the speciation and complexation of aqueous species, acid-base reactions, precipitation/dissolution, and sorption. Within OTEQ, reactions in the water column may result in the formation of solid phases (precipitates and sorbed species) that are subject to downstream transport and settling processes. Solid phases on the streambed may also interact with the water column through dissolution and sorption/desorption reactions. Consideration of both mobile (waterborne) and immobile (streambed) solid phases requires a unique set of governing differential equations and solution techniques that are developed herein. The partial differential equations describing physical transport and the algebraic equations describing chemical equilibria are coupled using the sequential iteration approach. The model's ability to simulate pH, precipitation/dissolution, and pH-dependent sorption provides a means of evaluating the complex interactions between instream chemistry and hydrologic transport at the field scale. This report details the development and application of OTEQ. Sections of the report describe model theory, input/output specifications, model applications, and installation instructions. OTEQ may be obtained over the Internet at http://water.usgs.gov/software/OTEQ.
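
    A stripped-down sketch of the sequential (operator-splitting) coupling used by this family of codes: advance the physical transport, then re-impose chemical equilibrium cell by cell. Plain upwind advection and a linear sorption equilibrium stand in here for OTIS transport and the full MINTEQ chemistry; every parameter is illustrative.

```python
import numpy as np

nx, dx, dt, u = 100, 1.0, 0.5, 1.0   # cells, cell size (m), time step (s), velocity (m/s)
kd = 0.5                             # linear sorption distribution coefficient (illustrative)
c = np.zeros(nx)                     # dissolved (mobile) concentration
s = np.zeros(nx)                     # sorbed (immobile) concentration
c_in = 1.0                           # upstream boundary concentration

for _ in range(200):
    # 1) Transport step: explicit upwind advection of the dissolved phase only.
    c[1:] -= u * dt / dx * (c[1:] - c[:-1])
    c[0] = c_in
    # 2) Equilibrium step: redistribute the total mass in each cell so that s = kd * c.
    total = c + s
    c = total / (1.0 + kd)
    s = total - c

print("dissolved front (every 10th cell):", c[::10].round(3))
```

    In this toy setup the sorbed phase retards the solute front by the familiar factor 1 + kd relative to a non-reactive tracer, which is the kind of chemistry-transport interaction the sequential iteration approach is designed to capture.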

  6. Energy, economy and equity interactions in a CGE [Computable General Equilibrium] model for Pakistan

    Naqvi, Farzana

    1997-01-01

    In the last three decades, Computable General Equilibrium modelling has emerged as an established field of applied economics. This book presents a CGE model developed for Pakistan with the hope that it will lay down a foundation for application of general equilibrium modelling for policy formation in Pakistan. As the country is being driven swiftly to become an open market economy, it becomes vital to find out the policy measures that can foster the objectives of economic planning, such as social equity, with the minimum loss of the efficiency gains from the open market resource allocations. It is not possible to build a model for practical use that can do justice to all sectors of the economy in modelling of their peculiar features. The CGE model developed in this book focuses on the energy sector. Energy is considered as one of the basic needs and an essential input to economic growth. Hence, energy policy has multiple criteria to meet. In this book, a case study has been carried out to analyse energy pricing policy in Pakistan using this CGE model of energy, economy and equity interactions. Hence, the book also demonstrates how researchers can model the fine details of one sector given the core structure of a CGE model. (UK)

  7. Verify Super Double-Heterogeneous Spherical Lattice Model for Equilibrium Fuel Cycle Analysis AND HTR Spherical Super Lattice Model for Equilibrium Fuel Cycle Analysis

    Gray S. Chang

    2005-01-01

    The advanced High Temperature gas-cooled Reactors (HTRs) currently being developed are able to achieve a simplification of safety through reliance on innovative features and passive systems. One of the innovative features in these HTRs is reliance on ceramic-coated fuel particles to retain the fission products even under extreme accident conditions. Traditionally, the effect of the random fuel kernel distribution in the fuel pebble/block is addressed through the use of the Dancoff correction factor in the resonance treatment. However, the Dancoff correction factor is a function of burnup and fuel kernel packing factor, which requires that the Dancoff correction factor be updated during Equilibrium Fuel Cycle (EqFC) analysis. An advanced KbK-sph model and a whole pebble super lattice model (PSLM) have been developed, which can address and update the burnup-dependent Dancoff effect during the EqFC analysis. The pebble homogeneous lattice model (HLM) is verified by comparing its burnup characteristics with the double-heterogeneous KbK-sph lattice model results. This study summarizes and compares the burnup results of the KbK-sph lattice model and the HLM. Finally, we discuss the Monte Carlo coupling with a fuel depletion and buildup code, ORIGEN-2, as a fuel burnup analysis tool and its PSLM-calculated results for the HTR EqFC burnup analysis

  8. Modeling of the (liquid + liquid) equilibrium of polydisperse hyperbranched polymer solutions by lattice-cluster theory

    Enders, Sabine; Browarzik, Dieter

    2014-01-01

    Highlights: • Calculation of the (liquid + liquid) equilibrium of hyperbranched polymer solutions. • Description of branching effects by the lattice-cluster theory. • Consideration of self- and cross association by chemical association models. • Treatment of the molar-mass polydispersity by the use of continuous thermodynamics. • Improvement of the theoretical results by the incorporation of polydispersity. - Abstract: The (liquid + liquid) equilibrium of solutions of hyperbranched polymers of the Boltorn type is modeled in the framework of lattice-cluster theory. The association effects are described by the chemical association models CALM (for self association) and ECALM (for cross association). For the first time the molar mass polydispersity of the hyperbranched polymers is taken into account. For this purpose continuous thermodynamics is applied. Because the segment-molar excess Gibbs free energy depends on the number average of the segment number of the polymer, the treatment is more general than in previous papers on continuous thermodynamics. The polydispersity is described by a generalized Schulz–Flory distribution. The calculation of the cloud-point curve reduces to two equations that have to be numerically solved. Conditions for the calculation of the spinodal curve and of the critical point are derived. The calculated results are compared to experimental data taken from the literature. For Boltorn solutions in non-polar solvents the polydispersity influence is small. In all other considered cases polydispersity influences the (liquid + liquid) equilibrium considerably. However, association and polydispersity influence the phase equilibrium in a complex manner. Taking polydispersity into account, the accuracy of the calculations is improved, especially in the diluted region

  9. Electric Circuit Model Analogy for Equilibrium Lattice Relaxation in Semiconductor Heterostructures

    Kujofsa, Tedi; Ayers, John E.

    2018-01-01

    The design and analysis of semiconductor strained-layer device structures require an understanding of the equilibrium profiles of strain and dislocations associated with mismatched epitaxy. Although it has been shown that the equilibrium configuration for a general semiconductor strained-layer structure may be found numerically by energy minimization using an appropriate partitioning of the structure into sublayers, such an approach is computationally intense and non-intuitive. We have therefore developed a simple electric circuit model approach for the equilibrium analysis of these structures. In it, each sublayer of an epitaxial stack may be represented by an analogous circuit configuration involving an independent current source, a resistor, an independent voltage source, and an ideal diode. A multilayered structure may be built up by the connection of the appropriate number of these building blocks, and the node voltages in the analogous electric circuit correspond to the equilibrium strains in the original epitaxial structure. This enables analysis using widely accessible circuit simulators, and an intuitive understanding of electric circuits can easily be extended to the relaxation of strained-layer structures. Furthermore, the electrical circuit model may be extended to continuously-graded epitaxial layers by considering the limit as the individual sublayer thicknesses are diminished to zero. In this paper, we describe the mathematical foundation of the electrical circuit model, demonstrate its application to several representative structures involving InxGa1-xAs strained layers on GaAs (001) substrates, and develop its extension to continuously-graded layers. This extension allows the development of analytical expressions for the strain, misfit dislocation density, critical layer thickness and widths of misfit dislocation free zones for a continuously-graded layer having an arbitrary compositional profile. It is similar to the transition from circuit

  10. Stability of the thermodynamic equilibrium - A test of the validity of dynamic models as applied to gyroviscous perpendicular magnetohydrodynamics

    Faghihi, Mustafa; Scheffel, Jan; Spies, Guenther O.

    1988-05-01

    Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure.

  11. Stability of the thermodynamic equilibrium: A test of the validity of dynamic models as applied to gyroviscous perpendicular magnetohydrodynamics

    Faghihi, M.; Scheffel, J.; Spies, G.O.

    1988-01-01

    Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure

  12. A nonlinear model for myogenic regulation of blood flow to bone: equilibrium states and stability characteristics.

    Harrigan, T P

    1996-01-01

    A simple compartmental model for myogenic regulation of interstitial pressure in bone is developed, and the interaction between changes in interstitial pressure and changes in arterial and venous resistance is studied. The arterial resistance is modeled by a myogenic model that depends on transmural pressure, and the venous resistance is modeled by using a vascular waterfall. Two series capacitances model blood storage in the vascular system and interstitial fluid storage in the extravascular space. The static results mimic the observed effect that vasodilators work less well in bone than do vasoconstrictors. The static results also show that the model gives constant flow rates over a limited range of arterial pressure. The dynamic model shows unstable behavior at small values of bony capacitance and at high enough myogenic gain. At low myogenic gain, only a single equilibrium state is present, but at a high enough myogenic gain, two new equilibrium states appear. With additional increases in gain, one of the two new states merges with and then separates from the original state, and the original state becomes a saddle point. The appearance of the new states and the transition of the original state to a saddle point do not depend on the bony capacitance, and these results are relevant to general fluid compartments. Numerical integration of the rate equations confirms the stability calculations and shows limit cycling behavior in several situations. The relevance of this model to circulation in bone and to other compartments is discussed.

  13. Transfer coefficients to terrestrial food products in equilibrium assessment models for nuclear installations

    Zach, R.

    1980-09-01

    Transfer coefficients have become virtually indispensable in the study of the fate of radioisotopes released from nuclear installations. These coefficients are used in equilibrium assessment models where they specify the degree of transfer in food chains of individual radioisotopes from soil to plant products and from feed or forage and drinking water to animal products and ultimately to man. Information on transfer coefficients for terrestrial food chain models is very piecemeal and occurs in a wide variety of journals and reports. To enable us to choose or determine suitable values for assessments, we have addressed the following aspects of transfer coefficients on a very broad scale: (1) definitions, (2) equilibrium assumption, which stipulates that transfer coefficients be restricted to equilibrium or steady rate conditions, (3) assumption of linearity, that is the idea that radioisotope concentrations in food products increase linearly with contamination levels in the soil or animal feed, (4) methods of determination, (5) variability, (6) generic versus site-specific values, (7) statistical aspects, (8) use, (9) sources of currently used values, (10) criteria for revising values, (11) establishment and maintenance of files on transfer coefficients, and (12) future developments. (auth)

  14. Particle swarm optimisation classical and quantum perspectives

    Sun, Jun; Wu, Xiao-Jun

    2016-01-01

    Contents: Introduction; Optimisation Problems and Optimisation Methods; Random Search Techniques; Metaheuristic Methods; Swarm Intelligence; Particle Swarm Optimisation; Overview; Motivations; PSO Algorithm: Basic Concepts and the Procedure; Paradigm: How to Use PSO to Solve Optimisation Problems; Some Harder Examples; Some Variants of Particle Swarm Optimisation; Why Does the PSO Algorithm Need to Be Improved?; Inertia and Constriction-Acceleration Techniques for PSO; Local Best Model; Probabilistic Algorithms; Other Variants of PSO; Quantum-Behaved Particle Swarm Optimisation; Overview; Motivation: From Classical Dynamics to Quantum Mechanics; Quantum Model: Fundamentals of QPSO; QPSO Algorithm; Some Essential Applications; Some Variants of QPSO; Summary; Advanced Topics; Behaviour Analysis of Individual Particles; Convergence Analysis of the Algorithm; Time Complexity and Rate of Convergence; Parameter Selection and Performance; Summary; Industrial Applications; Inverse Problems for Partial Differential Equations; Inverse Problems for Non-Linear Dynamical Systems; Optimal De...
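
    As a concrete companion to the topics listed above, here is a minimal global-best PSO in its classical form (inertia plus cognitive and social terms), minimising the Rosenbrock function; the coefficient values are common textbook defaults, not settings prescribed by the book.

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

rng = np.random.default_rng(0)
dim, n_particles, iters = 5, 30, 500
w, c1, c2 = 0.72, 1.49, 1.49              # inertia, cognitive and social coefficients

x = rng.uniform(-2.0, 2.0, (n_particles, dim))     # particle positions
v = np.zeros_like(x)                               # particle velocities
pbest = x.copy()                                   # personal best positions
pbest_f = np.array([rosenbrock(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()           # global best position

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([rosenbrock(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best value found:", round(rosenbrock(gbest), 6), "at", gbest.round(3))
```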

  15. Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

    Benjamin Scellier

    2017-05-01

    We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST
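
    In the notation usually used for this framework (a paraphrase of the standard presentation rather than a verbatim quote of the paper), the two phases minimise the total energy F = E + βC, and the parameter update compares the two resulting fixed points:

```latex
s^{0} = \arg\min_{s} \big[ E(\theta, s) \big], \qquad
s^{\beta} = \arg\min_{s} \big[ E(\theta, s) + \beta\, C(s, y) \big],
\\[4pt]
\Delta\theta \;\propto\; -\frac{1}{\beta}\left(
\frac{\partial F}{\partial \theta}\big(\theta, \beta, s^{\beta}\big)
-\frac{\partial F}{\partial \theta}\big(\theta, 0, s^{0}\big)\right)
\;\xrightarrow[\;\beta \to 0\;]{}\; -\frac{d}{d\theta}\, C\!\big(s^{0}(\theta), y\big).
```

    For a Hopfield-type energy containing terms -W_ij ρ(s_i)ρ(s_j), this reduces to the local rule ΔW_ij ∝ (1/β)(ρ(s_i^β)ρ(s_j^β) - ρ(s_i^0)ρ(s_j^0)), which is the sense in which the nudged second phase implicitly propagates error information.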

  16. Optimisation of technical specifications using probabilistic methods

    Ericsson, G.; Knochenhauer, M.; Hultqvist, G.

    1986-01-01

    During the last few years, the development of methods for modifying and optimising nuclear power plant Technical Specifications (TS) for plant operations has received increased attention. Probabilistic methods in general, and the plant and system models of probabilistic safety assessment (PSA) in particular, seem to provide the most powerful tools for optimisation. This paper first gives some general comments on optimisation, identifying important parameters, and then describes recent Swedish experience from the use of nuclear power plant PSA models and results for TS optimisation

  17. Oscillation Susceptibility Analysis of the ADMIRE Aircraft along the Path of Longitudinal Flight Equilibriums in Two Different Mathematical Models

    Achim Ionita

    2009-01-01

    The oscillation susceptibility of the ADMIRE aircraft along the path of longitudinal flight equilibriums is analyzed numerically in a general and in a simplified flight model. More precisely, the longitudinal flight equilibriums, the stability of these equilibriums, and the existence of bifurcations along the path of these equilibriums are investigated in both models. Maneuvers and appropriate piloting tasks for the touch-down moment are simulated in both models. The results computed with the two models are compared in order to see whether the landing-phase motion obtained in the simplified model is similar to that obtained in the general model. The similarity we find is not a proof of the structural stability of the simplified system, which, as far as we know, has never been established, but it can increase confidence that the simplified system correctly describes the real phenomenon.

  18. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
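
    A hedged sketch of the general approach (scikit-learn's PLSRegression on synthetic spectra with a naive residual bootstrap); it illustrates how bootstrapping PLS residuals yields a prediction interval, and it is not the authors' optimised model, PDS transfer step or graphical interface.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Synthetic "NIR" data: 60 calibration tablets, 200 wavelengths, potency 90-110 %.
y = rng.uniform(90.0, 110.0, 60)
basis = rng.normal(size=(3, 200))
X = (np.outer(y, basis[0]) + np.outer(rng.normal(size=60), basis[1])
     + np.outer(rng.normal(size=60), basis[2]) + rng.normal(scale=0.5, size=(60, 200)))

pls = PLSRegression(n_components=3).fit(X, y)
fitted = pls.predict(X).ravel()
residuals = y - fitted

# Residual bootstrap: refit the model on resampled residuals and collect the
# spread of predictions for one "new" tablet (here simply the first spectrum).
x_new = X[:1]
boot_preds = []
for _ in range(500):
    y_boot = fitted + rng.choice(residuals, size=len(y), replace=True)
    boot_preds.append(PLSRegression(n_components=3).fit(X, y_boot).predict(x_new).ravel()[0])

lo, hi = np.percentile(boot_preds, [2.5, 97.5])
print(f"prediction: {pls.predict(x_new).ravel()[0]:.2f} %  95% interval: [{lo:.2f}, {hi:.2f}]")
```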

  19. Mathematical modeling of the radiation-chemical behavior of neptunium in HNO3. Equilibrium states

    Vladimirova, M.V.

    1995-01-01

    A mathematical model of the radiation-chemical behavior of neptunium is presented for a wide range of α- and γ-irradiation doses. Equations determining the equilibrium concentrations of Np(IV), Np(V), and Np(VI) are derived for various concentrations of HNO3 and dose rates of the ionizing irradiation. The rate constants of the reactions Np(IV) + OH, Np(IV) + NO3, Np(V) + NO2, Np(V) + H, Np(IV), and Np(V) + Np(V) are obtained by the mathematical modeling

  20. The performance of simulated annealing in parameter estimation for vapor-liquid equilibrium modeling

    A. Bonilla-Petriciolet

    2007-03-01

    In this paper we report the application and evaluation of the simulated annealing (SA) optimization method in parameter estimation for vapor-liquid equilibrium (VLE) modeling. We tested this optimization method using the classical least squares and error-in-variable approaches. The reliability and efficiency of the data-fitting procedure are also considered using different values for the algorithm parameters of the SA method. Our results indicate that this method, when properly implemented, is a robust procedure for nonlinear parameter estimation in thermodynamic models. However, in difficult problems it can still converge to local optima of the objective function.
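
    A compact illustration of this kind of fit: a hand-rolled simulated annealing loop estimating the two parameters of a Margules activity-coefficient model from synthetic data by least squares. The model choice, the synthetic data and the annealing schedule are placeholders, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def ln_gamma(x1, A12, A21):
    # Two-parameter Margules activity-coefficient model for a binary mixture.
    x2 = 1.0 - x1
    g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
    g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
    return g1, g2

# Synthetic "experimental" data generated with known parameters plus noise.
x1 = np.linspace(0.05, 0.95, 15)
g1_obs, g2_obs = ln_gamma(x1, 0.8, 1.4)
g1_obs = g1_obs + rng.normal(scale=0.01, size=x1.size)
g2_obs = g2_obs + rng.normal(scale=0.01, size=x1.size)

def objective(p):
    g1, g2 = ln_gamma(x1, *p)
    return np.sum((g1 - g1_obs) ** 2 + (g2 - g2_obs) ** 2)

# Basic simulated annealing: random walk with Metropolis acceptance and geometric cooling.
p = np.zeros(2)
f = objective(p)
best_p, best_f = p.copy(), f
T = 1.0                               # initial "temperature" (an SA algorithm parameter)
for _ in range(20000):
    cand = p + rng.normal(scale=0.1, size=2)
    fc = objective(cand)
    if fc < f or rng.random() < np.exp((f - fc) / T):
        p, f = cand, fc
        if f < best_f:
            best_p, best_f = p.copy(), f
    T *= 0.9995                       # cooling schedule (another algorithm parameter)

print("estimated A12, A21:", best_p.round(3), " objective:", round(best_f, 5))
```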

  1. International nuclear model and code comparison on pre-equilibrium effects

    Gruppelaar, H.; van der Kamp, H.A.J.; Nagel, P.

    1983-01-01

    This paper gives the specification of an intercomparison of statistical nuclear models and codes with emphasis on pre-equilibrium effects. It is partly based upon the conclusions of a meeting of an ad-hoc working group on this subject. The parameters studied are: masses, Q values, level scheme data, optical model parameters, X-ray competition parameters, total level-density specifications, for 86Rb, 89Sr, 90Y, 92Y, 92Zr, 93Zr, 89Y, 91Nb, 92Nb and 93Nb

  2. Electron-Impact Excitation Cross Sections for Modeling Non-Equilibrium Gas

    Huo, Winifred M.; Liu, Yen; Panesi, Marco; Munafo, Alessandro; Wray, Alan; Carbon, Duane F.

    2015-01-01

    In order to provide a database for modeling hypersonic entry in a partially ionized gas under non-equilibrium, the electron-impact excitation cross sections of atoms have been calculated using perturbation theory. The energy levels covered in the calculation are retrieved from the level list in the HyperRad code. The downstream flow-field is determined by solving a set of continuity equations for each component. The individual structure of each energy level is included. These equations are then complemented by the Euler system of equations. Finally, the radiation field is modeled by solving the radiative transfer equation.

  3. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-01

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model's accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy
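
    The group-wise maximum-entropy reconstruction described above can be written schematically as follows (an illustrative rendering of the idea, not the paper's exact notation):

```latex
\ln f_i \;=\; \sum_{k=0}^{M} a_k^{(g)}\,\varepsilon_i^{\,k},
\qquad i \in \text{group } g,
```

    where f_i is the population of internal energy state i with energy ε_i, and the coefficients a_k^(g) of each group are fixed by requiring the reconstructed distribution to reproduce that group's macroscopic moments (e.g. number density and group internal energy), whose governing equations are obtained by the method of weighted residuals.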

  4. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-07

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model's accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy

  5. An Optimisation Approach for Room Acoustics Design

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, on a conceptual level, the value of optimisation techniques in architectural acoustics room design from a practical point of view. It is chosen to optimise one objective room acoustics design criterion estimated from the sound field inside the room. The sound field is modeled...... using the boundary element method where absorption is incorporated. An example is given where the geometry of a room is defined by four design modes. The room geometry is optimised to get a uniform sound pressure....

  6. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

    To study the flow of high temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes flow chemistry by an iterative process, and the mixing rules considered for viscosity are Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high enthalpy flows with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady state solutions from the state equation model and curve fits match each other. Though curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
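
    For reference, the Wilke mixing rule mentioned above has the standard semi-empirical form implemented below; the two-species call at the end uses approximate illustrative numbers for N2 and O2.

```python
import numpy as np

def wilke_viscosity(x, mu, M):
    """Mixture viscosity from Wilke's mixing rule.

    x : mole fractions, mu : pure-species viscosities, M : molar masses.
    """
    x, mu, M = map(np.asarray, (x, mu, M))
    phi = (1.0 + np.sqrt(mu[:, None] / mu[None, :]) * (M[None, :] / M[:, None]) ** 0.25) ** 2
    phi /= np.sqrt(8.0 * (1.0 + M[:, None] / M[None, :]))
    return float(np.sum(x * mu / (phi @ x)))

# Illustrative two-species example: roughly N2/O2 near room temperature.
print(wilke_viscosity(x=[0.79, 0.21],
                      mu=[1.78e-5, 2.06e-5],   # Pa*s (approximate)
                      M=[28.0e-3, 32.0e-3]))   # kg/mol
```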

  7. Chaos in a dynamic model of urban transportation network flow based on user equilibrium states

    Xu Meng; Gao Ziyou

    2009-01-01

    In this study, we investigate the dynamical behavior of network traffic flow. We first build a two-stage mathematical model to analyze the complex behavior of network flow. In the first stage, a dynamical model, which is based on the dynamical gravity model proposed by Dendrinos and Sonis [Dendrinos DS, Sonis M. Chaos and social-spatial dynamic. Berlin: Springer-Verlag; 1990], is used to estimate the number of trips. Considering the fact that the Origin-Destination (O-D) trip cost in the traffic network is hard to express in functional form, in the second stage the user equilibrium network assignment model is used to estimate the trip cost, which is the minimum cost of the used paths when user equilibrium (UE) conditions are satisfied. It is important to use UE to estimate the O-D cost, since it builds a connection among link flow, path flow, and O-D flow. The dynamical model describes the variations of O-D flows over discrete time periods, such as each day or each week. It is shown that even in a system with dimension equal to two, the chaos phenomenon still exists. A 'Chaos Propagation' phenomenon is found in the given model.

  8. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. - Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  9. Cap-and-Trade Modeling and Analysis: Congested Electricity Market Equilibrium

    Limpaitoon, Tanachai

    This dissertation presents an equilibrium framework for analyzing the impact of cap-and-trade regulation on transmission-constrained electricity market. The cap-and-trade regulation of greenhouse gas emissions has gained momentum in the past decade. The impact of the regulation and its efficacy in the electric power industry depend on interactions of demand elasticity, transmission network, market structure, and strategic behavior of firms. I develop an equilibrium model of an oligopoly electricity market in conjunction with a market for tradable emissions permits to study the implications of such interactions. My goal is to identify inefficiencies that may arise from policy design elements and to avoid any unintended adverse consequences on the electric power sector. I demonstrate this modeling framework with three case studies examining the impact of carbon cap-and-trade regulation. In the first case study, I study equilibrium results under various scenarios of resource ownership and emission targets using a 24-bus IEEE electric transmission system. The second and third case studies apply the equilibrium model to a realistic electricity market, Western Electricity Coordinating Council (WECC) 225-bus system with a detailed representation of the California market. In the first and second case studies, I examine oligopoly in electricity with perfect competition in the permit market. I find that under a stringent emission cap and a high degree of concentration of non-polluting firms, the electricity market is subject to potential abuses of market power. Also, market power can occur in the procurement of non-polluting energy through the permit market when non-polluting resources are geographically concentrated in a transmission-constrained market. In the third case study, I relax the competitive market structure assumption of the permit market by allowing oligopolistic competition in the market through a conjectural variation approach. A short-term equilibrium

  10. Comparison of a model vapor deposited glass films to equilibrium glass films

    Flenner, Elijah; Berthier, Ludovic; Charbonneau, Patrick; Zamponi, Francesco

    Vapor deposition of particles onto a substrate held at around 85% of the glass transition temperature can create glasses with increased density, enthalpy, kinetic stability, and mechanical stability compared to an ordinary glass created by cooling. It is estimated that an ordinary glass would need to age thousands of years to reach the kinetic stability of a vapor deposited glass, and a natural question is how close to equilibrium the vapor deposited glass is. To understand the process, algorithms akin to vapor deposition are used to create simulated glasses that have a higher kinetic stability than their annealed counterparts, although these glasses may not be well equilibrated either. Here we use novel models optimized for a swap Monte Carlo algorithm in order to create equilibrium glass films and compare their properties with those of glasses obtained from vapor deposition algorithms. This approach allows us to directly assess the non-equilibrium nature of vapor-deposited ultrastable glasses. Simons Collaboration on Cracking the Glass Problem and NSF Grant No. DMR 1608086.

  11. Analytical modeling of equilibrium of strongly anisotropic plasma in tokamaks and stellarators

    Lepikhin, N. D.; Pustovitov, V. D.

    2013-01-01

    Theoretical analysis of equilibrium of anisotropic plasma in tokamaks and stellarators is presented. The anisotropy is assumed strong, which includes the cases with essentially nonuniform distributions of plasma pressure on magnetic surfaces. Such distributions can arise at neutral beam injection or at ion cyclotron resonance heating. Then the known generalizations of the standard theory of plasma equilibrium that treat p‖ and p⊥ (parallel and perpendicular plasma pressures) as almost constant on magnetic surfaces are not applicable anymore. Explicit analytical prescriptions of the profiles of p‖ and p⊥ are proposed that allow modeling of the anisotropic plasma equilibrium even with large ratios of p‖/p⊥ or p⊥/p‖. A method for deriving the equation for the Shafranov shift is proposed that does not require introduction of the flux coordinates and calculation of the metric tensor. It is shown that for p⊥ with nonuniformity described by a single poloidal harmonic, the equation for the Shafranov shift coincides with a known one derived earlier for almost constant p⊥ on a magnetic surface. This does not happen in the other more complex case

  12. State-to-state modeling of non-equilibrium air nozzle flows

    Nagnibeda, E.; Papina, K.; Kunova, O.

    2018-05-01

    One-dimensional non-equilibrium air flows in nozzles are studied on the basis of the state-to-state description of vibrational-chemical kinetics. Five-component mixture N2/O2/NO/N/O is considered taking into account Zeldovich exchange reactions of NO formation, dissociation, recombination and vibrational energy transitions. The equations for vibrational and chemical kinetics in a flow are coupled to the conservation equations of momentum and total energy and solved numerically for different conditions in a nozzle throat. The vibrational distributions of nitrogen and oxygen molecules, number densities of species as well as the gas temperature and flow velocity along a nozzle axis are analysed using the detailed state-to-state flow description and in the frame of the simplified one-temperature thermal equilibrium kinetic model. The comparison of the results showed the influence of non-equilibrium kinetics on macroscopic nozzle flow parameters. In the state-to-state approach, non-Boltzmann vibrational distributions of N2 and O2 molecules with a plateau part at intermediate levels are found. The results are found with the use of the complete and simplified schemes of reactions and the impact of exchange reactions, dissociation and recombination on variation of vibrational level populations, mixture composition, gas velocity and temperature along a nozzle axis is shown.

  13. A tightly coupled non-equilibrium model for inductively coupled radio-frequency plasmas

    Munafò, A.; Alfuhaid, S. A.; Panesi, M.; Cambier, J.-L.

    2015-01-01

    The objective of the present work is the development of a tightly coupled magneto-hydrodynamic model for inductively coupled radio-frequency plasmas. Non Local Thermodynamic Equilibrium (NLTE) effects are described based on a hybrid State-to-State approach. A multi-temperature formulation is used to account for thermal non-equilibrium between translation of heavy-particles and vibration of molecules. Excited electronic states of atoms are instead treated as separate pseudo-species, allowing for non-Boltzmann distributions of their populations. Free-electrons are assumed Maxwellian at their own temperature. The governing equations for the electro-magnetic field and the gas properties (e.g., chemical composition and temperatures) are written as a coupled system of time-dependent conservation laws. Steady-state solutions are obtained by means of an implicit Finite Volume method. The results obtained in both LTE and NLTE conditions over a broad spectrum of operating conditions demonstrate the robustness of the proposed coupled numerical method. The analysis of chemical composition and temperature distributions along the torch radius shows that: (i) the use of the LTE assumption may lead to an inaccurate prediction of the thermo-chemical state of the gas, and (ii) non-equilibrium phenomena play a significant role close to the walls, due to the combined effects of Ohmic heating and macroscopic gradients

  14. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures

    Liu, Yen, E-mail: yen.liu@nasa.gov; Vinokur, Marcel [NASA Ames Research Center, Moffett Field, California 94035 (United States); Panesi, Marco; Sahai, Amal [University of Illinois, Urbana-Champaign, Illinois 61801 (United States)

    2015-04-07

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model’s accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy

  16. Chemical equilibrium model for high- Tc and heavy fermion superconductors: the density of states

    Kallio, A.; Hissa, J.; Hayrynen, T.; Braysy, V.; Sakkinen, T.

    1998-01-01

    The chemical equilibrium model is based on the idea of correlated electron pairs, which in the singlet state can exist as quasimolecules in the superfluid and normal states of a superconductor. These preformed pairs are bosons which can undergo a Bose-Einstein condensation in analogy with the superfluidity of a ⁴He+³He mixture. The bosons (B⁺⁺) and the fermions (h⁺) are in chemical equilibrium with respect to the reaction B⁺⁺ ↔ 2h⁺ at any temperature. The mean densities of bosons and fermions (quasiholes), n_B(T) and n_h(T), are determined from the thermodynamics of the equilibrium reaction in terms of a single function f(T). By thermodynamics the function f(T) is connected to the equilibrium constant φ(T) by 1 − f(T) = [1 + φ(T)]^(−1/2). Using a simple power law, known to be valid near T = 0, for the chemical constant, φ(T) ∝ t^(2γ) with t = T/T*, the mean density of quasiholes is given in closed form. This enables one to calculate the corresponding density of states (DOS), D(E) = N_s/N(0), by solving an integral equation. The NIS-tunneling conductivity near T = 0, given by D(E), compares well with the most recent experiments: D(E) ∼ E^γ for small E and a finite maximum of the right size, corresponding to a 'finite quasiparticle lifetime'. The corresponding SIS-tunneling conductivity is obtained from a simple convolution and is also in agreement with recent break-junction experiments of Hancotte et al. The position of the maximum can be used to obtain the scaling temperature T*, which comes close to the one measured by the Hall coefficient in the normal state. A simple explanation for the spin-gap effect in NMR is given. (Copyright (1998) World Scientific Publishing Co. Pte. Ltd)
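
    A minimal numerical sketch of the relations quoted above, assuming the power-law form φ(T) ∝ t^(2γ); the proportionality constant and γ below are placeholders, not the fitted values of the paper:

        import numpy as np

        alpha, gamma = 1.0, 0.5                 # illustrative power-law parameters
        t = np.linspace(1e-3, 2.0, 200)         # reduced temperature t = T / T*

        phi = alpha * t ** (2.0 * gamma)        # chemical equilibrium constant phi(T)
        f = 1.0 - 1.0 / np.sqrt(1.0 + phi)      # from 1 - f(T) = [1 + phi(T)]^(-1/2)

        # f(T) grows from zero at T = 0; in the model it controls how the condensed
        # pair (boson) density is converted into unpaired quasiholes with temperature.
        print(np.round(f[::50], 4))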

  17. Equilibrium based analytical model for estimation of pressure magnification during deflagration of hydrogen air mixtures

    Karanam, Aditya; Sharma, Pavan K.; Ganju, Sunil; Singh, Ram Kumar [Bhabha Atomic Research Centre (BARC), Mumbai (India). Reactor Safety Div.

    2016-12-15

    During postulated accident sequences in nuclear reactors, hydrogen may get released from the core and form a flammable mixture in the surrounding containment structure. Ignition of such mixtures and the subsequent pressure rise are an imminent threat for safe and sustainable operation of nuclear reactors. Methods for evaluating post ignition characteristics are important for determining the design safety margins in such scenarios. This study presents two thermo-chemical models for determining the post ignition state. The first model is based on internal energy balance while the second model uses the concept of element potentials to minimize the free energy of the system with internal energy imposed as a constraint. Predictions from both the models have been compared against published data over a wide range of mixture compositions. Important differences in the regions close to flammability limits and for stoichiometric mixtures have been identified and explained. The equilibrium model has been validated for varied temperatures and pressures representative of initial conditions that may be present in the containment during accidents. Special emphasis has been given to the understanding of the role of dissociation and its effect on equilibrium pressure, temperature and species concentrations.
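
    The constant-volume equilibrium state described above (free-energy minimisation with internal energy and volume fixed) can be reproduced with a general-purpose equilibrium solver; the sketch below uses Cantera with the GRI-3.0 mechanism purely as an illustration of the calculation, not as the authors' implementation:

        import cantera as ct

        # 15 vol% hydrogen in air at containment-like initial conditions (illustrative).
        gas = ct.Solution('gri30.yaml')
        gas.TPX = 330.0, 1.5 * ct.one_atm, 'H2:0.15, O2:0.1785, N2:0.6715'

        p0 = gas.P
        gas.equilibrate('UV')    # adiabatic, constant-volume (isochoric) equilibrium
        print(f"T_eq = {gas.T:.0f} K, pressure magnification p/p0 = {gas.P / p0:.2f}")
        # Dissociation products (OH, H, O, NO, ...) appear in gas.X and lower the
        # equilibrium pressure and temperature relative to a complete-combustion estimate.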

  19. Efficient modeling of reactive transport phenomena by a multispecies random walk coupled to chemical equilibrium

    Pfingsten, W.

    1996-01-01

    Safety assessments for radioactive waste repositories require a detailed knowledge of physical, chemical, hydrological, and geological processes for long time spans. In the past, individual models for hydraulics, transport, or geochemical processes were developed more or less separately to great sophistication for the individual processes. Such processes are especially important in the near field of a waste repository. Attempts have been made to couple at least two individual processes to get a more adequate description of geochemical systems. These models are called coupled codes; they couple predominantly a multicomponent transport model with a chemical reaction model. Here reactive transport is modeled by the sequentially coupled code MCOTAC that couples one-dimensional advective, dispersive, and diffusive transport with chemical equilibrium complexation and precipitation/dissolution reactions in a porous medium. Transport, described by a random walk of multispecies particles, and chemical equilibrium calculations are solved separately, coupled only by an exchange term. The modular-structured code was applied to incongruent dissolution of hydrated silicate gels, to movement of multiple solid front systems, and to an artificial, numerically difficult heterogeneous redox problem. These applications show promising features with respect to applicability to relevant problems and possibilities of extensions
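
    A toy sketch of the sequential (operator-split) coupling idea, i.e. a random-walk transport step followed by a chemistry step in each cell; the simple solubility cap used here is a deliberately crude stand-in for MCOTAC's equilibrium complexation and precipitation/dissolution calculations:

        import numpy as np

        rng = np.random.default_rng(1)
        L, ncell, dt, nstep = 1.0, 50, 0.01, 200
        v, D = 0.5, 1e-3                      # advection velocity and dispersion (illustrative)
        cap = 150                             # max dissolved particles per cell ("solubility")

        x = rng.uniform(0.0, 0.1, 5000)       # particles start near the inlet
        mineral = np.zeros(ncell)             # precipitated particle count per cell
        edges = np.linspace(0.0, L, ncell + 1)

        for _ in range(nstep):
            # transport step: advective drift plus dispersive random walk
            x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)
            x = np.clip(x, 0.0, L)
            # chemistry step: dissolved amount above the cap "precipitates" (bookkeeping only)
            hist, _ = np.histogram(x, bins=edges)
            mineral += np.maximum(hist - cap, 0)

        print(mineral[:10])                   # precipitation concentrated near the inlet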

  20. Modelling formation of disinfection by-products in water distribution: Optimisation using a multi-objective evolutionary algorithm

    Radhakrishnan, Mohanasundar; Pathirana, Assela; Ghebremichael, Kebreab A.; Amy, Gary L.

    2012-01-01

    Concerns have been raised regarding disinfection by-products (DBPs) formed as a result of the reaction of halogen-based disinfectants with DBP precursors. In order to appreciate the chemical and biological tradeoffs, it is imperative to understand the formation trends of DBPs and their spread in the distribution network. However, the water at a point in a complex distribution system is a mixture from various sources, whose proportions are complex to estimate and requires advanced hydraulic analysis. To understand the risks of DBPs and to develop mitigation strategies, it is important to understand the distribution of DBPs in a water network, which requires modelling. The goal of this research was to integrate a steady-state water network model with a particle backtracking algorithm and chlorination as well as DBPs models in order to assess the tradeoffs between biological and chemical risks in the distribution network. A multi-objective optimisation algorithm was used to identify the optimal proportion of water from various sources, dosages of alum, and dosages of chlorine in the treatment plant and in booster locations to control the formation of chlorination DBPs and to achieve a balance between microbial and chemical risks. © IWA Publishing 2012.

  2. Burnup effect on nuclear fuel cycle cost using an equilibrium model

    Youn, S. R.; Kim, S. K.; Ko, W. I.

    2014-01-01

    The degree of fuel burnup is an important technical parameter of the nuclear fuel cycle, since increasing it reduces the total volume of process flow materials and eventually cuts nuclear fuel cycle costs. This paper performs a sensitivity analysis of total nuclear fuel cycle costs to this technical parameter by varying the degree of burnup in each of three nuclear fuel cycles using an equilibrium model; among the cost drivers of the fuel cycle, the burnup effect is taken as the technical parameter. The fuel cycle options analyzed in this paper are the PWR Once-Through Cycle (PWR-OT), PWR-MOX Recycle and Pyro-SFR Recycle, the cycles most likely to be adopted in the foreseeable future. As a result of the sensitivity analysis on the burnup effect for the three fuel cycle costs, the PWR-MOX cycle turned out to be the most influenced by burnup changes, followed by the Pyro-SFR and PWR-OT cycles. In conclusion, the degree of burnup in the three nuclear fuel cycles can act as the controlling driver of nuclear fuel cycle costs, owing to a reduction in the volume of spent fuel leading to better availability and capacity factors. However, the equilibrium model used in this paper has the limitation that time-dependent material flow and cost calculations are not possible; hence, a comparative analysis against results calculated with a dynamic model should follow. Moving forward to the foreseeable future with increasing burnups, further studies regarding alternative materials for high-corrosion-resistance fuel cladding for the overall
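
    The mechanism behind this burnup sensitivity can be illustrated with a one-line equilibrium estimate: in a steady-state cycle, the heavy-metal throughput per unit of electricity, and hence the front- and back-end cost contribution, scales inversely with the discharge burnup. The unit cost and efficiency below are placeholders, not the values used in the paper:

        # Fuel-cycle cost contribution per MWh(e) for an equilibrium (steady-state) cycle.
        # burnup in MWd/kgHM, unit cost in USD per kgHM, eta = net thermal efficiency.
        def fuel_cycle_cost(burnup_MWd_per_kg, unit_cost_usd_per_kg, eta=0.33):
            mwh_e_per_kg = burnup_MWd_per_kg * 24.0 * eta   # electricity per kg of heavy metal
            return unit_cost_usd_per_kg / mwh_e_per_kg

        unit_cost = 2500.0                                  # illustrative USD/kgHM
        for burnup in (45.0, 55.0, 65.0):
            print(burnup, round(fuel_cycle_cost(burnup, unit_cost), 2), "USD/MWh(e)")

    In practice higher burnup also calls for higher enrichment, so the unit cost itself varies with burnup; the sketch only isolates the throughput effect.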

  3. Sweatshop equilibrium

    Chau, Nancy H.

    2009-01-01

    This paper presents a capability-augmented model of on-the-job search, in which sweatshop conditions stifle the capability of the working poor to search for a job while on the job. The augmented setting unveils a sweatshop equilibrium in an otherwise archetypal Burdett-Mortensen economy, and reconciles a number of oft-noted yet perplexing features of sweatshop economies. We demonstrate existence of multiple rational expectation equilibria, graduation pathways out of sweatshops in complete abs...

  4. Near-wall extension of a non-equilibrium, omega-based Reynolds stress model

    Nguyen, Tue; Behr, Marek; Reinartz, Birgit

    2011-01-01

    In this paper, the development of a new ω-based Reynolds stress model that is consistent with asymptotic analysis in the near wall region and with rapid distortion theory in homogeneous turbulence is reported. The model is based on the SSG/LRR-ω model developed by Eisfeld (2006) with three main modifications. Firstly, the near wall behaviors of the redistribution, dissipation and diffusion terms are modified according to the asymptotic analysis and a new blending function based on low Reynolds number is proposed. Secondly, an anisotropic dissipation tensor based on the Reynolds stress inhomogeneity (Jakirlic et al., 2007) is used instead of the original isotropic model. Lastly, the SSG redistribution term, which is activated far from the wall, is replaced by Speziale's non-equilibrium model (Speziale, 1998).

  5. Microscopic Simulation and Macroscopic Modeling for Thermal and Chemical Non-Equilibrium

    Liu, Yen; Panesi, Marco; Vinokur, Marcel; Clarke, Peter

    2013-01-01

    This paper deals with the accurate microscopic simulation and macroscopic modeling of extreme non-equilibrium phenomena, such as encountered during hypersonic entry into a planetary atmosphere. The state-to-state microscopic equations involving internal excitation, de-excitation, dissociation, and recombination of nitrogen molecules due to collisions with nitrogen atoms are solved time-accurately. Strategies to increase the numerical efficiency are discussed. The problem is then modeled using a few macroscopic variables. The model is based on reconstructions of the state distribution function using the maximum entropy principle. The internal energy space is subdivided into multiple groups in order to better describe the non-equilibrium gases. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients. The modeling is completely physics-based, and its accuracy depends only on the assumed expression of the state distribution function and the number of groups used. The model makes no assumption at the microscopic level, and all possible collisional and radiative processes are allowed. The model is applicable to both atoms and molecules and their ions. Several limiting cases are presented to show that the model recovers the classical two-temperature models if all states are in one group and the model reduces to the microscopic equations if each group contains only one state. Numerical examples and model validations are carried out for both the uniform and linear distributions. Results show that the original set of over nine thousand microscopic equations can be reduced to 2 macroscopic equations using 1 to 5 groups with excellent agreement. The computer time is decreased from 18 hours to less than 1 second.

  6. A phase-field model for non-equilibrium solidification of intermetallics

    Assadi, H.

    2007-01-01

    Intermetallics may exhibit unique solidification behaviour-including slow growth kinetics, anomalous partitioning and formation of unusual growth morphologies-because of departure from local equilibrium. A phase-field model is developed and used to illustrate these non-equilibrium effects in solidification of a prototype B2 intermetallic phase. The model takes sublattice compositions as primary field variables, from which chemical long-range order is derived. The diffusive reactions between the two sublattices, and those between each sublattice and the liquid phase are taken as 'internal' kinetic processes, which take place within control volumes of the system. The model can thus capture solute and disorder trapping effects, which are consistent-over a wide range of the solid/liquid interface thickness-with the predictions of the sharp-interface theory of solute and disorder trapping. The present model can also take account of solid-state ordering and thus illustrate the effects of chemical ordering on microstructure formation and crystal growth kinetics

  7. Modelling of Equilibrium Between Mantle and Core: Refractory, Volatile, and Highly Siderophile Elements

    Righter, K.; Danielson, L.; Pando, K.; Shofner, G.; Lee, C. -T.

    2013-01-01

    Siderophile elements have been used to constrain conditions of core formation and differentiation for the Earth, Mars and other differentiated bodies [1]. Recent models for the Earth have concluded that the mantle and core did not fully equilibrate and the siderophile element contents of the mantle can only be explained under conditions where the oxygen fugacity changes from low to high during accretion and the mantle and core do not fully equilibrate [2,3]. However, these conclusions go against several physical and chemical constraints. First, calculations suggest that even with the composition of accreting material changing from reduced to oxidized over time, the fO2 defined by metal-silicate equilibrium does not change substantially, only by approximately 1 log fO2 unit [4]. An increase of more than 2 log fO2 units in mantle oxidation is required in the models of [2,3]. Secondly, calculations also show that metallic impacting material will become deformed and sheared during accretion to a large body, such that it becomes emulsified to a fine scale that allows equilibrium at nearly all conditions except for possibly the length scale for giant impacts [5] (contrary to conclusions of [6]). Using new data for D(Mo) metal/silicate at high pressures, together with updated partitioning expressions for many other elements, we will show that metal-silicate equilibrium across a long span of Earth's accretion history may explain the concentrations of many siderophile elements in Earth's mantle. The modeling includes refractory elements Ni, Co, Mo, and W, as well as highly siderophile elements Au, Pd and Pt, and volatile elements Cd, In, Bi, Sb, Ge and As.

  8. Optimising intratumoral treatment of head and neck squamous cell carcinoma models with the diterpene ester Tigilanol tiglate.

    Barnett, Catherine M E; Broit, Natasa; Yap, Pei-Yi; Cullen, Jason K; Parsons, Peter G; Panizza, Benedict J; Boyle, Glen M

    2018-04-18

    The five-year survival rate for patients with head and neck squamous cell carcinoma (HNSCC) has remained at ~50% for the past 30 years despite advances in treatment. Tigilanol tiglate (TT, also known as EBC-46) is a novel diterpene ester that induces cell death in HNSCC in vitro and in mouse models, and has recently completed Phase I human clinical trials. The aim of this study was to optimise efficacy of TT treatment by altering different administration parameters. The tongue SCC cell line (SCC-15) was identified as the line least responsive to treatment. Subcutaneous xenografts of SCC-15 cells were grown in BALB/c Foxn1^nu and NOD/SCID mice and treated with intratumoral injection of 30 μg TT or a vehicle-only control (40% propylene glycol (PG)). Greater efficacy of TT treatment was found in the BALB/c Foxn1^nu mice compared to NOD/SCID mice. Immunohistochemical analysis indicated a potential role of the host's innate immune system in this difference, specifically neutrophil infiltration. Neither fractionated doses of TT nor the use of a different excipient led to significantly increased efficacy. This study confirmed that TT in 40% PG given intratumorally as a single bolus dose was the most efficacious treatment for a tongue SCC mouse model.

  9. Kinetic modelling and optimisation of antimicrobial compound production by Candida pyralidae KU736785 for control of Candida guilliermondii.

    Mewa-Ngongang, Maxwell; du Plessis, Heinrich W; Hutchinson, Ucrecia F; Mekuto, Lukhanyo; Ntwampe, Seteno Ko

    2017-06-01

    Biological antimicrobial compounds from yeast can be used to address the critical need for safer preservatives in food, fruit and beverages. The inhibition of Candida guilliermondii, a common fermented beverage spoilage organism, was achieved using antimicrobial compounds produced by Candida pyralidae KU736785. The antimicrobial production system was modelled and optimised using response surface methodology, with 22.5 °C and pH 5.0 being the optimum conditions. A new concept for quantifying spoilage organism inhibition was developed. The inhibition activity of the antimicrobial compounds was observed to be at a maximum after 17-23 h of fermentation, with the C. pyralidae concentration being between 0.40 and 1.25 × 10⁹ CFU ml⁻¹, while its maximum specific growth rate was 0.31-0.54 h⁻¹. The maximum inhibitory activity was between 0.19 and 1.08 l of contaminated solidified media per millilitre of antimicrobial compound used. Furthermore, the antimicrobial compound formation rate was 0.037-0.086 l VZI ml⁻¹ ACU h⁻¹. The response surface methodology analysis showed that the model developed sufficiently described the maximum inhibitory activity of 1.08 l VZI ml⁻¹ ACU, which was predicted as 1.17 l VZI ml⁻¹ ACU under the optimum production conditions.

  10. Econometrically calibrated computable general equilibrium models: Applications to the analysis of energy and climate politics

    Schu, Kathryn L.

    Economy-energy-environment models are the mainstay of economic assessments of policies to reduce carbon dioxide (CO2) emissions, yet their empirical basis is often criticized as being weak. This thesis addresses these limitations by constructing econometrically calibrated models in two policy areas. The first is a 35-sector computable general equilibrium (CGE) model of the U.S. economy which analyzes the uncertain impacts of CO2 emission abatement. Econometric modeling of sectors' nested constant elasticity of substitution (CES) cost functions based on a 45-year price-quantity dataset yields estimates of capital-labor-energy-material input substitution elasticities and biases of technical change that are incorporated into the CGE model. I use the estimated standard errors and variance-covariance matrices to construct the joint distribution of the parameters of the economy's supply side, which I sample to perform Monte Carlo baseline and counterfactual runs of the model. The resulting probabilistic abatement cost estimates highlight the importance of the uncertainty in baseline emissions growth. The second model is an equilibrium simulation of the market for new vehicles which I use to assess the response of vehicle prices, sales and mileage to CO2 taxes and increased corporate average fuel economy (CAFE) standards. I specify an econometric model of a representative consumer's vehicle preferences using a nested CES expenditure function which incorporates mileage and other characteristics in addition to prices, and develop a novel calibration algorithm to link this structure to vehicle model supplies by manufacturers engaged in Bertrand competition. CO2 taxes' effects on gasoline prices reduce vehicle sales and manufacturers' profits if vehicles' mileage is fixed, but these losses shrink once mileage can be adjusted. Accelerated CAFE standards induce manufacturers to pay fines for noncompliance rather than incur the higher costs of radical mileage improvements

  11. Exact correlations in the Lieb-Liniger model and detailed balance out-of-equilibrium

    Jacopo De Nardis, Miłosz Panfil

    2016-12-01

    We study the density-density correlation function of the 1D Lieb-Liniger model and obtain an exact expression for the small momentum limit of the static correlator in the thermodynamic limit. We achieve this by summing exactly over the relevant form factors of the density operator in the small momentum limit. The result is valid for any eigenstate, including thermal and non-thermal states. We also show that the small momentum limit of the dynamic structure factors obeys a generalized detailed balance relation valid for any equilibrium state.
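
    For reference, the familiar thermal form of detailed balance for the dynamic structure factor (with ħ = k_B = 1) is

        S(k, -\omega) = e^{-\omega / T}\, S(k, \omega) ;

    the result summarised above generalises this type of relation to arbitrary (e.g. non-thermal) equilibrium states of the Lieb-Liniger model in the small-momentum limit.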

  12. Pre-equilibrium (exciton) model and the heavy-ion reactions with cluster emission

    Betak, E

    2015-01-01

    We present the possibility of including cluster emission in the statistical pre-equilibrium (exciton) model, extended to also cover heavy-ion collisions. At this moment, the calculations have been done without treatment of angular momentum variables, but the whole approach can be straightforwardly applied to heavy-ion reactions with cluster emission including the angular momentum variables. The direct motivation of this paper is the possibility of producing superdeformed nuclei, which are easier to detect in heavy-ion reactions than in those induced by light projectiles (nucleons, deuterons, α-particles).

  13. Core-state models for fuel management of equilibrium and transition cycles in pressurized water reactors

    Aragones, J.M.; Martinez-Val, J.M.; Corella, M.R.

    1977-01-01

    Fuel management requires that mass, energy, and reactivity balance be satisfied in each reload cycle. Procedures for selection of alternatives, core-state models, and fuel cost calculations have been developed for both equilibrium and transition cycles. Effective cycle lengths and fuel cycle variables--namely, reload batch size, schedule of incore residence for the fuel, feed enrichments, energy sharing cycle by cycle, and discharge burnup and isotopics--are the variables being considered for fuel management planning with a given energy generation plan, fuel design, recycling strategy, and financial assumptions

  14. Sudden transition from equilibrium stability to chaotic dynamics in a cautious tâtonnement model

    Foroni, I.; Avellone, A.; Panchuk, A.

    2016-01-01

    Discrete time price adjustment processes may fail to converge and may exhibit periodic or even chaotic behavior. To avoid large price changes, a version of the discrete time tâtonnement process for reaching an equilibrium in a pure exchange economy based on a cautious updating of the prices has been proposed two decades ago. This modification leads to a one dimensional bimodal piecewise smooth map, for which we show analytically that degenerate bifurcations and border collision bifurcations play a fundamental role for the asymptotic behavior of the model. (paper)

  15. Analysis of a decision model in the context of equilibrium pricing and order book pricing

    Wagner, D. C.; Schmitt, T. A.; Schäfer, R.; Guhr, T.; Wolf, D. E.

    2014-12-01

    An agent-based model for financial markets has to incorporate two aspects: decision making and price formation. We introduce a simple decision model and consider its implications in two different pricing schemes. First, we study its parameter dependence within a supply-demand balance setting. We find realistic behavior in a wide parameter range. Second, we embed our decision model in an order book setting. Here, we observe interesting features which are not present in the equilibrium pricing scheme. In particular, we find a nontrivial behavior of the order book volumes which is reminiscent of a trend-switching phenomenon. Thus, the decision making model alone does not realistically represent the trading and the stylized facts. The order book mechanism is crucial.

  16. A multicomponent ion-exchange equilibrium model for chabazite columns treating ORNL wastewaters

    Perona, J.J.

    1993-06-01

    Planned near-term and long-term upgrades of the Oak Ridge National Laboratory (ORNL) Process Waste Treatment Plant (PWTP) will use chabazite columns to remove ⁹⁰Sr and ¹³⁷Cs from process wastewater. A valid equilibrium model is required for the design of these columns and for evaluating their performance when influent wastewater composition changes. The cations exchanged, in addition to strontium and cesium, are calcium, magnesium, and sodium. A model was developed using the Wilson equation for the calculation of the solid-phase activity coefficients. The model was tested against chabazite column runs on two different wastewaters and found to be valid. A sensitivity analysis was carried out for the projected wastewater compositions, in which the model was used to predict changes in relative separation factors for strontium and cesium subject to changes in calcium, magnesium, and sodium concentrations.
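
    For context, the multicomponent Wilson equation commonly used for activity coefficients has the form

        \ln \gamma_i = 1 - \ln\Big( \sum_j x_j \Lambda_{ij} \Big) - \sum_k \frac{x_k \Lambda_{ki}}{\sum_j x_j \Lambda_{kj}}, \qquad \Lambda_{ii} = 1 ,

    where the x_j are the (here, solid-phase) mole fractions and the binary interaction parameters Λ_ij are fitted to the exchange data; the fitted values are not reproduced in this record.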

  17. Integrated environmental assessment of future energy scenarios based on economic equilibrium models

    Igos, E.; Rugani, B.; Rege, S.; Benetto, E.; Drouet, L.; Zachary, D.; Haas, T.

    2014-01-01

    The future evolution of energy supply technologies strongly depends on (and affects) the economic and environmental systems, due to the high dependency of this sector on the availability and cost of fossil fuels, especially on the small regional scale. This paper aims at presenting the modeling system and preliminary results of a research project conducted on the scale of Luxembourg to assess the environmental impact of future energy scenarios for the country, integrating outputs from partial and computable general equilibrium models within hybrid Life Cycle Assessment (LCA) frameworks. The general equilibrium model for Luxembourg, LUXGEM, is used to evaluate the economic impacts of policy decisions and other economic shocks over the time horizon 2006-2030. A techno-economic (partial equilibrium) model for Luxembourg, ETEM, is used instead to compute operation levels of various technologies to meet the demand for energy services at the least cost along the same timeline. The future energy demand and supply are made consistent by coupling ETEM with LUXGEM so as to have the same macro-economic variables and energy shares driving both models. The coupling results are then implemented within a set of Environmentally-Extended Input-Output (EE-IO) models in historical time series to test the feasibility of the integrated framework and then to assess the environmental impacts of the country. Accordingly, a dis-aggregated energy sector was built with the different ETEM technologies in the EE-IO to allow hybridization with Life Cycle Inventory (LCI) and enrich the process detail. The results show that the environmental impact slightly decreased overall from 2006 to 2009. Most of the impacts come from some imported commodities (natural gas, used to produce electricity, and metalliferous ores and metal scrap). The main energy production technology is the combined-cycle gas turbine plant 'Twinerg', representing almost 80% of the domestic electricity production in Luxembourg

  18. An equilibrium pricing model for weather derivatives in a multi-commodity setting

    Lee, Yongheon; Oren, Shmuel S.

    2009-01-01

    Many industries are exposed to weather risk. Weather derivatives can play a key role in hedging and diversifying such risk because the uncertainty in a company's profit function can be correlated to weather condition which affects diverse industry sectors differently. Unfortunately the weather derivatives market is a classical example of an incomplete market that is not amenable to standard methodologies used for derivative pricing in complete markets. In this paper, we develop an equilibrium pricing model for weather derivatives in a multi-commodity setting. The model is constructed in the context of a stylized economy where agents optimize their hedging portfolios which include weather derivatives that are issued in a fixed quantity by a financial underwriter. The supply and demand resulting from hedging activities and the supply by the underwriter are combined in an equilibrium pricing model under the assumption that all agents maximize some risk averse utility function. We analyze the gains due to the inclusion of weather derivatives in hedging portfolios and examine the components of that gain attributable to hedging and to risk sharing. (author)

  19. Expansion dynamics and equilibrium conditions in a laser ablation plume of lithium: Modeling and experiment

    Stapleton, M.W.; McKiernan, A.P.; Mosnier, J.-P.

    2005-01-01

    The gas dynamics and atomic kinetics of a laser ablation plume of lithium, expanding adiabatically in vacuum, are included in a numerical model, using isothermal and isentropic self-similar analytical solutions and steady-state collisional radiative equations, respectively. Measurements of plume expansion dynamics using ultrafast imaging for various laser wavelengths (266-1064 nm), fluences (2-6.5 J cm⁻²), and spot sizes (50-1000 μm) are performed to provide input parameters for the model and, thereby, study the influence of laser spot size, wavelength, and fluence, respectively, on both the plume expansion dynamics and atomic kinetics. Target recoil pressure, which clearly affects plume dynamics, is included in the model. The effects of laser wavelength and spot size on plume dynamics are discussed in terms of plasma absorption of laser light. A transition from isothermal to isentropic behavior for spot sizes greater than 50 μm is clearly evidenced. Equilibrium conditions are found to exist only up to 300 ns after the plume creation, while complete local thermodynamic equilibrium is found to be confined to the very early parts of the expansion.

  20. EDM - A model for optimising the short-term power operation of a complex hydroelectric network

    Tremblay, M.; Guillaud, C.

    1996-01-01

    In order to optimize the short-term power operation of a complex hydroelectric network, a new model called EDM was added to PROSPER, a water management analysis system developed by SNC-Lavalin. PROSPER is now divided into three parts: an optimization model (DDDP), a simulation model (ESOLIN), and an economic dispatch model (EDM) for the short-term operation. The operation of the KSEB hydroelectric system (located in southern India) with PROSPER was described. The long-term analysis with monthly time steps is assisted by the DDDP, and the daily analysis with hourly or half-hourly time steps is performed with the EDM model. 3 figs

  1. Quantity Constrained General Equilibrium

    Babenko, R.; Talman, A.J.J.

    2006-01-01

    In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In

  2. Exploring the anisotropic Kondo model in and out of equilibrium with alkaline-earth atoms

    Kanász-Nagy, Márton; Ashida, Yuto; Shi, Tao; Moca, Cǎtǎlin Paşcu; Ikeda, Tatsuhiko N.; Fölling, Simon; Cirac, J. Ignacio; Zaránd, Gergely; Demler, Eugene A.

    2018-04-01

    We propose a scheme to realize the Kondo model with tunable anisotropy using alkaline-earth atoms in an optical lattice. The new feature of our setup is Floquet engineering of interactions using time-dependent Zeeman shifts, that can be realized either using state-dependent optical Stark shifts or magnetic fields. The properties of the resulting Kondo model strongly depend on the anisotropy of the ferromagnetic interactions. In particular, easy-plane couplings give rise to Kondo singlet formation even though microscopic interactions are all ferromagnetic. We discuss both equilibrium and dynamical properties of the system that can be measured with ultracold atoms, including the impurity spin susceptibility, the impurity spin relaxation rate, as well as the equilibrium and dynamical spin correlations between the impurity and the ferromagnetic bath atoms. We analyze the nonequilibrium time evolution of the system using a variational non-Gaussian approach, which allows us to explore coherent dynamics over both short and long timescales, as set by the bandwidth and the Kondo singlet formation, respectively. In the quench-type experiments, when the Kondo interaction is suddenly switched on, we find that real-time dynamics shows crossovers reminiscent of poor man's renormalization group flow used to describe equilibrium systems. For bare easy-plane ferromagnetic couplings, this allows us to follow the formation of the Kondo screening cloud as the dynamics crosses over from ferromagnetic to antiferromagnetic behavior. On the other side of the phase diagram, our scheme makes it possible to measure quantum corrections to the well-known Korringa law describing the temperature dependence of the impurity spin relaxation rate. Theoretical results discussed in our paper can be measured using currently available experimental techniques.

  3. Geomagnetic polarity reversals as a mechanism for the punctuated equilibrium model of biological evolution

    Welsh, J.S.; Welsh, A.L.; Welsh, W.F.

    2003-01-01

    In contrast to what is predicted by classical Darwinian theory (phyletic gradualism), the fossil record typically displays a pattern of relatively sudden, dramatic changes as detailed by Eldredge and Gould's model of punctuated equilibrium. Evolutionary biologists have been at a loss to explain the ultimate source of the new mutations that drive evolution. One hypothesis holds that the abrupt speciation seen in the punctuated equilibrium model is secondary to an increased mutation rate resulting from periodically increased levels of ionizing radiation on the Earth's surface. Sporadic geomagnetic pole reversals, occurring every few million years on the average, are accompanied by alterations in the strength of the Earth's magnetic field and magnetosphere. This diminution may allow charged cosmic radiation to bombard Earth with less attenuation, thereby resulting in increased mutation rates. This episodic fluctuation in the magnetosphere is an attractive mechanism for the observed fossil record. Selected periods and epochs of geologic history for which data was available were reviewed for both geomagnetic pole reversal history and fossil record. Anomalies in either were scrutinized in greater depth and correlations were made. A 35 million year span (118-83 Ma) was identified during the Early/Middle Cretaceous period that was devoid of geomagnetic polarity reversals (the Cretaceous normal superchron). Examination of the fossil record (including several invertebrate and vertebrate taxa) during the Cretaceous normal superchron does not reveal any significant gap or slowing of speciation. Although increased terrestrial radiation exposure due to a diminution of the Earth's magnetosphere caused by a reversal of geomagnetic polarity is an attractive explanation for the mechanism of punctuated equilibrium, our investigation suggests that such polarity reversals cannot fully provide the driving force behind biological evolution. Further research is required to determine if

  4. Risk Route Choice Analysis and the Equilibrium Model under Anticipated Regret Theory

    pengcheng yuan

    2014-02-01

    The assumption about travellers' route choice behaviour has a major influence on traffic flow equilibrium analysis. Previous studies of travellers' route choice were mainly based on expected utility maximization theory. However, with gradually increasing knowledge about the uncertainty of the transportation system, researchers have realized that expected utility maximization theory has serious limitations, because it requires travellers to be 'absolutely rational'; in fact, travellers are not truly 'absolutely rational'. Anticipated regret theory proposes an alternative framework to the traditional treatment of risk-taking in route choice behaviour which might be more scientific and reasonable. We have applied anticipated regret theory to the analysis of the risky route choosing process, and constructed an anticipated regret utility function. For a simple case with two parallel routes, the route choosing results influenced by the risk aversion degree, regret degree and environment risk degree have been analyzed. Moreover, a user equilibrium model based on anticipated regret theory has been established. The equivalence and the uniqueness of the model are proved; an efficacious algorithm is also proposed to solve the model. Both the model and the algorithm are demonstrated in a real network. In an experiment, the model results have been compared with real data. It was found that the model results can be similar to the real data if a proper regret degree parameter is selected. This illustrates that the model can better explain risky route choosing behaviour. Moreover, it was also found that the traveller's regret degree increases when the environment becomes more and more risky.

  5. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Zhimiao Tao

    2013-01-01

    An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.

  6. Optimal Energy Consumption in Refrigeration Systems - Modelling and Non-Convex Optimisation

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Skovrup, Morten J.

    2012-01-01

    Supermarket refrigeration consumes substantial amounts of energy. However, due to the thermal capacity of the refrigerated goods, parts of the cooling capacity delivered can be shifted in time without deteriorating the food quality. In this study, we develop a realistic model for the energy consumption in supermarket refrigeration systems. This model is used in a Nonlinear Model Predictive Controller (NMPC) to minimise the energy used by operation of a supermarket refrigeration system. The model is non-convex and we develop a computationally efficient algorithm tailored to this problem...
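
    A heavily simplified sketch of the underlying load-shifting idea: with a time-varying electricity price and a thermal model of the refrigerated goods, an optimiser schedules cooling power within a temperature band so that more cooling is bought when it is cheap. The linear thermal model, price profile and parameters below are illustrative only, and the true problem treated in the paper is non-convex (e.g. through the condition-dependent COP):

        import numpy as np
        from scipy.optimize import minimize

        N = 24                                                    # one-hour steps
        price = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(N) / N)  # fictitious price profile
        dt, T0, T_amb = 1.0, 3.0, 25.0                            # h, degC, degC
        a, b = 0.05, 0.5                                          # heat-loss and cooling gains
        T_min, T_max, Q_max = 1.0, 5.0, 4.0                       # food temperature band, max cooling

        def temperatures(Q):
            T = np.empty(N + 1)
            T[0] = T0
            for k in range(N):
                T[k + 1] = T[k] + dt * (a * (T_amb - T[k]) - b * Q[k])
            return T

        cost = lambda Q: float(np.dot(price, Q)) * dt
        cons = [{'type': 'ineq', 'fun': lambda Q: temperatures(Q)[1:] - T_min},
                {'type': 'ineq', 'fun': lambda Q: T_max - temperatures(Q)[1:]}]
        res = minimize(cost, x0=np.full(N, 2.0), bounds=[(0.0, Q_max)] * N,
                       constraints=cons, method='SLSQP')
        print(np.round(res.x, 2))        # cooling effort concentrated in cheap-price hours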

  7. Interface model conditions for a non-equilibrium heat transfer model for conjugate fluid/porous/solid domains

    Betchen, L.J.; Straatman, A.G.

    2005-01-01

    A mathematical and numerical model for the treatment of conjugate fluid flow and heat transfer problems in domains containing pure fluid, porous, and pure solid regions has been developed. The model is general and physically reasoned, and allows for local thermal non-equilibrium in the porous region. The model is developed for implementation on a simple collocated finite volume grid. Of particular novelty are the conditions implemented at the interfaces between porous regions, and those containing a pure solid or pure fluid. The model is validated by simulation of a three-dimensional porous plug problem for which experimental results are available. (author)

  8. Microwave-assisted combustion synthesis of NiAl intermetallics in a single mode applicator: Modeling and optimisation

    Poli, G.; Sola, R.; Veronesi, P.

    2006-01-01

    The microwave-assisted combustion synthesis of NiAl intermetallics in a single mode applicator has been simulated numerically and performed with the aim of achieving the highest yields, energy efficiency and process reproducibility. The electromagnetic field modeling of the microwave system allowed the choice of the proper experimental set-up and of the materials most suitable for the application, minimising the reflected power and the risks of arcing. In all the experimental conditions tested, conversions of 3-5 g 1:1 atomic ratio Ni and Al powder compacts into NiAl ranged from 98.7% to 100%, requiring from 30 to 180 s with power from 500 to 1500 W. The optimisation procedure made it possible to determine and quantify the effects of the main process variables on the ignition time, the NiAl yields and the specific energy consumption, leading to a fast, reproducible and cost-effective process of microwave-assisted combustion synthesis of NiAl intermetallics.

  9. TEM turbulence optimisation in stellarators

    Proll, J. H. E.; Mynick, H. E.; Xanthopoulos, P.; Lazerson, S. A.; Faber, B. J.

    2016-01-01

    With the advent of neoclassically optimised stellarators, optimising stellarators for turbulent transport is an important next step. The reduction of ion-temperature-gradient-driven turbulence has been achieved via shaping of the magnetic field, and the reduction of trapped-electron mode (TEM) turbulence is addressed in the present paper. Recent analytical and numerical findings suggest TEMs are stabilised when a large fraction of trapped particles experiences favourable bounce-averaged curvature. This is the case for example in Wendelstein 7-X (Beidler et al 1990 Fusion Technol. 17 148) and other Helias-type stellarators. Using this knowledge, a proxy function was designed to estimate the TEM dynamics, allowing optimal configurations for TEM stability to be determined with the STELLOPT (Spong et al 2001 Nucl. Fusion 41 711) code without extensive turbulence simulations. A first proof-of-principle optimised equilibrium stemming from the TEM-dominated stellarator experiment HSX (Anderson et al 1995 Fusion Technol. 27 273) is presented for which a reduction of the linear growth rates is achieved over a broad range of the operational parameter space. As an important consequence of this property, the turbulent heat flux levels are reduced compared with the initial configuration.

  10. Study on possibility of plasma current profile determination using an analytical model of tokamak equilibrium

    Moriyama, Shin-ichi; Hiraki, Naoji

    1996-01-01

    The possibility of determining the current profile of tokamak plasma from the external magnetic measurements alone is investigated using an analytical model of tokamak equilibrium. The model, which is based on an approximate solution of the Grad-Shafranov equation, can set a plasma current profile expressed with four free parameters of the total plasma current, the poloidal beta, the plasma internal inductance and the axial safety factor. The analysis done with this model indicates that, for a D-shaped plasma, the boundary poloidal magnetic field prescribing the external magnetic field distribution depends on the axial safety factor even when the boundary safety factor and the plasma internal inductance are kept constant. This suggests that the plasma current profile can conversely be determined from the external magnetic analysis. The possibility and the limitation of current profile determination are discussed through this analytical result. (author)

  11. Equilibrium Model of Discrete Dynamic Supply Chain Network with Random Demand and Advertisement Strategy

    Guitao Zhang

    2014-01-01

    Advertising can increase consumer demand; it is therefore one of the most important marketing strategies in the operations management of enterprises. This paper aims to analyze the impact of advertising investment on a discrete dynamic supply chain network which consists of suppliers, manufacturers, retailers, and demand markets associated at different tiers under random demand. The impact of advertising investment will last several planning periods besides the current period due to a delay effect. Based on noncooperative game theory, variational inequality, and Lagrange dual theory, the optimal economic behaviors of the suppliers, the manufacturers, the retailers, and the consumers in the demand markets are modeled. In turn, the supply chain network equilibrium model is proposed and computed by a modified projection contraction algorithm with fixed step. The effectiveness of the model is illustrated by numerical examples, and managerial insights are obtained through the analysis of advertising investment in multiple periods and the advertising delay effect among different periods.
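
    The equilibrium conditions of such network models are typically stated as a variational inequality VI(F, K) and solved by projection-type algorithms. The sketch below shows only the basic fixed-step projection iteration on which (modified) projection-contraction schemes build; the two-dimensional affine map and step size are illustrative, not the paper's network operator:

        import numpy as np

        A = np.array([[3.0, 1.0],
                      [-1.0, 2.0]])            # monotone map (positive-definite symmetric part)
        b = np.array([-2.0, 1.0])
        F = lambda x: A @ x + b                # VI mapping; K = nonnegative orthant

        x, tau = np.zeros(2), 0.1              # fixed step size
        for _ in range(500):
            x_new = np.maximum(0.0, x - tau * F(x))   # projection onto K
            if np.linalg.norm(x_new - x) < 1e-10:
                break
            x = x_new

        print(np.round(x, 4), np.round(F(x), 4))
        # At equilibrium: x >= 0, F(x) >= 0 and x . F(x) = 0 (complementarity).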

  12. International trade in oil, gas and carbon emission rights: An intertemporal general equilibrium model

    Manne, A.S.; Rutherford, T.F.

    1994-01-01

    This paper employs a five-region intertemporal model to examine three issues related to carbon emission restrictions. First, we investigate the possible impact of such limits upon future oil prices. We show that carbon limits are likely to differ in their near- and long-term impact. Second, we analyze the problem of "leakage" which could arise if the OECD countries were to adopt unilateral limits upon carbon emissions. Third, we quantify some of the gains from trade in carbon emission rights. Each of these issues has been studied before, but to our knowledge this is the first study based on a multi-regional, forward-looking model. We show that sequential joint maximization can be an effective way to compute equilibria for intertemporal general equilibrium models of international trade. 18 refs., 10 figs

  13. Modelling self-optimised short term load forecasting for medium voltage loads using tuning fuzzy systems and Artificial Neural Networks

    Mahmoud, Thair S.; Habibi, Daryoush; Hassan, Mohammed Y.; Bass, Octavian

    2015-01-01

    Highlights: • A novel Short Term Medium Voltage (MV) Load Forecasting (STLF) model is presented. • A knowledge-based STLF error control mechanism is implemented. • An Artificial Neural Network (ANN)-based optimum tuning is applied on STLF. • The relationship between load profiles and operational conditions is analysed. - Abstract: This paper presents an intelligent mechanism for Short Term Load Forecasting (STLF) models, which allows self-adaptation with respect to the load operational conditions. Specifically, a knowledge-based FeedBack Tuning Fuzzy System (FBTFS) is proposed to instantaneously correlate the information about the demand profile and its operational conditions to make decisions for controlling the model’s forecasting error rate. To maintain minimum forecasting error under various operational scenarios, the FBTFS adaptation was optimised using a Multi-Layer Perceptron Artificial Neural Network (MLPANN), which was trained using the Backpropagation algorithm, based on the information about the amount of error and the operational conditions at the time of forecasting. For the sake of comparison and performance testing, this mechanism was added to the conventional forecasting methods, i.e. Nonlinear AutoRegressive eXogenous-Artificial Neural Network (NARXANN), Fuzzy Subtractive Clustering Method-based Adaptive Neuro Fuzzy Inference System (FSCMANFIS) and Gaussian-kernel Support Vector Machine (GSVM), and the measured forecasting error reduction average in a 12 month simulation period was 7.83%, 8.5% and 8.32% respectively. The 3.5 MW variable load profile of Edith Cowan University (ECU) in Joondalup, Australia, was used in the modelling and simulations of this model, and the data was provided by Western Power, the transmission and distribution company of the state of Western Australia.
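
    A minimal sketch of the error-correction idea behind such schemes: a small neural network learns the base forecaster's error as a function of the operational conditions and its output is added back as a correction. scikit-learn's MLPRegressor and the synthetic load data below are illustrative stand-ins for the paper's FBTFS/MLPANN structure:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 2000
        hour = rng.integers(0, 24, n)
        temp = rng.normal(20.0, 5.0, n)
        load = 2.0 + np.sin(2 * np.pi * hour / 24) + 0.05 * temp + rng.normal(0.0, 0.1, n)

        base = 2.0 + np.sin(2 * np.pi * hour / 24)   # crude base forecast ignoring temperature
        error = load - base

        X = np.column_stack([hour, temp])            # operational conditions as features
        corrector = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                 random_state=0).fit(X, error)

        corrected = base + corrector.predict(X)
        print("MAE base     :", round(float(np.abs(error).mean()), 3))
        print("MAE corrected:", round(float(np.abs(load - corrected).mean()), 3))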

  14. Dabigatran – an exemplar case history demonstrating the need for comprehensive models to optimise the use of new drugs

    Brian eGodman

    2014-06-01

    Background: There are potential conflicts between authorities and companies over funding new premium-priced drugs, especially where there are effectiveness, safety and/or budget concerns. Dabigatran, a new oral anticoagulant for the prevention of stroke in patients with non-valvular atrial fibrillation (AF), exemplifies this issue. Whilst new effective treatments are needed, there are issues with dabigatran in the elderly due to variable drug concentrations, no known antidote and dependence on renal elimination. Published studies showed dabigatran to be cost-effective but there are budget concerns given the prevalence of AF. These concerns resulted in extensive activities pre- to post-launch to manage its introduction. Objective: To (i) review authority activities across countries, (ii) use the findings to develop new models to better manage the entry of new drugs, and (iii) review the implications based on post-launch activities. Methodology: (i) descriptive review and appraisal of activities regarding dabigatran, (ii) development of guidance for key stakeholder groups through an iterative process, (iii) refining of the guidance following post-launch studies. Results: There was a plethora of activities to manage dabigatran, including extensive pre-launch activities, risk-sharing arrangements, prescribing restrictions and monitoring of prescribing post launch. Reimbursement has been denied in some countries due to concerns with its budget impact and/or excessive bleeding. Development of a new model and future guidance is proposed to better manage the entry of new drugs, centring on three pillars of pre-, peri- and post-launch activities. Post-launch activities include increasing use of patient registries to monitor the safety and effectiveness of new drugs in clinical practice. Conclusion: Models for introducing new drugs are essential to optimise their prescribing, especially where there are concerns. Without such models, new drugs may be withdrawn prematurely and/or struggle for

  15. Modeling of Eddy current distribution and equilibrium reconstruction in the SST-1 Tokamak

    Banerjee, Santanu; Sharma, Deepti; Radhakrishnana, Srinivasan; Daniel, Raju; Shankara Joisa, Y.; Atrey, Parveen Kumar; Pathak, Surya Kumar; Singh, Amit Kumar

    2015-01-01

    Toroidal continuity of the vacuum vessel and the cryostat leads to the generation of large eddy currents in these passive structures during the Ohmic phase of the steady state superconducting tokamak SST-1. This reduces the magnitude of the loop voltage seen by the plasma and also delays its buildup. During the ramping down of the Ohmic transformer current (OT), the resultant eddy currents flowing in the passive conductors play a crucial role in governing the plasma equilibrium. The amount of this eddy current and its distribution have to be accurately determined so that they can be fed to the equilibrium reconstruction code as an input. For the accurate inclusion of the effect of eddy currents in the reconstruction, the toroidally continuous conducting structures like the vacuum vessel and the cryostat with large poloidal cross-section and any other poloidal field (PF) coil sitting idle on the machine are broken up into a large number of co-axial toroidal current carrying filaments. The inductance matrix for this large set of toroidal current carrying conductors is calculated using the standard Green's function and the induced currents are evaluated for the OT waveform of each plasma discharge. Consistency of this filament model is cross-checked with the 11 in-vessel and 12 out-vessel toroidal flux loop signals in SST-1. Resistances of the filaments are adjusted to reproduce the experimental measurements of these flux loops in pure OT shots and shots with OT and vertical field (BV). Such shots are taken routinely in SST-1 without the fill gas to cross-check the consistency of the filament model. A Grad-Shafranov (GS) equation solver, named IPREQ, has been developed in IPR to reconstruct the plasma equilibrium through searching for the best-fit current density profile. Ohmic transformer current (OT), vertical field coil current (BV), currents in the passive filaments along with the plasma pressure (p) and current (I_p) profiles are used as inputs to the IPREQ
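
    The filament description lends itself to a compact implementation: the mutual inductance of two coaxial circular filaments follows from the classical elliptic-integral formula, and the induced currents then obey the circuit equation M dI/dt + R I = -dΨ_ext/dt. The filament radii, positions and the thin-wire self-inductance estimate below are illustrative, not the SST-1 geometry:

        import numpy as np
        from scipy.special import ellipk, ellipe

        MU0 = 4e-7 * np.pi

        def mutual_inductance(a, b, d):
            """Mutual inductance (H) of coaxial circular filaments, radii a and b, axial gap d."""
            m = 4.0 * a * b / ((a + b) ** 2 + d ** 2)      # parameter m = k^2
            k = np.sqrt(m)
            return MU0 * np.sqrt(a * b) * ((2.0 / k - k) * ellipk(m) - (2.0 / k) * ellipe(m))

        # A few filaments (r, z) standing in for a toroidally continuous passive structure.
        filaments = [(1.0, -0.2), (1.0, 0.0), (1.0, 0.2), (1.6, 0.0)]
        n = len(filaments)
        M = np.empty((n, n))
        for i, (ri, zi) in enumerate(filaments):
            for j, (rj, zj) in enumerate(filaments):
                if i == j:
                    # Thin-wire self-inductance estimate (effective minor radius 1 cm assumed).
                    M[i, j] = MU0 * ri * (np.log(8.0 * ri / 0.01) - 1.75)
                else:
                    M[i, j] = mutual_inductance(ri, rj, abs(zi - zj))

        print(np.round(M * 1e6, 3))     # inductance matrix in microhenries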

  16. The case for an internal dynamics model versus equilibrium point control in human movement.

    Hinder, Mark R; Milner, Theodore E

    2003-06-15

    The equilibrium point hypothesis (EPH) was conceived as a means whereby the central nervous system could control limb movements by a relatively simple shift in equilibrium position without the need to explicitly compensate for task dynamics. Many recent studies have questioned this view with results that suggest the formation of an internal dynamics model of the specific task. However, supporters of the EPH have argued that these results are not incompatible with the EPH and that there is no reason to abandon it. In this study, we have tested one of the fundamental predictions of the EPH, namely, equifinality. Subjects learned to perform goal-directed wrist flexion movements while a motor provided assistance in proportion to the instantaneous velocity. It was found that the subjects stopped short of the target on the trials where the magnitude of the assistance was randomly decreased, compared to the preceding control trials (P = 0.003), i.e. equifinality was not achieved. This is contrary to the EPH, which predicts that final position should not be affected by external loads that depend purely on velocity. However, such effects are entirely consistent with predictions based on the formation of an internal dynamics model.

  17. Systematic validation of non-equilibrium thermochemical models using Bayesian inference

    Miki, Kenji

    2015-10-01

    © 2015 Elsevier Inc. The validation process proposed by Babuška et al. [1] is applied to thermochemical models describing post-shock flow conditions. In this validation approach, experimental data is involved only in the calibration of the models, and the decision process is based on quantities of interest (QoIs) predicted on scenarios that are not necessarily amenable experimentally. Moreover, uncertainties present in the experimental data, as well as those resulting from an incomplete physical model description, are propagated to the QoIs. We investigate four commonly used thermochemical models: a one-temperature model (which assumes thermal equilibrium among all inner modes), and two-temperature models developed by Macheret et al. [2], Marrone and Treanor [3], and Park [4]. Up to 16 uncertain parameters are estimated using Bayesian updating based on the latest absolute volumetric radiance data collected at the Electric Arc Shock Tube (EAST) installed inside the NASA Ames Research Center. Following the solution of the inverse problems, the forward problems are solved in order to predict the radiative heat flux, QoI, and examine the validity of these models. Our results show that all four models are invalid, but for different reasons: the one-temperature model simply fails to reproduce the data while the two-temperature models exhibit unacceptably large uncertainties in the QoI predictions.
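
    The calibrate-then-predict workflow described here can be illustrated with a deliberately tiny example: a one-parameter surrogate model is calibrated against noisy synthetic "radiance" data by random-walk Metropolis sampling, and the posterior samples are then propagated to a quantity of interest evaluated at an unobserved scenario. The surrogate, noise level and scenario are all invented stand-ins, not the thermochemical models or EAST data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model standing in for a thermochemical radiance prediction:
# "radiance" as a function of one uncertain rate parameter theta.
def forward(theta, x):
    return np.exp(-theta * x)

# Synthetic "experimental" data with noise (placeholder for the real measurements).
x_obs = np.linspace(0.1, 2.0, 20)
theta_true = 1.3
y_obs = forward(theta_true, x_obs) + rng.normal(0.0, 0.02, x_obs.size)

def log_posterior(theta, sigma=0.02):
    if theta <= 0.0:
        return -np.inf                     # flat prior on theta > 0
    resid = y_obs - forward(theta, x_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior.
samples, theta = [], 1.0
logp = log_posterior(theta)
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta)
samples = np.array(samples[5000:])         # discard burn-in

# Propagate posterior uncertainty to a QoI evaluated at an unobserved scenario.
qoi = forward(samples, 3.0)
print(f"theta: {samples.mean():.3f} +/- {samples.std():.3f}")
print(f"QoI:   {qoi.mean():.4f} +/- {qoi.std():.4f}")
```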

  18. Equilibrium and kinetic models for colloid release under transient solution chemistry conditions.

    Bradford, Scott A; Torkzaban, Saeed; Leij, Feike; Simunek, Jiri

    2015-10-01

    We present continuum models to describe colloid release in the subsurface during transient physicochemical conditions. Our modeling approach relates the amount of colloid release to changes in the fraction of the solid surface area that contributes to retention. Equilibrium, kinetic, combined equilibrium and kinetic, and two-site kinetic models were developed to describe various rates of colloid release. These models were subsequently applied to experimental colloid release datasets to investigate the influence of variations in ionic strength (IS), pH, cation exchange, colloid size, and water velocity on release. Various combinations of equilibrium and/or kinetic release models were needed to describe the experimental data depending on the transient conditions and colloid type. Release of Escherichia coli D21g was promoted by a decrease in solution IS and an increase in pH, similar to expected trends for a reduction in the secondary minimum and nanoscale chemical heterogeneity. The retention and release of 20 nm carboxyl-modified latex nanoparticles (NPs) were demonstrated to be more sensitive to the presence of Ca(2+) than D21g. Specifically, retention of NPs was greater than D21g in the presence of 2 mM CaCl2 solution, and release of NPs only occurred after exchange of Ca(2+) by Na(+) and then a reduction in the solution IS. These findings highlight the limitations of conventional interaction energy calculations to describe colloid retention and release, and point to the need to consider other interactions (e.g., Born, steric, and/or hydration forces) and/or nanoscale heterogeneity. Temporal changes in the water velocity did not have a large influence on the release of D21g for the examined conditions. This insensitivity was likely due to factors that reduce the applied hydrodynamic torque and/or increase the resisting adhesive torque; e.g., macroscopic roughness and grain-grain contacts. Our analysis and models improve our understanding and ability to describe the amounts
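
    As a minimal sketch of the simplest member of this model family, the snippet below integrates a single-site, first-order kinetic release law in which the retained colloid mass relaxes towards a new equilibrium level after a step change in solution chemistry. The rate constant and equilibrium fraction are assumed illustrative values, not parameters fitted to the datasets above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def release_rate(t, y, k_rel, S_eq):
    # First-order relaxation of the retained colloid mass S towards the new
    # equilibrium level S_eq imposed by the changed solution chemistry.
    S = y[0]
    return [-k_rel * (S - S_eq) if S > S_eq else 0.0]

S0, S_eq, k_rel = 1.0, 0.3, 0.15        # normalised retained mass; k_rel in 1/min
sol = solve_ivp(release_rate, (0.0, 60.0), [S0], args=(k_rel, S_eq),
                dense_output=True)

t_eval = np.linspace(0.0, 60.0, 7)
released = S0 - sol.sol(t_eval)[0]      # cumulative mass released into solution
print(np.round(released, 3))
```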

  19. Final Technical Report: "Representing Endogenous Technological Change in Climate Policy Models: General Equilibrium Approaches"

    Ian Sue Wing

    2006-04-18

    The research supported by this award pursued three lines of inquiry: (1) The construction of dynamic general equilibrium models to simulate the accumulation and substitution of knowledge, which has resulted in the preparation and submission of several papers: (a) A submitted pedagogic paper which clarifies the structure and operation of computable general equilibrium (CGE) models (C.2), and a review article in press which develops a taxonomy for understanding the representation of technical change in economic and engineering models for climate policy analysis (B.3). (b) A paper which models knowledge directly as a homogeneous factor, and demonstrates that inter-sectoral reallocation of knowledge is the key margin of adjustment which enables induced technical change to lower the costs of climate policy (C.1). (c) An empirical paper which estimates the contribution of embodied knowledge to aggregate energy intensity in the U.S. (C.3), followed by a companion article which embeds these results within a CGE model to understand the degree to which autonomous energy efficiency improvement (AEEI) is attributable to technical change as opposed to sub-sectoral shifts in industrial composition (C.4) (d) Finally, ongoing theoretical work to characterize the precursors and implications of the response of innovation to emission limits (E.2). (2) Data development and simulation modeling to understand how the characteristics of discrete energy supply technologies determine their succession in response to emission limits when they are embedded within a general equilibrium framework. This work has produced two peer-reviewed articles which are currently in press (B.1 and B.2). (3) Empirical investigation of trade as an avenue for the transmission of technological change to developing countries, and its implications for leakage, which has resulted in an econometric study which is being revised for submission to a journal (E.1). As work commenced on this topic, the U.S. withdrawal

  20. Optimising the anaerobic co-digestion of urban organic waste using dynamic bioconversion mathematical modelling

    Fitamo, Temesgen Mathewos; Boldrin, Alessio; Dorini, G.

    2016-01-01

    Mathematical anaerobic bioconversion models are often used as a convenient way to simulate the conversion of organic materials to biogas. The aim of the study was to apply a mathematical model for simulating the anaerobic co-digestion of various types of urban organic waste, in order to develop...... in a continuously stirred tank reactor. The model's outputs were validated with experimental results obtained in thermophilic conditions, with mixed sludge as a single substrate and urban organic waste as a co-substrate at hydraulic retention times of 30, 20, 15 and 10 days. The predicted performance parameter...... (methane productivity and yield) and operational parameter (concentration of ammonia and volatile fatty acid) values were reasonable and displayed good correlation and accuracy. The model was later applied to identify optimal scenarios for an urban organic waste co-digestion process. The simulation...

  1. Comparative assessment of knee joint models used in multi-body kinematics optimisation for soft tissue artefact compensation.

    Richard, Vincent; Cappozzo, Aurelio; Dumas, Raphaël

    2017-09-06

    Estimating joint kinematics from skin-marker trajectories recorded using stereophotogrammetry is complicated by soft tissue artefact (STA), an inexorable source of error. One solution is to use a bone pose estimator based on multi-body kinematics optimisation (MKO) embedding joint constraints to compensate for STA. However, there is some debate over the effectiveness of this method. The present study aimed to quantitatively assess the degree of agreement between reference (i.e., artefact-free) knee joint kinematics and the same kinematics estimated using MKO embedding six different knee joint models. The following motor tasks were assessed: level walking, hopping, cutting, running, sit-to-stand, and step-up. Reference knee kinematics was taken from pin-marker or biplane fluoroscopic data acquired concurrently with skin-marker data, made available by the respective authors. For each motor task, Bland-Altman analysis revealed that the performance of MKO varied according to the joint model used, with a wide discrepancy in results across degrees of freedom (DoFs), models and motor tasks (with a bias between -10.2° and 13.2° and between -10.2 mm and 7.2 mm, and with a confidence interval up to ±14.8° and ±11.1 mm, for rotation and displacement, respectively). It can be concluded that, while MKO might occasionally improve kinematics estimation, as implemented to date it does not represent a reliable solution to the STA issue. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Development of a global computable general equilibrium model coupled with detailed energy end-use technology

    Fujimori, Shinichiro; Masui, Toshihiko; Matsuoka, Yuzuru

    2014-01-01

    Highlights: • Detailed energy end-use technology information is considered within a CGE model. • Aggregated macro results of the detailed model are similar to the traditional model. • The detailed model shows unique characteristics in the household sector. - Abstract: A global computable general equilibrium (CGE) model integrating detailed energy end-use technologies is developed in this paper. The paper (1) presents how energy end-use technologies are treated within the model and (2) analyzes the characteristics of the model’s behavior. Energy service demand and end-use technologies are explicitly considered, and the share of technologies is determined by a discrete probabilistic function, namely a Logit function, to meet the energy service demand. Coupling with detailed technology information enables the CGE model to have a more realistic representation of energy consumption. The proposed model is compared with the aggregated traditional model under the same assumptions in scenarios with and without mitigation, roughly consistent with the two-degree climate mitigation target. Although the results of aggregated energy supply and greenhouse gas emissions are similar, there are three main differences between the aggregated and the detailed technologies models. First, GDP losses in mitigation scenarios are lower in the detailed technology model (2.8% in 2050) as compared with the aggregated model (3.2%). Second, price elasticity and autonomous energy efficiency improvement are heterogeneous across regions and sectors in the detailed technology model, whereas the traditional aggregated model generally utilizes a single value for each of these variables. Third, the magnitude of emissions reduction and factors (energy intensity and carbon factor reduction) related to climate mitigation also varies among sectors in the detailed technology model. The household sector in the detailed technology model has a relatively higher reduction for both energy
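
    A hedged sketch of the Logit share mechanism mentioned above: the share of each end-use technology in meeting a given service demand is computed from its service cost and a dispersion parameter. The cost figures, parameter value and function name are assumptions for illustration, not values from the model described in the paper.

```python
import numpy as np

def logit_shares(costs, lam=3.0):
    """Share of each end-use technology from a Logit choice function.

    costs : array of levelised service costs for competing technologies
    lam   : dispersion parameter; higher values concentrate the choice
            on the cheapest option.
    """
    costs = np.asarray(costs, dtype=float)
    w = np.exp(-lam * (costs - costs.min()))   # shift for numerical stability
    return w / w.sum()

# Example: three technologies competing to meet the same energy service demand.
service_demand = 100.0                         # e.g. PJ of useful energy (illustrative)
shares = logit_shares([12.0, 10.0, 15.0])      # service costs (illustrative units)
print(np.round(shares, 3), np.round(shares * service_demand, 1))
```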

  3. Assessment of thermodynamic models for the design, analysis and optimisation of gas liquefaction systems

    Nguyen, Tuong-Van; Elmegaard, Brian

    2016-01-01

    of their performance. However, the thermodynamic models used for this purpose are characterised by different mathematical formulations, ranges of application and levels of accuracy. This may lead to inconsistent results when estimating hydrocarbon properties and assessing the efficiency of a given process. This paper...... are related to the prediction of the energy flows (up to 7%) and to the heat exchanger conductances (up to 11%), and they are not systematic errors. The results illustrate the superiority of using the GERG-2008 model for designing gas processes in real applications, with the aim of reducing their energy use....... They demonstrate as well that particular caution should be exercised when extrapolating the results of the conventional thermodynamic models to the actual conception of the gas liquefaction chain....

  4. A Stochastic After-Taxes Optimisation Model to Support Distribution Network Strategies

    Fernandes, Rui; Hvolby, Hans-Henrik; Gouveia, Borges

    2012-01-01

    The paper proposes a stochastic model to integrate tax issues into strategic distribution network decisions. Specifically, this study will explore the role of distribution models in business profitability, and how to use the network design to deliver additional bottom-line results, using...... distribution centres located in different countries. The challenge is also to reveal how financial and tax knowledge can help logistic leaders improving the value to their companies under global solutions and sources of business net profitability in a dynamic environment. In particular, based on inventory...

  5. Entropy analysis on non-equilibrium two-phase flow models

    Karwat, H.; Ruan, Y.Q.

    1995-01-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships

  6. Effect of including decay chains on predictions of equilibrium-type terrestrial food chain models

    Kirchner, G.

    1990-01-01

    Equilibrium-type food chain models are commonly used for assessing the radiological impact to man from environmental releases of radionuclides. Usually these do not take into account build-up of radioactive decay products during environmental transport. This may be a potential source of underprediction. For estimating consequences of this simplification, the equations of an internationally recognised terrestrial food chain model have been extended to include decay chains of variable length. Example calculations show that for releases from light water reactors as expected both during routine operation and in the case of severe accidents, the build-up of decay products during environmental transport is generally of minor importance. However, a considerable number of radionuclides of potential radiological significance have been identified which show marked contributions of decay products to calculated contamination of human food and resulting radiation dose rates. (author)

  7. Emission policies and the Nigerian economy. Simulations from a dynamic applied general equilibrium model

    Nwaobi, Godwin Chukwudum

    2004-01-01

    Recently, there has been growing concern that human activities may be affecting the global climate through growing atmospheric concentrations of greenhouse gases (GHG). Such warming could have major impacts on economic activity and society. For the Nigerian case, the study uses a multisector dynamic applied general equilibrium model to quantify the economy-wide, distributional and environmental costs of policies to curb GHG emissions. The simulation results indicate the effectiveness of carbon tax, tradable permit and backstop technology policies in curbing GHG emissions, but with distorted economy-wide income distributional effects. However, the model was found to be sensitive to three key exogenous variables and parameters tested: a lower GDP growth rate, a changed interfuel substitution elasticity and the autonomous energy efficiency factor. Unlike the first test, the last two tests only had an improved environmental effect but a stable economy-wide effect. This suggests that domestic energy conservation measures could be a second-best alternative

  8. Equilibrium arsenic adsorption onto metallic oxides : Isotherm models, error analysis and removal mechanism

    Simsek, Esra Bilgin [Yalova University, Yalova (Turkey)]; Beker, Ulker [Yıldız Technical University, Istanbul (Turkey)]

    2014-11-15

    Arsenic adsorption properties of mono- (Fe or Al) and binary (Fe-Al) metal oxides supported on natural zeolite were investigated at three levels of temperature (298, 318 and 338 K). All data obtained from equilibrium experiments were analyzed by Freundlich, Langmuir, Dubinin-Radushkevich, Sips, Toth and Redlich-Peterson isotherms, and error functions were used to predict the best fitting model. The error analysis demonstrated that the As(V) adsorption processes were best described by the Dubinin-Radushkevich model with the lowest sum of normalized error values. According to the results, the presence of iron and aluminum oxides in the zeolite network improved the As(V) adsorption capacity of the raw zeolite (ZNa). The X-ray photoelectron spectroscopy (XPS) analyses of ZNa-Fe and ZNa-AlFe samples suggested that redox reactions are the postulated mechanism for the adsorption onto them, while the adsorption process is followed by surface complexation reactions for ZNa-Al.
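
    The isotherm-fitting step referred to above can be sketched as follows: two of the candidate isotherms are fitted to an invented equilibrium dataset and ranked with a sum-of-squared-errors criterion. The data points, initial guesses and the choice of error function are illustrative assumptions, not the study's measurements or full error analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # Monolayer adsorption: qe = qmax*KL*Ce / (1 + KL*Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # Empirical power-law isotherm: qe = KF * Ce^(1/n)
    return KF * Ce ** (1.0 / n)

# Illustrative equilibrium data: Ce (mmol/L) and qe (mmol/g).
Ce = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
qe = np.array([0.02, 0.035, 0.055, 0.08, 0.095, 0.105])

for name, model, p0 in [("Langmuir", langmuir, (0.12, 5.0)),
                        ("Freundlich", freundlich, (0.08, 2.0))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0, maxfev=10000)
    sse = np.sum((qe - model(Ce, *popt)) ** 2)     # one possible error function
    print(f"{name}: params={np.round(popt, 3)}, SSE={sse:.2e}")
```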

  9. An equilibrium model for tungsten fuzz in an eroding plasma environment

    Doerner, R.P.; Baldwin, M.J.; Stangeby, P.C.

    2011-01-01

    A model equating the growth rate of tungsten fuzz on a plasma-exposed surface to the erosion rate of the fuzzy surface is developed to predict the likelihood of tungsten fuzz formation in the steady-state environment of toroidal confinement devices. To date this question has not been answered because the operational conditions in existing magnetic confinement machines do not necessarily replicate those expected in future fusion reactors (i.e. high-fluence operation, high temperature plasma-facing materials and edge plasma relatively free of condensable impurities). The model developed is validated by performing plasma exposure experiments at different incident ion energies (thereby varying the erosion rate) and measuring the resultant fuzz layer thickness. The results indicate that if the conditions exist for fuzz development in a steady-state plasma (surface temperature and energetic helium flux), then the erosion rate will determine the equilibrium thickness of the surface fuzz layer.

  10. Entropy analysis on non-equilibrium two-phase flow models

    Karwat, H.; Ruan, Y.Q. [Technische Universitaet Muenchen, Garching (Germany)

    1995-09-01

    A method of entropy analysis according to the second law of thermodynamics is proposed for the assessment of a class of practical non-equilibrium two-phase flow models. Entropy conditions are derived directly from a local instantaneous formulation for an arbitrary control volume of a structural two-phase fluid, which are finally expressed in terms of the averaged thermodynamic independent variables and their time derivatives as well as the boundary conditions for the volume. On the basis of a widely used thermal-hydraulic system code it is demonstrated with practical examples that entropy production rates in control volumes can be numerically quantified by using the data from the output data files. Entropy analysis using the proposed method is useful in identifying some potential problems in two-phase flow models and predictions as well as in studying the effects of some free parameters in closure relationships.

  11. Modelling and optimisation of fs laser-produced K (alpha) sources

    Gibbon, P.; Mašek, Martin; Teubner, U.; Lu, W.; Nicoul, M.; Shymanovich, U.; Tarasevitch, A.; Zhou, P.; Sokolowski-Tinten, K.; von der Linde, D.

    2009-01-01

    Vol. 96, No. 1 (2009), 23-31 ISSN 0947-8396 R&D Projects: GA MŠk(CZ) LC528 Institutional research plan: CEZ:AV0Z10100523 Keywords: fs laser-plasma interaction * K (alpha) sources * 3D numerical modelling Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.595, year: 2009

  12. Optimising the management of complex dynamic ecosystems. An ecological-economic modelling approach

    Hein, L.G.

    2005-01-01

    Keywords: ecological-economic modelling; ecosystem services; resource use; efficient; sustainability; wetlands, rangelands.

  13. A model-based combinatorial optimisation approach for energy-efficient processing of microalgae

    Slegers, P.M.; Koetzier, B.J.; Fasaei, F.; Wijffels, R.H.; Straten, van G.; Boxtel, van A.J.B.

    2014-01-01

    The analyses of algae biorefinery performance are commonly based on fixed performance data for each processing step. In this work, we demonstrate a model-based combinatorial approach to derive the design-specific upstream energy consumption and biodiesel yield in the production of biodiesel from

  14. Optimisation of near-infrared reflectance model in measuring protein and amylose content of rice flour.

    Xie, L H; Tang, S Q; Chen, N; Luo, J; Jiao, G A; Shao, G N; Wei, X J; Hu, P S

    2014-01-01

    Near-infrared reflectance spectroscopy (NIRS) has been used to predict the cooking quality parameters of rice, such as protein content (PC) and amylose content (AC). Using brown and milled flours from 519 rice samples representing a wide range of grain qualities, this study compared the calibration models generated by different mathematical treatments, preprocessing treatments, and combinations of different regression algorithms. A modified partial least squares model (MPLS) with the mathematical treatment "2, 8, 8, 2" (2nd-order derivative computed based on 8 data points, and 8 and 2 data points in the 1st and 2nd smoothing, respectively) and inverse multiplicative scattering correction preprocessing was identified as the best model for simultaneous measurement of PC and AC in brown flours. MPLS/"2, 8, 8, 2"/detrend preprocessing was identified as the best model for milled flours. The results indicated that NIRS could be useful in the estimation of PC and AC of breeding lines in early generations of breeding programs, and for the purposes of quality control in the food industry. Copyright © 2013 Elsevier Ltd. All rights reserved.
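
    The general workflow (derivative preprocessing followed by a partial least squares calibration evaluated by cross-validation) can be sketched as below. The spectra and trait values are synthetic stand-ins, the Savitzky-Golay window and polynomial order are illustrative rather than the "2, 8, 8, 2" treatment itself, and scikit-learn's PLSRegression is used in place of the modified PLS algorithm of the study.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
wavelengths = np.arange(700)

# Synthetic "spectra": one absorption band whose depth tracks the trait
# (say, protein content) plus a random additive/sloping baseline.
n = 400
y = rng.uniform(5.0, 12.0, n)                              # trait values, e.g. % protein
band = np.exp(-0.5 * ((wavelengths - 350.0) / 15.0) ** 2)
baseline = rng.normal(0.0, 1.0, (n, 1)) + rng.normal(0.0, 0.005, (n, 1)) * wavelengths
X = baseline + 0.05 * y[:, None] * band + rng.normal(0.0, 0.002, (n, 700))

# Second-derivative Savitzky-Golay preprocessing removes the additive and
# sloping baseline (window and polynomial order are illustrative).
X_d2 = savgol_filter(X, window_length=9, polyorder=3, deriv=2, axis=1)

pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X_d2, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.3f} +/- {r2.std():.3f}")
```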

  15. The tropical water and energy cycles in a cumulus ensemble model. Part 1: Equilibrium climate

    Sui, C. H.; Lau, K. M.; Tao, W. K.; Simpson, J.

    1994-01-01

    A cumulus ensemble model is used to study the tropical water and energy cycles and their role in the climate system. The model includes cloud dynamics, radiative processes, and microphysics that incorporate all important production and conversion processes among water vapor and five species of hydrometeors. Radiative transfer in clouds is parameterized based on cloud contents and size distributions of each bulk hydrometeor. Several model integrations have been carried out under a variety of imposed boundary and large-scale conditions. In Part 1 of this paper, the primary focus is on the water and heat budgets of the control experiment, which is designed to simulate the convective-radiative equilibrium response of the model to an imposed vertical velocity and a fixed sea surface temperature at 28 C. The simulated atmosphere is conditionally unstable below the freezing level and close to neutral above the freezing level. The equilibrium water budget shows that the total moisture source, M_s, which is contributed by surface evaporation (0.24 M_s) and the large-scale advection (0.76 M_s), all converts to the mean surface precipitation P̄_s. Most of M_s is transported vertically in convective regions, where much of the condensate is generated and falls to the surface (0.68 P̄_s). The remaining condensate detrains at a rate of 0.48 P̄_s and constitutes 65% of the source for stratiform clouds above the melting level. The upper-level stratiform cloud dissipates into the clear environment at a rate of 0.14 P̄_s, which is a significant moisture source comparable to the detrained water vapor (0.15 P̄_s) to the upper troposphere from convective clouds. In the lower troposphere, stratiform clouds evaporate at a rate of 0.41 P̄_s, which is a more dominant moisture source than surface evaporation (0.22 P̄_s). The precipitation falling to the surface in the stratiform region is about 0.32 P̄_s. The associated

  16. Molecular finite-size effects in stochastic models of equilibrium chemical systems.

    Cianci, Claudia; Smith, Stephen; Grima, Ramon

    2016-02-28

    The reaction-diffusion master equation (RDME) is a standard modelling approach for understanding stochastic and spatial chemical kinetics. An inherent assumption is that molecules are point-like. Here, we introduce the excluded volume reaction-diffusion master equation (vRDME), which takes into account volume exclusion effects on stochastic kinetics due to a finite molecular radius. We obtain an exact closed-form solution of the RDME and of the vRDME for a general chemical system in equilibrium conditions. The difference between the two solutions increases with the ratio of molecular diameter to the compartment length scale. We show that an increase in the fraction of excluded space can (i) lead to deviations from the classical inverse square root law for the noise strength, (ii) flip the skewness of the probability distribution from right-skewed to left-skewed, (iii) shift the equilibrium of bimolecular reactions so that more product molecules are formed, and (iv) strongly modulate the Fano factors and coefficients of variation. These volume exclusion effects are found to be particularly pronounced for chemical species not involved in chemical conservation laws. Finally, we show that statistics obtained using the vRDME are in good agreement with those obtained from Brownian dynamics with excluded volume interactions.

  17. Novel non-equilibrium modelling of a DC electric arc in argon

    Baeva, M.; Benilov, M. S.; Almeida, N. A.; Uhrlandt, D.

    2016-06-01

    A novel non-equilibrium model has been developed to describe the interplay of heat and mass transfer and electric and magnetic fields in a DC electric arc. A complete diffusion treatment of particle fluxes, a generalized form of Ohm’s law, and numerical matching of the arc plasma with the space-charge sheaths adjacent to the electrodes are applied to analyze in detail the plasma parameters and the phenomena occurring in the plasma column and the near-electrode regions of a DC arc generated in atmospheric pressure argon for current levels from 20 A up to 200 A. Results comprising electric field and potential, current density, heating of the electrodes, and effects of thermal and chemical non-equilibrium are presented and discussed. The current-voltage characteristic obtained is in fair agreement with known experimental data. It indicates a minimum for arc current of about 80 A. For all current levels, a field reversal in front of the anode accompanied by a voltage drop of (0.7-2.6) V is observed. Another field reversal is observed near the cathode for arc currents below 80 A.

  18. Novel non-equilibrium modelling of a DC electric arc in argon

    Baeva, M; Uhrlandt, D; Benilov, M S; Almeida, N A

    2016-01-01

    A novel non-equilibrium model has been developed to describe the interplay of heat and mass transfer and electric and magnetic fields in a DC electric arc. A complete diffusion treatment of particle fluxes, a generalized form of Ohm’s law, and numerical matching of the arc plasma with the space-charge sheaths adjacent to the electrodes are applied to analyze in detail the plasma parameters and the phenomena occurring in the plasma column and the near-electrode regions of a DC arc generated in atmospheric pressure argon for current levels from 20 A up to 200 A. Results comprising electric field and potential, current density, heating of the electrodes, and effects of thermal and chemical non-equilibrium are presented and discussed. The current–voltage characteristic obtained is in fair agreement with known experimental data. It indicates a minimum for arc current of about 80 A. For all current levels, a field reversal in front of the anode accompanied by a voltage drop of (0.7–2.6) V is observed. Another field reversal is observed near the cathode for arc currents below 80 A. (paper)

  19. Kinetics and equilibrium modeling of uranium(VI) sorption by bituminous shale from aqueous solution

    Ortaboy, Sinem; Atun, Gülten

    2014-01-01

    Highlights: • Oil shales are sedimentary rocks containing polymeric matter in a mineral matrix. • The sorption potential of bituminous shale (BS) for uranium recovery was investigated. • U(VI) sorption increased with decreasing pH and increasing temperature. • Kinetic data were analyzed based on single- and two-resistance diffusion models. • The results fit well to the McKay equation assuming film and intraparticle diffusion. - Abstract: Sorption of U(VI) onto a bituminous shale (BS) from a nuclear power plant project site in the Black Sea region was investigated for potential risk assessment when it is released into the environment with contaminated ground and surface water. The sorption characteristics of the BS for U(VI) recovery were evaluated as a function of contact time, adsorbent dosage, initial concentration, pH and temperature. Kinetic results fit better with the pseudo-second-order model than with the pseudo-first-order model. The possibility of a diffusion-controlled process was analyzed based on the Weber–Morris intra-particle diffusion model. The McKay equation, assuming film and intraparticle diffusion, predicted the data better than the Vermeulen approximation presuming surface diffusion. Equilibrium sorption data were modeled according to the Langmuir, Dubinin–Radushkevich (D–R) and Freundlich isotherm equations. Sorption capacity increased from 0.10 to 0.15 mmol g⁻¹ in the 298–318 K temperature range. FT-IR analysis and pH-dependent sorption studies conducted in hydroxide and carbonate media revealed that U(VI) species were sorbed in uranyl and its hydroxo forms on the BS. Desorption studies showed that U(VI) leaching from the loaded BS with Black Sea water was negligible. The activation parameters (E_a, ΔH* and ΔG*) estimated from the diffusion coefficients indicated the presence of an energy barrier in the sorption system. However, thermodynamic functions derived from sorption equilibrium constants showed that the overall sorption process was spontaneous in nature
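
    As an illustration of the kind of kinetic fit referred to above, the snippet below fits the pseudo-second-order rate law q(t) = k2*qe^2*t / (1 + k2*qe*t) to an invented uptake series; the numbers are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t); qe in mmol/g, k2 in g/(mmol*min)."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

# Illustrative uptake data: contact time (min) and sorbed amount q (mmol/g).
t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)
q = np.array([0.04, 0.06, 0.08, 0.10, 0.11, 0.12, 0.125])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, q, p0=(0.13, 0.05))
print(f"qe = {qe_fit:.3f} mmol/g, k2 = {k2_fit:.3f} g/(mmol*min)")
```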

  20. The Analysis of Pricing Power of Preponderant Metal Mineral Resources under the Perspective of Intergenerational Equity and Social Preferences: An Analytical Framework Based on Cournot Equilibrium Model

    Meirui Zhong

    2014-01-01

    Full Text Available This paper combines the intergenerational equity equilibrium and the social preferences equilibrium with the Cournot equilibrium, addressing the technical problem of confirming intergenerational equity compensation and strategic value, achieving an effective combination of the sustainable development concept and value evaluation, and extending the theoretical framework for the lack of pricing power over mineral resources. The conclusions of the theoretical model and the numerical simulation show that the intergenerational equity equilibrium and the social preferences equilibrium enhance the international trade market power of preponderant metal mineral resources, owing to the generation of intergenerational equity compensation value and strategic value. However, the impact of social preferences on Cournot market power is not uniform: the altruistic Cournot equilibrium and the reciprocal-inequity Cournot equilibrium change consistently, while the inequity-aversion Cournot equilibrium exhibits loss aversion; that is, under Cournot competition with inequity aversion, the Cournot-Nash equilibrium varies monotonically with the sympathy and jealousy parameters of inequity aversion.
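
    For readers unfamiliar with the underlying benchmark, the sketch below computes a plain Cournot-Nash equilibrium for two producers facing linear inverse demand by best-response iteration; in the framework above, intergenerational and strategic value compensation would effectively shift the producers' perceived marginal costs. All demand and cost numbers are illustrative assumptions.

```python
import numpy as np

# Two producers of a preponderant mineral resource face inverse demand
# P(Q) = a - b*Q with constant marginal costs c1, c2 (illustrative values).
a, b = 100.0, 1.0
c = np.array([20.0, 30.0])

# Best-response iteration: q_i = (a - c_i - b * q_j) / (2b), clipped at zero.
q = np.zeros(2)
for _ in range(200):
    q_new = np.maximum((a - c - b * (q.sum() - q)) / (2.0 * b), 0.0)
    if np.allclose(q_new, q, atol=1e-10):
        break
    q = q_new

price = a - b * q.sum()
print(f"Cournot-Nash quantities: {np.round(q, 2)}, price: {price:.2f}")
```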

  1. The Use of VMD Data/Model to Test Different Thermodynamic Models for Vapour-Liquid Equilibrium

    Abildskov, Jens; Azquierdo-Gil, M.A.; Jonsson, Gunnar Eigil

    2004-01-01

    Vacuum membrane distillation (VMD) has been studied as a separation process to remove volatile organic compounds from aqueous streams. A vapour pressure difference across a microporous hydrophobic membrane is the driving force for the mass transport through the membrane pores (this transport take...... place in vapour phase). The vapour pressure difference is obtained in VMD processes by applying a vacuum on one side of the membrane. The membrane acts as a mere support for the liquid-vapour equilibrium. The evaporation of the liquid stream takes place on the feed side of the membrane...... values; membrane type: PTFE/PP/PVDF; feed flow rate; feed temperature. A comparison is made between different thermodynamic models for calculating the vapour-liquid equilibrium at the membrane/pore interface. (C) 2004 Elsevier B.V. All rights reserved....

  2. Optimisation of distributed maintenance: Modelling and application to the multi-factory production

    Simeu-Abazi, Zineb, E-mail: Zineb.Simeu-Abazi@g-scop.inpg.fr [Laboratory G-SCOP, 46 Avenue Felix Viallet, 38031 Grenoble Cedex 1 (France); Ahmad, Alali Alhouaij [Laboratory G-SCOP, 46 Avenue Felix Viallet, 38031 Grenoble Cedex 1 (France)

    2011-11-15

    This paper concerns the modelling and the cost evaluation of maintenance activities in a distributed context. In this work we study the particular case where the maintenance activities are executed by two workshops: a central maintenance workshop (CMW) and a mobile maintenance workshop (MMW). The CMW concerns the repairing process for the corrective maintenance and the MMW executes all preventive maintenance in several factories according to a defined scheduling. The aim is to take into account the resources (spare parts in the MMW) and maintenance actions for a given operating budget. A modular approach for modelling a multi-site structure is proposed to achieve the aim of improving the availability of facilities on production sites while minimising the cost of maintenance.

  3. Using reaction-technical models for characterisation and optimisation of continuous ethanol production with biomass recirculation

    Yayanata, Y

    1983-11-28

    Ethanol production from S. cerevisiae was studied experimentally in one- and two-stage plants, with and without biomass recirculation. The carbon sources were glucose and molasses. The experimental findings were used as a basis for mathematical models whose kinetic parameters were established by comparison with the experiments. In the fermentation processes with glucose as carbon and energy source, an activation kinetics of yeast extract was considered in addition to the limitations resulting from the substrate and the inhibition by the produced ethanol. The problem of biomass recirculation received particular attention. Lamellar separators in the form of a cated tube cluster are described as an alternative to conventional conical separator tanks. Biomass concentrations in the fermenter may amount to about 80 gTS/l. Satisfactory simulation of the plant behaviour is possible by combining the kinetic approaches for the fermenter with the mathematical models for the separator.

  4. A comparison of aggregated models for simulation and operational optimisation of district heating networks

    Larsen, Helge V.; Bøhm, Benny; Wigbels, M.

    2004-01-01

    Work on aggregation of district heating networks has been in progress during the last decade. Two methods have independently been developed in Denmark and Germany. In this article, a comparison of the two methods is first presented. Next, the district heating system Ishoej near Copenhagen is used as a test case. For the 23 substations in Ishoej, heat loads and primary and secondary supply and return temperatures were available every 5 min for the period December 19–24, 2000. The accuracy of the aggregation models has been documented as the errors in heat production and in return temperature at the DH plant between the physical network and the aggregated model. Both the Danish and the German aggregation methods work well. It is concluded that the number of pipes can be reduced from 44 to three when using the Danish method of aggregation without significantly increasing the error in heat production or return temperature at the plant.

  5. Semi-empirical model for optimising future heavy-ion luminosity of the LHC

    Schaumann, M

    2014-01-01

    The wide spectrum of intensities and emittances imprinted on the LHC Pb bunches during the accumulation of bunch trains in the injector chain result in a significant spread in the single bunch luminosities and lifetimes in collision. Based on the data collected in the 2011 Pb-Pb run, an empirical model is derived to predict the single-bunch peak luminosity depending on the bunch’s position within the beam. In combination with this model, simulations of representative bunches are used to estimate the luminosity evolution for the complete ensemble of bunches. Several options are being considered to improve the injector performance and to increase the number of bunches in the LHC, leading to several potential injection scenarios, resulting in different peak and integrated luminosities. The most important options for after the long shutdown (LS) 1 and 2 are evaluated and compared.

  6. Optimal investment paths for future renewable based energy systems - Using the optimisation model Balmorel

    Karlsson, Kenneth Bernard; Meibom, Peter

    2008-01-01

    that with an oil price at 100 $/barrel, a CO2 price at 40 €/ton and the assumed penetration of hydrogen in the transport sector, it is economically optimal to cover more than 95% of the primary energy consumption for electricity and district heat by renewables in 2050. When the transport sector is converted......: A model for analyses of the electricity and CHP markets in the Baltic Sea Region. 〈www.Balmorel.com〉; 2001. [1

  7. Analysis of the EU renewable energy directive by a techno-economic optimisation model

    Lind, Arne; Rosenberg, Eva; Seljom, Pernille; Espegren, Kari; Fidje, Audun; Lindberg, Karen

    2013-01-01

    The EU renewable energy (RES) directive sets a target of increasing the share of renewable energy used in the EU to 20% by 2020. The Norwegian goal for the share of renewable energy in 2020 is 67.5%, an increase from 60.1% in 2005. Norwegian power production is almost solely based on renewable resources, and the possibility of changing from fossil power plants to renewable power production is almost non-existent. Therefore other measures have to be taken to fulfil the RES directive. Possible ways for Norway to reach its target for 2020 are analysed with a technology-rich, bottom-up energy system model (TIMES-Norway). This new model is developed with a high time resolution, among other features, to be able to analyse intermittent power production. Model results indicate that the RES target can be achieved with a diversity of options, including investments in hydropower, wind power, high-voltage power lines for export, various heat pump technologies, energy efficiency measures and increased use of biodiesel in the transportation sector. Hence, it is optimal to invest in a portfolio of technology choices in order to satisfy the RES directive, and not in one single technology in one energy sector. - Highlights: • A new technology-rich, bottom-up energy system model is developed for Norway. • Possible ways for Norway to reach its renewable energy target for 2020 are analysed. • Results show that the renewable target can be achieved with a diversity of options. • The green certificate market contributes to increased investments in wind power

  8. Parameter optimisation for a better representation of drought by LSMs: inverse modelling vs. sequential data assimilation

    Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe

    2017-09-01

    Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, being difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. The optimal value is then selected by using (1) a simple inverse modelling technique, comparing simulated and observed LAI, and (2) a more complex method consisting of integrating observed LAI into ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The evaluation of the MaxAWC estimates from both methods is done using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Statistically significant correlations with Bag and GY are found for up to 36 and 53% of the administrative units for the inverse modelling and LDAS tuning methods, respectively. It is found that the LDAS tuning experiment gives more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.
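
    The first, simpler retrieval strategy can be illustrated with a toy inverse-modelling loop: scan candidate MaxAWC values, run a deliberately trivial stand-in for the LSM, and keep the value minimising the misfit to the observed LAI series. The saturation curve, rainfall forcing and noise level are all invented for illustration and bear no relation to ISBA or the satellite product.

```python
import numpy as np

# Toy stand-in for the LSM: annual maximum LAI as a saturating function of
# MaxAWC (mm) and the year's rainfall (mm).
def simulate_lai(max_awc, rainfall):
    water = np.minimum(rainfall, max_awc)
    return 6.0 * water / (80.0 + water)

rng = np.random.default_rng(3)
rainfall = rng.uniform(100.0, 300.0, 15)            # 15 "years" of forcing
lai_obs = simulate_lai(130.0, rainfall) + rng.normal(0.0, 0.1, 15)

# Simple inverse modelling: keep the candidate MaxAWC minimising the RMSE
# between simulated and "observed" LAI.
candidates = np.arange(50.0, 251.0, 5.0)            # mm
rmse = [np.sqrt(np.mean((simulate_lai(m, rainfall) - lai_obs) ** 2))
        for m in candidates]
print(f"retrieved MaxAWC ~ {candidates[int(np.argmin(rmse))]:.0f} mm")
```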

  9. Incorporation of the equilibrium temperature approach in a Soil and Water Assessment Tool hydroclimatological stream temperature model

    Du, Xinzhong; Shrestha, Narayan Kumar; Ficklin, Darren L.; Wang, Junye

    2018-04-01

    Stream temperature is an important indicator for biodiversity and sustainability in aquatic ecosystems. The stream temperature model currently in the Soil and Water Assessment Tool (SWAT) only considers the impact of air temperature on stream temperature, while the hydroclimatological stream temperature model developed within the SWAT model considers hydrology and the impact of air temperature in simulating the water-air heat transfer process. In this study, we modified the hydroclimatological model by including the equilibrium temperature approach to model heat transfer processes at the water-air interface, which reflects the influences of air temperature, solar radiation, wind speed and streamflow conditions on the heat transfer process. The thermal capacity of the streamflow is modeled by the variation of the stream water depth. An advantage of this equilibrium temperature model is the simple parameterization, with only two parameters added to model the heat transfer processes. The equilibrium temperature model proposed in this study is applied and tested in the Athabasca River basin (ARB) in Alberta, Canada. The model is calibrated and validated at five stations throughout different parts of the ARB, where close to monthly samplings of stream temperatures are available. The results indicate that the equilibrium temperature model proposed in this study provided better and more consistent performances for the different regions of the ARB with the values of the Nash-Sutcliffe Efficiency coefficient (NSE) greater than those of the original SWAT model and the hydroclimatological model. To test the model performance for different hydrological and environmental conditions, the equilibrium temperature model was also applied to the North Fork Tolt River Watershed in Washington, United States. The results indicate a reasonable simulation of stream temperature using the model proposed in this study, with minimum relative error values compared to the other two models
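
    A hedged sketch of the equilibrium temperature idea described above: the stream temperature relaxes towards an equilibrium temperature Te (which in the full model is derived from air temperature, solar radiation and wind speed) at a rate set by a bulk exchange coefficient and the water depth, the latter standing in for thermal capacity. The coefficient value, depth and forcing below are placeholders, not calibrated ARB parameters.

```python
import numpy as np

def step_stream_temperature(Tw, Te, K, depth, dt, rho=1000.0, cp=4182.0):
    """One time step of an equilibrium-temperature heat exchange model.

    dTw/dt = K * (Te - Tw) / (rho * cp * depth)

    Tw    : current stream temperature [deg C]
    Te    : equilibrium temperature from the meteorological forcing [deg C]
    K     : bulk heat exchange coefficient [W m-2 K-1]
    depth : stream water depth, controlling thermal capacity [m]
    dt    : time step [s]
    """
    return Tw + dt * K * (Te - Tw) / (rho * cp * depth)

# Illustrative week-long run with a daily cycle in equilibrium temperature.
dt, hours = 3600.0, 24 * 7
Tw = 8.0
for h in range(hours):
    Te = 12.0 + 5.0 * np.sin(2.0 * np.pi * h / 24.0)
    Tw = step_stream_temperature(Tw, Te, K=40.0, depth=1.5, dt=dt)
print(f"stream temperature after one week: {Tw:.2f} deg C")
```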

  10. Equilibrium Trust

    Luca Anderlini; Daniele Terlizzese

    2009-01-01

    We build a simple model of trust as an equilibrium phenomenon, departing from standard "selfish" preferences in a minimal way. Agents who are on the receiving end of an offer to transact can choose whether to cheat and take away the entire surplus, taking into account a "cost of cheating." The latter has an idiosyncratic component (an agent's type), and a socially determined one. The smaller the mass of agents who cheat, the larger the cost of cheating suffered by those who cheat. Depending o...

  11. Services in wireless sensor networks modelling and optimisation for the efficient discovery of services

    Becker, Markus

    2014-01-01

    In recent years, originally static and single-purpose Wireless Sensor Networks have moved towards applications that need support for mobility and multiple purposes. These heterogeneous applications and services demand a framework which distributes and discovers the various services, so that other pieces of equipment can use them. Markus Becker studies, extends, analytically models, simulates and employs the so-called Trickle algorithm in measurements in a Wireless Sensor Network test bed for service distribution. The obtained results apply to the application of the Trickle algorithm at
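
    For orientation, the Trickle algorithm itself (specified in RFC 6206) can be sketched in a few lines: a node keeps an interval that doubles while the network is consistent and resets to the minimum on inconsistency, and it suppresses its own transmission when it has already heard enough consistent messages. The class below is a minimal, single-node sketch with illustrative parameter values; it omits the event scheduling a real implementation would need.

```python
import random

class Trickle:
    """Minimal sketch of the Trickle timer (after RFC 6206)."""

    def __init__(self, imin=1.0, imax_doublings=8, k=1):
        self.imin, self.imax, self.k = imin, imin * 2 ** imax_doublings, k
        self._start_interval(imin)

    def _start_interval(self, interval):
        self.interval = interval
        self.counter = 0                                     # consistent messages heard
        self.t = random.uniform(interval / 2.0, interval)    # transmit point within interval

    def hear_consistent(self):
        self.counter += 1

    def hear_inconsistent(self):
        if self.interval > self.imin:
            self._start_interval(self.imin)                  # reset to Imin

    def fire(self):
        """Called at time t within the interval; returns True if the node transmits."""
        return self.counter < self.k

    def interval_expired(self):
        self._start_interval(min(2.0 * self.interval, self.imax))

# Sketch of how a node would drive the timer over a few quiet intervals:
trickle = Trickle()
for _ in range(5):
    transmitted = trickle.fire()
    trickle.interval_expired()
    print(trickle.interval, transmitted)
```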

  12. Physicochemical modelling of the pressurised-water reactors primary coolant for the optimisation of its purification

    Elain, L.; Doury-Berthod, M.; Berger, M.

    2002-01-01

    The purpose of this paper is to present some speciation results obtained by simulation for a complete reactor operation cycle. Calculations were performed at 25 C, which is close to the operational temperature of the CVCS purification system (30 C to 50 C). Due to the lack of a number of data, a totally predictive quantitative study proved impossible. The present modelling aims at identifying trends and giving orders of magnitude for concentrations, essential information for clarifying the fluid evolution over a cycle. It is based on the few coherent results we were able to extract from experience feedback and on the most recent and accessible thermodynamic databases. (authors)

  13. Statistical equilibrium calculations for silicon in early-type model stellar atmospheres

    Kamp, L.W.

    1976-02-01

    Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of the range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0--B5, luminosity classes III, IV, and V

  14. Model based fleet optimisation and master control of a power production system

    Joergensen, C.; Mortensen, J.H.; Nielsen, E.O.; Moelbak, T.

    2006-01-01

    This paper discussed an optimization concept for power plants operated by the Danish power company Elsam. The power company operates a distributed power production system with fossil fuel thermal plants, biomass-fired thermal plants, waste incineration plants, on- and offshore wind power, and district heating storage units. Power and regulation power are traded on an hourly basis, while trading of district heating resources is conducted using bilateral contracts. System and plant level case studies on optimization and control were presented. A system control level was developed to ensure compliance with power market requirements. Dynamic constraints were posed by environmental regulations, grid capabilities, and fuel and district heating contracts. System components included a short-term load scheduler; a power controller; a frequency control scheduler; a marginal cost calculator; and a master control. The scheduler consisted of an optimization algorithm and a set of steady-state models designed to minimize fuel, load, and maintenance costs. Quadratic programming and mixed integer programming methods were used to minimize deviations between the total electrical power production reference value and actual power production values. The study showed that control levels can be optimized using advanced modelling and control methods. However, integration and coordination between the various levels is needed to obtain improved performance. It was concluded that a bottom-up approach starting at the lowest possible level can ensure the performance of an optimization scheme. 6 refs., 9 figs
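
    As a highly simplified illustration of the quadratic-programming layer mentioned above, the snippet below dispatches three units with quadratic cost curves so that total production matches a demand reference, subject to unit limits. The cost coefficients, limits and demand are invented, and a general-purpose SLSQP solver stands in for the dedicated QP/MIP machinery of the scheduler; a unit-commitment extension would add binary on/off variables.

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic production-cost coefficients: cost_i(p) = a_i*p^2 + b_i*p (illustrative).
a = np.array([0.02, 0.03, 0.05])
b = np.array([18.0, 16.0, 14.0])
p_min = np.array([50.0, 40.0, 20.0])        # unit limits [MW]
p_max = np.array([300.0, 250.0, 150.0])
demand = 480.0                              # total power reference to track [MW]

cost = lambda p: np.sum(a * p ** 2 + b * p)
constraints = [{"type": "eq", "fun": lambda p: p.sum() - demand}]
bounds = list(zip(p_min, p_max))

res = minimize(cost, x0=(p_min + p_max) / 2.0, bounds=bounds,
               constraints=constraints, method="SLSQP")
print(np.round(res.x, 1), round(res.fun, 1))   # dispatch per unit, total cost
```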

  15. PBDE exposure from food in Ireland: optimising data exploitation in probabilistic exposure modelling.

    Trudel, David; Tlustos, Christina; Von Goetz, Natalie; Scheringer, Martin; Hungerbühler, Konrad

    2011-01-01

    Polybrominated diphenyl ethers (PBDEs) are a class of brominated flame retardants added to plastics, polyurethane foam, electronics, textiles, and other products. These products release PBDEs into the indoor and outdoor environment, thus causing human exposure through food and dust. This study models PBDE dose distributions from ingestion of food for Irish adults on a congener basis by using two probabilistic and one semi-deterministic method. One of the probabilistic methods was newly developed and is based on summary statistics of food consumption combined with a model generating realistic daily energy supply from food. Median (intermediate) doses of total PBDEs are in the range of 0.4-0.6 ng/kg(bw)/day for Irish adults. The 97.5th percentiles of total PBDE doses lie in a range of 1.7-2.2 ng/kg(bw)/day, which is comparable to doses derived for Belgian and Dutch adults. BDE-47 and BDE-99 were identified as the congeners contributing most to estimated intakes, accounting for more than half of the total doses. The most influential food groups contributing to this intake are lean fish and salmon, which together account for about 22-25% of the total doses.

  16. A comparison of aggregated models for simulation and operational optimisation of district heating networks

    Larsen, Helge V.; Boehm, Benny; Wigbels, Michael

    2004-01-01

    Work on aggregation of district heating networks has been in progress during the last decade. Two methods have independently been developed in Denmark and Germany. In this article, a comparison of the two methods is first presented. Next, the district heating system Ishoej near Copenhagen is used as a test case. For the 23 substations in Ishoej, heat loads and primary and secondary supply and return temperatures were available every 5 min for the period December 19-24, 2000. The accuracy of the aggregation models has been documented as the errors in heat production and in return temperature at the DH plant between the physical network and the aggregated model. Both the Danish and the German aggregation methods work well. It is concluded that the number of pipes can be reduced from 44 to three when using the Danish method of aggregation without significantly increasing the error in heat production or return temperature at the plant. In the case of the German method, the number of pipes should not be reduced much below 10 in the Ishoej case

  17. Modelling and Optimising the Value of a Hybrid Solar-Wind System

    Nair, Arjun; Murali, Kartik; Anbuudayasankar, S. P.; Arjunan, C. V.

    2017-05-01

    In this paper, a net present value (NPV) approach for a solar hybrid system has been presented. The system in question aims at supporting an investor by assessing an investment in a solar-wind hybrid system in a given area. The approach follows a combined process of modelling the system and optimising the major investment-related variables to maximise the financial yield of the investment. The consideration of a solar-wind hybrid supply presents significant potential for cost reduction. The investment variables concern the location of the solar-wind plant and its sizing. The system is demand driven, meaning that its primary aim is to fully satisfy the energy demand of the customers. Therefore, the model is a practical tool in the hands of an investor to assess and optimise, in financial terms, an investment aiming at covering real energy demand. Optimisation is performed by taking into account various technical and logical constraints. The relation between the maximum power obtained from the individual systems and from the hybrid system as a whole, alongside the net present value of the system, has been highlighted.
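
    The core financial calculation can be sketched as below: a yearly cash-flow series (capital expenditure up front, then revenue minus operating cost) is discounted to a net present value, and a handful of candidate solar/wind sizings are compared. All cost, yield, tariff and discount-rate figures are invented placeholders, not results from the paper.

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value of a series of yearly cash flows (year 0 first)."""
    years = np.arange(len(cash_flows))
    return np.sum(np.asarray(cash_flows) / (1.0 + rate) ** years)

def project_cash_flows(solar_kw, wind_kw, years=20, tariff=0.10):
    # All figures below are assumed for illustration only.
    capex = 900.0 * solar_kw + 1400.0 * wind_kw            # $/kW installed
    energy = 1500.0 * solar_kw + 2500.0 * wind_kw          # kWh/kW/year
    opex = 0.02 * capex                                    # $/year
    return [-capex] + [energy * tariff - opex] * years

# Compare a few candidate sizings of the hybrid plant.
for solar_kw, wind_kw in [(100, 0), (0, 100), (60, 40)]:
    value = npv(project_cash_flows(solar_kw, wind_kw), rate=0.07)
    print(f"solar {solar_kw} kW / wind {wind_kw} kW -> NPV = {value:,.0f} $")
```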

  18. Optimising Shovel-Truck Fuel Consumption using Stochastic ...

    Optimising the fuel consumption and truck waiting time can result in significant fuel savings. The paper demonstrates that stochastic simulation is an effective tool for optimising the utilisation of fossil-based fuels in mining and related industries. Keywords: Stochastic, Simulation Modelling, Mining, Optimisation, Shovel-Truck ...

  19. Design of optimised backstepping controller for the synchronisation ...

    Ehsan Fouladi

    2017-12-18

    for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller. Keywords. Colpitts oscillator; backstepping controller; chaos synchronisation; shark smell algorithm; particle .... The velocity model is based on the gradient of the objective function, tilting ...

  20. Computable general equilibrium models for sustainability impact assessment: Status quo and prospects

    Boehringer, Christoph; Loeschel, Andreas

    2006-01-01

    Sustainability Impact Assessment (SIA) of economic, environmental, and social effects triggered by governmental policies has become a central requirement for policy design. The three dimensions of SIA are inherently intertwined and subject to trade-offs. Quantification of trade-offs for policy decision support requires numerical models in order to assess systematically the interference of complex interacting forces that affect economic performance, environmental quality, and social conditions. This paper investigates the use of computable general equilibrium (CGE) models for measuring the impacts of policy interference on policy-relevant economic, environmental, and social (institutional) indicators. We find that operational CGE models used for energy-economy-environment (E3) analyses have a good coverage of central economic indicators. Environmental indicators such as energy-related emissions with direct links to economic activities are widely covered, whereas indicators with complex natural science background such as water stress or biodiversity loss are hardly represented. Social indicators stand out for very weak coverage, mainly because they are vaguely defined or incommensurable. Our analysis identifies prospects for future modeling in the field of integrated assessment that link standard E3-CGE-models to theme-specific complementary models with environmental and social focus. (author)

  1. Chemical equilibrium modeling of organic acids, pH, aluminum, and iron in Swedish surface waters.

    Sjöstedt, Carin S; Gustafsson, Jon Petter; Köhler, Stephan J

    2010-11-15

    A consistent chemical equilibrium model that calculates pH from charge balance constraints and aluminum and iron speciation in the presence of natural organic matter is presented. The model requires input data for total aluminum, iron, organic carbon, fluoride, sulfate, and charge balance ANC. The model is calibrated to pH measurements (n = 322) by adjusting the fraction of active organic matter only, which results in an error of pH prediction on average below 0.2 pH units. The small systematic discrepancy between the analytical results for the monomeric aluminum fractionation and the model results is corrected for separately for two different fractionation techniques (n = 499) and validated on a large number (n = 3419) of geographically widely spread samples all over Sweden. The resulting average error for inorganic monomeric aluminum is around 1 µM. In its present form the model is the first internally consistent modeling approach for Sweden and may now be used as a tool for environmental quality management. Soil gibbsite with a log *Ks of 8.29 at 25°C together with a pH dependent loading function that uses molar Al/C ratios describes the amount of aluminum in solution in the presence of organic matter if the pH is roughly above 6.0.
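
    The sketch below illustrates the core idea of computing pH from a charge-balance constraint, reduced to a single hypothetical monoprotic organic acid plus a background cation; the constants are illustrative placeholders, and the full model additionally treats Al, Fe, fluoride, sulfate and organic-matter binding.

```python
# Toy illustration of solving pH from a charge-balance constraint, reduced to one
# hypothetical monoprotic organic acid HA plus a background cation (constants illustrative).
import math
from scipy.optimize import brentq

KW = 1.0e-14     # water ion product
KA = 1.0e-4      # dissociation constant of the hypothetical acid HA
CT = 2.0e-4      # mol/L total organic acid
CB = 5.0e-5      # mol/L background cation charge (plays the role of the charge-balance ANC)

def charge_imbalance(h):
    oh = KW / h
    a_minus = KA * CT / (KA + h)     # dissociated acid anion A-
    return CB + h - oh - a_minus     # positive minus negative charges; zero at equilibrium

h_eq = brentq(charge_imbalance, 1.0e-12, 1.0e-2)   # bracket the root between pH 12 and pH 2
print("equilibrium pH =", round(-math.log10(h_eq), 2))
```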

  2. A nested-LES wall-modeling approach for computation of high Reynolds number equilibrium and non-equilibrium wall-bounded turbulent flows

    Tang, Yifeng; Akhavan, Rayhaneh

    2014-11-01

    A nested-LES wall-modeling approach for high Reynolds number, wall-bounded turbulence is presented. In this approach, a coarse-grained LES is performed in the full domain, along with a nested, fine-resolution LES in a minimal flow unit. The coupling between the two domains is achieved by renormalizing the instantaneous LES velocity fields to match the profiles of kinetic energies of components of the mean velocity and velocity fluctuations in both domains to those of the minimal flow unit in the near-wall region, and to those of the full domain in the outer region. The method is of fixed computational cost, independent of Reτ, in homogeneous flows, and is O(Reτ) in strongly non-homogeneous flows. The method has been applied to equilibrium turbulent channel flows at Reτ ~ 1000 and to shear-driven, 3D turbulent channel flow at Reτ ~ 2000. In equilibrium channel flow, the friction coefficient and the one-point turbulence statistics are predicted in agreement with Dean's correlation and available DNS and experimental data. In shear-driven, 3D channel flow, the evolution of turbulence statistics is predicted in agreement with experimental data of Driver & Hebbar (1991) in shear-driven, 3D boundary layer flow.

  3. Sterile insect technique: A model for dose optimisation for improved sterile insect quality

    Parker, A.; Mehta, K.

    2007-01-01

    The sterile insect technique (SIT) is an environment-friendly pest control technique with application in the area-wide integrated control of key pests, including the suppression or elimination of introduced populations and the exclusion of new introductions. Reproductive sterility is normally induced by ionizing radiation, a convenient and consistent method that maintains a reasonable degree of competitiveness in the released insects. The cost and effectiveness of a control program integrating the SIT depend on the balance between sterility and competitiveness, but it appears that current operational programs with an SIT component are not achieving an appropriate balance. In this paper we discuss optimization of the sterilization process and present a simple model and procedure for determining the optimum dose. (author)
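
    A toy version of the dose-optimisation trade-off described above, with hypothetical dose-response curves for induced sterility and competitiveness; the functional forms, the 40 Gy and 150 Gy scale parameters and the dose range are placeholders, not the model presented by the authors.

```python
# Toy dose-optimisation sketch: induced sterility rises with dose while competitiveness
# falls, and released-male effectiveness is taken as their product (hypothetical forms).
import math
from scipy.optimize import minimize_scalar

def sterility(dose, d0=40.0):          # saturating dose-response (dose in Gy)
    return 1.0 - math.exp(-dose / d0)

def competitiveness(dose, dc=150.0):   # declines with accumulated radiation damage
    return math.exp(-(dose / dc) ** 2)

def effectiveness(dose):
    return sterility(dose) * competitiveness(dose)

res = minimize_scalar(lambda d: -effectiveness(d), bounds=(0.0, 400.0), method="bounded")
print("optimum dose ~ %.0f Gy, effectiveness %.2f" % (res.x, effectiveness(res.x)))
```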

  4. Ordering phenomena and non-equilibrium properties of lattice gas models

    Fiig, T.

    1994-03-01

    This report falls within the general field of ordering processes and non-equilibrium properties of lattice gas models. The theory of diffuse scattering of lattice gas models originating from a random distribution of clusters is considered. We obtain relations between the diffuse part of the structure factor S_dif(q), the correlation function C(r), and the size distribution of clusters D(n). For a number of distributions we calculate S_dif(q) exactly in one dimension, and discuss the possibility for a Lorentzian and a Lorentzian square lineshape to arise. We discuss the two- and three-dimensional oxygen ordering processes in the high-Tc superconductor YBa2Cu3O6+x based on a simple anisotropic lattice gas model. We calculate the structural phase diagram by Monte Carlo simulation and compare the results with experimental data. The structure factor of the oxygen ordering properties has been calculated in both two and three dimensions by Monte Carlo simulation. We report on results obtained from large scale computations on the Connection Machine, which are in excellent agreement with recent neutron diffraction data. In addition we consider the effect of the diffusive motion of metal-ion dopants on the oxygen ordering properties of YBa2Cu3O6+x. The stationary properties of metastability in long-range interaction models are studied by application of a constrained transfer matrix (CTM) formalism. The model considered, which exhibits several metastable states, is an extension of the Blume-Capel model to include weak long-range interactions. We show that the decay rate of the metastable states is closely related to the imaginary part of the equilibrium free-energy density obtained from the CTM formalism. We discuss a class of lattice gas models for dissipative transport in the framework of a Langevin description, which is capable of producing power law spectra for the density fluctuations. We compare with numerical results obtained from simulations of a
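
    A generic sketch of the Monte Carlo machinery involved: a two-dimensional lattice gas with conserved occupancy, nearest-neighbour attraction and Metropolis particle hops, followed by a crude structure-factor estimate. Parameters are arbitrary and this is not the anisotropic oxygen-ordering model of the report.

```python
# Generic 2D lattice-gas Monte Carlo: conserved occupancy (Kawasaki-type particle hops),
# nearest-neighbour attraction -J and Metropolis acceptance, plus a crude diffuse
# structure-factor estimate at the end. All parameters are arbitrary.
import numpy as np

L, J, T, sweeps = 32, 1.0, 1.0, 100
rng = np.random.default_rng(0)
occ = (rng.random((L, L)) < 0.5).astype(int)       # roughly half-filled lattice

def site_energy(i, j):
    """Interaction of site (i, j) with its four neighbours (-J per occupied pair)."""
    nn = occ[(i + 1) % L, j] + occ[(i - 1) % L, j] + occ[i, (j + 1) % L] + occ[i, (j - 1) % L]
    return -J * occ[i, j] * nn

moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
for _ in range(sweeps * L * L):
    i, j = rng.integers(L, size=2)
    di, dj = moves[rng.integers(4)]
    i2, j2 = (i + di) % L, (j + dj) % L
    if occ[i, j] == occ[i2, j2]:
        continue                                     # nothing to exchange
    e_old = site_energy(i, j) + site_energy(i2, j2)
    occ[i, j], occ[i2, j2] = occ[i2, j2], occ[i, j]  # trial particle hop
    e_new = site_energy(i, j) + site_energy(i2, j2)
    if rng.random() >= np.exp(-(e_new - e_old) / T):
        occ[i, j], occ[i2, j2] = occ[i2, j2], occ[i, j]   # reject: undo the hop

s_q = np.abs(np.fft.fft2(occ - occ.mean())) ** 2 / occ.size   # diffuse structure factor S(q)
print("S(q) at the smallest non-zero q:", round(float(s_q[0, 1]), 3))
```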

  5. Assessing and optimizing the economic and environmental impacts of cogeneration/district energy systems using an energy equilibrium model

    Wu, Y.J.; Rosen, M.A.

    1999-01-01

    Energy equilibrium models can be valuable aids in energy planning and decision-making. In such models, supply is represented by a cost-minimizing linear submodel and demand by a smooth vector-valued function of prices. In this paper, we use the energy equilibrium model to study conventional systems and cogeneration-based district energy (DE) systems for providing heating, cooling and electrical services, not only to assess the potential economic and environmental benefits of cogeneration-based DE systems, but also to develop optimal configurations while accounting for such factors as economics and environmental impact. The energy equilibrium model is formulated and solved with software called WATEMS, which uses sequential non-linear programming to calculate the intertemporal equilibrium of energy supplies and demands. The methods of analysis and evaluation for the economic and environmental impacts are carefully explored. An illustrative energy equilibrium model of conventional and cogeneration-based DE systems is developed within WATEMS to compare quantitatively the economic and environmental impacts of those systems for various scenarios. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
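
    A stripped-down sketch of the equilibrium idea, a cost-minimising supply side intersected with a price-responsive demand curve; technologies, costs and the demand elasticity are hypothetical, and WATEMS itself solves the intertemporal problem with sequential non-linear programming rather than this single-period market clearing.

```python
# Toy energy-equilibrium sketch: capacity is offered at or below a given marginal cost
# (cost-minimising supply) and intersected with a constant-elasticity demand curve.
# Technologies, costs and the elasticity are hypothetical.
from scipy.optimize import brentq

techs = [  # (name, marginal cost $/MWh, capacity MWh)
    ("cogen-DE", 25.0, 400.0),
    ("gas",      55.0, 500.0),
    ("peaker",  120.0, 300.0),
]

def supply(price):
    """Capacity offered by all technologies whose marginal cost is covered by `price`."""
    return sum(cap for _, cost, cap in techs if cost <= price)

def demand(price, d0=900.0, p0=60.0, elasticity=-0.3):
    """Constant-elasticity demand curve."""
    return d0 * (price / p0) ** elasticity

# Market clearing: find the price at which demand equals the supply it can pay for.
p_eq = brentq(lambda p: demand(p) - supply(p), 10.0, 500.0)
print("equilibrium price ~%.1f $/MWh, quantity ~%.0f MWh" % (p_eq, demand(p_eq)))
```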

  6. A multi-period superstructure optimisation model for the optimal planning of China's power sector considering carbon dioxide mitigation

    Zhang Dongjie; Ma Linwei; Liu Pei; Zhang Lili; Li Zheng

    2012-01-01

    The power sector is the largest CO2 emitter in China. Mitigating CO2 emissions from the power sector is a tough task that requires the implementation of targeted carbon mitigation policies. Carbon mitigation policies may take multiple forms, and it is still unclear which one is best for China. Applying a superstructure optimisation model for the optimal planning of China's power sector built previously by the authors, which was based on the real-life plant composition of China's power sector in 2009 and could incorporate all possible actions of the power sector, including plant construction, decommissioning, and the application of carbon capture and sequestration (CCS) to coal-fuelled plants, the implementation effects of three carbon mitigation policies were studied quantitatively. The conclusion is that the so-called "Surplus-Punishment and Deficit-Award" carbon tax policy is the best from the viewpoint of increasing the CO2 reduction effect while also reducing the accumulated total cost. Based on this conclusion, the corresponding relationships between CO2 reduction objectives (including the accumulated total emissions reduction by the objective year and the annual emissions reduction in the objective year) were presented in detail. This work provides both directional and quantitative suggestions for China to make carbon mitigation policies in the future. - Highlights: ► We study the best form of carbon mitigation policy for China's power sector. ► We gain a quantitative relationship between the CO2 reduction goal and the carbon tax policy. ► The "Surplus-Punishment and Deficit-Award" carbon tax policy is the best. ► Nuclear and renewable power and CCS can help greatly reduce CO2 emissions of the power sector. ► A longer objective period is preferred from the viewpoint of policy making.
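
    To illustrate how a carbon tax enters the objective of such planning models, the sketch below solves a single-period dispatch LP with hypothetical costs, emission factors and capacities; the paper's superstructure model is multi-period and far richer.

```python
# Toy single-period dispatch LP with a carbon tax in the objective (all numbers hypothetical).
from scipy.optimize import linprog

# Technologies: coal, coal+CCS, gas, nuclear/renewables
cost = [30.0, 55.0, 50.0, 65.0]          # $/MWh generation cost
emis = [0.95, 0.12, 0.40, 0.0]           # tCO2/MWh
cap = [500.0, 200.0, 400.0, 300.0]       # MWh available
demand = 1000.0
carbon_tax = 40.0                        # $/tCO2

# minimise sum_i (cost_i + tax*emis_i) * x_i  s.t.  sum_i x_i = demand, 0 <= x_i <= cap_i
c = [ci + carbon_tax * ei for ci, ei in zip(cost, emis)]
res = linprog(c, A_eq=[[1.0] * 4], b_eq=[demand], bounds=list(zip([0.0] * 4, cap)))
print("dispatch (MWh):", [round(x) for x in res.x])
print("total CO2 (t):", round(sum(e * x for e, x in zip(emis, res.x))))
```

    Raising the tax shifts the least-cost dispatch away from coal towards CCS, gas and carbon-free capacity, which is the mechanism the policy comparison in the paper exploits.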

  7. Computer program to solve two-dimensional shock-wave interference problems with an equilibrium chemically reacting air model

    Glass, Christopher E.

    1990-08-01

    The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-specie, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.

  8. Equilibrium dynamical correlations in the Toda chain and other integrable models

    Kundu, Aritra; Dhar, Abhishek

    2016-12-01

    We investigate the form of equilibrium spatiotemporal correlation functions of conserved quantities in the Toda lattice and in other integrable models. From numerical simulations we find that the correlations satisfy ballistic scaling with a remarkable collapse of data from different times. We examine special limiting choices of parameter values, for which the Toda lattice tends to either the harmonic chain or the equal mass hard-particle gas. In both these limiting cases, one can obtain the correlations exactly and we find excellent agreement with the direct Toda simulation results. We also discuss a transformation to "normal mode" variables, as commonly done in hydrodynamic theory of nonintegrable systems, and find that this is useful, to some extent, even for the integrable system. The striking differences between the Toda chain and a truncated version, expected to be nonintegrable, are pointed out.
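
    A minimal integrator for the Toda chain of the type underlying such simulations, using velocity Verlet with periodic boundaries; the parameters are arbitrary and the correlation analysis itself is omitted here.

```python
# Minimal Toda-chain integrator: velocity Verlet, unit masses, periodic boundaries.
import numpy as np

N, a, b, dt, steps = 64, 1.0, 1.0, 0.01, 5000
rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal(N)    # displacements from lattice sites (unit spacing)
v = 0.10 * rng.standard_normal(N)    # velocities

def stretches(u):
    return 1.0 + np.roll(u, -1) - u   # bond stretches r_i = 1 + u_{i+1} - u_i (periodic)

def forces(u):
    dVdr = -a * np.exp(-b * stretches(u)) + a   # V(r) = (a/b) e^{-b r} + a r
    return dVdr - np.roll(dVdr, 1)              # F_i = V'(r_i) - V'(r_{i-1})

def energy(u, v):
    r = stretches(u)
    return 0.5 * np.sum(v ** 2) + np.sum((a / b) * np.exp(-b * r) + a * r)

f = forces(u)
E0 = energy(u, v)
for _ in range(steps):               # velocity Verlet time stepping
    v += 0.5 * dt * f
    u += dt * v
    f = forces(u)
    v += 0.5 * dt * f
print("relative energy drift after %d steps: %.2e" % (steps, abs(energy(u, v) - E0) / abs(E0)))
```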

  9. A non-equilibrium thermodynamic model for tumor extracellular matrix with enzymatic degradation

    Xue, Shi-Lei; Li, Bo; Feng, Xi-Qiao; Gao, Huajian

    2017-07-01

    The extracellular matrix (ECM) of a solid tumor not only affords scaffolding to support tumor architecture and integrity but also plays an essential role in tumor growth, invasion, metastasis, and therapeutics. In this paper, a non-equilibrium thermodynamic theory is established to study the chemo-mechanical behaviors of tumor ECM, which is modeled as a poroelastic polyelectrolyte consisting of a collagen network and proteoglycans. By using the principle of maximum energy dissipation rate, we deduce a set of governing equations for drug transport and mechanosensitive enzymatic degradation in ECM. The results reveal that osmosis is primarily responsible for the compression resistance of ECM. It is suggested that a well-designed ECM degradation can effectively modify the tumor microenvironment for improved efficiency of cancer therapy. The theoretical predictions show a good agreement with relevant experimental observations. This study, aimed at deepening our understanding of tumor ECM, may be conducive to novel anticancer strategies.

  10. EVALUATION OF BIOMASS AND COAL CO-GASIFICATION OF BRAZILIAN FEEDSTOCK USING A CHEMICAL EQUILIBRIUM MODEL

    R. Rodrigues

    Full Text Available Abstract Coal and biomass are energy sources with great potential for use in Brazil. Coal-biomass co-gasification enables the combination of the positive characteristics of each fuel, besides leading to a cleaner use of coal. The present study evaluates the potential of co-gasification of binary coal-biomass blends using sources widely available in Brazil. This analysis employs computational simulations using a reliable thermodynamic equilibrium model. Favorable operational conditions at high temperatures are determined in order to obtain gaseous products suitable for energy cogeneration and chemical synthesis. This study shows that blends with biomass ratios of 5% and equivalence ratios ≤ 0.3 lead to high cold gas efficiencies. Suitable gaseous products for chemical synthesis were identified at biomass ratios ≤ 35% and moisture contents ≥ 40%. Formation of undesirable nitrogen and sulfur compounds was also analyzed.

  11. Measurement and Modelling of Phase Equilibrium of Oil - Water - Polar Chemicals

    Frost, Michael Grynnerup

    in the temperature range of 303-323 K at atmospheric pressure. In the second part of this work, the CPA EoS has been used for modeling hydrocarbon systems containing polar chemicals, such as water and the gas hydrate inhibitors MEG or methanol. All the experimental data measured in this work have been investigated using...... with the measurement of new experimental data, but through the development of new experimental equipment for the study of multi-phase equilibrium. In addition to measurement of well-defined systems, LLE have been measured for North Sea oils with MEG and water. The work can be split up into two parts: Experimental: VLE...... systems presented, confirming the quality of the equipment. The equipment is used for measurement of VLE for several systems of interest: methane + water, methane + methanol, methane + methanol + water and methane + MEG. Details dealing with the design, assembling and testing of new experimental equipment...

  12. A simplified model for equilibrium and transient swelling of thermo-responsive gels.

    Drozdov, A D; deClaville Christiansen, J

    2017-11-01

    A simplified model is developed for the elastic response of thermo-responsive gels subjected to swelling under an arbitrary deformation with finite strains. The constitutive equations involve five adjustable parameters that are determined by fitting observations in equilibrium water uptake tests and T-jump transient tests on thin gel disks. Two scenarios for water release under heating are revealed by means of numerical simulation. When the final temperature in a T-jump test is below the volume-phase transition temperature, deswelling is characterized by smooth distribution of water molecules and small tensile stresses. When the final temperature exceeds the critical temperature, a gel disk is split into three regions (central part with a high concentration of water molecules and two domains near the boundaries with low water content) separated by sharp interfaces, whose propagation is accompanied by development of large (comparable with the elastic modulus) tensile stresses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Resilience of traffic networks: From perturbation to recovery via a dynamic restricted equilibrium model

    Nogal, Maria; O'Connor, Alan; Caulfield, Brian; Martinez-Pastor, Beatriz

    2016-01-01

    When a disruptive event takes place in a traffic network some important questions arise, such as how stressed the traffic network is, whether the system is able to respond to this stressful situation, or how long the system needs to recover a new equilibrium position after suffering this perturbation. Quantifying these aspects allows the comparison of different systems, to scale the degree of damage, to identify traffic network weaknesses, and to analyse the effect of user knowledge about the traffic network state. The indicator that accounts for performance and recovery pattern under disruptive events is known as resilience. This paper presents a methodology to assess the resilience of a traffic network when a given perturbation occurs, from the beginning of the perturbation to the total system recovery. To consider the dynamic nature of the problem, a new dynamic equilibrium-restricted assignment model is presented to simulate the network performance evolution, which takes into consideration important aspects, such as the cost increment due to the perturbation, the system impedance to alter its previous state and the user stress level. Finally, this methodology is used to evaluate the resilience indices of a real network. - Highlights: • Method to assess the resilience of a traffic network suffering progressive impacts. • It simulates the dynamic response during the perturbation and system recovery. • The resilience index is based on the travel costs and the stress level of users. • It considers the capacity of adaptation of the system to the new situations. • The model evaluates redundancy, adaptability, ability to recover, etc.

  14. Use of a three dimensional network model to predict equilibrium desaturation properties of coal filter cakes

    Qamar, I.; Bayles, G.A.; Tierney, J.W.; Chiang, S.-H.; Klinzing, G.E.

    1987-01-01

    A three dimensional bond-flow correlated network model has been successfully used to calculate equilibrium desaturation curves for coal filter cakes. A simple cubic lattice with the pore sizes correlated in the direction of macroscopic flow is used as the network. A new method of pore volume assignment is presented in which the pore volume occupied by the large pores (which give rise to capillary pressures less than a calculated critical value) is assigned to the nodes and the rest is distributed to the bonds according to an experimentally determined micrographic pore size distribution. Equilibrium desaturation curves for -32 mesh, -200 mesh and -100 + 200 mesh coal cakes (Pittsburgh Seam coal), formed with distilled water have been calculated. A bond-flow correlation factor, F_c, is introduced to account for channeling of the displacing fluid through high volume, low resistance flow paths - a phenomenon which is displayed by many real systems. It is determined that a single value of 0.6 for F_c is required for -32 mesh and -200 mesh coals. However, for -100 + 200 mesh coal, where all small as well as large particles have been removed, a value of 1.0 is required. The results of six -32 mesh cakes formed with surfactants show that the effect of surfactants can be accounted for by modifying one of the model parameters, the entry diameter correction. A correlation is presented to estimate the modified correction using experimentally determined surface tension and contact angle values. Further, the predicted final saturations agree with the experimental values within an average absolute error of 5%. 16 refs., 11 figs., 2 tabs.

  15. An Analytic Approach to Modeling Land-Atmosphere Interaction: 1. Construct and Equilibrium Behavior

    Brubaker, Kaye L.; Entekhabi, Dara

    1995-03-01

    A four-variable land-atmosphere model is developed to investigate the coupled exchanges of water and energy between the land surface and atmosphere and the role of these exchanges in the statistical behavior of continental climates. The land-atmosphere system is substantially simplified and formulated as a set of ordinary differential equations that, with the addition of random noise, are suitable for analysis in the form of the multivariate Itô equation. The model treats the soil layer and the near-surface atmosphere as reservoirs with storage capacities for heat and water. The transfers between these reservoirs are regulated by four states: soil saturation, soil temperature, air specific humidity, and air potential temperature. The atmospheric reservoir is treated as a turbulently mixed boundary layer of fixed depth. Heat and moisture advection, precipitation, and layer-top air entrainment are parameterized. The system is forced externally by solar radiation and the lateral advection of air and water mass. The remaining energy and water mass exchanges are expressed in terms of the state variables. The model development and equilibrium solutions are presented. Although comparisons between observed data and steady state model results are inexact, the model appears to do a reasonable job of partitioning net radiation into sensible and latent heat flux in appropriate proportions for bare-soil midlatitude summer conditions. Subsequent work will introduce randomness into the forcing terms to investigate the effect of water-energy coupling and land-atmosphere interaction on variability and persistence in the climatic system.

  16. Equilibrium-eulerian les model for turbulent poly-dispersed particle-laden flow

    Icardi, Matteo

    2013-04-01

    An efficient Eulerian method for poly-dispersed particles in turbulent flows is implemented, verified and validated for a channel flow. The approach couples a mixture model with a quadrature-based moment method for the particle size distribution in a LES framework, augmented by an approximate deconvolution method to reconstruct the unfiltered velocity. The particle velocity conditioned on particle size is calculated with an equilibrium model, valid for low Stokes numbers. A population balance equation is solved with the direct quadrature method of moments, which efficiently represents the continuous particle size distribution. In this first study particulate processes are not considered and the capability of the model to properly describe particle transport is investigated for a turbulent channel flow. First, single-phase LES are validated through comparison with DNS. Then predictions for the two-phase system, with particles characterised by Stokes numbers ranging from 0.2 to 5, are compared with Lagrangian DNS in terms of particle velocity and accumulation at the walls. Since this phenomenon (turbophoresis) is driven by turbulent fluctuations and depends strongly on the particle Stokes number, the approximation of the particle size distribution, the choice of the sub-grid scale model and the use of an approximate deconvolution method are important to obtain good results. Our method can be considered as a fast and efficient alternative to classical Lagrangian methods or Eulerian multi-fluid models in which poly-dispersity is usually neglected.

  17. Equilibrium-eulerian les model for turbulent poly-dispersed particle-laden flow

    Icardi, Matteo; Marchisio, Daniele Luca; Chidambaram, Narayanan; Fox, Rodney O.

    2013-01-01

    An efficient Eulerian method for poly-dispersed particles in turbulent flows is implemented, verified and validated for a channel flow. The approach couples a mixture model with a quadrature-based moment method for the particle size distribution in a LES framework, augmented by an approximate deconvolution method to reconstruct the unfiltered velocity. The particle velocity conditioned on particle size is calculated with an equilibrium model, valid for low Stokes numbers. A population balance equation is solved with the direct quadrature method of moments, which efficiently represents the continuous particle size distribution. In this first study particulate processes are not considered and the capability of the model to properly describe particle transport is investigated for a turbulent channel flow. First, single-phase LES are validated through comparison with DNS. Then predictions for the two-phase system, with particles characterised by Stokes numbers ranging from 0.2 to 5, are compared with Lagrangian DNS in terms of particle velocity and accumulation at the walls. Since this phenomenon (turbophoresis) is driven by turbulent fluctuations and depends strongly on the particle Stokes number, the approximation of the particle size distribution, the choice of the sub-grid scale model and the use of an approximate deconvolution method are important to obtain good results. Our method can be considered as a fast and efficient alternative to classical Lagrangian methods or Eulerian multi-fluid models in which poly-dispersity is usually neglected.

  18. Self-employment in an equilibrium model of the labor market

    Jake Bradley

    2016-06-01

    Full Text Available Abstract Self-employed workers account for between 8 and 30% of participants in the labor markets of OECD countries (Blanchflower, Self-employment: more may not be better, 2004). This paper develops and estimates a general equilibrium model of the labor market that accounts for this sizable proportion. The model incorporates self-employed workers, some of whom hire paid employees in the market. Employment rates and earnings distributions are determined endogenously and are estimated to match their empirical counterparts. The model is estimated using the British Household Panel Survey (BHPS). The model is able to estimate nonpecuniary amenities associated with employment in different labor market states, accounting for both different employment dynamics within state and the misreporting of earnings by self-employed workers. Structural parameter estimates are then used to assess the impact of an increase in the generosity of unemployment benefits on the aggregate employment rate. Findings suggest that modeling the self-employed, some of whom hire paid employees, implies that small increases in unemployment benefits lead to an expansion in aggregate employment. JEL Classification J21, J24, J28, J64

  19. Computable General Equilibrium Model Fiscal Year 2013 Capability Development Report - April 2014

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Rivera, Michael K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). National Infrastructure Simulation and Analysis Center (NISAC)

    2014-04-01

    This report documents progress made on the continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions it computes. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.

  20. Model uncertainties of local-thermodynamic-equilibrium K-shell spectroscopy

    Nagayama, T.; Bailey, J. E.; Mancini, R. C.; Iglesias, C. A.; Hansen, S. B.; Blancard, C.; Chung, H. K.; Colgan, J.; Cosse, Ph.; Faussurier, G.; Florido, R.; Fontes, C. J.; Gilleron, F.; Golovkin, I. E.; Kilcrease, D. P.; Loisel, G.; MacFarlane, J. J.; Pain, J.-C.; Rochau, G. A.; Sherrill, M. E.; Lee, R. W.

    2016-09-01

    Local-thermodynamic-equilibrium (LTE) K-shell spectroscopy is a common tool to diagnose electron density, ne, and electron temperature, Te, of high-energy-density (HED) plasmas. Knowing the accuracy of such diagnostics is important to provide quantitative conclusions of many HED-plasma research efforts. For example, Fe opacities were recently measured at multiple conditions at the Sandia National Laboratories Z machine (Bailey et al., 2015), showing significant disagreement with modeled opacities. Since the plasma conditions were measured using K-shell spectroscopy of tracer Mg (Nagayama et al., 2014), one concern is the accuracy of the inferred Fe conditions. In this article, we investigate the K-shell spectroscopy model uncertainties by analyzing the Mg spectra computed with 11 different models at the same conditions. We find that the inferred conditions differ by ±20-30% in ne and ±2-4% in Te depending on the choice of spectral model. Also, we find that half of the Te uncertainty comes from ne uncertainty. To refine the accuracy of the K-shell spectroscopy, it is important to scrutinize and experimentally validate line-shape theory. We investigate the impact of the inferred ne and Te model uncertainty on the Fe opacity measurements. Its impact is small and does not explain the reported discrepancies.

  1. Thermochemical Equilibrium Model of Synthetic Natural Gas Production from Coal Gasification Using Aspen Plus

    Rolando Barrera

    2014-01-01

    Full Text Available The production of synthetic or substitute natural gas (SNG) from coal is a process of interest in Colombia, where the reserves-to-production ratio (R/P) for natural gas is expected to be between 7 and 10 years, while the R/P for coal is forecasted to be around 90 years. In this work, the process to produce SNG by means of coal entrained-flow gasifiers is modeled under thermochemical equilibrium with the Gibbs free energy approach. The model was developed using a complete and comprehensive Aspen Plus model. Two typical feeding technologies used in entrained-flow gasifiers, dry coal and coal slurry, are modeled and simulated. Emphasis is put on interactions between the fuel feeding technology and selected energy output parameters of the coal-to-SNG process, that is, energy efficiencies, power, and SNG quality. It was found that coal rank does not significantly affect energy indicators such as cold gas, process, and global efficiencies. However, the feeding technology clearly has an effect on the process due to the gasifying agent. Simulation results are compared against available technical data with good accuracy. Thus, the proposed model is considered a versatile and useful computational tool to study and optimize the coal-to-SNG process.

  2. A dynamic programming model for optimising feeding and slaughter decisions regarding fattening pigs

    J. K. NIEMI

    2008-12-01

    Full Text Available Costs of purchasing new piglets and of feeding them until slaughter are the main variable expenditures in pig fattening. They both depend on slaughter intensity, the nature of feeding patterns and the technological constraints of pig fattening, such as genotype. Therefore, it is of interest to examine the effect of production technology and changes in input and output prices on feeding and slaughter decisions. This study examines the problem by using a dynamic programming model that links genetic characteristics of a pig to feeding decisions and the timing of slaughter and takes into account how these jointly affect the quality-adjusted value of a carcass. The state of nature and the genotype of a pig are known in the analysis. The results suggest that producer can benefit from improvements in the pig’s genotype. Animals of improved genotype can reach optimal slaughter maturity quicker and produce leaner meat than animals of poor genotype. In order to fully utilise the benefits of animal breeding, the producer must adjust feeding and slaughter patterns on the basis of genotype. The results also suggest that the producer can benefit from flexible feeding technology. Typically, such a technology provides incentives to feed piglets with protein-rich feed. When the pig approaches slaughter maturity, the share of protein-rich feed in the diet gradually decreases and the amount of energy-rich feed increases. Generally, the optimal slaughter weight is within the weight range that pays the highest price per kilogram of pig meat. The optimal feeding pattern and the optimal timing of slaughter depend on price ratios. Particularly, an increase in the price of pig meat provides incentives to increase the growth rates up to the pig’s biological maximum by increasing the amount of energy in the feed. Price changes and changes in slaughter premium can also have large income effects.
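
    A heavily simplified sketch of such a dynamic program, where each week the producer either slaughters or picks a feed intensity; the growth response, feed costs and carcass pricing below are invented placeholders, not the estimates used in the paper.

```python
# Simplified finite-horizon dynamic program for the feed/slaughter decision
# (weight grid in kg; all prices, gains and the premium band are hypothetical).
WEEKS, W_MAX = 16, 130                       # planning horizon (weeks), weight grid cap (kg)
FEEDS = [(5, 4.5), (7, 7.0)]                 # (weekly gain kg, weekly feed cost EUR)

def carcass_value(w):
    price = 1.90 if 100 <= w <= 120 else 1.60    # EUR/kg carcass, premium weight band
    return price * 0.76 * w                       # 76% dressing percentage

V = [[0.0] * (W_MAX + 1) for _ in range(WEEKS + 1)]   # V[t][w]: best value from week t at weight w
policy = [[None] * (W_MAX + 1) for _ in range(WEEKS)]
for w in range(W_MAX + 1):
    V[WEEKS][w] = carcass_value(w)                    # forced slaughter at the horizon

for t in range(WEEKS - 1, -1, -1):                    # backward value iteration
    for w in range(20, W_MAX + 1):
        options = {"slaughter": carcass_value(w)}
        for gain, cost in FEEDS:
            options[f"feed {gain} kg/week"] = V[t + 1][min(W_MAX, w + gain)] - cost
        policy[t][w] = max(options, key=options.get)
        V[t][w] = options[policy[t][w]]

print("optimal week-0 action at 25 kg:", policy[0][25], "| expected value (EUR):", round(V[0][25], 1))
```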

  3. Energy–exergy analysis and optimisation of a model sugar factory in Turkey

    Taner, Tolga; Sivrioglu, Mecit

    2015-01-01

    This study is related to the energy and exergy analysis of a model sugar factory in Turkey. In this study, energy efficiency issues in food industries are investigated within a general context to provide energy savings by reducing energy and exergy losses in the sugar production process. The aim of this study is to determine the best energy and exergy efficiency with the mass and energy balances according to design parameters for a sugar factory. Energy savings that can be applied in food industries are examined. Appropriate scenarios are prepared, and optimization results are compared. As a result of thermodynamic calculations made according to the 1st and 2nd Laws of Thermodynamics, the energy and exergy efficiencies of a factory were calculated. The factory's total energy efficiency and exergy efficiency were found to be 72.2% and 37.4%, respectively, and according to these results, the energy quality was found to be 0.64. In conclusion, the current turbine power process energy and exergy efficiencies were 46.4% and 27.7%, respectively, and the optimized turbine power process energy and exergy efficiencies were 48.7% and 31.7%, respectively. This study presents an approach to the problem of exergy optimization of the turbine power plant. An overall assessment of the energy and exergy efficiency calculations is performed, focusing on what these efficiencies should be. - Highlights: • The energy and exergy efficiency of a sugar plant depends more on steam than process. • Energy and exergy efficiencies of a factory increase when the turbine power increases, as in a sugar factory. • Statistical analysis demonstrates the precision of data. • Thermoeconomic analysis of the energy and exergy efficiency of the Çumra Sugar Integrated Plant is performed.

  4. A model on CME/Flare initiation: Loss of Equilibrium caused by mass loss of quiescent prominences

    Miley, George; Chon Nam, Sok; Kim, Mun Song; Kim, Jik Su

    2015-08-01

    A coronal mass ejection (CME) model should explain both the storage of enough energy for the giant bulk of plasma to escape into interplanetary space against the Sun's gravitation and its explosive eruption. Advocates of the 'mass loading' model (e.g. Low, B. 1996, SP, 167, 217) suggested a simple mechanism of CME initiation, the loss of mass from a prominence anchoring a magnetic flux rope, but they did not associate the mass loss with a loss of equilibrium. The catastrophic loss-of-equilibrium model is considered a prospective CME/flare model to explain the sudden eruption of magnetic flux systems. Isenberg, P. A., et al. (1993, ApJ, 417, 368) developed an ideal magnetohydrodynamic theory of the magnetic flux rope to show the occurrence of catastrophic loss of equilibrium as the magnetic flux transported into the corona increases. We begin by extending their study to include gravity acting on the prominence material and obtain equilibrium curves for given mass parameters, which measure the strength of the gravitational force compared with the characteristic magnetic force. Furthermore, we study the quasi-static evolution of the system, including a massive prominence flux rope and the current sheet below it, to obtain equilibrium curves of the prominence height as the mass parameter decreases in a suitably fixed magnetic environment. The curves show equilibrium-loss behaviour, implying that mass loss results in loss of equilibrium. The released fractions of magnetic energy are greater than in the corresponding zero-mass case. This eruption mechanism is expected to apply to the eruptions of quiescent prominences, which are located in relatively weak magnetic environments with a scale length of about 10^5 km and a photospheric magnetic field of about 10 G.

  5. Investigating the Trade-Off Between Power Generation and Environmental Impact of Tidal-Turbine Arrays Using Array Layout Optimisation and Habitat Sustainability Modelling.

    du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.

    2016-12-01

    The installation of tidal turbines into the ocean will inevitably affect the environment around them. However, due to the relative infancy of this sector the extent and severity of such effects is unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics. This in turn could affect, for example, sediment transportation or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent to influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt is used to estimate the likelihood of finding a species in a given location based upon environmental input data and presence data of the species. Environmental features which are known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, through adjusting the position and number of turbines within them. It uses a 2D shallow water model with turbine arrays represented as adjustable friction fields. It has the capability to also optimise for user-created functionals that can be expressed mathematically. This work uses two functionals: power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species. In each scenario

  6. Optimising cell aggregate expansion in a perfused hollow fibre bioreactor via mathematical modelling.

    Chapman, Lloyd A C

    2014-08-26

    The need for efficient and controlled expansion of cell populations is paramount in tissue engineering. Hollow fibre bioreactors (HFBs) have the potential to meet this need, but only with improved understanding of how operating conditions and cell seeding strategy affect cell proliferation in the bioreactor. This study is designed to assess the effects of two key operating parameters (the flow rate of culture medium into the fibre lumen and the fluid pressure imposed at the lumen outlet), together with the cell seeding distribution, on cell population growth in a single-fibre HFB. This is achieved using mathematical modelling and numerical methods to simulate the growth of cell aggregates along the outer surface of the fibre in response to the local oxygen concentration and fluid shear stress. The oxygen delivery to the cell aggregates and the fluid shear stress increase as the flow rate and pressure imposed at the lumen outlet are increased. Although the increased oxygen delivery promotes growth, the higher fluid shear stress can lead to cell death. For a given cell type and initial aggregate distribution, the operating parameters that give the most rapid overall growth can be identified from simulations. For example, when aggregates of rat cardiomyocytes that can tolerate shear stresses of up to 0.05 Pa are evenly distributed along the fibre, the inlet flow rate and outlet pressure that maximise the overall growth rate are predicted to be in the ranges 2.75 × 10^-5 m^2 s^-1 to 3 × 10^-5 m^2 s^-1 (equivalent to 2.07 ml min^-1 to 2.26 ml min^-1) and 1.077 × 10^5 Pa to 1.083 × 10^5 Pa (or 15.6 psi to 15.7 psi) respectively. The combined effects of the seeding distribution and flow on the growth are also investigated and the optimal conditions for growth found to depend on the shear tolerance and oxygen demands of the cells.

  7. Computable general equilibrium modelling in the context of trade and environmental policy

    Koesler, Simon Tobias

    2014-10-14

    This thesis is dedicated to the evaluation of environmental policies in the context of climate change. Its objectives are twofold. Its first part is devoted to the development of potent instruments for quantitative impact analysis of environmental policy. In this context, the main contributions include the development of a new computable general equilibrium (CGE) model which makes use of the new comprehensive and coherent World Input-Output Dataset (WIOD) and which features a detailed representation of bilateral and bisectoral trade flows. Moreover it features an investigation of input substitutability to provide modellers with adequate estimates for key elasticities as well as a discussion and amelioration of the standard base year calibration procedure of most CGE models. Building on these tools, the second part applies the improved modelling framework and studies the economic implications of environmental policy. This includes an analysis of so called rebound effects, which are triggered by energy efficiency improvements and reduce their net benefit, an investigation of how firms restructure their production processes in the presence of carbon pricing mechanisms, and an analysis of a regional maritime emission trading scheme as one of the possible options to reduce emissions of international shipping in the EU context.

  8. Surface structures of equilibrium restricted curvature model on two fractal substrates

    Song Li-Jian; Tang Gang; Zhang Yong-Wei; Han Kui; Xun Zhi-Peng; Xia Hui; Hao Da-Peng; Li Yan

    2014-01-01

    With the aim to probe the effects of the microscopic details of fractal substrates on the scaling of discrete growth models, the surface structures of the equilibrium restricted curvature (ERC) model on Sierpinski arrowhead and crab substrates are analyzed by means of Monte Carlo simulations. These two fractal substrates have the same fractal dimension d_f, but possess different dynamic exponents of random walk z_rw. The results show that the surface structures of the ERC model on fractal substrates are related not only to the fractal dimension d_f, but also to the microscopic structure of the substrates, expressed by the dynamic exponent of random walk z_rw. The ERC model growing on the two substrates follows the well-known Family-Vicsek scaling law and satisfies the scaling relations 2α + d_f ≍ z ≍ 2z_rw. In addition, the values of the scaling exponents are in good agreement with the analytical prediction of the fractional Mullins-Herring equation. (general)

  9. Chemical Equilibrium Modeling of Hanford Waste Tank Processing: Applications of Fundamental Science

    Felmy, Andrew R.; Wang, Zheming; Dixon, David A.; Hess, Nancy J.

    2004-01-01

    The development of computational models based upon fundamental science is one means of quantitatively transferring the results of scientific investigations to practical application by engineers in laboratory and field situations. This manuscript describes one example of such efforts, specifically the development and application of chemical equilibrium models to different waste management issues at the U.S. Department of Energy (DOE) Hanford Site. The development of the chemical models is described with an emphasis on the fundamental science investigations that have been undertaken in model development, followed by examples of different waste management applications. The waste management issues include the leaching of waste slurries to selectively remove non-hazardous components and the separation of Sr-90 and transuranics from the waste supernatants. The fundamental science contributions include: molecular simulations of the energetics of different molecular clusters to assist in determining the species present in solution, advanced synchrotron research to determine the chemical form of precipitates, and laser based spectroscopic studies of solutions and solids.

  10. Modeling of existing cooling towers in ASPEN PLUS using an equilibrium stage method

    Queiroz, João A.; Rodrigues, Vitor M.S.; Matos, Henrique A.; Martins, F.G.

    2012-01-01

    Highlights: ► Simulation of cooling tower performance under different operating conditions. ► Cooling tower performance is simulated using ASPEN PLUS. ► Levenberg–Marquardt method used to adjust model parameters. ► Air and water outlet temperatures are in good accordance with experimental data. - Abstract: Simulation of cooling tower performance considering operating conditions away from design is typically based on the geometrical parameters provided by the cooling tower vendor, which are often unavailable or outdated. In this paper a different approach for cooling tower modeling based on equilibrium stages and Murphree efficiencies to describe heat and mass transfer is presented. This approach is validated with published data and with data collected from an industrial application. Cooling tower performance is simulated using ASPEN PLUS. Murphree stage efficiency values for the process simulator model were optimized by minimizing the squared difference between the experimental and calculated data using the Levenberg–Marquardt method. The minimization algorithm was implemented in Microsoft Excel with Visual Basic for Applications, integrated with the process simulator (ASPEN PLUS) using Aspen Simulation Workbook. The simulated cooling tower air and water outlet temperatures are in good accordance with experimental data when applying only the outlet water temperature to calibrate the model. The methodology is accurate for simulating cooling towers at different operational conditions.
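
    A tiny analogue of the calibration step described above, fitting a single Murphree-type stage efficiency of a toy cooling-tower relation to made-up outlet temperatures by least squares; the paper itself performs this with ASPEN PLUS, Excel/VBA and the Levenberg-Marquardt method.

```python
# Fit a single stage-efficiency parameter E of a toy cooling-tower relation to
# (made-up) measured water outlet temperatures by least squares.
import numpy as np
from scipy.optimize import least_squares

# (water inlet T, ambient wet-bulb T, measured water outlet T) in deg C -- illustrative data
data = np.array([
    [42.0, 22.0, 31.5],
    [40.0, 21.0, 30.8],
    [38.0, 20.0, 29.6],
    [44.0, 24.0, 33.9],
])

def model(E, t_in, t_wb):
    """One equilibrium stage: the water approaches the wet-bulb temperature with efficiency E."""
    return t_in - E * (t_in - t_wb)

def residuals(params):
    return model(params[0], data[:, 0], data[:, 1]) - data[:, 2]

fit = least_squares(residuals, x0=[0.5], bounds=(0.0, 1.0))
print("fitted stage efficiency E = %.3f, RMS error = %.2f degC"
      % (fit.x[0], np.sqrt(np.mean(fit.fun ** 2))))
```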

  11. Lattice ellipsoidal statistical BGK model for thermal non-equilibrium flows

    Meng, Jianping; Zhang, Yonghao; Hadjiconstantinou, Nicolas G.; Radtke, Gregg A.; Shan, Xiaowen

    2013-03-01

    A thermal lattice Boltzmann model is constructed on the basis of the ellipsoidal statistical Bhatnagar-Gross-Krook (ES-BGK) collision operator via the Hermite moment representation. The resulting lattice ES-BGK model uses a single distribution function and features an adjustable Prandtl number. Numerical simulations show that using a moderate discrete velocity set, this model can accurately recover steady and transient solutions of the ES-BGK equation in the slip-flow and early transition regimes in the small Mach number limit that is typical of microscale problems of practical interest. In the transition regime in particular, comparisons with numerical solutions of the ES-BGK model, direct Monte Carlo and low-variance deviational Monte Carlo simulations show good accuracy for values of the Knudsen number up to approximately 0.5. On the other hand, highly non-equilibrium phenomena characterized by high Mach numbers, such as viscous heating and force-driven Poiseuille flow for large values of the driving force, are more difficult to capture quantitatively in the transition regime using discretizations chosen with computational efficiency in mind such as the one used here, although improved accuracy is observed as the number of discrete velocities is increased.

  12. Computable general equilibrium modelling in the context of trade and environmental policy

    Koesler, Simon Tobias

    2014-01-01

    This thesis is dedicated to the evaluation of environmental policies in the context of climate change. Its objectives are twofold. Its first part is devoted to the development of potent instruments for quantitative impact analysis of environmental policy. In this context, the main contributions include the development of a new computable general equilibrium (CGE) model which makes use of the new comprehensive and coherent World Input-Output Dataset (WIOD) and which features a detailed representation of bilateral and bisectoral trade flows. Moreover it features an investigation of input substitutability to provide modellers with adequate estimates for key elasticities as well as a discussion and amelioration of the standard base year calibration procedure of most CGE models. Building on these tools, the second part applies the improved modelling framework and studies the economic implications of environmental policy. This includes an analysis of so called rebound effects, which are triggered by energy efficiency improvements and reduce their net benefit, an investigation of how firms restructure their production processes in the presence of carbon pricing mechanisms, and an analysis of a regional maritime emission trading scheme as one of the possible options to reduce emissions of international shipping in the EU context.

  13. The nature of the continuous non-equilibrium phase transition of Axelrod's model

    Peres, Lucas R.; Fontanari, José F.

    2015-09-01

    Axelrod's model on the square lattice with nearest-neighbor interactions exhibits culturally homogeneous as well as culturally fragmented absorbing configurations. In the case in which the agents are characterized by F = 2 cultural features and each feature assumes k states drawn from a Poisson distribution of parameter q, these regimes are separated by a continuous transition at q_c = 3.10 ± 0.02. Using Monte Carlo simulations and finite-size scaling we show that the mean density of cultural domains μ is an order parameter of the model that vanishes as μ ∼ (q − q_c)^β with β = 0.67 ± 0.01 at the critical point. In addition, for the correlation length critical exponent we find ν = 1.63 ± 0.04 and for Fisher's exponent, τ = 1.76 ± 0.01. This set of critical exponents places the continuous phase transition of Axelrod's model apart from the known universality classes of non-equilibrium lattice models.
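
    A compact Monte Carlo of Axelrod's model on a small periodic lattice with F = 2 features; for simplicity each feature here takes q equally likely states rather than the Poisson-distributed number of states studied in the paper, and no finite-size scaling is attempted.

```python
# Compact Monte Carlo of Axelrod's model on a periodic square lattice, F = 2 features,
# q equally likely states per feature (a simplification of the variant studied in the paper).
import numpy as np

L, F, q, steps = 20, 2, 3, 200_000
rng = np.random.default_rng(1)
culture = rng.integers(q, size=(L, L, F))

for _ in range(steps):
    i, j = rng.integers(L, size=2)
    di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    k, m = (i + di) % L, (j + dj) % L
    same = culture[i, j] == culture[k, m]
    overlap = same.mean()
    if 0 < overlap < 1 and rng.random() < overlap:   # interact with probability = overlap
        f = rng.choice(np.flatnonzero(~same))        # copy one differing feature
        culture[i, j, f] = culture[k, m, f]

# Crude fragmentation proxy: number of distinct cultures remaining per site.
codes = culture[:, :, 0] * q + culture[:, :, 1]
print("distinct cultures remaining:", len(np.unique(codes)), "of", q ** F,
      "| per-site proxy:", round(len(np.unique(codes)) / (L * L), 4))
```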

  14. Internalisation of external costs in the Polish power generation sector: A partial equilibrium model

    Kudelko, Mariusz

    2006-01-01

    This paper presents a methodical framework, which is the basis for the economic analysis of the mid-term planning of development of the Polish energy system. The description of the partial equilibrium model and its results are demonstrated for different scenarios applied. The model predicts the generation, investment and pricing of mid-term decisions that refer to the Polish electricity and heat markets. The current structure of the Polish energy sector is characterised by interactions between the supply and demand sides of the energy sector. The supply side regards possibilities to deliver fuels from domestic and import sources and their conversion through transformation processes. Public power plants, public CHP plants, industry CHP plants and municipal heat plants represent the main producers of energy in Poland. Demand is characterised by the major energy consumers, i.e. industry and construction, transport, agriculture, trade and services, individual consumers and export. The relationships between the domestic electricity and heat markets are modelled taking into account external costs estimates. The volume and structure of energy production, electricity and heat prices, emissions, external costs and social welfare of different scenarios are presented. Results of the model demonstrate that the internalisation of external costs through the increase in energy prices implies significant improvement in social welfare

  15. Evaluation of trace metals bioavailability in Japanese river waters using DGT and a chemical equilibrium model.

    Han, Shuping; Naito, Wataru; Hanai, Yoshimichi; Masunaga, Shigeki

    2013-09-15

    To develop efficient and effective methods of assessing and managing the risk posed by metals to aquatic life, it is important to determine the effects of water chemistry on the bioavailability of metals in surface water. In this study, we employed the diffusive gradients in thin-films (DGT) to determine the bioavailability of metals (Ni, Cu, Zn, and Pb) in Japanese water systems. The DGT results were compared with a chemical equilibrium model (WHAM 7.0) calculation to examine its robustness and utility to predict dynamic metal speciation. The DGT measurements showed that biologically available fractions of metals in the rivers impacted by mine drainage and metal industries were relatively high compared with those in urban rivers. Comparison between the DGT results and the model calculation indicated good agreement for Zn. The model calculation concentrations for Ni and Cu were higher than the DGT concentrations at most sites. As for Pb, the model calculation depended on whether the precipitated iron(III) hydroxide or precipitated aluminum(III) hydroxide was assumed to have an active surface. Our results suggest that the use of WHAM 7.0 combined with the DGT method can predict bioavailable concentrations of most metals (except for Pb) with reasonable accuracy. Copyright © 2013. Published by Elsevier Ltd.

  16. A numerical model of non-equilibrium thermal plasmas. I. Transport properties

    Zhang, Xiao-Ning; Li, He-Ping; Murphy, Anthony B.; Xia, Wei-Dong

    2013-03-01

    A self-consistent and complete numerical model for investigating the fundamental processes in a non-equilibrium thermal plasma system consists of the governing equations and the corresponding physical properties of the plasmas. In this paper, a new kinetic theory of the transport properties of two-temperature (2-T) plasmas, based on the solution of the Boltzmann equation using a modified Chapman-Enskog method, is presented. This work is motivated by the large discrepancies between the theories for the calculation of the transport properties of 2-T plasmas proposed by different authors in previous publications. In the present paper, the coupling between electrons and heavy species is taken into account, but reasonable simplifications are adopted, based on the physical fact that m_e/m_h ≪ 1, where m_e and m_h are, respectively, the masses of electrons and heavy species. A new set of formulas for the transport coefficients of 2-T plasmas is obtained. The new theory has important physical and practical advantages over previous approaches. In particular, the diffusion coefficients are complete and satisfy the mass conservation law due to the consideration of the coupling between electrons and heavy species. Moreover, this essential requirement is satisfied without increasing the complexity of the transport coefficient formulas. Expressions for the 2-T combined diffusion coefficients are obtained. The expressions for the transport coefficients can be reduced to the corresponding well-established expressions for plasmas in local thermodynamic equilibrium for the case in which the electron and heavy-species temperatures are equal.

  17. A numerical model of non-equilibrium thermal plasmas. I. Transport properties

    Zhang XiaoNing; Xia WeiDong [Department of Thermal Science and Energy Engineering, University of Science and Technology of China, Hefei, Anhui Province 230026 (China); Li HePing [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Murphy, Anthony B. [CSIRO Materials Science and Engineering, PO Box 218, Lindfield NSW 2070 (Australia)

    2013-03-15

    A self-consistent and complete numerical model for investigating the fundamental processes in a non-equilibrium thermal plasma system consists of the governing equations and the corresponding physical properties of the plasmas. In this paper, a new kinetic theory of the transport properties of two-temperature (2-T) plasmas, based on the solution of the Boltzmann equation using a modified Chapman-Enskog method, is presented. This work is motivated by the large discrepancies between the theories for the calculation of the transport properties of 2-T plasmas proposed by different authors in previous publications. In the present paper, the coupling between electrons and heavy species is taken into account, but reasonable simplifications are adopted, based on the physical fact that me/mh ≪ 1, where me and mh are, respectively, the masses of electrons and heavy species. A new set of formulas for the transport coefficients of 2-T plasmas is obtained. The new theory has important physical and practical advantages over previous approaches. In particular, the diffusion coefficients are complete and satisfy the mass conservation law due to the consideration of the coupling between electrons and heavy species. Moreover, this essential requirement is satisfied without increasing the complexity of the transport coefficient formulas. Expressions for the 2-T combined diffusion coefficients are obtained. The expressions for the transport coefficients can be reduced to the corresponding well-established expressions for plasmas in local thermodynamic equilibrium for the case in which the electron and heavy-species temperatures are equal.

  18. A generalized quantitative antibody homeostasis model: maintenance of global antibody equilibrium by effector functions.

    Prechl, József

    2017-11-01

    The homeostasis of antibodies can be characterized as a balanced production, target-binding and receptor-mediated elimination regulated by an interaction network, which controls B-cell development and selection. Recently, we proposed a quantitative model to describe how the concentration and affinity of interacting partners generates a network. Here we argue that this physical, quantitative approach can be extended for the interpretation of effector functions of antibodies. We define global antibody equilibrium as the zone of molar equivalence of free antibody, free antigen and immune complex concentrations and of dissociation constant of apparent affinity: [Ab] = [Ag] = [AbAg] = K_D. This zone corresponds to the biologically relevant K_D range of reversible interactions. We show that thermodynamic and kinetic properties of antibody-antigen interactions correlate with immunological functions. The formation of stable, long-lived immune complexes corresponds to a decrease of entropy and is a prerequisite for the generation of higher-order complexes. As the energy of formation of complexes increases, we observe a gradual shift from silent clearance to inflammatory reactions. These rules can also be applied to complement activation-related immune effector processes, linking the physicochemical principles of innate and adaptive humoral responses. The receptors mediating effector functions show a wide range of affinities, allowing the continuous sampling of antibody-bound antigen over the complete range of concentrations. The generation of multivalent, multicomponent complexes triggers effector functions by crosslinking these receptors on effector cells with increasing enzymatic degradation potential. Thus, antibody homeostasis is a thermodynamic system with complex network properties, nested into the host organism by proper immunoregulatory and effector pathways. Maintenance of global antibody equilibrium is achieved by innate qualitative signals modulating a
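
    The equivalence zone [Ab] = [Ag] = [AbAg] = K_D quoted above follows from the 1:1 binding mass balances. The sketch below (illustrative concentrations, not data from the paper) computes the equilibrium complex concentration from the total concentrations and K_D via the standard quadratic root, and checks that totals of 2·K_D reproduce the stated equivalence.

```python
import math

def complex_conc(ab_total, ag_total, kd):
    """Equilibrium [AbAg] for 1:1 binding Ab + Ag <=> AbAg with dissociation constant kd.
    Physical root of C^2 - (ab_total + ag_total + kd)*C + ab_total*ag_total = 0."""
    b = ab_total + ag_total + kd
    return 0.5 * (b - math.sqrt(b * b - 4.0 * ab_total * ag_total))

kd = 1e-8                # 10 nM, an illustrative affinity
ab_t = ag_t = 2 * kd     # totals chosen so that the equivalence zone is reached
c = complex_conc(ab_t, ag_t, kd)
print(f"[AbAg] = {c:.3e} M, free [Ab] = {ab_t - c:.3e} M, free [Ag] = {ag_t - c:.3e} M")
# All three equal kd = 1e-8 M, i.e. the 'global antibody equilibrium' zone described above.
```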

  19. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C.; Brooks, Scott C; Pace, Molly; Kim, Young Jin; Jardine, Philip M.; Watson, David B.

    2007-01-01

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N_E equilibrium reactions and a set of reactive transport equations of M-N_E kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
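
    The decomposition described above can be illustrated on a toy network. The sketch below (a hypothetical two-reaction system, not the paper's uranium chemistry) uses the left null space of the equilibrium stoichiometric sub-matrix, obtained here with sympy in place of the paper's Gauss-Jordan column reduction, to produce the kinetic-variables: species combinations whose transport equations contain no equilibrium reaction terms.

```python
import sympy as sp

# Toy network: species (A, B, C); fast equilibrium A <=> B, kinetic reaction B -> C.
# Rows are species, columns are reactions (stoichiometric coefficients).
S_eq  = sp.Matrix([[-1], [ 1], [ 0]])   # equilibrium reaction A <=> B
S_kin = sp.Matrix([[ 0], [-1], [ 1]])   # kinetic reaction B -> C

# Kinetic-variables are species combinations unaffected by the equilibrium reactions,
# i.e. the left null space of S_eq (null space of its transpose).
kinetic_variables = S_eq.T.nullspace()
species = sp.Matrix(sp.symbols('A B C'))
for v in kinetic_variables:
    print("kinetic-variable:", sp.expand((v.T * species)[0]))
# Prints A + B and C: their transport equations carry only the kinetic reaction term,
# while the equilibrium reaction A <=> B is handled by an algebraic equilibrium relation.
```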

  20. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions.

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B

    2007-06-16

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.

  1. The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer with non-equilibrium model.

    Zhixin Yang

    The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer is studied when the fluid and solid phases are not in local thermal equilibrium. The modified Darcy model is used for the momentum equation and a two-field model is used for the energy equation, with separate fields representing the fluid and solid phases. The effect of thermal non-equilibrium on the onset of double diffusive convection is discussed. The critical Rayleigh number and the corresponding wave number for the exchange of stability and over-stability are obtained, and the onset criterion for stationary and oscillatory convection is derived analytically and discussed numerically.
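
    For orientation, in the local-thermal-equilibrium, Newtonian, single-diffusive limit of this problem (the Horton-Rogers-Lapwood porous-layer problem) the stationary marginal curve is Ra(a) = (pi^2 + a^2)^2 / a^2, minimised at wavenumber a = pi with Ra_c = 4*pi^2. The sketch below reproduces that classical limit numerically; the viscoelastic, double-diffusive and thermal non-equilibrium effects studied in the paper shift these values.

```python
import numpy as np

# Stationary marginal stability curve for Darcy (porous-layer) convection:
# Ra(a) = (pi^2 + a^2)^2 / a^2, with a the horizontal wavenumber.
a = np.linspace(0.5, 10.0, 100_000)
Ra = (np.pi**2 + a**2) ** 2 / a**2

i = np.argmin(Ra)
print(f"critical wavenumber a_c ~ {a[i]:.4f}  (exact: pi = {np.pi:.4f})")
print(f"critical Rayleigh number Ra_c ~ {Ra[i]:.4f}  (exact: 4*pi^2 = {4*np.pi**2:.4f})")
```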

  2. Vapor-Liquid Equilibrium of Methane with Water and Methanol. Measurements and Modeling

    Frost, Michael Grynnerup; Karakatsani, Eirini; von Solms, Nicolas

    2014-01-01

    that rely on phase equilibrium data for optimization. The objective of this work is to provide experimental data for hydrocarbon systems with polar chemicals such as alcohols, glycols, and water. New vapor-liquid equilibrium data are reported for methane + water, methane + methanol, and methane + methanol...

  3. Nuclear Statistical Equilibrium for compact stars: modelling the nuclear energy functional

    Aymard, Francois

    2015-01-01

    The core collapse supernova is one of the most powerful known phenomena in the universe. It results from the explosion of very massive stars after they have burnt all their fuel. The hot compact remnant, the so-called proto-neutron star, cools down to become an inert catalyzed neutron star. The dynamics and structure of compact stars, that is core collapse supernovae, proto-neutron stars and neutron stars, are still not fully understood and are currently under active research, in association with astrophysical observations and nuclear experiments. One of the key components for modelling compact stars concerns the Equation of State. The task of computing a complete realistic consistent Equation of State for all such stars is challenging because a wide range of densities, proton fractions and temperatures is spanned. This thesis deals with the microscopic modelling of the structure and internal composition of baryonic matter with nucleonic degrees of freedom in compact stars, in order to obtain a realistic unified Equation of State. In particular, we are interested in a formalism which can be applied both at sub-saturation and super-saturation densities, and which gives in the zero temperature limit results compatible with the microscopic Hartree-Fock-Bogoliubov theory with modern realistic effective interactions constrained on experimental nuclear data. For this purpose, we present, for sub-saturated matter, a Nuclear Statistical Equilibrium model which corresponds to a statistical superposition of finite configurations, the so-called Wigner-Seitz cells. Each cell contains a nucleus, or cluster, embedded in a homogeneous electron gas as well as a homogeneous neutron and proton gas. Within each cell, we investigate the different components of the nuclear energy of clusters in interaction with gases. The use of the nuclear mean-field theory for the description of both the clusters and the nucleon gas allows a theoretical consistency with the treatment at saturation

  4. A simple non-equilibrium, statistical-physics toy model of thin-film growth

    Ochab, Jeremi K; Nagel, Hannes; Janke, Wolfhard; Waclaw, Bartlomiej

    2015-01-01

    We present a simple non-equilibrium model of mass condensation with Lennard–Jones interactions between particles and the substrate. We show that when some number of particles is deposited onto the surface and the system is left to equilibrate, particles condense into an island if the density of particles becomes higher than some critical density. We illustrate this with numerically obtained phase diagrams for three-dimensional systems. We also solve a two-dimensional counterpart of this model analytically and show that not only the phase diagram but also the shape of the cross-sections of three-dimensional condensates qualitatively matches the two-dimensional predictions. Lastly, we show that when particles are being deposited with a constant rate, the system has two phases: a single condensate for low deposition rates, and multiple condensates for fast deposition. The behaviour of our model is thus similar to that of thin film growth processes, and in particular to Stranski–Krastanov growth. (paper)

  5. Strategic forward contracting in electricity markets: modelling and analysis by equilibrium method

    Chung, T.S.; Zhang, S.H.; Wong, K.P.; Yu, C.W.; Chung, C.Y.

    2004-01-01

    Contractual arrangement plays an important role in mitigating market power in electricity markets. The issue of whether rational generators would voluntarily enter contract markets through a strategic incentive is examined, together with the factors that could affect this strategic contracting behaviour. A two-stage game model is presented to formulate the competition of generators in bid-based pool spot markets and contract markets, as well as the interaction between these two markets. The affine supply function equilibrium (SFE) method is used to model competitive bidding for the spot market, while the contract market is modelled with the general conjectural variation method. The proposed methodology allows asymmetric, multiple strategic generators having capacity constraints and affine marginal costs with non-zero intercepts to be taken into account. It is shown that the presence of forward contract markets will complicate the solution to the affine SFE, and a new methodology is developed in this regard. Strategic contracting behaviours are analysed in the context of asymmetric, multiple strategic generators. A numerical example is used to verify theoretical results. It is shown that the observability of contract markets plays an important role in fostering generators' strategic contracting incentive, and that this contracting behaviour could also be affected by generators' cost parameters and demand elasticity. (author)

  6. An Extension of the Miller Equilibrium Model into the X-Point Region

    Hill, M. D.; King, R. W.; Stacey, W. M.

    2017-10-01

    The Miller equilibrium model has been extended to better model the flux surfaces in the outer region of the plasma and scrape-off layer, including the poloidally non-uniform flux surface expansion that occurs in the X-point region(s) of diverted tokamaks. Equations for elongation and triangularity are modified to include a poloidally varying component and grad-r, which is used in the calculation of the poloidal magnetic field, is rederived. Initial results suggest that strong quantitative agreement with experimental flux surface reconstructions and strong qualitative agreement with poloidal magnetic fields can be obtained using this model. Applications are discussed. A major new application is the automatic generation of the computation mesh in the plasma edge, scrape-off layer, plenum and divertor regions for use in the GTNEUT neutral particle transport code, enabling this powerful analysis code to be routinely run in experimental analyses. Work supported by US DOE under DE-FC02-04ER54698.
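
    The baseline Miller parameterisation that the extension above modifies writes the flux-surface shape as R(theta) = R0 + r*cos(theta + arcsin(delta)*sin(theta)), Z(theta) = kappa*r*sin(theta), with elongation kappa and triangularity delta. The sketch below evaluates that baseline shape for illustrative shape parameters; the poloidally varying elongation and triangularity of the X-point extension are not included.

```python
import numpy as np

def miller_surface(R0, r, kappa, delta, n=256):
    """Baseline Miller flux-surface shape (constant elongation and triangularity)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    x = np.arcsin(delta)                              # triangularity shift
    R = R0 + r * np.cos(theta + x * np.sin(theta))
    Z = kappa * r * np.sin(theta)
    return R, Z

# Illustrative, DIII-D-like shape parameters (not taken from the paper).
R, Z = miller_surface(R0=1.7, r=0.6, kappa=1.8, delta=0.4)
print(f"R range: {R.min():.2f}-{R.max():.2f} m, Z range: {Z.min():.2f}-{Z.max():.2f} m")
```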

  7. A simplified unified Hauser-Feshbach/Pre-Equilibrium model for calculating double differential cross sections

    Fu, C.Y.

    1988-01-01

    A unified Hauser-Feshbach/Pre-Equilibrium model is extended and simplified. The extension involves the addition of correlations among states of different total quantum numbers (J and J') and the introduction of consistent level density formulas for the H-F and the P-E parts of the calculation. The simplification, aimed at reducing the computational cost, is achieved mainly by keeping only the off-diagonal terms that involve strongly correlated 2p-1h states. A correlation coefficient is introduced to fit the experimental data. The model has been incorporated into the multistep H-F model code TNG. Calculated double differential (n,xn) cross sections at 14 and 25.7 MeV for iron, niobium, and bismuth are in good agreement with experiments. In use at ORNL and JAERI, the TNG code in various stages of development has been applied with success to the evaluation of double differential (n,xn) cross sections from 1 to 20 MeV for the dominant isotopes of chromium, manganese, iron, nickel, copper, and lead. 11 refs., 2 figs

  8. Modelling innovative interventions for optimising healthy lifestyle promotion in primary health care: "Prescribe Vida Saludable" phase I research protocol

    Pombo Haizea

    2009-06-01

    Background The adoption of a healthy lifestyle, including physical activity, a balanced diet, moderate alcohol consumption and abstinence from smoking, is associated with large decreases in the incidence and mortality rates for the most common chronic diseases. That is why primary health care (PHC) services are trying, so far with less success than desirable, to promote healthy lifestyles among patients. The objective of this study is to design and model, under a participative collaboration framework between clinicians and researchers, interventions that are feasible and sustainable for the promotion of healthy lifestyles in PHC. Methods and design Phase I formative research and a quasi-experimental evaluation of the modelling and planning process will be undertaken in eight primary care centres (PCCs) of the Basque Health Service – OSAKIDETZA, of which four centres will be assigned for convenience to the Intervention Group (the others being Controls). Twelve structured study, discussion and consensus sessions, supported by reviews of the literature and relevant documents, will be undertaken throughout 12 months. The first four sessions, including a descriptive strategic needs assessment, will lead to the prioritisation of a health promotion aim in each centre. In the remaining eight sessions, collaborative design of intervention strategies, on the basis of a planning process and pilot trials, will be carried out. The impact of the formative process on the practice of healthy lifestyle promotion, attitude towards health promotion and other factors associated with the optimisation of preventive clinical practice will be assessed, through pre- and post-programme evaluations and comparisons of the indicators measured in professionals from the centres assigned to the Intervention or Control Groups. Discussion There are four necessary factors for the outcome to be successful and result in important changes: (1) the commitment of professional

  9. Computer Based Optimisation Rutines

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    In this paper the need for optimisation methods for the laser cutting process has been identified as three different situations. Demands on the optimisation methods for these situations are presented, and one method for each situation is suggested. The adaptation and implementation of the methods...

  10. Optimal Optimisation in Chemometrics

    Hageman, J.A.

    2004-01-01

    The use of global optimisation methods is not straightforward, especially for the more difficult optimisation problems. Solutions have to be found for items such as the evaluation function, representation, step function and meta-parameters, before any useful results can be obtained. This thesis aims

  11. Evaluation of indoor radon equilibrium factor using CFD modeling and resulting annual effective dose

    Rabi, R.; Oufni, L.

    2018-04-01

    The equilibrium factor is an important parameter for reasonably estimating the population dose from radon. However, the equilibrium factor value depends mainly on the ventilation rate and the meteorological factors. Therefore, this study focuses on investigating numerically the influence of the ventilation rate, temperature and humidity on the equilibrium factor between radon and its progeny. The numerical results showed that ventilation rate, temperature and humidity have significant impacts on the indoor equilibrium factor. The variations of the equilibrium factor with ventilation, temperature and relative humidity are discussed. Moreover, the committed equivalent doses due to 218Po and 214Po radon short-lived progeny were evaluated in different tissues of the respiratory tract of the members of the public from the inhalation of indoor air. The annual effective dose due to radon short-lived progeny from the inhalation of indoor air by the members of the public was investigated.
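
    A common zero-dimensional reference for the equilibrium factor is the well-mixed room balance: each progeny activity is reduced relative to its parent by ventilation (attachment and deposition, included in fuller treatments, are neglected here, so the result is an upper-end estimate). The sketch below computes F from approximate decay constants of 218Po, 214Pb and 214Bi and the commonly quoted potential-alpha-energy weights (about 0.105, 0.515 and 0.380); the CFD model above resolves the spatial, temperature and humidity dependence that this box model cannot.

```python
import math

HALF_LIVES_MIN = {"Po-218": 3.05, "Pb-214": 26.8, "Bi-214": 19.9}   # approximate half-lives, minutes
WEIGHTS = {"Po-218": 0.105, "Pb-214": 0.515, "Bi-214": 0.380}       # potential-alpha-energy weights

def equilibrium_factor(ventilation_per_h):
    """Equilibrium factor F in a well-mixed room, losses by decay and ventilation only."""
    f_parent = 1.0   # radon itself
    F = 0.0
    for nuclide, t_half in HALF_LIVES_MIN.items():   # chain order preserved (Python 3.7+)
        lam = math.log(2.0) / (t_half / 60.0)        # decay constant, 1/h
        f_parent *= lam / (lam + ventilation_per_h)  # activity ratio of this progeny to radon
        F += WEIGHTS[nuclide] * f_parent
    return F

for v in (0.5, 1.0, 2.0):
    print(f"ventilation {v:.1f} 1/h  ->  F ~ {equilibrium_factor(v):.2f}")
```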

  12. Equilibrium Droplets on Deformable Substrates: Equilibrium Conditions.

    Koursari, Nektaria; Ahmed, Gulraiz; Starov, Victor M

    2018-05-15

    Equilibrium conditions of droplets on deformable substrates are investigated, and it is proven using Jacobi's sufficient condition that the obtained solutions really provide equilibrium profiles of both the droplet and the deformed support. At the equilibrium, the excess free energy of the system should have a minimum value, which means that both necessary and sufficient conditions of the minimum should be fulfilled. Only in this case, the obtained profiles provide the minimum of the excess free energy. The necessary condition of the equilibrium means that the first variation of the excess free energy should vanish, and the second variation should be positive. Unfortunately, these two conditions alone do not prove that the obtained profiles correspond to the minimum of the excess free energy, and they cannot. It is necessary to check whether the sufficient condition of the equilibrium (Jacobi's condition) is satisfied. To the best of our knowledge Jacobi's condition has never been verified for any already published equilibrium profiles of both the droplet and the deformable substrate. A simple model of the equilibrium droplet on the deformable substrate is considered, and it is shown that the deduced profiles of the equilibrium droplet and deformable substrate satisfy Jacobi's condition, that is, really provide the minimum of the excess free energy of the system. To simplify calculations, a simplified linear disjoining/conjoining pressure isotherm is adopted for the calculations. It is shown that both necessary and sufficient conditions for equilibrium are satisfied. For the first time, the validity of Jacobi's condition is verified. The latter proves that the developed model really provides (i) the minimum of the excess free energy of the system droplet/deformable substrate and (ii) equilibrium profiles of both the droplet and the deformable substrate.
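
    For reference, the necessary and sufficient conditions invoked above are the standard ones of the one-dimensional calculus of variations; in generic notation (a sketch, not the paper's specific excess free energy functional) they read:

```latex
For a generic one-dimensional functional
\[
  J[u] \;=\; \int_{x_0}^{x_1} F\bigl(x,\,u(x),\,u'(x)\bigr)\,dx ,
\]
the conditions checked are
\[
  \text{Euler--Lagrange (necessary):}\quad
  \frac{\partial F}{\partial u} - \frac{d}{dx}\frac{\partial F}{\partial u'} = 0 ,
  \qquad
  \text{Legendre (necessary):}\quad F_{u'u'} \ge 0 .
\]
Jacobi's (sufficient) condition, together with the strengthened Legendre condition
$F_{u'u'}>0$: the solution $h(x)$ of the accessory equation
\[
  \frac{d}{dx}\!\bigl(F_{u'u'}\,h'\bigr)
  - \Bigl(F_{uu} - \frac{d}{dx}F_{uu'}\Bigr)h = 0 ,
  \qquad h(x_0)=0,\; h'(x_0)\neq 0 ,
\]
must have no further zero in $(x_0,x_1]$, i.e. no point conjugate to $x_0$; only then does
the extremal actually minimise $J$.
```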

  13. Credit price optimisation within retail banking

    2014-02-14

    Feb 14, 2014 ... cost based pricing, where the price of a product or service is based on the .... function obtained from fitting a logistic regression model .... Note that the proposed optimisation approach below will allow us to also incorporate.

  14. Ignition conditions relaxation for central hot-spot ignition with an ion-electron non-equilibrium model

    Fan, Zhengfeng; Liu, Jie

    2016-10-01

    We present an ion-electron non-equilibrium model, in which the hot-spot ion temperature is higher than its electron temperature so that the hot-spot nuclear reactions are enhanced while energy leaks are considerably reduced. Theoretical analysis shows that the ignition region would be significantly enlarged in the hot-spot ρR-T space as compared with the commonly used equilibrium model. Simulations show that shocks could be utilized to create and maintain non-equilibrium conditions within the hot spot, and the hot-spot ρR requirement is remarkably reduced for achieving self-heating. In NIF high-foot implosions, it is observed that the x-ray enhancement factors are less than unity, which is not self-consistent and is caused by assuming Te = Ti. From this inconsistency, we can infer that ion-electron non-equilibrium exists in the high-foot implosions and that the ion temperature could be 9% larger than the equilibrium temperature.

  15. GEM-E3: A computable general equilibrium model applied for Switzerland

    Bahn, O. [Paul Scherrer Inst., CH-5232 Villigen PSI (Switzerland); Frei, C. [Ecole Polytechnique Federale de Lausanne (EPFL) and Paul Scherrer Inst. (Switzerland)

    2000-01-01

    The objectives of the European Research Project GEM-E3-ELITE, funded by the European Commission and coordinated by the Centre for European Economic Research (Germany), were to further develop the general equilibrium model GEM-E3 (Capros et al., 1995, 1997) and to conduct policy analysis through case studies. GEM-E3 is an applied general equilibrium model that analyses the macro-economy and its interaction with the energy system and the environment through the balancing of energy supply and demand, atmospheric emissions and pollution control, together with the fulfillment of overall equilibrium conditions. PSI's research objectives within GEM-E3-ELITE were to implement and apply GEM-E3 for Switzerland. The first objective required in particular the development of a Swiss database for each of the GEM-E3 modules (economic module and environmental module). For the second objective, strategies to reduce CO2 emissions were evaluated for Switzerland. In order to develop the economic database, PSI collaborated with the Laboratory of Applied Economics (LEA) of the University of Geneva and the Laboratory of Energy Systems (LASEN) of the Federal Institute of Technology in Lausanne (EPFL). The Swiss Federal Statistical Office (SFSO) and the Institute for Business Cycle Research (KOF) of the Swiss Federal Institute of Technology (ETH Zurich) also contributed data. The Swiss environmental database consists mainly of an Energy Balance Table and of an Emission Coefficients Table. Both were designed using national and international official statistics. The Emission Coefficients Table is furthermore based on know-how of the PSI GaBE Project. Using GEM-E3 Switzerland, two strategies to reduce the Swiss CO2 emissions were evaluated: a carbon tax ('tax only' strategy), and the combination of a carbon tax with the buying of CO2 emission permits ('permits and tax' strategy). In the first strategy, Switzerland would impose the necessary carbon tax to achieve

  16. GEM-E3: A computable general equilibrium model applied for Switzerland

    Bahn, O.; Frei, C.

    2000-01-01

    The objectives of the European Research Project GEM-E3-ELITE, funded by the European Commission and coordinated by the Centre for European Economic Research (Germany), were to further develop the general equilibrium model GEM-E3 (Capros et al., 1995, 1997) and to conduct policy analysis through case studies. GEM-E3 is an applied general equilibrium model that analyses the macro-economy and its interaction with the energy system and the environment through the balancing of energy supply and demand, atmospheric emissions and pollution control, together with the fulfillment of overall equilibrium conditions. PSI's research objectives within GEM-E3-ELITE were to implement and apply GEM-E3 for Switzerland. The first objective required in particular the development of a Swiss database for each of the GEM-E3 modules (economic module and environmental module). For the second objective, strategies to reduce CO2 emissions were evaluated for Switzerland. In order to develop the economic database, PSI collaborated with the Laboratory of Applied Economics (LEA) of the University of Geneva and the Laboratory of Energy Systems (LASEN) of the Federal Institute of Technology in Lausanne (EPFL). The Swiss Federal Statistical Office (SFSO) and the Institute for Business Cycle Research (KOF) of the Swiss Federal Institute of Technology (ETH Zurich) also contributed data. The Swiss environmental database consists mainly of an Energy Balance Table and of an Emission Coefficients Table. Both were designed using national and international official statistics. The Emission Coefficients Table is furthermore based on know-how of the PSI GaBE Project. Using GEM-E3 Switzerland, two strategies to reduce the Swiss CO2 emissions were evaluated: a carbon tax ('tax only' strategy), and the combination of a carbon tax with the buying of CO2 emission permits ('permits and tax' strategy). In the first strategy, Switzerland would impose the necessary carbon tax to achieve the reduction target, and use the tax

  17. Non-equilibrium thermochemical heat storage in porous media: Part 1 – Conceptual model

    Nagel, T.; Shao, H.; Singh, A.K.; Watanabe, N.; Roßkopf, C.; Linder, M.; Wörner, A.; Kolditz, O.

    2013-01-01

    Thermochemical energy storage can play an important role in the establishment of a reliable renewable energy supply and can increase the efficiency of industrial processes. The application of directly permeated reactive beds leads to strongly coupled mass and heat transport processes that also determine reaction kinetics. To advance this technology beyond the laboratory stage requires a thorough theoretical understanding of the multiphysics phenomena and their quantification on a scale relevant to engineering analyses. Here, the theoretical derivation of a macroscopic model for multicomponent compressible gas flow through a porous solid is presented along with its finite element implementation where solid–gas reactions occur and both phases have individual temperature fields. The model is embedded in the Theory of Porous Media and the derivation is based on the evaluation of the Clausius–Duhem inequality. Special emphasis is placed on the interphase coupling via mass, momentum and energy interaction terms and their effects are partially illustrated using numerical examples. Novel features of the implementation of the described model are verified via comparisons to analytical solutions. The specification, validation and application of the full model to a calcium hydroxide/calcium oxide based thermochemical storage system are the subject of part 2 of this study. - Highlights: • Rigorous application of the Theory of Porous Media and the 2nd law of thermodynamics. • Thermodynamically consistent model for thermochemical heat storage systems. • Multicomponent gas; modified Fick's and Darcy's law; thermal non-equilibrium; solid–gas reactions. • Clear distinction between source and production terms. • Open source finite element implementation and benchmarks

  18. Fluctuation-dissipation relation and stationary distribution of an exactly solvable many-particle model for active biomatter far from equilibrium.

    Netz, Roland R

    2018-05-14

    An exactly solvable, Hamiltonian-based model of many massive particles that are coupled by harmonic potentials and driven by stochastic non-equilibrium forces is introduced. The stationary distribution and the fluctuation-dissipation relation are derived in closed form for the general non-equilibrium case. Deviations from equilibrium are on one hand characterized by the difference of the obtained stationary distribution from the Boltzmann distribution; this is possible because the model derives from a particle Hamiltonian. On the other hand, the difference between the obtained non-equilibrium fluctuation-dissipation relation and the standard equilibrium fluctuation-dissipation theorem allows us to quantify non-equilibrium in an alternative fashion. Both indicators of non-equilibrium behavior, i.e., deviations from the Boltzmann distribution and deviations from the equilibrium fluctuation-dissipation theorem, can be expressed in terms of a single non-equilibrium parameter α that involves the ratio of friction coefficients and random force strengths. The concept of a non-equilibrium effective temperature, which can be defined by the relation between fluctuations and the dissipation, is by comparison with the exactly derived stationary distribution shown not to hold, even if the effective temperature is made frequency dependent. The analysis is not confined to close-to-equilibrium situations but rather is exact and thus holds for arbitrarily large deviations from equilibrium. Also, the suggested harmonic model can be obtained from non-linear mechanical network systems by an expansion in terms of suitably chosen deviatory coordinates; the obtained results should thus be quite general. This is demonstrated by comparison of the derived non-equilibrium fluctuation dissipation relation with experimental data on actin networks that are driven out of equilibrium by energy-consuming protein motors. The comparison is excellent and allows us to extract the non-equilibrium
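
    The role of the "ratio of friction coefficients and random force strengths" can be illustrated with a minimal one-particle version of such a model: an overdamped harmonic degree of freedom coupled to two baths with different frictions and noise temperatures (a sketch, not the paper's many-particle Hamiltonian model). The stationary variance corresponds to an effective temperature T_eff = (g1*T1 + g2*T2)/(g1 + g2), which the simulation below reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped harmonic degree of freedom coupled to two baths (illustrative units, k_B = 1).
k = 1.0                      # spring constant
g1, T1 = 1.0, 1.0            # friction and temperature of bath 1 (thermal)
g2, T2 = 3.0, 5.0            # friction and temperature of bath 2 (the "active" drive)
gtot = g1 + g2

dt, nsteps, nburn = 1e-3, 2_000_000, 200_000
noise_amp = np.sqrt(2.0 * (g1 * T1 + g2 * T2) * dt)   # combined Gaussian noise strength
x = 0.0
xs = np.empty(nsteps)
for i in range(nsteps):                                # Euler-Maruyama integration
    x += (-k * x * dt + noise_amp * rng.standard_normal()) / gtot
    xs[i] = x

T_eff_theory = (g1 * T1 + g2 * T2) / gtot
print(f"k*<x^2> (simulation) : {k * xs[nburn:].var():.3f}")
print(f"T_eff   (theory)     : {T_eff_theory:.3f}")
```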

  19. Fluctuation-dissipation relation and stationary distribution of an exactly solvable many-particle model for active biomatter far from equilibrium

    Netz, Roland R.

    2018-05-01

    An exactly solvable, Hamiltonian-based model of many massive particles that are coupled by harmonic potentials and driven by stochastic non-equilibrium forces is introduced. The stationary distribution and the fluctuation-dissipation relation are derived in closed form for the general non-equilibrium case. Deviations from equilibrium are on one hand characterized by the difference of the obtained stationary distribution from the Boltzmann distribution; this is possible because the model derives from a particle Hamiltonian. On the other hand, the difference between the obtained non-equilibrium fluctuation-dissipation relation and the standard equilibrium fluctuation-dissipation theorem allows us to quantify non-equilibrium in an alternative fashion. Both indicators of non-equilibrium behavior, i.e., deviations from the Boltzmann distribution and deviations from the equilibrium fluctuation-dissipation theorem, can be expressed in terms of a single non-equilibrium parameter α that involves the ratio of friction coefficients and random force strengths. The concept of a non-equilibrium effective temperature, which can be defined by the relation between fluctuations and the dissipation, is by comparison with the exactly derived stationary distribution shown not to hold, even if the effective temperature is made frequency dependent. The analysis is not confined to close-to-equilibrium situations but rather is exact and thus holds for arbitrarily large deviations from equilibrium. Also, the suggested harmonic model can be obtained from non-linear mechanical network systems by an expansion in terms of suitably chosen deviatory coordinates; the obtained results should thus be quite general. This is demonstrated by comparison of the derived non-equilibrium fluctuation dissipation relation with experimental data on actin networks that are driven out of equilibrium by energy-consuming protein motors. The comparison is excellent and allows us to extract the non-equilibrium

  20. A pseudo-equilibrium thermodynamic model of information processing in nonlinear brain dynamics.

    Freeman, Walter J

    2008-01-01

    Computational models of brain dynamics fall short of performance in speed and robustness of pattern recognition in detecting minute but highly significant pattern fragments. A novel model employs the properties of thermodynamic systems operating far from equilibrium, which is analyzed by linearization near adaptive operating points using root locus techniques. Such systems construct order by dissipating energy. Reinforcement learning of conditioned stimuli creates a landscape of attractors and their basins in each sensory cortex by forming nerve cell assemblies in cortical connectivity. Retrieval of a selected category of stored knowledge is by a phase transition that is induced by a conditioned stimulus, and that leads to pattern self-organization. Near self-regulated criticality the cortical background activity displays aperiodic null spikes at which analytic amplitude nears zero, and which constitute a form of Rayleigh noise. Phase transitions in recognition and recall are initiated at null spikes in the presence of an input signal, owing to the high signal-to-noise ratio that facilitates capture of cortex by an attractor, even by very weak activity that is typically evoked by a conditioned stimulus.

  1. Energy from sugarcane bagasse under electricity rationing in Brazil: a computable general equilibrium model

    Scaramucci, Jose A.; Perin, Clovis; Pulino, Petronio; Bordoni, Orlando F.J.G.; Cunha, Marcelo P. da; Cortez, Luis A.B.

    2006-01-01

    In the midst of the institutional reforms of the Brazilian electric sector initiated in the 1990s, a serious electricity shortage crisis developed in 2001. As an alternative to blackouts, the government instituted an emergency plan aimed at reducing electricity consumption. From June 2001 to February 2002, Brazilians were compelled to curtail electricity use by 20%. Since the late 1990s, but especially after the electricity crisis, energy policy in Brazil has been directed towards increasing thermoelectricity supply and promoting further gains in energy conservation. Two main issues are addressed here. Firstly, we estimate the economic impacts of constraining the supply of electric energy in Brazil. Secondly, we investigate the possible penetration of electricity generated from sugarcane bagasse. A computable general equilibrium (CGE) model is used. The traditional sector of electricity and the remainder of the economy are characterized by a stylized top-down representation as nested CES (constant elasticity of substitution) production functions. The electricity production from sugarcane bagasse is described through a bottom-up activity analysis, with a detailed representation of the required inputs based on engineering studies. The model constructed is used to study the effects of the electricity shortage in the preexisting sector through prices, production and income changes. It is shown that installing capacity to generate electricity surpluses by the sugarcane agroindustrial system could ease the economic impacts of an electric energy shortage crisis on the gross domestic product (GDP).
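
    The "stylized top-down representation as nested CES production functions" mentioned above can be written compactly; the sketch below evaluates a two-level nest (a value-added aggregate of capital and labour combined with an energy aggregate) for purely illustrative share and substitution parameters, not those of the Brazilian model.

```python
def ces(x1, x2, delta, sigma, scale=1.0):
    """CES aggregate of two inputs with share delta and elasticity of substitution sigma.
    (sigma = 1 is the Cobb-Douglas limit and would need a separate formula.)"""
    rho = 1.0 - 1.0 / sigma          # so that sigma = 1 / (1 - rho)
    return scale * (delta * x1**rho + (1.0 - delta) * x2**rho) ** (1.0 / rho)

# Illustrative two-level nest: value added = CES(capital, labour); output = CES(value added, energy).
capital, labour, energy = 100.0, 200.0, 50.0
value_added = ces(capital, labour, delta=0.4, sigma=0.8)
output      = ces(value_added, energy, delta=0.7, sigma=0.5)
print(f"value added = {value_added:.1f}, gross output = {output:.1f}")
```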

  2. Modeling of thermodynamic non-equilibrium flows around cylinders and in channels

    Sinha, Avick; Gopalakrishnan, Shiva

    2017-11-01

    Numerical simulations for two different types of flash-boiling flows, namely shear flow (flow through a de-Laval nozzle) and free shear flow (flow past a cylinder), are carried out in the present study. The Homogeneous Relaxation Model (HRM) is used to model the thermodynamic non-equilibrium process. It was observed that the vaporization of the fluid stream, which was initially maintained at a sub-cooled state, originates at the nozzle throat. This is because the fluid accelerates at the vena-contracta and subsequently the pressure falls below the saturation vapor pressure, generating a two-phase mixture in the diverging section of the nozzle. The mass flow rate at the nozzle was found to decrease with the increase in fluid inlet temperature. A similar phenomenon also occurs for the free shear case due to boundary layer separation, causing a drop in pressure behind the cylinder. The mass fraction of vapor is maximum at the rear end of the cylinder, where the size of the wake is highest. As the back pressure is reduced, severe flashing behavior was observed. The numerical simulations were validated against available experimental data. The authors gratefully acknowledge funding from the public-private partnership between DST, Confederation of Indian Industry and General Electric Pvt. Ltd.
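
    The Homogeneous Relaxation Model referenced above closes the two-phase system by relaxing the instantaneous vapour quality x toward its local equilibrium value over a timescale theta, Dx/Dt = (x_eq - x)/theta. The sketch below integrates that single closure equation along a fluid parcel with constant, purely illustrative values of x_eq and theta; in a CFD implementation theta is usually taken from an empirical correlation and x_eq varies with the local pressure.

```python
import numpy as np

def relax_quality(x0, x_eq, theta, t_end, dt=1e-5):
    """Integrate the HRM closure Dx/Dt = (x_eq - x)/theta along a fluid parcel (explicit Euler)."""
    n = int(t_end / dt)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] + dt * (x_eq - x[i - 1]) / theta
    return x

# Illustrative values: a sub-cooled parcel flashing toward an equilibrium quality of 0.05.
x = relax_quality(x0=0.0, x_eq=0.05, theta=5e-4, t_end=3e-3)
print(f"quality after 3 ms: {x[-1]:.4f} (equilibrium value 0.0500)")
```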

  3. Application of a non-equilibrium reaction model for describing horizontal well performance in foamy oil

    Luigi, A.; Saputelli, B.; Carlas, M.; Canache, P.; Lopez, E. [DPVS Exploracion y Produccion (Venezuela)

    1998-12-31

    This study was designed to determine the activation energy ranges and frequency factor ranges in chemical reactions in heavy oils of the Orinoco Belt in Venezuela, in order to account for the kinetics of physical changes that occur in the morphology of gas-oil dispersion. A non-equilibrium reaction model was used to model foamy oil behaviour observed at SDZ-182 horizontal well in the Zuata field. Results showed that activation energy for the first reaction ranged from 0 to 0.01 BTU/lb-mol and frequency factor from 0.001 to 1000 l/day. For the second reaction the activation energy was 50x10^3 BTU/lb-mol and the frequency factor 2.75x10^12 l/day. The second reaction was highly sensitive to the modifications in activation energy and frequency factor. However, both the activation energy and frequency factor were independent of variations for the first reaction. In the case of the activation energy, the results showed that the high sensitivity of this parameter reflected the impact that temperature has on the representation of foamy oil behaviour. 8 refs., 2 tabs., 6 figs.
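
    The activation energies and frequency factors quoted above enter the simulator through the usual Arrhenius rate expression k = A*exp(-Ea/(R*T)). The sketch below evaluates that expression with the reported second-reaction parameters at a hypothetical reservoir temperature (the abstract does not state the temperature used), taking R = 1.986 BTU/(lb-mol*degR) and interpreting the reported frequency-factor unit as a per-day rate.

```python
import math

R_BTU = 1.986                # gas constant, BTU/(lb-mol * degR)

def arrhenius(A, Ea, T_rankine):
    """First-order Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R_BTU * T_rankine))

# Reported second-reaction parameters; the temperature below is an assumed reservoir value.
A_2 = 2.75e12                # frequency factor, per day (unit as reported)
Ea_2 = 50e3                  # activation energy, BTU/lb-mol
T = 130.0 + 459.67           # 130 degF expressed in degrees Rankine (assumption, not from the paper)

print(f"k2 ~ {arrhenius(A_2, Ea_2, T):.3e} per day at an assumed 130 degF")
```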

  4. Electrical characteristics of TIG arcs in argon from non-equilibrium modelling and experiment

    Baeva, Margarita; Uhrlandt, Dirk; Siewert, Erwan

    2016-09-01

    Electric arcs are widely used in industrial processes, so a thorough understanding of the arc characteristics is highly important to industrial research and development. TIG welding arcs operated with pointed electrodes made of tungsten, doped with cerium oxide, have been studied in order to analyze in detail the electric field and the arc voltage. A newly developed non-equilibrium model of the arc is based on a complete diffusion treatment of particle fluxes, a generalized form of Ohm's law, and boundary conditions accounting for the space-charge sheaths within the magneto-hydrodynamic approach. Experiments have been carried out for electric currents in the range 5-200 A. The electric arc has been initiated between a WC20 cathode and a water-cooled copper plate placed 0.8 mm from each other. The arc length has been continuously increased by 0.1 mm up to 15 mm and the arc voltage has been simultaneously recorded. Modelling and experimental results will be presented and discussed.

  5. Extraction of benzene and cyclohexane using [BMIM][N(CN)2] and their equilibrium modeling

    Ismail, Marhaina; Bustam, M. Azmi; Man, Zakaria

    2017-12-01

    The separation of aromatic compounds from aliphatic mixtures is one of the essential industrial processes for an economically green process. In order to determine the separation efficiency of an ionic liquid (IL) as a solvent in the separation, the ternary liquid-liquid extraction (LLE) diagram of 1-butyl-3-methylimidazolium dicyanamide [BMIM][N(CN)2] with benzene and cyclohexane was studied at T=298.15 K and atmospheric pressure. The solute distribution coefficient and solvent selectivity derived from the equilibrium data were used to evaluate if the selected ionic liquid can be considered as a potential solvent for the separation of benzene from cyclohexane. The experimental tie-line data were correlated using the non-random two-liquid (NRTL) model and the Margules model. It was found that the solute distribution coefficient is (0.4430-0.0776) and the selectivity of [BMIM][N(CN)2] for benzene is (53.6-13.9). The ternary diagram showed that the selected IL can perform the separation of benzene and cyclohexane as it has extractive capacity and selectivity. Therefore, [BMIM][N(CN)2] can be considered as a potential extracting solvent for the LLE of benzene and cyclohexane.
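
    The two performance measures quoted above are computed directly from tie-line compositions: the distribution coefficient is the ratio of the solute mole fraction in the extract (IL-rich) phase to that in the raffinate phase, and the selectivity is the ratio of the benzene and cyclohexane distribution coefficients. The compositions in the sketch below are made-up numbers for illustration, not the measured tie-line data.

```python
def distribution_coefficient(x_extract, x_raffinate):
    """Solute distribution coefficient D = x_extract / x_raffinate (mole-fraction basis)."""
    return x_extract / x_raffinate

# Hypothetical tie-line compositions (mole fractions), for illustration only.
x_benzene_ext, x_benzene_raf = 0.080, 0.200
x_cyclohexane_ext, x_cyclohexane_raf = 0.010, 0.750

D_benzene = distribution_coefficient(x_benzene_ext, x_benzene_raf)
D_cyclohexane = distribution_coefficient(x_cyclohexane_ext, x_cyclohexane_raf)
selectivity = D_benzene / D_cyclohexane

print(f"D_benzene = {D_benzene:.3f}, D_cyclohexane = {D_cyclohexane:.4f}, S = {selectivity:.1f}")
```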

  6. Phase equilibrium engineering

    Brignole, Esteban Alberto

    2013-01-01

    Traditionally, the teaching of phase equilibria emphasizes the relationships between the thermodynamic variables of each phase in equilibrium rather than its engineering applications. This book changes the focus from the use of thermodynamics relationships to compute phase equilibria to the design and control of the phase conditions that a process needs. Phase Equilibrium Engineering presents a systematic study and application of phase equilibrium tools to the development of chemical processes. The thermodynamic modeling of mixtures for process development, synthesis, simulation, design and

  7. CO2, energy and economy interactions: A multisectoral, dynamic, computable general equilibrium model for Korea

    Kang, Yoonyoung

    While vast resources have been invested in the development of computational models for cost-benefit analysis for the "whole world" or for the largest economies (e.g. United States, Japan, Germany), the remainder have been thrown together into one model for the "rest of the world." This study presents a multi-sectoral, dynamic, computable general equilibrium (CGE) model for Korea. This research evaluates the impacts of controlling CO2 emissions using a multisectoral CGE model. This CGE economy-energy-environment model analyzes and quantifies the interactions between CO2, energy and economy. This study examines interactions and influences of key environmental policy components: applied economic instruments, emission targets, and environmental tax revenue recycling methods. The most cost-effective economic instrument is the carbon tax. The economic effects discussed include impacts on main macroeconomic variables (in particular, economic growth), sectoral production, and the energy market. This study considers several aspects of various CO2 control policies, such as the basic variables in the economy: capital stock and net foreign debt. The results indicate emissions might be stabilized in Korea at the expense of economic growth and with dramatic sectoral allocation effects. Carbon dioxide emissions stabilization could be achieved to the tune of a 600 trillion won loss over a 20 year period (1990-2010). The average annual real GDP would decrease by 2.10% over the simulation period compared to the 5.87% increase in the Business-as-Usual. This model satisfies an immediate need for a policy simulation model for Korea and provides the basic framework for similar economies. It is critical to keep the central economic question at the forefront of any discussion regarding environmental protection. How much will reform cost, and what does the economy stand to gain and lose? Without this model, the policy makers might resort to hesitation or even blind speculation. With

  8. Effect of dissolved organic matter on pre-equilibrium passive sampling: A predictive QSAR modeling study.

    Lin, Wei; Jiang, Ruifen; Shen, Yong; Xiong, Yaxin; Hu, Sizi; Xu, Jianqiao; Ouyang, Gangfeng

    2018-04-13

    Pre-equilibrium passive sampling is a simple and promising technique for studying sampling kinetics, which is crucial to determine the distribution, transfer and fate of hydrophobic organic compounds (HOCs) in environmental water and organisms. Environmental water samples contain complex matrices that complicate the traditional calibration process for obtaining the accurate rate constants. This study proposed a QSAR model to predict the sampling rate constants of HOCs (polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and pesticides) in aqueous systems containing complex matrices. A homemade flow-through system was established to simulate an actual aqueous environment containing dissolved organic matter (DOM), i.e. humic acid (HA) and (2-hydroxypropyl)-β-cyclodextrin (β-HPCD), and to obtain the experimental rate constants. Then, a quantitative structure-activity relationship (QSAR) model using Genetic Algorithm-Multiple Linear Regression (GA-MLR) was found to correlate the experimental rate constants to the system state, including physicochemical parameters of the HOCs and DOM, which were calculated and selected as descriptors by Density Functional Theory (DFT) and Chem 3D. The experimental results showed that the rate constants significantly increased as the concentration of DOM increased, and enhancement factors of 70-fold and 34-fold were observed for the HOCs in HA and β-HPCD, respectively. The established QSAR model was validated as credible (adjusted R^2 = 0.862) and predictable (Q^2 = 0.835) in estimating the rate constants of HOCs for complex aqueous sampling, and a probable mechanism was developed by comparison to the reported theoretical study. The present study established a QSAR model of passive sampling rate constants and calibrated the effect of DOM on the sampling kinetics. Copyright © 2018 Elsevier B.V. All rights reserved.
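
    Once the descriptors are selected (the paper uses a genetic algorithm for that step), the QSAR itself is a multiple linear regression of the rate constants on the descriptors. The sketch below fits such a model with ordinary least squares on synthetic data; the descriptor values are placeholders, not the paper's DFT-derived descriptors, and the GA feature-selection step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 20 compounds, 3 placeholder descriptors (e.g. hydrophobicity-,
# volume- and DOM-dose-like quantities; purely illustrative).
X = rng.normal(size=(20, 3))
true_coef = np.array([0.8, -0.3, 0.5])
y = 1.2 + X @ true_coef + 0.05 * rng.normal(size=20)   # synthetic log(rate constant)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ coef
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("intercept and coefficients:", np.round(coef, 3))
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```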

  9. A report on intercomparison studies of computer programs which respectively model: i) radionuclide migration ii) equilibrium chemistry of groundwater

    Broyd, T.W.; McD Grant, M.; Cross, J.E.

    1985-01-01

    This report describes two intercomparison studies of computer programs which respectively model: i) radionuclide migration ii) equilibrium chemistry of groundwaters. These studies have been performed by running a series of test cases with each program and comparing the various results obtained. The work forms a part of the CEC MIRAGE project (MIgration of RAdionuclides in the GEosphere) and has been jointly funded by the CEC and the United Kingdom Department of the Environment. Presentations of the material contained herein were given at plenary meetings of the MIRAGE project in Brussels in March, 1984 (migration) and March, 1985 (equilibrium chemistry) respectively

  10. A model for a countercurrent gas—solid—solid trickle flow reactor for equilibrium reactions. The methanol synthesis

    Westerterp, K.R.; Kuczynski, M.

    1987-01-01

    The theoretical background for a novel, countercurrent gas—solid—solid trickle flow reactor for equilibrium gas reactions is presented. A one-dimensional, steady-state reactor model is developed. The influence of the various process parameters on the reactor performance is discussed. The physical

  11. Observation of non-chemical equilibrium effect on Ar-CO2-H2 thermal plasma model by changing pressure

    Al-Mamun, Sharif Abdullah; Tanaka, Yasunori; Uesugi, Yoshihiko

    2009-01-01

    The authors developed a two-dimensional one-temperature chemical non-equilibrium (1T-NCE) model of Ar-CO2-H2 inductively coupled thermal plasmas (ICTP) to investigate the effect of pressure variation. The basic concept of the one-temperature model is the assumption and treatment of the same energy conservation equation for electrons and heavy particles. The energy conservation equations consider reaction heat effects and energy transfer among the species produced as well as enthalpy flow resulting from diffusion. Assuming twenty-two (22) different particles in this model and by solving mass conservation equations for each particle, considering diffusion, convection and net production terms resulting from one hundred and ninety-eight (198) chemical reactions, chemical non-equilibrium effects were taken into account. Transport and thermodynamic properties of Ar-CO2-H2 thermal plasmas were self-consistently calculated using the first-order approximation of the Chapman-Enskog method. Finally, results obtained at atmospheric pressure (760 Torr) and at reduced pressure (500, 300 Torr) were compared with results from the one-temperature chemical equilibrium (1T-CE) model. This comparison supported the discussion of chemical non-equilibrium effects in the inductively coupled thermal plasmas (ICTP).

  12. Stability analysis of a model equilibrium for a gravito-electrostatic sheath in a colloidal plasma under external gravity effect

    Rajkhowa, Kavita Rani; Bujarbarua, S.; Dwivedi, C.B.

    1999-01-01

    The present contribution tries to find a scientific answer to the question of stability of an equilibrium plasma sheath in a colloidal plasma system under external gravity effect. A model equilibrium of hydrodynamical character has been discussed on the basis of quasi-hydrostatic approximation of levitational condition. It is found that such an equilibrium is highly unstable to a modified-ion acoustic wave with a conditional likelihood of linear driving of the so-called acoustic mode too. Thus, it is reported (within fluid treatment) that a plasma-sheath edge in a colloidal plasma under external gravity effect could be highly sensitive to the acoustic turbulence. Its consequential role on possible physical mechanism of Coulomb phase transition has been conjectured. However, more rigorous calculations as future course of work are required to corroborate our phenomenological suggestions. (author)

  13. Model-based monitoring, optimisation and cogeneration plant billing in heating power stations; Modellgestuetzte Ueberwachung, Optimierung und KWK Abrechnung in Heizkraftwerken

    Deeskow, P. [STEAG KETEK IT GmbH, Oberhausen (Germany); Pawellek, R. [Sofbid GmbH, Zwingenberg (Germany)

    2005-07-01

    On the basis of thermodynamic modelling, efficient online systems can be constructed which provide multiple commercial uses for plant operation. Incipient failures are recognized earlier, so that countermeasures can be taken at an early stage and long-term maintenance measures can be planned. Performance can be optimised, and - last but not least - the multitude of processed data enables workflow analysis, e.g. for simplifying billing processes in secondary relational databases. Performance data are presented of two coal power plants with 350 MWel/250MWth and 450MWel/50MWth in which systems of this type have been in use for several years now. (orig.)

  14. Analytical Model of Inlet Growth and Equilibrium Cross-Sectional Area

    2016-04-01

    classic Escoffier (1940) inlet stability analysis to produce a new quadratic formula derived from simplified momentum and conservation equations ... neglecting time dependence and taking the maximum current gives a quadratic equation in the maximum inlet velocity (Equation 5) with the ... or quadratic approach as the equilibrium area can be determined through Equation 9. As an alternative, cross-sectional equilibrium is expressed in
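
    The quadratic referred to above (Equation 5 of the source) does not survive text extraction intact. For orientation only, a Keulegan-type maximum-velocity balance of the same general form, combining inlet friction with bay continuity, can be written as below; the symbols and the exact grouping of terms are assumptions here, not a reproduction of the report's equation.

```latex
% Assumed Keulegan-type form (illustrative, not necessarily the report's Eq. 5):
%   U   = maximum cross-sectionally averaged inlet velocity
%   A   = inlet cross-sectional area,  A_b = bay surface area
%   a_0 = ocean tide amplitude, \omega = tidal angular frequency
%   c_d = friction coefficient, L = inlet length, h = inlet depth, g = gravity
\[
  U^{2} \;+\; \frac{2\,g\,h\,A}{c_d\,L\,A_b\,\omega}\,U \;-\; \frac{2\,g\,h\,a_0}{c_d\,L} \;=\; 0 ,
\]
\[
  U \;=\; -\frac{g\,h\,A}{c_d\,L\,A_b\,\omega}
        \;+\; \sqrt{\left(\frac{g\,h\,A}{c_d\,L\,A_b\,\omega}\right)^{2}
        + \frac{2\,g\,h\,a_0}{c_d\,L}} .
\]
```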

  15. Non-equilibrium relaxation in a stochastic lattice Lotka-Volterra model

    Chen, Sheng; Täuber, Uwe C.

    2016-04-01

    We employ Monte Carlo simulations to study a stochastic Lotka-Volterra model on a two-dimensional square lattice with periodic boundary conditions. If the (local) prey carrying capacity is finite, there exists an extinction threshold for the predator population that separates a stable active two-species coexistence phase from an inactive state wherein only prey survive. Holding all other rates fixed, we investigate the non-equilibrium relaxation of the predator density in the vicinity of the critical predation rate. As expected, we observe critical slowing-down, i.e., a power law dependence of the relaxation time on the predation rate, and algebraic decay of the predator density at the extinction critical point. The numerically determined critical exponents are in accord with the established values of the directed percolation universality class. Following a sudden predation rate change to its critical value, one finds critical aging for the predator density autocorrelation function that is also governed by universal scaling exponents. This aging scaling signature of the active-to-absorbing state phase transition emerges at significantly earlier times than the stationary critical power laws, and could thus serve as an advanced indicator of the (predator) population’s proximity to its extinction threshold.
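
    A minimal, non-spatial (mean-field) version of such a stochastic predator-prey model can be simulated with the Gillespie algorithm; the sketch below uses the standard reactions (logistic prey birth limited by a carrying capacity, predation, predator death) with illustrative rates and reports whether the predator population survives. It does not reproduce the 2D lattice, the directed-percolation exponents or the aging analysis of the paper.

```python
import random

random.seed(2)

def gillespie_lv(prey, pred, t_end, birth=1.0, death=0.5, predation=0.005, capacity=500):
    """Mean-field stochastic Lotka-Volterra with a prey carrying capacity (Gillespie algorithm)."""
    t = 0.0
    while t < t_end and pred > 0:
        r_birth = birth * prey * max(0.0, 1.0 - prey / capacity)   # logistic prey birth
        r_pred  = predation * prey * pred                          # predation (prey -> predator)
        r_death = death * pred                                     # predator death
        total = r_birth + r_pred + r_death
        if total <= 0.0:
            break
        t += random.expovariate(total)             # exponential waiting time
        u = random.random() * total                # pick the reaction
        if u < r_birth:
            prey += 1
        elif u < r_birth + r_pred:
            prey -= 1
            pred += 1
        else:
            pred -= 1
    return prey, pred

prey, pred = gillespie_lv(prey=200, pred=50, t_end=200.0)
print(f"final populations: prey = {prey}, predators = {pred} "
      f"({'coexistence' if pred > 0 else 'predator extinction'})")
```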

  16. A consistent model for the equilibrium thermodynamic functions of partially ionized flibe plasma with Coulomb corrections

    Zaghloul, Mofreh R.

    2003-01-01

    Flibe (2LiF-BeF2) is a molten salt that has been chosen as the coolant and breeding material in many design studies of the inertial confinement fusion (ICF) chamber. Flibe plasmas are to be generated in the ICF chamber in a wide range of temperatures and densities. These plasmas are more complex than the plasma of any single chemical species. Nevertheless, the composition and thermodynamic properties of the resulting flibe plasmas are needed for the gas dynamics calculations and the determination of other design parameters in the ICF chamber. In this paper, a simple consistent model for determining the detailed plasma composition and thermodynamic functions of high-temperature, fully dissociated and partially ionized flibe gas is presented and used to calculate different thermodynamic properties of interest to fusion applications. The computed properties include the average ionization state; kinetic pressure; internal energy; specific heats; adiabatic exponent, as well as the sound speed. The presented results are computed under the assumptions of local thermodynamic equilibrium (LTE) and electro-neutrality. A criterion for the validity of the LTE assumption is presented and applied to the computed results. Other attempts in the literature are assessed with their implied inaccuracies pointed out and discussed
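
    For a single species and a single ionization stage, the average ionization state under the LTE assumption discussed above follows from the Saha equation; the sketch below solves it for an illustrative ionization energy and unit statistical-weight ratio. The full flibe calculation couples the equations for F, Li and Be over multiple stages and adds Coulomb corrections, none of which are included here.

```python
import math

# Physical constants (SI)
ME = 9.10938e-31      # electron mass, kg
KB = 1.38065e-23      # Boltzmann constant, J/K
H  = 6.62607e-34      # Planck constant, J*s
EV = 1.60218e-19      # J per eV

def saha_ionization_fraction(T, n_total, chi_eV, g_ratio=1.0):
    """Single-stage Saha ionization fraction x for a species of number density n_total:
    x^2/(1-x) * n_total = 2*g_ratio*(2*pi*me*kB*T/h^2)^1.5 * exp(-chi/(kB*T))."""
    S = 2.0 * g_ratio * (2.0 * math.pi * ME * KB * T / H**2) ** 1.5 \
        * math.exp(-chi_eV * EV / (KB * T))
    # Quadratic n_total*x^2 + S*x - S = 0; take the physical root in [0, 1].
    return (-S + math.sqrt(S * S + 4.0 * S * n_total)) / (2.0 * n_total)

# Illustrative: hydrogen-like ionization energy 13.6 eV at a dense-vapour number density.
for T in (5e3, 1e4, 2e4, 5e4):
    x = saha_ionization_fraction(T, n_total=1e25, chi_eV=13.6)
    print(f"T = {T:8.0f} K  ->  ionization fraction x = {x:.3e}")
```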

  17. Modelling and experimentation of the SO2 remotion through a plasma out of thermal equilibrium

    Moreno S, H.; Pacheco P, M.; Pacheco S, J.; Cruz A, A.

    2005-01-01

    Despite the measures taken to reduce pollution from mobile sources (the "Hoy No Circula" no-drive-day programme, the installation of catalytic converters in exhaust systems, ...), pollution in the Valley of Mexico exceeds the limits set by Mexican standards on several days each year. Pollutant emissions are expected to increase considerably by 2020; sulfur oxides, for example, are projected to rise by 48% with respect to 1998. The purpose of this work is to propose a technique for degrading sulfur dioxide (SO2) that consists of introducing the gas into a plasma out of thermal equilibrium, where the key radicals (O, OH) responsible for its degradation are formed. The proposed reactor has the advantage of combining the strengths of the dielectric barrier discharge and the corona discharge, while operating at atmospheric pressure and having small dimensions. The first results of the modelling of SO2 degradation in the plasma, together with the experimental results, are presented. (Author)
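
    A minimal sketch of the simplest possible version of such a degradation model: pseudo-first-order removal of SO2 by a fixed concentration of plasma-generated radicals. Both the rate constant and the radical density below are illustrative assumptions, not values fitted in the paper.

        import numpy as np

        k_rad = 1.0e-12       # assumed effective rate constant [cm^3 molecule^-1 s^-1]
        n_rad = 1.0e13        # assumed steady radical (O, OH) density [molecule cm^-3]
        so2_0 = 2.5e15        # initial SO2 density, ~100 ppm at atmospheric pressure [cm^-3]

        t = np.linspace(0.0, 1.0, 6)                  # residence time in the reactor [s]
        so2 = so2_0 * np.exp(-k_rad * n_rad * t)      # pseudo-first-order decay
        for ti, ni in zip(t, so2):
            print(f"t = {ti:4.2f} s   SO2 = {ni:.2e} cm^-3   removal = {100.0 * (1.0 - ni / so2_0):5.1f}%")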

  18. Implementation of Equilibrium-Price Model to the Estimation of Import Inflation

    Yadulla Hasanli

    2015-04-01

    Full Text Available This study investigates import inflationary processes that arise from the feedbacks of mutual economic relations among the world's countries. An equilibrium price model is used to estimate import inflation in the CIS countries. The study examines the resulting import inflation in the CIS countries under a scenario in which the value-added norm in Russia is increased. From the standpoint of economic growth and price stability, the recent revaluation of the US dollar and its impact on the total output of other countries are also investigated in detail: if final output in the USA decreases as a result of the dollar's revaluation, the study estimates, using the Input-Output table, how this decrease is transmitted to other countries. The work is carried out on Input-Output data for the year 2011. The study is of theoretical and practical importance for the design of monetary policy.
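
    The mechanics behind such estimates can be illustrated with the standard Leontief equilibrium-price relation p = (I - A')^(-1) v, where A is the input-output coefficient matrix and v the value-added per unit of output. The sketch below uses an invented 3-sector matrix (not the 2011 Input-Output data of the study) and shows how a shock to one sector's value-added norm propagates into all equilibrium prices.

        import numpy as np

        # Illustrative 3-sector technical-coefficient matrix A (a_ij = input of i
        # per unit output of j) and value-added per unit of output v -- assumed
        # numbers, not the study's data.
        A = np.array([[0.20, 0.10, 0.05],
                      [0.15, 0.25, 0.10],
                      [0.05, 0.10, 0.30]])
        v = np.array([0.60, 0.50, 0.55])

        def equilibrium_prices(A, v):
            """Leontief price model: p = A'p + v  =>  p = (I - A')^{-1} v."""
            return np.linalg.solve(np.eye(len(v)) - A.T, v)

        p0 = equilibrium_prices(A, v)

        # Scenario: raise the value-added norm of sector 0 by 10% and see how the
        # shock is transmitted to the equilibrium prices of every sector.
        v_shock = v.copy()
        v_shock[0] *= 1.10
        p1 = equilibrium_prices(A, v_shock)
        print("price changes (%):", np.round(100.0 * (p1 - p0) / p0, 2))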

  19. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    Guohua Fang

    2016-09-01

    Full Text Available To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and output sources of the National Economic Production Department. Secondly, an extended Social Accounting Matrix (SAM) of Jiangsu province is developed to simulate various scenarios. By changing the values of the discharge fees (increased by 50%, 100% and 150%), three scenarios are simulated to examine their influence on the overall economy and on each industry. The simulation results show that an increased fee will have a negative impact on Gross Domestic Product (GDP). However, waste water may be effectively controlled. Also, this study demonstrates that, along with the economic costs, the increase of the discharge fee will lead to the upgrading of industrial structures from a situation of heavy pollution to one of light pollution, which is beneficial to the sustainable development of the economy and the protection of the environment.
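
    A minimal sketch of the mechanism the fee scenarios probe, using a single Cobb-Douglas sector with labour and a priced "discharge" input rather than the paper's full Jiangsu CGE/SAM model; the cost shares, wage and baseline fee are assumed values. Raising the fee by 50%, 100% and 150% raises the unit cost (the GDP-side burden) and cuts discharge demand (the environmental gain).

        alpha = 0.7        # labour share of cost (assumed)
        w = 1.0            # wage (numeraire)
        p_water = 0.3      # resource price of water (assumed)
        fee0 = 0.2         # baseline discharge fee per unit of waste water (assumed)

        def unit_cost_and_discharge(fee, output=1.0):
            """Cobb-Douglas unit cost c(w, q) and discharge demand via Shephard's lemma."""
            q = p_water + fee                                                # effective discharge price
            c = (w / alpha) ** alpha * (q / (1.0 - alpha)) ** (1.0 - alpha)  # unit cost
            discharge = (1.0 - alpha) * c * output / q                       # conditional input demand
            return c, discharge

        c0, d0 = unit_cost_and_discharge(fee0)
        for pct in (0.5, 1.0, 1.5):                      # the 50%, 100%, 150% scenarios
            c, d = unit_cost_and_discharge(fee0 * (1.0 + pct))
            print(f"fee +{int(pct * 100):3d}%:  unit cost {100.0 * (c / c0 - 1.0):+5.1f}%, "
                  f"discharge demand {100.0 * (d / d0 - 1.0):+6.1f}%")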

  20. A non-equilibrium thermodynamics model of reconstituted Ca(2+)-ATPase.

    Waldeck, A R; van Dam, K; Berden, J; Kuchel, P W

    1998-01-01

    A non-equilibrium thermodynamics (NET) model describing the action of completely coupled or 'slipping' reconstituted Ca(2+)-ATPase is presented. Variation of the coupling stoichiometries with the magnitude of the electrochemical gradients, as the ATPase hydrolyzes ATP, is an indication of molecular slip. However, the Ca2+ and H+ membrane-leak conductances may also be a function of their respective gradients. Such non-ohmic leak typically yields 'flow-force' relationships similar to those obtained when the pump slips; hence, caution needs to be exercised when interpreting data on Ca(2+)-ATPase-mediated fluxes that display a non-linear dependence on the electrochemical proton (delta mu H) and/or calcium (delta mu Ca) gradients. To address this issue, three experimentally verifiable relationships that differentiate between membrane leak and enzymic slip were derived: first, by measuring delta mu H as a function of the rate of ATP hydrolysis by the enzyme; second, by measuring the overall 'efficiency' of the pump as a function of delta mu H; and third, by measuring the proton ejection rate of the pump as a function of its ATP hydrolysis rate.
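
    A minimal linear-NET sketch (not the authors' derivation) of the flow-force behaviour being discussed: a completely coupled pump moving n ions per ATP against an ohmic membrane leak, swept from zero load to static head. The stoichiometry, affinity and phenomenological coefficients are arbitrary illustrative values; comparing such curves for different leak conductances is the kind of diagnostic the three relationships formalize.

        import numpy as np

        n = 2.0        # coupling stoichiometry, ions transported per ATP (assumed)
        L_p = 1.0      # pump phenomenological coefficient (arbitrary units)
        A = 10.0       # ATP hydrolysis affinity, -DeltaG_ATP (arbitrary units)

        for L_leak in (0.5, 2.0):                                 # tight vs leaky membrane
            dmu_sh = n * L_p * A / (n**2 * L_p + L_leak)          # static-head gradient
            dmu = np.linspace(0.0, dmu_sh, 6)                     # electrochemical ion gradient
            J_atp = L_p * (A - n * dmu)                           # ATP hydrolysis rate (fully coupled pump)
            J_net = n * J_atp - L_leak * dmu                      # net uphill ion flux = pump - leak
            eff = J_net * dmu / (J_atp * A)                       # overall efficiency (zero at both ends)
            print(f"L_leak = {L_leak}")
            for d, j, e in zip(dmu, J_atp, eff):
                print(f"  dmu = {d:5.2f}   J_ATP = {j:6.2f}   efficiency = {e:5.3f}")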